Method, apparatus and system for encoding and decoding of directional sound source

Document No.: 991511    Publication date: 2020-10-20

Description: This technology, "Method, apparatus and system for encoding and decoding of directional sound source", was created by N. R. Tsingos, M. R. P. Thomas, and C. Fersch on 2019-04-15. Its main content includes the following: Some disclosed methods relate to encoding or decoding directional audio data. Some encoding methods may involve receiving a mono signal corresponding to an audio object and a representation of a radiation pattern corresponding to the audio object. The radiation pattern may include sound levels corresponding to a plurality of sampling times, a plurality of frequency bands, and a plurality of directions. The method may involve encoding the mono audio signal and encoding the source radiation pattern to determine radiation pattern metadata. Encoding the radiation pattern may involve determining a spherical harmonic transform of the representation of the radiation pattern and compressing the spherical harmonic transform to obtain encoded radiation pattern metadata.

1. A method for encoding directional audio data, comprising:

receiving a mono audio signal corresponding to an audio object and a representation of a radiation pattern corresponding to the audio object, the radiation pattern comprising sound levels corresponding to a plurality of sample times, a plurality of frequency bands, and a plurality of directions;

encoding the mono audio signal; and

encoding a source radiation pattern to determine radiation pattern metadata;

wherein the encoding of the radiation pattern comprises determining a spherical harmonic transform of the representation of the radiation pattern and compressing the spherical harmonic transform to obtain encoded radiation pattern metadata.

2. The method of claim 1, further comprising encoding a plurality of directional audio objects based on clusters of audio objects, wherein the radiation pattern represents a centroid that reflects an average sound level value for each frequency band.

3. The method of claim 2, wherein the plurality of directional audio objects are encoded as a single directional audio object whose directionality corresponds to a time-varying energy-weighted average of spherical harmonic coefficients of each audio object.

4. The method of claim 2 or claim 3, wherein the encoded radiation pattern metadata indicates locations of clusters of audio objects, which is an average of the locations of each audio object.

5. The method of any of claims 1-4, further comprising encoding group metadata regarding radiation patterns of groups of directional audio objects.

6. The method according to any one of claims 1-5, wherein the source radiation pattern is rescaled, for each frequency, relative to an amplitude of the input radiation pattern in a given direction to determine a normalized radiation pattern.

7. The method of any one of claims 1-6, wherein compressing the spherical harmonic transform comprises at least one of: a singular value decomposition method, principal component analysis, a discrete cosine transform, a data-independent basis, or elimination of spherical harmonic coefficients of the spherical harmonic transform above a threshold order.

8. A method for decoding audio data, comprising:

receiving an encoded core audio signal, encoded radiation pattern metadata, and encoded audio object metadata;

decoding the encoded core audio signal to determine a core audio signal;

decoding the encoded radiation pattern metadata to determine a decoded radiation pattern;

decoding the audio object metadata; and

rendering the core audio signal based on the audio object metadata and the decoded radiation pattern.

9. The method of claim 8, wherein the audio object metadata includes at least one of time-varying 3 degree of freedom (3DoF) or 6DoF source orientation information.

10. The method according to claim 8 or claim 9, wherein the core audio signal comprises a plurality of directional objects based on a cluster of objects, and wherein the decoded radiation pattern represents a centroid that reflects an average value for each frequency band.

11. The method of any of claims 8-10, wherein the rendering is based on applying subband gains based at least in part on the decoded radiation data to the decoded core audio signal.

12. The method of any of claims 8-11, wherein the encoded radiation pattern metadata corresponds to time-and frequency-variant sets of spherical harmonic coefficients.

13. The method of any of claims 8-12, wherein the encoded radiation pattern metadata comprises audio object type metadata.

14. The method of claim 13, wherein the audio object type metadata indicates parametric directionality pattern data, and wherein the parametric directionality pattern data includes one or more functions selected from a list of functions consisting of a cosine function, a sine function, or a cardioid function.

15. The method of claim 13, wherein the audio object type metadata indicates database directivity pattern data, and wherein decoding the encoded radiation pattern metadata to determine the decoded radiation pattern comprises querying a directivity data structure that includes an audio object type and corresponding directivity pattern data.

16. The method of claim 13, wherein the audio object type metadata indicates dynamic directional pattern data, and wherein the dynamic directional pattern data corresponds to time-varying and frequency-varying sets of spherical harmonic coefficients.

17. The method of claim 16, further comprising, prior to receiving the encoded core audio signal, receiving the dynamic directional pattern data.

18. An audio decoding apparatus, comprising:

an interface system; and

a control system configured for:

receiving, via the interface system, audio data corresponding to at least one audio object, the audio data including a mono audio signal, audio object position metadata, audio object size metadata, and rendering parameters;

determining whether the rendering parameters indicate a position mode or a directivity mode; and after determining that the rendering parameters indicate a directivity mode, rendering the audio data for reproduction via at least one speaker in accordance with a directivity pattern indicated by at least one of the position metadata or the size metadata.

19. The apparatus of claim 18, wherein rendering the audio data comprises interpreting the audio object position metadata as audio object orientation metadata.

20. The apparatus of claim 19, wherein the audio object position metadata comprises at least one of x, y, z coordinate data, spherical coordinate data, or cylindrical coordinate data, and wherein the audio object orientation metadata comprises yaw, pitch, and roll data.

21. The apparatus of any of claims 18-20, wherein rendering the audio data comprises interpreting the audio object size metadata as directivity metadata corresponding to a directivity pattern.

22. The apparatus of any of claims 18-21, wherein rendering the audio data comprises querying a data structure that includes a plurality of directivity patterns, and mapping at least one of the position metadata or the size metadata to one or more of the directivity patterns.

23. The apparatus of claim 22, wherein the control system is configured for receiving the data structure via the interface system.

24. The apparatus of claim 23, wherein the data structure is received before the audio data.

25. The apparatus of any of claims 18-24, wherein the audio data is received in a Dolby Atmos format.

26. The apparatus of any of claims 18-25, wherein the audio object position metadata corresponds to world coordinates or model coordinates.

Technical Field

The invention relates to the encoding and decoding of directional sound sources, and to auditory scenes based on multiple dynamic and/or moving directional sources.

Background

Real-world sound sources, whether natural or artificial (loudspeakers, musical instruments, speech, mechanical devices), radiate sound in a non-isotropic manner. Characterizing the radiation pattern (or "directivity") of a sound source can be crucial for proper rendering, especially in the context of interactive environments such as video games and virtual/augmented reality (VR/AR) applications. In these environments, users typically interact with directional audio objects by walking around them, thereby changing their auditory perspective on the generated sound (also known as six-degree-of-freedom [6DoF] rendering). The user may also grab and dynamically rotate a virtual object, which likewise requires rendering different directions of the radiation pattern of the corresponding sound source. Beyond more realistically rendering the direct propagation from the source to the listener, the radiation characteristics also play a major role in the higher-order acoustic coupling between the source and its environment (e.g., the virtual environment in a game), thereby affecting reverberant sound (i.e., sound waves traveling back and forth, as in echoes). Such reverberation may in turn affect other spatial cues, such as perceived distance.

Most audio game engines provide some way of representing and rendering directional sound sources, but these are typically limited to simple directional gains that rely on first-order cosine functions or "sound cones" (e.g., cosine functions raised to a power) and simple high-frequency roll-off filters. Such representations are not sufficient to represent real-world radiation patterns, and they are also not well suited for a simplified/combined representation of multiple directional sound sources.

Disclosure of Invention

Various audio processing methods are disclosed herein. Some such methods may involve encoding directional audio data. For example, some methods may involve receiving a mono audio signal corresponding to an audio object and a representation of a radiation pattern corresponding to the audio object. The radiation patterns may, for example, include sound levels corresponding to multiple sampling times, multiple frequency bands, and multiple directions. Some such methods may involve encoding the mono audio signal and encoding a source radiation pattern to determine radiation pattern metadata. The encoding of the radiation pattern may involve determining a spherical harmonic transform of the representation of the radiation pattern and compressing the spherical harmonic transform to obtain encoded radiation pattern metadata.

Some such methods may involve encoding a plurality of directional audio objects based on a cluster of audio objects. The radiation pattern may represent a centroid reflecting an average sound level value for each frequency band. In some such implementations, the plurality of directional audio objects are encoded as a single directional audio object whose directionality corresponds to a time-varying energy-weighted average of the spherical harmonic coefficients of each audio object. The encoded radiation pattern metadata may indicate a location of a cluster of audio objects, which is an average of the locations of each audio object.

Some methods may involve encoding group metadata regarding radiation patterns of a group of directional audio objects. In some examples, the source radiation pattern may be rescaled, for each frequency, relative to the amplitude of the input radiation pattern in a given direction to determine a normalized radiation pattern. According to some implementations, compressing the spherical harmonic transform may involve singular value decomposition methods, principal component analysis, discrete cosine transforms, data-independent bases, and/or eliminating spherical harmonic coefficients of the spherical harmonic transform above a threshold order.

Some alternative approaches may involve decoding the audio data. For example, some such methods may involve receiving an encoded core audio signal, encoded radiation pattern metadata, and encoded audio object metadata, and decoding the encoded core audio signal to determine a core audio signal. Some such methods may involve decoding the encoded radiation pattern metadata to determine a decoded radiation pattern, decoding the audio object metadata, and rendering the core audio signal based on the audio object metadata and the decoded radiation pattern.

In some cases, the audio object metadata may include at least one of time-varying 3 degree of freedom (3DoF) or 6 degree of freedom (6DoF) source orientation information. The core audio signal may include a plurality of directional objects based on clusters of objects. The decoded radiation pattern may represent a centroid that reflects an average value for each frequency band. In some examples, the rendering may be based on applying a subband gain based at least in part on the decoded radiation data to the decoded core audio signal. The encoded radiation pattern metadata may correspond to a set of time-varying and frequency-varying spherical harmonic coefficients.

According to some implementations, the encoded radiation pattern metadata may include audio object type metadata. The audio object type metadata may, for example, indicate parametric directional pattern data. The parametric directional pattern data may include a cosine function, a sine function, or a cardioid function. In some examples, the audio object type metadata may indicate database directionality pattern data. Decoding the encoded radiation pattern metadata to determine the decoded radiation pattern may involve querying a directivity data structure that includes an audio object type and corresponding directivity pattern data. In some examples, the audio object type metadata indicates dynamic directional pattern data. The dynamic directional pattern data may correspond to a set of time-varying and frequency-varying spherical harmonic coefficients. Some methods may involve receiving the dynamic directional pattern data prior to receiving the encoded core audio signal.

Some or all of the methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on one or more non-transitory media. Such a non-transitory medium may include a memory device such as those described herein, including, but not limited to, a Random Access Memory (RAM) device, a Read Only Memory (ROM) device, and the like. Accordingly, various innovative aspects of the subject matter described in this disclosure can be implemented in one or more non-transitory media having software stored thereon. The software may, for example, include instructions for controlling at least one device to process audio data. The software may be executed, for example, by one or more components of a control system, such as those disclosed herein. The software may, for example, include instructions for performing one or more of the methods disclosed herein.

At least some aspects of the present invention may be implemented via an apparatus. For example, one or more devices may be configured to perform, at least in part, the methods disclosed herein. In some implementations, an apparatus may include an interface system and a control system. The interface system may include one or more network interfaces, one or more interfaces between the control system and a memory system, one or more interfaces between the control system and another device, and/or one or more external device interfaces. The control system may include at least one of a general purpose single-or multi-chip processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. Thus, in some implementations, the control system may include one or more processors and one or more non-transitory storage media operably coupled to the one or more processors.

According to some such examples, the control system may be configured for receiving audio data corresponding to at least one audio object via the interface system. In some examples, the audio data may include a mono audio signal, audio object position metadata, audio object size metadata, and rendering parameters. Some such examples may involve determining whether the rendering parameters indicate a position mode or a directivity mode, and, when it is determined that the rendering parameters indicate a directivity mode, rendering the audio data for reproduction via at least one speaker according to a directivity pattern indicated by the position metadata and/or the size metadata.

In some examples, rendering the audio data may involve interpreting the audio object position metadata as audio object orientation metadata. The audio object position metadata may, for example, include x, y, z coordinate data, spherical coordinate data, or cylindrical coordinate data. In some cases, the audio object orientation metadata may include yaw, pitch, and roll data.

According to some examples, rendering the audio data may involve interpreting the audio object size metadata as directivity metadata corresponding to a directivity pattern. In some implementations, rendering the audio data may involve querying a data structure that includes a plurality of directivity patterns, and mapping at least one of the position metadata or the size metadata to one or more of the directivity patterns. In some cases, the control system may be configured for receiving the data structure via the interface system. In some examples, the data structure may be received before the audio data. In some embodiments, the audio data may be received in a Dolby Atmos format. The audio object position metadata may, for example, correspond to world coordinates or model coordinates.

The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. It should be noted that the relative dimensions of the following figures may not be drawn to scale. Like reference numbers and designations in the various drawings generally indicate like elements.

Drawings

Fig. 1A is a flow diagram showing blocks of an audio encoding method according to one example.

Fig. 1B illustrates process blocks that may be implemented by an encoding system for dynamically encoding each frame of directionality information for a directional audio object, according to one example.

Fig. 1C illustrates process blocks implemented by a decoding system, according to one example.

Fig. 2A and 2B show radiation patterns of audio objects in two different frequency bands.

Fig. 2C is a graph showing an example of normalized and non-normalized radiation patterns according to one example.

Fig. 3 shows an example of a hierarchy including audio data and various types of metadata.

Fig. 4 is a flow diagram showing blocks of an audio decoding method according to one example.

Fig. 5A depicts a cymbal.

Fig. 5B shows an example of a speaker system.

Fig. 6 is a flow diagram showing blocks of an audio decoding method according to one example.

Fig. 7 illustrates an example of encoding a plurality of audio objects.

FIG. 8 is a block diagram showing an example of components of an apparatus that may be configured to perform at least some of the methods disclosed herein.

Like reference numbers and designations in the various drawings indicate like elements.

Detailed Description

Aspects of the present disclosure relate to representation and efficient coding of complex radiation patterns. Some such implementations may include one or more of the following:

1. A general sound radiation pattern is represented as the order-N coefficients of a real-valued spherical harmonic (SPH) decomposition (N ≥ 1) with respect to time and frequency. This representation can also be extended to depend on the level of the playback audio signal. In contrast to the case where the directional source signal is itself an HOA-like PCM representation, the mono object signal may be encoded separately from its directional information, which is represented as a set of time-dependent scalar SPH coefficients in subbands.

2. An efficient coding scheme to reduce the bit rate required to represent this information

3. A solution to dynamically combine radiation patterns so that a scene consisting of several sources of radiation can be represented by an equivalent reduced number of sources, while maintaining its perceptual quality at rendering time.

Aspects of the present invention relate to representing a general radiation pattern so as to supplement the metadata of each mono audio object with a set of time- and frequency-dependent coefficients representing the directionality of the mono audio object projected onto an order-N spherical harmonic basis (N ≥ 1).

A first-order radiation pattern may be represented by a set of 4 scalar gain coefficients for a set of predefined frequency bands (e.g., 1/3 octaves). The set of frequency bands may also be referred to as a grouping or subbands. The grouping or subbands may be determined based on a Short-Time Fourier Transform (STFT) or a perceptual filter bank for a single frame of data (e.g., 512 samples, as in Dolby Atmos). The resulting pattern may be rendered by evaluating the spherical harmonic decomposition in a desired direction around the object.
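As an illustration of how such per-band gain coefficients might be evaluated, the following sketch renders a first-order pattern in a requested direction; the ACN channel ordering, the SN3D-style scaling, and the cardioid example values are assumptions of this sketch rather than anything mandated by the text above.

```python
import numpy as np

def eval_first_order_gain(coeffs, azimuth, elevation):
    """Evaluate a first-order radiation pattern in one direction.

    coeffs: array of shape (num_bands, 4) holding the scalar gain
    coefficients per subband, assumed here to be in ACN order
    (W, Y, Z, X) with SN3D-style scaling.
    Returns one gain per subband for the requested direction.
    """
    # Unit vector of the evaluation direction.
    x = np.cos(elevation) * np.cos(azimuth)
    y = np.cos(elevation) * np.sin(azimuth)
    z = np.sin(elevation)
    # Real first-order spherical-harmonic basis in that direction.
    basis = np.array([1.0, y, z, x])
    return coeffs @ basis  # shape: (num_bands,)

# Example: one band whose pattern is a forward-facing cardioid.
cardioid = np.array([[0.5, 0.0, 0.0, 0.5]])
print(eval_first_order_gain(cardioid, azimuth=0.0, elevation=0.0))   # ~1.0
print(eval_first_order_gain(cardioid, azimuth=np.pi, elevation=0.0)) # ~0.0
```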

Generally, this radiation pattern is characteristic of the source and may be constant over time. However, to represent dynamic scenarios where objects rotate or change, or to ensure that data is randomly accessible, it may be beneficial to update this set of coefficients at regular time intervals. In the case of dynamic auditory scenes with moving objects, the result of the object rotation can be encoded directly in the time-varying coefficients without explicit separate encoding of the object orientation.

Each type of sound source has a characteristic radiation/emission pattern, which typically differs from frequency band to frequency band. For example, a violin may have a very different radiation pattern than a trumpet, drum, or bell. Furthermore, sound sources such as musical instruments may radiate differently at very weak and very strong playing levels. Thus, the radiation pattern may also be a function of the pressure level of the audio signal radiated by the sound-emitting object, varying not only with direction around the object but also over time.

Thus, instead of simply representing the sound field at a single point in space, some implementations involve encoding the audio data corresponding to the radiation pattern of the audio object so that it can be rendered from different vantage points. In some cases, the radiation pattern may be a time-varying and frequency-varying radiation pattern. In some cases, the audio data input to the encoding process may include multiple channels (e.g., 4, 6, 8, 20, or more channels) of audio data from directional microphones. Each channel may correspond to data from a microphone at a particular location in the space surrounding a sound source, from which a radiation pattern may be derived. Assuming that the relative direction from each microphone to the source is known, this can be achieved by numerically fitting a set of spherical harmonic coefficients such that the resulting spherical function best matches the observed energy levels in the different subbands of each input microphone signal. See, for example, the methods and systems described in PCT application PCT/US2017/053946 for methods and apparatus for determining audio representation, by Nicolas Tsingos and Pradeep Kumar Govindaraju, which is hereby incorporated by reference. In other examples, the radiation pattern of an audio object may be determined via numerical simulation.
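The following sketch illustrates one way the observed per-subband energy levels might be computed from the directional microphone channels before any spherical harmonic fitting; the 512-sample frame length, the Hann window, and the band-edge parameterization are assumptions of this sketch.

```python
import numpy as np

def band_levels(mic_signals, sample_rate, band_edges_hz, frame_len=512):
    """Per-frame, per-band RMS levels for each microphone channel.

    mic_signals: array (num_mics, num_samples) captured around the source.
    band_edges_hz: list of (low, high) tuples defining the subbands.
    Returns an array (num_frames, num_mics, num_bands) of levels that a
    spherical-harmonic fit could later be matched against.
    """
    num_mics, num_samples = mic_signals.shape
    num_frames = num_samples // frame_len
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    levels = np.zeros((num_frames, num_mics, len(band_edges_hz)))
    for f in range(num_frames):
        frame = mic_signals[:, f * frame_len:(f + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_len), axis=1))
        for b, (lo, hi) in enumerate(band_edges_hz):
            mask = (freqs >= lo) & (freqs < hi)
            levels[f, :, b] = np.sqrt(np.mean(spectrum[:, mask] ** 2, axis=1))
    return levels
```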

Instead of simply encoding audio data from directional microphones at the sample level, some implementations involve encoding a mono audio object signal with corresponding radiation pattern metadata representing radiation patterns for at least some of the encoded audio objects. In some implementations, the radiation pattern metadata can be represented as spherical harmonic data. Some such implementations may involve smoothing processes and/or compression/data reduction processes.

Fig. 1A is a flow diagram showing blocks of an audio encoding method according to one example. Method 1 may be implemented, for example, by a control system (e.g., control system 815 described below with reference to fig. 8) that includes one or more processors and one or more non-transitory memory devices. As with other disclosed methods, not all of the blocks of method 1 must be performed in the order shown in fig. 1A. Moreover, alternative methods may include more or fewer blocks.

In this example, block 5 relates to receiving a mono audio signal corresponding to an audio object, and also receiving a representation of a radiation pattern corresponding to the audio object. According to this implementation, the radiation pattern includes sound levels corresponding to a plurality of sample times, a plurality of frequency bands, and a plurality of directions. According to this example, block 10 relates to encoding a mono audio signal.

In the example shown in fig. 1A, block 15 relates to encoding the source radiation pattern to determine radiation pattern metadata. According to this implementation, encoding the representation of the radiation pattern involves determining a spherical harmonic transform of the representation of the radiation pattern and compressing the spherical harmonic transform to obtain encoded radiation pattern metadata. In some implementations, the representation of the radiation pattern can be rescaled, for each frequency, relative to the amplitude of the input radiation pattern in a given direction to determine a normalized radiation pattern.

In some cases, compressing the spherical harmonic transform may involve discarding some higher-order spherical harmonic coefficients. Some such examples may involve eliminating spherical harmonic coefficients of a spherical harmonic transform above a threshold order of spherical harmonic coefficients (e.g., above order 3, above order 4, above order 5, etc.).
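A minimal sketch of this kind of order truncation is shown below, assuming the coefficients are stored in ACN channel ordering (all coefficients of order n in channels n² through (n+1)²−1); the ordering is an assumption of this sketch, not something specified above.

```python
import numpy as np

def truncate_sh(coeffs, max_order):
    """Discard spherical-harmonic coefficients above a threshold order.

    coeffs: array (..., (N + 1) ** 2) in ACN channel ordering, where all
    coefficients of order n occupy channels n**2 .. (n + 1)**2 - 1.
    Returns the first (max_order + 1) ** 2 channels only.
    """
    keep = (max_order + 1) ** 2
    return coeffs[..., :keep]

# Example: reduce an order-4 pattern (25 channels) to order 2 (9 channels).
order4 = np.random.randn(31, 25)   # 31 subbands, order-4 coefficients
order2 = truncate_sh(order4, max_order=2)
print(order2.shape)  # (31, 9)
```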

However, some implementations may involve alternative and/or additional compression methods. According to some such implementations, compressing the spherical harmonic transform may involve singular value decomposition methods, principal component analysis, discrete cosine transforms, data-independent bases, and/or other methods.

According to some examples, method 1 may also involve encoding the plurality of directional audio objects as a group or "cluster" of audio objects. Some implementations may involve encoding group metadata regarding radiation patterns of groups of directional audio objects. In some cases, the plurality of directional audio objects are encoded as a single directional audio object whose directionality corresponds to a time-varying energy-weighted average of the spherical harmonic coefficients of each audio object. In some such examples, the encoded radiation pattern metadata may represent a centroid corresponding to the average sound level value for each frequency band. For example, the encoded radiation pattern metadata (or related metadata) may indicate the position of a cluster of audio objects, which is an average of the position of each directional audio object in the cluster.

Fig. 1B illustrates process blocks that may be implemented by the encoding system 100 for dynamically encoding each frame of directionality information for a directional audio object, according to one example. For example, the process may be implemented via a control system, such as control system 815 described below with reference to fig. 8. The encoding system 100 may receive a mono audio signal 101, which may correspond to a mono object signal as discussed above. The mono audio signal 101 may be encoded at block 111 and provided to a serialization block 112.

At block 102, static or time-varying directional energy samples at different sound levels in a set of frequency bands relative to a reference coordinate system may be processed. The reference coordinate system may be determined in some coordinate space, such as a model coordinate space or a world coordinate space.

At block 105, frequency-dependent rescaling of the time-varying directional energy samples from block 102 may be performed. In one example, frequency-dependent rescaling may be performed according to the examples illustrated in figs. 2A-2C. The normalization may be based, for example, on rescaling the amplitudes at high frequencies relative to those at low frequencies in a given direction.

The frequency-dependent rescaling may be renormalized based on an assumed capture direction of the core audio. This assumed capture direction of the core audio may represent the listening direction relative to the sound source. For example, such a listening direction may be referred to as a viewing direction, where the viewing direction may be in a certain direction (e.g., a forward direction or a backward direction) relative to a coordinate system.

At block 106, the rescaled directional output of block 105 may be projected onto a spherical harmonic basis, resulting in spherical harmonic coefficients.

At block 108, the spherical coefficients of block 106 are processed based on the instantaneous sound level 107 and/or information from the rotation block 109. The instantaneous sound level 107 may be measured in a certain direction at a certain time. The information from the rotation block 109 may indicate an (optional) rotation of the time-varying source orientation 103. In one example, at block 109, the spherical coefficients may be adjusted to account for time-dependent modifications in source orientation relative to the original recorded input data.

At block 108, a target level determination may further be performed based on equalization for the determined direction relative to the assumed capture direction of the core audio signal. Block 108 may output a set of rotated spherical coefficients that have been equalized based on the target level.

At block 110, the encoding of the radiation pattern may be based on projection onto a smaller subspace of the spherical coefficients related to the source radiation pattern, resulting in encoded radiation pattern metadata. As shown in fig. 1B, at block 110, an SVD decomposition and compression algorithm may be performed on the spherical coefficients output by block 108. In one example, the SVD decomposition and compression algorithm of block 110 may be performed according to the principles described in connection with equations 11 through 13 described below.

Alternatively, block 110 may involve utilizing other methods, such as Principal Component Analysis (PCA) and/or data-independent bases such as the 2D DCT, to project the spherical harmonic representation H̃ into a space conducive to lossy compression. The output of block 110 may be a matrix T representing the projection of the data into a smaller subspace of the input, i.e., the encoded radiation pattern T. The encoded radiation pattern T, the encoded core mono audio signal 111, and any other object metadata 104 (e.g., x, y, z, optional source orientation, etc.) may be serialized at the serialization block 112 to output an encoded bitstream. In some examples, the radiation structure may be represented by the following bitstream syntax structure in each encoded audio frame:

byte freqBandModePreset (e.g., broadband, octave, 1/3 octave, regular; this determines the number of subbands K and their center frequency values)

byte order (spherical harmonic order N)

int coefficients ((N + 1)^2 × K values)
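The following sketch shows one possible in-memory and serialized form of the frame structure listed above; the field widths, the 16-bit coefficient packing, and the class name are assumptions of this sketch, not a published bitstream syntax.

```python
import struct
from dataclasses import dataclass
from typing import List

@dataclass
class RadiationFrame:
    """Illustrative container mirroring the syntax sketched above."""
    freq_band_mode_preset: int      # 1 byte: broadband / octave / 1/3 octave / regular
    sh_order: int                   # 1 byte: spherical-harmonic order N
    coefficients: List[int]         # (N + 1)**2 * K integer values

    def pack(self) -> bytes:
        # Assumes 16-bit signed coefficients; a real codec may quantize differently.
        header = struct.pack("BB", self.freq_band_mode_preset, self.sh_order)
        body = struct.pack(f"{len(self.coefficients)}h", *self.coefficients)
        return header + body
```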

This syntax may encompass different sets of coefficients for different pressure/intensity levels of the sound source. Alternatively, a single set of coefficients may be dynamically generated if the directional information is available at different signal levels and if the level of the source cannot be further determined at the playback time. For example, such coefficients may be generated by interpolating between low-level coefficients and high-level coefficients based on the time-varying level of the object audio signal at the encoding time.
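A minimal sketch of the level-dependent interpolation described above might look as follows; the −40 dB / 0 dB anchor levels and the linear blend are assumptions of this sketch.

```python
import numpy as np

def interpolate_coefficients(coeffs_soft, coeffs_loud, level_db,
                             soft_db=-40.0, loud_db=0.0):
    """Blend two directivity coefficient sets by instantaneous signal level.

    coeffs_soft / coeffs_loud: spherical-harmonic coefficient arrays
    measured at a quiet and a loud playing level (same shape).
    level_db: time-varying level of the object audio signal at encode time.
    The anchor levels are illustrative only.
    """
    w = np.clip((level_db - soft_db) / (loud_db - soft_db), 0.0, 1.0)
    return (1.0 - w) * coeffs_soft + w * coeffs_loud
```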

The input radiation pattern relative to the mono audio object signal may also be 'normalized' to a given direction, such as the primary response axis (which may be the direction from which the mono audio object signal was recorded or an average of multiple recordings), and the encoded directionality and final rendering may need to be consistent with this 'normalization'. In one example, this normalization may be specified as metadata. In general, it is desirable to encode a core audio signal that would convey a good representation of the object timbre without applying directionality information.

Directional coding

Aspects of the present invention are directed to implementing an efficient coding scheme for the directional information, because the number of coefficients grows quadratically with the order of the decomposition. An efficient encoding scheme for directional information may be implemented for the ultimate delivery of auditory scenes, such as over a limited-bandwidth network to an endpoint rendering device.

Assuming that 16 bits are used to represent each coefficient, an order-4 spherical harmonic representation in 1/3-octave bands would require 25 × 31 = 775 coefficients, i.e., roughly 12 kbit per frame. Refreshing this information at 30 Hz would require a transmission bit rate of at least 400 kbps, which is higher than the bit rate currently required by object-based audio codecs for transmitting both audio and object metadata. In one example, the radiation pattern may be represented by:

G(θ_i, φ_i, ω)    equation (1)

In equation (1), (θ_i, φ_i), i ∈ {1 … P}, represent discrete inclination angles θ ∈ [0, π] and azimuth angles φ ∈ [0, 2π) relative to the sound source, P represents the total number of discrete directions, and ω represents the spectral frequency. Fig. 2A and 2B show radiation patterns of an audio object in two different frequency bands. For example, fig. 2A may represent the radiation pattern of an audio object in a frequency band from 100 to 300 Hz, while fig. 2B may, for example, represent the radiation pattern of the same audio object in a frequency band from 1 kHz to 2 kHz. Low frequencies tend to be relatively more omnidirectional, so the radiation pattern shown in fig. 2A is relatively more circular than the radiation pattern shown in fig. 2B. In fig. 2A, G(θ_0, φ_0, ω) represents the radiation pattern in the direction of the main response axis 200, and G(θ_1, φ_1, ω) represents the radiation pattern in an arbitrary direction 205.

In some examples, the radiation pattern may be captured and determined by a plurality of microphones physically placed around a sound source corresponding to the audio object, while in other examples, the radiation pattern may be determined via numerical simulation. In the multiple-microphone example, the radiation pattern may be time-varying, for example reflecting a real-time recording. Radiation patterns can be captured at various frequencies, including low frequencies (e.g., <100 Hz), mid frequencies (between 100 Hz and 1 kHz), and high frequencies (>10 kHz). The radiation pattern may also be referred to as a spatial representation.

In another example, the radiation pattern may be based on a pattern in a certain direction G (θ)iiω) at a certain frequency, such as (for example):

in equation (2), G (θ)00And ω) represents the radiation pattern in the direction of the main response axis. Referring again to FIG. 2B, in one example, one can see radiation pattern G (θ)iiω) and normalized radiation pattern H (θ)iiω). Fig. 2C is a graph showing an example of normalized and non-normalized radiation patterns according to one example. In this example, the normalized radiation pattern in the direction of the principal response axis (which is denoted as H (θ) in FIG. 2C00ω)) have substantially the same amplitude across the illustrated range of frequency bands. In this example, the normalized radiation pattern in direction 205 (shown in fig. 2A) (which is represented as H (θ) in fig. 2C11ω)) and non-normalized radiation patterns, which are denoted as G (θ) in fig. 2C11ω)) have a relatively high amplitude in higher frequencies. For convenience in labeling the symbols for a given frequency band, it may be assumed that the radiation pattern is constant, but in practice it may vary over time, for example with different arches employed on stringed instruments.
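A small sketch of the normalization in equation (2), assuming the measured levels are stored with the main-response-axis direction in a known row, might look as follows.

```python
import numpy as np

def normalize_radiation(G, main_axis_index=0, eps=1e-12):
    """Normalize a radiation pattern to its main-response-axis amplitude.

    G: array (num_directions, num_bands) of measured levels, with the
    main response axis stored at row `main_axis_index` (an assumption
    of this sketch).  Implements H = G / G(theta_0, phi_0, omega)
    band by band, as in equation (2).
    """
    reference = G[main_axis_index, :]            # G(theta_0, phi_0, omega)
    return G / np.maximum(reference, eps)        # H(theta_i, phi_i, omega)
```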

A radiation pattern, or a parametric representation thereof, may be transmitted. Pre-processing of the radiation pattern may be performed prior to its transmission. In one example, the radiation pattern or parametric representation may be pre-processed by a computational algorithm, an example of which is shown with respect to fig. 1A. After preprocessing, the radiation pattern can be decomposed on an orthogonal spherical basis, for example based on:

H(θ_i, φ_i, ω) ≈ Σ_{n=0}^{N} Σ_{m=−n}^{n} H̃_n^m(ω) Y_n^m(θ_i, φ_i)    equation (3)

In equation (3), H(θ_i, φ_i, ω) represents the spatial representation, and H̃_n^m(ω) represents the spherical harmonic representation, which has fewer elements than the spatial representation. The conversion between H(θ_i, φ_i, ω) and H̃_n^m(ω) may be based, for example, on real fully normalized spherical harmonics:

Y_n^m(θ, φ) = sqrt( ((2n+1)/(4π)) · ((n−|m|)!/(n+|m|)!) ) · P_n^{|m|}(cos θ) · { √2 cos(mφ) for m > 0;  1 for m = 0;  √2 sin(|m|φ) for m < 0 }    equation (4)

In equation (4), P_n^{|m|} represents an associated Legendre polynomial, the order m ∈ {−N … N}, and the degree n ∈ {0 … N}. Other spherical bases may also be used, and any method for performing a spherical harmonic transform on discrete data may be used. In one example, a transformation matrix Y may first be defined, whose entries are the spherical harmonic functions Y_n^m evaluated at the P sample directions (one row per direction (θ_i, φ_i), one column per coefficient), and the least-squares method may be used, whereby the spherical harmonic representation is associated with the spatial representation as

H̃(ω) = Y⁺ H(ω)    equation (7)

In equation (7), H(ω) = [H(θ_1, φ_1, ω) … H(θ_P, φ_P, ω)]^T, H̃(ω) is the vector of spherical harmonic coefficients H̃_n^m(ω), and Y⁺ denotes a pseudoinverse of Y. The spherical harmonic representation and/or the spatial representation may be stored for further processing.
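The following sketch illustrates equations (3) through (7): a real spherical harmonic basis evaluated from associated Legendre polynomials and a least-squares transform of the sampled pattern. The normalization and sign conventions (including the Condon-Shortley phase inherited from scipy's lpmv) are assumptions of this sketch and only need to be applied consistently at analysis and synthesis.

```python
import numpy as np
from scipy.special import lpmv
from math import factorial

def real_sph_harm(n, m, theta, phi):
    """Real fully normalized spherical harmonic Y_n^m (cf. equation 4).

    theta: inclination in [0, pi], phi: azimuth in [0, 2*pi).
    """
    am = abs(m)
    norm = np.sqrt((2 * n + 1) / (4 * np.pi)
                   * factorial(n - am) / factorial(n + am))
    leg = lpmv(am, n, np.cos(theta))        # associated Legendre P_n^|m|
    if m > 0:
        return np.sqrt(2.0) * norm * leg * np.cos(m * phi)
    if m < 0:
        return np.sqrt(2.0) * norm * leg * np.sin(am * phi)
    return norm * leg

def sh_transform(H, thetas, phis, order):
    """Least-squares spherical-harmonic transform (cf. equations 5-7).

    H: array (P, K) of normalized levels, one row per sample direction
    and one column per frequency band.
    Returns H_tilde of shape ((order + 1)**2, K).
    """
    P = len(thetas)
    Y = np.zeros((P, (order + 1) ** 2))
    col = 0
    for n in range(order + 1):
        for m in range(-n, n + 1):
            Y[:, col] = real_sph_harm(n, m, thetas, phis)
            col += 1
    return np.linalg.pinv(Y) @ H            # H_tilde = Y^+ H
```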

The pseudoinverse Y⁺ may, for example, be a weighted least-squares solution of the form:

Y⁺ = (Y^T W Y)^{−1} Y^T W    equation (8)

where W is a diagonal weighting matrix over the spatial sample directions. A regularized solution may also be applicable for cases where the distribution of spherical samples contains a large amount of missing data. The missing data may correspond to directions, or ranges of directions, for which no directional samples are available (e.g., due to non-uniform microphone coverage). In many cases, the distribution of the spatial samples is sufficiently uniform that the identity weighting matrix W produces acceptable results. In general, P > (N + 1)^2, so the spherical harmonic representation H̃(ω) contains fewer elements than the spatial representation H(ω), thereby creating a first stage of lossy compression that smooths the radiation pattern data.

Now consider discrete frequency bands ω_k, k ∈ {1 … K}. The vectors H(ω_k) may be stacked such that each frequency band is represented by a column of a matrix:

H = [H(ω_1) H(ω_2) … H(ω_K)]    equation (9)

That is, the spatial representation H(ω) may be determined based on frequency groupings/bands. Thus, the spherical harmonic representation may be based on:

H̃ = Y⁺ H    equation (10)

In equation (10), H̃ represents the radiation patterns for all discrete frequencies in the spherical harmonic domain. It is expected that the columns of H̃ are highly correlated, resulting in redundancy in the representation. Some embodiments involve further decomposition by a matrix factorization of the form

H̃ = U Σ V^T    equation (11)

Some embodiments may involve performing a Singular Value Decomposition (SVD), wherein U and V represent the left and right singular matrices and Σ represents a matrix with decreasing singular values along its diagonal. The matrix V information may be received or stored. Alternatively, Principal Component Analysis (PCA) or a data-independent basis (e.g., the 2D DCT) may be used to project H̃ into a space conducive to lossy compression.

Let O = (N + 1)^2. In some examples, to achieve compression, the encoder may discard the components corresponding to the smaller singular values by calculating a product based on:

T = U Σ′    equation (12)

In equation (12), Σ′ represents a truncated copy of Σ. The matrix T may represent the projection of the data into a smaller subspace of the input. T represents the encoded radiation pattern data, which is then transmitted for further processing. On the decoding/receiving side, in some examples, the matrix T may be received and a low-rank approximation of H̃ may be reconstructed based on:

H̃ ≈ T V′^T    equation (13)

In equation (13), V′ represents a truncated copy of V. The matrix V may be transmitted or stored on the decoder side.
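A compact sketch of the SVD-based compression of equations (11) through (13) might look as follows; the choice of rank and the return of the truncated V as a separate matrix mirror the transmission options discussed below, but the function names are illustrative only.

```python
import numpy as np

def encode_radiation(H_tilde, rank):
    """Compress the spherical-harmonic representation (equations 11-12).

    H_tilde: array ((N + 1)**2, K) of coefficients for all K bands.
    Returns the encoded matrix T and the truncated right singular
    vectors V_trunc needed for reconstruction.
    """
    U, s, Vt = np.linalg.svd(H_tilde, full_matrices=False)
    T = U[:, :rank] * s[:rank]          # T = U * Sigma' (truncated)
    V_trunc = Vt[:rank, :].T            # V'
    return T, V_trunc

def decode_radiation(T, V_trunc):
    """Low-rank reconstruction of H_tilde (equation 13)."""
    return T @ V_trunc.T
```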

The following are three examples for transmitting truncated decomposition and truncated right singular vectors:

1. The transmitter may transmit the encoded radiation pattern T and the truncated right singular vectors V′ independently for each object.

2. Objects may be grouped, for example by a similarity metric, and U and V may be calculated as representative bases for multiple objects. Thus, the encoded radiation pattern T may be transmitted per object, while U and V may be transmitted per group of objects.

3. The left and right singular matrices U and V may be pre-computed on a large database of representative data (e.g., training data), and information about V may be stored on the receiver side. In some such examples, the encoded radiation pattern T may be the only data transmitted per object. The DCT is another example of a basis that can be stored on the receiver side.

Spatial encoding of directional objects

When encoding and transmitting complex auditory scenes comprising multiple objects, it is possible to apply spatial coding techniques, wherein individual objects are replaced by a smaller number of representative clusters in a way that best preserves the auditory perception of the scene. In general, replacing a group of sound sources by a representative "centroid" requires computing an aggregate value/average for each metadata field. For example, the position of a cluster of sound sources may be the average of the positions of the individual sources. By representing the radiation pattern of each source using the spherical harmonic decomposition outlined above (e.g., with reference to equations 1 to 12), the sets of coefficients in each subband for each source can be linearly combined in order to construct an average radiation pattern for the cluster of sources. By computing a loudness- or energy-weighted average of the spherical harmonic coefficients over time, a time-varying, perceptually optimized representation can be constructed that better preserves the original scene.
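A minimal sketch of such a cluster centroid computation, assuming per-object spherical harmonic coefficients and per-frame object energies are available, might look as follows.

```python
import numpy as np

def cluster_directivity(sh_coeffs, energies):
    """Energy-weighted average radiation pattern for a cluster of objects.

    sh_coeffs: array (num_objects, num_sh, num_bands) of per-object
    spherical-harmonic coefficients for the current frame.
    energies: array (num_objects,) of instantaneous object energies
    (or loudness values) for the current frame.
    Returns the centroid coefficients, shape (num_sh, num_bands).
    """
    weights = energies / max(np.sum(energies), 1e-12)
    return np.tensordot(weights, sh_coeffs, axes=(0, 0))

def cluster_position(positions):
    """Cluster position as the average of the object positions."""
    return positions.mean(axis=0)
```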

Fig. 1C shows blocks of a process that may be implemented by a decoding system, according to one example. For example, the blocks shown in fig. 1C may be implemented by a control system of a decoding device (such as control system 815 described below with reference to fig. 8) that includes one or more processors and one or more non-transitory memory devices. At block 150, the metadata and the encoded core mono audio signal may be received and deserialized. The deserialized information may include object metadata 151, an encoded core audio signal, and encoded spherical coefficients. At block 152, the encoded core audio signal may be decoded. At block 153, the encoded spherical coefficients may be decoded. The encoded radiation pattern information may include an encoded radiation pattern T and/or a matrix V. The matrix V depends on the method used to project H̃ into the smaller subspace. If the SVD algorithm is used at block 110 of fig. 1B, the matrix V may be received or stored by the decoding system.

Object metadata 151 may contain information about the relative direction of the source to the listener. In one example, metadata 151 may include information about the distance and direction of a listener and one or more object distances and directions relative to 6DoF space. For example, metadata 151 may include information about the relative rotation, distance, and direction of the source in 6DoF space. In instances of multiple objects in a cluster, the metadata field may reflect information about a representative "centroid" that reflects an aggregate value/average of the cluster of objects.

The renderer 154 may then render the decoded core audio signal and the decoded spherical harmonic coefficients. In one example, renderer 154 may render the decoded core audio signal and the decoded spherical harmonic coefficients based on object metadata 151. Renderer 154 may determine the subband gains for the spherical coefficients of the radiation pattern based on information from metadata 151 (e.g., the source-to-listener relative direction). The renderer 154 may then render the core audio object signal based on the determined subband gains of the corresponding decoded radiation pattern and on source and/or listener pose information (e.g., x, y, z, yaw, pitch, roll) 155. The listener pose information may correspond to the position and viewing direction of the user in 6DoF space. The listener pose information may be received from a source local to the VR playback system, such as an optical tracking device. The source pose information corresponds to the position and orientation of the sound-generating object in space. It may also be inferred from a local tracking system, for example if the user's hand is tracked and interactively manipulates a virtual sound object, or if a tracked physical prop/proxy object is used.
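The gain-determination step described above might be sketched as follows; it assumes the source-to-listener direction has already been expressed in the source's local frame and reuses a real spherical harmonic evaluation such as the one sketched earlier.

```python
import numpy as np

def render_subband_gains(H_tilde, source_to_listener_dir, sh_basis_fn):
    """Per-band gains for the current source-to-listener direction.

    H_tilde: decoded spherical-harmonic coefficients, ((N + 1)**2, K).
    source_to_listener_dir: (theta, phi) of the listener as seen from
    the source, already expressed in the source's local frame (i.e.
    after applying any source rotation from the object metadata).
    sh_basis_fn: function (n, m, theta, phi) -> value, e.g. the
    real_sph_harm sketch given earlier.
    """
    theta, phi = source_to_listener_dir
    num_sh = H_tilde.shape[0]
    order = int(np.sqrt(num_sh)) - 1
    y = np.array([sh_basis_fn(n, m, theta, phi)
                  for n in range(order + 1) for m in range(-n, n + 1)])
    return y @ H_tilde          # shape (K,): one gain per subband

def apply_gains(core_audio_bands, gains):
    """Apply the directivity gains to a subband decomposition of the core signal."""
    return core_audio_bands * gains[np.newaxis, :]
```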

Fig. 3 shows an example of a hierarchy including audio data and various types of metadata. As with other figures provided herein, the number and types of audio data and metadata shown in fig. 3 are provided by way of example only. Some encoders may provide the complete set of audio data and metadata (data set 345) shown in fig. 3, while other encoders may provide only a portion of the metadata shown in fig. 3, e.g., data set 315 only, data set 325 only, or data set 335 only.

In this example, the audio data includes a mono audio signal 301. The mono audio signal 301 is one example of an audio signal sometimes referred to herein as a "core audio signal". However, in some examples, the core audio signal may include audio signals corresponding to a plurality of audio objects included in the cluster.

In this example, audio object position metadata 305 is expressed as cartesian coordinates. However, in alternative examples, audio object position metadata 305 may be expressed via other types of coordinates, such as spherical or polar coordinates. Thus, the audio object position metadata 305 may include three degrees of freedom (3DoF) position information. According to this example, the audio object metadata includes audio object size metadata 310. In alternative examples, the audio object metadata may include one or more other types of audio object metadata.

In this implementation, the data set 315 includes a mono audio signal 301, audio object position metadata 305, and audio object size metadata 310. Data set 315 may, for example, be provided in the Dolby Atmos™ audio data format.

In this example, data set 315 also includes an optional rendering parameter R. According to some disclosed implementations, the optional rendering parameter R may indicate whether at least some of the audio object metadata of the data set 315 should be interpreted in its "normal" sense (e.g., as position or size metadata) or as directional metadata. In some disclosed implementations, the "normal" mode may be referred to herein as the "position mode" and the alternative mode may be referred to herein as the "directivity mode". Some examples are described below with reference to fig. 5A-6.

According to this example, orientation metadata 320 includes angular information for expressing the yaw, pitch, and roll of the audio object. In this example, the orientation metadata 320 indicates the yaw, pitch, and roll as phi, theta, and psi. Data set 325 contains enough information to orient an audio object for a six-degree-of-freedom (6DoF) application.

In this example, data set 335 contains audio object type metadata 330. In some implementations, audio object type metadata 330 may be used to indicate corresponding radiation pattern metadata. The encoded radiation pattern metadata may be used to determine a decoded radiation pattern (e.g., by a decoder or a device receiving audio data from a decoder). In some instances, audio object type metadata 330 may essentially indicate "I am a trumpet", "I am a violin", and so on. In some examples, a decoding device may access a database of audio object types and corresponding directivity patterns. According to some examples, the database may be provided with the encoded audio data, or prior to transmission of the audio data. This audio object type metadata 330 may be referred to herein as "database directionality pattern data".

According to some examples, the audio object type metadata may indicate parametric directional pattern data. In some examples, audio object type metadata 330 may indicate a directional pattern corresponding to a cosine function of a specified power, may indicate a cardioid function, and so on.

In some examples, audio object type metadata 330 may indicate that the radiation pattern corresponds to a set of spherical harmonic coefficients. For example, audio object type metadata 330 may indicate that spherical harmonic coefficients 340 are being provided in data set 345. In some such examples, spherical harmonic coefficients 340 may be time-varying and/or frequency-varying sets of spherical harmonic coefficients, e.g., as described above. This information may require the maximum amount of data compared to the remaining metadata levels shown in fig. 3. Thus, in some such examples, the spherical harmonic coefficients 340 may be provided separately from the mono audio signal 301 and the corresponding audio object metadata. For example, the spherical harmonic coefficients 340 may be provided at the beginning of the transmission of audio data prior to initiating a real-time operation (e.g., a real-time rendering operation for a game, movie, music performance, etc.).

According to some embodiments, a device at the decoder side (e.g., a device providing audio to a rendering system) may determine capabilities of the rendering system and provide directionality information according to the capabilities. For example, in some such implementations, only the usable portion of the directional information may be provided to the rendering system even if the entire data set 345 is provided to the decoder. In some examples, the decoding device may determine which type(s) of directional information to use based on the capabilities of the decoding device.

Fig. 4 is a flow diagram showing blocks of an audio decoding method according to one example. The method 400 may be implemented, for example, by a control system of a decoding device (such as the control system 815 described below with reference to fig. 8) that includes one or more processors and one or more non-transitory memory devices. As with other disclosed methods, not all of the blocks of method 400 need to be performed in the order shown in fig. 4. Moreover, alternative methods may include more or fewer blocks.

In this example, block 405 relates to receiving an encoded core audio signal, encoded radiation pattern metadata, and encoded audio object metadata. The encoded radiation pattern metadata may include audio object type metadata. The encoded core audio signal may, for example, comprise a mono audio signal. In some examples, the audio object metadata may include 3DoF position information, 6DoF position and source orientation information, audio object size metadata, and so on. In some cases, the audio object metadata may be time-varying.

In this example, block 410 involves decoding the encoded core audio signal to determine the core audio signal. Here, block 415 relates to decoding the encoded radiation pattern metadata to determine a decoded radiation pattern. In this example, block 420 involves decoding at least some of the other encoded audio object metadata. Here, block 430 involves rendering a core audio signal based on audio object metadata (e.g., audio object position, orientation, and/or size metadata) and decoded radiation patterns.

Block 415 may involve various types of operations depending on the particular implementation. In some cases, the audio object type metadata may indicate database directionality pattern data. Decoding the encoded radiation pattern metadata to determine a decoded radiation pattern may involve querying a directivity data structure that includes an audio object type and corresponding directivity pattern data. In some examples, the audio object type metadata may indicate parametric directivity pattern data, such as directivity pattern data corresponding to a cosine function, a sine function, or a cardioid function.
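A toy sketch of such a directivity data structure lookup is shown below; the dictionary contents are placeholders, not measured directivities, and the fallback behavior is an assumption of this sketch.

```python
import numpy as np

# Hypothetical directivity data structure: audio object type -> pattern data.
# The entries below are placeholders, not measured directivities.
DIRECTIVITY_DB = {
    "trumpet":  {"type": "database", "sh_coeffs": np.zeros((16, 31))},
    "omni":     {"type": "parametric", "function": "cosine", "power": 0.0},
    "cardioid": {"type": "parametric", "function": "cardioid"},
}

def resolve_directivity(object_type_metadata):
    """Map audio object type metadata to directivity pattern data."""
    entry = DIRECTIVITY_DB.get(object_type_metadata)
    if entry is None:
        # Fall back to an omnidirectional pattern if the type is unknown.
        return DIRECTIVITY_DB["omni"]
    return entry
```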

According to some implementations, the audio object type metadata may indicate dynamic directivity pattern data, such as time-varying and/or frequency-varying sets of spherical harmonic coefficients. Some such implementations may involve receiving the dynamic directivity pattern data prior to receiving an encoded core audio signal.

In some cases, the core audio signal received in block 405 may include audio signals corresponding to a plurality of audio objects included in the cluster. According to some such examples, the core audio signal may be based on a cluster of audio objects that may include a plurality of directional audio objects. The decoded radiation pattern determined in block 415 may correspond to a centroid of the cluster and may represent an average of each frequency band for each of the plurality of directional audio objects. The rendering process of block 430 may involve applying subband gains to the decoded core audio signal based at least in part on the decoded radiation data. In some examples, after decoding and applying the directionality processing to the core audio signal, the signal may be further virtualized using audio object location metadata and known rendering processes (e.g., binaural rendering by headphones, rendering using speakers of the rendering environment, etc.) to its intended location relative to the listener's location.

As discussed above with reference to fig. 3, in some implementations, the audio data may be accompanied by rendering parameters (shown as R in fig. 3). The rendering parameters may indicate whether at least some audio object metadata (e.g., Dolby Atmos metadata) should be interpreted in a normal manner (e.g., as position or size metadata) or as directivity metadata. The normal mode may be referred to as a "position mode" and the alternative mode may be referred to herein as a "directivity mode". Thus, in some examples, the rendering parameters may indicate whether at least some of the audio object metadata is interpreted as being directional with respect to the speakers or positional with respect to the room or other reproduction environment. Such implementations are particularly useful for directional rendering using smart speakers with multiple drivers, e.g., as described below.

Fig. 5A depicts a cymbal. In this example, a cymbal 505 is shown that emits sound having a directivity pattern 510, the directivity pattern 510 having a main response axis 515 that is substantially vertical. The directivity pattern 510 itself is also primarily vertical and extends somewhat outward from the main response axis 515.

Fig. 5B shows an example of a speaker system. In this example, the speaker system 525 includes a plurality of speakers/transducers configured for emitting sound in various directions, including upward. In some cases, the topmost speaker may be used in a conventional Dolby Atmos manner ("position mode") to render a position, e.g., to cause sound to reflect from the ceiling to simulate a height/ceiling speaker (z = 1). In some such cases, the corresponding Dolby Atmos rendering may include an additional height virtualization process that enhances the perception of audio objects as having a particular location.

In other use cases, the same upward-firing speaker may be operated in a "directivity mode," e.g., to simulate a drum, a cymbal, or another audio object having a directivity pattern similar to the directivity pattern 510 shown in fig. 5A. Some speaker systems 525 may be capable of beamforming, which may help in constructing a desired directivity pattern. In some instances, no virtualization process will be involved, in order to reduce the perception of the audio object as having a particular location.

Fig. 6 is a flow diagram showing blocks of an audio decoding method according to one example. Method 600 may be implemented, for example, by a control system of a decoding device (such as control system 815 described below with reference to fig. 8) that includes one or more processors and one or more non-transitory memory devices. As with other disclosed methods, not all of the blocks of method 600 have to be performed in the order shown in fig. 6. Moreover, alternative methods may include more or fewer blocks.

In this example, block 605 involves receiving audio data corresponding to at least one audio object, the audio data including a mono audio signal, audio object position metadata, audio object size metadata, and rendering parameters. In this implementation, block 605 involves receiving the data via an interface system of the decoding device (e.g., interface system 810 of fig. 8). In some cases, the audio data may be received in the Dolby Atmos™ format. Depending on the particular implementation, the audio object position metadata may correspond to world coordinates or model coordinates.

In this example, block 610 involves determining whether the rendering parameters indicate a position mode or a directivity mode. In the example shown in fig. 6, if it is determined that the rendering parameters indicate a directivity mode, then in block 615 the audio data is rendered for reproduction (e.g., via at least one speaker, via headphones, etc.) in accordance with a directivity pattern indicated by at least one of the position metadata or the size metadata. For example, the directivity pattern may be similar to that shown in fig. 5A.
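
The mode decision itself can be pictured as a simple dispatch. The following Python sketch only illustrates the idea; the field names and the "position"/"directivity" labels are assumptions of this example rather than a defined bitstream syntax.

```python
def interpret_object_metadata(rendering_params, metadata):
    """Read the same metadata fields either normally ('position mode')
    or as directivity information ('directivity mode'), depending on
    the received rendering parameter. All keys are illustrative."""
    if rendering_params.get("mode") == "directivity":
        return {"orientation": metadata["position"],   # reinterpreted
                "directivity": metadata["size"]}       # reinterpreted
    return {"position": metadata["position"],
            "size": metadata["size"]}
```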

In some examples, rendering the audio data may involve interpreting the audio object position metadata as audio object orientation metadata. The audio object position metadata may be Cartesian (x, y, z) coordinate data, spherical coordinate data, or cylindrical coordinate data. The audio object orientation metadata may be yaw, pitch, and roll metadata.
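
For example, a decoder might derive the yaw and pitch of the main response axis from (x, y, z) metadata roughly as in the following Python sketch; the axis conventions and the choice of zero roll are assumptions of this example.

```python
import numpy as np

def position_metadata_as_orientation(x, y, z):
    """Reinterpret (x, y, z) coordinate metadata as an orientation of
    the main response axis. A single vector cannot encode roll, so
    roll is taken as zero in this sketch."""
    yaw = np.degrees(np.arctan2(y, x))                 # azimuth
    pitch = np.degrees(np.arctan2(z, np.hypot(x, y)))  # elevation
    roll = 0.0
    return yaw, pitch, roll

print(position_metadata_as_orientation(0.0, 0.0, 1.0))  # points straight up: (0.0, 90.0, 0.0)
```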

According to some embodiments, rendering the audio data may involve interpreting the audio object size metadata as directivity metadata corresponding to a directivity pattern. In some such examples, rendering the audio data may involve querying a data structure that includes a plurality of directivity patterns and mapping at least one of the position metadata or the size metadata to one or more of the directivity patterns. Some such implementations may involve receiving the data structure via the interface system. According to some such implementations, the data structure may be received before the audio data.
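
A minimal sketch of such a lookup, assuming a normalized size value in [0, 1] and purely illustrative breakpoints and pattern names:

```python
# Hypothetical mapping from audio object size metadata to entries of a
# previously received directivity-pattern data structure.
DIRECTIVITY_TABLE = {
    "narrow": "cosine",      # small size -> tight beam
    "medium": "cardioid",
    "wide":   "omni",
}

def size_metadata_to_directivity(size):
    """Map normalized size metadata (0..1) to a directivity pattern."""
    if size < 0.33:
        return DIRECTIVITY_TABLE["narrow"]
    if size < 0.66:
        return DIRECTIVITY_TABLE["medium"]
    return DIRECTIVITY_TABLE["wide"]

print(size_metadata_to_directivity(0.2))   # "cosine"
```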

Fig. 7 illustrates an example of encoding a plurality of audio objects. In one example, information 701, 702, 703, etc. for objects 1 through n may be encoded. In one example, a representative cluster for audio objects 701-703 may be determined at block 710. In one example, a group of sound sources may be aggregated and represented by a representative "centroid," which involves computing an aggregated or average value for each metadata field. For example, the position of the cluster of sound sources may be the average of the positions of the individual sources. At block 720, the radiation pattern for the representative cluster may be encoded. In some examples, the radiation pattern for the cluster may be encoded according to the principles described above with reference to fig. 1A or 1B.
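
A possible aggregation, consistent with the averaging described above, is sketched below in Python. The energy weighting of the per-source radiation coefficients and the array shapes are assumptions of this example.

```python
import numpy as np

def cluster_centroid(positions, sh_coeffs, energies):
    """Aggregate a group of directional sources into one representative
    'centroid' object.

    positions : (n, 3) source positions; the centroid position is their mean
    sh_coeffs : (n, num_bands, num_coeffs) per-source radiation coefficients
    energies  : (n,) short-term signal energies used as weights
    """
    centroid_position = positions.mean(axis=0)
    weights = energies / energies.sum()
    # Energy-weighted average of the per-source coefficients.
    centroid_sh = np.tensordot(weights, sh_coeffs, axes=1)
    return centroid_position, centroid_sh
```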

FIG. 8 is a block diagram showing an example of components of an apparatus that may be configured to perform at least some of the methods disclosed herein. For example, apparatus 805 may be configured to perform one or more of the methods described above with reference to fig. 1A-1C, 4, 6, and/or 7. In some examples, apparatus 805 may be or may include a personal computer, desktop computer, or other local device configured to provide audio processing. In some examples, apparatus 805 may be or may include a server. According to some examples, apparatus 805 may be a client device configured for communication with a server via a network interface. The components of apparatus 805 may be implemented via hardware, via software stored on a non-transitory medium, via firmware, and/or by a combination thereof. The types and numbers of components shown in fig. 8, as well as in other figures disclosed herein, are shown by way of example only. Alternative implementations may include more, fewer, and/or different components.

In this example, the apparatus 805 includes an interface system 810 and a control system 815. The interface system 810 may include one or more network interfaces, one or more interfaces between the control system 815 and the memory system, and/or one or more external device interfaces, such as one or more Universal Serial Bus (USB) interfaces. In some implementations, the interface system 810 can include a user interface system. The user interface system may be configured for receiving input from a user. In some implementations, the user interface system can be configured to provide feedback to the user. For example, the user interface system may include one or more displays with corresponding touch and/or gesture detection systems. In some examples, the user interface system may include one or more microphones and/or speakers. According to some examples, the user interface system may include an apparatus for providing haptic feedback, such as a motor, vibrator, or the like. The control system 815 may, for example, include a general purpose single-or multi-chip processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, and/or discrete hardware components.

In some examples, apparatus 805 may be implemented in a single device. However, in some implementations, the apparatus 805 may be implemented in more than one device. In some such implementations, the functionality of the control system 815 may be included in more than one device. In some examples, apparatus 805 may be a component of another device.

The various example embodiments of this invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. In general, this disclosure should be understood to also encompass apparatuses suitable for performing the methods described above, such as an apparatus (e.g., a spatial renderer) having a memory and a processor coupled to the memory, wherein the processor is configured to execute instructions and perform methods according to embodiments of the present disclosure.

While various aspects of the example embodiments of this disclosure are illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller, or other computing devices, or some combination thereof.

Additionally, each block shown in the flow diagrams may be viewed as a method step, and/or as an operation resulting from operation of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated function. For example, embodiments of the invention include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, wherein the computer program contains program code configured to carry out a method as described above.

In the context of this disclosure, a machine-readable medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Computer program code for carrying out the methods of the present invention may be written in any combination of one or more programming languages. Such computer program code may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor of the computer or other programmable data processing apparatus, causes the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. The program code may execute entirely on the computer, partly on the computer as a stand-alone software package, partly on the computer and partly on a remote computer, or entirely on the remote computer or server.

Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains several specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.

It should be noted that the description and drawings merely illustrate the principles of the proposed method and apparatus. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Moreover, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the proposed methods and apparatuses and the concepts contributed by the inventors to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
