Decoding dense transient events with companding

Document No.: 621389 · Publication date: 2021-05-07

Abstract: This technique (Decoding dense transient events with companding) was created by A. Biswas and H. Mundt on 2019-08-21. Embodiments relate to companding methods and systems for reducing coding noise in audio codecs. A method of processing an audio signal includes the following operations. The system receives an audio signal. The system determines that a first frame of the audio signal includes a sparse transient signal. The system determines that a second frame of the audio signal includes a dense transient signal. The system compresses/expands (compands) the audio signal using a companding rule that applies a first companding index to the first frame of the audio signal and a second companding index to the second frame of the audio signal, each companding index used to derive a respective degree of dynamic range compression and expansion for the corresponding frame. The system then provides the companded audio signal to a downstream device.

1. A method of processing an audio signal, comprising:

receiving an audio signal;

for a time segment of the audio signal, analyzing the time segment of the audio signal to determine whether the time segment of the audio signal includes a sparse transient signal or a dense transient signal;

companding the time segment of the audio signal based on a result of the determining; and

outputting the companded time segment of the audio signal,

wherein companding the time segment of the audio signal comprises compressing or expanding a dynamic range of the time segment of the audio signal based on a companding index;

wherein a first companding index is used in the companding if it is determined that the time segment of the audio signal includes the sparse transient signal; and

wherein a second companding index, different from the first companding index, is used in the companding if it is determined that the time segment of the audio signal includes the dense transient signal.

2. The method of claim 1, wherein the sparse transient signal contains transient events having a first transient event density and the dense transient signal contains transient events having a second transient event density higher than the first transient event density.

3. The method of claim 1, wherein the sparse transient signal includes transient events having a first transient event density below a predefined threshold and the dense transient signal includes transient events having a second transient event density above the predefined threshold.

4. The method of any of the preceding claims, wherein the dense transient signal relates to at least one of applause, rain, or a crackling fire.

5. The method of any of the preceding claims, wherein the second companding index corresponds to a higher degree of dynamic range compression or expansion than the first companding index.

6. The method of any one of the preceding claims, wherein the second companding index is lower in value than the first companding index.

7. The method of any of the preceding claims, further comprising:

generating and outputting an indication of the companding index that has been used to compand the time segment of the audio signal.

8. A method of processing an audio signal, the method comprising:

receiving an audio signal;

determining, based on content of the audio signal in each time segment, a respective companding index for the respective time segment of the audio signal, each companding index corresponding to a respective degree of dynamic range compression or expansion for the respective time segment, the determining comprising:

assigning a first companding index to a first set of time segments consisting of all those time segments of the audio signal determined to include sparse transient signals; and

assigning a second companding index, different from the first companding index, to a second set of time segments consisting of all those time segments of the audio signal determined to include dense transient signals;

applying a companding operation to the audio signal, including compressing the first set of time segments according to the first companding index and compressing the second set of time segments according to the second companding index;

providing the compressed audio signal to a core encoder; and

providing respective indicators of the first companding index and the second companding index to a bitstream associated with the compressed audio signal.

9. The method of claim 8, wherein the first companding index is higher in value than the second companding index.

10. The method of claim 8 or 9, wherein the companding index controls a degree of dynamic range compression used in the companding, and wherein a lower value of the companding index corresponds to a higher degree of dynamic range compression.

11. The method according to one of claims 8-10, wherein the sparse transient signal includes transient events having a first transient event density and the dense transient signal includes transient events having a second transient event density that is higher than the first transient event density.

12. The method according to one of claims 8-10, wherein the sparse transient signal includes transient events having a first transient event density below a predefined threshold and the dense transient signal includes transient events having a second transient event density above the predefined threshold.

13. The method of any one of claims 8-12, wherein the dense transient signal relates to at least one of applause, rain, or a crackling fire.

14. The method according to any one of claims 8-13, wherein each indicator includes a respective indicator bit for each time segment of the audio signal.

15. The method of claim 14, wherein each indicator includes a respective second indicator bit for each time segment indicating whether companding is on or off.

16. The method of claim 14 or 15, wherein each indicator includes at least two indicator bits indicating at least four companding states, each of the at least four states corresponding to a respective type of content for the respective time segment of the audio signal.

17. A method of decoding an audio signal, comprising:

receiving an audio signal and at least one associated indicator for each time segment of the audio signal, each at least one associated indicator indicating a respective companding index corresponding to a degree of compression or expansion that has been applied to the respective time segment of the audio signal during a companding operation prior to encoding;

determining a first set of time segments consisting of all those time segments of the audio signal associated with a first indicator and determining a second set of time segments consisting of all those time segments of the audio signal associated with a second indicator;

for each time segment of the audio signal, determining a respective companding index for an expansion operation for the respective time segment, wherein it is determined that a first companding index applies to the first set of time segments and a second companding index applies to the second set of time segments, wherein the first companding index is different from the second companding index;

applying an expansion operation to the audio signal including expanding the first set of time segments according to a first degree of dynamic range expansion derived from the first companding index and expanding the second set of time segments according to a second degree of dynamic range expansion derived from the second companding index; and

outputting the expanded audio signal.

18. The method of claim 17, wherein each indicator corresponds to a respective channel or object in the received audio signal.

19. The method of claim 17 or 18, wherein each indicator comprises a one-bit value in a companding control data structure in metadata associated with the received audio signal.

20. The method of claim 19, wherein each indicator includes at least two bits of companding status data configured to indicate various companding indices, the at least two bits corresponding to at least four companding statuses, each status corresponding to a respective transient type of content of the audio signal.

21. The method of any of claims 17-20, wherein the expanded audio signal is output to at least one of a storage device, a streaming media server, an audio processor, or an amplifier.

22. An apparatus, comprising:

one or more processors; and

a non-transitory computer-readable storage medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform the operations of any of the preceding claims.

23. A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations according to any one of claims 1-21.

Technical Field

One or more embodiments relate generally to audio signal processing and, more particularly, to optimally using compression/expansion (companding) techniques in a signal-dependent manner during digital audio encoding.

Copyright notice

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the patent and trademark office patent file or records, but otherwise reserves all copyright rights whatsoever.

Background

Many popular digital sound formats utilize lossy data compression techniques that discard some data to reduce storage or data-rate requirements. The application of lossy data compression not only reduces the fidelity of the source content (e.g., audio content), but may also introduce significant distortion in the form of compression artifacts. In the context of audio coding systems, these sound artifacts are referred to as coding noise or quantization noise. Digital audio systems employ codecs (coder-decoder components) to compress and decompress audio data according to a defined audio file format or streaming audio format. The algorithm implemented by the codec tries to represent the audio signal with a minimum number of bits while retaining as high a fidelity as possible. The lossy compression techniques commonly used in audio codecs rely on psychoacoustic models of human auditory perception. Audio formats typically involve the use of a time/frequency-domain transform (e.g., the modified discrete cosine transform, MDCT) and the use of masking effects, such as frequency masking or temporal masking, so that sounds containing significant quantization noise are hidden or masked by the actual content.

As is well known, audio codecs typically shape coding noise in the frequency domain so that it becomes minimally audible. In a frame-based encoder, coding noise may be most audible during low-intensity portions of a frame and may be heard as pre-echo distortion, where silence (or low-level signal) preceding a high-intensity segment is overwhelmed by noise in the decoded audio signal. This effect may be most pronounced for transient sounds or impulses from percussion instruments, such as castanets or other sharp percussive sound sources, and is typically caused by the quantization noise introduced in the frequency domain being spread throughout the codec's transform window in the time domain.

Although filters have been used to minimize pre-echo artifacts, such filters typically introduce phase distortion and time smearing. Using a smaller transform window is also one approach, but this can significantly reduce the frequency resolution, and using multiple smaller transform windows in a frame increases the "side information" bit rate.

A system has been developed to overcome the effects of pre-echo artifacts by using companding techniques to achieve temporal shaping of quantization noise in audio codecs. Such embodiments include a companding algorithm implemented in the QMF domain to achieve temporal shaping of quantization noise in conjunction with a masking-threshold computation strategy. However, determining the type of companding that needs to be applied for a particular signal type is often not straightforward. In general, companding provides benefits in time-domain (temporal) noise shaping, but it can also provide benefits in frequency-domain noise shaping. However, computing the masking threshold and the threshold-reduction strategy that satisfies the bit-rate constraint are highly nonlinear operations, and it is difficult to predict the final result of frequency-domain noise shaping. For this reason, and because of the inherently nonlinear operation of companding, it is extremely difficult to predict the type of companding that needs to be applied in a content-dependent manner. Through certain data-collection efforts, it has been found advantageous to compand audio content consisting entirely or primarily of speech or applause. Although it is possible to design detectors that function independently for speech and applause, it is not easy to design a simple, low-complexity detector that is capable of detecting both speech and applause without introducing any delay. Furthermore, current detectors are not always 100% accurate.

Therefore, there is a need for a signal-dependent companding system that can adaptively apply companding based on input signal content. There is a further need for a detector circuit that can better distinguish speech/applause from more tonal audio content in order to properly apply companding to complex audio signals.

The subject matter discussed in the background section should not be assumed to be prior art merely because of the mention in the background section. Similarly, the problems mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches that may also be inventions in their own right.

Disclosure of Invention

Embodiments relate to a method of processing an audio signal by: receiving an audio signal; classifying the audio signal as one of a pure sinusoidal, mixed, or pure transient signal using two defined thresholds; and applying a selective companding (compression/expansion) operation to the classified mixed signal using a companding rule that uses a temporal sharpness metric in the quadrature mirror filter (QMF) domain. The selective companding operation includes one of: a companding-off mode, a companding-on mode, and an average companding mode. The average companding mode is derived by measuring a gain factor for each of a plurality of frames of the audio signal and applying a constant gain factor to each frame, wherein the gain factor is closer to the gain factor of an adjacent companding-on frame than to the gain factor of 1.0 of an adjacent companding-off frame. The method may further comprise calculating the gain factor by averaging the mean absolute energy level over a plurality of time slots in a frame. For the classified mixed signal, the selective companding operation comprises one of: a companding-on mode and an average companding mode.

In an embodiment, the method further comprises turning companding off for classified pure sinusoidal signals and turning companding on for classified pure transient signals. The classified mixed signal may include applause or speech content. The companding rule may further use a spectral sharpness metric in the quadrature mirror filter (QMF) domain. In an embodiment, the method further comprises generating control information encoding the selective companding operation, and transmitting the control information in a bitstream that is sent to an audio decoder along with the digital audio output from an audio encoder. The classified mixed signal includes a combination of at least partly sinusoidal and partly transient signals, and is further processed to distinguish the partly sinusoidal and partly transient signals so as to apply the selective companding operation based on the principal component of the mixed signal, in order to provide continuity of the gain applied in compression and to reduce audio distortion caused by switching artifacts. The companding rule uses a first metric based on the number of frequency bands with temporal sharpness greater than a first threshold, and a second metric based on an average of the temporal sharpness values of bands whose temporal sharpness is less than the first threshold.

Embodiments further relate to a system, comprising: an encoder that applies compression to modify quadrature mirror filter (QMF) time slots by wideband gain values, wherein a larger gain value results in amplification of a relatively low-intensity time slot or a smaller attenuation of a relatively high-intensity time slot; an interface for transmitting audio output from the encoder to a decoder, the decoder configured to apply expansion in a companding operation to reverse the compression; and a companding controller having a detector configured to receive an input audio signal and classify the input audio signal based on signal characteristics, and a switch configured to switch among a plurality of companding modes based on the classified input audio signal.

Embodiments are still further directed to an audio decoder, comprising: a first interface that receives an encoded compressed audio signal from an encoder that applies compression to modify quadrature mirror filter (QMF) time slots by wideband gain values, wherein a larger gain value results in amplification of a relatively low-intensity time slot or a smaller attenuation of a relatively high-intensity time slot; an expander component that applies expansion to reverse the compression in a companding operation; and a second interface that receives a bitstream encoding a companding control mode from a controller that classifies an input audio signal based on signal characteristics and switches among a plurality of companding modes based on the classified input audio signal.

Another embodiment relates to methods, systems, devices, and non-transitory computer-readable media storing instructions configured to process audio signals. In one embodiment, an audio signal is received. For a time segment (e.g., a frame) of the audio signal, the time segment is analyzed to determine whether it contains a sparse transient signal or a dense transient signal. The time segment of the audio signal is companded (dynamic-range compressed or expanded) based on the result of the determination, and the companded time segment of the audio signal is output. Companding the time segment of the audio signal comprises compressing or expanding a dynamic range of the time segment based on a companding index. If it is determined that the time segment contains a sparse transient signal (e.g., a signal of a first transient type), a first companding index is used in the companding. If it is determined that the time segment contains a dense transient signal (e.g., a signal of a second transient type), a second companding index different from the first companding index is used in the companding.
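As an illustration, the per-segment index selection might be sketched as follows (Python; the dense-transient index value and the density threshold are hypothetical tuning choices, with 0.65 taken from the conventional speech-tuned index discussed below):

```python
# Hypothetical values for illustration: 0.65 is the conventional speech-tuned
# index mentioned below; the dense-transient index is assumed to be lower,
# since a lower index corresponds to stronger compression (see claim 10).
SPARSE_INDEX = 0.65
DENSE_INDEX = 0.50

def companding_index_for_segment(transient_density: float,
                                 density_threshold: float) -> float:
    """Select the companding index for one time segment based on whether its
    transient-event density is below (sparse) or above (dense) a threshold."""
    if transient_density > density_threshold:
        return DENSE_INDEX    # dense transients, e.g., applause, rain, crackles
    return SPARSE_INDEX       # sparse transients, e.g., castanets or speech
```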

According to another embodiment, a system receives an audio signal. The system determines that a first frame of the audio signal contains a sparse transient signal (e.g., a signal of a first transient type). The system determines that a second frame of the audio signal contains a dense transient signal (e.g., a signal of a second transient type). The system applies a compression/expansion (companding) operation to the audio signal using a companding rule that applies a first companding index to the first frame of the audio signal and a second companding index to the second frame of the audio signal. Each companding index is used to derive a respective degree of dynamic range compression for the corresponding frame. The system then provides the companded audio signal and the corresponding companding indices to a downstream device for consumption.

The techniques disclosed in this specification may be implemented to realize one or more advantages over conventional audio processing techniques. For example, the focus of conventional companding tools is to improve speech quality at low bit rates. Accordingly, after tuning on speech content, a fixed companding index α of 0.65 is used in the companding tool. A companding index α of 0.65 also appears to improve applause. The techniques disclosed in this specification improve upon conventional techniques by improving "hard to code" dense transient signals, such as applause, crackles, or rain. By selecting different values of the companding index based on the transient type of the content, the disclosed techniques may produce better-quality sound for dense transient signals.

The disclosed techniques add minimal overhead in audio coding. As described in more detail below, the disclosed techniques may improve the sound of transient-type content by adding only one bit in a companding control data structure in an audio compression encoding scheme, such as the digital audio compression (AC-4) standard. Thus, the disclosed techniques are simple and efficient.

Embodiments are still further directed to methods of making and using or deploying circuits and designs embodying or implementing signal dependent companding systems that may be used as part of an encoder, decoder or combined encoder/decoder system.

Incorporation by Reference

The entire contents of each specification, publication, patent, and/or patent application referred to in this specification are incorporated herein by reference to the same extent as if each individual publication and/or patent application were specifically and individually indicated to be incorporated by reference.

Drawings

In the following drawings, like reference numerals are used to refer to like elements. Although the following figures depict various examples, one or more implementations are not limited to the examples depicted in the figures.

Fig. 1 illustrates, under some embodiments, a companding system for reducing quantization noise in a codec-based audio processing system that may be used with a content detector.

Fig. 2A illustrates an audio signal divided into a plurality of short time periods under an embodiment.

Fig. 2B illustrates the audio signal of fig. 2A after applying a wideband gain for each of short periods of time, under an embodiment.

Fig. 3A is a flowchart illustrating a method of compressing an audio signal under an embodiment.

Fig. 3B is a flow chart illustrating a method of expanding an audio signal under an embodiment.

Fig. 4 is a block diagram illustrating a system for compressing an audio signal under an embodiment.

Fig. 5 is a block diagram illustrating a system for expanding an audio signal, under an embodiment.

Fig. 6 illustrates the division of an audio signal into a plurality of short time periods under an embodiment.

Fig. 7 illustrates an example QMF slot for a frame for a chord in an example embodiment.

Fig. 8 is a flow diagram illustrating a method of classifying audio content using a signal adaptive compander, under some embodiments.

Fig. 9 is a flow diagram illustrating a method of using spectral sharpness to distinguish speech or applause from a tonal signal, under some embodiments.

FIG. 10 illustrates an example technique for selecting a companding index based on content.

FIG. 11 is a table indicating example values of companding indices and corresponding companding states.

FIG. 12 is a flow chart illustrating a first example process of transient density based companding.

FIG. 13 is a flow chart illustrating a second example process of transient density based companding.

FIG. 14 is a flow chart illustrating a third example process of transient density based companding.

Detailed Description

Systems and methods are described for achieving temporal shaping of quantization noise in audio codecs through improvements to companding techniques, using companding algorithms implemented in the QMF domain. Embodiments include detectors for signal content (e.g., speech and applause) within audio content, and apply an appropriate type or amount of companding based on the detected content to provide optimal companding in a signal-dependent manner.

Aspects of one or more embodiments described herein may be implemented in an audio system that processes audio signals for transmission across a network that includes one or more computers or processing devices that execute software instructions. Any of the described embodiments may be used alone or with each other in any combination. Although various embodiments may have been motivated by various deficiencies with the prior art that may be discussed or referred to in one or more places in the specification, embodiments do not necessarily address any of these deficiencies. In other words, different embodiments may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some or only one of the deficiencies that may be discussed in the specification, and some embodiments may not address any of these deficiencies.

Fig. 1 illustrates a companding system for reducing quantization noise in a codec-based audio processing system that may be used with a content detector, under some embodiments. Fig. 1 illustrates an audio signal processing system built around an audio codec that includes an encoder (or "core encoder") 106 and a decoder (or "core decoder") 112. The encoder 106 encodes the audio content into a data stream or signal for transmission over the network 110, where it is decoded by the decoder 112 for playback or further processing. In an embodiment, the encoder 106 and decoder 112 of the codec implement lossy compression methods to reduce the storage and/or data-rate requirements of digital audio data, and the codec may be implemented as an MP3, Vorbis, Dolby Digital (AC-3 or AC-4), AAC, or similar codec. The lossy compression method of the codec generates coding noise, which is typically static in level over the course of a frame defined by the codec. This coding noise is typically most audible during low-intensity portions of the frame. The system 100 includes components that reduce perceptual coding noise in existing coding systems by providing a pre-compression step component 104 before the core encoder 106 of the codec and a post-expansion step component 114 that operates on the core decoder 112 output. The compression component 104 is configured to divide the original audio input signal 102 into a plurality of time segments using a defined window shape, and to calculate and apply a wideband gain in the frequency domain using a non-energy-based average of frequency-domain samples of the original audio signal, wherein the gain values applied to the time segments amplify segments of relatively low intensity and attenuate segments of relatively high intensity. This gain modification has the effect of compressing, i.e., significantly reducing, the original dynamic range of the input audio signal 102. The compressed audio signal is then coded in the encoder 106, transmitted over the network 110, and decoded in the decoder 112. The decoded compressed signal is input to an expansion component 114 configured to perform the inverse of the pre-compression step 104 by applying inverse gain values to each time segment, expanding the dynamic range of the compressed audio signal back to that of the original input audio signal 102. Thus, the audio output signal 116 comprises an audio signal having the original dynamic range, with the coding noise reduced by the pre- and post-step companding process.

System 100 performs compression and expansion (companding) in the QMF domain to achieve temporal shaping of the quantization noise of a digital codec (i.e., an audio or speech spectral front end). The encoder may be a Dolby Digital AC-3 or AC-4 core encoder, or any other similar system. The system performs pre-processing functions, including compression before the core encoder, and post-processing functions, including expansion of the core decoder output that precisely inverts the pre-processing. The system includes a signal-dependent encoder control that anticipates the decoder companding level, and a signal-dependent stereo (and multi-channel) companding process. As shown in fig. 1, the compression component 104 includes a companding detector 105 that detects the state of the companding decision. The companding on/off/average decision is detected in the encoder 106 and transmitted to the decoder 112 so that the compressor and expander can be switched on/off/average at the same QMF slot, where QMF slot processing is described in more detail below.

As further shown in fig. 1, the compression component or pre-compression step 104 is configured to reduce the dynamic range of the audio signal 102 input to the core encoder 106. The input audio signal is divided into a number of short segments. The size or length of each short segment is a fraction of the frame size used by the core encoder 106. For example, a typical frame size for a core encoder may be on the order of 40 to 80 milliseconds. In this case, each short segment may be on the order of 1 to 3 milliseconds. The compression component 104 calculates appropriate wideband gain values to compress the input audio signal on a per-segment basis. This is achieved by modifying each short segment of the signal with an appropriate gain value. A relatively large gain value is selected to amplify segments of relatively low intensity, and a small gain value is selected to attenuate segments of high intensity.

Fig. 2A illustrates an audio signal divided into a plurality of short time segments under an embodiment, and fig. 2B illustrates the same audio signal after applying a wideband gain through the compression component. As shown in fig. 2A, the audio signal 202 represents a transient (transient event) or sound pulse, such as may be produced by a percussion instrument (e.g., castanets). The signal is characterized by a spike in amplitude, as shown in a plot of voltage V versus time t. In general, the amplitude of a signal is related to the acoustic energy or intensity of a sound and represents a measure of the power of the sound at any point in time. When the audio signal 202 is processed by a frame-based audio codec, portions of the signal are processed within transform (e.g., MDCT) frames 204. Typical current digital audio systems utilize frames of relatively long duration, so that for a sharp transient or short burst of sound, a single frame may contain both low-intensity and high-intensity sound. Thus, as shown in fig. 2A, a single MDCT frame 204 includes the impulse portion (spike) of the audio signal and a relatively large amount of low-intensity signal before and after the spike. In an embodiment, the compression component 104 divides the signal into a number of short time segments 206 and applies a wideband gain to each segment in order to compress the dynamic range of the signal 202. The number and size of the short segments may be selected based on application requirements and system constraints. Relative to the size of an individual MDCT frame, the number of short segments may range from 12 to 64 segments, and may typically be 32 segments, although embodiments are not so limited.
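A minimal time-domain sketch of this segmentation and per-segment gain is shown below; it assumes the gain law given by the equations later in this section (exponent γ, reference level S0) and omits the QMF-domain smoothing the real system uses:

```python
import numpy as np

def compress_frame(frame: np.ndarray, num_segments: int = 32,
                   gamma: float = 1 / 3, s0: float = 1.0) -> np.ndarray:
    """Divide a frame into short segments and apply a wideband gain to each:
    segments quieter than s0 are amplified, louder segments are attenuated.
    Sketch only; gain smoothing across segment boundaries is omitted."""
    out = []
    for seg in np.array_split(frame, num_segments):
        level = np.mean(np.abs(seg)) + 1e-12    # mean absolute level (1-norm)
        gain = (level / s0) ** (gamma - 1.0)    # >1 for quiet, <1 for loud
        out.append(seg * gain)
    return np.concatenate(out)
```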

Fig. 2B illustrates the audio signal of fig. 2A after applying a wideband gain for each of short periods of time, under an embodiment. As shown in fig. 2B, the audio signal 212 has the same relative shape as the original signal 202, however, the amplitude of the low-intensity segments has been increased by applying amplification gain values, and the amplitude of the high-intensity segments has been decreased by applying attenuation gain values.

The output of the core decoder 112 is the input audio signal with reduced dynamic range (e.g., signal 212) plus quantization noise introduced by the core encoder 106. This quantization noise is characterized by an almost uniform level across time within each frame. The expansion component 114 acts on the decoded signal to recover the dynamic range of the original signal. It uses the same short-time resolution, based on the short segment size 206, and inverts the gain applied by the compression component 104. Thus, the expansion component 114 applies a small gain (attenuation) to segments that had low intensity in the original signal and were amplified by the compressor, and applies a large gain (amplification) to segments that had high intensity in the original signal and were attenuated by the compressor. In this way, the quantization noise with a uniform temporal envelope added by the core encoder is simultaneously shaped by the post-processor gain to approximately follow the temporal envelope of the original signal. This process effectively makes the quantization noise less audible during quiet passages. Although the noise may be amplified during high-intensity passages, it remains less audible due to the masking effect of the loud signal of the audio content itself.

As shown in fig. 2A, the companding process individually modifies discrete segments of an audio signal with respective gain values. In some cases, this may result in a discontinuity at the output of the compression component, which may cause problems in the core encoder 106. Likewise, discontinuities in gain at the expansion component 114 may result in discontinuities in the envelope of the shaped noise, which may cause audible clicks in the audio output 116. Another problem associated with applying individual gain values to short segments of an audio signal stems from the fact that a typical audio signal is a mixture of many individual sources. Some of these sources may be stationary across time and some may be transient. The statistical parameters of stationary signals are generally constant over time, while those of transient signals generally are not. Given the broadband nature of a transient, its fingerprint in such a mixture is generally more visible at higher frequencies. Gain calculations based on the short-term energy (RMS) of the signal tend to be biased towards the stronger low frequencies and are therefore dominated by stationary sources, varying little across time. Such an energy-based approach is therefore generally ineffective in shaping the noise introduced by the core encoder.

In an embodiment, the system 100 computes and applies gains at the compression and expansion components in a filter bank with a short prototype filter in order to address the potential issues associated with applying individual gain values. The signal to be modified (the original signal at the compression component 104, and the output of the core decoder 112 at the expansion component 114) is first analyzed by a filter bank, and the wideband gain is applied directly in the frequency domain. The corresponding effect in the time domain is to naturally smooth the gain application according to the shape of the prototype filter. This solves the discontinuity problem described above. The modified frequency-domain signal is then converted back to the time domain via a corresponding synthesis filter bank. Analysis of the signal with a filter bank provides access to its spectral content and allows the calculation of gains that preferentially boost the contribution of high frequencies (or of any weaker spectral content), providing gain values that are not dominated by the strongest components in the signal. This addresses the problem, described above, of audio sources that comprise a mixture of different sources. In an embodiment, the system uses the p-norm of the spectral magnitudes to calculate the gain, where p is typically less than 2 (p < 2). This emphasizes weak spectral content more than an energy-based calculation (p = 2).
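A small numeric illustration of this point, using a toy spectrum with one strong stationary component and many weak bins (values chosen only for illustration):

```python
import numpy as np

def p_norm_level(spectrum: np.ndarray, p: float) -> float:
    """Mean p-norm of the spectral magnitudes."""
    return float(np.mean(np.abs(spectrum) ** p)) ** (1.0 / p)

spec = np.array([10.0] + [0.1] * 63)   # one strong bin, 63 weak bins
print(p_norm_level(spec, p=2))  # ~1.25: dominated by the strong component
print(p_norm_level(spec, p=1))  # ~0.25: weak content weighs relatively more
```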

As stated above, the system includes a prototype filter to smooth the gain application. In general, a prototype filter is the basic window shape in a filter bank, which is modulated by sinusoidal waveforms to obtain the impulse responses of the different subband filters in the filter bank. For example, the short-time Fourier transform (STFT) is a filter bank, and each frequency line of this transform is a subband of the filter bank. The short-time Fourier transform is implemented by multiplying the signal with a window shape (an N-sample window), which may be rectangular, Hann, Kaiser-Bessel derived (KBD), or some other shape. A discrete Fourier transform (DFT) operation is then performed on the windowed signal to obtain the STFT. The window shape in this case is the prototype filter. The DFT consists of sinusoidal basis functions, each with a different frequency. Multiplying the window shape by a given sinusoidal basis function then yields the filter for the subband corresponding to that frequency. Because the window shape is the same at all frequencies, it is referred to as a "prototype".

In an embodiment, the system uses a QMF (quadrature mirror filter) bank as the filter bank. In a particular embodiment, the QMF bank may have a 640-sample window, which forms the prototype. This window, modulated by cosine and sine functions (corresponding to 64 equally spaced frequencies), forms the subband filters of the QMF bank. After each application of the QMF function, the window is shifted by 64 samples, i.e., in this case the overlap between time segments is 640 − 64 = 576 samples. However, although the window shape spans ten time segments (640 = 10 × 64) in this case, the main lobe of the window (where its sample values are most significant) is approximately 128 samples long. Thus, the effective length of the window is still relatively short.

In an embodiment, the expansion component 114 ideally inverts the gain applied by the compression component 104. While it would be possible to transmit the gains applied by the compression component to the decoder through the bitstream, this approach would typically consume a significant bit rate. In an embodiment, system 100 instead estimates the gain required by the expansion component 114 directly from the signal available to the expansion component 114 (i.e., the output of the decoder 112), which requires effectively no additional bits. The filter banks at the compression and expansion components are chosen to be identical so that the computed gains are reciprocals of each other. In addition, these filter banks are time-synchronized such that any effective delay between the output of the compression component 104 and the input to the expansion component 114 is a multiple of the stride of the filter banks. If the core encoder-decoder were lossless and the filter bank provided perfect reconstruction, the gains at the compression and expansion components would be exact inverses of each other, allowing exact reconstruction of the original signal. In practice, however, the gain applied by the expansion component 114 is only an approximation of the inverse of the gain applied by the compression component 104.

In an embodiment, the filter bank used in the compression and expansion components is a QMF bank. In a typical application, the core audio frame may be 4096 samples long, with an overlap of 2048 samples with adjacent frames. At 48 kHz, this frame is 85.3 milliseconds long. In contrast, the QMF bank used may have a stride of 64 samples (1.3 milliseconds), which provides good time resolution for the gain. Furthermore, the QMF has a smooth prototype filter that is 640 samples long, ensuring that the gain application varies smoothly across time. Analysis with this QMF filter bank provides a time-frequency tiled representation of the signal. Each QMF slot is equal to one stride, and there are 64 uniformly spaced subbands in each QMF slot. Alternatively, other filter banks may be employed, such as a short-time Fourier transform (STFT), and this time-frequency tiled representation can still be obtained.

In an embodiment, the compression component 104 performs a pre-processing step of scaling the codec input. For this example, $S_t(k)$ denotes the complex-valued filter-bank samples at time slot $t$ and frequency bin $k$. Fig. 6 illustrates, under an embodiment, the division of an audio signal into a number of time slots for a series of frequencies. For the embodiment of diagram 600, there are 64 frequency bins $k$ and 32 time slots $t$, which produce the time-frequency tiles shown (although not necessarily drawn to scale). The pre-compression step scales the codec input to become $S'_t(k) = S_t(k)/g_t$, where the gain

$$g_t = \left(\frac{\bar{S}_t}{S_0}\right)^{1-\gamma}$$

is derived from the normalized slot average $\bar{S}_t / S_0$.

In the above equation, the expression

$$\bar{S}_t = \frac{1}{K}\sum_{k=1}^{K}\left|S_t(k)\right|$$

is the mean absolute level (1-norm), and $S_0$ is a suitable constant. The general p-norm in this context is defined as follows:

$$\bar{S}_t = \left(\frac{1}{K}\sum_{k=1}^{K}\left|S_t(k)\right|^p\right)^{1/p}$$

it has been shown that a 1-norm can give much better results than using energy (RMS/2-norm). The value of the exponential term gamma is generally withinIn the range between 0 and 1, and optionally 1/3. Constant S0Ensuring a reasonable gain value independent of the implementation platform. For example, when in all St(k) The absolute value of the value may be 1 when implemented in a platform that may be limited to 1. At St(k) It may be different in platforms that may have different maximum absolute values. It can also be used to ensure that the average gain value across a large number of signals is close to 1. That is, it may be an intermediate signal value between the maximum signal value and the minimum signal value determined from a large amount of content.

In the post-step process performed by the expansion component 114, the codec output is expanded by the inverse of the gain applied by the compression component 104. This requires an exact or near-exact copy of the compression component's filter bank. Here, $\hat{S}_t(k)$ denotes the complex-valued samples of this second filter bank. The expansion component 114 scales the codec output to become

$$\hat{S}'_t(k) = \hat{S}_t(k)\cdot\hat{g}_t$$

In the above equation, $\hat{g}_t$ is a normalized slot-average based gain given by

$$\hat{g}_t = \left(\frac{\bar{\hat{S}}_t}{S_0}\right)^{\frac{1-\gamma}{\gamma}}$$

and

$$\bar{\hat{S}}_t = \frac{1}{K}\sum_{k=1}^{K}\left|\hat{S}_t(k)\right|$$

in general, the expansion component 114 will use the same p-norm as used in the compression component 104. Thus, if the average absolute level is used inDefinition in compression component 104ThenIs also defined using the 1-norm (p ═ 1) in the above equation.

When complex filter banks (composed of cosine and sine basis functions) such as the STFT or complex QMF are used in the compression and expansion components, the computation of the magnitudes of the complex subband samples $|S_t(k)|$ or $|\hat{S}_t(k)|$ requires computationally intensive square-root operations. This can be avoided by approximating the magnitude of each complex subband sample in various ways, e.g., by adding the magnitudes of its real and imaginary parts.
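A matching sketch of the post-step, including the square-root-free magnitude approximation just described. The expansion exponent (1 − γ)/γ is an assumption of this sketch, obtained by algebraically inverting the compression gain:

```python
import numpy as np

def fast_abs(x: np.ndarray) -> np.ndarray:
    """Square-root-free magnitude approximation: |re| + |im|."""
    return np.abs(x.real) + np.abs(x.imag)

def expand_qmf_slot(S_hat_t: np.ndarray, gamma: float = 1 / 3,
                    s0: float = 1.0) -> np.ndarray:
    """Invert the compression gain using only the decoded slot samples:
    g_hat_t = (slot_mean / S_0) ** ((1 - gamma) / gamma), applied
    multiplicatively as in the expansion equation above."""
    slot_mean = np.mean(fast_abs(S_hat_t)) + 1e-12  # same 1-norm as encoder
    g_hat_t = (slot_mean / s0) ** ((1.0 - gamma) / gamma)
    return S_hat_t * g_hat_t
```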

In the above equations, the value $K$ is equal to or less than the number of subbands in the filter bank. In general, the p-norm may be computed using any subset of the subbands in the filter bank; however, the same subset should be employed at both the encoder 106 and the decoder 112. In an embodiment, the high-frequency portion of an audio signal (e.g., audio components above 6 kHz) may be coded with the advanced spectral extension (A-SPX) tool. In addition, it may be desirable to use only signals above 1 kHz (or a similar frequency) to guide noise shaping. In this case, only those subbands in the range 1 kHz to 6 kHz may be used to calculate the p-norm, and hence the gain value. Furthermore, although the gain is calculated from one subset of subbands, it may still be applied to a different and possibly larger subset of subbands.

As shown in fig. 1, the two separate components 104 and 114 perform the pre-encoder compression function and the post-decoder expansion function, which together constitute the companding function that shapes the quantization noise introduced by the core encoder 106 of the audio codec. Fig. 3A is a flowchart illustrating a method of compressing an audio signal in the pre-encoder compression component, and fig. 3B is a flowchart illustrating a method of expanding an audio signal in the post-decoder expansion component, under an embodiment.

As shown in fig. 3A, the process 300 begins with the compression component receiving an input audio signal (302). This component then divides the audio signal into short time segments (304) and compresses the audio signal to a reduced dynamic range by applying a wideband gain value to each of the short segments (306). The compression component also implements certain prototype filtering and QMF filter-bank components to reduce or eliminate any discontinuities caused by applying different gain values to neighboring segments, as described above (308). In some cases, compression and expansion of audio signals before and after the encoding/decoding stages of an audio codec may degrade rather than enhance the output audio quality, e.g., depending on the type of audio content or certain characteristics of the audio content. In such cases, the companding process may be shut off or modified to fall back to a different companding (compression/expansion) level. Thus, the compression component determines the appropriateness of the companding function and/or the optimal level of companding required for the particular signal input and audio playback environment, among other variables (310). This determination step 310 may occur at any practical point in the process 300, such as prior to the division of the audio signal (304) or the compression of the audio signal (306). If companding is deemed appropriate, the gain is applied (306), and the encoder then encodes the signal according to the codec's data format for transmission to the decoder (312). Certain companding control data, such as activation data, synchronization data, companding level data, and other similar control data, may be transmitted as part of the bitstream for processing by the expansion component.

Fig. 3B is a flow diagram illustrating a method of expanding an audio signal in a post-decoder expansion component, under an embodiment. As shown in process 350, the decoder stage of the codec receives the bitstream encoding the audio signal from the encoder stage (352). The decoder then decodes the encoded signal according to the codec data format (353). The expansion component then processes the bitstream and applies any encoded control data to turn off expansion or to modify the expansion parameters based on the control data (354). The expansion component divides the audio signal into time segments using the appropriate window shape (356). In an embodiment, the time segments correspond to the same time segments used by the compression component. The expansion component then calculates an appropriate gain value for each segment in the frequency domain (358) and applies the gain value to each time segment to expand the dynamic range of the audio signal back to the original dynamic range, or to any other appropriate dynamic range (360).

Companding control

The compression and expansion components comprising the compander of system 100 may be configured to apply the pre- and post-processing steps only at certain times during audio signal processing, or only for certain types of audio content. For example, companding may show benefits for speech (which consists of a pseudo-stationary series of pulse-like events) and for musical transient signals. However, for other signals, such as stationary signals, companding may degrade signal quality. Thus, as shown in fig. 3A, a companding control mechanism is provided as block 310, and control data is transmitted from the compression component 104 to the expansion component 114 to coordinate the companding operation. The simplest form of this control mechanism is to turn off the companding function for a block of audio samples where applying companding would degrade audio quality. In an embodiment, the companding on/off decision is detected in the encoder and transmitted as a bitstream element to the decoder, enabling the compressor and expander to be turned on/off in the same QMF time slot.

Switching between the two states will typically result in a discontinuity in the applied gain, causing audible switching artifacts or clicks. Embodiments include mechanisms to reduce or eliminate these artifacts. In a first embodiment, the system allows the companding function to be turned off and on only at frames with gain close to 1. In this case, there is only a small discontinuity between the on and off companding states. In a second embodiment, a third, weak companding mode between the on and off modes is applied in the audio frames between on frames and off frames, and is signaled in the bitstream. The weak companding mode slowly transitions the exponent γ from its default value during companding to 0, which is equivalent to no companding. As an alternative to the intermediate weak companding mode, the system may implement start frames and stop frames that smoothly fade into the non-companding mode over a block of audio samples, rather than abruptly turning off the companding function. In another embodiment, the system is configured not to simply turn off companding, but to apply an average gain. In some cases, the audio quality of a tonal stationary signal may be improved if a constant gain factor is applied to the audio frame that is closer to the gain factors of adjacent companding-on frames than to the constant gain factor of 1.0 of the companding-off case. This constant average companding gain factor can be calculated by averaging all of the mean absolute levels (1-norms) calculated per slot over one frame. Frames with a constant average companding gain are accordingly signaled in the bitstream.
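A sketch of how the constant average companding gain for a frame could be computed under these definitions (per-slot 1-norms averaged over the frame, then mapped through the same gain law as in the equations above):

```python
import numpy as np

def average_companding_gain(frame: np.ndarray, gamma: float = 1 / 3,
                            s0: float = 1.0) -> float:
    """Constant gain for the 'average companding' mode. `frame` holds QMF
    samples of shape (T slots, K bands); the per-slot mean absolute levels
    (1-norms) are averaged over the frame before deriving a single gain."""
    slot_means = np.mean(np.abs(frame), axis=1)    # one 1-norm per slot
    frame_mean = float(np.mean(slot_means)) + 1e-12
    return (frame_mean / s0) ** (1.0 - gamma)
```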

Although the embodiments are described in the context of a mono audio channel, it should be noted that, as a simple extension, multiple channels may be handled by repeating the method on each channel individually. However, audio signals comprising two or more channels present some additional complexities that are addressed by embodiments of the companding system of fig. 1. The companding strategy should rely on the similarities between channels.

For example, in the case of stereo-panned transient signals, it has been observed that independent companding of the individual channels may lead to audible stereo-image artifacts. In an embodiment, the system determines a single gain value for each time segment from the subband samples of both channels and compresses/expands the two signals using that same gain value. This approach is generally suitable, for example, when the two channels have very similar signals, where similarity is defined using cross-correlation. A detector calculates the similarity between channels and switches between individual companding of the channels and joint companding of the channels. Extending this to more channels would use the similarity criterion to divide the channels into multiple channel groups and apply joint companding to each group. This grouping information may then be transmitted in the bitstream.
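A sketch of such a detector decision for two channels, using zero-lag normalized cross-correlation as the similarity measure (the threshold is a hypothetical tuning value):

```python
import numpy as np

def compand_jointly(left: np.ndarray, right: np.ndarray,
                    threshold: float = 0.8) -> bool:
    """Return True if the channels are similar enough that a single shared
    gain per time segment should be used (joint companding)."""
    num = float(np.sum(left * right))
    den = float(np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))) + 1e-12
    return (num / den) > threshold
```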

System embodiments

Fig. 4 is a block diagram illustrating a system for compressing an audio signal in conjunction with the encoder stage of a codec, under an embodiment. Fig. 4 illustrates a hardware circuit or system that implements at least a portion of the compression method used in the codec-based system shown in fig. 3A. As shown in system 400, an input audio signal 401 in the time domain is input to a QMF filter bank 402. This filter bank performs an analysis operation that separates the input signal into multiple components, with each band-pass filter carrying a frequency subband of the original signal. Reconstruction of the signal is performed in a synthesis operation carried out by the QMF filter bank 410. In the example embodiment of fig. 4, the analysis and synthesis filter banks handle 64 frequency bands. The core encoder 412 receives the audio signal from the synthesis filter bank 410 and generates a bitstream 414 by encoding the audio signal in a suitable digital format (e.g., MP3, AAC, AC-4, etc.).

The system 400 includes a compressor 406 that applies a gain value to each of the short segments into which the audio signal is divided. This produces a dynamic-range-compressed audio signal, such as shown in fig. 2B. Companding control unit 404 analyzes the audio signal based on the type of signal (e.g., speech), characteristics of the signal (e.g., stationary versus transient), or other relevant parameters to determine whether and how much compression should be applied. The control unit 404 may include a detection mechanism to detect a time-spike characteristic of the audio signal. Based on the detected characteristics of the audio signal and certain predefined criteria, the control unit 404 sends an appropriate control signal to the compressor 406 to turn off the compression function or to modify the gain values applied to the short segments.

It should be noted that the term "spike" may also be referred to as "sharpness" (e.g., $T_p$ or $T_s$), and both refer to the transient energy of the signal at a particular time relative to the most recent past and future times, such that a spiky or sharp signal appears as a pulse or spike in energy.

In addition to companding, many other coding tools may also operate in the QMF domain. One such tool is A-SPX, shown in block 408 of fig. 4. A-SPX is a technique that allows perceptually less important frequencies to be coded with a coarser coding scheme than more important frequencies. For example, in A-SPX at the decoder end, QMF subband samples from lower frequencies may be replicated at higher frequencies, and side information transmitted from the encoder to the decoder is then used to shape the spectral envelope in the higher frequency band. A-SPX is used by certain high-level codecs (e.g., AC-4), and other similar tools may also be used.

In a system where both companding and A-SPX encoding are performed in the QMF domain, at the encoder the envelope data for the higher frequencies may be extracted from the subband samples that have not yet been compressed, as shown in fig. 4, and compression may be applied only to the lower-frequency QMF samples corresponding to the frequency range of the signal encoded by the core encoder 412. At the decoder 502 of fig. 5, after QMF analysis 504 of the decoded signal, the expansion process 506 is applied first, and the A-SPX operation 508 then reproduces the higher subband samples from the expanded signal in the lower frequencies.

In this example implementation, the QMF synthesis filter bank 410 at the encoder and the QMF analysis filter bank at the decoder together introduce a delay of 640 − 64 + 1 = 577 samples (9 QMF slots). The core codec delay in this example is 3200 samples (50 QMF slots), so the total delay is 59 slots. This delay is accounted for by embedding control data in the bitstream and using it at the decoder so that the encoder compressor and decoder expander operations are synchronized.
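The delay bookkeeping from this example, worked through as a small sketch:

```python
# Worked delay arithmetic for this example implementation.
qmf_stride = 64                     # samples per QMF slot
qmf_delay = 640 - 64 + 1            # analysis + synthesis: 577 samples
qmf_delay_slots = round(qmf_delay / qmf_stride)   # ~9 QMF slots
core_delay_slots = 3200 // qmf_stride             # 50 QMF slots
total_delay_slots = qmf_delay_slots + core_delay_slots
assert total_delay_slots == 59      # slots by which control data is offset
```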

Alternatively, at the encoder, compression may be applied to the entire bandwidth of the original signal. Envelope data may then be extracted from the compressed subband samples. In this case, after QMF analysis, the decoder first runs the A-SPX tool to reconstruct the full-bandwidth compressed signal. An expansion stage is then applied to recover the signal with its original dynamic range.

Yet another tool that may operate in the QMF domain is the parametric stereo (PS) tool (not shown in fig. 4). In parametric stereo, two channels are encoded as a mono downmix with additional parametric spatial information, which can be applied in the QMF domain at the decoder to reconstruct the stereo output. Another such tool is the advanced coupling (A-CPL) tool, which is used by certain high-level codecs, such as AC-4. When parametric stereo (or A-CPL) and companding are used in conjunction with each other, the parametric stereo tool may be placed after the compression stage 406 at the encoder, in which case it will be applied before the expansion stage 506 at the decoder. Alternatively, the parametric stereo side information may be extracted from the uncompressed stereo signal, in which case the parametric stereo tool will operate after the expansion stage 506 at the decoder.

As shown in fig. 3A and 3B, the bitstream transmitted between the encoder and decoder stages of the codec includes certain control data. This control data constitutes side information that allows the system to switch between different companding modes. The switching control data (for switching companding on/off), plus some potential intermediate states, may be on the order of 1 or 2 bits per channel. Other control data may include signals that determine whether all channels of a discrete stereo or multi-channel configuration will use a common companding gain factor, or whether gain factors should be calculated independently for each channel. This data may require only a single additional bit. Other similar control data elements and their appropriate bit widths may be used depending on system requirements and constraints.

Detection mechanism

In an embodiment, a companding control mechanism is included as part of the compression component 104 to provide control over companding in the QMF domain. Companding control can be configured based on several factors, such as the type of audio signal. For example, in most applications companding should be turned on for speech signals and for any other signal within the category of transient or time-spiky signals (e.g., applause). The system includes a detection mechanism 405 to detect spikiness in the signal in order to help generate the appropriate control signals for the compander function. The detection mechanism 405 can be regarded as analyzing the signal to determine, for example, whether the signal is a sparse transient signal or a dense transient signal. In this sense, the temporal spikiness of the signal can be used to derive a measure of the density of transients (transient events) in the signal.

In an embodiment, the normalized 4th moment is used to measure the degree of fluctuation in the envelope signal. The temporal peakiness TP(k)_frame of a frequency band k for a given core codec frame is calculated using the following formula:

TP(k)_frame = sqrt( T · Σ_{t=1}^{T} |S_t(k)|^4 / ( Σ_{t=1}^{T} |S_t(k)|^2 )^2 )

Similarly, a spectral peakiness metric may be calculated over a time slot t. In the above equation, S_t(k) is the subband signal and T is the number of QMF slots corresponding to one core encoder frame. In an example implementation, the value of T may be 32. The per-band temporal peakiness can be used to classify sound content into two general categories: stationary music signals, and musical transient or speech signals. If TP(k)_frame is less than a defined value (e.g., 1.2), the signal in that sub-band of the frame is likely to be a stationary music signal. If TP(k)_frame is greater than this value, the signal is likely to be a musical transient signal or a speech signal. If the value is greater than an even higher threshold (e.g., 1.6), the signal is likely to be a pure musical transient, e.g., castanets. Furthermore, it has been observed that the temporal peakiness values obtained in different bands are more or less similar for naturally occurring signals, and this characteristic can be exploited to reduce the number of sub-bands for which temporal peakiness is to be calculated.

It should be noted that any flatness-based metric may be used in a similar manner, since peakiness (sharpness) is the opposite of flatness. For complex-valued transforms as used in AC-4, the magnitude of the complex value of S_t(k) is used. The above temporal sharpness metric may also be applied to real-valued transforms. In the above expression, for the AC-4/A-SPX embodiment, T is the total number of QMF slots in the frame, the final value of which (depending on stationary or transient content) is determined by the A-SPX frame generator. For a 2048-sample frame length, T = 2048/64 = 32 for stationary content. Because AC-4 supports various frame lengths (to support video frame-synchronous audio coding), the value of T differs for different frame lengths. As stated above, computing the magnitudes of complex subband samples requires computationally intensive square root operations, which may be avoided by approximating the magnitudes in various ways, such as by adding the magnitudes of their real and imaginary parts.
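For illustration, a minimal NumPy sketch of this per-band computation might look as follows; the array layout, the silence guard, and the approximation flag are assumptions of this sketch rather than part of any codec specification:

    import numpy as np

    def temporal_peakiness(S, approximate_abs=False):
        # S: complex QMF matrix of shape (T, K), T slots by K bands,
        # for one core codec frame (e.g., T = 32 for a 2048-sample frame).
        if approximate_abs:
            # Avoid the square root by approximating |S| with |Re| + |Im|.
            mag = np.abs(S.real) + np.abs(S.imag)
        else:
            mag = np.abs(S)
        T = S.shape[0]
        num = T * np.sum(mag ** 4, axis=0)            # normalized 4th moment
        den = np.sum(mag ** 2, axis=0) ** 2 + 1e-12   # guard against silent bands
        return np.sqrt(num / den)                     # 1.0 for a perfectly flat envelope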

Referring to fig. 4, it should be noted that for a QMF matrix, the number of slots may vary based on the A-SPX analysis and may vary with the signal, so the time boundary data must come from the A-SPX analysis component.

Companding switching

In an embodiment, the system described above reduces the dynamic range of the input signal prior to the core encoder. In this sense, companding prior to core encoding corresponds to compression of the dynamic range of the input signal. The system does this by modifying the QMF slots (in the core coding, or equivalently non-A-SPX, frequency range) with wideband gain values. The gain value is larger than one (i.e., amplifying) for relatively low-intensity slots and smaller than one (i.e., attenuating) for high-intensity slots.

In general, companding has been found to help process content such as applause or speech, or signals with sharp impacts (e.g., percussive effects), and is not helpful for other types of content (e.g., tonal audio). Thus, signal-adaptive companding applies companding depending on the detected signal. In an embodiment, the encoder/decoder system 100 of fig. 1 performs signal-adaptive or signal-dependent companding to implement a companding mode switching process that provides an optimal amount of companding based on signal content. As stated above, companding provides temporal noise shaping, and it has been observed to provide perceptually beneficial frequency domain noise shaping (where perceptually beneficial means that the quantization noise is better shaped to follow and remain below the masking curve). However, since companding is a non-linear operation, it is often difficult to predict its frequency domain benefit in conjunction with a psychoacoustic model (which is also a non-linear model). Incorrectly applying companding, such as with a sub-optimal switching strategy, can lead to switching artifacts and increase system complexity and delay. The companding switching process under certain embodiments determines when companding is helpful and how best to apply signal-adaptive companding.

Fig. 4 shows a system for compressing an audio signal in conjunction with an encoder stage of a codec, including a companding switching component or function 407. Switch 407 is configured to facilitate optimal companding selection: rather than simply turning companding off abruptly, it applies a constant gain factor to the audio frame that is more similar to the gain factors of adjacent companding-on frames than the constant gain factor of 1.0 used with companding off. This gain factor is calculated by averaging the per-slot mean absolute levels over one frame. Frames with average companding are signaled in the bitstream (e.g., b_comp_avg). "Average" in this context thus means the average of the per-slot mean absolute levels.

In one embodiment, switch 407 is configured to switch between one of three companding states: off (Compand_Off), normal companding (Compand_On), and average companding (Compand_Ave). In some embodiments, the Compand_Off mode is used only for pure sinusoidal signals, and the system switches between the on mode and the average mode for all other signals.

For normal companding: if S_t(k) is a complex-valued filterbank sample at time slot t and band k, then the pre-processing step scales the core codec input to SC_t(k) = S_t(k)·g_t, where g_t = (SM_t)^(α−1) is a normalized slot mean (or gain), SM_t is the mean absolute level (1-norm) given by SM_t = (1/K) Σ_{k=1}^{K} |S_t(k)|, and α = 0.65. Here, α may be referred to as the companding index. In an embodiment, the companding detector is designed for complex values S_t(k) whose magnitude lies between ±64. If the range of complex values is different, the design needs to be scaled accordingly, so other embodiments may feature different values.
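As an illustration, the pre-processing gain just described might be computed as follows; the epsilon floor that avoids a zero gain on silent slots is an assumption of this sketch:

    import numpy as np

    def compress_slots(S, alpha=0.65, eps=1e-9):
        # S: complex QMF matrix of shape (T, K), T slots by K bands.
        # alpha: companding index (0.65 for normal companding; 1.0 is a no-op).
        K = S.shape[1]
        SM = np.sum(np.abs(S), axis=1) / K       # SM_t: mean absolute level per slot
        g = np.power(SM + eps, alpha - 1.0)      # g_t = (SM_t)^(alpha - 1)
        return S * g[:, None]                    # SC_t(k) = S_t(k) * g_t

Since α − 1 is negative, low-level slots (SM_t < 1) receive gains above one and high-level slots (SM_t > 1) receive gains below one, which is exactly the dynamic range compression described above.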

For average companding, in an example embodiment, fig. 7 illustrates example QMF slots of a frame of a sustained chord. The diagram 700 of fig. 7 shows the tonal and harmonic content of an example polyphonic chord played on a suitable instrument (e.g., a piano or guitar), together with the resulting gains for three different companding settings. The companding-off trace 702 shows a flat gain, while the companding-on trace 706 shows a relatively abrupt discontinuity in gain. This discontinuity at the post-processor (i.e., the companding decoder) results in a discontinuity in the envelope of the shaped noise, which produces an audible click that can be perceived as annoying pop noise. The companding-average trace 704 shows that replacing normal companding with average companding (as described immediately above) eliminates the audible click. Similar observations show that this also applies to other tonal and harmonic content (e.g., a harpsichord or similar instrument). In other words, normal companding is detrimental to tonal/harmonic content, and for such content companding should be turned off or average companding should be employed.

If companding is applied in the encoder, the output of the core decoder is the signal with reduced dynamic range, with quantization noise of an almost uniform level (temporal envelope) added across time within each frame. In the expansion stage, small gains (attenuation) are applied to time slots that were of lower intensity in the original signal and were amplified by the pre-processor, and large gains (amplification) are applied to time slots that were of higher intensity in the original signal and were attenuated by the pre-processor. The quantization noise is thereby shaped by the post-processor gain to approximately follow the temporal envelope of the original signal. If average companding is applied in the encoder, average companding must also be applied in the decoder, i.e., a constant gain factor is applied to the audio frame.

In an embodiment, the per-band temporal peakiness (or sharpness) may be used to roughly classify audio content into the following categories, as defined by two thresholds:

(1) pure sine, stationary music: (TP(k)_frame < 1.2)

(2) stationary/tonal/transient music + speech + applause: (1.2 < TP(k)_frame < 1.6)

(3) pure transient (e.g., percussion): (TP(k)_frame > 1.6)

The thresholds 1.2 and 1.6 that distinguish the three classes of pure sine / tonal / pure transient audio are derived from experimental data and may differ depending on the overall range and unit of measurement. The specific values of 1.2 and 1.6 were derived for a companding detector designed for complex values S_t(k) of magnitude between ±64. If the range of complex values is different, different thresholds would be used.

Fig. 8 is a flow diagram illustrating a method of classifying audio content using a signal-adaptive compander, under some embodiments. The method begins in step 802 by defining thresholds that distinguish three main content categories: (1) pure sine; (2) stationary/tonal; and (3) pure transient. The second category, stationary/tonal, can be any signal that includes a mixture of sinusoids, transients, tones, partially tonal signals, etc., and generally covers most of the signals present in an audio program. Because this content represents a mixture of transient and sinusoidal signals, it is referred to as a "mixed" signal. To classify into the three main categories, two thresholds are defined with respect to certain companding detector parameters, such as the magnitude of the complex values described above (e.g., 1.2 and 1.6, although other values are possible). Based on these thresholds, the input audio is roughly classified into the three categories at step 804, and a determination is made in decision block 806 whether the signal is mixed. If the signal is not mixed, it is a pure sinusoid or a pure transient, in which case appropriate companding rules may be applied, such as turning companding off for pure sinusoids and turning companding on for pure transients, block 808. If the signal is mixed, it includes sinusoidal and transient components, and a simple companding on or off setting may not be optimal. In this case, further processing is required to distinguish tonal signals from transient or partially transient signals (e.g., due to speech or applause) or similar effects (e.g., percussion or similar instruments). In an embodiment, the temporal sharpness characteristics are used to derive a residual metric that helps to distinguish tonal signals from such speech/applause signals, block 810. Details of this step of the process are provided below with reference to fig. 9.
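A minimal sketch of this coarse classification, assuming the per-band temporal peakiness values computed earlier and the example thresholds of 1.2 and 1.6 (aggregating across bands by taking the mean is an assumption of this sketch):

    def classify_frame(tp_bands, lo=1.2, hi=1.6):
        # tp_bands: iterable of per-band temporal peakiness values for one frame.
        tp = sum(tp_bands) / len(tp_bands)   # bands behave similarly, so average them
        if tp < lo:
            return "pure_sine"        # category 1: turn companding off
        if tp > hi:
            return "pure_transient"   # category 3: turn companding on
        return "mixed"                # category 2: refine further (fig. 9)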

Thus, in an embodiment, the detection component 405 is configured to detect the type of signal by comparing values derived from the input signal against defined thresholds. This allows the system to distinguish between stationary/tonal music and speech, which may also have tonal portions. The detector also uses a spectral sharpness metric to make a better distinction. It derives a residual metric from the temporal sharpness metric using the fact that anything that is clearly not temporally sharp is spectrally sharp. Thus, after the signal has been coarsely classified as pure tone or pure transient (category 1 or 3 above) rather than mixed stationary/transient content (category 2 above), spectral sharpness is used to further distinguish the signals. The spectral sharpness is not calculated directly but is derived from other calculations as a residual measure.

With respect to residual derivation, fig. 9 is a flow diagram illustrating a method of using spectral sharpness to distinguish speech from tonal signals under some embodiments. In step 902, the process takes metric 1, which is the number of bands with temporal sharpness greater than 1.2. In step 904, the process takes metric 2, which is the average of the temporal sharpness values less than 1.2 and serves as the residual measure. The process then applies defined rules to select between normal companding and average companding, block 906. This allows the system to adaptively employ companding depending on the content, and takes into account the fact that companding is generally detrimental to tonal/harmonic content, for which companding should be turned off or averaged, as illustrated in fig. 7.

The following code fragment illustrates an example rule for turning companding on or using average companding, where [1] indicates metric 1 and [2] indicates metric 2:

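The original fragment is not reproduced here; the following is a hypothetical Python reconstruction consistent with the surrounding description (metric 1 as the number of bands with temporal sharpness above 1.2, metric 2 as the mean of the remaining values, and the inner speech/music split at 1.12 and 1.18). The band-count threshold and the exact case ordering are assumptions of this sketch; the actual rule enumerates the eight cases of the detector configuration discussed below:

    def compand_on_or_avg(tp_bands, min_sharp_bands=4):
        # Returns 1 for companding on, 0 for average companding.
        sharp = [tp for tp in tp_bands if tp > 1.2]
        rest = [tp for tp in tp_bands if tp <= 1.2]
        metric1 = len(sharp)                               # metric 1: sharp band count
        metric2 = sum(rest) / len(rest) if rest else 0.0   # metric 2: residual measure
        if metric1 > min_sharp_bands:
            return 1                  # clearly transient content: companding on
        if 1.12 < metric2 < 1.18:
            return 1                  # tonal components typical of speech: keep on
        return 0                      # tonal components typical of music: average mode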
this rule produces a series of ones and zeros. A value of one indicates that the companding mode is set on and a value of zero indicates that the companding mode is off, but off may result in the use of the averaging mode. Thus, in the code example above, 0 means average mode, and thus the code segment enables switching between companding on and companding average.

In the above rule, metric 2 attempts another round of classification to distinguish tonal signals from speech. The thresholds are suitably defined (e.g., based on the overall measurement scale) such that anything above 1.18 is a pure transient and anything below 1.1 is a pure tonal signal. Such pure transient or pure tonal signals, however, are likely to have been classified already by the outermost if condition, so the inner if statement attempts to further refine the classification. For the region between 1.1 and 1.18, it has been found that most tonal components of speech lie in the range 1.12 to 1.18, while tonal components of music lie between 1.1 and 1.12.

As can be seen from the above rules, in one embodiment the sequence of "on" and "average" decisions yields a detector configuration of 11110100 with respect to the on/average settings of the companding mode. An alternative detector may look like 10111000. For the above example, eight possibilities of "companding on" or "companding average" are provided. Generally, bit assignments such as 11110100 and 10111000 are found by critical listening and/or using certain listening tools. The alternative configuration represents a trade-off: companding is turned off slightly more often for tonal signals at the expense of turning it off slightly more often for speech. Such alternatives may be regarded as "second best" because the speech quality is somewhat degraded. The configuration may be changed or modified based on system requirements, subjective measures of optimal and sub-optimal sound, and the desired trade-off between speech/applause and tonal sounds.

For extreme cases, such as pure sinusoids, companding is turned off, as shown in block 808 of fig. 8 and the code segment shown below.

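The original segment is likewise not reproduced; a hypothetical sketch of the full three-way selection, reusing the compand_on_or_avg rule sketched above, might be:

    def select_companding_mode(tp_bands):
        # Three-way selection: COMPAND_OFF / COMPAND_ON / COMPAND_AVE.
        tp = sum(tp_bands) / len(tp_bands)
        if tp < 1.2:
            return "COMPAND_OFF"      # block 808: pure sinusoid / stationary
        if tp > 1.6:
            return "COMPAND_ON"       # pure transient
        # Mixed content: apply the on/average rule from the previous sketch.
        return "COMPAND_ON" if compand_on_or_avg(tp_bands) else "COMPAND_AVE"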
The above code segments illustrate an implementation of the switching method under some embodiments. It should be understood that they are example software implementations, and that variations and additional or different code structures may also be used.

The relationship between temporal sharpness and spectral sharpness is based on the observation that companding can provide a perceptually beneficial noise shaping effect in the frequency domain in addition to its temporal noise shaping effect. Referring to fig. 6, in the QMF domain the output of the QMF is a matrix whose y-axis is frequency and whose x-axis is time slots. Each slot consists of a number of samples, and each band covers a range of frequencies. This frequency-by-time matrix can be used to detect the temporal sharpness per frequency band along the x-axis. Likewise, the y-axis gives the spectral sharpness; although it is not necessarily calculated directly, the spectral sharpness may be derived from this matrix.

Fig. 4 illustrates a system based on the Dolby AC-4 audio delivery system and format, which is standardized by the European Telecommunications Standards Institute (ETSI) as TS 103 190 and adopted by Digital Video Broadcasting (DVB) in TS 101 154. Embodiments are also described with respect to the advanced spectral extension (A-SPX) coding tool for efficient coding of high frequencies at low bitrates. It should be noted that embodiments are not so limited and any suitable codec design and audio coding and transmission method may be used.

In an embodiment, at the encoder (for the A-SPX-only case or the A-SPX + A-CPL case), the compressor is the last step before QMF synthesis. For the A-SPX + A-CPL case, the hybrid analysis/synthesis at the encoder takes place before the compressor. Depending on the output of the companding controller 404, the compressor 406 may perform the normal companding mode or the average companding mode based on the switch 407 function.

The companding modes were tested on different audio segments in various experiments, using listening tools to rate the quality of the audio output in terms of the degradation caused by the audio coding process. It was found that segments that improve with companding off degrade only slightly when average companding is used instead, and that segments that improve with companding on degrade only very slightly when average companding is used. Together, these findings mean that the system can switch between companding on and average companding most of the time. This provides the advantage of greater continuity in the applied gain when switching, avoiding potential switching artifacts. It also results in a low-complexity, delay-free detector for companding control.

Although the embodiments described so far include a companding process for reducing quantization noise introduced by an encoder in a codec, it should be noted that aspects of this companding process may also be applied in signal processing systems that do not include encoder and decoder (codec) stages. Further, where the companding process is used in conjunction with a codec, the codec may be transform-based or non-transform-based.

Fig. 10 illustrates an example technique for selecting a companding index (α) based on audio content. It is worth noting that, in the following, reference will be made to frames of an audio signal, which are to be understood as a non-limiting example of time segments of an audio signal. The invention should not be understood as being limited to frames but is equally applicable to all possible implementations of time segments.

A system including one or more computer processors receives (1004) one or more audio signals. The system determines that a first frame F0 of the signal contains a signal of a first transient type, e.g., a sparse transient signal in which the transients are widely spaced. This may mean that the transients can be individually perceived and distinguished, with (short) periods of silence between them. Some examples of signals of the first transient type are castanets, electronic music, speech, or some applause. In response, the system sets the companding index to a first value (e.g., α = 0.65) for the first frame F0.

The system may determine that a second frame F1 of the audio signal includes content of a second transient type, comprising a dense transient signal. An example of content of the second transient type is applause with denser transients than the first type of content. In response, the system sets the companding index to a second value (e.g., α = 0.5) for the second frame.

The system determines that a third frame F2 of the audio signal contains content of a third transient type. Content of the third transient type includes transient signals whose transients are denser than those of the second transient type. An example of content of the third transient type is dense applause with a high clap density. In response, the system sets the companding index to a third value (e.g., α = 0.35) for the third frame. In general, the first to third values may decrease from the first value to the third value, e.g., from α = 0.65, via α = 0.5, to α = 0.35.

The system determines that a fourth frame F3 of the audio signal contains content of a fourth transient type. Content of the fourth transient type contains transient signals whose transients are so dense that the signal may be considered noise-like. In response, the system sets the companding index to a fourth value for the fourth frame. The fourth value may be equal to the first value (e.g., α = 0.65). Alternatively, the system may turn off companding for the fourth frame; setting the companding index to a value of 1.0 turns companding off.

Thus, the system may analyze frames of the audio signal (as a non-limiting example of time segments) to determine, for each frame, whether the respective frame includes a signal of the first through fourth transient content types. In some implementations, the system may only distinguish between the contents of two (or three) transient types, such as a sparse transient type (the first transient type) and a dense transient type (the second or third transient type). The system may then treat the frames of each respective transient type as belonging to a respective set of frames (e.g., first through fourth sets of frames) and assign a respective companding index to each set of frames. For example, a first value of the companding index may be assigned to a first set of frames consisting of all frames that include a signal of the first transient type, a second value to a second set of frames consisting of all frames that include a signal of the second transient type, a third value to a third set of frames consisting of all frames that include a signal of the third transient type, and a fourth value to a fourth set of frames consisting of all frames that include a signal of the fourth transient type.
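A minimal sketch of this frame-wise assignment; the class names and the density classifier are illustrative assumptions (in practice the classes would be derived from a transient-density measure such as temporal peakiness):

    # Example companding indices per transient-density class (cf. fig. 10).
    COMPANDING_INDEX = {
        "sparse_transients": 0.65,  # F0: e.g., castanets, speech
        "dense_transients":  0.50,  # F1: e.g., applause
        "very_dense":        0.35,  # F2: e.g., dense applause
        "noise_like":        1.00,  # F3: companding off (0.65 is an alternative)
    }

    def assign_companding_indices(frames, classify_density):
        # classify_density is assumed to map a frame to a class name above.
        return [COMPANDING_INDEX[classify_density(f)] for f in frames]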

Fig. 11 is a table indicating example values of companding indices and corresponding companding states. Conventionally, a one-bit value in the companding control data structure determines whether companding is on or off; if companding is on, a fixed companding index value of α = 0.65 is used. In transient-density-based companding as disclosed in this specification, two new companding index values, α = 0.5 and α = 0.35, are used for the second and third types of content as disclosed with reference to fig. 10. Depending on the number of bits used to signal the companding index value between the encoding side and the decoding side, different sets of companding indices may be used. For example, if one bit is used to signal the value of the companding index, a distinction can be made between sparse and dense transient events (e.g., using a predefined threshold on the density of transient events to separate the sparse and dense transient types from each other). Then, a first value of α = 0.65 may be used for sparse transient event frames and a second value of α = 0.5 or α = 0.35 for dense transient event frames. If two bits are used to signal the value of the companding index, four different types of frames can be distinguished using, for example, the first through fourth values of the companding index given above.

A lower value of the companding index α corresponds to a higher degree of dynamic range compression in companding (e.g., before core encoding). The value α = 1 indicates no companding. Correspondingly, a lower value of α corresponds to a higher degree of dynamic range expansion in companding (e.g., after core decoding). A higher degree of dynamic range compression means that low-intensity signals are amplified more and high-intensity signals are attenuated more.

The system may indicate the value of the companding index α in the companding control data structure, as shown below.

In the data structure, b_compand_on[ch] within companding_control(num_chan) contains a two-bit value for channel ch. b_compand_on[ch] may have a binary value of 00, 01, 10, or 11, indicating companding index values α of 1, 0.65, 0.5, and 0.35, respectively, for a particular frame. Other combinations of values are possible.
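Assuming the two-bit encoding just described, the mapping between the field value and the companding index could be implemented as follows; the function names are illustrative, and the exact bitstream syntax is defined by the codec specification:

    # Two-bit b_compand_on[ch] values -> companding index alpha (cf. fig. 11).
    BITS_TO_ALPHA = {0b00: 1.0, 0b01: 0.65, 0b10: 0.5, 0b11: 0.35}
    ALPHA_TO_BITS = {alpha: bits for bits, alpha in BITS_TO_ALPHA.items()}

    def write_indicator(alpha):
        # Encoder side: companding index -> two-bit field value.
        return ALPHA_TO_BITS[alpha]

    def read_indicator(bits):
        # Decoder side: two-bit field value -> companding index.
        return BITS_TO_ALPHA[bits & 0b11]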

Fig. 12 is a flow chart illustrating an example process 1200 of transient density based companding. Process 1200 is an example implementation of the techniques described with reference to fig. 10 and 11. Process 1200 may be performed by a system including one or more computer processors. The system may be an audio encoder, an audio decoder, or both.

The system receives (1202) an audio signal. The system determines (1204) that a first frame of the audio signal contains a sparse transient signal; the sparse transient signal comprises an audio signal of a transient type having a first transient density. The system determines (1206) that a second frame of the audio signal contains a dense transient signal; the dense transient signal comprises an audio signal of a transient type having a second transient density higher than the first. Audio signals of a transient type include at least one of applause, rain, or a crackling fire. In general, for a time segment (e.g., a frame) of an audio signal, the system may analyze the time segment to determine whether it includes a sparse transient signal or a dense transient signal.

The system compands (1208) the audio signal. Companding the audio signal includes applying a companding operation to the audio signal using a companding rule that applies a first companding index to the first frame of the audio signal and a second companding index to the second frame of the audio signal. Generally, the system applies companding to time segments of the audio signal based on the results of the above determination. This companding of a time segment may include compressing or expanding the dynamic range of the time segment of the audio signal based on a companding index. If it has been determined that the time segment of the audio signal contains a sparse transient signal, a first companding index (e.g., α = 0.65) may be used for the companding, and if it has been determined that the time segment contains a dense transient signal, a second companding index (e.g., α = 0.5 or α = 0.35) different from the first companding index may be used. Each companding index is used to derive a respective degree of dynamic range compression and expansion for the corresponding frame. The second companding index is lower in value than the first companding index and corresponds to a higher degree of dynamic range compression and expansion. For example, dynamic range compression may be achieved by scaling the complex-valued samples S_t(k) at time slot t and frequency band k according to SC_t(k) = S_t(k)·g_t, where g_t = (SM_t)^(α−1) is a normalized slot mean (or gain), and SM_t is the mean absolute level (1-norm) obtained from SM_t = (1/K) Σ_{k=1}^{K} |S_t(k)|.

The system provides (1210) the companded audio signal to a downstream device, i.e., outputs the compressed audio signal. The downstream device may be at least one of an encoder, a decoder, an output device, or a storage device.

Fig. 13 is a flow chart illustrating an example process 1300 of transient density based companding. Process 1300 is an example implementation of the techniques described with reference to figs. 10 and 11. Process 1300 may be performed by a system including one or more computer processors. The system may include at least one of an audio encoder, an audio decoder, or a companding encoding device. In particular, process 1300 may be performed on the encoding side, in which case companding may include compressing the dynamic range of the audio signal.

The system receives (1302) an audio signal. The audio signal may comprise a series of frames (as a non-limiting example of a time period).

The system determines (1304) a respective companding index for each frame of the audio signal based on the content of the audio signal in the corresponding frame. This may involve analyzing the frames of the audio signal, for example with respect to their content. Each companding index is used to derive a respective degree of dynamic range compression and expansion for the corresponding frame. Determining the companding index comprises the following operations. The system specifies a first companding index for a first frame of the audio signal determined to contain a sparse transient signal. The system specifies a second companding index for a second frame of the audio signal determined to contain a dense transient signal. The first companding index is higher in value than the second companding index, indicating a lower degree of dynamic range compression and expansion. As disclosed above with reference to fig. 10, the companding index controls the amount of dynamic range compression used in companding; lower values of the companding index correspond to higher dynamic range compression and expansion.

In general, this may correspond to assigning a first companding index to a first set of time segments (e.g., frames) consisting of all those time segments of the audio signal determined to include sparse transient signals, and assigning a second companding index, different from the first companding index, to a second set of time segments (e.g., frames) consisting of all those time segments of the audio signal determined to include dense transient signals.

The sparse transient signal includes an audio signal of a transient type having a first density. The dense transient signal includes a signal of a transient type having a second density higher than the first density. For example, sparse transient events may be distinguished from dense transient events based on a predefined threshold on the transient density; a measure of density may be derived using the spectral or temporal spikiness of the signal. Audio signals of a transient type include at least one of applause, rain, or a crackling fire.

The system performs (1306) the compression portion of the companding (i.e., the encoder-side portion of companding, which corresponds to compression), which includes compressing the first frame according to the first companding index and compressing the second frame according to the second companding index. This amounts to applying a companding operation to the audio signal that includes compressing the first set of time segments according to the first companding index and compressing the second set of time segments according to the second companding index.

The system provides (1308) the compressed audio signal to a core encoder.

The system provides (1310) respective indicators of the first companding index and the second companding index in a bitstream associated with the compressed audio signal. An indicator may be a value in the companding control data structure described with reference to fig. 11. Each indicator may include a respective data bit for each respective channel or respective object in the audio signal, each indicator being stored in the companding control data structure. Each indicator may include at least two bits of respective companding state data, the at least two bits determining at least four companding states, each of the four states corresponding to a respective type of content.

Fig. 14 is a flow chart illustrating a third example process 1400 of transient density based companding. Process 1400 is an example implementation of the techniques described with reference to figs. 10 and 11. Process 1400 may be performed by a system including one or more computer processors. The system may include at least one of an audio encoder, an audio decoder, or a companding encoding device. In particular, process 1400 may be performed at the decoding side, in which case companding may include expanding the dynamic range of the audio signal.

The system receives (1402) a compressed audio signal associated with a plurality of indicators. Each indicator indicates a respective companding index used to derive a degree of dynamic range compression applied to a corresponding frame of the compressed audio signal. That is, the system may receive an audio signal and at least one associated indicator for each time segment of the audio signal, each at least one associated indicator indicating a respective companding index corresponding to a degree of compression or expansion that has been applied to the respective time segment of the audio signal during a companding operation prior to encoding.

The system determines (1404) that a first frame of content in the compressed audio signal is associated with the first indicator and a second frame of content in the compressed audio signal is associated with the second indicator. Each indicator corresponds to a respective channel or object in the compressed audio signal. Each indicator includes a one-bit value in a companding control data structure in metadata associated with a compressed audio signal. In particular, as described in additional detail in fig. 11, each indicator includes at least two bits of companding status data configured to indicate various companding indices. The at least two bits correspond to at least four companded states, each corresponding to the content of a respective transient type. In general, the system may determine a first set of time segments that consists of all those time segments of the audio signal associated with the first indicator, and determine a second set of time segments that consists of all those time segments of the audio signal associated with the second indicator.

The system determines (1406) that a first companding index applies to a first frame of the expanded content and a second companding index applies to a second frame of the expanded content based on the first indicator and the second indicator. In general, the system may determine, for each time segment of the audio signal, a respective companding index for an expansion operation for the respective time segment. Wherein it may be determined that a first companding index applies to a first set of time periods and a second companding index applies to a second set of time periods. The first companding index may be different from the second companding index.

The system performs (1408) the expansion portion of the companding (i.e., the decoder-side portion of companding, which corresponds to expansion) on the compressed audio signal. The operations include expanding the first frame of content of the compressed audio signal according to a first degree of dynamic range expansion derived from the first companding index, and expanding the second frame of content according to a second degree of dynamic range expansion derived from the second companding index. In general, the system may apply an expansion operation (the decoder-side portion of companding) to the audio signal that includes expanding the first set of time segments according to the first degree of dynamic range expansion and expanding the second set of time segments according to the second degree of dynamic range expansion.
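A minimal sketch of the per-frame expansion, assuming the gain law given earlier: because the slot mean of the decoded (compressed) signal approximates (SM_t)^α, a gain with exponent (1/α) − 1 approximately inverts the encoder gain (SM_t)^(α−1). Gain smoothing and other implementation details are omitted from this sketch:

    import numpy as np

    def expand_slots(S_dec, alpha, eps=1e-9):
        # S_dec: complex QMF matrix (T slots by K bands) of the decoded,
        # still-compressed signal; alpha: index signaled for this frame
        # (e.g., 0.65, 0.5, or 0.35; alpha = 1.0 means no companding).
        K = S_dec.shape[1]
        SM = np.sum(np.abs(S_dec), axis=1) / K          # slot mean of decoded signal
        h = np.power(SM + eps, (1.0 / alpha) - 1.0)     # expansion gain per slot
        return S_dec * h[:, None]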

The system provides (1410) the expanded audio signal, e.g., to an output device. The output device includes at least one of a storage device, a streaming media server, an audio processor, or an amplifier.

It is to be understood that processes 1200 and 1300 can be performed at the compression component 104 described above (e.g., at the encoding side). Processes 1200 and 1400 may be performed at extension component 114 (e.g., at the decoding side).

Notably, although processes 1200, 1300, and 1400 involve first and second companding indices, the same applies where more than two transient types are distinguished. For example, the above processes may assign or use the first through fourth values of the companding index.

Aspects of the system described herein may be implemented in a suitable computer-based sound processing network environment for processing digital or digitized audio files. Portions of the adaptive audio system may include one or more networks, including any desired number of individual machines, including one or more routers (not shown) for buffering and routing data transmitted among the computers. This network may be established over a variety of different network protocols and may be the internet, a Wide Area Network (WAN), a Local Area Network (LAN), or any combination thereof.

One or more of the components, blocks, processes, or other functional components may be implemented by a computer program that controls the execution of a processor-based computing device of the system. It should also be noted that the various functions disclosed herein may be described using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, various forms of physical (non-transitory), non-volatile storage media, such as optical, magnetic or semiconductor storage media.

Throughout the description and claims, unless the context clearly requires otherwise, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, in the sense of "including, but not limited to". Words using the singular or plural number also include the plural or singular number, respectively. Additionally, the words "herein," "hereunder," "above," "below," and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word "or" is used with reference to a list of two or more items, the word covers all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of the items in the list.

Although one or more embodiments have been described by way of example and with respect to particular embodiments, it is to be understood that one or more implementations are not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Accordingly, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Various aspects and embodiments of the present invention are also apparent from the Enumerated Example Embodiments (EEEs) described below.

EEE 1. a method of processing an audio signal, comprising:

receiving an audio signal;

determining that a first frame of the audio signal contains a sparse transient signal;

determining that a second frame of the audio signal contains a dense transient signal;

companding the audio signal, including applying a compression/expansion (companding) operation to the audio signal using a companding rule that applies a first companding index to the first frame of the audio signal and a second companding index to the second frame of the audio signal, each companding index used to derive a respective degree of dynamic range compression and expansion for a corresponding frame; and

providing the companded audio signal to a downstream device.

EEE 2. the method according to EEE1, wherein the sparse transient signal comprises the audio signal of a transient type having a first transient density and the dense transient signal comprises the audio signal of a transient type having a second transient density higher than the first transient density, and

wherein the audio signal of the transient type includes at least one of applause, rain, or a crackling fire.

EEE 3. the method according to EEE1, wherein the second companding index is lower in value than the first companding index and corresponds to a higher degree of dynamic range compression and expansion than the first companding index.

EEE 4. a method of processing an audio signal, the method comprising:

receiving an audio signal by a compression/expansion (companding) encoding apparatus;

determining, by the companding device, a respective companding index for each frame of the audio signal based on content of the audio signal in the corresponding frame, each companding index used to derive a respective degree of dynamic range compression and expansion for the corresponding frame, the determining comprising:

specifying a first companding index for a first frame of the audio signal determined to include a sparse transient signal; and

specifying a second companding index for a second frame of the audio signal determined to include a dense transient signal, the first companding index being higher in value than the second companding index;

a compression portion that performs the companding, including compressing the first frame according to the first companding index and compressing the second frame according to the second companding index;

providing the compressed audio signal to a core encoder; and

providing respective indicators of the first companding index and the second companding index to a bitstream associated with the compressed audio signal.

EEE 5. the method of EEE 4, wherein the companding index controls the amount of dynamic range compression used in the companding, wherein a lower value of the companding index corresponds to a higher dynamic range compression.

EEE 6. the method according to EEE 4, wherein the sparse transient signal includes an audio signal of a transient type having a first density and the dense transient signal includes a signal of the transient type having a second density higher than the first density, the audio signal of the transient type including at least one of applause, rain, or a crackling fire.

EEE7. the method according to EEE 4, wherein each indicator comprises a respective data bit for each respective channel or respective object in the audio signal, each indicator being stored in a companding control data structure.

EEE8. the method according to EEE7, wherein each indicator comprises a respective second data bit indicating whether companding is on or off.

EEE9. the method according to EEE8, wherein each indicator comprises at least two bits of respective companding state data, the at least two bits determining at least four states of companding, each of the four states corresponding to a respective type of content.

Eee10. a method of decoding an audio signal, comprising:

receiving, by a decoder device, a compressed audio signal associated with a plurality of indicators, each indicator indicating a respective compression/expansion (companding) index used to derive a degree of dynamic range compression applied to a corresponding frame of the compressed audio signal;

determining that a first frame of content in the compressed audio signal is associated with a first indicator and a second frame of the content in the compressed audio signal is associated with a second indicator;

determining, by the decoder device and based on the first indicator and the second indicator, that the first frame of the content should be extended using a first companding index and the second frame of the content should be extended using a second companding index;

performing the expansion operation of the companding on the compressed audio signal, including expanding the first frame of the content of the compressed audio signal according to a first degree of dynamic range expansion derived from the first companding index, and expanding the second frame of the content of the compressed audio signal according to a second degree of dynamic range expansion derived from the second companding index; and

providing the expanded audio signal to an output device.

EEE 11. the method according to EEE10, wherein each indicator corresponds to a respective channel or object in the compressed audio signal.

EEE12. the method according to EEE10, wherein each indicator comprises a one-bit value in a companding control data structure in metadata associated with the compressed audio signal.

EEE13. the method according to EEE12, wherein each indicator includes at least two bits of companding status data configured to indicate various companding indices, the at least two bits corresponding to at least four companding states, each corresponding to the content of a respective transient type.

EEE14. the method of EEE10, wherein the output device includes at least one of a storage device, a streaming media server, an audio processor, or an amplifier.

Eee15. a system, comprising:

one or more processors; and

a non-transitory computer-readable storage medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations according to any one of EEEs 1-14.

A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations according to any one of EEEs 1-14.
