Audio reproducing method and sound reproducing system


Invented by C. Chabanne, N. R. Tsingos, and C. Q. Robinson (2011-03-17).

Abstract: The present disclosure relates to an audio reproduction method and a sound reproduction system, providing audio perception localized in the vicinity of visual cues. An apparatus includes a video display, a first row of audio transducers, and a second row of audio transducers. The first and second rows may be vertically disposed above and below the video display. The first row of audio transducers and the second row of audio transducers form columns that generate audible signals in coordination. By weighting the output of the audio transducers of a column, the perceived emission of the audible signal is from the plane of the video display (e.g., the location of a visual cue). In some embodiments, the audio transducers are spaced farther apart at the periphery, increasing fidelity in the central portion of the plane while decreasing fidelity at the periphery.

1. A method of audio reproduction for an audio signal, the method comprising:

receiving an audio signal;

receiving position metadata, wherein the position metadata comprises an identifier and audio signal reproduction position information, wherein the identifier uniquely identifies the audio signal, and wherein the audio signal reproduction position information indicates a sound reproduction position of the audio signal, wherein the position metadata further comprises an offset of a channel of the audio signal, wherein the offset adjusts a desired position of the channel horizontally and/or vertically;

receiving screen size information associated with a display screen;

determining a reproduction position of sound reproduction of the audio signal with respect to the display screen, wherein the reproduction position is determined based on the position metadata and the screen size information; and

rendering the audio signal at the reproduction position.

2. The method of claim 1, further comprising receiving a plurality of other audio signals for a front left speaker, a front right speaker, a rear left speaker, and a rear right speaker.

3. The method of claim 1, wherein the audio signal is at least one of a center channel audio signal and an audio object signal.

4. The method of claim 1, wherein the audio signal is one of a plurality of audio signals, and wherein the position metadata comprises, for each of the plurality of audio signals, an identifier and audio signal reproduction position information.

5. A sound reproduction system, the sound reproduction system comprising:

means for receiving an audio signal;

means for receiving position metadata, wherein the position metadata comprises an identifier and audio signal reproduction position information, wherein the identifier uniquely identifies the audio signal, and wherein the audio signal reproduction position information indicates a sound reproduction position of the audio signal, wherein the position metadata further comprises an offset for a channel of the audio signal, wherein the offset adjusts a desired position of the channel horizontally and/or vertically;

means for receiving screen size information associated with a display screen;

means for determining a reproduction position of sound reproduction of the audio signal with respect to the display screen, wherein the reproduction position is determined based on the position metadata and the screen size information; and

means for rendering the audio signal at the reproduction position.

6. The sound reproduction system of claim 5, further comprising means for receiving a plurality of other audio signals for a front left speaker, a front right speaker, a rear left speaker, and a rear right speaker.

7. The sound reproduction system of claim 5, wherein the audio signal is at least one of a center channel audio signal and an audio object signal.

8. The sound reproduction system of claim 5, wherein the audio signal is one of a plurality of audio signals, and wherein the position metadata comprises, for each of the plurality of audio signals, an identifier and audio signal reproduction position information.

9. An apparatus, comprising:

a processor; and

a non-transitory storage medium comprising instructions that, when executed by the processor, cause performance of the method of any one of claims 1-4.

10. A non-transitory storage medium comprising instructions that, when executed by a processor, cause performance of the method of any one of claims 1-4.

Technical Field

The present invention relates generally to audio reproduction and, more particularly, to the perception of audio locally in the vicinity of visual cues.

Background

In both residential living rooms and theatre venues, high-fidelity sound systems approximate an actual, original sound field using stereophonic techniques. These systems use at least two presentation channels (e.g., left and right channels; surround sound 5.1, 6.1, or 11.1; etc.), typically projected through a symmetric arrangement of speakers. For example, as shown in fig. 1, a conventional surround sound 5.1 system 100 includes: (1) front left speaker 102, (2) front right speaker 104, (3) front center speaker 106 (center channel), (4) low frequency speaker 108 (e.g., subwoofer), (5) rear left speaker 110 (e.g., left surround), and (6) rear right speaker 112 (e.g., right surround). In system 100, the front center speaker 106, a single center channel, carries all dialogue and other audio associated with the on-screen image.

However, these systems suffer from drawbacks, especially in localizing sounds in certain directions, and they typically require a single, fixed listening position for best performance (e.g., sweet spot 114, the focal point between the speakers at which an individual hears the audio mix as the mixer intended). Many improvement efforts to date have involved increasing the number of presentation channels. Mixing a large number of channels incurs greater time and cost penalties for the content producer, and yet the resulting perception still fails to localize sound near the visual cue of its source. In other words, sound reproduced by these systems is not perceived as emanating from the plane of the on-screen video, and thus lacks a true sense of realism.

The inventors have appreciated from the above that techniques for localized perceptual audio associated with video images are desirable to improve the natural listening experience.

The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Accordingly, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, problems identified with respect to one or more methods should not be assumed to have been recognized in any prior art based on this section.

Disclosure of Invention

Methods and apparatus are provided for audio perception localized in the vicinity of a visual cue. An analog or digital audio signal is received. A location of the perceptual origin of the audio signal on the video plane is determined or otherwise provided. A column of audio transducers (e.g., speakers) corresponding to the horizontal position of the perceptual origin is selected. The column includes at least two audio transducers selected from a plurality of rows (e.g., 2, 3, or more rows) of audio transducers. For the at least two audio transducers of the column, weighting factors for "panning" (e.g., creating a phantom audio image between physical speaker locations) are determined. These weighting factors correspond to the vertical position of the perceptual origin. An audible signal is presented by the column using the weighting factors.
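
By way of a non-limiting illustrative sketch (not part of the claimed method; the function names, coordinate conventions, and the equal-power panning law are assumptions), the column selection and vertical panning may look as follows:

```python
import math

def select_column(x_origin, column_x_positions):
    # Column snapping: choose the column whose horizontal position is
    # closest to the x coordinate of the perceptual origin.
    return min(range(len(column_x_positions)),
               key=lambda i: abs(column_x_positions[i] - x_origin))

def vertical_pan_weights(y_origin, y_bottom, y_top):
    # Equal-power pan between the bottom and top transducers of a column;
    # y_origin is assumed to lie within [y_bottom, y_top].
    t = (y_origin - y_bottom) / (y_top - y_bottom)   # 0 = bottom row, 1 = top row
    theta = t * math.pi / 2.0
    return math.cos(theta), math.sin(theta)          # (w_bottom, w_top)
```

The two weights then scale the same audible signal fed to the bottom and top transducers of the selected column, placing the phantom image at the desired height.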

In one embodiment of the invention, an apparatus includes a video display, a first row of audio transducers, and a second row of audio transducers. The first and second rows are vertically disposed above and below the video display. The first row of audio transducers and the second row of audio transducers form columns that generate audible signals in coordination. By weighting the output of the audio transducers of the column, the perceived emission of audible signals is from the plane of the video display (e.g., the location of visual cues). In some embodiments, the audio transducers are spaced farther apart in the periphery to increase fidelity in the center portion of the plane and decrease fidelity in the periphery.

In another embodiment, a system includes an audio transparent screen, a first row of audio transducers, and a second row of audio transducers. The first and second rows are disposed behind the audio transparent screen (relative to the intended viewer/listener position). The screen is audio transparent at least for the desired frequency range of human hearing. In particular embodiments, the system may also include a third, fourth, or more rows of audio transducers. For example, in a cinema venue, three rows of 9 transducers may provide a reasonable tradeoff between performance and complexity (cost).

In yet another embodiment of the present invention, metadata is received. The metadata includes the location of the perceptual origin of an audio stem (e.g., a submix, subgroup, or bus that can be processed separately before being combined into the master mix). One or more columns of audio transducers closest to the horizontal position of the perceptual origin are selected. Each of the one or more columns includes at least two audio transducers selected from a plurality of rows of audio transducers. Weighting factors for the at least two audio transducers are determined. These weighting factors are associated with, or otherwise related to, the vertical position of the perceptual origin. The audio stem is audibly presented by the columns using the weighting factors.

As a further embodiment of the present invention, an audio signal is received. A first position on the video plane for the audio signal is determined; the first position corresponds to a visual cue on a first frame. A second position on the video plane for the audio signal is determined; the second position corresponds to the visual cue on a second frame. A third position of the audio signal on the video plane is interpolated or otherwise estimated to correspond to the location of the visual cue on a third frame. The third position is disposed between the first position and the second position, and the third frame is interposed between the first frame and the second frame.

Drawings

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:

fig. 1 shows a conventional surround sound 5.1 system;

FIG. 2 illustrates an exemplary system according to an embodiment of the invention;

FIG. 3 illustrates listening position insensitivity of an embodiment of the present invention;

FIGS. 4A and 4B are diagrams illustrating perceptual sound localization according to embodiments of the present invention;

FIG. 5 is a diagram illustrating interpolation for perceptual sound localization for motion, according to an embodiment of the present invention;

FIGS. 6A, 6B, 6C, and 6D illustrate exemplary device configurations according to embodiments of the present invention;

FIGS. 7A, 7B, and 7C illustrate exemplary metadata information for localized perceptual audio in accordance with embodiments of the present invention;

FIG. 8 shows a simplified flow diagram according to an embodiment of the invention; and

FIG. 9 shows another simplified flow diagram according to an embodiment of the invention.

Detailed Description

Fig. 2 illustrates an exemplary system 200 according to an embodiment of the invention. The system 200 includes a video display device 202, which further includes a video screen 204 and two rows of audio transducers 206, 208. The rows 206, 208 are vertically disposed with respect to the video screen 204 (e.g., row 206 above the video screen 204 and row 208 below it). In a particular embodiment, the rows 206, 208 replace the front center speaker 106 to output the center channel audio signal in a surround sound environment. Thus, the system 200 may also include (but need not include) one or more of the following: a front left speaker 102, a front right speaker 104, a low frequency speaker 108, a rear left speaker 110, and a rear right speaker 112. The center channel audio signal may be wholly or partially dedicated to reproducing speech or other dialogue of the media content.

Each row 206, 208 includes a plurality of audio transducers (2, 3, 4, 5, or more). These audio transducers are aligned to form columns (2, 3, 4, 5, or more). Two rows of five transducers each provide a sensible trade-off between performance and complexity (cost). In alternative embodiments, the number of transducers per row may differ, and/or the placement of the transducers may be skewed. The feed to each audio transducer may be individualized, based on signal processing and real-time monitoring, to obtain a desired perceived origin, source size, source motion, and so on.

The audio transducers may be of any of the following types: speakers (e.g., direct-radiating electrodynamic drivers mounted in an enclosure), horn speakers, piezoelectric speakers, magnetostrictive speakers, electrostatic speakers, ribbon and planar magnetic speakers, bending wave speakers, flat panel speakers, distributed mode speakers (e.g., operating by bending plate vibration; see, for example, U.S. patent No. 7,106,881, which is incorporated herein in its entirety for any purpose), Heil air motion transducers, plasma arc speakers, digital speakers, and any combination/mixture thereof. Similarly, the frequency range and fidelity of the transducers may be varied from row to row, as well as within a row, as desired. For example, row 206 may include full range (e.g., 3 to 8 inch diameter drivers) or mid range audio transducers together with high frequency tweeters. The columns formed across rows 206, 208 may be designed to include different audio transducers that together provide a robust audible output.

Fig. 3 illustrates, among other features, listening position insensitivity of the display device 202 as compared to the sweet spot 114 of fig. 1. For the center channel, the display device 202 avoids or otherwise mitigates:

(i) timbre impairment - mainly the result of comb filtering ("combing") caused by differences in travel time between the listener and loudspeakers at various distances;

(ii) incoherence - primarily a result of the disparate velocity and energy vectors associated with a wavefront simulated by multiple sources, causing the audio image either to be indistinct (e.g., audibly blurred) or to be perceived at each speaker location rather than as a single image at an intermediate location; and

(iii) instability - variation of the audio image position with listener position; for example, when the listener moves outside the sweet spot, the audio image shifts toward, or even collapses into, the closer loudspeaker.

The display device 202 presents audio using at least one column at a time (sometimes referred to hereinafter as "column snapping") to improve the spatial resolution of audio image position and size and to improve integration of the audio with the associated visual scene. In this example, column 302, which includes audio transducers 304 and 306, presents a phantom audible signal at position 307. Regardless of the lateral position of the listener (e.g., listener position 308 or 310), the audible signal remains column-snapped to position 307. From listener position 308, path lengths 312 and 314 are substantially equal; the same holds for listener position 310 with path lengths 316 and 318. In other words, regardless of any lateral change in listener position, neither audio transducer 304 nor 306 moves relatively closer to the listener than the other transducer in column 302. In contrast, paths 320 and 322 to the front left speaker 102 and the front right speaker 104, respectively, can vary greatly, and those speakers accordingly suffer from listener position sensitivity.
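
The geometry underlying this insensitivity can be checked numerically; the sketch below (one column at x = 0 with transducers 0.5 m above and below screen center, a listener 2.5 m from the screen) uses illustrative coordinates that are not taken from the figures:

```python
import math

top = (0.0, 0.5, 0.0)        # column transducer above the screen (x, y, z in metres)
bottom = (0.0, -0.5, 0.0)    # column transducer below the screen

for lx in (-1.5, 0.0, 1.5):  # listener moves laterally across the room
    listener = (lx, 0.0, 2.5)
    d_top = math.dist(listener, top)
    d_bottom = math.dist(listener, bottom)
    print(f"listener x={lx:+.1f}: {d_top:.3f} m vs {d_bottom:.3f} m")
# The two path lengths change together but stay equal to each other,
# so the vertical phantom image does not collapse toward either transducer.
```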

Fig. 4A and 4B are diagrams illustrating perceptual sound localization of a device 402 according to an embodiment of the present invention. In fig. 4A, device 402 outputs a perceived sound at position 404 and then jumps to position 406. The jump may be associated with a film shot cut or with a change of sound source within the same scene (e.g., a different speaker, a sound effect, etc.). Horizontally, this is accomplished by snapping first to column 408 and then to column 410. Vertical positioning is achieved by varying the relative pan weights between the audio transducers within the snapped column. In addition, device 402 may output two distinct localized sounds at positions 404 and 406 simultaneously using the two columns 408 and 410, which is desirable if multiple visual cues are presented on the screen. As a particular embodiment, multiple visual cues may be combined with a picture-in-picture (PiP) display to spatially correlate each sound with the appropriate picture during simultaneous display of multiple programs.

In fig. 4B, device 402 outputs a perceived sound at position 414, disposed at an intermediate location between columns 408 and 412. In this case, two columns are used to localize the perceived sound. It will be appreciated that the audio transducers may be independently controlled across the entire listening area to achieve a desired effect. As described above, an audio image may be placed anywhere on the video screen, for example by column snapping. Depending on the visual cue, the audio image may be a point source or a large area source. For example, dialogue may be perceived as emanating from the mouth of an actor on the screen, while the sound of waves breaking on a beach may spread across the entire width of the screen. In that example, the dialogue may be snapped to a column while, at the same time, an entire row of transducers emits the sound of the waves. These effects are perceived similarly at all listener positions. Furthermore, the perceived sound source may travel across the screen when necessary (e.g., as an actor moves across the screen).

Fig. 5 is a diagram illustrating interpolation of perceived sound localization by device 502 for motion effects according to an embodiment of the present invention. This positional interpolation may occur during mixing, encoding, decoding, or playback post-processing, and the calculated interpolated position (e.g., an x, y coordinate position on the display screen) may then be used for audio presentation as described herein. For example, at time t0 the audio stem may be designated as being located at start position 506. The start position 506 may correspond to a visual cue or other source of the audio stem (e.g., the mouth of an actor, a barking dog, a car engine, the muzzle of a gun, etc.). At a later time t9 (nine frames later), the same visual cue or other source may be designated as being at end position 504, preferably before a scene change. In this example, the frames at times t0 and t9 are "key frames." Given a start position, an end position, and the elapsed time, the estimated position of the moving source may be linearly interpolated for each intervening (non-key) frame and used in the audio presentation. The metadata associated with the scene may include (i) the start position, end position, and elapsed time, (ii) the interpolated positions, or (iii) both (i) and (ii).
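
A minimal sketch of this key-frame interpolation (the coordinates, expressed as fractions of screen width and height, and the function name are illustrative assumptions):

```python
def interpolated_position(p_start, p_end, f_start, f_end, f):
    # Linear interpolation of an (x, y) screen position for an
    # intervening (non-key) frame f, with f_start <= f <= f_end.
    a = (f - f_start) / float(f_end - f_start)
    return (p_start[0] + a * (p_end[0] - p_start[0]),
            p_start[1] + a * (p_end[1] - p_start[1]))

start, end = (0.20, 0.80), (0.70, 0.40)   # positions 506 and 504, as screen fractions
for f in range(10):                        # key frames at frame 0 (t0) and frame 9 (t9)
    x, y = interpolated_position(start, end, 0, 9, f)
    print(f"frame {f}: ({x:.3f}, {y:.3f})")
```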

In alternative embodiments, the interpolation may be parabolic, piecewise-constant, polynomial, spline, or Gaussian-process based. For example, if the audio source is a fired bullet, a ballistic trajectory, rather than a straight line, may more closely match the visual path. In some cases, it may be desirable to smooth motion by panning in the direction of travel while "snapping" to the nearest row or column in the direction perpendicular to the motion to reduce phantom-imaging impairments, and the interpolation function may be adjusted accordingly. In other cases, additional positions beyond the designated end position 504 may be calculated by extrapolation, particularly for brief periods of time.

The designation of start position 506 and end position 504 may be accomplished in a variety of ways. It may be performed manually by a mixing operator. Manual, time-varying designation provides accuracy and excellent control over the audio presentation. However, it is labor intensive, especially if a video scene includes multiple sound sources or stems.

Designation may also be performed automatically using artificial intelligence (such as neural networks, classifiers, statistical learning, or pattern matching), object/face recognition, feature extraction, and the like. For example, if an audio stem is determined to exhibit the characteristics of human speech, it may be automatically associated with a face found in the scene by facial-recognition techniques. Similarly, if an audio stem exhibits the characteristics of a particular instrument (e.g., violin, piano, etc.), the scene may be searched for the appropriate instrument and the stem assigned to its location. In an orchestra scene, automatic assignment of each instrument can save considerable labor compared with manual designation.

Another designation approach is to provide multiple audio streams, each capturing the entire scene, for different known locations. The relative levels of the scene signals (optimally, considering each audio object signal) may be analyzed to generate positional metadata for each audio object signal. For example, a stereo microphone pair may be used to capture the audio throughout a studio. The relative level of an actor's voice in each of the stereo microphones may be used to estimate the actor's position on the set. In the case of computer-generated imagery (CGI) or computer-based games, the locations of audio and video objects throughout the scene are known and can be used directly to generate size, shape, and position metadata for the audio signals.
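
For the stereo-microphone example, the level-based estimate might be sketched as follows; the simple RMS-ratio mapping is an assumption, and practical systems would also exploit timing cues:

```python
import numpy as np

def estimate_horizontal_position(sig_left, sig_right, eps=1e-12):
    # Estimate a horizontal position in [0, 1] (0 = at the left microphone,
    # 1 = at the right microphone) from relative RMS levels.
    rms_l = np.sqrt(np.mean(np.square(sig_left))) + eps
    rms_r = np.sqrt(np.mean(np.square(sig_right))) + eps
    return rms_r / (rms_l + rms_r)

# A voice captured louder in the right microphone maps right of center:
left = 0.3 * np.random.randn(48000)
right = 0.7 * np.random.randn(48000)
print(estimate_horizontal_position(left, right))   # approximately 0.7
```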

Fig. 6A, 6B, 6C, and 6D illustrate exemplary device configurations according to embodiments of the present invention. FIG. 6A shows a device 602 having closely spaced transducers in two rows 604, 606. The high density of transducers improves the spatial resolution of audio image position and size and allows finer-grained motion interpolation. In certain embodiments, adjacent transducers are spaced less than 10 inches apart (center-to-center distance 608), or less than about 6 degrees at a typical listening distance of about 8 feet. However, it will be appreciated that for higher densities adjacent transducers may abut, and/or speaker cone size may be reduced; multiple micro-speakers (e.g., Sony DAV-IS10; Panasonic Electronics; 2 x 1 inch speakers or smaller, etc.) may be utilized.

In fig. 6B, device 620 includes an audio transparent screen 622, a first row of audio transducers 624, and a second row of audio transducers 626. The first and second rows are disposed behind the audio transparent screen (relative to the intended viewer/listener position). The audio transparent screen may be, but is not limited to, a projection screen, a television display screen, a cellular radiotelephone screen (including a touch screen), a laptop computer display, or a desktop/tablet computer display. The screen is audio-transparent at least for the desired frequency range of human hearing (preferably, about 20Hz to about 20kHz, or more preferably, the entire range of human hearing).

In particular embodiments, device 620 may also include a third row, a fourth row, or more rows (not shown) of audio transducers. In such a case, the uppermost and lowermost rows are preferably, but not necessarily, located near the upper and lower edges of the audio transparent screen, respectively. This allows a full range of audio panning across the plane of the display screen. In addition, the distance between rows may vary, providing greater vertical resolution in one section at the expense of another. Similarly, the audio transducers in one or more of the rows may be spaced farther apart at the periphery to increase the horizontal resolution of the central portion of the plane and decrease the resolution at the periphery. A high density of audio transducers (determined by the combination of row spacing and individual transducer spacing) in one or more regions may be configured for higher resolution, and in other regions, a low density may be configured for lower resolution.

The device 640 in fig. 6C also includes two rows of audio transducers 642, 644. In this embodiment, the distance between audio transducers within a row varies: the distance between adjacent transducers may vary as a function of distance from centerline 646, whether linearly, geometrically, or otherwise. As shown, distance 648 is greater than distance 650. In this way, spatial resolution within the plane of the display screen may be nonuniform: resolution at a first portion (e.g., the center) can be increased at the expense of lower resolution at a second portion (e.g., the periphery). This may be desirable when most visual cues for dialogue presented by the surround-system center channel occur near the center of the screen plane.
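
One way to realize such variable spacing is a geometric progression of gaps away from the centerline; the sketch below uses illustrative values:

```python
def row_positions(n_per_side, inner_gap, growth):
    # x positions (metres) for one row: gaps widen geometrically away from
    # centerline 646, concentrating resolution at the center of the screen.
    xs, x, gap = [0.0], 0.0, inner_gap
    for _ in range(n_per_side):
        x += gap
        xs.append(x)
        gap *= growth          # growth > 1 widens gaps toward the periphery
    return [-v for v in reversed(xs[1:])] + xs

print(row_positions(3, 0.15, 1.5))
# [-0.7125, -0.375, -0.15, 0.0, 0.15, 0.375, 0.7125]
```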

Fig. 6D illustrates an example form factor of device 660. The rows of audio transducers 662, 664 providing a high-resolution center channel are integrated with a front left speaker 666 and a front right speaker 668 into a single form factor. Integration of these components may provide assembly efficiency, better reliability, and improved aesthetics. However, in some cases rows 662 and 664 may be assembled as separate sound bars, each physically coupled (e.g., mounted) to a display device. Similarly, each audio transducer may be separately packaged and coupled to the display device. Indeed, the position of each audio transducer may be adjusted by the end user among alternative predetermined positions according to preference; for example, the transducers may be mounted on tracks with available slot positions. In such cases, the final transducer positions are entered by the user, or automatically detected, in the playback device for proper rendering of localized perceptual audio.

Fig. 7A, 7B, and 7C illustrate types of metadata information for localized perceptual audio according to embodiments of the present invention. In the simple example of fig. 7A, the metadata information includes a unique identifier, timing information (e.g., start and stop frames or, alternatively, elapsed time), coordinates for audio reproduction, and a desired size of the audio reproduction. Coordinates may be provided for one or more conventional video formats or aspect ratios, such as widescreen (greater than 1.37:1), standard (4:3), ISO 216 (1.414), 35 mm (3:2), WXGA (1.618), Super 16 mm (5:3), HDTV (16:9), and so on. The size of the audio reproduction, which may be correlated with the size of the visual cue, is provided to allow rendering by multiple transducer columns to increase the perceived size.

The metadata information of fig. 7B differs from that of fig. 7A in that audio signals can be identified for motion interpolation: a start position and an end position of the audio signal are provided. For example, audio signal 0001 starts at X1, Y2 and moves to X2, Y2 during frame sequence 0001 to 0009. In a particular embodiment, the metadata information may also include the algorithm or function to be used for motion interpolation.

In fig. 7C, metadata information similar to the example of fig. 7B is provided, except that the reproduction position information is given as percentages of the display screen size rather than Cartesian x-y coordinates, which makes the metadata device independent. For example, audio signal 0001 starts at P1% (horizontal), P2% (vertical): P1% may be 50% of the display width from a reference point, and P2% may be 25% of the display height from the same or another reference point. Alternatively, the position of sound reproduction may be specified by a distance (e.g., a radius) and an angle from the reference point. Similarly, the reproduction size may be expressed as a percentage of the display size or of a reference value. If a reference value is used, it may be provided to the playback device as metadata information, or it may be predefined and stored on the playback device if device-dependent.
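
A sketch of how a playback device might resolve these device-independent percentages; the record layout and field names are hypothetical rather than the actual format of fig. 7C:

```python
from dataclasses import dataclass

@dataclass
class PositionMetadata:        # hypothetical record loosely mirroring fig. 7C
    signal_id: str             # unique identifier, e.g. "0001"
    frame_start: int
    frame_end: int
    x_pct: float               # horizontal reproduction position, % of width
    y_pct: float               # vertical reproduction position, % of height
    size_pct: float            # desired reproduction size, % of width

def resolve(md: PositionMetadata, width_m: float, height_m: float):
    # Resolve percentages against the actual screen dimensions.
    return (md.x_pct / 100.0 * width_m, md.y_pct / 100.0 * height_m)

md = PositionMetadata("0001", 0, 9, 50.0, 25.0, 10.0)
print(resolve(md, 1.60, 0.90))   # (0.80, 0.225) on a 1.6 m x 0.9 m display
```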

In addition to the above types of metadata information (location, size, etc.), other suitable types may include:

an audio shape;

a preference for virtual versus real imaging;

a required absolute spatial resolution (to help manage phantom versus real audio imaging during playback); resolution may be specified for each dimension (e.g., left/right, front/back); and

a required relative spatial resolution (to help manage phantom versus real audio imaging during playback); resolution may be specified for each dimension (e.g., left/right, front/back).

In addition, for each signal sent to a center-channel audio transducer or a surround-sound system speaker, metadata indicating an offset may be sent. For example, the metadata may more accurately indicate the desired position (horizontally and vertically) of each channel to be rendered. For systems with higher spatial resolution, this allows spatial audio to be sent as a higher-resolution presentation while remaining backwards compatible.

Fig. 8 shows a simplified flow diagram 800 according to an embodiment of the invention. At step 802, an audio signal is received. At step 804, a location of the perceptual origin of the audio signal on the video plane is determined. Next, at step 806, one or more columns of audio transducers are selected; the selected columns correspond to the horizontal position of the perceptual origin, and each column includes at least two audio transducers. At step 808, weighting factors for the at least two audio transducers are determined or otherwise calculated; these weighting factors correspond to the vertical position of the perceptual origin for audio panning. Finally, at step 810, an audible signal is presented by the columns using the weighting factors. Alternatives may add steps, remove one or more steps, or perform the steps in a different sequence without departing from the scope claimed herein.
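
Step 810 then reduces to scaling a block of samples by the weighting factors from step 808 to produce the per-transducer feeds; a minimal sketch under the two-transducer-column assumption (buffer size and weight values are illustrative):

```python
import numpy as np

def present_block(audio_block, w_bottom, w_top):
    # Step 810: apply the weighting factors from step 808 to one block of
    # samples, producing the feeds for the selected column's transducers.
    return w_bottom * audio_block, w_top * audio_block

block = np.ones(512, dtype=np.float32) * 0.1            # placeholder audio block
bottom_feed, top_feed = present_block(block, 0.8, 0.6)  # 0.8**2 + 0.6**2 == 1
```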

Fig. 9 shows a simplified flow diagram 900 according to an embodiment of the invention. At step 902, an audio signal is received. At step 904, a first position of the audio signal on the video plane is determined or otherwise identified; the first position corresponds to a visual cue on a first frame. Next, at step 906, a second position of the audio signal on the video plane is determined or otherwise identified; the second position corresponds to the visual cue on a second frame. At step 908, a third position of the audio signal on the video plane is calculated, interpolated to correspond to the location of the visual cue on a third frame. The third position is disposed between the first position and the second position, and the third frame lies between the first frame and the second frame.

The flowchart also (optionally) includes steps 910 and 912, which select a column of audio transducers and calculate weighting factors, respectively. The selected column corresponds to the horizontal coordinate of the third position, and the weighting factors correspond to its vertical coordinate. At step 914, during display of the third frame, an audible signal is optionally presented through the column using the weighting factors. Flowchart 900 may be performed, in whole or in part, by a mixer during media production to generate the necessary metadata, or during playback to present the audio. Alternatives may add steps, remove one or more steps, or perform the steps in a different sequence without departing from the scope claimed herein.

The above techniques for localized perceptual audio can be extended to three-dimensional (3D) video, e.g., a stereoscopic image pair: a left-eye perceived image and a right-eye perceived image. However, identifying visual cues in only one perceived image for a key frame may result in a horizontal disparity between the position of the visual cue in the final stereoscopic image and the perceived audio playback. To compensate, the stereo disparity may be evaluated and adjusted coordinates determined automatically using conventional techniques, such as correlating the visual neighborhood of the key frame with the other perceived image, or computing from a 3D depth map.

Stereo correlation may also be used to automatically generate an additional coordinate z, pointing along the normal of the display screen and corresponding to the depth of the sound image. The z coordinate may be normalized such that 1 is at the viewing position, 0 indicates the display screen, and values less than 0 indicate positions behind the screen. Upon playback, the additional depth coordinate may be used to synthesize additional immersive audio effects in combination with the stereoscopic visuals.
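
The normalization described above amounts to a simple linear map; a sketch, where the viewer distance and sample depths are illustrative assumptions:

```python
def normalized_z(depth_m, viewer_distance_m):
    # Map a sound-image depth (metres in front of the screen plane; negative
    # means behind it) to the normalized z coordinate: 1 at the viewer,
    # 0 on the screen, < 0 behind the screen.
    return depth_m / viewer_distance_m

for depth in (2.5, 1.0, 0.0, -1.0):
    print(depth, "->", normalized_z(depth, 2.5))
```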

Implementation mechanisms-hardware overview

According to one embodiment, the techniques described herein are implemented with one or more special-purpose computing devices. The special purpose computing device may be hardwired to perform the techniques, or may include digital electronics such as one or more Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) permanently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed according to program instructions in firmware, memory, other storage, or a combination to perform the techniques. Such special purpose computing devices may also combine custom hardwired logic, ASICs, or FPGAs with custom programming to implement the techniques. The special purpose computing device may be a desktop computer system, portable computer system, handheld device, networked device, or any other device that incorporates hardwired and/or program logic for implementing the techniques. The techniques are not limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the computing device or data processing system.

The term "storage medium" as used herein refers to any medium that stores data and/or instructions that cause a machine to operate in a specific manner. It is non-transitory. Such storage media may include non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks. Volatile media includes dynamic memory. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge.

Storage media is distinct from, but can be used in conjunction with, transmission media. Transmission media participate in the transfer of information between storage media. Transmission media include, for example, coaxial cables, copper wire and fiber optics. Transmission media can also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.

Equivalents, extensions, alternatives, and others

In the foregoing specification, possible embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage, or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It should also be understood that, for clarity, "e.g." means "for example" (by way of illustration, not an exhaustive list), which is distinct from "i.e.," meaning "that is."

Furthermore, in the previous description, numerous specific details are set forth, such as examples of specific components, devices, methods, etc., in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice embodiments of the present invention. In other instances, well-known materials or methods have not been described in detail in order to avoid unnecessarily obscuring the embodiments of the invention.
