Event source content and remote content synchronization

Document No.: 1440087  Publication date: 2020-02-14

Reading note: This technology, Event source content and remote content synchronization, was created by Andy Dean on 2018-02-07. Its main content is as follows: A method and apparatus for synchronizing event media content, the event media content including remote audio and video content recorded by spectators or fan users at an event performance, where the remote audio content is recorded from the speakers of the event performance, together with source audio content recorded directly from the performance by the promoter, club, etc. The source audio content is of higher quality than the remote audio content recorded by the spectators. The higher-quality source audio content replaces the lower-quality audio content recorded by the spectators. The resulting source-audio/remote-video media content provides clean, clearly recorded, high-fidelity audio for the user's personalized memento of the event.

1. A method of synchronizing event media content including remote content having at least a first type of media and a second type of media recorded by a user on a user device, and source content including the first type of media, the method comprising:

identifying an identification means in a data structure of the remote content of the first type of media recorded by the user;

matching the identification means with the associated source content portion;

replacing the remote content with the associated source content portion; and

compiling the associated source content portion of the first type of media with the remote content of the second type of media recorded by the user.

2. The method of claim 1, wherein the first type of media of the source content is audio, the first type of media recorded by the user is audio, and the second type of media recorded by the user is video.

3. The method of claim 2, wherein a third type of media recorded by the user is a photograph and the associated source content for the first type of media is compiled with the second type of media and the third type of media recorded by the user.

4. The method of claim 1, wherein the first type of media of the source content is audio, the first type of media recorded by the user is audio, and the second type of media recorded by the user is photos.

5. The method of any preceding claim, wherein the source content comprises only the first type of media content, namely audio.

6. A method according to any one of the preceding claims, wherein the identification means is identified in a data structure of the time and location of the remote content of the first type of media recorded by the user.

7. A method according to any one of the preceding claims, wherein the identification means is identified in a data structure of the remote content of the first type of media recorded by the user using a plurality of tags manually generated by the user.

8. A method according to any preceding claim, wherein the method comprises a plurality of users, each user having a separate user device for recording the first and second types of media while attending the same event, the associated source content portion of the first type of media being compiled with remote content of the second type of media recorded by different users at different times within the duration of the source content.

9. The method of any one of the preceding claims, wherein the source content is a studio-quality recording of the event performance.

10. The method of any of the preceding claims, wherein the remote content includes ambient noise recorded at the event performance and a lower-quality recording of the event performance.

11. A system for synchronizing event media content including remote content having at least a first type of media and a second type of media recorded by a user, and source content including the first type of media, the system comprising:

a recognition module having an identification means for identifying remote content of the first type of media recorded by the user and matching the identification means with an associated portion of source content;

a synchronization module to replace the remote content with the associated source content portion; and

a compiler for compiling the associated source content portion of the first type of media with the remote content of the second type of media recorded by the user.

12. The system of claim 11, wherein the recognition module comprises an identification module having an identification pattern in the data structure of the time and location of remote content of the first type of media recorded by the user and a matching module for matching the identification pattern with the associated source content portion.

13. The system of claim 11 or 12, wherein the first type of media of the source content is audio, the first type of media recorded by the user is audio, and the second type of media recorded by the user is video.

14. The system of claim 13, wherein a third type of media recorded by the user is a photograph and the associated source content portion of the first type of media is compiled with the second type of media and the third type of media recorded by the user.

15. The system of claim 11 or 12, wherein the first type of media of the source content is audio, the first type of media recorded by the user is audio, and the second type of media recorded by the user is photos.

16. The system of any of claims 11 to 15, wherein the source content includes only the first type of media content, namely audio.

17. A system according to any one of claims 11 to 16, wherein the identification means is identified in a data structure of the time and location of the remote content of the first type of media recorded by the user.

18. The system of any of claims 11 to 17, wherein the identification means is identified in a data structure of the remote content of the first type of media recorded by the user having a plurality of tags manually generated by the user.

19. The system of any of claims 11 to 18, wherein the system comprises a plurality of users, each user having a separate user device for recording the first and second types of media while attending the same event, the associated source content portion of the first type of media being compiled with remote content of the second type of media recorded by different users at different times within the duration of the source content.

20. The system of any one of claims 11 to 19, wherein the source content is a studio-quality recording of the event performance.

21. The system of any of claims 11 to 20, wherein the remote content comprises ambient noise recorded at the event performance and a lower-quality recording of the event performance.

22. A computer-implemented method of synchronizing event media content, the event media content including remote content having at least a first type of media and a second type of media recorded by a user, and source content including the first type of media, the method comprising:

identifying an identification means in a data structure of the remote content of the first type of media recorded by the user;

matching the identification means with the associated source content portion;

replacing the remote content with the associated source content portion; and

compiling the associated source content portion of the first type of media with the remote content of the second type of media recorded by the user.

23. A consumer electronic device for a method of synchronizing event media content, the event media content comprising remote content having at least a first type of media and a second type of media recorded by a user, and source content comprising the first type of media, the device comprising:

a memory storing machine readable instructions; and

a processor configured to execute the machine readable instructions to implement the steps of the method of any of claims 1 to 10.

24. A system for synchronizing event media content, said event media content comprising remote content having at least a first type of media and a second type of media recorded by a user, and source content comprising said first type of media, the system comprising:

a server having a memory for storing machine-readable instructions and a processor configured to execute the machine-readable instructions;

a first consumer electronic device having a memory for storing machine-readable instructions and a processor configured to execute the machine-readable instructions;

the server and the first consumer electronic device are configured to communicate with each other over a network;

wherein the server and the first consumer electronic device interoperate to perform the method of any one of claims 1 to 10.

25. A computer readable medium storing machine readable instructions executable by a processor of a consumer electronic device to implement the steps of the method according to any one of claims 1 to 10.

26. A computer readable medium storing machine readable instructions executable by a processor of a server to implement the steps of the method of any one of claims 1 to 10.

Technical Field

The present invention relates generally to a method and system for synchronizing event source content and remote content, and more particularly to synchronizing high-quality recorded media content of a performance event from a source device that directly records the performance with low-quality recorded media content of a remote device recorded by an audience member of the same event.

Background

Audience members can record a live event performance, or capture a broadcast of one, on smartphones and other handheld recording devices. These recordings give audience members a personalized record of the event performance experience. Audience members typically stream, upload, and publish remotely recorded video and photo content to share their experiences with others on social networks and on video clip capture and sharing applications. However, the remotely recorded media content of the event performance (particularly the sound quality of the audio content) is often of low quality, distorted, and fragmented, making the published content difficult to hear or watch. Some event organizers may provide "official" recordings of live performances, but these recordings do not capture the individual perspectives of fans and audience members, i.e., the video and photo depictions of the live performance taken remotely by audience members.

There is a need for a method and system for event source content and viewer remote content synchronization for event performances that addresses or at least mitigates some of the problems and/or limitations discussed above.

Disclosure of Invention

One aspect of the present invention is a method of synchronizing event media content including remote content having at least a first type of media and a second type of media recorded by a user on a user device, and source content including the first type of media, the method comprising: identifying an identification means in a data structure of the remote content of the first type of media recorded by the user; matching the identification means with the associated source content portion; replacing the remote content with the associated source content portion; and compiling the associated source content portion of the first type of media with the remote content of the second type of media recorded by the user.

In an embodiment, the first type of media of the source content is audio, the first type of media recorded by the user is audio, and the second type of media recorded by the user is video. A third type of media recorded by the user may be a photograph, and the associated source content of the first type of media is compiled with the second type of media and the third type of media recorded by the user.

In an embodiment, the first type of media of the source content is audio, the first type of media recorded by the user is audio, and the second type of media recorded by the user is photos. The source content may include only the first type of media content, namely audio.

In an embodiment, the identification means may be identified in the data structure of the time and location of the remote content of the first type of media recorded by the user. The identification means may be identified in the data structure of remote content of the first type of media recorded by the user with a plurality of tags manually generated by the user.

In an embodiment, each of a plurality of users may have separate user devices for recording first and second types of media recorded by associated users attending the same event, the associated source content portion of the first type of media being compiled with the remote content of the second type of media content recorded by different users at different times in the duration of the source content.

In one embodiment, the source content is a recording of studio quality of the event performance. The remote content may include ambient noise of the recording of the performance of the event and a lower quality recording of the performance of the event.

One aspect of the present invention is a system for synchronizing event media content including remote content having at least a first type of media and a second type of media recorded by a user, and source content including the first type of media, the system comprising: a recognition module having an identification means for identifying remote content of the first type of media recorded by the user and matching the identification means with an associated portion of source content; a synchronization module to replace the remote content with the associated source content portion; and a compiler for compiling the associated source content portion of the first type of media with the remote content of the second type of media recorded by the user.

In one embodiment, the recognition module includes an identification module having an identification pattern in the data structure of the time and location of remote content of the first type of media recorded by the user, and a matching module for matching the identification pattern with the associated source content portion.

In an embodiment, the first type of media of the source content is audio, the first type of media recorded by the user is audio, and the second type of media recorded by the user is video. A third type of media recorded by the user is a photograph, and the associated source content portion of the first type of media is compiled with the second type of media and the third type of media recorded by the user.

In an embodiment, the first type of media of the source content is audio, the first type of media recorded by the user is audio, and the second type of media recorded by the user is photos. The source content may include only the first type of media content audio.

In an embodiment, the identification means may be identified in the data structure of the time and location of the remote content of the first type of media recorded by the user. The identification means may be identified in the data structure of remote content of the first type of media recorded by the user with a plurality of tags manually generated by the user.

In an embodiment, each of a plurality of users may have separate user devices for recording first and second types of media recorded by associated users attending the same event, the associated source content portion of the first type of media being compiled with the remote content of the second type of media content recorded by different users at different times in the duration of the source content.

In one embodiment, the source content is a recording of studio quality of the event performance. The remote content may include ambient noise of the recording of the performance of the event and a lower quality recording of the performance of the event.

One aspect of the present invention is a computer-implemented method of synchronizing event media content including remote content having at least a first type of media and a second type of media recorded by a user, and source content including the first type of media, the method comprising: identifying an identification means in a data structure of the remote content of the first type of media recorded by the user; matching the identification means with the associated source content portion; replacing the remote content with the associated source content portion; and compiling the associated source content portion of the first type of media with the remote content of the second type of media recorded by the user.

One aspect of the present invention is a consumer electronic device for a method of synchronizing event media content including remote content having at least a first type of media and a second type of media recorded by a user, and source content including the first type of media: a memory storing machine readable instructions; and a processor configured to execute the machine readable instructions to implement the steps of the method according to embodiments of the invention.

One aspect of the present invention is a system for synchronizing event media content, the event media content including remote content having at least a first type of media and a second type of media recorded by a user, and source content including the first type of media: a server having a memory for storing machine-readable instructions and a processor configured to execute the machine-readable instructions; a first consumer electronic device having a memory for storing machine-readable instructions and a processor configured to execute the machine-readable instructions; the server and the first user electronic device are configured to communicate with each other over a network; wherein the server and the first user electronic device interoperate to perform the method according to an embodiment of the present invention.

One aspect of the invention is a computer readable medium storing machine readable instructions executable by a processor of a consumer electronic device to implement steps of a method according to embodiments of the invention.

One aspect of the invention is a computer readable medium storing machine readable instructions executable by a processor of a server to implement steps of a method according to embodiments of the invention.

Drawings

The accompanying drawings incorporated in and forming a part of the specification illustrate several aspects of the present invention and, together with the description, serve to explain the principles of the invention. While the invention will be described in conjunction with certain embodiments, there is no intent to limit it to those embodiments described. On the contrary, the intent is to cover all alternatives, modifications and equivalents as included within the scope of the invention as defined by the appended claims. In the figures:

FIG. 1 shows a schematic block diagram of a system according to an embodiment of the invention;

FIG. 2 shows a schematic block diagram of a server as shown in FIG. 1 in more detail in accordance with an embodiment of the invention;

FIG. 3 shows a schematic block diagram of the source recording device as shown in FIG. 1 in more detail according to an embodiment of the present invention;

FIG. 4 shows a schematic block diagram of a user equipment recording device as shown in FIG. 1 in more detail according to an embodiment of the present invention;

FIGS. 5-7 are data structure diagrams illustrating remote media content compiled with source media content; and

FIG. 8 is a flow diagram of a method according to an embodiment of the invention.

Detailed Description

One embodiment of the present invention is a method and apparatus for synchronizing event media content, including remote audio and video content recorded by spectators or fan users from the speakers at an event performance, together with source audio content recorded directly from the performance by promoters, clubs, music providers, bands, and the like. The source audio content has better sound quality than the remote audio content recorded by the spectator. Typically, remotely recorded media content of an event performance recorded by a user on a user device (e.g., a smartphone) is of low quality (especially the sound quality of the audio content) and is often distorted and fragmented, making the recorded remote content difficult to hear or watch. The sound recording hardware of a user device is typically of much lower quality than the equipment used to record the source content. The higher-quality source audio content replaces the lower-quality remote audio content recorded by the spectator-user, and is synchronized and layered with the remote video content recorded by the user. The resulting event source-audio/remote-video media content provides clean, clearly recorded, high-fidelity audio for a user's personalized account or memento of the event.

Referring to FIG. 1, a schematic block diagram 10 of a system in accordance with an embodiment of the present invention is shown. The event source content and remote content synchronization system 10 illustrates a server 12 and a database 14 in communication with source content 20 and at least one user 22, 24, or a plurality of users 28, via a network 16 (e.g., the internet, a local area network, etc.). User 22 records an event performance 26. The event performance may be a live event, a broadcast of a live event, or a transmission of a previously recorded event. In an embodiment, the source content 20 may be broadcast live or recorded live at the event. The source content may be recorded music tracks that were recorded in a studio and played at an event, aired over the air, etc. A user may capture the transmission of a music track in the background while recording a video on the user's device. Content providers 30 may provide source content of higher quality than the remote content recorded by the user. Content providers may also provide other material relevant to a performance, such as text, audio content, images, photographs, videos, video clips, and so forth. External social media/communication sources 32 are shown communicating over the network to upload and share content.

FIG. 2 illustrates a schematic block diagram 50 of the server 12 shown in FIG. 1 in greater detail according to an embodiment of the invention. The server 12 includes a processor 52 and a memory 54 for storing and executing a plurality of applications and the different modules of the applications of the processing system. The server may include input devices 56 and output devices 58, as well as an interface module 60 for communicating with the various modules and devices of the system. The plurality of modules of the server may include a user configuration module 62 for maintaining user profile accounts for a plurality of users, a content module 64 for managing content for a plurality of performances, a sharing module 66 for sharing source content with the plurality of users, a recognition module 68 including an identify-content module 70 for identifying remote content and a match-content module 72 for matching the remote content with the source content, and a mixing module 74 for replacing, overlaying, etc., unclear remote audio content with clearer source audio content alongside the other remote media (video) content.
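The split between the identify-content and match-content steps of recognition module 68 can be pictured with a short sketch. This is only an illustration: the patent does not fix a data layout, so the index structure, field names, and containment-based matching rule here are assumptions.

```python
# Sketch of the server-side recognition module 68 of FIG. 2.
# The index layout and matching rule are illustrative assumptions.
class RecognitionModule:
    def __init__(self, source_index):
        # source_index maps (start, end, location) -> source content portion
        self.source_index = source_index

    def identify(self, remote_tag):
        # identify-content module 70: extract the identification data from
        # the remote content's tag (a time window and a location label here)
        return remote_tag["start"], remote_tag["end"], remote_tag["location"]

    def match(self, remote_tag):
        # match-content module 72: find the source portion whose time window
        # contains the remote clip at the same location
        start, end, location = self.identify(remote_tag)
        for (s_start, s_end, s_loc), portion in self.source_index.items():
            if s_loc == location and s_start <= start and end <= s_end:
                return portion
        return None
```

The mixing module 74 would then take the returned portion and layer it under the user's video, as described with reference to Figs. 5-7.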

Fig. 3 shows a schematic block diagram 100 of a recording device of the source content 20 as shown in Fig. 1 in more detail according to an embodiment of the invention. The recording device of the source content 20 includes a processor 102 and a memory 104 for storing and executing the source content of the performance and the different modules of the recording device 20 that process the source content. The recording device of the source content may include an input 106 and an output 108, as well as a recording source content module 110 for recording the source content, a source content mixing module 112 for mixing the source content as needed, a sharing module 114 for sharing the source content with the user, and a mark content module 116 for marking the content to allow synchronization of the content. It will be appreciated that the source content may be stored on the source content recording device itself, or in storage somewhere remote from the source content recording device (e.g., server 12, database 14, content provider storage 30, external social media/communications sources 32, cloud storage (not shown), other remote storage, etc.). The source content recording device records the performance content directly from the event performance, or in other words, in a more direct manner than the remote user device. For example, the source content recording device may include an input directly linked to the digital output of the performers' electronic sequencers, synthesizers, the audio output of musical instruments, and the like, or a sensitive, high-specification analog/digital microphone positioned close to the performers and/or instruments, to provide substantially higher sensitivity and higher quality recordings than can be achieved by a remote user recording device. The source content of the event performance may be recorded live and broadcast in real time during the live event, or at a later time after the live event.
The source content may be recorded on stage, in a recording studio, etc. The source content may be broadcast at a venue such as a concert venue, radio station, night club, movie theater, concert hall, or theater. The source content of the performance event may be broadcast anywhere over a speaker system, and the user records or captures remote content from the output of the speakers using the user device. The source content recording may be adjusted through filters, sound engineering equipment, etc., to improve its quality. Conversely, user remote recording devices are typically located away from the performers, between the speakers of the performance event, and so pick up interfering ambient sound, distortion, feedback, and the like. Thus, the recorded source content achieves a quality level much higher than the low quality achievable with the user device.

Fig. 4 shows a schematic block diagram 150 of the user device recording device 22 as shown in Fig. 1 in more detail according to an embodiment of the invention. The user device 22 includes a processor 152 and memory 154 for storing and executing a plurality of applications and the various modules of the user device and of the system, and a user interface module 160 for communicating with the various modules and devices of the system and with the user. The user device 22 may include input devices 156 and output devices 158 for user input, for retrieval of commands and information for the system, and for communication with the various modules and devices of the system. The input devices 156 may include a microphone, a video camera, and the like. The output devices may include a display 159, speakers, etc. The user device modules may include an application module 162 for running the method and system according to an embodiment of the present invention, a play content module 164 for playing media content on the user device, a compose content module 166 for a user to compose and share media content originating from the user device, and a manage content and tags module 168 for storing and maintaining media content resident on the user device in a content repository or storage area 169. It will be appreciated that remote content and/or source content may be stored in the content repository 169 on the user device itself, or in storage somewhere remote from the user device (e.g., server 12, database 14, content provider storage 30, external social media/communications sources 32, cloud storage (not shown), other remote storage, etc.). The interaction of the different modules 60, 62, 64, 66 of the server 12, the modules 110, 112, 114, 116 of the source content recording device 20, and the modules 160, 162, 164, 166, 168 of the user device 22 is described in more detail with reference to Figs. 5-8.

Figs. 5-7 show schematic diagrams of data structures 170, 180, 190 of remote content and source content. More specifically, Fig. 5 shows a schematic diagram 170 of a data structure of remote media content recorded by a user at an event performance. The data structure of remote media content 170 includes stacked or dual media content, i.e., a remote content B 172 layer and a remote content A 174 layer. Remote content B 172 may be the video portion of the remote media content and remote content A 174 may be the audio portion. Each portion includes tags 176, 178, metadata, etc., which include identification means, identification data, etc., to allow the remote data and source data to be synchronized. For example, the embedded identification data tag or metadata container may include ID3 metadata, geographic data or geolocation data with latitude and longitude coordinates, time identification data, artist name, song or track name, genre, album title, album track number, release date, etc., to identify the multimedia audio and/or video content.
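The layered structure of Fig. 5, with a tag attached to each media layer, can be sketched as a small set of record types. This is a minimal sketch: the patent does not fix a schema, so the field names and types below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ContentTag:
    # identification data embedded with each layer (field names illustrative)
    latitude: float
    longitude: float
    start_time: float          # e.g. seconds since epoch
    end_time: float
    artist: str = ""
    track_title: str = ""
    genre: str = ""
    user_tags: dict = field(default_factory=dict)  # manually generated tags

@dataclass
class MediaLayer:
    kind: str                  # "video", "audio", or "photo"
    payload: bytes             # the recorded media data
    tag: ContentTag

@dataclass
class RemoteMediaContent:
    # stacked dual structure of Fig. 5: video layer B over audio layer A
    content_b: MediaLayer      # remote video, with tag 176
    content_a: MediaLayer      # remote audio, with tag 178
```

In practice the tag fields would be carried in an embedded metadata container such as ID3, as the paragraph above notes.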

Referring to FIG. 6, a data structure 180 illustrates high-quality source content A 182 and its associated tag 184 for source media content recorded and captured by a performer source recording device.

Referring to Fig. 7, the resulting matched data structure 190 is shown, in which the remote media content B 172 layer with its associated tag 176 of Fig. 5 is compiled, embedded, and stacked with the high-quality source content A 182 layer with its associated tag 184 of Fig. 6. The low-quality remote content A 174 of Fig. 5 is stripped from the data structure 170 of the remote media content recorded by the user and replaced by the high-quality source content A 182 of Fig. 6 with its associated tag 184. This results in the data structure 190 having a dual data structure with some remote content captured by the user and some source content captured by the performer source recording device. In this embodiment, remote content B 172 may be video content, and remote content A 174 and source content A 182 may be audio. It will be appreciated that the content may be other forms of media content such as photographs, video, audio, etc.
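The strip-and-replace step that produces data structure 190 can be sketched as a simple function over the layered content. The dict layout here is an assumption for illustration only; real layers would carry media payloads and embedded tags.

```python
def compile_media(remote_video, remote_audio, source_audio):
    # The low-quality remote audio (content A 174) is stripped out and the
    # high-quality source audio (content A 182, carrying tag 184) is stacked
    # under the user's own video layer (content B 172, carrying tag 176).
    compiled = {"video": remote_video, "audio": source_audio}
    stripped = remote_audio    # discarded, or retained for reference
    return compiled, stripped
```

The compiled result keeps the user's personal video perspective while its audio track now carries the source tag, which is what makes the final content both personalized and clean-sounding.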

The tags 176, 178, 184 provide a variety of identification means to achieve synchronization of the content. For example, the tags in this embodiment identify a time and a geographic location that identify an event performance and the portion of the performance that is recorded. This information is critical to accurately identifying, matching, and synchronizing high-quality source content with remote content. For example, in some performance locations, such as a multi-stage music festival or an electronic music club venue, several performances may occur simultaneously on different stages or in different spaces. In such scenarios, the accuracy of the geographic location must be sufficient to distinguish between the stages or spaces. It will be appreciated that other forms of identification may be used instead of, or in addition to, time identification and/or geographic location.
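Matching on time plus geolocation might look like the following sketch, which uses a great-circle distance threshold to tell adjacent stages apart. The 50 m threshold and the dict field names are assumptions, not values from the patent.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in metres between two latitude/longitude points
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_source_portion(remote_tag, source_tags, max_distance_m=50.0):
    # pick the source portion whose time window overlaps the remote clip and
    # whose recording location is within max_distance_m (i.e. the same stage
    # or space at a multi-stage venue); return None if nothing matches
    for s in source_tags:
        overlaps = s["start"] <= remote_tag["end"] and remote_tag["start"] <= s["end"]
        near = haversine_m(remote_tag["lat"], remote_tag["lon"],
                           s["lat"], s["lon"]) <= max_distance_m
        if overlaps and near:
            return s
    return None
```

With a tight enough threshold, two simultaneous performances a few hundred metres apart resolve to different source recordings, which is exactly the multi-stage festival case described above.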

When the application 162 of the user device 22 communicates the identifying details of the tag 178 of the low-quality remote content A 174 to the server, the higher-quality source content A 182 is identified and sent to the user device. The higher-quality source content A 182 is then synchronized with the remote content B 172.

In an embodiment, a certain amount of associated metadata or tags may be generated, automatically and manually, when clean audio (i.e., source content) is received from a club/sponsor, a music or soundtrack producer, a broadcast soundtrack, etc. The associated metadata or markers may include information such as start and end times, geographic location, place name, originator, event, venue, DJ(s), performer(s), subject matter, music type, occasion, etc. Since the source content is usually recorded by a music or audio track producer, an event organizer, or the like, it has a high, studio-like quality. Remote content recorded by a user, in contrast, is typically recorded at a distance from the speakers broadcasting the recorded or live content. Thus, all external and internal background ambient noise present at the live event performance is also recorded by the user in the remote content.

When a user uploads remote content (i.e., video, audio, and/or fingerprint data associated with the audio) to the server, a certain amount of associated metadata may also be present in the remote content, generated and embedded by the application running on the recording device of the user device. Some of the associated metadata and indicia associated with the user's remote content may be automatically generated, such as start time, end time, clip length (used to obtain the end time), geographic location, time zone, and the like. Additionally, some of the associated metadata or tags may be manually generated by the user, such as event name, music type, and the like. Further associated metadata may be calculated or obtained from the existing automatically generated metadata; for example, when the geographic location is known, the event and venue may be obtained by matching against known data. In one embodiment, the user's manually generated metadata (e.g., DJ, genre, etc.) can be used to enrich the clean audio data.
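Deriving higher-level metadata from the automatically generated fields might look like the following sketch, where the small venue table stands in for whatever geographic database the server actually consults; the venue names and coordinate tolerance are invented for illustration:

```python
# Hypothetical venue table mapping coordinates to venue/event details.
KNOWN_VENUES = [
    {"name": "Club Nine", "lat": 51.5210, "lon": -0.0810,
     "event": "Saturday Residency"},
]

def enrich_metadata(auto_meta: dict, user_meta: dict) -> dict:
    """Combine automatic metadata (start time, geolocation, time zone, ...)
    with user-entered tags (event name, music type, ...), then derive the
    venue and event from the known-venue table when the location matches."""
    merged = {**auto_meta, **user_meta}
    for venue in KNOWN_VENUES:
        lat_ok = abs(auto_meta["lat"] - venue["lat"]) < 0.001
        lon_ok = abs(auto_meta["lon"] - venue["lon"]) < 0.001
        if lat_ok and lon_ok:
            # setdefault: never overwrite a tag the user entered manually.
            merged.setdefault("venue", venue["name"])
            merged.setdefault("event", venue["event"])
    return merged
```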

In an embodiment, an audio or acoustic fingerprint search of the remote content may be used to search a fingerprint database to match the source content. Multiple content databases or repositories, such as the event content database 14, the content provider 30 database, a content repository 169 that stores existing content on the user device 150, etc., may be searched to find the correct portion of source content audio to match the remote content audio. It will be appreciated that the source content may be searched in any number of storage areas, such as content stored in the content store 169 located on the user device itself, or content stored somewhere remote from the user device (e.g., the server 12, the database 14, content provider storage 30, external social media/communication sources 32, cloud storage (not shown), other remote storage, etc.). Any number of databases and stores may be searched to determine whether there is a match for a live or known event in the event content database 14, or whether there are known tracks from the content providers 30. For example, remote content recorded by a user may capture music played on a radio, a jukebox, etc. in the background (e.g., in a car while driving, in a restaurant, etc.); the track can then be identified and matched. The associated metadata from the user can be used to filter the list of potential audio segments so that the correct segment can be found more quickly, rather than searching all existing segments, many of which may not be relevant.
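A sketch of that filter-then-fingerprint flow follows. The fingerprint comparison here is a stand-in exact match on a precomputed string, not a real acoustic-fingerprint algorithm, and the repository/field names are assumptions:

```python
def find_source_audio(remote_fp, remote_meta, repositories):
    """Search multiple content repositories for source audio matching a
    remote clip. Associated metadata narrows the candidate list first,
    so the (expensive) fingerprint comparison runs on fewer segments."""
    candidates = []
    for repo in repositories:
        for seg in repo:
            # Filter: keep only segments whose metadata is consistent
            # with what the user's device recorded or the user entered.
            match_event = seg.get("event") and seg["event"] == remote_meta.get("event")
            match_genre = seg.get("genre") and seg["genre"] == remote_meta.get("genre")
            if match_event or match_genre:
                candidates.append(seg)
    for seg in candidates:          # fingerprint-match the short list
        if seg["fingerprint"] == remote_fp:
            return seg
    # Fall back to an unfiltered scan if the filter found nothing.
    for repo in repositories:
        for seg in repo:
            if seg["fingerprint"] == remote_fp:
                return seg
    return None
```

The fallback scan preserves correctness when the user's metadata is wrong or missing; the filter only changes how quickly a match is found, never whether one is found.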

Fig. 8 is a flow diagram of a method 200 according to an embodiment of the invention. The application is installed 202 on the user device, and the user records remote media content of the performance 204. The user requests and downloads the recorded source media content 206, and the application synchronizes the user's remote content with the source content 208. The remote content and the source content are then compiled 210.

In one embodiment, the remote media content is identified in the recognition module 68 and matched with stored music tracks. Remote media content, or impure audio content, may be identified and matched with source content, or clean audio, using fingerprint-type matching or the like. Acoustic fingerprint processing is used in the industry and may be applied here with embodiments of the present invention. The stored music tracks (e.g., live event performances, recorded audio tracks provided by content providers 30, etc.) may be stored in the event database 14. Remote content is identified and matched to event performances in the event database and to tracks in a content provider database. For example, media content may be classified as a live event with a live event marker and may be matched with event performance source content stored in the event database 14. If no match is found in the event database, the match may be made through a content provider or a music Application Program Interface (API) provider.

In an embodiment, once the clean source audio is compiled and embedded with the user's video, the user may post the personal remote content B 172 (capturing the user's personal memories from the user's perspective), along with the higher-quality source content A 182, to external social media, video clip capture and sharing systems, and the like. Other users of the plurality of users 28 shown in FIG. 1 may then perform several actions within the network and servers, such as viewing posts, commenting on posts, following the user who made the post, being prompted for future occurrences of similar events, and so forth.

In one embodiment, the user's event content remote video and remote audio are replaced with the user's event content remote video and the source audio of the event. The source audio is sent to the user device, and an application located on the user device synchronizes the event content remote video with the source audio. It will be appreciated that the synchronization may occur at other devices of the system, such as servers, other user devices, etc. In one embodiment, the resulting data structure may include an mp4 format file or the like on the user device, containing only the user video and the source audio. It will be appreciated that any playback file or format may be used, on any number of multimedia playback applications, to play back the synchronized source audio content along with the fan's remote video/photo content.
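Assuming both tags carry absolute start times, the synchronization step reduces to computing the offset of the user's clip within the longer source track; a sketch, with the returned specification fields being invented names:

```python
def sync_spec(video_tag: dict, source_tag: dict) -> dict:
    """Compute which slice of the source audio lines up with the user's
    remote video. Times are seconds since the epoch; the returned slice
    could then be muxed with the video into e.g. an mp4 container."""
    offset = video_tag["start"] - source_tag["start"]
    if offset < 0:
        raise ValueError("video begins before the source recording")
    duration = video_tag["end"] - video_tag["start"]
    return {
        "audio_offset_s": offset,   # where to seek in the source audio
        "duration_s": duration,     # length of the slice to extract
        "container": "mp4",         # playback format; any format would do
    }
```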

In an embodiment, other multimedia event-related content of the user, such as photos residing on the user device (or other storage associated with the user device), may be synchronized with the source audio along with the video. It will be appreciated that even some of the low-quality audio captured by the fan may be superimposed on top of the source audio. This provides an enhanced personal experience, with the fan's own audio portion heard during playback of the source audio. For example, a fan may wish to hear his or her own singing or reciting as the source audio is played back. In one embodiment, the resulting data structure may include an mp4 format file or the like with the user's video and other multimedia content on the user device along with the source audio. It will be appreciated that any playback file or format may be used, on any number of multimedia playback applications, to play back the synchronized source audio content along with the fan's remote video/photo content.

In one embodiment, user photos (e.g., photographs taken during the performance event) may be compiled with the source audio and source multimedia content. Typically, the photos are taken on the same user device on which the video and audio portions of the event are recorded, and the photos may be taken between videos. The photos or other multimedia content may also have data structures with markers (geographic location, time identification, etc.) as shown in figs. 5-7, such that during playback of the synchronized source audio content and the fan's remote video/photos, each photo is displayed for a period of time (e.g., about 1-5 seconds) at the particular time it was taken during the show. In one embodiment, the generated data structure may include an mp4 format file or the like with the user video (and other user multimedia content) on the user device along with the source audio and source multimedia content provided by the source server. It will be appreciated that any playback file or format may be used, on any number of multimedia playback applications, to play back the synchronized source audio content along with the fan's remote video/photos.
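The interleaving of photos into the synchronized playback might be built as a simple timeline, with each photo shown for a fixed few seconds at the moment it was taken. The 3-second dwell below is an arbitrary choice within the 1-5 second range mentioned, and the field names are illustrative:

```python
def build_timeline(videos, photos, dwell_s=3.0):
    """Merge video clips and photos into one ordered playback timeline.
    Each item carries the marker time at which it was captured; photos
    are displayed for `dwell_s` seconds at their capture instant."""
    items = [{"kind": "video", "at": v["start"],
              "length": v["end"] - v["start"], "src": v["src"]}
             for v in videos]
    items += [{"kind": "photo", "at": p["taken_at"],
               "length": dwell_s, "src": p["src"]}
              for p in photos]
    # Ordering by marker time places a photo taken between two clips
    # exactly where it belongs in the playback.
    return sorted(items, key=lambda i: i["at"])
```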

In an embodiment, the videos of multiple users in a group may be compiled together into a single video with the source audio. This may result in an Advanced Audio Coding (AAC) file, an mp4 video format file, etc., containing video and other content, such as audio and photos from multiple user devices, along with the source audio. The video/photo segments may be selected randomly or from a group of users having some link between them, i.e., fans who have indicated, within an organized group in the system network, that they agree to share content with one another. It will be appreciated that any playback file or format may be used, on any number of multimedia playback applications, to play back the synchronized source audio content along with the fans' remote video/photo content. It will be understood that the remote content may be recorded by the user, and the user may be a member of the audience, a performer, a speaker giving the performance, or the like.

In an embodiment, other content from a content provider (e.g., a sponsor, branding material from a sponsor, etc.) may be compiled with the user's content and the source content audio as a single video. This can be useful when it is necessary or convenient to fill any gaps in the fan's time-identified video/photo sequence over the entire length of the source audio track of the event performance, so that the video portion remains synchronized with the source audio portion throughout.

Embodiments of this invention have been described herein, including the best mode known to the inventors for carrying out the invention. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.
