System for providing multiple virtual reality views

Document No.: 1440073 | Publication date: 2020-02-14

Note: This technology, "System for providing multiple virtual reality views", was designed and created by 姜宗贤 and 尹硕铉 on 2018-03-22. Abstract: An apparatus, method, and computer-readable medium for a virtual reality (VR) client, and a content server for real-time virtual reality events, are provided. The content server receives a feed from each 360° camera at a venue. The content server sends a primary first 360° video in a first stream and a converted secondary non-360° video in a second stream to the VR client. The VR client determines the relative position of each non-360° video in order to display a rendered thumbnail of that non-360° video in the first 360° video. The VR client sends a selection of a non-360° video to the content server. The content server then sends the second 360° video related to the selection in the first stream and the converted first non-360° video in the second stream.

1. An electronic device, comprising:

a transceiver;

a processor coupled to the transceiver; and

a memory coupled to the processor, the memory storing instructions that, when executed, cause the processor to:

receive, from another electronic device via the transceiver, a first stream of a first 360° video taken at a first location at a venue and a second stream of a non-360° video extracted from a second 360° video taken at a second location at the venue,

render a thumbnail of the non-360° video at a selected location on a frame of the first 360° video, wherein the selected location corresponds to the second location relative to the first location,

output a portion of the first 360° video with the rendered thumbnail to a display,

receive an input selection of the rendered thumbnail through a user interface,

send the input selection of the non-360° video to the other electronic device, and

receive a third stream of the second 360° video from the other electronic device and discontinue receiving the first stream of the first 360° video.

2. The electronic device of claim 1, wherein the second stream comprises periodically updated two-dimensional (2D) snapshots.

3. The electronic device of claim 2, wherein the 2D snapshots correspond to recommended times of focus from the first 360° video and the second 360° video.

4. The electronic device of claim 1, wherein a first size of the first stream is greater than a second size of the second stream.

5. The electronic device of claim 1, wherein a first frame rate of the first stream is greater than a second frame rate of the second stream.

6. The electronic device of claim 1, wherein the instructions further cause the processor to:

receive, from the other electronic device, information indicating a recommendation of the non-360° video; and

based on the received information, distinguish the thumbnail of the non-360° video from thumbnails of other non-360° videos.

7. The electronic device of claim 1, wherein the instructions further cause the processor to:

receive, from the other electronic device, information indicating a secondary view of the non-360° video based on an object in a primary view of the first 360° video; and

adjust the thumbnail based on the secondary view.

8. The electronic device of claim 1, further comprising:

the display coupled to the processor; and

a frame comprising a structure to be worn on the head of a user, the frame mounting the display in a position oriented towards the eyes of the user when worn.

9. The electronic device of claim 1, wherein the portion of the first 360° video with the rendered thumbnail is output to an external display via the transceiver.

10. A method for operating an electronic device, the method comprising:

receiving, from another electronic device, a first stream of a first 360° video taken at a first location at a venue and a second stream of a non-360° video extracted from a second 360° video taken at a second location at the venue;

rendering a thumbnail of the non-360° video at a selected location on a frame of the first 360° video, wherein the selected location corresponds to the second location relative to the first location;

displaying a portion of the first 360° video with the rendered thumbnail;

receiving an input selection of the rendered thumbnail through a user interface;

sending the input selection of the non-360° video to the other electronic device; and

receiving a third stream of the second 360° video from the other electronic device and discontinuing receiving the first stream of the first 360° video.

11. The method of claim 10, wherein the second stream comprises periodically updated two-dimensional (2D) snapshots.

12. The method of claim 11, wherein the 2D snapshots correspond to recommended times of focus from the first 360° video and the second 360° video.

13. The method of claim 10, wherein a first size of the first stream is greater than a second size of the second stream.

14. The method of claim 10, wherein a first frame rate of the first stream is greater than a second frame rate of the second stream.

15. The method of claim 10, further comprising:

receiving, from the other electronic device, information indicating a recommendation of the non-360° video; and

based on the received information, distinguishing the thumbnail of the non-360° video from thumbnails of other non-360° videos.

16. The method of claim 10, further comprising:

receiving, from the other electronic device, information indicating a secondary view of the non-360° video based on an object in a primary view of the first 360° video; and

adjusting the thumbnail based on the secondary view.

17. A computer-readable medium embodying a computer program, the computer program comprising computer-readable program code that, when executed, causes at least one processor to:

receive, via a transceiver from another electronic device, a first stream of a first 360° video taken at a first location at a venue and a second stream of a non-360° video extracted from a second 360° video taken at a second location at the venue;

render a thumbnail of the non-360° video at a selected location on a frame of the first 360° video, wherein the selected location corresponds to the second location relative to the first location;

output a portion of the first 360° video with the rendered thumbnail to a display;

receive an input selection of the rendered thumbnail through a user interface;

send the input selection of the non-360° video to the other electronic device; and

receive a third stream of the second 360° video from the other electronic device and discontinue receiving the first stream of the first 360° video.
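
For illustration only, the selection and stream-switch exchange recited in claims 1, 10, and 17 can be sketched as follows. This is a minimal sketch with hypothetical message and field names; it is not the claimed implementation.

```python
# Hypothetical sketch of the selection / stream-switch exchange between the
# VR client and the content server. Names and fields are illustrative only.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class StreamOffer:
    primary_camera_id: str           # 360° video currently streamed in full
    secondary_camera_ids: List[str]  # cameras offered as non-360° thumbnails


@dataclass
class ThumbnailSelection:
    camera_id: str                   # thumbnail the user selected


def handle_selection(server_state: Dict, msg: ThumbnailSelection) -> StreamOffer:
    """Server side: promote the selected camera to the primary 360° stream and
    demote every other camera (including the previous primary) to a secondary,
    non-360° thumbnail stream."""
    server_state["primary"] = msg.camera_id
    server_state["secondary"] = [
        cid for cid in server_state["all_cameras"] if cid != msg.camera_id
    ]
    return StreamOffer(server_state["primary"], server_state["secondary"])


# Example: the user selects camera "cam_3" while "cam_1" is the primary.
state = {"all_cameras": ["cam_1", "cam_2", "cam_3"], "primary": "cam_1", "secondary": ["cam_2", "cam_3"]}
print(handle_selection(state, ThumbnailSelection("cam_3")))
```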

Technical Field

The present disclosure generally relates to systems for providing content. More particularly, the present disclosure relates to a system for providing content using a Virtual Reality (VR) device.

Background

Most events, such as concerts, musicals, stage performances, and sports (e.g., football, baseball, hockey, and basketball), are recorded with multiple cameras. A production team composes a broadcast stream in real time from the individual camera feeds that are available, and the user views the composed stream. A system in which the user, rather than the production team, controls the flow of content is needed.

Disclosure of Invention

Drawings

For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:

FIG. 1A, FIG. 1B, and FIG. 1C illustrate exemplary VR clients in which various embodiments of the present disclosure may be implemented;

FIG. 2 illustrates an example network configuration according to an embodiment of the present disclosure;

FIG. 3 illustrates an exemplary event at a venue having multiple cameras according to an embodiment of the present disclosure;

FIG. 4 illustrates an exemplary architecture of a content server for receiving feeds from multiple cameras and providing output to a VR client, in accordance with an embodiment of the present disclosure;

FIG. 5 illustrates an exemplary broadcaster-selectable view of 360° video from a first camera, wherein a plurality of non-360° videos are positioned according to the locations of their respective cameras, in accordance with an embodiment of the present disclosure;

FIG. 6 illustrates an exemplary experience with a recommendation preview in accordance with an embodiment of the present disclosure;

FIG. 7 illustrates an exemplary default perspective for each camera in accordance with an embodiment of the present disclosure;

FIG. 8 illustrates an example of focusing a default perspective of a secondary camera based on a focus of a primary camera, in accordance with an embodiment of the present disclosure;

FIG. 9 illustrates exemplary processing of a real-time virtual reality event by a VR client, according to an embodiment of the present disclosure; and

FIG. 10 illustrates exemplary processing of a real-time virtual reality event by a content server, according to an embodiment of the present disclosure.

Best mode for carrying out the invention

In a first embodiment, a VR client embodiment, an electronic device for real-time virtual reality events is provided. The electronic device includes a display, a transceiver, a processor operatively coupled to the transceiver, and a memory operatively coupled to the processor. The memory stores instructions that, when executed, cause the processor to receive, from a content server via the transceiver, a first stream of a first 360° video taken at a first location at a venue and a second stream of a second non-360° video extracted from a second 360° video taken at a second location at the venue. The instructions also cause the processor to: render a thumbnail of the second non-360° video at a selected location on a frame of the first 360° video, wherein the selected location corresponds to the second location relative to the first location; display a portion of the first 360° video with the rendered thumbnail on the display or an external display; receive a selection of the rendered thumbnail from a user; send the selection of the second non-360° video to the content server; and receive a third stream of the second 360° video from the content server and discontinue receiving the first stream of the first 360° video.
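
The placement of the thumbnail at a location corresponding to the second camera location relative to the first can be illustrated with a short sketch. The shared venue coordinate system, the coordinate conventions, and the equirectangular frame size below are assumptions made for the example, not details taken from this disclosure.

```python
# Hypothetical sketch: map the direction from the primary camera to a
# secondary camera onto a pixel of the primary camera's equirectangular frame,
# so the secondary camera's thumbnail can be rendered in that direction.
import math


def thumbnail_pixel(primary_pos, secondary_pos, frame_w, frame_h):
    """Return (x, y) pixel coordinates on the primary camera's equirectangular
    frame pointing toward the secondary camera. Positions are (x, y, z) in a
    shared venue coordinate system with +z up."""
    dx = secondary_pos[0] - primary_pos[0]
    dy = secondary_pos[1] - primary_pos[1]
    dz = secondary_pos[2] - primary_pos[2]
    yaw = math.atan2(dy, dx)                    # [-pi, pi], around the +z axis
    pitch = math.atan2(dz, math.hypot(dx, dy))  # [-pi/2, pi/2]
    # Equirectangular mapping: yaw -> horizontal axis, pitch -> vertical axis.
    x = (yaw + math.pi) / (2 * math.pi) * frame_w
    y = (math.pi / 2 - pitch) / math.pi * frame_h
    return int(x) % frame_w, int(y)


# Example: a secondary camera 20 m away and 5 m above the primary camera.
print(thumbnail_pixel((0, 0, 0), (20, 0, 5), 3840, 1920))
```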

In a second embodiment, a method for operating an electronic device for real-time virtual reality events is provided. The method includes receiving, from a content server, a first stream of a first 360° video taken at a first location at a venue and a second stream of a second non-360° video extracted from a second 360° video taken at a second location at the venue. The method further includes: rendering a thumbnail of the second non-360° video at a selected location on a frame of the first 360° video, wherein the selected location corresponds to the second location relative to the first location; displaying a portion of the first 360° video with the rendered thumbnail; receiving a selection of the rendered thumbnail from a user; sending the selection of the second non-360° video to the content server; and receiving a third stream of the second 360° video from the content server and discontinuing receiving the first stream of the first 360° video.

In a third embodiment, a non-transitory medium embodying a computer program for real-time virtual reality events is provided. The program code, when executed by at least one processor, causes the processor to receive, from a content server, a first stream of a first 360° video taken at a first location at a venue and a second stream of a second non-360° video extracted from a second 360° video taken at a second location at the venue. The program code, when executed by the at least one processor, further causes the processor to: render a thumbnail of the second non-360° video at a selected location on a frame of the first 360° video, wherein the selected location corresponds to the second location relative to the first location; display the first 360° video with the rendered thumbnail; receive a selection of the rendered thumbnail from a user; send the selection of the second non-360° video to the content server; and receive a third stream of the second 360° video from the content server and discontinue receiving the first stream of the first 360° video.

In a fourth embodiment, a system for real-time virtual reality events is provided. The system includes a network interface, at least one processor coupled to the network interface, and at least one storage device operatively coupled to the processor. The storage device stores instructions that, when executed, cause the processor to receive, via the network interface, a first feed, from a first camera, of a first 360° video taken at a first location at a venue, and a second feed, from a second camera, of a second 360° video taken at a second location at the venue. The instructions also cause the processor to: extract a second non-360° video from the second 360° video; transmit a first stream of the first 360° video and a second stream of the extracted second non-360° video to an external electronic device via the network interface; receive a selection of the second non-360° video from the external electronic device; in response to the selection, send a third stream of the second 360° video to the external electronic device; and discontinue sending the first stream of the first 360° video.
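
The extraction of a non-360° video from a 360° feed can be illustrated as follows. This sketch assumes equirectangular 360° frames and uses a straight crop around a chosen view direction; an actual content server would more likely apply a proper rectilinear (gnomonic) reprojection.

```python
# Hypothetical sketch of the server-side extraction step: crop a non-360°
# viewport out of an equirectangular 360° frame around a chosen view direction.
import numpy as np


def extract_viewport(equirect: np.ndarray, yaw_deg: float, pitch_deg: float,
                     hfov_deg: float = 90.0, vfov_deg: float = 60.0) -> np.ndarray:
    """Crop an hfov x vfov window centered on (yaw, pitch) from an
    equirectangular frame of shape (H, W, 3)."""
    h, w = equirect.shape[:2]
    cx = int((yaw_deg % 360.0) / 360.0 * w)          # center column from yaw
    cy = int((90.0 - pitch_deg) / 180.0 * h)         # center row from pitch
    half_w = int(hfov_deg / 360.0 * w / 2)
    half_h = int(vfov_deg / 180.0 * h / 2)
    cols = [(cx + dx) % w for dx in range(-half_w, half_w)]          # wrap around yaw
    rows = np.clip(np.arange(cy - half_h, cy + half_h), 0, h - 1)    # clamp pitch
    return equirect[np.ix_(rows, cols)]


# Example: extract a 90° x 60° viewport from a blank 3840x1920 frame.
frame = np.zeros((1920, 3840, 3), dtype=np.uint8)
print(extract_viewport(frame, yaw_deg=45.0, pitch_deg=0.0).shape)
```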

In an exemplary embodiment, the second stream includes periodically updated two-dimensional (2D) snapshots.

In an exemplary embodiment, the 2D snapshots correspond to recommended times of focus from the first 360° video and the second 360° video.

In an exemplary embodiment, the first size of the first stream is greater than the second size of the second stream.

In an exemplary embodiment, the first frame rate of the first stream is greater than the second frame rate of the second stream.
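
Taken together, these exemplary embodiments imply asymmetric stream profiles: the primary 360° stream is larger and faster than the secondary streams, which may be reduced to periodically updated 2D snapshots. The following sketch shows one hypothetical way to express such profiles; the resolutions, frame rates, and bitrates are illustrative only and are not taken from this disclosure.

```python
# Hypothetical stream profiles: a full-size, high-frame-rate primary 360°
# stream versus a small, low-frame-rate secondary (thumbnail/snapshot) stream.
PRIMARY_PROFILE = {
    "projection": "equirectangular_360",
    "resolution": (3840, 1920),
    "frame_rate_fps": 60,
    "bitrate_kbps": 25_000,
}

SECONDARY_PROFILE = {
    "projection": "rectilinear_viewport",
    "resolution": (640, 360),
    "frame_rate_fps": 1,        # e.g. one periodically updated 2D snapshot per second
    "bitrate_kbps": 300,
}


def profile_for(stream_role: str) -> dict:
    """Return the encoding profile for a stream, by role."""
    return PRIMARY_PROFILE if stream_role == "primary" else SECONDARY_PROFILE
```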

In an exemplary embodiment, the storage device further includes instructions that cause the processor to: identify an object of interest in the first 360° video; determine the non-360° video as a recommended view; and provide information indicating the recommendation of the non-360° video to the external electronic device to distinguish the thumbnail of the non-360° video from thumbnails of other displayed non-360° videos.
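
One hypothetical way to realize this recommendation step is sketched below: given the venue position of an object of interest detected in the first 360° video, the secondary camera nearest to that object is flagged so the client can distinguish its thumbnail. The position data and the nearest-camera heuristic are assumptions made for illustration.

```python
# Hypothetical recommendation step: pick the secondary camera closest to the
# object of interest so its thumbnail can be highlighted on the client.
import math


def recommend_camera(object_pos, secondary_cameras):
    """object_pos: (x, y) venue coordinates of the object of interest.
    secondary_cameras: {camera_id: (x, y)} positions of the secondary cameras.
    Returns the camera_id whose thumbnail should be marked as recommended."""
    return min(
        secondary_cameras,
        key=lambda cid: math.dist(object_pos, secondary_cameras[cid]),
    )


# Example: the object of interest at (35, 10) is nearest to "cam_3".
cams = {"cam_2": (10.0, 0.0), "cam_3": (40.0, 15.0)}
print(recommend_camera((35.0, 10.0), cams))
```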

In an exemplary embodiment, the storage device further includes instructions that cause the processor to: identify a focus of a primary view of the first 360° video; determine a secondary view of the second non-360° video based on the primary view; and provide information to the external electronic device to adjust the thumbnail based on the determined secondary view.
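
A minimal sketch of deriving the secondary view from the primary focus is shown below. It assumes the primary camera's focus can be expressed as a point in venue coordinates (here, from a yaw and a focus distance); the secondary viewport is then re-centered toward that point. The coordinate conventions are assumptions for illustration.

```python
# Hypothetical sketch: re-center the secondary camera's viewport on the point
# the primary camera is currently focused on.
import math


def focus_point(primary_pos, primary_yaw_deg, distance_m):
    """Point the primary camera is focused on, assuming a known focus distance."""
    yaw = math.radians(primary_yaw_deg)
    return (primary_pos[0] + distance_m * math.cos(yaw),
            primary_pos[1] + distance_m * math.sin(yaw))


def secondary_yaw_deg(secondary_pos, target):
    """Yaw the secondary camera's viewport should use to point at the target."""
    return math.degrees(math.atan2(target[1] - secondary_pos[1],
                                   target[0] - secondary_pos[0]))


# Example: primary at the origin looking 30° with a 25 m focus distance;
# the secondary camera at (20, -5) turns its viewport toward the same point.
target = focus_point((0.0, 0.0), primary_yaw_deg=30.0, distance_m=25.0)
print(round(secondary_yaw_deg((20.0, -5.0), target), 1))
```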

In a fifth embodiment, a method for real-time virtual reality events is provided. The method includes receiving a first feed, from a first camera, of a first 360° video taken at a first location at a venue, and a second feed, from a second camera, of a second 360° video taken at a second location at the venue. The method further includes: extracting a second non-360° video from the second 360° video; transmitting a first stream of the first 360° video and a second stream of the extracted second non-360° video to an external electronic device; receiving a selection of the second non-360° video from the external electronic device; in response to the selection, sending a third stream of the second 360° video to the external electronic device; and discontinuing sending the first stream of the first 360° video.

In an exemplary embodiment, the second stream includes periodically updated two-dimensional (2D) snapshots.

In an exemplary embodiment, the 2D snapshots correspond to recommended times of focus from the first 360° video and the second 360° video.

In an exemplary embodiment, the first size of the first stream is greater than the second size of the second stream.

In an exemplary embodiment, the first frame rate of the first stream is greater than the second frame rate of the second stream.

In an exemplary embodiment, the method further includes: identifying an object of interest in the first 360° video; determining the non-360° video as a recommended view; and providing information indicating the recommendation of the non-360° video to the external electronic device to distinguish the thumbnail of the non-360° video from thumbnails of other displayed non-360° videos.

In an exemplary embodiment, the method further includes: identifying a focus of a primary view of the first 360° video; determining a secondary view of the non-360° video based on the primary view; and providing information to the external electronic device to adjust the thumbnail based on the determined secondary view.

In a sixth embodiment, a non-transitory medium embodying a computer program for real-time virtual reality events is provided. The program code, when executed, causes the processor to receive a first feed, from a first camera, of a first 360° video taken at a first location at a venue, and a second feed, from a second camera, of a second 360° video taken at a second location at the venue. The program code, when executed, further causes the processor to: extract a second non-360° video from the second 360° video; transmit a first stream of the first 360° video and a second stream of the extracted second non-360° video to an external electronic device; receive a selection of the second non-360° video from the external electronic device; in response to the selection, send a third stream of the second 360° video to the external electronic device; and discontinue sending the first stream of the first 360° video.

Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
