Context-aware control of user interfaces displaying video and related user text

Document No.: 1895033 | Publication date: 2021-11-26

Note: This technology, "Context-aware control of user interfaces displaying video and related user text," was created by D. Yoon, S. S. Fels, and M. Yarmand on 2020-03-30. Its main content is summarized as follows: The disclosed technology provides a computing device that displays video content within a comment portion of a user interface. When a user invokes a video display by selecting a link within a comment portion of the user interface, the system may control the navigation position of the user interface to simultaneously display the video and the selected comment within the comment portion. In one illustrative example, the system may display a user interface having a video display area and a comment portion. The user interface may be positioned to display the comment portion within a viewing area of the display device, and such a position may place the video display area outside of the viewing area. In such a scenario, when the system receives user input indicating a selection of a comment displayed within the comment portion, the system may generate a rendering of the video content for display within the comment portion.

1. A method to be performed by a data processing system, the method comprising:

causing display of a user interface, the user interface including a video display area and a comment portion, wherein a position of the user interface displays the comment portion within a viewing area of a display device, the position of the user interface placing the video display area, which displays a rendering of content, outside of the viewing area;

receiving a user input indicating a selection of at least a portion of a comment displayed within the comment portion; and

in response to receiving the user input, generating a second rendering of the content for display within the comment portion, wherein the user interface is configured to maintain the position of the user interface such that the comment portion remains within the viewing area of the display device while the second rendering of the content is displayed within the viewing area of the display device.

2. The method of claim 1, wherein the method further comprises: controlling the position of the user interface to display the comment concurrently with the second rendering of the content.

3. The method of claim 1, wherein the method further comprises: controlling the position of the user interface to display the comment and related comments concurrently with the second rendering of the content.

4. The method of claim 1, wherein the method further comprises: determining that a media type associated with the comment includes a still image of the content, and wherein the second rendering includes the still image of the content displayed within the viewing area of the display device in response to the user input.

5. The method of claim 1, wherein the method further comprises:

determining that the media type associated with the comment includes audio data of the content, and wherein the second rendering includes a graphical user interface indicating playback of the audio data; and

causing an audio device to generate an audio output of the audio data.

6. The method of claim 1, wherein metadata associated with the comment defines a time interval of the video data defining the content, and wherein the second rendering comprises a display of the time interval of the content within the viewing area of the display device in response to the user input.

7. The method of claim 1, wherein the method further comprises:

analyzing the user input based on data received from the input device to determine an input type;

in response to determining that the input type is a first input type, displaying the second rendering of the content within the comment portion while maintaining a position of the user interface; and

in response to determining that the input type is a second type, displaying a customized user interface that simultaneously displays a selected comment and at least one of the rendering of the content or the second rendering of the content.

8. The method of claim 7, wherein the first input type comprises: a cursor hovering over at least a portion of the comment.

9. The method of claim 7, wherein the second input type comprises: a cursor hovering over at least a portion of the comment; and user actuation of an input device, the user actuation indicating selection of the comment.

10. A method to be performed by a data processing system, the method comprising:

causing display of a user interface, the user interface including a video display area and a comment portion, wherein a position of the user interface displays the comment portion within a viewing area of a display device, the position of the user interface placing the video display area, which displays a rendering of content, outside of the viewing area;

receiving a user input indicating a selection of a comment displayed within the comment portion; and

in response to receiving the user input, generating a customized user interface that renders the video content concurrently with display of the comment, wherein the customized user interface is configured to have a threshold level of overlap between the rendering of the video content and the comment.

11. The method of claim 10, wherein the method further comprises: controlling the position of the user interface to display the comment concurrently with the second rendering of the content.

12. The method of claim 10, wherein the method further comprises: controlling the position of the user interface to display the comment and related comments concurrently with the second rendering of the content.

13. The method of claim 10, wherein the method further comprises: determining that a media type associated with the comment includes a still image of the content, and wherein the second rendering includes the still image of the content displayed within the viewing area of the display device in response to the user input.

14. The method of claim 10, wherein the method further comprises:

determining that the media type associated with the comment includes audio data of the content, and wherein the second rendering includes a graphical user interface indicating playback of the audio data; and

causing an audio device to generate an audio output of the audio data.

15. The method of claim 10, wherein metadata associated with the comment defines a time interval of the video data defining the content, and wherein the second rendering comprises a display of the time interval of the content within the viewing area of the display device in response to the user input.

Background

On many social, educational, and entertainment platforms, commenting on video has become popular and ubiquitous. Many video commenters reference video content to contextualize and specify their messages. A commenter may refer to a visual entity or a particular sound clip in a number of ways. For example, a user may reference a person's voice or a quote at a particular time, provide a timestamp, etc. In some systems, a user may include a link in their comment that allows other users to view a video beginning at a particular point in time.

While existing systems provide a platform for users to post comments, the user interfaces currently in use are simplistic from both the commenter's perspective and the viewer's perspective and do not provide tools for optimizing the user experience. For example, in some existing systems, when a viewer selects a video link associated with a comment, the system often scrolls the user interface away from the comment portion in order to display the selected video. This behavior can lead to a number of inefficiencies and complications. In particular, when interacting with existing systems, users cannot maintain a view of a comment once they select it to view a video, and therefore cannot continuously view comments of interest while watching the related video. Requiring the user to scroll back and forth between the comment portion and the video portion of the user interface creates many inefficiencies, which can be especially severe when there are hundreds or thousands of comments. This type of navigation is highly inefficient with respect to both user productivity and computing resources.

The disclosure herein is presented in view of these and other technical challenges.

Disclosure of Invention

The techniques disclosed herein provide improvements over existing systems by enabling a computing device to display video content within a commentary portion of a user interface. When a user invokes the display of a video by selecting a link to the video within a comment portion of the user interface, the system may control the navigation location of the user interface to simultaneously display the selected comment within the comment portion and also display the video within the comment portion.

In one illustrative example, the system may display a user interface having a video display area and a comment portion. In some scenarios, the user interface may be positioned to display the comment portion within a viewing area of the display device, and such a position may place the video display area outside of the viewing area of the display screen; e.g., the scrolling position of a web page may place the video off-screen. In such a scenario, when the system receives user input indicating a selection of a comment displayed within the comment portion, the system may generate a rendering of the video content for display within the comment portion.

The techniques described herein may result in more efficient use of a computing system. In particular, by controlling aspects of the user interface to ensure that the video and the selected comment are displayed simultaneously, the system can improve user productivity and facilitate more efficient use of computing resources. The system presented herein eliminates the need for a user to manually navigate the user interface to view a particular comment during video playback. Eliminating or mitigating this manual navigation process results in more efficient use of computing resources, e.g., memory usage, network usage, and processing resources, as it can reduce the time a person needs to spend browsing large pages. Additionally, the system can enhance user engagement by mitigating the need for cumbersome manual navigation of pages containing large numbers of comments.

Other features and technical advantages in addition to those expressly described above will also be apparent from reading the following detailed description and viewing the associated drawings. This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key or critical features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. For example, the term "technique" may refer to system(s), method(s), computer-readable instructions, module(s), algorithm(s), hardware logic, and/or operation(s) enabled by the context described above and the entire document.

Drawings

Specific embodiments are described with reference to the accompanying drawings. In the drawings, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears. The same reference numbers in different drawings identify similar or identical items. References to individual items in a plurality of items may use a reference numeral having a letter in a letter sequence to refer to each individual item. General references to items may use specific reference numerals without a letter sequence.

FIG. 1 shows an example process involving a computing device that may control navigation of a user interface displaying a video display area and a comment portion.

FIG. 2 illustrates an example process involving a computing device that may control navigation of a user interface to display a selected comment concurrently with video content associated with the comment.

FIG. 3 illustrates an example process involving a computing device that may control navigation of a user interface to display a selected comment and related comments concurrently with video content associated with those comments.

FIG. 4 illustrates an example process involving a computing device that may generate a reconfigured user interface to simultaneously display a comment and video content associated with the comment.

FIG. 5 illustrates an example process involving a computing device that may transition from a reconfigured user interface back to an original user interface.

FIG. 6A illustrates an example process involving a computing device that may generate a customized user interface to simultaneously display a comment and video content associated with the comment.

FIG. 6B illustrates an example of a customized user interface for simultaneously displaying a comment and video content associated with the comment.

FIG. 7 illustrates an example process involving a computing device that may generate different types of user interfaces depending on the type of user input.

FIG. 8 illustrates an example process involving a computing device that may generate a rendering of an audio user interface element concurrently with an associated comment.

FIG. 9 illustrates an example process involving a computing device that may control navigation of a user interface to display still images of video content concurrently with related comments.

FIG. 10 is a flow diagram illustrating aspects of a routine for computationally efficient display of a video within a comment portion of a user interface.

FIG. 11 is a computing system diagram showing aspects of an illustrative operating environment for the techniques disclosed herein.

FIG. 12 is a computing architecture diagram illustrating aspects of the configuration and operation of a computing device that may implement aspects of the technology disclosed herein.

Detailed Description

FIG. 1 illustrates an example process involving a computing device 100 that may control and enhance the navigation position of a user interface displaying a video display area and a comment portion. The example shown in FIG. 1 shows two stages of a process for controlling the navigation position of the user interface 130 to render video content in the video display area 140 and the comment portion 150. The first stage of the process is shown on the left side of the figure, and the second stage is shown on the right side. The figure is arranged to illustrate how a portion of the user interface 130 may be displayed to a user. In particular, the portion of the user interface 130 within the viewing area 160 of the display device 170 (e.g., a screen) may be viewed by a user of the device, while the portion of the user interface 130 that is not drawn within the viewing area 160 is "off-screen" and cannot be viewed by the user.

As shown in the first stage of the process, computing device 100 may display user interface 130 including video display area 140 and comment portion 150. As shown, the initial navigational position of the user interface 130 displays the comment portion 150 within the viewing area 160 of the display device 170, as well as the rendering 116 of the video content within the video display area 140. The position of the user interface 130 places the video display area 140 outside of the viewing area 160. The video display area 140 and the comment portion 150 are distinct from one another, and because the video display area 140 is off-screen, the rendering 116 of the video content cannot be viewed by a user of the computing device 100. As can be appreciated, the position of the user interface 130 can be manipulated by the user scrolling the user interface up or down to bring portions of the user interface 130 into the viewing area 160 of the display device 170. The comment portion 150 is also referred to herein as a "text field 150" or "text portion 150," and may include any portion of the user interface that contains text. For example, the comment portion 150 may be part of a word processing document, a OneNote file, a spreadsheet, a blog, or any other form of media or data that may cause a computer to render text in conjunction with the rendering of a video.

As also shown in FIG. 1, in the first stage of the process, computing device 100 may receive user input 161 indicating a selection of comment 162 displayed within comment portion 150. User input 161 may be based on user interaction with a touch screen or any other mechanism suitable for selecting comment 162. In some configurations, the user input 161 may be a voice command received by a microphone of the computing device 100. In other configurations, user input 161 may be received by a camera or imaging sensor of the computing device, allowing the user to provide a gesture to select comment 162; for example, a user may point to a particular comment within the user interface. It may also be appreciated that the selection of the comment 162 may include a selection of at least a portion of the comment. Thus, if a user selects a word or any other symbol or character associated with a displayed video, such user input may cause the computing device 100 to perform one or more actions (e.g., display of a video, rendering of an audio output, or display of other types of media such as still images or other graphical elements) while controlling the navigation position of the user interface to continuously display the selected word, symbol, or character.

As shown in the second stage of the process, in response to receiving the user input, the computing device 100 may generate a second rendering 117 of the content for display within the comment portion 150, wherein the user interface 130 is configured to maintain a position of the user interface 130 to position the comment portion 150 within the viewing area 160 of the display device 170 while the second rendering 117 is displayed within the viewing area 160 of the display device 170.
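The behavior described above, displaying an inline rendering while holding the comment portion in place, can be sketched as a scroll-compensation calculation. This is purely an illustrative sketch and not part of the claimed disclosure; the interface and function names (ScrollState, compensatedScrollTop) are assumptions introduced here.

```typescript
// Sketch: keep the selected comment stationary in the viewport while an
// inline video rendering is inserted into the comment portion.

interface ScrollState {
  scrollTop: number;  // current scroll offset of the comment portion
  commentTop: number; // y-offset of the selected comment in document coordinates
}

// If the inline player (of height playerHeight) is inserted *above* the
// selected comment, the comment would visually jump down by playerHeight.
// Compensating scrollTop by the same amount keeps it stationary on screen.
function compensatedScrollTop(
  state: ScrollState,
  insertTop: number, // y-offset where the player is inserted
  playerHeight: number
): number {
  return insertTop <= state.commentTop
    ? state.scrollTop + playerHeight
    : state.scrollTop; // inserted below the comment: no visual shift occurs
}
```

Applying the compensated offset in the same frame as the insertion avoids any visible jump of the selected comment.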

The examples provided herein are for illustrative purposes and should not be construed as limiting. It may be appreciated that the techniques disclosed herein may be implemented with any type of computing device. For example, although a mobile device is utilized in some of the examples depicted herein, the disclosed techniques may be implemented using any computing device in communication with a display device (e.g., a monitor) having a display area. For illustrative purposes, the display area of a display device is considered to be the surface of the display device that may generate light for displaying the rendered image.

FIG. 2 illustrates an example process involving a computing device 100 that may control the navigation position of a user interface 130 to display a selected comment 162 concurrently with the second rendering 117 of the video content associated with the comment 162 in the viewing area 160. In this example, in the second stage of the process, the computing device 100 may automatically scroll or otherwise position the user interface 130 so that the rendering 117 of the video content is displayed without obscuring or blocking the display of the selected comment 162.

In the example shown in FIG. 1, the user interface may be positioned such that the second rendering 117 of the video content within the comment portion may block at least a portion of the selected comment 162. In the example shown in FIG. 2, the system controls the navigational position of the user interface 130 in response to user input so that the comment is displayed in its entirety simultaneously with the second rendering 117 in the viewing area 160. The controlled navigation may also select a location for the second rendering 117 that mitigates overlap with any comment in the comment portion. The system may analyze the coordinates of the comments and the coordinates of the rendered video 117 to minimize the amount of overlap in the viewing area 160. The video rendering may also be sized to minimize the amount of overlap between the video rendering and one or more comments, and may be reduced to a predetermined size, which may depend on one or more user settings or other factors, such as the resolution of the video data.

In some configurations, the computing device 100 may scroll the user interface 130 in any direction (e.g., up, down, left, or right) to enable the computing device to simultaneously display the rendering 117 of the video content such that it does not obscure or block the display of the selected comment 162. Any suitable technique for detecting the location of a user interface element (e.g., the rendering of the video content 117 and/or the comment 162) may be used to implement aspects of the present disclosure. For example, the coordinates of the comment 162 may be analyzed by the computing device 100, which may then determine coordinates for the rendering of the video content 117 based on the coordinates of the comment 162. The computing device 100 may then position the rendering of the video content 117 such that its coordinates have no more than a threshold level of overlap relative to the coordinates of the comment 162. The threshold level of overlap may range from no overlap to any predetermined level of overlap. Additionally, the computing device 100 may determine the coordinates of the rendering of the video content 117 such that the video content 117 is rendered within the viewing area 160 of the display device 170.
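The coordinate analysis above can be sketched as a rectangle-overlap computation followed by candidate selection. This is an illustrative sketch under assumed names (Rect, overlapArea, placeRendering), not an implementation from the disclosure; the candidate-position approach is one of several ways the threshold test could be applied.

```typescript
interface Rect { x: number; y: number; w: number; h: number; }

// Area of intersection between two axis-aligned rectangles (0 if disjoint).
function overlapArea(a: Rect, b: Rect): number {
  const dx = Math.min(a.x + a.w, b.x + b.w) - Math.max(a.x, b.x);
  const dy = Math.min(a.y + a.h, b.y + b.h) - Math.max(a.y, b.y);
  return dx > 0 && dy > 0 ? dx * dy : 0;
}

// Choose the candidate position whose overlap with the comment is smallest,
// rejecting candidates that exceed the overlap threshold or fall outside
// the viewing area. Returns null if no candidate qualifies.
function placeRendering(
  candidates: Rect[], comment: Rect, viewing: Rect, maxOverlap: number
): Rect | null {
  const inView = (r: Rect) =>
    r.x >= viewing.x && r.y >= viewing.y &&
    r.x + r.w <= viewing.x + viewing.w && r.y + r.h <= viewing.y + viewing.h;
  const ok = candidates.filter(r => inView(r) && overlapArea(r, comment) <= maxOverlap);
  if (ok.length === 0) return null;
  return ok.reduce((best, r) =>
    overlapArea(r, comment) < overlapArea(best, comment) ? r : best);
}
```

A caller could generate candidate positions around the selected comment (above, below, beside) and fall back to resizing the rendering when every candidate exceeds the threshold.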

FIG. 3 illustrates an example process involving computing device 100, which may control navigation of user interface 130 to display a selected comment 162 and related comments 163 simultaneously with the video content associated with them. This embodiment enables the computing device 100 to analyze the comments and display those that have a threshold level of relevance with respect to the rendered video content 117. In one illustrative example, the related comments may include any replies to the selected comment 162.

As shown in FIG. 3, computing device 100 may apply controlled navigation to display the comment 162 and the related comment 163. This may be accomplished by scrolling the user interface 130 to a position that allows the comment 162 and the related comment 163 to be displayed, while the rendering of the video content 117 is positioned such that it has no more than a threshold level of overlap with respect to the comment 162 and the related comment 163.

FIG. 4 illustrates an example process involving a computing device 100 that may generate a reconfigured user interface 130" to simultaneously display a selected comment 162 and a rendering of the video content associated with the comment 162. In this example, computing device 100 may modify the size of comment portion 150 to allow the video display area 140 to display the rendering 116 of the video content simultaneously with the comment 162.

The reconfigured user interface 130" may be implemented in a variety of ways. In this particular example, the top border of the comment portion 150 is modified such that comments above the selected comment 162 are hidden and the selected comment 162 is displayed at the top of the comment portion 150. As shown by the arrows in FIG. 4, the top border of the comment portion 150 is adjusted to reveal the video display area 140. Although FIG. 4 shows one example in which a dimension (e.g., the height (H)) of the comment portion 150 is modified, it can be appreciated that the reconfigured user interface 130" can involve modifying the shape or size of any graphical element to allow simultaneous viewing of the video display area 140 and the selected comment 162.
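The border adjustment described above can be sketched as a simple layout recalculation. This sketch is illustrative only; the Layout fields and the reconfigure function are assumed names, not part of the disclosure.

```typescript
// Sketch of the FIG. 4 border adjustment: move the comment portion's top
// border down to the selected comment, hiding comments above it and freeing
// vertical space for the video display area.

interface Layout {
  commentPortionTop: number;    // y of the comment portion's top border
  commentPortionHeight: number; // height of the comment portion
  selectedCommentTop: number;   // y of the selected comment (document coords)
}

function reconfigure(layout: Layout): Layout {
  // The distance the top border must move so the selected comment sits
  // at the top of the (now shorter) comment portion.
  const delta = layout.selectedCommentTop - layout.commentPortionTop;
  return {
    commentPortionTop: layout.selectedCommentTop,
    commentPortionHeight: layout.commentPortionHeight - delta,
    selectedCommentTop: layout.selectedCommentTop,
  };
}
```

The freed space (delta pixels above the new border) is what the video display area 140 occupies in the reconfigured layout.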

FIG. 5 illustrates an example process involving a computing device that may transition from a reconfigured user interface back to the original user interface. Continuing the example of FIG. 4, FIG. 5 shows a two-step process starting with the reconfigured user interface 130". In this example, the reconfigured user interface 130" includes a selectable graphical element 164 indicating that the comment portion 150 is collapsed, and is configured to receive a user input 165 at the selectable graphical element 164. In response to the user input 165, the computing device 100 may revert back to the original user interface 130, which displays the selected comment 162 within the viewing area 160 and places the video display area 140 off-screen.

Although this example utilizes touch-based user input, it may be appreciated that this embodiment may involve a voice command or any other type of input to restore the display to the original user interface 130. This embodiment may also involve other types of graphical elements besides the selectable graphical element 164. For example, the top border of the comment portion 150 may be bolded, shaped, colored, or otherwise modified to indicate the collapsed state of the comment portion 150.

In addition to providing the reconfigured user interface 130", the computing device 100 may generate a customized user interface 130'. In such embodiments, the computing device may generate an entirely new user interface configuration that renders the content within the display area 140 concurrently with at least one comment (e.g., the selected comment 162).

Fig. 6A illustrates an example process involving a computing device that may generate a customized user interface 130' to simultaneously display a selected comment 162 and video content associated with the comment 162. In this illustrative example, the selected comment 162 is located below the video display area 140 within the viewing area 160. It is also shown that the customized user interface 130' is arranged such that the display of the rendered video 116 does not obscure or otherwise block the display of the selected comment 162 or any other comment. This example is provided for illustrative purposes and should not be construed as limiting. The customized user interface 130' may have the comment 162 at any position, orientation, or size relative to the rendered video 116.

Fig. 6B illustrates an example of a customized user interface 130' configured to simultaneously display a comment and video content associated with the comment. In general, the display area 140 of the video content and the selected commentary 162 may be in any arrangement. For example, as shown in example 1, the selected comment 162 and related comment 163 may be oriented proximate to the display area 140 that includes the rendering of the video content 116.

In some configurations, the display area 140 of the video content 116 may have a threshold level of overlap with respect to the selected comment 162. For illustrative purposes, the threshold level of overlap may range from zero overlap up to any predetermined level of overlap, which may be defined in a user preference file or in other contextual data based on user activity. In other embodiments, the threshold level of overlap may be based on the amount of the comment 162's text that is covered. For example, a threshold level of overlap may allow some overlap as long as the text can still be interpreted. Thus, as shown in example 2, the amount of overlap may allow the video to overlay or otherwise obscure some words of the comment 162 while still enabling the user to understand the nature of the comment. Accordingly, the user interface may permit a threshold level of overlap when certain keywords (e.g., "the" and "a", as well as other conjunctions or transitional words) are covered by the video rendering.

The system may also adjust the size and/or position of the display elements to eliminate or minimize any overlap between the display area 140 and the comment 162. As shown in example 3, the height (H) and width (W) of the display area 140 or of at least one comment 162 may be adjusted to eliminate any overlap between the two graphical elements. In one illustrative example, the threshold level of overlap may be based on the content of the comment. If it is determined that the comment includes a threshold percentage of predetermined keywords, the system can set the threshold level of overlap to zero and arrange the graphical elements accordingly, as shown in example 3.
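The content-based threshold just described can be sketched as a keyword-density check. Everything here is an illustrative assumption: the keyword list, the 25% cutoff, and the 0.1 fallback ratio are invented for the sketch and do not come from the disclosure.

```typescript
// Sketch: derive an allowed overlap ratio from a comment's content. If the
// comment contains at least a threshold percentage of predetermined keywords,
// allow zero overlap; otherwise permit a small fraction of it to be covered.

const KEYWORDS = new Set(["error", "bug", "timestamp", "important", "question"]);

function allowedOverlapRatio(commentText: string, cutoff = 0.25): number {
  const words = commentText.toLowerCase().split(/\W+/).filter(w => w.length > 0);
  if (words.length === 0) return 0.1;
  const hits = words.filter(w => KEYWORDS.has(w)).length;
  // Keyword-dense comments must remain fully visible (zero overlap);
  // otherwise a small fraction of the comment's area may be covered.
  return hits / words.length >= cutoff ? 0 : 0.1;
}
```

The returned ratio would feed the overlap-threshold comparison when positioning or resizing the display area 140 relative to the comment.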

Similar to the examples shown in fig. 4 and 5, the customized user interface may also be configured to revert back to the original user interface. In such embodiments, the computing device 100 may revert the display of the customized user interface 130' back to the original user interface 130 in response to the user input. In some configurations, the computing device 100 may also revert the display of the customized user interface 130' back to the original user interface 130 based on other events (e.g., a timer or other criteria).

In some embodiments, the computing device 100 may take different actions depending on the type of input provided by the user. FIG. 7 illustrates an example process involving a computing device 100 that may generate different types of user interfaces depending on the type of user input. In this example, the computing device 100 may display the second rendering 117 of the video content within the comment portion based on a first type of user input. The first type of user input may be any type of input, including passive input (e.g., a hover of cursor 161) or any other suitable input. As shown, the user interface 130 may transition from the example on the left side of the figure to the user interface 130 in the upper right corner of FIG. 7 in response to the first type of user input. The user interface 130 in the upper right corner of FIG. 7 shows the rendering of the video content 117 within the comment portion 150.

FIG. 7 also shows that the computing device may take other actions based on a second type of user input. In this example, based on the second type of user input (which may include more active types of input, e.g., hover-and-click or hover-and-double-click), the computing device 100 may generate a customized user interface 130'. As shown, the user interface 130 can transition from the example on the left side of the figure to the customized user interface 130' in the lower right corner of FIG. 7 in response to the second type of user input. The user interface 130' in the lower right corner of FIG. 7 shows the rendering of the video content 117 along with the selected comment 162 and comment 163.
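The two-way dispatch in FIG. 7 can be sketched as a mapping from input type to UI action. The type names and event labels below are assumptions for illustration; the disclosure only requires that a passive input and a more active input lead to different interface behaviors.

```typescript
// Sketch of the FIG. 7 input-type dispatch: a passive input (hover) yields
// an inline rendering within the comment portion while the page position is
// maintained; an active input (click) opens the customized user interface.

type InputType = "hover" | "click";
type UiAction = "inline-rendering" | "customized-ui";

function dispatch(input: InputType): UiAction {
  return input === "hover" ? "inline-rendering" : "customized-ui";
}
```

Additional input types (e.g., double-click mapping to the reconfigured interface) could extend the same dispatch without changing its structure.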

These examples have been provided for illustrative purposes and should not be construed as limiting. It may be appreciated that any type of user interface may be generated in response to various types of user input. For example, in the example of FIG. 7, in response to the second type of user input, the computing device 100 may instead generate a reconfigured user interface 130", or any other user interface configuration that simultaneously displays the selected comment 162 and the video associated with the selected comment 162.

In some configurations, the computing device 100 may render video content in a variety of different formats and media. For example, a link within the comment portion 150 may be configured to play an audio clip associated with a video. In such embodiments, as shown in FIG. 8, rather than rendering the video in response to the user's selection of the comment 162, the computing device 100 may display the audio interface element 118 simultaneously with the associated comment. The computing device 100 may also render audio output 802 from the speakers 801 in response to the user selection of the comment 162. In some configurations, the computing device 100 may render the audio output 802 from the speaker 801 in response to a user selection of at least a portion of the comment 162 without displaying the audio interface element 118, while also controlling the position of the user interface 130 to display the comment portion 150.

In another example, a link within the comment portion 150 may be configured to display a still image captured from the video content. In such embodiments, as shown in FIG. 9, rather than rendering the video in response to a user selection of the comment 162, the computing device 100 may display a still image from a particular point in time of the video content. The image may be displayed simultaneously with the associated comment, or otherwise within the comment portion 150 of the user interface 130.
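Taken together with the audio example of FIG. 8, the media-type selection could be sketched as follows. The media-type strings and the returned keys are hypothetical stand-ins for the link metadata described above, not names from the disclosure.

```python
def render_linked_media(media_type: str, timestamp: float) -> dict:
    """Choose how to render the media that a comment link points at.

    media_type is assumed to come from the link metadata attached to the
    comment; the returned dict is an illustrative rendering descriptor.
    """
    if media_type == "audio":
        # Show an audio interface element and play the clip from the speakers.
        return {"element": "audio_interface", "start": timestamp}
    if media_type == "image":
        # Display a still frame captured at the linked point in the video.
        return {"element": "still_image", "frame_at": timestamp}
    # Default: render the video itself within the comment portion.
    return {"element": "video_rendering", "start": timestamp}
```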

FIG. 10 is a diagram illustrating aspects of a routine 1000 for displaying a video within the comment portion of a user interface in a computationally efficient manner. It should be understood by those of ordinary skill in the art that the operations of the methods disclosed herein are not necessarily presented in any particular order, and that it is possible and contemplated to perform some or all of the operations in an alternative order. The operations have been presented in the demonstrated order for ease of description and illustration. Operations may be added, omitted, performed together, and/or performed simultaneously without departing from the scope of the appended claims.

It should also be understood that the illustrated method may end at any time, and need not be performed in its entirety. As defined herein, some or all of the operations of a method and/or substantially equivalent operations may be performed by execution of computer-readable instructions included on a computer storage medium. The term "computer-readable instructions" and variations thereof, as used in the specification and claims, is used broadly herein to include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, combinations thereof, and the like.

Thus, it should be appreciated that the logical operations described herein are implemented as: (1) a sequence of computer-implemented acts or program modules running on a computing system such as those described herein; and/or (2) interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations may be implemented in software, firmware, special purpose digital logic, and any combination thereof.

Additionally, the operations illustrated in FIG. 10 and the other figures may be implemented in association with the example presentation UIs described above. For example, the various device(s) and/or module(s) described herein may generate, transmit, receive, and/or display data associated with content of a video (e.g., real-time content, broadcasted events, recorded content, etc.) and/or a presentation UI that includes a rendering of one or more participants of a remote computing device, an avatar, a channel, a chat session, a video stream, an image, a virtual object, and/or an application associated with the video.

The routine 1000 begins at operation 1002, where the system may cause display of a user interface 130 having a video display area and a comment portion. One example of a user interface 130 is shown in FIG. 1. The video display area 140 may be configured to render the video 116. The comment portion may include a plurality of comments. Each comment may include a link or other associated metadata configured to cause display of a video rendering, or display of other media, in response to a user selection. The user interface 130 may be displayed on a client device such as a tablet computer, mobile phone, desktop computer, or the like.
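The per-comment link metadata described in operation 1002 could be modeled minimally as follows, assuming each comment optionally carries a video identifier and a playback timestamp. The field names are illustrative assumptions, not terms from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommentLink:
    video_id: str     # which video the comment refers to (assumed identifier)
    timestamp: float  # playback position the link points at, in seconds

@dataclass
class Comment:
    comment_id: int
    text: str
    link: Optional[CommentLink] = None  # None for comments with no media link
```

A comment with a populated `link` is the kind of comment whose selection triggers the rendering behavior of operations 1004 through 1008; a comment whose `link` is `None` behaves as ordinary text.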

Next, at operation 1004, the system (e.g., computing device 100) may receive a user input selecting a comment within the comment portion. Because individual comments may be associated with certain types of media through the use of links or other metadata, in response to the user selection of the comment 162, the user interface 130 may generate a rendering of the video 117. The user selection may be made by any suitable input device (e.g., a mouse, a touch surface device, a voice input device, etc.).

At operation 1006, the system may determine a navigation position for the user interface 130, or generate a customized interface or a reconfigured user interface, in response to the user input. As described above, the navigation position may be selected based on the size and position of the video rendering (e.g., the second rendering 117 shown in FIG. 1). The position of the user interface 130 may be selected to allow the second rendering 117 to be displayed within the comment portion of the user interface. In some configurations, a position of the user interface (e.g., a scroll position) may be selected to allow simultaneous, non-overlapping display of the second rendering 117 of the video and the selected comment and/or related comments.
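One plausible way to compute such a scroll position is sketched below, assuming document-relative pixel coordinates and a video rendering placed directly above the selected comment. The geometry and the parameter names are assumptions for illustration, not the disclosed implementation.

```python
def scroll_for_simultaneous_display(comment_top: int, comment_height: int,
                                    video_height: int, viewport_height: int) -> int:
    """Pick a scroll offset (in pixels) so that the selected comment and the
    video rendering placed directly above it both fit in the viewing area.

    All coordinates are measured from the top of the document.
    """
    video_top = comment_top - video_height       # video sits just above the comment
    block_bottom = comment_top + comment_height  # bottom edge of the comment
    # Scroll so the top of the video rendering is visible...
    offset = video_top
    # ...but never so far up that the comment's bottom leaves the viewport.
    if block_bottom - offset > viewport_height:
        offset = block_bottom - viewport_height
    # A scroll offset cannot be negative.
    return max(offset, 0)
```

When the combined video-plus-comment block is taller than the viewport, this heuristic favors keeping the selected comment fully visible, clipping the top of the video instead.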

In other configurations, the rendering of the video related to the selected comment may be displayed with the selected comment in a reconfigured user interface that changes the size and shape of the comment portion to enable simultaneous display of the video (e.g., the first rendering 116) and the selected comment. In another embodiment, a customized user interface may be generated to display the selected comment concurrently with the associated rendering of the video. Such embodiments may arrange the layout of the selected comment and the associated video to fit the screen size of the computing device, where the size and shape of the comment and the size and shape of the video rendering may be adjusted to allow the user to view the comment in addition to viewing a particular portion of the rendered video.
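The screen-size fitting described above could be sketched as follows. The 60/40 vertical split and the 16:9 default aspect ratio are assumed heuristics chosen for illustration; the disclosure does not specify particular proportions.

```python
def fit_layout(screen_w: int, screen_h: int, video_aspect: float = 16 / 9) -> dict:
    """Size the video rendering and the comment pane so both fit the screen.

    Returns (width, height) tuples for the two regions of the customized
    interface; all values are in pixels.
    """
    video_h = int(screen_h * 0.6)                         # top portion for the video
    video_w = min(screen_w, int(video_h * video_aspect))  # preserve aspect, cap at screen
    comment_h = screen_h - video_h                        # remainder for the comments
    return {"video": (video_w, video_h), "comments": (screen_w, comment_h)}
```

On a narrow phone screen, `video_w` is capped by `screen_w`, so the video letterboxes rather than overflowing; the comment pane always receives the remaining vertical space.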

Next, at operation 1008, the system may render the selected comment concurrently with the associated rendering of the video. Several examples are disclosed herein. As shown in FIG. 1, the second rendering 117 may be displayed within the comment portion of the user interface. In another example (e.g., the example shown in FIG. 3), a scroll position may be selected to enable display of the rendered video concurrently with the selected comment and other related comments.

As shown in FIG. 4, other embodiments enable the comment portion to be modified so that the user interface can display the video rendering simultaneously with at least one comment (including the selected comment 162). These examples are provided for illustrative purposes and should not be construed as limiting. It may be appreciated that any position of the user interface, or any shape of a rendered element within the user interface, may be modified to enable the system to display the video concurrently with the selected comment and/or other related comments.

In some aspects of operation 1008, the system may determine a particular scroll position of the user interface or reconfigure the user interface in response to different types of user inputs. For example, as shown in FIG. 7, in response to a first type of input (e.g., hover), the system may automatically display a video within a comment portion of the user interface. In response to a second type of input (e.g., clicking or double-clicking on a particular comment), the system may generate a customized user interface or a reconfigured user interface to display the rendering of the video and the one or more comments.

Next, at operation 1010, the system may return the user interface 130 to its original layout in response to one or more actions. For example, if the system is displaying a customized user interface or a reconfigured user interface after video playback has been initiated at operation 1008, the system may restore the display to the original interface (e.g., the user interface 130 shown on the left side of FIG. 1) when the video playback completes.
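Operation 1010 can be sketched as a small state machine that restores the original layout when playback completes. The state names and method names are illustrative; an actual implementation would also handle other restoring actions, such as an explicit dismissal by the user.

```python
class InterfaceController:
    """Minimal sketch of the layout restoration of operation 1010."""

    def __init__(self) -> None:
        self.layout = "original"

    def on_comment_selected(self) -> None:
        # Operations 1004-1008: selecting a linked comment switches the
        # display to a customized or reconfigured layout.
        self.layout = "customized"

    def on_playback_complete(self) -> None:
        # Operation 1010: when playback ends, restore the original layout
        # (e.g., the interface shown on the left side of FIG. 1).
        if self.layout != "original":
            self.layout = "original"
```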

It should be appreciated that the above-described subject matter may be implemented as a computer-controlled apparatus, a computer process, a computing system, or as an article of manufacture such as a computer-readable storage medium. Operations of the example methods are illustrated in separate blocks and are summarized with reference to those blocks. The methodologies are shown as a logical flow of blocks, and each of these blocks may represent one or more operations that may be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable media, which, when executed by one or more processors, enable the one or more processors to perform the recited operations.

Generally, computer-executable instructions include routines, programs, objects, modules, components, data structures, and so forth that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be performed in any order, combined in any order, subdivided into multiple sub-operations, and/or performed in parallel to implement the described processes. The described processes may be performed by resources associated with one or more devices (e.g., one or more internal or external CPUs or GPUs) and/or one or more hardware logic units (e.g., field programmable gate arrays ("FPGAs"), digital signal processors ("DSPs"), or other types of accelerators).

All of the methods and processes described above may be embodied in, and fully automated via, software code modules executed by one or more general-purpose computers or processors. The code modules may be stored, for example, in any type of computer-readable storage media or other computer storage device described below. Some or all of the methods may alternatively be embodied in dedicated computer hardware, for example, as described below.

Any conventional descriptions, elements, or blocks in flow charts described herein and/or depicted in the accompanying drawings should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the routine. Alternative implementations are included within the scope of the examples described herein in which elements or functions may be deleted from those shown or discussed or performed in a different order, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.

FIG. 11 is a diagram illustrating an example environment 1100 in which a system 1102 may implement the techniques disclosed herein. In some implementations, the system 1102 may be used to collect, analyze, and share data defining one or more objects that are displayed to a user of the communication session 1104.

As shown, communication session 1104 may be implemented between a plurality of client computing devices 1106(1) through 1106(N) (where N is a number having a value of two or more) associated with or part of system 1102. Client computing devices 1106(1) through 1106(N) enable users (also referred to as individuals) to participate in communication session 1104. While this embodiment illustrates a communication session 1104, it can be appreciated that communication session 1104 is not necessary for each embodiment disclosed herein. It may be appreciated that a video stream may be uploaded by each client 1106 and comments may be provided by each client 1106. It is to be appreciated that any client 1106 can also receive video data and audio data from the server module 1130.

In this example, communication session 1104 is hosted by system 1102 on one or more networks 1108. That is, the system 1102 may provide services that enable users of the client computing devices 1106(1) through 1106(N) to participate in the communication session 1104 (e.g., via real-time viewing and/or recorded viewing). Thus, the "participants" of the communication session 1104 may include users and/or client computing devices (e.g., multiple users may be participating in the communication session in a room via the use of a single client computing device), each of which may be in communication with other participants. Alternatively, communication session 1104 may be hosted by one of client computing devices 1106(1) through 1106(N) using peer-to-peer technology. The system 1102 may also host chat conversations and other team collaboration functions (e.g., as part of an application suite).

In some implementations, such chat conversations and other team collaboration functions are considered external communication sessions other than communication session 1104. A computerized agent for collecting participant data in the communication session 1104 may be able to link to such an external communication session. Thus, the computerized agent may receive information such as date, time, session-specific information, etc., which enables connectivity to such external communication sessions. In one example, a chat conversation can be conducted in conjunction with the communication session 1104. Additionally, the system 1102 may host a communication session 1104 that includes at least a plurality of participants co-located at a conference location (e.g., a conference room or auditorium) or located at different locations. In the examples described herein, some embodiments may not utilize communication session 1104. In some embodiments, video may be uploaded to the server module 1130 from at least one of the client computing devices (e.g., 1106(1), 1106(2)). When video content is uploaded to the server module 1130, any client computing device can access the uploaded video content and display the video content within a user interface, such as those described above.

In examples described herein, client computing devices 1106(1) through 1106(N) participating in communication session 1104 are configured to receive and render communication data for display on a user interface of a display screen. The communication data may include various instances or streams of real-time content and/or recorded content. Various instances or streams of real-time content and/or recorded content may be provided by one or more cameras (e.g., video cameras). For example, a single stream of real-time content or recorded content may include media data (e.g., audio data and visual data that capture the look and voice of users participating in a communication session) associated with a video feed provided by a camera. In some implementations, the video feed may include such audio data and visual data, one or more still images, and/or one or more avatars. The one or more still images may also include one or more avatars.

Another example of a single stream of real-time content or recorded content may include media data including an avatar of a user participating in a communication session and audio data capturing the user's voice. Yet another example of a single stream of real-time content or recorded content may include media data including files displayed on a display screen and audio data capturing a user's voice. Thus, various streams of real-time content or recorded content within the communication data enable facilitating teleconferencing between a group of people and sharing of content within a group of people. In some implementations, various streams of real-time content or recorded content within communication data may originate from multiple co-located cameras located in a space (e.g., a room) for recording or streaming a presentation that includes one or more individuals presenting content and one or more individuals consuming the presented content.

The participants or attendees may view the content of communication session 1104 in real time while the activity occurs or, alternatively, at a later time after the activity occurs via recording. In examples described herein, client computing devices 1106(1) through 1106(N) participating in communication session 1104 are configured to receive and render communication data for display on a user interface of a display screen. The communication data may include various instances or streams of real-time content and/or recorded content. For example, a single stream of content may include media data associated with a video feed (e.g., audio data and visual data that capture the appearance and voice of users participating in a communication session). Another example of a single stream of content may include media data that includes an avatar of a user participating in a conference session and audio data that captures the user's voice. Yet another example of a single stream of content may include media data that includes content items displayed on a display screen and/or audio data that captures a user's voice. Thus, the various streams of content within the communication data enable the facilitation of a meeting or broadcast presentation between a group of people dispersed across remote locations. Each stream may also include text, audio, and video data, such as data transmitted within a channel, chat board, or private messaging service.

A participant or attendee of a communication session is a person within range of a camera or other image and/or audio capture device such that actions and/or sounds of the person generated while the person is viewing and/or listening to content shared via the communication session may be captured (e.g., recorded). For example, participants may sit in a crowd viewing shared content that is being played in real time at the broadcast location where the stage presentation occurs. Alternatively, the participants may sit in an office meeting room and view the shared content of the communication session with other colleagues via the display screen. Even further, participants can sit or stand in front of personal devices (e.g., tablet computers, smart phones, computers, etc.), viewing the shared content of the communication session alone in their offices or at home.

System 1102 includes device(s) 1110. Device(s) 1110 and/or other components of system 1102 may include distributed computing resources in communication with each other and/or client computing devices 1106(1) -1106 (N) via one or more networks 1108. In some examples, system 1102 may be a stand-alone system responsible for managing aspects of one or more communication sessions (e.g., communication session 1104). By way of example, system 1102 may be managed by an entity such as YOUTUBE, FACEBOOK, SLACK, WEBEX, GOTOMEETING, GOOGLE HANGOUT, and the like.

Network(s) 1108 may include, for example, a public network (e.g., the internet), a private network (e.g., an institutional and/or personal intranet), or some combination of a private network and a public network. Network(s) 1108 can also include any type of wired and/or wireless network, including but not limited to a local area network ("LAN"), a wide area network ("WAN"), a satellite network, a cable network, a Wi-Fi network, a WiMax network, a mobile communications network (e.g., 3G, 4G, etc.), or any combination thereof. Network(s) 1108 may utilize communication protocols, including packet-based and/or datagram-based protocols, such as internet protocol ("IP"), transmission control protocol ("TCP"), user datagram protocol ("UDP"), or other types of protocols. Further, network(s) 1108 may also include a number of devices that facilitate network communications and/or form the hardware basis of the network, e.g., switches, routers, gateways, access points, firewalls, base stations, repeaters, backbones, and so forth.

In some examples, network(s) 1108 may also include devices that enable connection to wireless networks, e.g., wireless access points ("WAPs"). Examples support connectivity via WAPs that transmit and receive data on various electromagnetic frequencies (e.g., radio frequencies), including WAPs that support institute of electrical and electronics engineers ("IEEE") 802.11 standards (e.g., 802.11g, 802.11n, 802.11ac, etc.) and other standards.

In various examples, device(s) 1110 may include one or more computing devices operating in a clustered or other grouped configuration to share resources, balance load, improve performance, provide failover support or redundancy, or for other purposes. For example, device(s) 1110 may belong to various categories of devices, such as traditional server-type devices, desktop-type devices, and/or mobile-type devices. Thus, although device(s) 1110 are illustrated as a single type of device or a server type of device, device(s) 1110 may include a wide variety of device types and are not limited to a particular type of device. Device(s) 1110 may represent, but are not limited to, a server computer, desktop computer, web server computer, personal computer, mobile computer, laptop computer, tablet computer, or any other kind of computing device.

The client computing device (e.g., one of client computing devices 1106(1) through 1106(N)) may belong to various categories of devices, which may be the same as or different from device(s) 1110, such as a traditional server-type device, a desktop-type device, a mobile-type device, a dedicated-type device, an embedded-type device, and/or a wearable-type device. Thus, client computing devices may include, but are not limited to, desktop computers, game consoles and/or gaming devices, tablet computers, personal data assistants ("PDAs"), mobile phone/tablet hybrid devices, laptop computers, telecommunication devices, computer navigation type client computing devices (e.g., satellite-based navigation systems including global positioning system ("GPS") devices), wearable devices, virtual reality ("VR") devices, augmented reality ("AR") devices, implantable computing devices, automotive computers, network-enabled televisions, thin clients, terminals, internet of things ("IoT") devices, workstations, media players, personal video recorders ("PVRs"), set-top boxes, cameras, integrated components (e.g., peripherals) for inclusion in computing devices, home appliances, or any other kind of computing device. Further, the client computing device may include a combination of the earlier listed examples of the client computing device, e.g., a desktop computer type device or a mobile type device in combination with a wearable device or the like.

Client computing devices 1106(1) through 1106(N) of various classes and device types may represent any type of computing device having one or more data processing units 1192 operatively connected to a computer-readable medium 1194 (e.g., via bus 1116), and in some instances, bus 1116 may include one or more of a system bus, a data bus, an address bus, a PCI bus, a Mini-PCI bus, and any of a variety of local, peripheral, and/or independent buses.

Executable instructions stored on computer-readable media 1194 may include, for example, operating system 1119, client modules 1120, profile module 1122, and other modules, programs, or applications that may be loaded and executed by data processing unit(s) 1192.

Client computing devices 1106(1) through 1106(N) may also include one or more interfaces 1124 to enable communication between client computing devices 1106(1) through 1106(N) and other networked devices (e.g., device(s) 1110) over network(s) 1108. Such a network interface 1124 may include one or more network interface controllers (NICs) or other types of transceiver devices to send and receive communications and/or data over a network. Further, client computing devices 1106(1) through 1106(N) may include input/output ("I/O") interfaces (devices) 1126 that enable communication with, or that include, input/output devices (e.g., user input devices including peripheral input devices (e.g., game controllers, keyboards, mice, pens, sound input devices such as microphones, cameras for obtaining and providing video feeds and/or still images, touch input devices, gesture input devices, etc.) and/or output devices including peripheral output devices (e.g., displays, printers, audio speakers, touch output devices, etc.)). FIG. 11 illustrates that the client computing device 1106(N) is connected in some manner to a display device (e.g., display screen 1129(1)) that may display a UI in accordance with the techniques described herein.

In the example environment 1100 of FIG. 11, the client computing devices 1106(1) through 1106(N) may connect to each other and/or other external device(s) using their respective client modules 1120 in order to participate in the communication session 1104 or otherwise contribute activity to the collaborative environment. For example, a first user may utilize a client computing device 1106(1) to communicate with a second user of another client computing device 1106(2). When executing the client module 1120, users may share data, which may result in the client computing device 1106(1) being connected to the system 1102 and/or other client computing devices 1106(2) through 1106(N) through network(s) 1108.

Client computing devices 1106(1) through 1106(N) (each of which is also referred to herein as a "data processing system") may use their respective profile modules 1122 to generate participant profiles (not shown in FIG. 11) and provide the participant profiles to other client computing devices and/or device(s) 1110 of system 1102. The participant profile may include one or more of the following: an identity of a user or group of users (e.g., name, unique identifier ("ID"), etc.), user data (e.g., personal data), machine data such as location (e.g., IP address, room in a building, etc.), and technical capabilities, among others. The participant profile may be utilized to register the participant for the communication session.

As shown in FIG. 11, the device(s) 1110 of the system 1102 include a server module 1130 and an output module 1132. In this example, the server module 1130 is configured to receive media streams 1134(1) through 1134(N) from various client computing devices (e.g., client computing devices 1106(1) through 1106(N)). As described above, the media stream may include video feeds (e.g., audio and visual data associated with the user), audio data to be output with the presentation of the user's avatar (e.g., a pure audio experience in which the user's video data is not sent), text data (e.g., a text message), file data, and/or screen sharing data (e.g., a document, a slide show layout, an image, a video, etc. displayed on a display screen), and so forth. Accordingly, server module 1130 is configured to receive a set of various media streams 1134(1) through 1134(N) (referred to herein as "media data 1134") during real-time viewing of communication session 1104. In some scenarios, not all client computing devices participating in communication session 1104 provide the media stream. For example, the client computing device may be only a consuming device or a "listening" device such that it only receives content associated with communication session 1104 and does not provide any content to communication session 1104.

In various examples, the server module 1130 may select aspects of the media stream 1134 to be shared with a single one of the participating client computing devices 1106(1) through 1106 (N). Accordingly, the server module 1130 may be configured to generate session data 1136 based on the flow 1134 and/or pass the session data 1136 to the output module 1132. Output module 1132 may then transmit communication data 1139 to client computing devices (e.g., client computing devices 1106(1) through 1106(3)) participating in the live view of the communication session. Communication data 1139 may include video, audio, and/or other content data provided by output module 1132 based on content 1150 associated with output module 1132 and based on received session data 1136.

As shown, output module 1132 sends communication data 1139(1) to client computing device 1106(1), and communication data 1139(2) to client computing device 1106(2), and communication data 1139(3) to client computing device 1106(3), and so on. The communication data 1139 sent to the client computing devices may be the same or may be different (e.g., the location of the content stream within the user interface may be different between the devices).

In various implementations, the device(s) 1110 and/or the client module 1120 can include a GUI presentation module 1140. The GUI presentation module 1140 may be configured to analyze the communication data 1139 that is for delivery to one or more of the client computing devices 1106. In particular, a GUI presentation module 1140 at the device(s) 1110 and/or the client computing device 1106 can analyze the communication data 1139 to determine an appropriate manner for displaying videos, images, and/or content on the display screen 1129 of the associated client computing device 1106. In some implementations, the GUI presentation module 1140 can provide the video, images, and/or content to a presentation GUI 1146 rendered on a display screen 1129 of the associated client computing device 1106. The GUI presentation module 1140 may cause the presentation GUI 1146 to be rendered on the display screen 1129. The presentation GUI 1146 may include the video, images, and/or content analyzed by the GUI presentation module 1140.

In some implementations, the presentation GUI 1146 may include multiple portions or grids that may render or include video, images, and/or content for display on the display screen 1129. For example, a first portion of the presentation GUI 1146 may include a video feed of a presenter or individual, and a second portion of the presentation GUI 1146 may include a video feed of an individual consuming meeting information provided by the presenter or individual. The GUI presentation module 1140 can populate the first and second portions of the presentation GUI 1146 in a manner that appropriately mimics the environmental experience that presenters and individuals can share.

In some implementations, the GUI presentation module 1140 can zoom in or provide a zoomed view of the person represented by the video feed in order to highlight the person's reaction to the presenter, e.g., facial features. In some implementations, the presentation GUI 1146 can include video feeds of multiple participants associated with a conference (e.g., a general communication session). In other implementations, the presentation GUI 1146 can be associated with a channel such as a chat channel, a corporate team channel, and the like. Thus, the presentation GUI 1146 may be associated with an external communication session other than a general communication session.

Fig. 12 shows a diagram illustrating example components of an example device 1200 (also referred to herein as a "computing device"), the example device 1200 being configured to generate data for some of the user interfaces disclosed herein. Device 1200 may generate data that may include one or more portions that may render or include video, images, virtual objects, and/or content for display on display screen 1129. Device 1200 may represent one of the device(s) described herein. Additionally or alternatively, device 1200 may represent one of client computing devices 1106.

As shown, device 1200 includes one or more data processing units 1202, a computer-readable medium 1204, and communication interface(s) 1206. The components of device 1200 are operatively coupled, for example, via a bus 1209, which may include one or more of a system bus, a data bus, an address bus, a PCI bus, a Mini-PCI bus, and any of a variety of local, peripheral, and/or independent buses.

As utilized herein, data processing unit(s) (e.g., data processing unit(s) 1202 and/or data processing unit(s) 1192) may represent, for example, a CPU-type data processing unit, a GPU-type data processing unit, a field-programmable gate array ("FPGA"), another class of digital signal processor ("DSP"), or other hardware logic component, which in some instances may be driven by the CPU. For example, and without limitation, illustrative types of hardware logic components that may be utilized include application-specific integrated circuits ("ASICs"), application-specific standard products ("ASSPs"), system-on-a-chip systems ("SOCs"), complex programmable logic devices ("CPLDs"), and the like.

As utilized herein, computer-readable media (e.g., computer-readable media 1204 and computer-readable media 1194) may store instructions that are executable by the data processing unit(s). The computer-readable medium may also store instructions that are executable by an external data processing unit (e.g., by an external CPU, an external GPU) and/or by an external accelerator (e.g., an FPGA-type accelerator, a DSP-type accelerator, or any other internal or external accelerator). In various examples, at least one CPU, GPU, and/or accelerator is incorporated in the computing device, while in some examples, one or more of the CPU, GPU, and/or accelerator is external to the computing device.

A computer-readable medium (also referred to herein as a plurality of computer-readable media) may include computer storage media and/or communication media. Computer storage media may include one or more of volatile memory, non-volatile memory, and/or other persistent and/or secondary computer storage media, removable and non-removable computer storage media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Thus, computer storage media includes media in tangible and/or physical form that is included in a device and/or hardware components that are part of or external to a device, including, but not limited to, random access memory ("RAM"), static random access memory ("SRAM"), dynamic random access memory ("DRAM"), phase change memory ("PCM"), read-only memory ("ROM"), erasable programmable read-only memory ("EPROM"), electrically erasable programmable read-only memory ("EEPROM"), flash memory, compact disc read-only memory ("CD-ROM"), digital versatile discs ("DVDs"), optical cards or other optical storage media, magnetic cassettes, magnetic tape, magnetic disk storage, magnetic cards or other magnetic storage devices or media, solid-state memory devices, storage arrays, network-attached storage, storage area networks, hosted computer storage, or any other storage memory, storage device, and/or storage medium that can be used to store and maintain information for access by a computing device.

In contrast to computer storage media, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism. As defined herein, computer storage media does not include communication media. That is, computer storage media does not include communication media consisting solely of modulated data signals, carrier waves, or propagated signals per se.

Communication interface(s) 1206 may represent, for example, a network interface controller ("NIC") or other type of transceiver device to send and receive communications over a network. Further, the communication interface(s) 1206 may include one or more cameras and/or audio devices 1222 to enable generation of video feeds and/or still images, among other things.

In the example shown, computer-readable media 1204 includes a data repository 1208. In some examples, data repository 1208 includes a data store, such as a database, data warehouse, or other type of structured or unstructured data store. In some examples, data repository 1208 includes a corpus of one or more tables, indexes, stored procedures, and/or the like, and/or a relational database to enable data access, including, for example, one or more of: a hypertext markup language ("HTML") table, a resource description framework ("RDF") table, a web ontology language ("OWL") table, and/or an extensible markup language ("XML") table.

The data repository 1208 may store data for the operations of processes, applications, components, and/or modules stored in the computer-readable medium 1204 and/or executed by the data processing unit(s) 1202 and/or accelerator(s). For example, in some examples, data repository 1208 may store session data 1210 (e.g., session data 1136), profile data 1212 (e.g., associated with participant profiles), and/or other data. The session data 1210 may include the total number of participants (e.g., users and/or client computing devices) in the communication session, activities occurring in the communication session, a list of invitees to the communication session, and/or other data related to when and how the communication session is conducted or hosted. The data repository 1208 may also include content data 1214, e.g., content including video, audio, or other content for rendering and display on one or more of the display screens 1129.

Alternatively, some or all of the data referenced above may be stored on a separate memory 1181 on board the one or more data processing units 1202 (e.g., memory on board a CPU-type processor, a GPU-type processor, an FPGA-type accelerator, a DSP-type accelerator, and/or another accelerator). In this example, the computer-readable media 1204 also includes an operating system 1218 and application programming interface(s) 1210 ("APIs") configured to expose the functions and data of device 1200 to other devices. Additionally, the computer-readable medium 1204 includes one or more modules (e.g., the server module 1230, the output module 1232, and the GUI presentation module 1246), although the number of illustrated modules is merely an example and the number may be higher or lower. That is, the functionality described herein in association with the illustrated modules may be performed by a smaller or larger number of modules on one device, or spread across multiple devices.

It should be appreciated that conditional language used herein (e.g., "can", "could", "might", or "may") is, unless specifically stated otherwise, understood within the context to express that some examples include certain features, elements, and/or steps while other examples do not. Thus, such conditional language is not generally intended to imply that certain features, elements, and/or steps are in any way required for one or more examples, or that one or more examples necessarily include logic for deciding (with or without user input or prompting) whether certain features, elements, and/or steps are included or are to be performed in any particular example. Conjunctive language such as the phrase "at least one of X, Y, or Z" is, unless specifically stated otherwise, understood to mean that an item, term, etc. may be X, Y, or Z, or a combination thereof.

Other variations that apply to the techniques disclosed herein may also be within the scope of the present disclosure. For example, although the examples disclosed herein relate to the selection of comments, the techniques disclosed herein extend to any user selection of characters, words, images, or any other graphical elements associated with comments or text. Thus, if a user selects a particular word or a particular image within a comment or any other text, the system may respond by displaying a rendering of the video within the portion of the user interface containing the selected word, image, etc. It can be appreciated that each comment or phrase within the text portion may also include multiple links. Thus, in the examples disclosed herein, a single comment may include multiple words, where each word of the single comment has a unique link.

It will also be appreciated that variations and modifications may be made to the examples described above, and that elements thereof should be understood to be among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

Finally, although various configurations have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.

The disclosure presented herein also encompasses the subject matter set forth in the following clauses:

Clause 1, a method to be performed by a data processing system, the method comprising: causing display of a user interface, the user interface including a video display area and a comment portion, wherein a position of the user interface displays the comment portion within a viewing area of a display device, the position of the user interface positioning the video display area displaying a rendering of content outside of the viewing area; receiving a user input indicating a selection of at least a portion of a comment displayed within the comment portion; and in response to receiving the user input, generating a second rendering of the content for display within the comment portion, wherein the user interface is configured to maintain the position of the user interface to position the comment portion within the viewing area of the display device while displaying the second rendering of the content within the viewing area of the display device.
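The core behavior of clause 1 — rendering the content inside the comment portion while maintaining the interface's position so the selected comment stays in view — can be modeled with a small sketch. The class, field names, and row-height scroll model are all illustrative assumptions, not part of the claimed method:

```python
# Hypothetical model of clause 1: selecting a comment registers an inline
# rendering of the content within the comment portion, and the scroll position
# is maintained whenever the selected comment is already within the viewport.
class CommentPane:
    def __init__(self, comments, viewport_top=0, viewport_height=400, row_height=40):
        self.comments = list(comments)
        self.viewport_top = viewport_top          # navigation position (pixels)
        self.viewport_height = viewport_height
        self.row_height = row_height
        self.inline_renderings = {}               # comment index -> content id

    def select_comment(self, index, content_id):
        # Generate a second rendering of the content within the comment portion.
        self.inline_renderings[index] = content_id
        # Maintain the position: only scroll if the comment was off-screen.
        top = index * self.row_height
        if not (self.viewport_top <= top < self.viewport_top + self.viewport_height):
            self.viewport_top = max(0, top)
        return self.viewport_top
```

Selecting a comment already inside the viewport leaves the position untouched; selecting an off-screen comment scrolls it into view together with its rendering.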

Clause 2, the method according to clause 1, further comprising: controlling the position of the user interface to display the comment concurrently with the second rendering of the content.

Clause 3, the method according to clauses 1-2, further comprising: controlling the position of the user interface to concurrently display the comment, a related comment, and the second rendering of the content.

Clause 4, the method according to clauses 1-3, further comprising: determining that a media type associated with the comment comprises a still image of the content, wherein the second rendering comprises the still image of the content displayed within the viewing area of the display device in response to the user input.

Clause 5, the method according to clauses 1-4, further comprising: determining that a media type associated with the comment comprises audio data of the content, wherein the second rendering comprises a graphical user interface indicating playback of the audio data; and causing an audio device to generate an audio output of the audio data.

Clause 6, the method according to clauses 1-5, wherein metadata associated with the comment defines a time interval of video data defining the content, and wherein the second rendering comprises a display of the time interval of the content within the viewing area of the display device in response to the user input.
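Clauses 4-6 describe the second rendering as depending on the media type of the comment's metadata: a still image, audio data, or a time interval of video. A dispatch over those cases might look like the following sketch; the dictionary field names and return shapes are assumptions for illustration:

```python
# Hypothetical media-type dispatch for clauses 4-6: the comment's metadata
# determines whether the second rendering is a still image, an audio playback
# GUI with an audio output, or a time interval of the video content.
def build_second_rendering(comment_metadata):
    media_type = comment_metadata.get("media_type")
    if media_type == "still_image":
        # Clause 4: display the still image of the content in the viewing area.
        return {"kind": "image", "source": comment_metadata["frame"]}
    if media_type == "audio":
        # Clause 5: show a playback GUI and cause an audio output.
        return {"kind": "audio_player", "source": comment_metadata["clip"],
                "emit_audio": True}
    if media_type == "video":
        # Clause 6: metadata defines a time interval of the video data.
        start, end = comment_metadata["interval"]
        return {"kind": "video", "start": start, "end": end}
    raise ValueError(f"unknown media type: {media_type!r}")
```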

Clause 7, the method according to clauses 1-6, further comprising: analyzing the user input based on data received from an input device to determine an input type; in response to determining that the input type is a first input type, displaying the second rendering of the content within the comment portion while maintaining the position of the user interface; and in response to determining that the input type is a second input type, displaying a customized user interface that concurrently displays the selected comment and at least one of the rendering of the content or the second rendering of the content.

Clause 8, the method according to clauses 1-7, wherein the first input type comprises a cursor hovering over at least a portion of the comment.

Clause 9, the method according to clauses 1-8, wherein the second input type comprises: a cursor hovering over at least a portion of the comment; and a user actuation of an input device, the user actuation indicating selection of the comment.
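The input-type branching of clauses 7-9 (hover alone versus hover plus actuation) can be sketched as a small classifier. The function names and result strings are illustrative, not part of the claims:

```python
# Hypothetical sketch of clauses 7-9: a hover over the comment (first input
# type) yields an inline rendering with the position maintained, while a hover
# combined with an input-device actuation (second input type) opens a
# customized user interface.
def classify_input(hovering, actuated):
    if hovering and actuated:
        return "second"     # cursor over the comment plus device actuation
    if hovering:
        return "first"      # cursor merely hovering over the comment
    return "none"

def respond_to_input(hovering, actuated):
    kind = classify_input(hovering, actuated)
    if kind == "first":
        return "inline rendering, position maintained"
    if kind == "second":
        return "customized user interface with comment and rendering"
    return "no action"
```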

Clause 10, a method to be performed by a data processing system, the method comprising: causing display of a user interface, the user interface including a video display area and a comment portion, wherein a position of the user interface displays the comment portion within a viewing area of a display device, the position of the user interface positioning the video display area displaying a rendering of video content outside of the viewing area; receiving a user input indicating a selection of a comment displayed within the comment portion; and in response to receiving the user input, generating a customized user interface rendering the video content concurrently with a display of the comment, wherein the customized user interface is configured to have a threshold level of overlap between the rendering of the video content and the comment.
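The overlap constraint in clause 10 can be illustrated with rectangle arithmetic: the customized user interface places the video rendering so that its overlap with the comment stays within a threshold. The rectangle representation and the threshold-as-fraction semantics below are assumptions for the sketch:

```python
# Hypothetical sketch of clause 10's constraint: check that a video rendering
# rectangle covers no more than a threshold fraction of the comment rectangle.
def overlap_area(a, b):
    # a, b: rectangles given as (x, y, w, h)
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    h = max(0, min(ay + ah, by + bh) - max(ay, by))
    return w * h

def within_overlap_threshold(rendering, comment, threshold_fraction=0.1):
    """True if the rendering covers at most `threshold_fraction` of the comment."""
    _, _, cw, ch = comment
    return overlap_area(rendering, comment) <= threshold_fraction * cw * ch
```

A layout engine could use such a predicate to reject candidate positions for the rendering until the threshold level of overlap is satisfied.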

Clause 11, the method according to clause 10, further comprising: controlling the position of the user interface to display the comment concurrently with a second rendering of the content.

Clause 12, the method according to clauses 10-11, further comprising: controlling the position of the user interface to concurrently display the comment, a related comment, and the second rendering of the content.

Clause 13, the method according to clauses 10-12, further comprising: determining that a media type associated with the comment comprises a still image of the content, wherein the second rendering comprises the still image of the content displayed within the viewing area of the display device in response to the user input.

Clause 14, the method according to clauses 10-13, further comprising: determining that a media type associated with the comment comprises audio data of the content, wherein the second rendering comprises a graphical user interface indicating playback of the audio data; and causing an audio device to generate an audio output of the audio data.

Clause 15, the method according to clauses 10-14, wherein metadata associated with the comment defines a time interval of video data defining the content, and wherein the second rendering comprises a display of the time interval of the content within the viewing area of the display device in response to the user input.

Clause 16, a system, comprising: means for causing display of a user interface, the user interface including a video display area and a text portion, wherein a navigation position of the user interface displays the text portion within a viewing area of a display device, the navigation position of the user interface positioning the video display area displaying a rendering of content outside of the viewing area; means for receiving a user input indicating a selection of selected text displayed within the text portion; and means for generating, in response to receiving the user input, a second rendering of the content for display within the text portion, wherein the user interface is configured to maintain the navigation position of the user interface to position the text portion within the viewing area of the display device while rendering audio of the content, or displaying a still image or video of the second rendering of the content, within the viewing area of the display device.

Clause 17, the system according to clause 16, further comprising: means for controlling the navigation position of the user interface to concurrently display the comment while rendering audio of the content, or displaying a still image or video of the second rendering of the content, within the viewing area of the display device.

Clause 18, the system according to clauses 16-17, further comprising: means for controlling the navigation position of the user interface to concurrently display the comment, a related comment, and the second rendering of the content.

Clause 19, the system according to clauses 16-18, further comprising: means for determining that a media type associated with the comment comprises a still image of the content, wherein the second rendering comprises the still image of the content displayed within the viewing area of the display device in response to the user input.

Clause 20, the system according to clauses 16-19, further comprising: means for analyzing the user input based on data received from an input device to determine an input type; means for displaying the second rendering of the content within the text portion while maintaining the navigation position of the user interface in response to determining that the input type is a first input type; and means for displaying, in response to determining that the input type is a second input type, a customized user interface that concurrently displays the selected text and at least one of the rendering of the content 116 or the second rendering of the content 117.
