Intelligent automated assistant in a media environment

Document No. 1875105 · Publication date: 2021-11-23

Reading note: This technology, Intelligent automated assistant in a media environment, was designed and created by L. T. Napolitano, G. H. Hwang, H. D. Penha, J. D. Shaw, and J. S. Fino on 2016-08-16. Its main content is as follows: A system and process for operating a digital assistant in a media environment is disclosed. In an exemplary embodiment, a user may interact with a digital assistant of a media device while content is displayed by the media device. In one approach, a plurality of exemplary natural language requests may be displayed in response to detecting a user input of a first input type. The plurality of exemplary natural language requests may be contextually related to the displayed content. In another approach, a user request may be received in response to detecting a user input of a second input type. A task that at least partially satisfies the user request may be performed. The task performed may depend on the nature of the user request and the content being displayed by the media device. In particular, the user request may be satisfied while reducing interference with the user's consumption of the media content.

1. A method for operating a digital assistant of a media system, the method comprising:

at an electronic device with memory and one or more processors:

displaying content on a display unit;

while displaying the content, detecting a user input, wherein the user input is nonverbal;

in response to detecting the user input, sampling audio data, wherein the audio data comprises a user utterance representing a media search request, the media search request comprising a plurality of parameters;

obtaining a plurality of media items satisfying the media search request;

displaying, via a first user interface, at least a portion of the plurality of media items on the display unit, wherein a display area occupied by the displayed content is larger than a display area occupied by the first user interface;

in response to detecting a continuous contact motion in a first direction, obtaining a second plurality of media items that satisfy at least one parameter of the plurality of parameters; and

displaying the second plurality of media items on the display unit via a second user interface, wherein a display area occupied by the displayed content is smaller than a display area occupied by the second user interface.

2. The method of claim 1, wherein the content continues to be displayed on the display unit while the at least a portion of the plurality of media items is displayed, and wherein a display area occupied by the first user interface is smaller than a display area occupied by the content.

3. The method of claim 1, further comprising:

determining whether a number of media items in the plurality of media items is less than or equal to a predetermined number, wherein the at least a portion of the plurality of media items includes the plurality of media items in accordance with the determination that the number of media items in the plurality of media items is less than or equal to the predetermined number.

4. The method of claim 3, wherein in accordance with a determination that a number of media items in the plurality of media items is greater than a predetermined number, a number of media items in the at least a portion of the plurality of media items is equal to the predetermined number.

5. The method of any of claims 1-4, wherein each media item of the plurality of media items is associated with a relevance score with respect to the media search request, and wherein the relevance score of the at least a portion of the plurality of media items is the highest of the plurality of media items.

6. The method of any of claims 1-4, wherein each media item of the at least a portion of the plurality of media items is associated with a popularity rating, and wherein the at least a portion of the plurality of media items is arranged in the first user interface based on the popularity rating.

7. The method of claim 1, further comprising:

while displaying the at least a portion of the plurality of media items, detecting a second user input; and

in response to detecting the second user input, expanding the first user interface to occupy at least a majority of a display area of the display unit.

8. The method of claim 7, further comprising:

in response to detecting the second user input:

determining whether a number of media items in the plurality of media items is less than or equal to a predetermined number; and

in accordance with a determination that the number of media items in the plurality of media items is less than or equal to a predetermined number:

obtaining a second plurality of media items that at least partially satisfy the media search request, the second plurality of media items being different from the at least a portion of the plurality of media items; and

displaying the second plurality of media items on the display unit via the expanded first user interface.

9. The method of claim 8, further comprising:

determining whether the media search request includes more than one search parameter, wherein in accordance with a determination that the media search request includes more than one search parameter, the second plurality of media items are organized in the expanded first user interface in accordance with the more than one search parameter of the media search request.

10. The method of claim 8, further comprising:

in accordance with a determination that the number of media items in the plurality of media items is greater than the predetermined number:

displaying, via the expanded first user interface, at least a second portion of the plurality of media items, wherein the at least a second portion of the plurality of media items is different from the at least a portion of the plurality of media items.

11. The method of claim 10, wherein the at least a second portion of the plurality of media items comprises two or more media types, and wherein the at least a second portion of the plurality of media items is organized in the expanded first user interface according to each of the two or more media types.

12. The method of any of claims 9 to 11, further comprising:

detecting a third user input;

in response to detecting the third user input, scrolling the expanded first user interface;

determining whether the expanded first user interface has scrolled beyond a predetermined location on the expanded first user interface; and

in response to determining that the expanded first user interface has scrolled beyond the predetermined location on the expanded first user interface, displaying at least a third portion of the plurality of media items on the expanded first user interface, wherein the at least a third portion of the plurality of media items is organized on the expanded first user interface according to one or more media content providers associated with the at least a third portion of the plurality of media items.

13. A computer-readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display apparatus, the one or more programs comprising instructions for:

displaying content on a display unit;

while displaying the content, detecting a user input, wherein the user input is nonverbal;

in response to detecting the user input, sampling audio data, wherein the audio data comprises a user utterance representing a media search request, the media search request comprising a plurality of parameters;

obtaining a plurality of media items satisfying the media search request;

displaying, via a first user interface, at least a portion of the plurality of media items on the display unit, wherein a display area occupied by the displayed content is larger than a display area occupied by the first user interface;

in response to detecting a continuous contact motion in a first direction, obtaining a second plurality of media items that satisfy at least one parameter of the plurality of parameters; and

displaying the second plurality of media items on the display unit via a second user interface, wherein a display area occupied by the displayed content is smaller than a display area occupied by the second user interface.

14. The computer-readable storage medium of claim 13, wherein the content continues to be displayed on the display unit while the at least a portion of the plurality of media items is displayed, and wherein a display area occupied by the first user interface is smaller than a display area occupied by the content.

15. The computer readable storage medium of claim 13, the one or more programs further comprising instructions for:

determining whether a number of media items in the plurality of media items is less than or equal to a predetermined number, wherein the at least a portion of the plurality of media items includes the plurality of media items in accordance with the determination that the number of media items in the plurality of media items is less than or equal to the predetermined number.

16. The computer-readable storage medium of claim 15, wherein in accordance with a determination that a number of media items in the plurality of media items is greater than a predetermined number, a number of media items in the at least a portion of the plurality of media items is equal to the predetermined number.

17. The computer-readable storage medium of any of claims 13-16, wherein each media item of the plurality of media items is associated with a relevance score with respect to the media search request, and wherein the relevance score of the at least a portion of the plurality of media items is the highest of the plurality of media items.

18. The computer-readable storage medium of any of claims 13-16, wherein each media item of the at least a portion of the plurality of media items is associated with a popularity rating, and wherein the at least a portion of the plurality of media items is arranged in the first user interface based on the popularity rating.

19. The computer readable storage medium of claim 13, the one or more programs further comprising instructions for:

while displaying the at least a portion of the plurality of media items, detecting a second user input; and

in response to detecting the second user input, expanding the first user interface to occupy at least a majority of a display area of the display unit.

20. The computer readable storage medium of claim 19, the one or more programs further comprising instructions for:

in response to detecting the second user input:

determining whether a number of media items in the plurality of media items is less than or equal to a predetermined number; and

in accordance with a determination that the number of media items in the plurality of media items is less than or equal to a predetermined number:

obtaining a second plurality of media items that at least partially satisfy the media search request, the second plurality of media items being different from the at least a portion of the plurality of media items; and

displaying the second plurality of media items on the display unit via the expanded first user interface.

21. The computer readable storage medium of claim 20, the one or more programs further comprising instructions for:

determining whether the media search request includes more than one search parameter, wherein in accordance with a determination that the media search request includes more than one search parameter, the second plurality of media items are organized in the expanded first user interface in accordance with the more than one search parameter of the media search request.

22. The computer readable storage medium of claim 21, the one or more programs further comprising instructions for:

in accordance with a determination that the number of media items in the plurality of media items is greater than the predetermined number:

displaying, via the expanded first user interface, at least a second portion of the plurality of media items, wherein the at least a second portion of the plurality of media items is different from the at least a portion of the plurality of media items.

23. The computer-readable storage medium of claim 22, wherein the at least a second portion of the plurality of media items includes two or more media types, and wherein the at least a second portion of the plurality of media items is organized in the expanded first user interface according to each of the two or more media types.

24. The computer readable storage medium of any of claims 21-23, the one or more programs further comprising instructions for:

detecting a third user input;

in response to detecting the third user input, scrolling the expanded first user interface;

determining whether the expanded first user interface has scrolled beyond a predetermined location on the expanded first user interface; and

in response to determining that the expanded first user interface has scrolled beyond the predetermined location on the expanded first user interface, displaying at least a third portion of the plurality of media items on the expanded first user interface, wherein the at least a third portion of the plurality of media items is organized on the expanded first user interface according to one or more media content providers associated with the at least a third portion of the plurality of media items.

25. An electronic device, comprising:

one or more processors; and

memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:

displaying content on a display unit;

while displaying the content, detecting a user input, wherein the user input is nonverbal;

in response to detecting the user input, sampling audio data, wherein the audio data comprises a user utterance representing a media search request, the media search request comprising a plurality of parameters;

obtaining a plurality of media items satisfying the media search request;

displaying, via a first user interface, at least a portion of the plurality of media items on the display unit, wherein a display area occupied by the displayed content is larger than a display area occupied by the first user interface;

in response to detecting a continuous contact motion in a first direction, obtaining a second plurality of media items that satisfy at least one parameter of the plurality of parameters; and

displaying the second plurality of media items on the display unit via a second user interface, wherein a display area occupied by the displayed content is smaller than a display area occupied by the second user interface.

26. The electronic device of claim 25, wherein the content continues to be displayed on the display unit while the at least a portion of the plurality of media items is displayed, and wherein a display area occupied by the first user interface is smaller than a display area occupied by the content.

27. The electronic device of claim 25, the one or more programs further comprising instructions for:

determining whether a number of media items in the plurality of media items is less than or equal to a predetermined number, wherein the at least a portion of the plurality of media items includes the plurality of media items in accordance with the determination that the number of media items in the plurality of media items is less than or equal to the predetermined number.

28. The electronic device of claim 27, wherein, in accordance with a determination that a number of media items in the plurality of media items is greater than a predetermined number, a number of media items in the at least a portion of the plurality of media items is equal to the predetermined number.

29. The electronic device of any of claims 25-28, wherein each media item of the plurality of media items is associated with a relevance score with respect to the media search request, and wherein the relevance score of the at least a portion of the plurality of media items is the highest of the plurality of media items.

30. The electronic device of any of claims 25-28, wherein each media item of the at least a portion of the plurality of media items is associated with a popularity rating, and wherein the at least a portion of the plurality of media items is arranged in the first user interface based on the popularity rating.

31. The electronic device of claim 25, the one or more programs further comprising instructions for:

while displaying the at least a portion of the plurality of media items, detecting a second user input; and

in response to detecting the second user input, expanding the first user interface to occupy at least a majority of a display area of the display unit.

32. The electronic device of claim 31, the one or more programs further comprising instructions for:

in response to detecting the second user input:

determining whether a number of media items in the plurality of media items is less than or equal to a predetermined number; and

in accordance with a determination that the number of media items in the plurality of media items is less than or equal to a predetermined number:

obtaining a second plurality of media items that at least partially satisfy the media search request, the second plurality of media items being different from the at least a portion of the plurality of media items; and

displaying the second plurality of media items on the display unit via the expanded first user interface.

33. The electronic device of claim 32, the one or more programs further comprising instructions for:

determining whether the media search request includes more than one search parameter, wherein in accordance with a determination that the media search request includes more than one search parameter, the second plurality of media items are organized in the expanded first user interface in accordance with the more than one search parameter of the media search request.

34. The electronic device of claim 32, the one or more programs further comprising instructions for:

in accordance with a determination that the number of media items in the plurality of media items is greater than the predetermined number:

displaying, via the expanded first user interface, at least a second portion of the plurality of media items, wherein the at least a second portion of the plurality of media items is different from the at least a portion of the plurality of media items.

35. The electronic device of claim 34, wherein the at least a second portion of the plurality of media items includes two or more media types, and wherein the at least a second portion of the plurality of media items is organized in the expanded first user interface according to each of the two or more media types.

36. The electronic device of any of claims 33-35, the one or more programs further comprising instructions for:

detecting a third user input;

in response to detecting the third user input, scrolling the expanded first user interface;

determining whether the expanded first user interface has scrolled beyond a predetermined location on the expanded first user interface; and

in response to determining that the expanded first user interface has scrolled beyond the predetermined location on the expanded first user interface, displaying at least a third portion of the plurality of media items on the expanded first user interface, wherein the at least a third portion of the plurality of media items is organized on the expanded first user interface according to one or more media content providers associated with the at least a third portion of the plurality of media items.

37. An electronic device, comprising:

means for performing the method of any one of claims 1-4 and 7-11.

38. An electronic device, comprising:

one or more processors;

a memory; and

one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-4 and 7-11.

Technical Field

The present invention relates generally to intelligent automated assistants, and more particularly to intelligent automated assistants operating in a media environment.

Background

An intelligent automated assistant (or digital assistant) can provide an intuitive interface between a user and an electronic device. These assistants may allow users to interact with a device or system using natural language in spoken and/or textual form. For example, a user may access the services of an electronic device by providing spoken user input in natural language form to a virtual assistant associated with the electronic device. The virtual assistant can perform natural language processing on the spoken user input to infer user intent and operationalize the user intent into tasks. The tasks may then be performed by executing one or more functions of the electronic device, and in some examples, relevant output may be returned to the user in natural language form.

It is desirable to integrate digital assistants into media environments (e.g., televisions, television set-top boxes, cable boxes, gaming devices, streaming media devices, digital video recorders, etc.) to assist users with tasks related to media consumption. For example, a digital assistant may be used to help find desirable media content for consumption. However, user interaction with the digital assistant may include audio and visual output that can interfere with consumption of the media content. Thus, it can be challenging to integrate a digital assistant into a media environment in a way that provides sufficient assistance to the user while minimizing interference with the user's consumption of media content.

Disclosure of Invention

A system and process for operating a digital assistant in a media environment is disclosed. In some exemplary processes, user input may be detected while the content is displayed. The process may determine whether the user input corresponds to a first input type. In accordance with a determination that the user input corresponds to the first input type, a plurality of exemplary natural language requests may be displayed. The plurality of exemplary natural language requests may be contextually related to the displayed content.
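For illustration only, the following Swift sketch shows one way the contextually related exemplary natural language requests described above might be chosen from the type of content currently displayed. The enum cases, function name, and suggestion strings are hypothetical and are not part of the disclosed process.

```swift
// Sketch: choosing exemplary natural language requests that are contextually
// related to the displayed content. All names and strings are illustrative
// assumptions; the patent does not specify an implementation.
enum DisplayedContent {
    case playingMovie(title: String)
    case mainMenu
    case photoAlbum(name: String)
}

func exampleRequests(for content: DisplayedContent) -> [String] {
    switch content {
    case .playingMovie(let title):
        return ["Turn on subtitles",
                "Who stars in \(title)?",
                "Find more movies like this one"]
    case .mainMenu:
        return ["Show me popular comedies",
                "What's on live TV tonight?"]
    case .photoAlbum(let name):
        return ["Play a slideshow of \(name)",
                "Show photos from last summer"]
    }
}

// Displayed when a first-type user input (e.g., a short button press) is detected.
print(exampleRequests(for: .playingMovie(title: "Interstellar")))
```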

In some embodiments, in accordance with a determination that the user input does not correspond to the first input type, the process may determine whether the user input corresponds to the second input type. In accordance with a determination that the user input corresponds to the second input type, the audio data may be sampled. The process may determine whether the audio data contains a user request. In accordance with a determination that the audio data contains a user request, a task that at least partially satisfies the user request may be performed. In some examples, the task may include obtaining a result that at least partially satisfies the user request, and displaying a second user interface having a portion of the result. A portion of the content may continue to be displayed while the second user interface is displayed, and a display area of the second user interface may be smaller than a display area of the portion of the content.

In some implementations, the third user input can be detected while the second user interface is displayed. In response to detecting the third user input, display of the second user interface may be replaced with display of a third user interface having the portion of the result. The third user interface may occupy at least a majority of the display area of the display unit. Further, a second result that at least partially satisfies the user request may be obtained. The second result may be different from the result. The third user interface may include at least a portion of the second result.

In some implementations, the fourth user input can be detected while the third user interface is displayed. The fourth user input may indicate a direction. In response to detecting the fourth user input, the focus of the third user interface may be switched from the first item in the third user interface to the second item in the third user interface. The second item may be positioned in the indicated direction relative to the first item.
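As an illustrative sketch of the directional focus switching described above, the Swift snippet below moves focus between items arranged in a simple grid. The grid model, column count, and type names are assumptions made for this example only.

```swift
// Sketch: switching focus from one item to the item positioned in the
// indicated direction. Layout and names are illustrative assumptions.
enum Direction { case up, down, left, right }

struct GridFocus {
    let columns: Int
    let itemCount: Int
    var focusedIndex: Int = 0

    mutating func move(_ direction: Direction) {
        var next = focusedIndex
        switch direction {
        case .left:  next -= 1
        case .right: next += 1
        case .up:    next -= columns
        case .down:  next += columns
        }
        // Only switch focus if an item exists in the indicated direction.
        // (A fuller version would also prevent left/right moves from wrapping rows.)
        if (0..<itemCount).contains(next) {
            focusedIndex = next
        }
    }
}

var focus = GridFocus(columns: 4, itemCount: 10)
focus.move(.right)  // focus moves from item 0 to item 1
focus.move(.down)   // focus moves from item 1 to item 5
print(focus.focusedIndex)  // 5
```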

In some implementations, a fifth user input can be detected while the third user interface is displayed. In response to detecting the fifth user input, a search field may be displayed. Further, a virtual keyboard interface may be displayed, wherein input received via the virtual keyboard interface results in text entry in the search field. Further, in some embodiments, a selectable affordance may be caused to appear on a display of a second electronic device, wherein selection of the affordance enables text input to be received by the electronic device via a keyboard of the second electronic device.
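The Swift sketch below illustrates, under assumed type names, a search field that accepts text either from an on-screen virtual keyboard or from a keyboard on a second electronic device once an affordance on that device is selected. It is a simplification for illustration, not an API defined by the patent.

```swift
// Sketch: routing text entry into a search field from either input source.
// All type names are illustrative assumptions.
enum TextEntrySource {
    case virtualKeyboard
    case secondDeviceKeyboard(deviceName: String)
}

final class SearchField {
    private(set) var text = ""
    private(set) var activeSource: TextEntrySource = .virtualKeyboard

    // Called when the user selects the affordance on the second device.
    func activate(source: TextEntrySource) {
        activeSource = source
    }

    // Called as text fragments arrive from whichever source is active.
    func append(_ fragment: String) {
        text += fragment
    }
}

let field = SearchField()
field.activate(source: .secondDeviceKeyboard(deviceName: "Living Room iPhone"))
field.append("star trek")
print(field.text)  // "star trek"
```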

In some implementations, the sixth user input can be detected while the third user interface is displayed. In response to detecting the sixth user input, second audio data comprising a second user request may be sampled. The process may determine whether the second user request is a request for refining the results of the user request. In accordance with a determination that the second user request is a request for refining results of the user request, a subset of the results may be displayed via a third user interface. In accordance with a determination that the second user request is not a request for refining the results of the user request, a third result that at least partially satisfies the second user request may be obtained. A portion of the third result may be displayed via a third user interface.
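A minimal Swift sketch of the refine-or-new-search decision described above follows. The keyword heuristic and the filtering criterion are purely illustrative assumptions; in the disclosed process this determination would come from natural language processing of the second user request.

```swift
// Sketch: decide whether a follow-up utterance refines the previous media
// search (display a subset of the results) or starts a new search.
struct MediaItem { let title: String; let year: Int }

func handleFollowUp(utterance: String,
                    previousResults: [MediaItem]) -> [MediaItem] {
    // Illustrative cue words only; not the patent's method.
    let refinementCues = ["only", "just", "from those", "narrow"]
    let isRefinement = refinementCues.contains { utterance.lowercased().contains($0) }

    if isRefinement {
        // Display a subset of the existing results (here: newer titles).
        return previousResults.filter { $0.year >= 2010 }
    } else {
        // Otherwise a fresh search would be run; placeholder result here.
        return [MediaItem(title: "New search result", year: 2016)]
    }
}

let previous = [MediaItem(title: "Old Film", year: 1994),
                MediaItem(title: "Recent Film", year: 2014)]
print(handleFollowUp(utterance: "only the ones from this decade",
                     previousResults: previous).map { $0.title })
```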

In some embodiments, the sampled audio data may include a user utterance, and a user intent corresponding to the user utterance may be determined. The process may determine whether the user intent includes a request to adjust a state or setting of the application. In accordance with a determination that the user intent includes a request to adjust a state or setting of the application, the state or setting of the application may be adjusted to meet the user intent.
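For the setting-adjustment branch just described, the following Swift sketch applies a hypothetical intent (for example, turning on subtitles) to a playback state. The types and cases are illustrative assumptions rather than the patent's own interfaces.

```swift
// Sketch: satisfying a user intent that asks to adjust a state or setting
// of the media application. Names are illustrative assumptions.
enum SettingIntent {
    case setSubtitles(enabled: Bool)
    case setVolume(level: Int)
}

struct PlaybackState {
    var subtitlesEnabled = false
    var volume = 50
}

func apply(_ intent: SettingIntent, to state: inout PlaybackState) {
    switch intent {
    case .setSubtitles(let enabled): state.subtitlesEnabled = enabled
    case .setVolume(let level):      state.volume = max(0, min(100, level))
    }
}

var state = PlaybackState()
apply(.setSubtitles(enabled: true), to: &state)  // e.g., "Turn on subtitles"
print(state.subtitlesEnabled)  // true
```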

In some embodiments, in accordance with a determination that the user intent does not include a request to adjust a state or setting of an application on the electronic device, the process may determine whether the user intent is one of a plurality of predetermined request types. In accordance with a determination that the user intent is one of a plurality of predetermined request types, a plain text result that at least partially satisfies the user intent can be displayed.

In some embodiments, the process may determine whether the displayed content includes media content in accordance with a determination that the user intent is not one of a plurality of predetermined request types. In accordance with a determination that the displayed content includes media content, the process may further determine whether the media content may be paused. In accordance with a determination that the media content can be paused, the media content is paused and results that at least partially satisfy the user's intent can be displayed via a third user interface. The third user interface may occupy at least a majority of the display area of the display unit. In accordance with a determination that the media content may not be paused, the results may be displayed via the second user interface while the media content is displayed. The display area occupied by the second user interface may be smaller than the display area occupied by the media content. Further, in some embodiments, in accordance with a determination that the displayed content does not include media content, the results may be displayed via a third user interface.

Drawings

FIG. 1 illustrates a block diagram of a system and environment for implementing a digital assistant in accordance with various examples.

Fig. 2 illustrates a block diagram of a media system, according to various examples.

Fig. 3 illustrates a block diagram of a user device, according to various examples.

Fig. 4A illustrates a block diagram of a digital assistant system or a server portion thereof, according to various examples.

Fig. 4B illustrates functionality of the digital assistant illustrated in fig. 4A according to various examples.

Fig. 4C illustrates a portion of an ontology according to various examples.

Fig. 5A-5I illustrate processes for operating a digital assistant for a media system, according to various examples.

Fig. 6A-6Q illustrate screenshots displayed by a media device on a display unit at various stages of the process illustrated in fig. 5A-5I, according to various examples. Fig. 6O is intentionally omitted to avoid any confusion between the capital letter O and the number 0 (zero).

Fig. 7A-7C illustrate processes for operating a digital assistant for a media system, according to various examples.

Fig. 8A-8W illustrate screenshots displayed by a media device on a display unit at various stages of the process illustrated in fig. 7A-7C, according to various examples. Fig. 8O is intentionally omitted to avoid any confusion between the capital letter O and the number 0 (zero).

Fig. 9 illustrates a process for operating a digital assistant of a media system, according to various examples.

Fig. 10 illustrates a functional block diagram of an electronic device configured to operate a digital assistant of a media system, according to various examples.

Fig. 11 illustrates a functional block diagram of an electronic device configured to operate a digital assistant of a media system, in accordance with various examples.

Fig. 12 is a block diagram illustrating a system and environment for implementing a digital assistant in accordance with various examples.

Fig. 13 is a block diagram illustrating a media system according to various examples.

Fig. 14 is a block diagram illustrating a user device, according to various examples.

Fig. 15A is a block diagram illustrating a digital assistant system or server portion thereof according to various examples.

Fig. 15B illustrates functionality of the digital assistant illustrated in fig. 15A according to various examples.

Fig. 15C illustrates a portion of an ontology according to various examples.

Fig. 16A-16E illustrate processes for operating a digital assistant for a media system, according to various examples.

Fig. 17A-17K illustrate screenshots displayed by a media device on a display unit at various stages of the process illustrated in fig. 16A-16E, according to various examples.

Fig. 18 illustrates a functional block diagram of an electronic device configured to operate a digital assistant of a media system, according to various examples.

FIG. 19 illustrates an exemplary system for controlling television user interactions using a virtual assistant.

Fig. 20 illustrates a block diagram of an example user device, in accordance with various examples.

FIG. 21 shows a block diagram of an exemplary media control device in a system for controlling television user interaction.

Fig. 22A-22E illustrate exemplary voice input interfaces on video content.

FIG. 23 illustrates an exemplary media content interface on video content.

Fig. 24A-24B illustrate exemplary media detail interfaces on video content.

Fig. 25A-25B illustrate exemplary media transition interfaces.

Fig. 26A to 26B show an exemplary voice input interface on the menu content.

FIG. 27 shows an exemplary virtual assistant results interface on menu content.

FIG. 28 illustrates an exemplary process for using a virtual assistant to control television interactions and using a different interface to display associated information.

FIG. 29 illustrates exemplary television media content on a mobile user device.

Fig. 30 illustrates an exemplary television control using a virtual assistant.

Fig. 31 shows exemplary pictures and video content on a mobile user device.

FIG. 32 illustrates an exemplary media display control using a virtual assistant.

FIG. 33 illustrates an exemplary virtual assistant interaction with results on a mobile user device and a media display device.

FIG. 34 illustrates an exemplary virtual assistant interaction with media results on a media display device and a mobile user device.

FIG. 35 illustrates exemplary proximity-based media device control.

FIG. 36 shows an exemplary process for controlling television interaction using a virtual assistant and a plurality of user devices.

FIG. 37 illustrates an exemplary voice input interface with a virtual assistant query regarding background video content.

FIG. 38 shows an exemplary informational virtual assistant response over video content.

FIG. 39 illustrates an exemplary voice input interface with a virtual assistant query for media content associated with background video content.

FIG. 40 illustrates an exemplary virtual assistant response interface with selectable media content.

Fig. 41A to 41B show exemplary pages of a program menu.

FIG. 42 illustrates an exemplary media menu divided into a plurality of categories.

FIG. 43 illustrates an exemplary process for controlling television interaction using media content viewing history and media content shown on a display.

FIG. 44 illustrates an exemplary interface with virtual assistant query suggestions based on background video content.

FIG. 45 illustrates an exemplary interface for confirming selection of a suggested query.

FIGS. 46A-46B illustrate an exemplary virtual assistant answer interface based on a selected query.

FIG. 47 illustrates a media content notification and an exemplary interface with virtual assistant query suggestions based on the notification.

FIG. 48 illustrates a mobile user device with exemplary picture and video content that can be played on a media control device.

FIG. 49 illustrates an exemplary mobile user device interface with virtual assistant query suggestions based on playable user device content and based on video content shown on a separate display.

FIG. 50 illustrates an exemplary interface with virtual assistant query suggestions based on playable content from a separate user device.

FIG. 51 illustrates an exemplary process for suggesting virtual assistant interactions for controlling media content.

Fig. 52 illustrates a functional block diagram of an electronic device configured to use a virtual assistant to control television interactions and to display associated information using different interfaces, in accordance with various examples.

Fig. 53 illustrates a functional block diagram of an electronic device configured to control television interaction using a virtual assistant and a plurality of user devices, in accordance with various examples.

Fig. 54 illustrates a functional block diagram of an electronic device configured to control television interaction using media content and media content viewing history shown on a display, in accordance with various examples.

Fig. 55 illustrates a functional block diagram of an electronic device configured to suggest virtual assistant interactions for controlling media content, in accordance with various examples.

FIG. 56 illustrates an exemplary system for providing real-time updates to voice control and virtual assistant knowledge for media playback.

Fig. 57 illustrates a block diagram of an exemplary user device, in accordance with various examples.

FIG. 58 illustrates a block diagram of an exemplary media control device in a system for providing voice control of media playback.

Fig. 59 illustrates an exemplary process for voice control of media playback, according to various examples.

Fig. 60 illustrates an exemplary data feed associating an event in a media stream with a particular time in the media stream.

FIG. 61 shows an exemplary virtual assistant query response prompting video playback based on an event in a media stream.

FIG. 62 illustrates exemplary events that occur before and after a playback position that can be used to interpret a user query.

FIG. 63 illustrates an exemplary awards ceremony data feed that associates an event in a media stream with a particular time in the media stream.

Fig. 64 illustrates an exemplary television program data feed associating an event in a media stream with a particular time in the media stream.

Fig. 65 illustrates exemplary closed caption text associated with a particular time in a video that may be used to respond to a user query.

FIG. 66A illustrates a television display with exemplary video content that can be used to interpret a user query.

FIG. 66B illustrates a mobile user device with exemplary image and text content that can be used to interpret a user query.

FIG. 67 shows an exemplary process for integrating information into digital assistant knowledge and responding to user requests.

Fig. 68 illustrates a functional block diagram of an electronic device configured to provide voice control of media playback and real-time updating of virtual assistant knowledge, according to various examples.

Fig. 69 illustrates a functional block diagram of an electronic device configured to integrate information into digital assistant knowledge and respond to user requests, according to various examples.

Detailed Description

In the following description of the examples, reference is made to the accompanying drawings in which are shown, by way of illustration, specific examples that may be implemented. It is to be understood that other examples may be used and structural changes may be made without departing from the scope of the various examples.

The present invention relates to a system and process for operating a digital assistant in a media environment. In one exemplary process, user input may be detected while content is displayed. The process may determine whether the user input corresponds to a first input type. In accordance with a determination that the user input corresponds to the first input type, a plurality of exemplary natural language requests may be displayed. The plurality of exemplary natural language requests may be contextually related to the displayed content. Such contextually related exemplary natural language requests can conveniently inform the user of the capabilities of the digital assistant that are most relevant to the user's current usage conditions on the media device. This may encourage the user to use the digital assistant service and may also improve the user's interactive experience with the digital assistant.

In some embodiments, in accordance with a determination that the user input does not correspond to the first input type, the process may determine whether the user input corresponds to the second input type. In accordance with a determination that the user input corresponds to the second input type, the audio data may be sampled. The process may determine whether the audio data contains a user request. In accordance with a determination that the audio data contains a user request, a task that at least partially satisfies the user request may be performed.

In some embodiments, the task performed may depend on the nature of the user request and on the content displayed when the user input of the second input type is detected. If the user request is a request to adjust a state or setting of an application on the electronic device (e.g., to turn on subtitles for the displayed media content), the task may include adjusting the state or setting of the application. If the user request is one of a plurality of predetermined request types associated with plain text output (e.g., a request for the current time), the task may include displaying text that satisfies the user request. If the displayed content includes media content and the user request requires results to be obtained and displayed, the process may determine whether the media content can be paused. If it is determined that the media content can be paused, the media content is paused and the results satisfying the user request may be displayed on an expanded user interface (e.g., the third user interface 626 shown in FIG. 6H). If it is determined that the media content cannot be paused, the results satisfying the user request may be displayed on a narrowed user interface (e.g., the second user interface 618 shown in FIG. 6G) while the media content continues to be displayed. The display area of the second user interface may be smaller than the display area of the media content. Further, if the displayed content does not include media content, the results satisfying the user request may be displayed on the expanded user interface. By adjusting the output format according to the type of content displayed and the type of user request, the digital assistant can intelligently balance providing comprehensive assistance against minimizing interference with the user's consumption of media content. This may improve the user experience.
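The decision flow described in this paragraph can be summarized as a small dispatch function. The Swift sketch below uses assumed type and case names; it mirrors the described behavior (adjust a setting, show plain text, or choose between an expanded interface and a small overlay depending on whether the displayed media content can be paused) but is not the patent's implementation.

```swift
// Sketch: choosing an output format based on the request type and on the
// content currently displayed. All names are illustrative assumptions.
enum UserRequest {
    case adjustSetting            // e.g., "turn on subtitles"
    case plainTextAnswer          // e.g., "what time is it?"
    case mediaSearch              // results need to be retrieved and shown
}

enum DisplayedContentKind {
    case media(pausable: Bool)
    case nonMedia                 // e.g., a menu
}

enum AssistantResponse {
    case applySetting
    case showText
    case expandedInterface        // occupies most of the display (cf. third user interface)
    case overlayInterface         // small overlay while media keeps playing (cf. second user interface)
}

func respond(to request: UserRequest,
             over content: DisplayedContentKind) -> AssistantResponse {
    switch request {
    case .adjustSetting:   return .applySetting
    case .plainTextAnswer: return .showText
    case .mediaSearch:
        switch content {
        case .media(let pausable):
            return pausable ? .expandedInterface : .overlayInterface
        case .nonMedia:
            return .expandedInterface
        }
    }
}

print(respond(to: .mediaSearch, over: .media(pausable: false)))  // overlayInterface
```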

1. System and environment

Fig. 1 illustrates an exemplary system 100 for operating a digital assistant, according to various examples. The terms "digital assistant," "virtual assistant," "intelligent automated assistant," or "automatic digital assistant" may refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent and performs actions based on the inferred user intent. For example, to act in accordance with the inferred user intent, the system may perform one or more of the following: identifying a task flow with steps and parameters designed to achieve the inferred user intent; inputting specific requirements from the inferred user intent into the task flow; executing the task flow by calling programs, methods, services, application programming interfaces (APIs), and the like; and generating an output response to the user in an audible (e.g., speech) and/or visual form.
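As a loose illustration of the steps just listed (identify a task flow, fill in its parameters, execute it, and produce a response), the following Swift sketch maps a hypothetical inferred intent onto a registered task flow. Every name here is an assumption made for illustration; the patent does not define these types.

```swift
// Sketch: executing a task flow for an inferred user intent and producing
// a natural-language response. All names are illustrative assumptions.
struct InferredIntent {
    let action: String                  // e.g., "searchMovies"
    let parameters: [String: String]    // e.g., ["actor": "Reese Witherspoon"]
}

struct TaskFlow {
    let name: String
    let run: ([String: String]) -> String
}

// A registry of available task flows, keyed by the action they satisfy.
let flows: [String: TaskFlow] = [
    "searchMovies": TaskFlow(name: "searchMovies", run: { params in
        "Here are movies starring \(params["actor"] ?? "that actor")."
    })
]

func handle(_ intent: InferredIntent) -> String {
    guard let flow = flows[intent.action] else {
        return "Sorry, I can't help with that yet."
    }
    return flow.run(intent.parameters)
}

print(handle(InferredIntent(action: "searchMovies",
                            parameters: ["actor": "Reese Witherspoon"])))
```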

In particular, the digital assistant may be capable of accepting user requests at least partially in the form of natural language commands, requests, statements, narratives, and/or inquiries. Typically, the user request may seek either an informational answer from the digital assistant or performance of a task by the digital assistant. A satisfactory response to a user request may be to provide the requested informational answer, to perform the requested task, or a combination of the two. For example, a user may ask the digital assistant a question, such as "What time is it in Paris right now?" The digital assistant may retrieve the requested information and answer, "It is currently 4:00 PM in Paris." The user may also request the performance of a task, for example, "Find me movies starring Reese Witherspoon." In response, the digital assistant can execute the requested search query and display relevant movie titles for the user to select from. During the performance of a requested task, the digital assistant can sometimes interact with the user in a continuous dialogue involving multiple exchanges of information over an extended period of time. There are many other ways of interacting with a digital assistant to request information or the performance of various tasks. In addition to providing textual responses and taking programmed actions, the digital assistant may also provide responses in other visual or audio forms, such as speech, alerts, music, images, videos, animations, and the like. Further, as discussed herein, an exemplary digital assistant can control playback of media content (e.g., on a television set-top box) and cause the media content or other information to be displayed on a display unit (e.g., a television).

As shown in fig. 1, in some examples, the digital assistant may be implemented according to a client-server model. The digital assistant may include a client-side portion 102 (hereinafter "DA client 102") executing on a media device 104, and a server-side portion 106 (hereinafter "DA server 106") executing on a server system 108. Further, in some examples, the client-side portion may also execute on the user device 122. The DA client 102 may communicate with the DA server 106 over one or more networks 110. The DA client 102 may provide client-side functionality, such as user-oriented input and output processing, as well as communication with the DA server 106. DA server 106 may provide server-side functionality for any number of DA clients 102 that each reside on a respective device (e.g., media device 104 and user device 122).
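The client-server split described above implies some request payload travelling from the DA client to the DA server. The Swift sketch below shows one hypothetical shape for such a payload; the field names and the JSON encoding are assumptions for illustration, since the patent does not define a wire format.

```swift
// Sketch: a hypothetical request/response pair exchanged between a DA client
// on the media device and a DA server. All field names are assumptions.
import Foundation

struct AssistantRequest: Codable {
    let utterance: String           // transcription of (or reference to) the sampled audio
    let deviceIdentifier: String
    let displayedContent: String?   // context: what the media device is showing
}

struct AssistantServerReply: Codable {
    let spokenText: String          // response the server might send back
    let mediaItemIdentifiers: [String]
}

let request = AssistantRequest(utterance: "Find romantic comedies",
                               deviceIdentifier: "media-device-104",
                               displayedContent: "main menu")

// Encode the request as it might be sent to the DA server over the network.
if let data = try? JSONEncoder().encode(request) {
    print(String(data: data, encoding: .utf8) ?? "")
}
```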

Media device 104 may be any suitable electronic device configured to manage and control media content. For example, media device 104 may comprise a television set-top box, such as a cable box device, a satellite box device, a video player device, a video streaming device, a digital video recorder, a gaming system, a DVD player, a Blu-ray Disc™ player, a combination of such devices, and the like. As shown in fig. 1, the media device 104 may be part of a media system 128. In addition to the media device 104, the media system 128 may include a remote control 124 and a display unit 126. Media device 104 may display media content on display unit 126. The display unit 126 may be any type of display, such as a television display, monitor, projector, and the like. In some examples, the media device 104 may be connected to an audio system (e.g., an audio receiver) and speakers (not shown) that may be integrated with or separate from the display unit 126. In other examples, the display unit 126 and the media device 104 may be incorporated together in a single device, such as a smart television with advanced processing capabilities and network connection capabilities. In such examples, the functionality of the media device 104 may be performed as an application on the combined device.

In some examples, media device 104 may function as a media control center for multiple types and sources of media content. For example, the media device 104 may facilitate user access to live television (e.g., wireless television, satellite television, or cable television). Thus, the media device 104 may include a cable tuner or a satellite tuner, among others. In some examples, media device 104 may also record the television program for later time-shifted viewing. In other examples, media device 104 may provide access to one or more streaming media services, such as access to cable-delivered video-on-demand programming, video, and music, and internet-delivered television programming, video, and music (e.g., from various free, paid, and subscription streaming services). In other examples, the media device 104 may facilitate playback or display of media content from any other source, such as displaying photos from a mobile user device, playing videos from a coupled storage device, playing music from a coupled music player, and so forth. The media device 104 may also include various other combinations of the media control features discussed herein as desired. The media device 104 is described in detail below with reference to fig. 2.

The user device 122 may be any personal electronic device, such as a mobile phone (e.g., a smartphone), a tablet, a portable media player, a desktop computer, a laptop computer, a PDA, a wearable electronic device (e.g., digital glasses, a wristband, a watch, a brooch, an armband, etc.), and so forth. The user device 122 is described in detail below with reference to fig. 3.

In some examples, a user may interact with media device 104 through user device 122, remote control 124, or an interface element (e.g., a button, microphone, camera, joystick, etc.) integrated with media device 104. For example, voice input including a media-related query or command for a digital assistant may be received at user device 122 and/or remote control 124, and may be used to cause a media-related task to be performed on media device 104. Likewise, haptic commands for controlling media on media device 104 may be received at user device 122 and/or remote control 124 (as well as other devices not shown). Accordingly, various functions of the media device 104 may be controlled in various ways, giving the user a variety of options for controlling media content from multiple devices.

Examples of one or more communication networks 110 may include a Local Area Network (LAN) and a Wide Area Network (WAN), such as the internet. The one or more communication networks 110 may be implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FireWire, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wi-Fi, Voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol.

DA server 106 may include a client-facing input/output (I/O) interface 112, one or more processing modules 114, data and models 116, and an I/O interface 118 to external services. The client-facing I/O interface 112 may facilitate client-facing input and output processing for the DA server 106. The one or more processing modules 114 may utilize the data and models 116 to process speech input and determine user intent based on natural language input. Further, the one or more processing modules 114 may perform tasks based on the inferred user intent. In some examples, DA server 106 may communicate with external services 120 (such as telephone services, calendar services, information services, messaging services, navigation services, television programming services, streaming media services, media search services, etc.) over one or more networks 110 to complete tasks or obtain information. The I/O interface 118 to external services may facilitate such communication.

The server system 108 may be implemented on one or more standalone data processing devices or a distributed network of computers. In some examples, the server system 108 may also employ various virtual devices and/or services of third-party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of the server system 108.

While the digital assistant shown in fig. 1 may include both a client-side portion (e.g., DA client 102) and a server-side portion (e.g., DA server 106), in some examples, the functionality of the digital assistant may be implemented as a standalone application installed on a user device or a media device. Moreover, the division of functionality between the client portion and the server portion of the digital assistant may vary in different implementations. For example, in some examples, the DA client executing on the user device 122 or the media device 104 may be a thin client that provides only user-oriented input and output processing functions and delegates all other functions of the digital assistant to a backend server.

2. Media system

Fig. 2 illustrates a block diagram of a media system 128, according to various examples. The media system 128 may include a media device 104 communicatively coupled to a display unit 126, a remote control 124, and speakers 268. Media device 104 may receive user input via remote control 124. Media content from the media device 104 may be displayed on the display unit 126.

In this example, as shown in fig. 2, media device 104 may include a memory interface 202, one or more processors 204, and a peripheral interface 206. The various components in the media device 104 may be coupled together by one or more communication buses or signal lines. Media device 104 may also include various subsystems and peripherals coupled to peripheral interface 206. The subsystems and peripheral devices may gather information and/or facilitate various functions of the media device 104.

For example, media device 104 may include a communication subsystem 224. Communication functions can be facilitated by one or more wired and/or wireless communication subsystems 224, which can include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters.

In some examples, the media device 104 may also include an I/O subsystem 240 coupled to the peripheral device interface 206. I/O subsystem 240 may include an audio/video output controller 270. Audio/video output controller 270 may be coupled to display unit 126 and speaker 268, or may be capable of otherwise providing audio and video output (e.g., via audio/video ports, wireless transmission, etc.). The I/O subsystem 240 may also include a remote controller 242. The remote controller 242 is communicatively coupled to the remote control 124 (e.g., via a wired connection, bluetooth, Wi-Fi, etc.).

The remote control 124 may include a microphone 272 for capturing audio data (e.g., voice input from a user), buttons 274 for capturing tactile input, and a transceiver 276 for facilitating communication with the media device 104 via the remote controller 242. Further, the remote control 124 may include a touch-sensitive surface 278, sensors, or groups of sensors that accept input from a user based on tactile sensation and/or tactile contact. The touch-sensitive surface 278 and the remote controller 242 may detect contact (and any movement or interruption of the contact) on the touch-sensitive surface 278 and convert the detected contact (e.g., a gesture, a contact action, etc.) into interaction with a user interface object (e.g., one or more soft keys, icons, web pages, or images) displayed on the display unit 126. In some examples, the remote control 124 may also include other input mechanisms, such as a keyboard, joystick, or the like. In some examples, the remote control 124 may also include output mechanisms, such as lights, a display, a speaker, and the like. Input received at the remote control 124 (e.g., user speech, button presses, contact actions, etc.) may be communicated to the media device 104 via the remote controller 242. The I/O subsystem 240 may also include one or more other input controllers 244. The one or more other input controllers 244 can be coupled to other input/control devices 248, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointing devices (such as a stylus).
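As an illustrative sketch of how contact on the touch-sensitive surface 278 might be converted into a gesture before being communicated to the media device, the Swift snippet below classifies a contact movement as a tap or a swipe. The threshold value and all type names are assumptions made for this example.

```swift
// Sketch: turning raw contact on the remote's touch-sensitive surface into a
// gesture the media device can act on. Thresholds and names are assumptions.
struct Contact { let x: Double; let y: Double }   // normalized 0...1 coordinates

enum Gesture {
    case tap
    case swipe(dx: Double, dy: Double)
}

func recognize(from start: Contact, to end: Contact) -> Gesture {
    let dx = end.x - start.x
    let dy = end.y - start.y
    // Small movement is treated as a tap; larger continuous motion as a swipe.
    if abs(dx) < 0.05 && abs(dy) < 0.05 {
        return .tap
    }
    return .swipe(dx: dx, dy: dy)
}

// The recognized gesture would then be transmitted to the media device and
// mapped onto the user interface object currently shown on the display unit.
print(recognize(from: Contact(x: 0.2, y: 0.5), to: Contact(x: 0.8, y: 0.5)))
```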

In some examples, the media device 104 can also include a memory interface 202 coupled to the memory 250. Memory 250 may include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device; portable computer diskette (magnetic); random Access Memory (RAM) (magnetic); read Only Memory (ROM) (magnetic); erasable programmable read-only memory (EPROM) (magnetic); portable optical disks such as CD, CD-R, CD-RW, DVD-R, or DVD-RW; or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like. In some examples, the non-transitory computer-readable storage medium of memory 250 may be used to store instructions (e.g., for performing some or all of the various processes described herein) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, the instructions (e.g., for performing some or all of the various processes described herein) may be stored on a non-transitory computer-readable storage medium of the server system 108, or may be divided between the non-transitory computer-readable storage medium of the memory 250 and the non-transitory computer-readable storage medium of the server system 108. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

In some examples, memory 250 may store an operating system 252, a communication module 254, a Graphical User Interface (GUI) module 256, an on-device media module 258, a device external media module 260, and an application module 262. Operating system 252 may include instructions for handling basic system services and for performing hardware-related tasks. Communication module 254 may facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. Graphical user interface module 256 may facilitate graphical user interface processing. The on-device media module 258 may facilitate storage and playback of media content stored locally on the media device 104. The device external media module 260 may facilitate streaming playback or download of media content obtained from an external source (e.g., on a remote server, on the user device 122, etc.). In addition, the device external media module 260 may facilitate reception of broadcast and cable content (e.g., channel tuning). The application module 262 may facilitate various functions of media-related applications, such as web browsing, media processing, gaming, and/or other processes and functions.

As described herein, the memory 250 may also store client-side digital assistant instructions (e.g., in the digital assistant client module 264) and various user data 266 (e.g., user-specific vocabulary data, preference data, and/or other data such as a user's media search history, media viewing lists, recently viewed lists, favorite media items, etc.), for example, to provide client-side functionality of the digital assistant. User data 266 may also be used to perform speech recognition to support a digital assistant or for any other application.

In various examples, the digital assistant client module 264 may be capable of accepting voice input (e.g., speech input), text input, touch input, and/or gesture input through various user interfaces of the media device 104 (e.g., the I/O subsystem 240, etc.). The digital assistant client module 264 may also be capable of providing output in audio (e.g., speech output), visual, and/or tactile forms. For example, the output may be provided as voice, sound, alerts, text messages, menus, graphics, video, animation, vibration, and/or a combination of two or more of the above. During operation, the digital assistant client module 264 may use the communication subsystem 224 to communicate with a digital assistant server (e.g., DA server 106).

In some examples, the digital assistant client module 264 may utilize various subsystems and peripherals to collect additional information related to the media device 104 from the surroundings of the media device 104 to establish a context associated with the user, the current user interaction, and/or the current user input. Such context may also include information from other devices, such as information from user device 122. In some examples, the digital assistant client module 264 can provide the contextual information or a subset thereof to the digital assistant server along with the user input to help infer the user's intent. The digital assistant can also use the contextual information to determine how to prepare and deliver the output to the user. The contextual information may also be used by the media device 104 or the server system 108 to support accurate speech recognition.

In some examples, the contextual information accompanying the user input may include sensor information, such as lighting, ambient noise, ambient temperature, distance to another object, and the like. The contextual information may also include information associated with the physical state of the media device 104 (e.g., device location, device temperature, power level, etc.) or the software state of the media device 104 (e.g., running processes, installed applications, past and current network activity, background services, error logs, resource usage, etc.). The contextual information may also include information received from the user (e.g., speech input), information requested by the user, and information presented to the user (e.g., information currently or previously displayed by the media device). The contextual information may also include information associated with the state of connected devices or other devices associated with the user (e.g., content displayed on the user device 122, playable content on the user device 122, etc.). Any of these types of contextual information may be provided to the DA server 106 (or used on the media device 104 itself) as contextual information associated with the user input.
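
By way of illustration only, the kinds of contextual information enumerated above might be collected into a single structured payload that accompanies a user utterance. The following Python sketch is a minimal, hypothetical example; the field names and the gather_context helper are illustrative assumptions rather than part of the described system.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class DeviceContext:
    """Hypothetical snapshot of device state sent alongside a user utterance."""
    ambient_noise_db: Optional[float] = None        # sensor information
    power_level: Optional[float] = None             # physical state of the device
    running_apps: list = field(default_factory=list)  # software state
    displayed_content: Optional[str] = None         # information presented to the user
    connected_device_content: Optional[str] = None  # state of other associated devices

def gather_context(sensors: dict, system: dict, display: dict, companion: dict) -> dict:
    # Collect each category of context described above into one payload.
    ctx = DeviceContext(
        ambient_noise_db=sensors.get("noise_db"),
        power_level=system.get("battery"),
        running_apps=system.get("processes", []),
        displayed_content=display.get("now_showing"),
        connected_device_content=companion.get("now_playing"),
    )
    return asdict(ctx)

# Example: context accompanying the utterance "play the next episode".
print(gather_context(
    sensors={"noise_db": 42.0},
    system={"battery": 0.8, "processes": ["media_player"]},
    display={"now_showing": "Mad Men, S1E3"},
    companion={"now_playing": None},
))
```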

In some examples, digital assistant client module 264 may selectively provide information (e.g., user data 266) stored on media device 104 in response to a request from DA server 106. Additionally or alternatively, this information may be used on the media device 104 itself to perform speech recognition and/or digital assistant functions. The digital assistant client module 264 may also elicit additional input from the user via a natural language dialog or other user interface upon request by the DA server 106. The digital assistant client module 264 may transmit additional input to the DA server 106 to assist the DA server 106 in intent inference and/or to satisfy the user intent expressed in the user request.

In various examples, memory 250 may include additional instructions or fewer instructions. Further, various functions of the media device 104 may be implemented in hardware and/or firmware, including in one or more signal processing circuits and/or application specific integrated circuits.

3. User device

Fig. 3 illustrates a block diagram of an exemplary user device 122, according to various examples. As shown, the user device 122 may include a memory interface 302, one or more processors 304, and a peripheral interface 306. The various components in user device 122 may be coupled together by one or more communication buses or signal lines. User device 122 may also include various sensors, subsystems, and peripherals coupled to peripheral interface 306. The sensors, subsystems, and peripherals may gather information and/or facilitate various functions of user device 122.

For example, the user device 122 may include a motion sensor 310, a light sensor 312, and a proximity sensor 314 coupled to the peripheral interface 306 to facilitate orientation, illumination, and proximity sensing functions. One or more other sensors 316, such as a positioning system (e.g., GPS receiver), temperature sensor, biometric sensor, gyroscope, compass, accelerometer, etc., may also be connected to the peripheral interface 306 to facilitate related functions.

In some examples, the camera subsystem 320 and optical sensor 322 may be used to facilitate camera functions, such as taking pictures and recording video clips. Communication functions can be facilitated by one or more wired and/or wireless communication subsystems 324, which can include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters. An audio subsystem 326 may be coupled to a speaker 328 and a microphone 330 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions.

In some examples, the user device 122 may also include an I/O subsystem 340 coupled to the peripheral interface 306. The I/O subsystem 340 may include a touch screen controller 342 and/or one or more other input controllers 344. The touch screen controller 342 can be coupled to a touch screen 346. The touch screen 346 and touch screen controller 342 can, for example, detect contact and movement or breaks thereof using any of a number of touch sensitivity technologies, such as capacitive, resistive, infrared, surface acoustic wave, proximity sensor arrays, and the like. One or more other input controllers 344 may be coupled to other input/control devices 348, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointer devices (such as a stylus).

In some examples, the user device 122 can also include a memory interface 302 coupled to the memory 350. Memory 350 may include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device; portable computer diskette (magnetic); random Access Memory (RAM) (magnetic); read Only Memory (ROM) (magnetic); erasable programmable read-only memory (EPROM) (magnetic); portable optical disks such as CD, CD-R, CD-RW, DVD-R, or DVD-RW; or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like. In some examples, the non-transitory computer-readable storage medium of memory 350 may be used to store instructions (e.g., for performing some or all of the various processes described herein) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, the instructions (e.g., for performing some or all of the various processes described herein) may be stored on a non-transitory computer-readable storage medium of the server system 108, or may be divided between the non-transitory computer-readable storage medium of the memory 350 and the non-transitory computer-readable storage medium of the server system 108. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

In some examples, memory 350 may store an operating system 352, a communication module 354, a Graphical User Interface (GUI) module 356, a sensor processing module 358, a phone module 360, and an application module 362. The operating system 352 may include instructions for handling basic system services and for performing hardware related tasks. The communication module 354 may facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. Graphical user interface module 356 may facilitate graphical user interface processing. The sensor processing module 358 may facilitate sensor-related processing and functions. The phone module 360 may facilitate phone-related processes and functions. Application modules 362 can facilitate various functions of user applications such as electronic messaging, web browsing, media processing, navigation, imaging, and/or other processes and functions.

As described herein, the memory 350 may also store client-side digital assistant instructions (e.g., stored in the digital assistant client module 364) as well as various user data 366 (e.g., user-specific vocabulary data, preference data, and/or other data such as a user's electronic address book, to-do list, shopping list, television program collection, etc.), for example, to provide client-side functionality of the digital assistant. User data 366 may also be used to perform speech recognition to support a digital assistant or for any other application. Digital assistant client module 364 and user data 366 may be similar or identical to digital assistant client module 264 and user data 266, respectively, as described above with reference to fig. 2.

In various examples, memory 350 may include additional instructions or fewer instructions. Further, various functions of user device 122 may be performed in hardware and/or firmware, including in one or more signal processing and/or application specific integrated circuits.

In some examples, user device 122 may be configured to control various aspects of media device 104. For example, user device 122 may function as a remote control (e.g., remote control 124). User input received via user device 122 may be transmitted to media device 104 (e.g., using a communication subsystem) to cause media device 104 to perform corresponding actions. Further, the user device 122 may be configured to receive instructions from the media device 104. For example, the media device 104 may hand over the task to the user device 122 to execute and cause an object (e.g., a selectable affordance) to be displayed on the user device 122.

It should be understood that the system 100 and the media system 128 are not limited to the components and configurations shown in fig. 1 and 2, and that the user device 122, the media device 104, and the remote control 124 are likewise not limited to the components and configurations shown in fig. 2 and 3. In various configurations according to various examples, system 100, media system 128, user device 122, media device 104, and remote control 124 may all include fewer components, or include other components.

4. Digital assistant system

Fig. 4A illustrates a block diagram of a digital assistant system 400 according to various examples. In some examples, the digital assistant system 400 may be implemented on a stand-alone computer system. In some examples, the digital assistant system 400 may be distributed across multiple computers. In some examples, some modules and functionality of a digital assistant may be divided into a server portion and a client portion, where the client portion resides on one or more user devices (e.g., device 104 or device 122) and communicates with the server portion (e.g., server system 108) over one or more networks, for example as shown in fig. 1. In some examples, digital assistant system 400 may be a specific implementation of server system 108 (and/or DA server 106) shown in fig. 1. It should be noted that the digital assistant system 400 is only one example of a digital assistant system, and that the digital assistant system 400 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or layout of components. The various components shown in fig. 4A may be implemented in hardware, software instructions for execution by one or more processors, firmware (including one or more signal processing integrated circuits and/or application specific integrated circuits), or a combination thereof.

The digital assistant system 400 can include memory 402, one or more processors 404, an I/O interface 406, and a network communication interface 408. These components may communicate with each other via one or more communication buses or signal lines 410.

In some examples, the memory 402 may include non-transitory computer-readable media, such as high-speed random access memory and/or non-volatile computer-readable storage media (e.g., one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices).

In some examples, the I/O interface 406 may couple I/O devices 416 of the digital assistant system 400, such as a display, a keyboard, a touch screen, and a microphone, to the user interface module 422. The I/O interface 406 in conjunction with the user interface module 422 may receive user inputs (e.g., voice inputs, keyboard inputs, touch inputs, etc.) and process those inputs accordingly. In some examples, such as when the digital assistant is implemented on a standalone user device, the digital assistant system 400 may include any of the components and I/O communication interfaces described with respect to the device 104 or device 122 in fig. 2 or fig. 3, respectively. In some examples, digital assistant system 400 may represent a server portion of a digital assistant implementation and may interact with a user through a client-side portion that resides on a client device (e.g., device 104 or device 122).

In some examples, the network communication interface 408 may include one or more wired communication ports 412 and/or wireless transmission and reception circuitry 414. The one or more wired communication ports may receive and send communication signals via one or more wired interfaces, such as Ethernet, Universal Serial Bus (USB), FireWire, and the like. The wireless circuitry 414 may receive RF signals and/or optical signals from, and send RF signals and/or optical signals to, communication networks and other communication devices. The wireless communication may use any of a variety of communication standards, protocols, and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol. The network communication interface 408 may enable communication between the digital assistant system 400 and other devices via networks, such as the Internet, an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN).

In some examples, memory 402 or the computer-readable storage medium of memory 402 may store programs, modules, instructions, and data structures that include all or a subset of the following: an operating system 418, a communication module 420, a user interface module 422, one or more application programs 424, and a digital assistant module 426. In particular, memory 402 or a computer-readable storage medium of memory 402 may store instructions for performing process 800 described below. The one or more processors 404 may execute the programs, modules, and instructions and may read data from, or write data to, the data structures.

The operating system 418 (e.g., Darwin, RTXC, LINUX, UNIX, iOS, OS X, WINDOWS, or an embedded operating system such as VxWorks) may include various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and may facilitate communication between the various hardware, firmware, and software components.

The communication module 420 may facilitate communication between the digital assistant system 400 and other devices via the network communication interface 408. For example, the communication module 420 may communicate with a communication subsystem (e.g., 224,324) of an electronic device (e.g., 104,122). The communication module 420 can also include various components for processing data received by the wireless circuitry 414 and/or the wired communication port 412.

User interface module 422 may receive commands and/or input from a user (e.g., from a keyboard, touch screen, pointing device, controller, and/or microphone) via I/O interface 406 and generate user interface objects on a display. The user interface module 422 may also prepare and deliver output (e.g., voice, sound, animation, text, icons, vibrations, haptic feedback, lighting, etc.) to the user via the I/O interface 406 (e.g., through a display, audio channels, speakers, and touchpad, etc.).

The application programs 424 may include programs and/or modules configured to be executed by the one or more processors 404. For example, if the digital assistant system 400 is implemented on a standalone user device, the application programs 424 may include user application programs, such as a game, calendar application program, navigation application program, or email application program. If the digital assistant system 400 is implemented on a server, the application programs 424 may include, for example, a resource management application, a diagnostic application, or a scheduling application.

Memory 402 may also store a digital assistant module 426 (or a server portion of a digital assistant). In some examples, digital assistant module 426 may include the following sub-modules, or a subset or superset thereof: an I/O processing module 428, a Speech To Text (STT) processing module 430, a natural language processing module 432, a dialog flow processing module 434, a task flow processing module 436, a service processing module 438, and a speech synthesis module 440. Each of these modules may have access to one or more of the following systems or data and models of the digital assistant module 426, or a subset or superset thereof: ontology 460, vocabulary index 444, user data 448, task flow model 454, service model 456, and Automatic Speech Recognition (ASR) system 431.

In some examples, using the processing modules, data, and models implemented in the digital assistant module 426, the digital assistant may perform at least some of the following operations: converting speech input to text; identifying a user intent expressed in a natural language input received from the user; actively eliciting and obtaining information needed to fully infer the user's intent (e.g., by disambiguating words, names, intentions, etc.); determining a task flow for satisfying the inferred intent; and executing the task flow to satisfy the inferred intent.

In some examples, as shown in FIG. 4B, the I/O processing module 428 may interact with the user through the I/O devices 416 in FIG. 4A, or interact with an electronic device (e.g., device 104 or device 122) through the network communication interface 408 in FIG. 4A, to obtain user input (e.g., a speech input) and to provide responses to the user input (e.g., as speech outputs). The I/O processing module 428 may optionally obtain contextual information associated with the user input from the electronic device, along with or shortly after the receipt of the user input. The contextual information may include user-specific data, vocabulary, and/or preferences relevant to the user input. In some examples, the contextual information also includes software and hardware states of the electronic device at the time the user request is received, and/or information related to the surrounding environment of the user at the time the user request was received. In some examples, the I/O processing module 428 may also send follow-up questions to, and receive answers from, the user regarding the user request. When a user request is received by the I/O processing module 428 and the user request includes a speech input, the I/O processing module 428 may forward the speech input to the STT processing module 430 (or speech recognizer) for speech-to-text conversion.

STT processing module 430 may include one or more ASR systems (e.g., ASR system 431). The one or more ASR systems may process the speech input received through the I/O processing module 428 to produce a recognition result. Each ASR system may include a front-end speech preprocessor. The front-end speech preprocessor may extract representative features from the speech input. For example, the front-end speech preprocessor may perform a Fourier transform on the speech input to extract spectral features that characterize the speech input as a sequence of representative multi-dimensional vectors. Further, each ASR system may include one or more speech recognition models (e.g., acoustic models and/or language models) and may implement one or more speech recognition engines. Examples of speech recognition models may include hidden Markov models, Gaussian mixture models, deep neural network models, n-gram language models, and other statistical models. Examples of speech recognition engines may include dynamic time warping based engines and weighted finite-state transducer (WFST) based engines. The one or more speech recognition models and the one or more speech recognition engines may be used to process the extracted representative features of the front-end speech preprocessor to produce intermediate recognition results (e.g., phonemes, phoneme strings, and sub-words), and ultimately, text recognition results (e.g., words, word strings, or sequences of symbols). In some examples, the speech input may be processed at least partially by a third-party service or on an electronic device (e.g., device 104 or device 122) to produce the recognition result. Once STT processing module 430 produces a recognition result containing a text string (e.g., a word, a sequence of words, or a sequence of symbols), the recognition result may be passed to the natural language processing module 432 for intent inference.
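
As a rough sketch of what a front-end speech preprocessor of the kind described above might compute, the following Python example frames a waveform and takes the log-magnitude Fourier spectrum of each frame, producing a sequence of representative multi-dimensional vectors. It is illustrative only; the framing parameters and feature choice are assumptions, not a description of ASR system 431.

```python
import numpy as np

def spectral_features(samples: np.ndarray, sample_rate: int,
                      frame_ms: int = 25, hop_ms: int = 10) -> np.ndarray:
    """Represent speech as a sequence of multi-dimensional spectral vectors.

    A toy stand-in for a front-end speech preprocessor: each 25 ms frame is
    windowed and Fourier-transformed, and the log-magnitude spectrum becomes
    one representative feature vector.
    """
    frame = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    window = np.hanning(frame)
    vectors = []
    for start in range(0, len(samples) - frame + 1, hop):
        chunk = samples[start:start + frame] * window
        spectrum = np.abs(np.fft.rfft(chunk))
        vectors.append(np.log(spectrum + 1e-8))  # log-magnitude spectrum
    return np.array(vectors)                      # shape: (num_frames, frame // 2 + 1)

# One second of synthetic audio at 16 kHz yields roughly 98 feature vectors.
audio = np.random.randn(16000)
print(spectral_features(audio, 16000).shape)
```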

In some examples, the one or more language models of the one or more ASR systems may be configured to be biased toward media-related results. In one example, the one or more language models may be trained using a corpus of media-related text. In another example, the ASR system may be configured to favor media-related recognition results. In some examples, the one or more ASR systems may include a static language model and a dynamic language model. The static language model may be trained using a general corpus of text, while the dynamic language model may be trained using user-specific text. For example, text corresponding to previous speech input received from the user may be used to generate the dynamic language model. In some examples, the one or more ASR systems may be configured to generate recognition results based on the static language model and/or the dynamic language model. Further, in some examples, the one or more ASR systems may be configured to favor recognition results that correspond to previous speech input that was most recently received.
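
One simple way to picture the biasing described above is to interpolate a general (static) language model with a user-specific (dynamic) one and then boost media-related terms. The toy Python sketch below assumes unigram dictionaries for both models; it is a hedged illustration under those assumptions, not the actual ASR configuration.

```python
def biased_score(token: str, static_lm: dict, dynamic_lm: dict,
                 media_vocab: set, weight: float = 0.3, boost: float = 1.5) -> float:
    """Score a candidate word by interpolating a static language model with a
    user-specific dynamic one, then boosting media-related vocabulary."""
    p_static = static_lm.get(token, 1e-6)
    p_dynamic = dynamic_lm.get(token, 1e-6)
    p = (1 - weight) * p_static + weight * p_dynamic   # linear interpolation
    if token in media_vocab:
        p *= boost                                     # bias toward media terms
    return p

static_lm = {"madmen": 0.0001, "madman": 0.0005}       # general corpus statistics
dynamic_lm = {"madmen": 0.01}                          # user recently searched for this show
media_vocab = {"madmen"}
for word in ("madmen", "madman"):
    print(word, biased_score(word, static_lm, dynamic_lm, media_vocab))
```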

More details on the speech-to-text process are described in U.S. utility patent application Serial No. 13/236,942, entitled "Consolidating Speech Recognition Results," filed on September 20, 2011, the entire disclosure of which is incorporated herein by reference.

In some examples, STT processing module 430 may include a vocabulary of recognizable words and/or may access the vocabulary via a phonetic alphabet conversion module 431. Each vocabulary word may be associated with one or more candidate pronunciations of the word represented in a speech recognition phonetic alphabet. In particular, the vocabulary of recognizable words may include a word that is associated with a plurality of candidate pronunciations. For example, the vocabulary may include the word "tomato" associated with the candidate pronunciations /təˈmeɪtoʊ/ and /təˈmɑtoʊ/. Further, vocabulary words may be associated with custom candidate pronunciations based on previous speech inputs from the user. Such custom candidate pronunciations may be stored in STT processing module 430 and may be associated with a particular user via a user profile on the device. In some examples, the candidate pronunciations for a word may be determined based on the spelling of the word and one or more linguistic and/or phonetic rules. In some examples, the candidate pronunciations may be generated manually, e.g., based on known canonical pronunciations.

In some examples, the candidate pronunciations may be ranked based on how common they are. For example, the candidate pronunciation /təˈmeɪtoʊ/ may be ranked higher than /təˈmɑtoʊ/ because the former is the more commonly used pronunciation (e.g., among all users, for users in a particular geographic region, or for any other suitable subset of users). In some examples, the candidate pronunciations may be ranked based on whether a candidate pronunciation is a custom candidate pronunciation associated with the user. For example, custom candidate pronunciations may be ranked higher than standard candidate pronunciations. This can be useful for recognizing proper nouns having a unique pronunciation that deviates from the canonical pronunciation. In some examples, a candidate pronunciation may be associated with one or more speech characteristics, such as geographic origin, nationality, or ethnicity. For example, the candidate pronunciation /təˈmeɪtoʊ/ may be associated with the United States, whereas the candidate pronunciation /təˈmɑtoʊ/ may be associated with Great Britain. Further, the rank of a candidate pronunciation may be based on one or more characteristics (e.g., geographic origin, nationality, ethnicity, etc.) of the user stored in the user's profile on the device. For example, it may be determined from the user's profile that the user is associated with the United States. Based on the user being associated with the United States, the candidate pronunciation /təˈmeɪtoʊ/ (associated with the United States) may be ranked higher than the candidate pronunciation /təˈmɑtoʊ/ (associated with Great Britain). In some examples, one of the ranked candidate pronunciations may be selected as a predicted pronunciation (e.g., the most likely pronunciation).
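
The ranking just described might be approximated as in the following Python sketch, which scores candidate pronunciations by prevalence, custom status, and a match against the user profile. The scoring weights, field names, and phoneme strings are illustrative assumptions, not the system's actual data.

```python
def rank_pronunciations(candidates: list, user_profile: dict) -> list:
    """Order candidate pronunciations of a word from most to least likely.

    Each candidate is a dict with a phoneme string, a global prevalence score,
    an optional 'custom' flag (learned from this user), and an optional region.
    """
    def score(c: dict) -> float:
        s = c["prevalence"]
        if c.get("custom"):                           # user-specific pronunciations win
            s += 1.0
        if c.get("region") == user_profile.get("region"):
            s += 0.5                                  # match on geographic origin
        return s
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"phonemes": "t ah m ey t ow", "prevalence": 0.7, "region": "US"},
    {"phonemes": "t ah m aa t ow", "prevalence": 0.3, "region": "UK"},
]
best = rank_pronunciations(candidates, {"region": "US"})[0]
print(best["phonemes"])   # the predicted (most likely) pronunciation
```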

Upon receiving a speech input, the STT processing module 430 may be used to determine the phonemes corresponding to the speech input (e.g., using an acoustic model), and may then attempt to determine words that match the phonemes (e.g., using a language model). For example, if the STT processing module 430 first identifies a phoneme sequence /təˈmeɪtoʊ/ corresponding to a portion of the speech input, it may then determine, based on the vocabulary index 444, that this sequence corresponds to the word "tomato."

In some examples, STT processing module 430 may use fuzzy matching techniques to determine the words in an utterance. Thus, for example, the STT processing module 430 may determine that the phoneme sequence /təˈmeɪtoʊ/ corresponds to the word "tomato," even if that particular phoneme sequence is not one of the candidate phoneme sequences for that word.
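
A minimal illustration of such fuzzy matching, assuming a toy vocabulary index that maps words to candidate phoneme strings, might look like the following; the similarity measure (a difflib ratio) and the threshold are assumptions chosen for brevity, not the technique actually used by STT processing module 430.

```python
from difflib import SequenceMatcher

VOCABULARY_INDEX = {
    "tomato":  ["t ah m ey t ow", "t ah m aa t ow"],
    "tornado": ["t ao r n ey d ow"],
}

def fuzzy_match(phonemes: str, threshold: float = 0.7):
    """Map a recognized phoneme sequence to the closest vocabulary word,
    even when it is not an exact candidate pronunciation of that word."""
    best_word, best_ratio = None, 0.0
    for word, pronunciations in VOCABULARY_INDEX.items():
        for candidate in pronunciations:
            ratio = SequenceMatcher(None, phonemes.split(), candidate.split()).ratio()
            if ratio > best_ratio:
                best_word, best_ratio = word, ratio
    return best_word if best_ratio >= threshold else None

# A slightly off phoneme sequence still resolves to "tomato".
print(fuzzy_match("t ah m ey d ow"))
```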

A natural language processing module 432 of the digital assistant ("natural language processor") may take the sequence of words or symbols ("symbol sequence") generated by the STT processing module 430 and attempt to associate the symbol sequence with one or more "actionable intents" recognized by the digital assistant. An "actionable intent" may represent a task that can be performed by the digital assistant and that may have an associated task flow implemented in the task flow model 454. The associated task flow may be a series of programmed actions and steps that the digital assistant takes in order to perform the task. The scope of the digital assistant's capabilities may depend on the number and variety of task flows that have been implemented and stored in the task flow model 454, or, in other words, on the number and variety of "actionable intents" that the digital assistant recognizes. However, the effectiveness of the digital assistant may also depend on the assistant's ability to infer the correct "actionable intent(s)" from a user request expressed in natural language.

In some examples, natural language processor 432 may receive context information associated with the user request (e.g., from I/O processing module 428) in addition to the sequence of words or symbols obtained from STT processing module 430. The natural language processing module 432 may optionally use the context information to clarify, supplement, and/or further qualify the information contained in the symbol sequence received from the STT processing module 430. The context information may include, for example: a user preference; hardware and/or software state of the user device; sensor information collected before, during, or shortly after a user request; previous interactions (e.g., conversations) between the digital assistant and the user, and so on. As described herein, contextual information may be dynamic and may vary with time, location, content of a conversation, and other factors.

In some examples, the natural language processing may be based on, for example, ontology 460. Ontology 460 may be a hierarchical structure containing a number of nodes, each node representing either an "actionable intent" or an "attribute" related to one or more of the "actionable intents" or to other "attributes." As described above, an "actionable intent" may represent a task that the digital assistant is capable of performing, i.e., a task that is "actionable" or can be acted on. An "attribute" may represent a parameter associated with an actionable intent or a sub-aspect of another attribute. The connection between an actionable intent node and an attribute node in ontology 460 may define how the parameter represented by the attribute node relates to the task represented by the actionable intent node.

In some examples, ontology 460 may be composed of actionable intent nodes and attribute nodes. Within ontology 460, each actionable intent node may be connected to one or more attribute nodes either directly or through one or more intermediate attribute nodes. Similarly, each attribute node may be connected to one or more actionable intent nodes either directly or through one or more intermediate attribute nodes. For example, as shown in FIG. 4C, ontology 460 may include a "media search" node (i.e., an actionable intent node). The attribute nodes "one or more actors," "media category," and "media title" may each be directly connected to the actionable intent node (i.e., the "media search" node). In addition, the attribute nodes "name," "age," "Ulmer scale ranking," and "nationality" may be sub-nodes of the attribute node "actor."

In another example, as shown in FIG. 4C, ontology 460 may also include a "weather search" node (i.e., another actionable intent node). The attribute nodes "date/time" and "location" may each be connected to the "weather search" node. It should be appreciated that, in some examples, one or more attribute nodes may be associated with two or more actionable intents. In these examples, the one or more attribute nodes may be connected to the respective nodes corresponding to the two or more actionable intents in ontology 460.

An actionable intent node, along with the attribute nodes to which it is connected, may be described as a "domain." In the present discussion, each domain may be associated with a respective actionable intent and may refer to the group of nodes (and the relationships therebetween) associated with that particular actionable intent. For example, the ontology 460 shown in FIG. 4C may include an example of a media domain 462 and an example of a weather domain 464 within ontology 460. The media domain 462 may include the actionable intent node "media search" and the attribute nodes "one or more actors," "media category," and "media title." The weather domain 464 may include the actionable intent node "weather search" and the attribute nodes "location" and "date/time." In some examples, ontology 460 may be composed of multiple domains. Each domain may share one or more attribute nodes with one or more other domains.
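
For illustration only, the media and weather domains described above could be pictured in memory roughly as follows; the dictionary layout and the domain helper are hypothetical and do not describe the actual structure of ontology 460.

```python
# Hypothetical in-memory picture of the two example domains.
ONTOLOGY = {
    "media search": {                     # actionable intent node
        "attributes": ["one or more actors", "media category", "media title"],
    },
    "weather search": {                   # actionable intent node
        "attributes": ["location", "date/time"],
    },
}

def domain(intent: str) -> set:
    """A domain: an actionable intent node plus its connected attribute nodes."""
    return {intent, *ONTOLOGY[intent]["attributes"]}

print(domain("media search"))
# e.g. {'media search', 'one or more actors', 'media category', 'media title'}
# (set ordering may vary)
```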

Although FIG. 4C shows two exemplary domains within ontology 460, other domains may include, for example, "athletes," "stocks," "directions," "media settings," "sports teams," "time," "jokes," and so on. The "athletes" domain may be associated with the actionable intent node "search athlete information" and may further include attribute nodes such as "athlete name," "athlete team," and "athlete statistics."

In some examples, ontology 460 may include all of the domains (and thus actionable intents) that the digital assistant is capable of understanding and acting upon. In some examples, ontology 460 may be modified, such as by adding or removing entire domains or nodes, or by modifying relationships between nodes within ontology 460.

In some examples, each node in ontology 460 may be associated with a set of words and/or phrases that are related to the attribute or actionable intent represented by the node. The respective set of words and/or phrases associated with each node may be the so-called "vocabulary" associated with the node. The respective set of words and/or phrases associated with each node may be stored in the vocabulary index 444 in association with the attribute or actionable intent represented by the node. For example, returning to FIG. 4C, the vocabulary associated with the node for the attribute "actor" may include words such as "A List," "Reese Witherspoon," "Arnold Schwarzenegger," "Brad Pitt," and so on. In another example, the vocabulary associated with the node for the actionable intent "weather search" may include words and phrases such as "weather," "how's the weather," "forecast," and the like. The vocabulary index 444 may optionally include words and phrases in different languages.

Natural language processing module 432 may receive a symbol sequence (e.g., a text string) from STT processing module 430 and determine which nodes are implicated by the words in the symbol sequence. In some examples, if a word or phrase in the symbol sequence is found (via vocabulary index 444) to be associated with one or more nodes in ontology 460, the word or phrase may "trigger" or "activate" those nodes. Based on the number and/or relative importance of the activated nodes, the natural language processing module 432 may select one of the actionable intents as the task that the user intended the digital assistant to perform. In some examples, the domain with the most "triggered" nodes may be selected. In some examples, the domain with the highest confidence may be selected (e.g., based on the relative importance of its respective triggered nodes). In some examples, the domain may be selected based on a combination of the number and importance of the triggered nodes. In some examples, additional factors may also be considered in selecting the node, such as whether the digital assistant has previously correctly interpreted a similar request from the user.
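
A toy version of this node-triggering and domain-selection step, assuming a small hand-built vocabulary index, might look like the following Python sketch; the scoring (one point per triggered node) is a deliberate simplification of the confidence-based selection described above, and all names are illustrative.

```python
ONTOLOGY_VOCAB = {
    # node -> (owning domain, words/phrases associated with the node)
    "media search":   ("media search",   {"find", "show", "movie", "movies"}),
    "media title":    ("media search",   {"mad men", "birdman"}),
    "weather search": ("weather search", {"weather", "forecast"}),
    "location":       ("weather search", {"paris", "new york"}),
}

def select_domain(tokens: list) -> str:
    """Pick the domain whose nodes are 'triggered' most by the token sequence."""
    text = " ".join(tokens)
    scores = {}
    for node, (dom, vocab) in ONTOLOGY_VOCAB.items():
        if any(phrase in text for phrase in vocab):
            scores[dom] = scores.get(dom, 0) + 1   # one triggered node
    return max(scores, key=scores.get) if scores else "unknown"

print(select_domain(["find", "me", "other", "seasons", "of", "mad", "men"]))
# media search
```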

The user data 448 may include user-specific information, such as user-specific vocabulary, user preferences, user address, the user's default and secondary languages, the user's contact list, and other short-term or long-term information for each user. In some examples, the natural language processing module 432 may use the user-specific information to supplement the information contained in the user input to further define the user intent. For example, for a user request such as "How's the weather this week," the natural language processing module 432 may access user data 448 to determine where the user is located, rather than requiring the user to explicitly provide such information in the request.

Additional details of searching an ontology based on a symbol string are described in U.S. utility patent application Serial No. 12/341,743, entitled "Method and Apparatus for Searching Using An Active Ontology," filed on December 22, 2008, the entire disclosure of which is incorporated herein by reference.

In some examples, once the natural language processing module 432 identifies an actionable intent (or domain) based on the user request, the natural language processing module 432 may generate a structured query to represent the identified actionable intent. In some examples, the structured query may include parameters for one or more nodes within the domain for the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user may say "Find me other seasons of this TV series." In this case, the natural language processing module 432 may correctly identify the actionable intent to be "media search" based on the user input. According to the ontology, a structured query for the media domain may include parameters such as {media actor}, {media category}, {media title}, and the like. In some examples, based on the speech input and the text derived from the speech input using STT processing module 430, the natural language processing module 432 may generate a partial structured query for the media search domain, where the partial structured query includes the parameter {media category = television series}. However, in this example, the user's utterance contains insufficient information to complete the structured query associated with the domain. Thus, other necessary parameters, such as {media title}, may not be specified in the structured query based on the currently available information. In some examples, the natural language processing module 432 may populate some parameters of the structured query with received contextual information. For example, the television series "Mad Men" may currently be playing on the media device. Based on this contextual information, the natural language processing module 432 may populate the {media title} parameter in the structured query with "Mad Men."
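
The following sketch illustrates, under simplifying assumptions, how a partial structured query for the media search domain might be built from an utterance and then completed from contextual information such as the currently playing title. The parsing here is deliberately naive and the function name is hypothetical; it is not the natural language processing module's actual logic.

```python
def build_structured_query(intent: str, utterance: str, context: dict) -> dict:
    """Build a (possibly partial) structured query for the identified intent,
    filling missing parameters from contextual information when available."""
    query = {"intent": intent}
    if "tv series" in utterance or "seasons" in utterance:
        query["media category"] = "television series"
    # {media title} is not in the utterance, so fall back to what is on screen.
    if "media title" not in query and context.get("now_playing"):
        query["media title"] = context["now_playing"]
    return query

print(build_structured_query(
    intent="media search",
    utterance="find me other seasons of this tv series",
    context={"now_playing": "Mad Men"},
))
# {'intent': 'media search', 'media category': 'television series', 'media title': 'Mad Men'}
```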

In some examples, the natural language processing module 432 may pass the structured query (including any completed parameters) to the task flow processing module 436 ("task flow processor"). The task flow processing module 436 may be configured to receive the structured query from the natural language processing module 432, complete the structured query if necessary, and perform the actions required to "complete" the user's ultimate request. In some examples, the various procedures necessary to complete these tasks may be provided in the task flow model 454. In some examples, the task flow model 454 may include procedures for obtaining additional information from the user, as well as task flows for performing actions associated with the actionable intent.

As described above, to complete a structured query, the task flow processing module 436 may need to initiate additional dialogue with the user in order to obtain additional information and/or disambiguate potentially ambiguous utterances. When such interaction is necessary, the task flow processing module 436 may invoke the dialog flow processing module 434 to engage in a dialog with the user. In some examples, the dialog flow processing module 434 may determine how (and/or when) to ask the user for the additional information, and may receive and process the user responses. Questions may be provided to, and answers may be received from, the user via the I/O processing module 428. In some examples, the dialog flow processing module 434 may present dialog output to the user via audio and/or visual output, and may receive input from the user via spoken or physical (e.g., clicking) responses. For example, the user may ask "What's the weather like in Paris?" When the task flow processing module 436 invokes the dialog flow processing module 434 to determine the "location" information for the structured query associated with the domain "weather search," the dialog flow processing module 434 may generate a question such as "Which Paris?" to be provided to the user. In addition, the dialog flow processing module 434 may cause affordances associated with "Paris, Texas" and "Paris, France" to be presented for user selection. Once a response is received from the user, the dialog flow processing module 434 may populate the structured query with the missing information, or pass the information to the task flow processing module 436 to complete the missing information in the structured query.

Once the task flow processing module 436 has completed the structured query for an actionable intent, the task flow processing module 436 may proceed to perform the ultimate task associated with the actionable intent. Accordingly, the task flow processing module 436 may execute the steps and instructions in the task flow model 454 according to the specific parameters contained in the structured query. For example, the task flow model for the actionable intent of "media search" may include steps and instructions for performing a media search query to obtain relevant media items. For example, using a structured query such as {media search, media category = television series, media title = Mad Men}, the task flow processing module 436 may perform the following steps: (1) performing a media search query using a media database to obtain relevant media items; (2) ranking the obtained media items according to relevancy and/or popularity; and (3) displaying the media items sorted according to relevancy and/or popularity.
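
A toy task flow for the "media search" structured query above might perform the search, rank, and display steps roughly as follows; the in-memory catalog and matching rules are illustrative assumptions, not a description of the task flow model 454.

```python
CATALOG = [
    {"title": "Mad Men",      "category": "television series", "popularity": 0.9},
    {"title": "Mad Max",      "category": "movie",             "popularity": 0.8},
    {"title": "Men in Black", "category": "movie",             "popularity": 0.7},
]

def run_media_search_task(query: dict) -> list:
    """Toy task flow for the 'media search' intent:
    (1) query the catalog, (2) rank by popularity, (3) return items to display."""
    results = [item for item in CATALOG
               if query.get("media category", item["category"]) == item["category"]
               and query.get("media title", "").lower() in item["title"].lower()]
    return sorted(results, key=lambda item: item["popularity"], reverse=True)

print(run_media_search_task(
    {"intent": "media search",
     "media category": "television series",
     "media title": "Mad Men"}
))
```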

In some examples, the task flow processing module 436 may complete the task requested in the user input or provide the informational answer requested in the user input with the assistance of the service processing module 438 ("service processor"). For example, the service processing module 438 may act on behalf of the task flow processing module 436 to perform a media search, retrieve weather information, invoke or interact with applications installed on other user devices, and invoke or interact with third-party services (e.g., social networking websites, media review websites, media subscription services, etc.). In some examples, the protocols and APIs required by each service may be specified by a corresponding one of the service models 456. The service processing module 438 may access the appropriate service model for a service and generate requests for the service in accordance with the protocols and APIs required by the service according to the service model.

For example, a third-party media search service may submit a service model that specifies the necessary parameters for performing a media search and the APIs for communicating the values of the necessary parameters to the media search service. When requested by the task flow processing module 436, the service processing module 438 may establish a network connection with the media search service and send the necessary parameters of the media search (e.g., media actor, media category, media title) to the media search service's online interface in a format according to the API of the media search service.

In some examples, the natural language processing module 432, the dialog flow processing module 434, and the task flow processing module 436 may be used collectively and iteratively to infer and define the user's intent, to obtain information to further clarify and refine the user's intent, and to ultimately generate a response (i.e., output to the user or complete the task) to satisfy the user's intent. The generated response may be a dialog response to the speech input that at least partially satisfies the user's intent. Further, in some examples, the generated response may be output as a speech output. In these examples, the generated response may be sent to a speech synthesis module 440 (e.g., a speech synthesizer), where the response may be processed to synthesize the dialog response into speech form. In other examples, the generated response may be data content relevant to satisfying the user request in the voice input.

The speech synthesis module 440 may be configured to synthesize speech output for presentation to the user. The speech synthesis module 440 may synthesize speech output based on text provided by the digital assistant. For example, the generated dialog response may be in the form of a text string. The speech synthesis module 440 may convert the text string into audible speech output. The speech synthesis module 440 may use any appropriate speech synthesis technique to generate speech output from text, including, but not limited to: concatenative synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, articulatory synthesis, hidden Markov model (HMM) based synthesis, and sine wave synthesis. In some examples, the speech synthesis module 440 may be configured to synthesize individual words based on phoneme strings corresponding to the words. For example, a phoneme string may be associated with a word in the generated dialog response. The phoneme string may be stored in metadata associated with the word. The speech synthesis module 440 may be configured to directly process the phoneme string in the metadata to synthesize the word in speech form.

In some examples, instead of (or in addition to) using the speech synthesis module 440, speech synthesis may be performed on a remote device (e.g., the server system 108), and the synthesized speech may be sent to the user device for output to the user. This may occur, for example, in some implementations where the output for the digital assistant is generated at a server system. Because server systems generally have more processing power and resources than a user device, it may be possible to obtain higher-quality speech output than would be practical with client-side synthesis.

More details on digital assistants can be found in U.S. utility patent application Serial No. 12/987,982, entitled "Intelligent Automated Assistant," filed on January 10, 2011, and U.S. utility patent application Serial No. 13/251,088, entitled "Generating and Processing Task Items That Represent Tasks to Perform," filed on September 30, 2011, the entire disclosures of which are incorporated herein by reference.

5. Process for interacting with a digital assistant in a media environment

Fig. 5A-5I illustrate a process 500 for operating a digital assistant for a media system, according to various examples. Process 500 may be performed using one or more electronic devices implementing a digital assistant. For example, process 500 may be performed using one or more of system 100, media system 128, media device 104, user device 122, or digital assistant system 400 described above. Fig. 6A-6Q illustrate screenshots displayed by a media device on a display unit at various stages of process 500, according to various examples. The process 500 is described below with simultaneous reference to fig. 5A-5I and 6A-6Q. It should be understood that some operations in process 500 may be combined, the order of some operations may be changed, and some operations may be omitted.

At block 502 of process 500, content may be displayed on a display unit (e.g., display unit 126). In the present example shown in fig. 6A, the displayed content may include media content 602 (e.g., movies, videos, television programs, video games, etc.) played on a media device (e.g., media device 104). In other examples, the displayed content may include other content associated with the media device, such as content associated with an application running on the media device, or a user interface for interacting with a digital assistant of the media device. In particular, the displayed content may include a main menu user interface or a user interface that includes objects or results previously requested by the user (e.g., second user interface 618 or third user interface 626).

At block 504 of process 500, a user input may be detected. The user input may be detected while the content of block 502 is being displayed. In some examples, the user input may be detected on a remote control (e.g., remote control 124) of the media device. In particular, the user input may be a user interaction with the remote control, such as pressing a button (e.g., button 274) or making contact with a touch-sensitive surface of the remote control (e.g., touch-sensitive surface 278). In some examples, the user input may be detected via a second electronic device (e.g., device 122) configured to interact with the media device. In response to detecting the user input, one or more of blocks 506 through 592 may be performed.

At block 506 of process 500, it may be determined whether the user input corresponds to a first input type. The first input type may be a predefined input to the media device. In one example, the first input type may include pressing a particular button of the remote control and releasing the button within a predetermined duration of pressing the button (e.g., a short press). The media device may determine whether the user input matches the first input type. In accordance with a determination that the user input corresponds to the first input type, one or more of blocks 508-514 may be performed.
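
A simple way to picture the first-input-type check is a press-duration test like the sketch below; the 0.5-second threshold and the function name are assumptions made purely for illustration, not values defined by the described process.

```python
import time

PREDETERMINED_DURATION = 0.5   # seconds; hypothetical threshold for a "short press"

def classify_button_input(pressed_at: float, released_at: float) -> str:
    """Classify remote-control button input: a press released within the
    predetermined duration counts as the first input type (a short press)."""
    held = released_at - pressed_at
    return "first_input_type" if held <= PREDETERMINED_DURATION else "other_input_type"

now = time.time()
print(classify_button_input(now, now + 0.2))   # first_input_type
print(classify_button_input(now, now + 1.5))   # other_input_type
```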

At block 508 of process 500, and referring to FIG. 6B, text instructions 604 for invoking and interacting with the digital assistant may be displayed. In particular, instructions 604 may describe user inputs required to invoke and interact with the digital assistant. For example, instructions 604 may explain how to perform the second input type described below at block 516.

At block 510 of the process 500, as shown in fig. 6B, the passive visual indicator 606 may be displayed on the display unit. The passive visual indicator 606 may indicate that the digital assistant has not been invoked. In particular, a microphone (e.g., microphone 272) of the media device may not be activated in response to detecting the user input. Thus, passive visual indicator 606 may serve as a visual signal that the digital assistant is not processing audio input. In this example, the visual indicator 606 may be a passive flat waveform that is not responsive to the user's speech. Further, the passive visual indicator 606 may include a neutral color (e.g., black, gray, etc.) to indicate its passive state. It should be appreciated that other visual patterns or images may be contemplated for the passive visual indicator. The passive visual indicator 606 may be displayed simultaneously with the instructions 604. Further, the passive visual indicator 606 may be continuously displayed while performing one or more of blocks 512-514.

At block 512 of process 500, and referring to FIG. 6C, instructions 608 for performing a typed search may be displayed on the display unit. In particular, instructions 608 may describe the user input required to display a virtual keyboard interface that may be used to perform a typed search. In some examples, the instructions 604 for invoking and interacting with the digital assistant and the instructions 608 for performing a typed search may be displayed sequentially at different times. For example, the display of instructions 608 may replace the display of instructions 604, or vice versa. In this example, the instructions 604 and 608 are in text form. It should be appreciated that, in other examples, the instructions 604 and 608 may be in graphical form (e.g., pictures, symbols, animations, etc.).

At block 514 of process 500, one or more exemplary natural language requests may be displayed on the display unit. For example, FIGS. 6D-6E illustrate two different exemplary natural language requests 610 and 612 displayed on the display unit. In some examples, the exemplary natural language requests may be displayed via a first user interface on the display unit. The first user interface may be overlaid on the displayed content. The exemplary natural language requests may provide guidance to the user for interacting with the digital assistant. Further, the exemplary natural language requests may inform the user of the various capabilities of the digital assistant. In response to receiving a user utterance corresponding to one of the exemplary natural language requests, the digital assistant may cause a corresponding action to be performed. For example, in response to the digital assistant of the media device being invoked (e.g., by a user input of the second input type at block 504) and being provided (e.g., at block 518) with the user utterance "skip forward 30 seconds," the digital assistant may cause the media content playing on the media device to skip forward by 30 seconds.

The displayed exemplary natural language request may be contextually related to the content being displayed (e.g., media content 602). For example, an exemplary set of natural language requests may be stored on a media device or on a separate server. Each exemplary natural language request in the set of exemplary natural language requests may be associated with one or more contextual attributes (e.g., media content being played, home page, iTunes media store, actors, movies, weather, sports, stock market, etc.). In some examples, block 514 may include identifying an exemplary natural language request from the set of exemplary natural language requests having contextual attributes corresponding to display content on the display unit. The identified exemplary natural language request may then be displayed on a display unit. Thus, different exemplary natural language requests may be displayed for different display content on the display unit. Displaying contextually relevant exemplary natural language requests can be used to conveniently inform a user of the capabilities of the digital assistant that are most relevant to the user's current usage conditions on the media device. This may improve the overall user experience.
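
The matching of exemplary natural language requests to the displayed content by contextual attributes could be sketched as follows; the example set, attribute names, and limit are hypothetical and serve only to illustrate the selection described above.

```python
EXAMPLE_REQUESTS = [
    {"text": "Turn on closed captioning.",
     "attributes": {"media content being played"}},
    {"text": "Skip forward 30 seconds.",
     "attributes": {"media content being played"}},
    {"text": "What's the weather in New York?",
     "attributes": {"home page"}},
    {"text": "Show me new comedies.",
     "attributes": {"home page", "iTunes media store"}},
]

def relevant_examples(display_context: set, limit: int = 3) -> list:
    """Pick exemplary natural language requests whose contextual attributes
    overlap with what is currently displayed on the display unit."""
    matches = [r["text"] for r in EXAMPLE_REQUESTS
               if r["attributes"] & display_context]
    return matches[:limit]

# While media content is playing, playback-related examples are shown.
print(relevant_examples({"media content being played"}))
```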

In the present example illustrated in FIGS. 6D-6E, exemplary natural language request 610 and exemplary natural language request 612 may each be contextually related to the media content 602 on the display unit. In particular, exemplary natural language request 610 and exemplary natural language request 612 may be requests to modify or control one or more settings associated with the media content playing on the media device. Such exemplary natural language requests may include requests to: turn closed captioning on or off, turn on subtitles in a particular language, rewind or skip forward, pause playback of the media content, resume playback of the media content, slow down or speed up playback of the media content, and increase or decrease the volume (e.g., audio gain) of the media content, and the like. Further, other exemplary natural language requests that are contextually related to the media content 602 may include requests to: add the media item corresponding to the media content 602 to the user's watch list, show information related to the media content 602 (e.g., cast information, plot summaries, release dates, etc.), show other media items or content related to the media content 602 (e.g., same series, same season, same actor/director, same category, etc.), and the like.

In examples where the displayed content includes content associated with an application of the media device, the contextually relevant exemplary natural language request may include a request to modify one or more settings or states of the application. In particular, exemplary natural language requests may include requests to open or close an application or manipulate one or more features of an application.

In some examples, the displayed content may include a user interface (e.g., second user interface 618 or third user interface 626) for searching, browsing, or selecting items. In particular, the displayed user interface may include one or more media items. Further, the focus of the user interface may be on a media item of the one or more media items (e.g., media item 623 highlighted by cursor 624 in FIG. 6G). In these examples, the contextually relevant exemplary natural language requests may include requests for information regarding the one or more media items in the displayed user interface or regarding other media items. In particular, the exemplary natural language requests may include requests related to the media item that is the focus of the user interface. In these examples, exemplary natural language requests may include requests such as "What's this about?", "What's it rated?", "Who's in it?", "When does the next episode come out?", "Show me more movies like this.", and "Show me movies starring the same actor.". In a specific example, information related to a media item or a series of media items, such as the television series Mad Men, may be displayed via the user interface. In this example, the contextually relevant exemplary natural language requests may include requests based on one or more attributes (e.g., cast, plot, rating, release date, director, provider, etc.) of the media item or series of media items (e.g., other shows with January Jones). Further, the contextually relevant exemplary natural language requests may include requests to play, select, or obtain the focused media item or another media item displayed in the user interface (e.g., "Rent this.", "Play this.", "Buy this.", or "Play How to Train Your Dragon 2."), or requests to navigate through the media items in the user interface (e.g., "Go to comedies." or "Jump to horror movies."). Further, in these examples, the contextually relevant exemplary natural language requests may include requests to search for other media items (e.g., "Find new comedies.", "Show me movies that are free and good.", or "What are some shows starring Nicole Kidman?").

In some examples, the displayed content may include media items organized according to a particular category or topic. In these examples, the contextually relevant exemplary natural language requests may include requests related to that particular category or topic. For example, where the displayed content includes media items organized by actor, a contextually relevant exemplary natural language request may include a request for information or media items related to an actor (e.g., "what movies star Jennifer Lawrence?," "how old is Scarlett Johansson?," or "what are Brad Pitt's latest movies?"). In another example where the displayed content includes media items organized according to program channel or content provider (e.g., a channel page or television guide page), the contextually relevant exemplary natural language requests may include requests for information or media items related to a program channel or content provider (e.g., "what's playing in an hour?" or "what is on HBO during prime time?"). In another example where the displayed content includes media items recently selected by the user (e.g., a "recently played" list) or identified as being of interest to the user (e.g., a "watch list"), the contextually relevant exemplary natural language requests may include requests to watch or continue watching one of the media items (e.g., "continue playing from where I left off," "continue watching Birdman," or "play it from the beginning").

In some examples, the displayed content may include a user interface containing results or information corresponding to a particular topic. In particular, the results may be associated with a previous user request (e.g., a request to the digital assistant) and may include information corresponding to a topic such as weather, the stock market, or sports. In these examples, the contextually relevant exemplary natural language requests may include requests to refine the results or requests for additional information about the particular topic. For example, where the displayed content includes weather information for a particular location, the contextually relevant exemplary natural language requests may include requests to display additional weather information for another location or a different time range (e.g., "how about in New York?," "what about next week?," or "how about Hawaii?"). In another example where the displayed content includes information related to a sports team or athlete, the contextually relevant exemplary natural language requests may include requests for additional information related to the sports team or athlete (e.g., "how tall is Shaquille O'Neal?" or "when was Tom Brady born?"). In another example where the displayed content includes information related to the stock market, the contextually relevant exemplary natural language requests may include requests for additional stock market related information (e.g., "what was the opening price of the S&P 500?" or "how is Apple's stock doing?"). Further, in some examples, the displayed content may include a user interface containing media search results associated with a previous user request. In these examples, the contextually relevant exemplary natural language requests may include requests to refine the displayed media search results (e.g., "only the ones from last year," "only the ones rated G," or "only the free ones"), or requests to perform a different media search (e.g., "find a good action movie" or "show me some movies with dragons in them").

In some examples, the displayed content may include a main menu user interface of the media device. The main menu user interface may be, for example, a home screen or a root directory of the media device. In these examples, the contextually relevant exemplary natural language requests may include requests that represent the various capabilities of the digital assistant. In particular, the digital assistant may have a set of core competencies associated with the media device, and the contextually relevant exemplary natural language requests may include requests related to each of the core competencies of the digital assistant (e.g., "show me some good free movies," "what's the weather?," "play the next episode of Breaking Bad," or "what is Apple's stock price?").
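
The mapping described above, from the type of displayed content to a set of contextually relevant exemplary requests, can be illustrated with a short sketch. The following is a minimal, hypothetical Python sketch; the content-type keys, request strings, and the select_example_requests function are illustrative assumptions and not part of the disclosed process.

# Hypothetical sketch: choose exemplary natural language requests
# based on the type of content currently displayed on the display unit.

EXAMPLE_REQUESTS = {
    "playing_media": [
        "turn on closed captioning",
        "skip forward 30 seconds",
        "add this to my watch list",
    ],
    "media_browse_ui": [
        "who is in it?",
        "show me more movies like this",
        "play this",
    ],
    "main_menu": [
        "show me some good free movies",
        "what's the weather?",
        "play the next episode of Breaking Bad",
    ],
}

def select_example_requests(displayed_content_type, count=2):
    """Return up to `count` exemplary requests relevant to the displayed content."""
    requests = EXAMPLE_REQUESTS.get(displayed_content_type,
                                    EXAMPLE_REQUESTS["main_menu"])
    return requests[:count]

if __name__ == "__main__":
    print(select_example_requests("playing_media"))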

The exemplary natural language requests may be in natural language form. This may serve to inform the user that the digital assistant is capable of understanding natural language requests. Further, in some examples, the exemplary natural language requests may be contextually ambiguous, informing the user that the digital assistant is capable of inferring the correct user intent associated with the user's request based on the displayed content. Specifically, as shown in the examples above, an exemplary natural language request may include a contextually ambiguous term such as "this" or "some," or a contextually ambiguous phrase such as "only the free ones" or "how about in New York?". These exemplary natural language requests may inform the user that the digital assistant is capable of determining the correct context associated with such requests based on the displayed content. This may encourage users to rely on the context of the displayed content when interacting with the digital assistant, which can promote a more natural interactive experience with the digital assistant.

In some examples, block 514 may be performed after blocks 508 through 512. In particular, the exemplary natural language request may be displayed on the display unit a predetermined amount of time after determining that the user input corresponds to the first input type at block 506. It should be appreciated that in some examples, blocks 508 through 514 may be performed in any order, and in some examples, two or more of blocks 508 through 514 may be performed simultaneously.

In some examples, the exemplary natural language requests are displayed in turn in a predetermined order. Each exemplary natural language request may be displayed separately at different times. In particular, the display of the current exemplary natural language request may be replaced with the display of a subsequent exemplary natural language request. For example, as shown in FIG. 6D, an exemplary natural language request 610 may be displayed first. After a predetermined amount of time, the display of the exemplary natural language request 610 ("skip forward 30 seconds") may be replaced with a display of the exemplary natural language request 612 ("play next episode"), as shown in FIG. 6E. Thus, in this example, the exemplary natural language request 610 and the exemplary natural language request 612 are displayed one at a time, rather than simultaneously.

In some examples, the exemplary natural language request may be divided into a plurality of lists, where each list includes one or more exemplary natural language requests. In these examples, block 514 may include displaying a list of exemplary natural language requests on a display unit. Each list may be displayed at a different time in a predetermined order. In addition, these lists may be displayed in turn.
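
The rotation of exemplary requests described above (each request or list shown separately, in a predetermined order, with the current display replaced after a fixed interval) can be sketched as follows. This is a minimal, hypothetical Python sketch; the rotate_example_lists and render functions, the interval, and the cycle count are illustrative assumptions.

# Hypothetical sketch: display lists of exemplary requests one at a time,
# in a predetermined order, replacing each list after a fixed interval.
import itertools
import time

def rotate_example_lists(lists, interval_seconds=4.0, cycles=1):
    """Show each list of exemplary requests in turn, `interval_seconds` apart."""
    sequence = itertools.chain.from_iterable(itertools.repeat(lists, cycles))
    for example_list in sequence:
        render(example_list)          # replace whatever list is currently shown
        time.sleep(interval_seconds)  # predetermined display duration

def render(example_list):
    print("Displaying:", "; ".join(example_list))

if __name__ == "__main__":
    rotate_example_lists([["skip forward 30 seconds"], ["play next episode"]],
                         interval_seconds=0.1)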

When one or more of blocks 508 through 514 are performed, the displayed content may continue to be displayed on the display unit. For example, as shown in fig. 6B-6E, the media content 602 may continue to play on the media device and be displayed on the display unit while blocks 508-512 are performed. In addition, audio associated with the media content may be output by the media device as the media content plays. In some examples, the amplitude of the audio is not reduced in response to detecting the user input or in accordance with a determination that the user input corresponds to the first input type. This may be desirable for reducing interference with the user's consumption of the media content 602 being played. Thus, while elements 604-612 are displayed on the display unit, the user may continue to follow the media content 602 via the audio output.

In some examples, as represented by the outline font of the media content 602 in fig. 6B-6D, the brightness of the displayed content may be reduced (e.g., by 20% to 40%) in response to detecting the user input or in accordance with a determination that the user input corresponds to the first input type. In these examples, displayed elements 604-612 may be overlaid on displayed media content 602. Decreasing the brightness may be used to highlight the displayed elements 604 through 612. At the same time, the media content 602 is still discernable on the display unit, enabling the user to continue consuming the media content 602 while the elements 604 through 612 are displayed.

In performing one of blocks 508-512, the digital assistant may be invoked (e.g., by detecting a user input of the second input type at block 504) and a user utterance corresponding to one of the exemplary natural language requests may be received (e.g., at block 518). The digital assistant may then perform a task (e.g., at block 532) in response to the received request. Further details regarding invoking and interacting with a digital assistant are provided below with reference to fig. 5B-5I. Further, while performing one of blocks 508-512, a typing search may be performed by invoking the virtual keyboard interface (e.g., by detecting a fifth user input at block 558). More details regarding invoking the virtual keyboard interface and performing a typed search are provided below with reference to FIG. 5G.

Referring again to block 506, in accordance with a determination that the user input does not correspond to the first input type, one or more of blocks 516-530 of fig. 5B may be performed. At block 516, it may be determined whether the user input corresponds to a second input type. The second input type may be a predefined input to the media device that is different from the first input type. In some examples, the second input type may include pressing a particular button on a remote control of the media device and holding the button for more than a predetermined duration (e.g., a long press). The second input type can be associated with invoking the digital assistant. In some examples, the first input type and the second input type may be implemented using the same button of the remote control (e.g., a button configured to invoke the digital assistant). This may be desirable for intuitively integrating, into a single button, both the invocation of the digital assistant and the provision of instructions for invoking and interacting with the digital assistant. Further, an inexperienced user is more likely to apply a short press than a long press. Thus, providing instructions in response to detecting a short press may direct the instructions primarily to inexperienced users rather than experienced users. This may improve the user experience by readily displaying instructions to the inexperienced users who need guidance most, while allowing experienced users to bypass the instructions.

In accordance with a determination that the user input at block 516 corresponds to the second input type, one or more of blocks 518-530 may be performed. In some examples, the media content 602 may continue to play on the media device while one or more of blocks 518-530 are performed. In particular, the media content 602 may continue to play on the media device and continue to be displayed on the display unit while the audio data is sampled at block 518 and while the task is performed at block 532.

At block 518 of the process 500, audio data may be sampled. In particular, a first microphone (e.g., microphone 272) of the media device may be activated to begin sampling audio data. In some examples, the sampled audio data may include a user utterance from the user. The user utterance may represent a user request directed to the digital assistant. Further, in some examples, the user request may be a request to perform a task. In particular, the user request may be a media search request. For example, referring to fig. 6F, the sampled audio data may include the user utterance "find me romantic comedies starring Reese Witherspoon." In other examples, the user request may be a request to play a media item or to provide specific information (e.g., weather, stock market, sports, etc.).

The user utterance in the sampled audio data may be in natural language form. In some examples, the user utterance may represent a partially specified user request, in which not all of the information needed to satisfy the user request is explicitly defined. For example, the user utterance may be "play the next episode." In this example, the user request does not explicitly define which media series' next episode to play. Further, in some examples, the user utterance may include one or more ambiguous terms.

The duration over which the audio data is sampled may be based on detection of an endpoint. In particular, the audio data may be sampled from a start time at which the user input of the second input type is initially detected to an end time at which the endpoint is detected. In some examples, the endpoint may be based on the user input. In particular, the first microphone may be activated when the user input of the second input type is initially detected (e.g., the button is pressed for more than a predetermined duration). The first microphone may remain active to sample audio data while the user input of the second input type continues to be detected. Once the user input of the second input type is no longer detected (e.g., the button is released), the first microphone may be deactivated. Thus, in these examples, the endpoint is detected upon detecting the end of the user input, and the audio data is sampled while the user input of the second input type is detected.

In other examples, detecting the endpoint may be based on one or more audio features of the sampled audio data. In particular, one or more audio features of the sampled audio data may be monitored, and the endpoint may be detected at a predetermined time after determining that the one or more audio features do not satisfy one or more predetermined criteria. In other examples, the endpoint may be detected based on a fixed duration. In particular, the endpoint may be detected at a predetermined duration after the initial detection of the user input of the second input type.
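
The three endpointing strategies described above (end of the button press, audio features failing predetermined criteria, and a fixed maximum duration) can be combined as in the following minimal, hypothetical Python sketch. The frame-energy representation, thresholds, frame counts, and the detect_endpoint function are illustrative assumptions.

# Hypothetical sketch of the three endpointing strategies described above:
# (1) stop when the invoking button is released, (2) stop after a sustained
# run of low-energy (silent) frames, and (3) stop after a fixed maximum duration.

def detect_endpoint(frame_energies, button_still_held, energy_threshold=0.01,
                    silence_frames_needed=30, max_frames=1500):
    """Return the index of the frame at which audio sampling should stop."""
    silent_run = 0
    last_index = -1
    for i, energy in enumerate(frame_energies):
        last_index = i
        if not button_still_held():                # strategy 1: button released
            return i
        silent_run = silent_run + 1 if energy < energy_threshold else 0
        if silent_run >= silence_frames_needed:    # strategy 2: sustained silence
            return i
        if i + 1 >= max_frames:                    # strategy 3: fixed duration cap
            return i
    return last_index                              # ran out of frames

if __name__ == "__main__":
    energies = [0.2] * 40 + [0.001] * 40           # speech followed by silence
    print(detect_endpoint(energies, button_still_held=lambda: True))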

In some examples, audio associated with the displayed content may be output (e.g., using speaker 268) while block 504 or block 516 is performed. In particular, the audio may be audio of a media item played on the media device and displayed on the display unit. The audio may be output via an audio signal from the media device. In these examples, the audio associated with the displayed content may be ducked (e.g., the amplitude of the audio reduced) upon determining that the user input corresponds to the second input type and while the audio data is sampled. For example, the audio may be ducked by reducing a gain associated with the audio signal. In other examples, output of the audio associated with the media content may be stopped while the audio data is sampled at block 518. For example, the audio output may be stopped by blocking or interrupting the audio signal. Ducking or stopping the audio output may reduce background noise in the sampled audio data and increase the relative strength of the speech signal associated with the user utterance. Further, the ducking or stopping of the audio may serve as an audio cue prompting the user to begin providing speech input to the digital assistant.
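
Ducking by reducing the gain of the output audio while speech is sampled can be sketched as below. This is a minimal, hypothetical Python sketch; the FakePlayer class, the gain_db attribute, and the -18 dB attenuation are illustrative assumptions, not the disclosed implementation.

# Hypothetical sketch: "duck" (attenuate) the media audio while the
# microphone is sampling, then restore the original gain afterwards.
import contextlib

@contextlib.contextmanager
def ducked_audio(player, attenuation_db=18.0):
    """Temporarily reduce the media audio gain while sampling speech."""
    original_gain_db = player.gain_db
    player.gain_db = original_gain_db - attenuation_db   # duck the output
    try:
        yield
    finally:
        player.gain_db = original_gain_db                 # restore after sampling

class FakePlayer:
    gain_db = 0.0

if __name__ == "__main__":
    player = FakePlayer()
    with ducked_audio(player):
        print("sampling audio with gain at", player.gain_db, "dB")
    print("gain restored to", player.gain_db, "dB")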

In some examples, background audio data may be sampled for noise cancellation while the audio data is sampled. In these examples, the remote control or the media device may include a second microphone. The second microphone may be oriented in a different direction than (e.g., opposite to) the first microphone. The second microphone may be activated to sample the background audio data while the audio data is sampled. In some examples, the background audio data may be used to cancel background noise in the audio data. In other examples, the media device may generate an audio signal for outputting audio associated with the displayed content. The generated audio signal may be used to cancel background noise from the audio data. Cancelling background noise from the audio data may be particularly suitable for interacting with a digital assistant in a media environment. This may be due to the communal nature of consuming media content, where utterances from multiple individuals may be mixed into the audio data. By cancelling background noise in the audio data, a higher signal-to-noise ratio may be obtained, which may be desirable when processing the audio data containing the user request.
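
The idea of using the device's own output signal as a reference for cancellation can be illustrated as follows. This is a minimal, hypothetical Python sketch assuming a simple least-squares scaling of the reference; real systems would use adaptive echo cancellation, and the sample values and function name are illustrative assumptions.

# Hypothetical sketch: remove the media device's own audio output from the
# sampled microphone signal by subtracting a scaled copy of the known output.

def cancel_background(mic_samples, reference_samples):
    """Subtract the best least-squares scaling of the reference from the mic signal."""
    n = min(len(mic_samples), len(reference_samples))
    mic, ref = mic_samples[:n], reference_samples[:n]
    denom = sum(r * r for r in ref)
    scale = (sum(m * r for m, r in zip(mic, ref)) / denom) if denom else 0.0
    return [m - scale * r for m, r in zip(mic, ref)]

if __name__ == "__main__":
    speech = [0.0, 0.5, -0.5, 0.0]
    media  = [0.3, 0.2, 0.2, 0.4]            # audio the device itself is outputting
    mixed  = [s + 0.8 * m for s, m in zip(speech, media)]
    # Recovers `speech` exactly here because the two signals happen to be orthogonal.
    print(cancel_background(mixed, media))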

At block 520 of process 500 and referring to fig. 6F, an active visual indicator 614 may be displayed on the display unit. Active visual indicator 614 may indicate to the user that the digital assistant is invoked and actively listening. In particular, active visual indicator 614 may serve as a visual cue that prompts the user to begin providing speech input to the digital assistant. In some examples, active visual indicator 614 may include a color and/or visual animation to indicate that the digital assistant is invoked. For example, as shown in fig. 6F, the active visual indicator 614 may comprise an active waveform responsive to one or more characteristics (e.g., amplitude) of audio data received by the digital assistant. For example, the active visual indicator 614 displays a waveform having a larger amplitude in response to louder portions of the audio data and a waveform having a smaller amplitude in response to softer portions of the audio data. Further, in examples where the digital assistant is invoked when a passive visual indicator 606 (e.g., fig. 6E) is displayed, the display of the visual indicator 606 may be replaced with the display of the active visual indicator 614. This may provide a natural transition from the instructional user interface shown in fig. 6B-6E for demonstrating how to invoke and interact with a digital assistant to the active user interface shown in fig. 6F for actively interacting with a digital assistant.

At block 522 of process 500, a textual representation of a user utterance in the sampled audio data may be determined. For example, the text representation may be determined by performing speech-to-text (STT) processing on the sampled audio data. In particular, the sampled audio data may be processed using a STT processing module (e.g., STT processing module 430) to convert a user utterance in the sampled audio data into a textual representation. The text representation may be a string of symbols representing a corresponding text string.

In some examples, STT processing may be biased toward media-related text results. The biasing may be achieved by utilizing a language model trained using a corpus of media-related text. Additionally or alternatively, the biasing may be achieved by weighting media-related candidate text results more heavily. In this way, media-related candidate text results may be ranked higher with the biasing than without it. The biasing may be desirable for increasing the accuracy of STT processing for media-related user utterances (e.g., movie titles, movie actors, etc.). For example, certain media-related words or phrases, such as "Jurassic Park," "Arnold Schwarzenegger," and "Shrek," may rarely be found in a typical text corpus and thus may not be successfully recognized during STT processing without biasing toward media-related text results.
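
Weighting media-related candidate text results more heavily can be sketched as a simple re-ranking step. The following is a minimal, hypothetical Python sketch; the MEDIA_VOCABULARY contents, the boost value, the candidate scores, and the bias_toward_media function are illustrative assumptions, not the disclosed STT implementation.

# Hypothetical sketch: re-rank speech-to-text candidates by boosting those
# that contain known media-related terms.

MEDIA_VOCABULARY = {"jurassic park", "arnold schwarzenegger", "shrek",
                    "reese witherspoon", "romantic comedies"}

def bias_toward_media(candidates, boost=2.0):
    """`candidates` is a list of (text, base_score) pairs; return them re-ranked."""
    def biased_score(item):
        text, score = item
        lowered = text.lower()
        hits = sum(term in lowered for term in MEDIA_VOCABULARY)
        return score + boost * hits
    return sorted(candidates, key=biased_score, reverse=True)

if __name__ == "__main__":
    candidates = [("find romantic comedies starring Reese Witherspoon", 4.1),
                  ("find roman tic come teas staring recent weather spoon", 4.3)]
    print(bias_toward_media(candidates)[0][0])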

In some examples, the textual representation may be obtained from a separate device (e.g., DA server 106). In particular, sampled audio data may be transmitted from a media device to a stand-alone device to perform STT processing. In these examples, the media device may indicate to the separate device that the sampled audio data is associated with the media application (e.g., by data transmitted to the separate device having the sampled audio data). The indication may bias STT processing towards media-related textual results.

In some examples, the text representation may be based on a previous user utterance received by the media device prior to sampling the audio data. In particular, candidate text results of sampled audio data corresponding to one or more portions of a previous user utterance may be weighted more heavily. In some examples, a previous user utterance may be used to generate a language model, and the generated language model may be used to determine a textual representation of a current user utterance in the sampled audio data. The language model may be dynamically updated as additional user utterances are received and processed.

Further, in some examples, the text representation may be based on the time at which a previous user utterance was received prior to sampling the audio data. In particular, candidate text results corresponding to previous user utterances received more recently relative to the sampled audio data may be weighted more heavily than candidate text results corresponding to previous user utterances received earlier relative to the sampled audio data.
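
Weighting candidates by their overlap with previous utterances, with more recent utterances counting more, can be sketched as below. This is a minimal, hypothetical Python sketch; the word-overlap measure, exponential recency decay, half-life value, and history_weight function are illustrative assumptions.

# Hypothetical sketch: score a candidate text result higher when it overlaps
# with previous user utterances, with more recent utterances weighted more.
import math
import time

def history_weight(candidate_text, history, half_life_seconds=300.0, now=None):
    """`history` is a list of (utterance_text, timestamp) pairs."""
    now = time.time() if now is None else now
    candidate_words = set(candidate_text.lower().split())
    weight = 0.0
    for utterance, timestamp in history:
        overlap = len(candidate_words & set(utterance.lower().split()))
        recency = math.exp(-(now - timestamp) * math.log(2) / half_life_seconds)
        weight += overlap * recency
    return weight

if __name__ == "__main__":
    history = [("show me movies with Luke Wilson", 990.0),
               ("what's the weather", 400.0)]
    print(history_weight("just the ones with Luke Wilson", history, now=1000.0))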

At block 524 of process 500, the text representation may be displayed on a display unit. For example, fig. 6F shows a text representation 616 corresponding to a user utterance in the sampled audio data. In some examples, blocks 522 and 524 may be performed when sampling audio data. In particular, the textual representation 616 of the user utterance may be displayed in a streaming manner such that the textual representation 616 is displayed in real-time as the audio data is sampled and the STT processing is performed on the sampled audio data. Displaying the textual representation 616 may provide confirmation to the user that the digital assistant is properly processing the user request.

At block 526 of process 500, a user intent corresponding to the user utterance may be determined. The user intent may be determined by performing natural language processing on the text representation of block 522. In particular, the text representation may be processed using a natural language processing module (e.g., natural language processing module 432) to derive the user intent. For example, referring to fig. 6F, from the text representation 616 corresponding to "find me romantic comedies starring Reese Witherspoon," it may be determined that the user intent is to request a search for media items that are classified as romantic comedies and that star the actress Reese Witherspoon. In some examples, block 526 may further include generating, using the natural language processing module, a structured query representing the determined user intent. In the present example of "find me romantic comedies starring Reese Witherspoon," a structured query may be generated representing a media search query for media items that are classified as romantic comedies and that star Reese Witherspoon.

In some examples, natural language processing used to determine user intent may favor media-related user intent. In particular, the natural language processing module may be trained to identify media-related words and phrases (e.g., media title, media category, actor, MPAA movie rating tag, etc.) that trigger media-related nodes in the ontology. For example, the natural language processing module may identify the phrase "Jurassic Park" in the textual representation as a movie title and thereby trigger a "media search" node in the ontology associated with an executable intent to search for media items. In some examples, biasing may be implemented by limiting nodes in the ontology to a predetermined set of media-related nodes. For example, the set of media related nodes may be nodes associated with an application of the media device. Further, in some examples, the bias may be implemented by weighting candidate user intents that are relevant to the media more heavily than candidate user intents that are not relevant to the media.
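Both forms of bias described above (restricting to a predetermined set of media-related intents, and weighting media-related candidate intents more heavily) can be sketched as follows. This is a minimal, hypothetical Python sketch; the intent names, scores, weight factor, and the resolve_intent function are illustrative assumptions, not the disclosed natural language processing module.

# Hypothetical sketch: bias intent resolution toward media-related intents by
# (a) optionally restricting candidates to a predetermined media-related set and
# (b) weighting media-related candidates more heavily than other candidates.

MEDIA_INTENTS = {"media_search", "play_media", "media_info"}

def resolve_intent(candidates, restrict_to_media=False, media_weight=1.5):
    """`candidates` is a list of (intent_name, score); return the best intent name."""
    if restrict_to_media:
        candidates = [c for c in candidates if c[0] in MEDIA_INTENTS]
    def weighted(item):
        intent, score = item
        return score * (media_weight if intent in MEDIA_INTENTS else 1.0)
    return max(candidates, key=weighted)[0] if candidates else None

if __name__ == "__main__":
    candidates = [("web_search", 0.62), ("media_search", 0.55)]
    print(resolve_intent(candidates))   # media_search wins once weighted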

In some examples, the user intent may be obtained from a separate device (e.g., DA server 106). In particular, audio data may be transmitted to a standalone device to perform natural language processing. In these examples, the media device may indicate to the separate device (e.g., via data transmitted to the separate device with the sampled audio data) that the sampled audio data is associated with the media application. The indication may bias natural language processing towards media-related user intent.

At block 528 of process 500, it may be determined whether the sampled audio data includes a user request. This determination may be made in accordance with the determined user intent of block 526. If the user intent includes a user request to perform a task, it may be determined that the sampled audio data contains the user request. Conversely, if the user intent does not include a user request to perform a task, the sampled audio data may be determined to not contain the user request. Further, in some examples, if the user intent cannot be determined from the textual representation at block 526 or the textual representation cannot be determined from the sampled audio data at block 522, then it may be determined that the sampled audio data does not contain the user request. In accordance with a determination that the audio data does not contain a user request, block 530 may be performed.

At block 530 of process 500, a request to clarify the user's intent may be displayed on a display unit. In one example, the request for clarification may be a request that requires the user to repeat the user request. In another example, the request for clarification may be a statement that the digital assistant cannot understand the user utterance. In yet another example, an error message may be displayed to indicate that the user's intent cannot be determined. Further, in some examples, in accordance with a determination that the audio data does not contain a user request, no response may be provided.

Referring to fig. 5C, in accordance with a determination at block 528 that the sampled audio data contains a user request, block 532 may be performed. At block 532 of process 500, a task that at least partially satisfies the user request may be performed. For example, performing the task at block 532 may include performing one or more tasks defined in the structured query generated at block 526. The one or more tasks may be performed using a task flow processing module (e.g., task flow processing module 436) of the digital assistant. In some examples, the task may include changing a state or setting of an application on the media device. More specifically, the task may include, for example, selecting or playing a requested media item, opening or closing a requested application, or navigating through a displayed user interface in a requested manner. In some examples, the task may be performed at block 532 without outputting speech related to the task from the media device. Thus, although in these examples the user may provide requests to the digital assistant in the form of speech, the digital assistant may not provide responses to the user in the form of speech. Rather, the digital assistant may respond only visually by displaying results on the display unit. This may be desirable for preserving the communal experience of consuming media content.

In other examples, the task may include retrieving and displaying requested information. In particular, performing the task at block 532 may include performing one or more of blocks 534-536. At block 534 of process 500, results that at least partially satisfy the user request may be obtained. The results may be obtained from an external service (e.g., external service 120). In one example, the user request may be a request to perform a media search query, such as "find me romantic comedies starring Reese Witherspoon." In this example, block 534 may include performing the requested media search (e.g., using a media-related database of an external service) to obtain media items that are classified as romantic comedies and that star the actress Reese Witherspoon. In other examples, the user request may include a request for other types of information, such as weather, sports, or the stock market, and the corresponding information may be obtained at block 534.

At block 536 of the process 500, a second user interface may be displayed on the display unit. The second user interface may include a portion of the results obtained at block 534. For example, as shown in fig. 6G, the second user interface 618 may be displayed on the display unit. The second user interface 618 may include media items 622 that satisfy the user request "find me romantic comedies starring Reese Witherspoon." In this example, the media items 622 may include media items such as "Legally Blonde," "Legally Blonde 2," "Hot Pursuit," and "This Means War." The second user interface 618 may also include a text header 620 that describes the obtained results. The text header 620 may paraphrase a portion of the user request to convey the impression that the user's request has been directly addressed. This provides a more personalized interactive experience between the user and the digital assistant. In the present example shown in fig. 6G, the media items 622 are organized in a single row across the second user interface 618. It should be appreciated that in other examples, the organization and presentation of the media items 622 may vary.

The second user interface 618 may further include a cursor 624 for navigating through the second user interface 618 and selecting media items 622. The position of the cursor may be indicated by visually highlighting the media item on which the cursor is located relative to the other media items. For example, in this example, the media item 623 on which the cursor 624 is located may be larger and bolder than other media items displayed in the second user interface 618.

In some examples, at least a portion of the displayed content may continue to be displayed while the second user interface is displayed. For example, as shown in fig. 6G, the second user interface 618 may be a small pane displayed at the bottom of the display unit, while the media content 602 continues to play on the media device and is displayed on the display unit above the second user interface 618. The second user interface 618 may be overlaid on the media content 602 being played. In this example, the display area of the second user interface 618 on the display unit may be smaller than the display area of the media content 602 on the display unit. This may be desirable for reducing the intrusiveness of the results displayed by the digital assistant while the user is consuming media content. It should be appreciated that in other examples, the display area of the second user interface may vary relative to the display area of the displayed content. Further, as represented by the solid font of "MEDIA PLAYING" in fig. 6G, the brightness of the media content 602 may be restored to normal (e.g., the brightness in fig. 6A prior to detecting the user input) while the second user interface 618 is displayed. This may serve to indicate to the user that the interaction with the digital assistant has been completed. Accordingly, the user may continue to consume the media content 602 while viewing the requested results (e.g., media items 622).

In examples where media items obtained from a media search are displayed in the second user interface, the number of media items displayed may be limited. This may allow the user to focus on the most relevant results and may prevent the user from being overwhelmed with too many options when making a selection. In these examples, block 532 may further include determining whether the number of media items in the obtained results is less than or equal to a predetermined number (e.g., 30, 28, or 25). In accordance with a determination that the number of media items in the results is less than or equal to the predetermined number, all of the media items in the results may be included in the second user interface. In accordance with a determination that the number of media items in the results is greater than the predetermined number, only the predetermined number of media items in the results may be included in the second user interface.

Further, in some examples, only the media items in the results that are most relevant to the media search request may be displayed in the second user interface. In particular, each of the media items in the results may be associated with a relevance score with respect to the media search request. The displayed media items may have the highest relevance scores among the obtained results. Further, the media items in the second user interface may be arranged according to the relevance scores. For example, referring to fig. 6G, media items with higher relevance scores are more likely to be positioned closer to one side of the second user interface 618 (e.g., the side closer to the cursor 624), while media items with lower relevance scores are more likely to be positioned closer to the opposite side of the second user interface 618 (e.g., the side farther from the cursor 624). Further, each media item in the obtained results may be associated with a popularity rating. The popularity rating may be based on ratings of movie critics (e.g., Rotten Tomatoes ratings) or on the number of users who have selected the media item for playback. In some examples, the arrangement of the media items 622 in the second user interface 618 may be based on the popularity ratings. For example, media items with higher popularity ratings are more likely to be positioned on one side of the second user interface 618, while media items with lower popularity ratings are more likely to be positioned closer to the opposite side of the second user interface 618.
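
Capping the number of displayed items at a predetermined number and ordering them by relevance (or popularity) can be sketched as follows. This is a minimal, hypothetical Python sketch; the field names, the example titles and scores, the cap of 25, and the items_for_second_user_interface function are illustrative assumptions.

# Hypothetical sketch: cap the number of displayed media items at a
# predetermined number and order them by relevance score or popularity rating.

PREDETERMINED_MAX = 25

def items_for_second_user_interface(results, key="relevance", limit=PREDETERMINED_MAX):
    """`results` is a list of dicts with 'title', 'relevance', and 'popularity'."""
    ordered = sorted(results, key=lambda item: item[key], reverse=True)
    return ordered if len(ordered) <= limit else ordered[:limit]

if __name__ == "__main__":
    results = [{"title": "Legally Blonde", "relevance": 0.94, "popularity": 0.88},
               {"title": "Hot Pursuit", "relevance": 0.81, "popularity": 0.64},
               {"title": "This Means War", "relevance": 0.77, "popularity": 0.70}]
    print([item["title"] for item in items_for_second_user_interface(results)])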

As shown by the different flows (e.g., D, E, F and G) after block 532 in fig. 5C, one of block 538 of fig. 5D, block 542 of fig. 5E, block 550 of fig. 5F, or block 570 of fig. 5I may be performed after block 532. Block 538, block 542, block 550, or block 570 may be performed while the second user interface is displayed at block 536. In some examples, process 500 may alternatively include a determination step after block 536 to determine the appropriate flow (e.g., D, E, F or G) to perform. In particular, user input may be detected after block 536, and it may be determined whether the detected user input corresponds to a second user input (e.g., block 538), a third user input (e.g., block 542), a fourth user input (e.g., block 550), or a sixth user input (e.g., block 570). For example, in accordance with a determination that the user input corresponds to the third user input of block 542, one or more of blocks 544 through 546 may be performed. Following block 546, a similar determination step may also be included.

At block 538 of process 500 and referring to fig. 5D, a second user input may be detected. As described above, the second user input may be detected while the second user interface is displayed on the display unit. The second user input may be detected on a remote control of the media device. For example, the second user input may comprise a first predetermined motion pattern on a touch-sensitive surface of the remote control. In one example, the first predetermined motion pattern may include a continuous contact motion in a first direction from a first point of contact to a second point of contact on the touch-sensitive surface. The first direction may be a downward direction or a direction towards the user when holding the remote control in a desired manner. It should be appreciated that other forms of input are contemplated for the second user input. In response to detecting the second user input, block 540 may be performed.

At block 540 of process 500, the second user interface may be dismissed such that it is no longer displayed. For example, referring to fig. 6G, in response to detecting the second user input, the second user interface 618 may cease to be displayed. In this example, upon dismissing the second user interface 618, the media content 602 may be displayed full-screen on the display unit. For example, upon ceasing to display the second user interface 618, the media content 602 may be displayed as shown in FIG. 6A.

At block 542 of process 500 and referring to fig. 5E, a third user input may be detected. The third user input may be detected while the second user interface is displayed on the display unit. A third user input may be detected on a remote control of the media device. For example, the third user input may comprise a second predetermined motion pattern on the touch-sensitive surface of the remote control. The second predetermined motion pattern may include continuous contact motion in a second direction from a third point of contact to a fourth point of contact on the touch-sensitive surface. The second direction may be opposite to the first direction. Specifically, the second direction may be an upward direction or a direction away from the user when holding the remote controller in an intended manner. One or more of blocks 544 through 546 may be performed in response to detecting the third user input. In some examples, as shown in fig. 6G, the second user interface 618 may include a graphical indicator 621 (e.g., an arrow) to indicate to the user that the second user interface 618 may be expanded by providing a third user input. Further, the graphical indicator 621 may indicate to the user a second direction associated with a second predetermined motion pattern on the touch-sensitive surface for a third user input.

At block 544 of the process 500, second results may be obtained. The obtained second results may be similar to, but different from, the results obtained at block 534. In some examples, the obtained second results may at least partially satisfy the user request. For example, the obtained second results may share one or more characteristics, parameters, or attributes of the results obtained at block 534. In the example shown in fig. 6F-6G, block 544 may include executing one or more additional media search queries related to the media search query executed at block 534. For example, the one or more additional media search queries may include searching for media items that are classified as romantic comedies or searching for media items starring Reese Witherspoon. Thus, the obtained second results may include media items that are romantic comedies (e.g., media items 634) and/or media items starring Reese Witherspoon (e.g., media items 636).

In some examples, the obtained second results may be based on a previous user request received prior to detecting the user input at block 504. In particular, the second results may include one or more characteristics or parameters of the previous user request. For example, a previous user request may be "show me movies released in the last 5 years." In this example, the obtained second results may include media items that are romantic comedies starring Reese Witherspoon and that were released in the last 5 years.

Further, in some examples, block 544 may include obtaining second results that are contextually related to the item that is the focus of the second user interface at the time the third user input is detected. For example, referring to fig. 6G, the cursor 624 may be positioned at the media item 623 in the second user interface 618 when the third user input is detected. The media item 623 may be, for example, the movie "Legally Blonde." In this example, the obtained second results may share one or more characteristics, attributes, or parameters associated with the media item "Legally Blonde." In particular, the obtained second results may include media items that, like "Legally Blonde," relate to attending law school or feature a professional woman in a leading role.

At block 546 of process 500, a third user interface may be displayed on the display unit. In particular, the display of the second user interface at block 536 may be replaced with the display of the third user interface at block 546. In some examples, in response to detecting the third user input, the second user interface may be expanded to a third user interface. The third user interface may occupy at least a majority of the display area of the display unit. The third user interface may include a portion of the results of block 534. Further, the third user interface may include a portion of the retrieved second results of block 544.

In one example, as shown in FIG. 6H, the third user interface 626 can occupy substantially the entire display area of the display unit. In this example, the previous display of media content 602 and second user interface 618 may be replaced with the display of third user interface 626. In response to detecting the third user input, playback of the media content may be paused on the media device. This may be desirable to prevent the user from losing any portion of the media content 602 while browsing media items in the third user interface 626.

The third user interface 626 may include media items 622 that satisfy the user request "find me romantic comedies starring Reese Witherspoon." Further, the third user interface 626 can include media items 632 that at least partially satisfy the same user request. The media items 632 may include multiple groups of media items, each corresponding to different characteristics, attributes, or parameters. In this example, the media items 632 may include media items 634 that are romantic comedies and media items 636 that star Reese Witherspoon. Each media item group may be labeled with a text header (e.g., text headers 628, 630). The text header may describe one or more attributes or parameters associated with the respective group of media items. Further, each text header can be an exemplary user utterance that, when provided to the digital assistant by a user, can cause the digital assistant to obtain a similar group of media items. For example, referring to text header 628, in response to receiving the user utterance "romantic comedies" from the user, the digital assistant may retrieve and display media items that are romantic comedies (e.g., media items 634).

Although in the example shown in fig. 6H the media items 622 are based on the initial user request "find me romantic comedies starring Reese Witherspoon," it should be appreciated that in other examples the media items 632 may be based on other factors, such as media selection history, media search history, the order in which previous media searches were received, relationships between media-related attributes, popularity of media items, and so forth.

In examples where the user request is a media search request, the obtained second results may be based on the number of media items in the results obtained at block 534. In particular, in response to detecting the third user input, it may be determined whether the number of media items in the obtained results is less than or equal to a predetermined number. In accordance with a determination that the number of media items in the results is less than or equal to the predetermined number, the second results may include media items that are different from the media items in the second user interface. The obtained second results may at least partially satisfy the media search request performed at block 534. At the same time, the second results may be broader in scope than the previous results and may be associated with only some of the parameters defined in the media search request performed at block 534. This may provide the user with a broader set of results, which may be desirable for giving the user more options to select from.

In some examples, in accordance with a determination that the number of media items in the results of block 534 is less than or equal to the predetermined number, it may be determined whether the media search request includes more than one search attribute or parameter. In accordance with a determination that the media search request includes more than one search attribute or parameter, the obtained second results may include media items associated with the more than one search attribute or parameter. Further, the media items in the obtained second results may be organized in the third user interface according to the more than one search attribute or parameter.

In the example shown in fig. 6F-6H, the media search request "find me romantic comedies starring Reese Witherspoon" may be determined to include more than one search attribute or parameter (e.g., "romantic comedies" and "Reese Witherspoon"). In accordance with a determination that the media search request includes more than one search attribute or parameter, the obtained second results may include media items 634 associated with the search parameter "romantic comedies" and media items 636 associated with the search parameter "movies starring Reese Witherspoon." As shown in fig. 6H, the media items 634 may be organized under the "romantic comedies" category, and the media items 636 may be organized under the "Reese Witherspoon" category.

In some examples, in accordance with a determination that the number of media items in the results of block 534 is greater than the predetermined number, the third user interface may include a first portion and a second portion of the obtained results. The first portion of the obtained results may include the predetermined number of media items (e.g., those having the highest relevance scores). The second portion of the obtained results may be different from the first portion and may include more media items than the first portion. Further, it may be determined whether the media items in the obtained results include more than one media type (e.g., movies, television shows, music, applications, games, etc.). In response to determining that the media items in the results include more than one media type, the media items in the second portion of the results may be organized according to media type.
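
Splitting the obtained results into a top-ranked first portion and a second portion grouped by media type can be sketched as below. This is a minimal, hypothetical Python sketch; the field names, the predetermined number, and the partition_results function are illustrative assumptions, and the choice to group only the remaining items is one possible reading of the arrangement described above.

# Hypothetical sketch: when the results exceed the predetermined number, keep the
# top-ranked items as the first portion and group further items by media type
# as the second portion.
from collections import defaultdict

def partition_results(results, predetermined_number=25):
    ordered = sorted(results, key=lambda item: item["relevance"], reverse=True)
    if len(ordered) <= predetermined_number:
        return ordered, {}
    first_portion = ordered[:predetermined_number]
    by_type = defaultdict(list)
    for item in ordered[predetermined_number:]:
        by_type[item["media_type"]].append(item)
    return first_portion, dict(by_type)

if __name__ == "__main__":
    results = [{"title": f"Movie {i}", "relevance": 1.0 - i / 100, "media_type": "movie"}
               for i in range(30)]
    first, second = partition_results(results, predetermined_number=25)
    print(len(first), {k: len(v) for k, v in second.items()})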

In the example shown in fig. 6I, the results obtained at block 534 may include media items that are romantic comedies starring Reese Witherspoon. In accordance with a determination that the number of media items in the results is greater than the predetermined number, a first portion of the results (media items 622) and a second portion of the results (media items 638) may be displayed in the third user interface 626. In response to determining that the results include more than one media type (e.g., movies and television shows), the media items 638 may be organized according to media type. In particular, the media items 640 may be organized under the "movies" category, and the media items 642 may be organized under the "television shows" category. Further, in some examples, each group of media items (e.g., media items 640, media items 642) corresponding to a respective media type (e.g., movies, television shows) may be sorted according to the most popular category, actor/director, or release date within the respective group. It should be appreciated that in other examples, in response to determining that the media items in the obtained results are associated with more than one media attribute or parameter, the media items in the second portion of the obtained results may be organized according to media attribute or parameter (rather than media type).

In some examples, a user input representing a scroll command (e.g., a fourth user input described below at block 550) may be detected. In response to receiving a user input representing a scroll command, the expanded user interface (or more specifically, the items in the expanded user interface) may be caused to scroll. While scrolling, it may be determined whether the expanded user interface is scrolled beyond a predetermined location in the expanded user interface. In response to determining that the expanded user interface has scrolled beyond the predetermined position in the expanded user interface, media items in a third portion of the resulting results may be displayed on the expanded user interface. The media items in the third portion can be organized according to one or more media content providers (e.g., iTunes, Netflix, HuluPlus, HBO, etc.) associated with the media items in the third portion. It should be appreciated that in other examples, other media items may be obtained in response to determining that the expanded user interface has scrolled beyond a predetermined position in the expanded user interface. For example, popular media items or media items related to the results obtained may be obtained.
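
Triggering a third portion of results, grouped by content provider, once the interface scrolls past a predetermined position can be sketched as follows. This is a minimal, hypothetical Python sketch; the scroll threshold, the provider and title fields, and the maybe_show_third_portion function are illustrative assumptions.

# Hypothetical sketch: when the expanded interface scrolls past a predetermined
# position, return a third portion of results organized by content provider.
from collections import defaultdict

PREDETERMINED_SCROLL_POSITION = 0.8   # fraction of the scrollable range

def maybe_show_third_portion(scroll_position, third_portion_items):
    if scroll_position <= PREDETERMINED_SCROLL_POSITION:
        return None                   # not scrolled far enough yet
    grouped = defaultdict(list)
    for item in third_portion_items:
        grouped[item["provider"]].append(item["title"])
    return dict(grouped)

if __name__ == "__main__":
    items = [{"title": "Legally Blonde", "provider": "iTunes"},
             {"title": "Hot Pursuit", "provider": "Netflix"}]
    print(maybe_show_third_portion(0.9, items))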

As shown by the different flows (e.g., B, F, G, and H) starting at block 546 in fig. 5E, block 550 of fig. 5F, block 558 of fig. 5G, block 566 of fig. 5H, or block 570 of fig. 5I may be performed after block 546. In particular, in some examples, block 550, block 558, block 566, or block 570 may be performed while the third user interface is displayed at block 546.

At block 550 of process 500 and referring to fig. 5F, a fourth user input may be detected. The fourth user input may be detected while the second user interface (e.g., the second user interface 618) or the third user interface (e.g., the third user interface 626) is displayed on the display unit. In some examples, the fourth user input may be detected on a remote control of the media device. The fourth user input may indicate a direction (e.g., up, down, left, right) on the display unit. For example, the fourth user input may be a contact action from a first location on the touch-sensitive surface of the remote control to a second location on the touch-sensitive surface to the right of the first location. The touch action may thus correspond to a rightward direction on the display unit. In response to detecting the fourth user input, block 552 may be executed.

At block 552 of process 500, the focus of the second user interface or the third user interface may be switched from a first item to a second item on the second user interface or the third user interface. The second item may be positioned in a direction relative to the first item (e.g., the direction corresponding to the fourth user input). For example, in fig. 6G, the focus of the second user interface 618 may be on the media item 623, with the cursor 624 positioned at the media item 623. In response to detecting a fourth user input corresponding to a rightward direction on the display unit, the focus of the second user interface 618 may be switched from the media item 623 in fig. 6G to the media item 625 positioned to the right of the media item 623 in fig. 6J. In particular, the position of the cursor 624 may change from the media item 623 to the media item 625. In another example, referring to fig. 6H, the focus of the third user interface 626 can be located on the media item 623. In response to detecting a fourth user input corresponding to a downward direction on the display unit, the focus of the third user interface 626 can be switched from the media item 623 in fig. 6H to the media item 627 positioned below the media item 623 in fig. 6K. In particular, the position of the cursor 624 may change from the media item 623 to the media item 627.
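
Moving the focus to the neighboring item in the indicated direction can be sketched as below. This is a minimal, hypothetical Python sketch; the grid representation, the DIRECTIONS mapping, the item names, and the switch_focus function are illustrative assumptions.

# Hypothetical sketch: move the interface focus from the current item to the
# neighboring item in the direction indicated by the user input.

DIRECTIONS = {"left": (0, -1), "right": (0, 1), "up": (-1, 0), "down": (1, 0)}

def switch_focus(grid, focus, direction):
    """`grid` is a 2D list of item names; `focus` is a (row, col) tuple."""
    d_row, d_col = DIRECTIONS[direction]
    row, col = focus[0] + d_row, focus[1] + d_col
    if 0 <= row < len(grid) and 0 <= col < len(grid[row]):
        return (row, col)   # move the cursor to the neighboring item
    return focus            # no item in that direction; keep the current focus

if __name__ == "__main__":
    grid = [["media_item_623", "media_item_625"],
            ["media_item_627", "media_item_629"]]
    print(switch_focus(grid, (0, 0), "right"))   # -> (0, 1)
    print(switch_focus(grid, (0, 0), "down"))    # -> (1, 0)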

At block 554 of the process 500, a selection of a media item of the one or more media items may be received via the second user interface or the third user interface. For example, referring to fig. 6J, a selection of the media item 625 may be received via the second user interface 618 by detecting a user input corresponding to the user selection while the cursor 624 is positioned at the media item 625. Similarly, referring to FIG. 6K, a selection of a media item 627 may be received via the third user interface 626 by detecting a user input corresponding to the user selection while the cursor 624 is positioned at the media item 627. In response to receiving a selection of a media item of the one or more media items, block 556 may be performed.

At block 556 of the process 500, media content associated with the selected media item may be displayed on the display unit. In some examples, the media content may be a movie, video, television program, animation, etc. that is being played or streamed on the media device. In some examples, the media content may be a video game, an electronic book, an application program, or a program running on a media device. Further, in some examples, the media content may be information related to the media item. The information may be product information describing various characteristics of the selected media item (e.g., a story brief, actors, director, author, release date, rating, duration, etc.).

At block 558 of process 500 and referring to fig. 5G, a fifth user input may be detected. In some examples, the fifth user input may be detected while the third user interface (e.g., third user interface 626) is displayed. In these examples, the fifth user input may be detected when the focus of the third user interface is on a media item in the top row of the third user interface (e.g., one of the media items 622 in the third user interface 626 of fig. 6H). In other examples, the fifth user input may be detected while the first user interface is displayed. In these examples, the fifth user input may be detected when any of blocks 508 through 514 are performed. In some examples, the fifth user input may be detected on a remote control of the media device. The fifth user input may be similar or identical to the third user input. For example, the fifth user input may comprise a continuous contact action in a second direction on the touch-sensitive surface (e.g., a sliding-up contact action). In other examples, the fifth user input may be an activation of an affordance. The affordance may be associated with a virtual keyboard interface or a typed search interface. In response to detecting the fifth user input, one or more of blocks 560 through 564 may be performed.

At block 560 of process 500, a search field configured to receive typed search input may be displayed. For example, as shown in fig. 6L, the search field 644 may be displayed on the display unit. In some examples, the search field may be configured to receive typed search queries. A typed search query may be a media-related search query, such as a search for media items. In some examples, the search field may be configured to perform media-related searches based on text string matching between text entered via the search field 644 and stored text associated with media items. Further, in some examples, the digital assistant may not be configured to receive input via the search field 644. This may encourage users to interact with the digital assistant via the speech interface rather than the typed interface, thereby promoting a more human interface between the media device and the user. It should be appreciated that in some examples, the search field may already be displayed in the second user interface (e.g., second user interface 618) or the third user interface (e.g., third user interface 626). In these examples, block 560 may not need to be performed.

At block 562 of process 500, a virtual keyboard interface may be displayed on the display unit. For example, as shown in fig. 6L, a virtual keyboard interface 646 may be displayed. Virtual keyboard interface 646 may be configured such that user input received via virtual keyboard interface 646 results in text entry in a search field. In some examples, the virtual keyboard interface is not available to interact with the digital assistant.

At block 564 of process 500, the focus of the user interface may be switched to the search field. For example, referring to fig. 6L, the search field 644 may be highlighted at block 564. Additionally, a text input cursor may be positioned in the search field 644. In some examples, text prompting the user to enter a typed search may be displayed in the search field. As shown in fig. 6L, the text 648 includes a prompt to "enter a search."

At block 566 of process 500 and referring to fig. 5H, a seventh user input may be detected. In some examples, a seventh user input may be detected while the third user interface (e.g., third user interface 626) is displayed. In some examples, the seventh user input may include pressing a button of a remote control of the electronic device. The button may be, for example, a menu button for navigating to a main menu user interface of the electronic device. It should be appreciated that in other examples, the seventh user input may comprise other forms of user input. In response to detecting the seventh user input, block 568 may be performed.

At block 568 of the process 500, the third user interface may cease to be displayed on the display unit. In particular, the seventh user input may cause the third user interface to be dismissed. In some examples, the seventh user input may cause a main menu user interface to be displayed in place of the third user interface. Alternatively, in examples where media content (e.g., media content 602) was displayed prior to displaying the third user interface (e.g., third user interface 626) and playback of that media content on the electronic device was paused while the third user interface was displayed (e.g., paused in response to detecting the third user input), playback of the media content on the electronic device may resume in response to detecting the seventh user input. Accordingly, the media content may be displayed in response to detecting the seventh user input.

At block 570 of process 500 and referring to fig. 5I, a sixth user input may be detected. As shown in fig. 6M, a sixth user input may be detected while third user interface 626 is displayed. However, in other examples, the sixth user input may alternatively be detected while the second user interface (e.g., second user interface 618) is displayed. Upon detecting the sixth user input, the second user interface or the third user interface may include a portion of the result that at least partially satisfies the user request. The sixth user input may comprise an input to invoke a digital assistant of the electronic device. In particular, the sixth user input may be similar or identical to the user input of the second input type described above with reference to block 516. For example, the sixth user input may include pressing a particular button on a remote control of the media device and holding the button for more than a predetermined duration (e.g., a long press). In response to detecting the sixth user input, one or more of blocks 572-592 may be performed.

At block 572 of the process 500, second audio data may be sampled. Block 572 may be similar or identical to block 518 described above. In particular, the sampled second audio data may include a user utterance from a second user. The second user utterance may represent a second user request for the digital assistant. In some examples, the second user request may be a request to perform a second task. For example, referring to fig. 6M, the sampled second audio data may include the second user utterance "Only show me those with Luke Wilson." In this example, the second user utterance may represent a second user request to refine the previous media search to include only media items having the actor Luke Wilson. In this example, the second user utterance is in natural language form. Further, the second user request may be partially specified, in that the second user utterance does not explicitly specify all of the information needed to define the user request. For example, the second user utterance does not explicitly specify what "those" refers to. In other examples, the second user request may be a request to play a media item or to provide specific information (e.g., weather, stock market, sports, etc.).

It should be appreciated that in some examples, blocks 520 through 526 described above may be performed similarly with respect to the sixth user input. Specifically, as shown in fig. 6M, an active visual indicator 614 may be displayed on the display unit when the sixth user input is detected. A second textual representation 650 of the second user utterance may be determined (e.g., using STT processing module 430) and displayed on the display unit. A second user intent corresponding to the second user utterance may be determined based on the second textual representation (e.g., using natural language processing module 432). In some examples, as shown in fig. 6M, the content displayed on the display unit may be faded or dimmed in response to detecting the sixth user input. This may serve to highlight the active visual indicator 614 and the second textual representation 650.

At block 574 of process 500, it may be determined whether the sampled second audio data includes a second user request. Block 574 may be similar or identical to block 528 described above. In particular, the determination at block 574 may be made based on a second user intent determined from a second textual representation of the second user utterance. In accordance with a determination that the second audio data does not contain a user request, block 576 may be performed. Alternatively, one or more of blocks 578-592 may be performed in accordance with a determination that the second audio data comprises a second user request.

At block 576 of process 500, a request to clarify the user's intent may be displayed on a display unit. Block 576 may be similar or identical to block 530 described above.

At block 578 of the process 500, it may be determined whether the second user request is a request to refine the results of the user request. In some examples, the determination may be made according to the second user intent corresponding to the second user utterance. In particular, the second user request may be determined to be a request to refine the results of the user request based on an explicit indication, recognized in the second user utterance, of an intent to refine the results of the user request. For example, referring to fig. 6M, the second textual representation 650 may be parsed during natural language processing to determine whether the second user utterance includes a predetermined word or phrase corresponding to an express intent to refine the media search results. Examples of words or phrases corresponding to an express intent to refine media search results may include "only," "filtered by," and so forth. Thus, it may be determined, based on the word "only" in the second textual representation 650, that the second user request is a request to refine the media search results associated with the user request "find romantic comedies starring Reese Witherspoon." It should be appreciated that other techniques may be implemented to determine whether the second user request is a request to refine the results of the user request. In accordance with a determination that the second user request is a request to refine the results of the user request, one or more of blocks 580 through 582 may be performed.
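
The keyword check described above can be illustrated with a short sketch. This is a minimal, hypothetical illustration rather than the implementation described by this disclosure; the marker list, the function name, and the simple substring matching are assumptions chosen for brevity.

```python
# Hypothetical sketch: detect whether a follow-up utterance expresses an
# express intent to refine the previous media search results.

REFINEMENT_MARKERS = [
    "only",         # "Only show me those with Luke Wilson."
    "just",         # "Just the ones from the last 10 years."
    "filtered by",  # "filtered by rating"
]

def is_refinement_request(text_representation: str) -> bool:
    """Return True if the textual representation contains a predetermined
    word or phrase corresponding to an intent to refine prior results."""
    text = text_representation.lower()
    return any(marker in text for marker in REFINEMENT_MARKERS)

print(is_refinement_request("Only show me those with Luke Wilson."))  # True
print(is_refinement_request("Find horror movies."))                   # False
```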

At block 580 of process 500, a subset of the results that at least partially satisfy the user request may be obtained. In some examples, the subset of results may be obtained by filtering the existing results according to the additional parameters defined in the second user request. For example, the results obtained at block 534 (e.g., including media items 622) may be filtered such that media items having the actor Luke Wilson are identified. In other examples, a new media search query that combines the requirements of the user request and the second user request may be executed. For example, the new media search query may be a search query for media items classified as romantic comedies and having the actors Reese Witherspoon and Luke Wilson. In this example, the new media search query may return media items such as "Legally Blonde" and "Legally Blonde 2."
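
Both ways of obtaining the refined subset can be sketched as follows. The dictionary-based media items, the field names, and the search_media callback are assumptions for illustration only, not an API defined by this disclosure.

```python
# Hypothetical sketch: obtain a refined subset either by filtering the
# previously obtained results or by executing a new, combined search query.

def filter_existing(results, actor):
    """Path 1: filter prior results by the additional parameter."""
    return [item for item in results if actor in item.get("cast", [])]

def combined_query(previous_params, new_params, search_media):
    """Path 2: merge old and new requirements into one new search."""
    merged = {**previous_params, **new_params}
    return search_media(merged)

previous_results = [
    {"title": "Legally Blonde", "genre": "romantic comedy",
     "cast": ["Reese Witherspoon", "Luke Wilson"]},
    {"title": "Sweet Home Alabama", "genre": "romantic comedy",
     "cast": ["Reese Witherspoon"]},
]
print(filter_existing(previous_results, "Luke Wilson"))
# Only "Legally Blonde" remains in the refined subset.
```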

In examples where the sixth user input is detected while displaying the third user interface, additional results related to the user request and/or the second user request may be obtained. The additional results may include media items having one or more of the attributes or parameters described in the user request and/or the second user request. Further, the additional results may not include all of the attributes or parameters described in the user request and the second user request. For example, referring to the examples described in fig. 6H and 6M, the additional results may include media items having at least one (but not all) of the following attributes or parameters: romantic comedy, Reese Witherspoon, and Luke Wilson. The additional results may provide the user with a broader set of results, which may be desirable for offering more options to select from. Further, the additional results may be related results that are likely to be of interest to the user.

At block 582, the subset of results may be displayed on the display unit. For example, as shown in FIG. 6N, the subset of results may include media items 652, which may include movies such as "Legally Blonde" and "Legally Blonde 2." In this example, media items 652 are displayed in the top row of the third user interface 626. The text header 656 may describe the attributes or parameters associated with the displayed media items 652. In particular, the text header 656 may include a paraphrase of the user intent associated with the second user utterance. In examples where the sixth user input is detected while the second user interface (e.g., second user interface 618 shown in fig. 6G) is displayed, the media items 652 may instead be displayed in the second user interface. In these examples, the media items 652 may be displayed as a single row across the second user interface. It should be appreciated that the manner in which the media items 652 are displayed in the second user interface or the third user interface may vary.

In examples where the sixth user input is detected while displaying the third user interface, additional results related to the user request and/or the second user request may be displayed in the third user interface. For example, referring to FIG. 6N, the additional results may include media items 654 having one or more parameters described in the user request and/or the second user request. In particular, media items 654 may include media items 658 that are romantic comedies starring Luke Wilson, and media items 660 that star Luke Wilson and were released in the last 10 years. Each group of media items (e.g., media items 658, media items 660) may be labeled with a text header (e.g., text header 662, text header 664). The text header may describe one or more parameters associated with the respective group of media items. The text header may be in natural language form. Further, each text header may be an exemplary user utterance that, when provided to the digital assistant by a user, can cause the digital assistant to obtain a similar set of media items. For example, referring to text header 662, in response to receiving the user utterance "romantic comedies starring Luke Wilson" from the user, the digital assistant may obtain and display media items (e.g., media items 658) that are romantic comedies starring Luke Wilson.

Referring again to block 578, it may be determined that the second user request is not a request to refine the results of the user request. Such a determination may be made based on the absence in the second user utterance of any explicit indication of an intent to refine the results of the user request. For example, when the second textual representation of the second user utterance is parsed during natural language processing, no predetermined word or phrase corresponding to an explicit intent to refine the media search results may be recognized. This may be because the second user request is a request unrelated to the previous user request (e.g., a new request). For example, the second user request may be "find horror movies," which is unrelated to the previous user request "find romantic comedies starring Reese Witherspoon." Alternatively, the second user request may include ambiguous language that could be interpreted either as a request to refine the results of the previous user request or as a new request unrelated to the previous user request. For example, referring to fig. 6P, the second user utterance may be "Luke Wilson," which may be interpreted as a request to refine the results of the previous user request (e.g., to include only media items with the actor Luke Wilson), or may be interpreted as a new request unrelated to the previous user request (e.g., a new media search for media items with the actor Luke Wilson). In these examples, the second user request may be determined not to be a request to refine the results of the user request. In accordance with a determination that the second user request is not a request to refine the results of the user request, one or more of blocks 584 through 592 may be performed.

At block 584 of process 500, a second task that at least partially satisfies the second user request may be performed. Block 584 may be similar to block 532 described above, except that the second task of block 584 may be different from the task of block 532. Block 584 may include one or more of blocks 586 through 588.

At block 586 of the process 500, a third result that at least partially satisfies the second user request may be obtained. Block 586 may be similar to block 534 described above. Referring to the example shown in fig. 6P, the second user utterance "Luke Wilson" may be interpreted as a request to execute a new media search query to identify media items having an actor Luke Wilson. Thus, in this example, block 586 may include performing the requested media search to obtain the media item with actor Luke Wilson. It should be appreciated that in other examples, the user request may include a request for other types of information (e.g., weather, sports, stock market, etc.), and the corresponding type of information may be obtained at block 586.

At block 588 of process 500, a portion of the third results may be displayed on the display unit. For example, referring to fig. 6Q, third results including media items 670 with the actor Luke Wilson (e.g., movies such as "Playing It Cool," "The Skeleton Twins," and "You Kill Me") may be displayed in the third user interface 626. In this example, the media items 670 may be displayed in the top row of the third user interface 626. The text header 678 may describe the attributes associated with the displayed media items 670. In particular, the text header 678 may include a paraphrase of the determined user intent associated with the second user utterance. In examples where the sixth user input is detected while the second user interface (e.g., second user interface 618 shown in fig. 6G) is displayed, the media items 670 may instead be displayed in the second user interface. In these examples, the media items 670 may be displayed as a single row across the second user interface. It should be appreciated that, in other examples, the organization or configuration of the media items 670 in the second or third user interface may vary.

At block 590 of process 500, fourth results may be obtained that at least partially satisfy the user request and/or the second user request. In particular, the fourth results may include media items having one or more of the attributes or parameters defined in the user request and/or the second user request. Referring to the examples shown in fig. 6P and 6Q, the fourth results may include media items having one or more of the following attributes or parameters: romantic comedy, Reese Witherspoon, and Luke Wilson. For example, the fourth results may include media items 676 that are classified as romantic comedies and star Luke Wilson. The fourth results may provide the user with a broader set of results and thus more options to choose from, which may be desirable. Further, the fourth results may be associated with alternative predicted user intents derived from the second user request and one or more previous user requests, in order to increase the likelihood of satisfying the user's actual intent. This can serve to improve the accuracy and relevance of the results returned to the user, thereby improving the user experience.

In some examples, at least a portion of the fourth results may include media items having all of the parameters defined in the user request and the second user request. For example, the fourth results may include media items 674 that are classified as romantic comedies and star Reese Witherspoon and Luke Wilson. The media items 674 may be associated with the alternative intent of using the second user request to refine the results of the previous user request. Obtaining the media items 674 may increase the likelihood of satisfying the user's actual intent in the event that the user did intend the second request to be a request to refine the previously obtained results.

In some examples, a portion of the fourth results may be based on the focus of the user interface at the time the sixth user input is detected. In particular, when the sixth user input is detected, the focus of the user interface may be on one or more items of the third user interface. In this example, a portion of the fourth results may be contextually related to the one or more items on which the user interface is focused. For example, referring to fig. 6K, the cursor 624 may be positioned over media item 627, and thus the focus of the third user interface 626 may be on media item 627. In this example, a portion of the fourth results may be obtained using an attribute or parameter associated with media item 627. For example, the category of "movies starring Reese Witherspoon" associated with media item 627 may be used to obtain a portion of the fourth results, where the obtained portion may include media items starring Reese Witherspoon and Luke Wilson. In another example, media item 627 may be an adventure movie, and thus a portion of the fourth results may include media items that are adventure movies starring Luke Wilson.
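
A sketch of how attributes of the focused item could be folded into the search parameters is given below. The dictionary layout and the helper name are assumptions made for this example, not an interface defined by this disclosure.

```python
# Hypothetical sketch: derive extra search parameters from the media item
# that has focus in the user interface when the input is detected.
from typing import Optional

def contextual_params(request_params: dict, focused_item: Optional[dict]) -> dict:
    """Combine the parameters of the spoken request with attributes of the
    item under the cursor, without overriding explicitly requested values."""
    params = dict(request_params)
    if focused_item:
        for key, value in focused_item.get("attributes", {}).items():
            params.setdefault(key, value)
    return params

focused = {"title": "Some Adventure Movie", "attributes": {"genre": "adventure"}}
print(contextual_params({"cast": "Luke Wilson"}, focused))
# -> {'cast': 'Luke Wilson', 'genre': 'adventure'}
```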

At block 592 of process 500, a portion of the fourth results may be displayed. In examples where the sixth user input is detected while displaying the third user interface, a portion of the fourth results may be displayed in the third user interface. For example, as shown in fig. 6Q, a portion of the fourth results may include media items 672 displayed in rows after the media items 670. The media items 672 may be associated with one or more of the attributes or parameters defined in the second user request and/or the user request (e.g., romantic comedy, Reese Witherspoon, and Luke Wilson). For example, media items 672 may include media items 676 that are romantic comedies starring Luke Wilson, and media items 674 that are romantic comedies starring Reese Witherspoon and Luke Wilson. Each group of media items (e.g., media items 674, media items 676) may be labeled with a text header (e.g., text header 680, text header 682). The text header may describe one or more of the attributes or parameters associated with the respective group of media items. The text header may be in natural language form. Further, each text header may be an exemplary user utterance that, when provided to the digital assistant by a user, may cause the digital assistant to obtain a similar set of media items having similar attributes.

As described above, the second user utterance "Luke Wilson" may be associated with two possible user intents: a first user intent to perform a new media search, or a second user intent to refine the results of the previous user request. The displayed media items 670 may satisfy the first user intent, and the displayed media items 674 may satisfy the second user intent. In this example, media items 670 and media items 674 are displayed in the top two rows. In this way, the results for the two most likely user intents associated with the second user request (e.g., a new search or a refinement of a previous search) may be displayed prominently (e.g., in the top two rows) in the third user interface 626. This may minimize the scrolling or browsing the user must perform in the third user interface before finding a desired media item for consumption. It should be appreciated that the manner in which media items 670 and media items 674 are prominently displayed in the third user interface 626 so as to minimize scrolling and browsing may vary.

Fig. 7A-7C illustrate a process 700 for operating a digital assistant for a media system, according to various examples. Process 700 may be performed using one or more electronic devices implementing a digital assistant. For example, process 700 may be performed using one or more of system 100, media system 128, media device 104, user device 122, or digital assistant system 400 described above. Fig. 8A-8W illustrate screenshots displayed by a media device on a display unit at various stages of process 700, according to various examples. Process 700 is described below with simultaneous reference to fig. 7A-7C and 8A-8W. It should be understood that some operations in process 700 may be combined, the order of some operations may be changed, and some operations may be omitted.

At block 702 of process 700, content may be displayed on a display unit (e.g., display unit 126). Block 702 may be similar or identical to block 502 described above. Referring to fig. 8A, the displayed content may include media content 802 (e.g., movies, videos, television programs, video games, etc.) that is played on a media device (e.g., media device 104). In other examples, the displayed content may include other content, such as content associated with an application running on the media device, or a user interface for interacting with a digital assistant of the media device. In particular, the displayed content may include a main menu user interface or a user interface containing objects or results previously requested by the user.

At block 704 of process 700, a user input may be detected. Block 704 may be similar or identical to block 504 described above. The user input may be used to invoke a digital assistant of the media device. In some examples, the user input may be detected while the content of block 702 is being displayed. The user input may be detected on a remote control of the media device (e.g., remote control 124). For example, the user input may correspond to the second input type described at block 516 of process 500. In particular, the user input at block 704 may include pressing a particular button on the remote control of the media device and holding the button for more than a predetermined duration (e.g., a long press). In response to detecting the user input, one or more of blocks 706-746 may be performed.

At block 706 of the process 700, audio data may be sampled. Block 706 may be similar or identical to block 518 described above. The sampled audio data may include a user utterance. The user utterance may represent a user request for the digital assistant of the media device. For example, referring to the example shown in fig. 8A, the sampled audio data may include the user utterance "What time is it in Paris?" The user utterance may be in an unstructured natural language form. In some examples, the request represented by the user utterance may be partially specified, where information needed to perform the request is missing or not explicitly defined in the user utterance (e.g., "Play this"). In other examples, the user utterance may not be an explicit request but rather an indirect question or statement from which the request is inferred (e.g., "What did he say?"). Further, as described in more detail below at block 712, the user utterance may include one or more ambiguous terms.

At block 708 of process 700, a textual representation of the user utterance in the sampled audio data may be determined. Block 708 may be similar or identical to block 522 described above. In particular, the textual representation may be determined by performing STT processing on the user utterance in the sampled audio data. For example, referring to fig. 8A, the textual representation 804 "What time is it in Paris?" may be determined from the user utterance in the sampled audio data and displayed on the display unit. As shown, the textual representation 804 may be overlaid on the media content 802 while the media content 802 continues to play on the media device.

In some examples, STT processing to determine the textual representation may be biased towards media-related textual results. Additionally or alternatively, the textual representation may be based on a previous user utterance received by the media device prior to sampling the audio data. Further, in some examples, the text representation may be based on a time at which a previous user utterance was received prior to sampling the audio data. In examples where the textual representation is obtained from a separate device (e.g., DA server 106), the media device may indicate to the separate device that the sampled audio data is associated with a media application, and the indication may bias STT processing on the separate device toward media-related textual results.
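
One simple way to realize such biasing is to re-score the recognizer's candidate transcriptions using media-related vocabulary drawn from recent utterances or on-screen content. The sketch below is a generic illustration under that assumption; it does not reflect the actual STT processing described by this disclosure, and the scoring constants are arbitrary.

```python
# Hypothetical sketch: prefer speech-to-text hypotheses that contain
# media-related vocabulary (titles, actor names) from recent context.

def rescore_hypotheses(hypotheses, media_vocabulary, boost=0.1):
    """hypotheses: list of (text, score) pairs from the recognizer."""
    rescored = []
    for text, score in hypotheses:
        bonus = sum(boost for term in media_vocabulary if term.lower() in text.lower())
        rescored.append((text, score + bonus))
    # Return the text of the highest-scoring hypothesis after biasing.
    return max(rescored, key=lambda pair: pair[1])[0]

hypotheses = [("look willson movies", 0.52), ("Luke Wilson movies", 0.50)]
vocabulary = {"Luke Wilson", "Reese Witherspoon", "Legally Blonde"}
print(rescore_hypotheses(hypotheses, vocabulary))  # "Luke Wilson movies"
```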

At block 710 of process 700, a user intent corresponding to the user utterance may be determined. Block 710 may be similar to block 526 described above. In particular, the textual representation of block 708 may be processed using natural language processing (e.g., with natural language processing module 432) to determine the user intent. For example, referring to fig. 8A, it may be determined from the textual representation 804 "What time is it in Paris?" that the user intent is a request for the current time at a location named "Paris." The natural language processing used to determine the user intent may be biased toward media-related user intents. In examples where the user intent is obtained from a separate device (e.g., DA server 106), the media device may indicate to the separate device that the sampled audio data is associated with a media application, and the indication may bias the natural language processing on the separate device toward media-related user intents.

In some examples, the user intent may be determined based on prosodic information derived from the user utterance in the sampled audio data. In particular, prosodic information (e.g., pitch, rhythm, volume, stress, intonation, speed, etc.) may be derived from the user utterance to determine the attitude, mood, or emotion of the user. The user intent may then be determined based on the attitude, mood, or emotion of the user. For example, the sampled audio data may include the user utterance "What did he say?" In this example, the user's impatience or frustration may be determined based on the high volume and stress detected in the user utterance. Based on the user utterance and the determined user sentiment, it may be determined that the user intent includes a request to increase the volume of the audio associated with the media content being played on the media device.
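
The following sketch shows how prosodic cues could feed into the inferred tasks for the example above. The feature names, thresholds, and task labels are invented for illustration; real prosodic analysis would be considerably more involved.

```python
# Hypothetical sketch: combine the utterance with simple prosodic features
# (volume and stress) to decide which tasks the user intent includes.

def infer_tasks(utterance: str, volume_db: float, stress_level: float):
    tasks = []
    if "what did he say" in utterance.lower():
        tasks.append("rewind_and_replay")        # replay the missed portion
        tasks.append("enable_closed_captions")   # help the user follow along
        # High volume and stress suggest frustration, so also raise the volume.
        if volume_db > 70 and stress_level > 0.8:
            tasks.append("increase_playback_volume")
    return tasks

print(infer_tasks("What did he say?", volume_db=75, stress_level=0.9))
# -> ['rewind_and_replay', 'enable_closed_captions', 'increase_playback_volume']
```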

As shown in fig. 7A, block 710 may include one or more of blocks 712 through 718. In particular, one or more of blocks 712-718 may be performed when two or more user intents are found to be highly probable and the natural language processing module is unable to narrow the range of the two or more user intents into a single user intent. This may occur, for example, when the user utterance contains ambiguous terms that cannot be disambiguated based on available context information.

At block 712 of process 700, it may be determined whether the user utterance (or a textual representation of the user utterance) includes an ambiguous term. The determination may be made during natural language processing (e.g., using natural language processing module 432) to determine the user intent. An ambiguous term may be a word or phrase that has more than one possible interpretation. For example, referring to fig. 8A, the term "Paris" in the user utterance "What time is it in Paris?" may be interpreted as either "Paris," France or "Paris," Texas, USA. Thus, the term "Paris" in the user utterance may be determined to be an ambiguous term.

In some examples, contextual information may be retrieved (e.g., by the digital assistant) to potentially disambiguate the ambiguous term. If disambiguation is successful, it may be determined that the user utterance does not include an ambiguous term. For example, it may be determined that the media content 802 is a movie set in "Paris," France (e.g., "Ratatouille"), and thus the user is more likely to be referring to "Paris," France than to "Paris," Texas. In this example, the term "Paris" may be successfully disambiguated to refer to "Paris," France, and thus it may be determined that the user utterance does not include an ambiguous term.

In another example, the user utterance may be "Play this." In this example, the user utterance does not explicitly define the particular media item to be played, so the term "this," interpreted in isolation, may be an ambiguous term that could refer to any media item accessible to the media device. The term may be disambiguated using contextual information displayed by the media device on the display unit. For example, the digital assistant may determine whether the focus of the displayed user interface is on a media item. Upon determining that the focus of the user interface is on a media item, the digital assistant may disambiguate the term "this" and determine that the term refers to the media item on which the displayed user interface is focused. Based on this determination, it may be determined at block 712 that the user utterance does not include an ambiguous term. Accordingly, the user intent may be determined to be a request to play the media item on which the displayed user interface is focused.
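
A minimal sketch of this focus-based disambiguation, under the assumption that the displayed user interface exposes its focused item, follows; the names and return format are illustrative only.

```python
# Hypothetical sketch: resolve the ambiguous term "this" against the media
# item that currently has focus in the displayed user interface.

def resolve_deictic_reference(utterance: str, focused_item):
    """Return a play intent if "this" can be tied to the focused item,
    otherwise None so that candidate-intent selection can take over."""
    if "this" in utterance.lower() and focused_item is not None:
        return {"intent": "play_media", "media_item": focused_item}
    return None

print(resolve_deictic_reference("Play this", {"title": "Legally Blonde"}))
# -> {'intent': 'play_media', 'media_item': {'title': 'Legally Blonde'}}
```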

In examples where the ambiguity of a term cannot be resolved, it may be determined at block 712 that the user utterance contains an ambiguous term. In response to determining that the user utterance includes an ambiguous term, one or more of blocks 714-718 may be performed. At block 714 of the process 700, two or more candidate user intents may be obtained based on the ambiguous term. The two or more candidate user intents may be the most likely candidate user intents determined from the user utterance that cannot be disambiguated. Referring to the example illustrated in fig. 8A, the two or more candidate user intents may include a first candidate user intent requesting the time in "Paris," France and a second candidate user intent requesting the time in "Paris," Texas.

At block 716 of process 700, two or more candidate user intents may be displayed on a display unit for selection by a user. For example, referring to fig. 8B, a first candidate user intent 810 and a second candidate user intent 808 may be displayed. Further, a text prompt 806 may be provided to prompt the user to indicate an actual user intent corresponding to the user utterance by selecting between the first candidate user intent 810 and the second candidate user intent 808. The text prompt 806, the first candidate user intent 810, and the second candidate user intent 808 are overlaid on the media content 802.

At block 718 of process 700, a user selection of one of the two or more candidate user intents may be received. In some examples, the user selection may be received by a selection of an affordance corresponding to one of the candidate user intents. In particular, as shown in fig. 8B, each of the two or more candidate user intents 810, 808 may be displayed on the display unit as a selectable affordance. The media device may receive an input from the user (e.g., via a remote control of the media device) to change the focus of the display to one of the affordances. A user selection of the candidate user intent corresponding to that affordance may then be received (e.g., via the remote control of the media device). For example, as shown in fig. 8B, the media device may receive a user input to move the cursor 812 over the affordance corresponding to the first candidate user intent 810 (e.g., "Paris," France). A user selection of the first candidate user intent 810 may then be received.

In other examples, the user selection may be received via voice interaction with the digital assistant. For example, a second user input may be detected while the two or more candidate user intents are displayed. The second user input may be similar or identical to the user input of block 704. In particular, the second user input may be an input to invoke the digital assistant (e.g., pressing a particular button on a remote control of the media device and holding the button for more than a predetermined duration). In response to detecting the second user input, second audio data may be sampled. The second audio data may include a second user utterance representing a user selection of one of the two or more interpretations. For example, referring to fig. 8C, the second audio data may include the second user utterance "Paris, France." As shown, a textual representation 814 of the second user utterance "Paris, France" may be displayed on the display unit. In this example, the second user utterance "Paris, France" may represent a user selection of the first candidate user intent 810 (e.g., "Paris," France). Based on the second user utterance "Paris, France," it may be determined that the first candidate user intent 810 is the actual user intent corresponding to the user utterance "What time is it in Paris?" Accordingly, it may be determined at block 710 that the user intent is a request for the time in "Paris," France. Upon determining the user intent based on the received user selection, one or more of blocks 720-746 may be performed.
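
Resolving which displayed candidate a follow-up utterance selects can be sketched as below. Scoring by how many words of a candidate's label appear in the utterance is an assumption made to keep the example short, not the matching described by this disclosure.

```python
# Hypothetical sketch: match a follow-up utterance (e.g. "Paris, France")
# against the labels of the displayed candidate user intents.

def select_candidate(candidates, follow_up_utterance: str):
    text = follow_up_utterance.lower()

    def score(candidate):
        words = candidate["label"].lower().replace(",", "").split()
        return sum(1 for word in words if word in text)

    ranked = sorted(candidates, key=score, reverse=True)
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else None
    if score(best) > 0 and (runner_up is None or score(best) > score(runner_up)):
        return best
    return None  # still ambiguous; prompt the user again

candidates = [{"label": "Paris, France"}, {"label": "Paris, Texas"}]
print(select_candidate(candidates, "Paris, France"))  # {'label': 'Paris, France'}
```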

In some examples, blocks 710-718 may be performed without outputting speech from the media device. In particular, the text prompt 806 and the candidate user intents 808 and 810 may be displayed without outputting speech associated with the two or more candidate user intents 808 and 810. Thus, input from the user may be received in the form of speech, while output from the digital assistant may be presented to the user visually (rather than audibly) on the display unit. This may be desirable for maintaining the communal experience associated with consuming media content, which may improve the user experience of the media device.

Referring again to block 712, in response to determining that the user utterance does not include an ambiguous term, one or more of blocks 720-746 may be performed. At block 720 of process 700, it may be determined whether the user intent corresponds to one of a plurality of predetermined core competencies associated with the media device. For example, the media device may be associated with several predetermined core competencies, such as searching for media items, playing media items, and providing information related to media items, weather, stock market, and sports. If the user intent is related to performing a task related to one of the several predetermined core competencies, it may be determined that the user intent corresponds to one of the several predetermined core competencies. For example, if the user intent is a request for media items starring Reese Witherspoon, it may be determined that the user intent corresponds to one of the several predetermined core competencies. In response to determining that the user intent corresponds to one of the plurality of core competencies associated with the electronic device, one or more of blocks 724-746 may be performed.

Conversely, if the user intent is related to performing a task other than a number of predetermined core competencies, it may be determined that the user intent does not correspond to one of the number of predetermined core competencies. For example, if the user intent is a request for a map direction, it may be determined that the user intent does not correspond to one of several predetermined core competencies. In response to determining that the user intent does not correspond to one of a plurality of core competencies associated with the electronic device, block 722 may be performed.

At block 722 of process 700, a second electronic device (e.g., device 122) may be caused to at least partially satisfy the user intent. In particular, the second electronic device may be caused to perform tasks that facilitate meeting the user's intent. In one example, it may be determined that the media device is not configured to satisfy a user intent requesting a map direction, and thus the user intent may be transmitted to the second electronic device to satisfy the user intent. In this example, the second user device may perform a task for displaying the requested map directions. In other examples, information other than the user intent may be transmitted to the second electronic device to cause the second electronic device to perform tasks for facilitating satisfaction of the user intent. For example, a digital assistant of the media device may determine (e.g., using natural language processing module 432 or task stream processing module 436) a task stream or structured query that satisfies the user's intent and may transmit the task stream or structured query to the second electronic device. The second electronic device may then execute the task flow or structured query to facilitate satisfying the user intent.

As will become apparent from the description provided below, the level of interference associated with satisfying the user intent may depend on the nature of the user intent. In some cases, the task associated with satisfying the user intent may be performed without displaying any additional response or output on the display (e.g., block 726). In other cases, only a textual response (e.g., with no corresponding graphical or audio output) is provided to satisfy the user intent (e.g., block 732). In still other cases, a user interface containing relevant results may be displayed to satisfy the user intent (e.g., blocks 738, 742, or 746). The user interface may occupy a large portion or a small portion of the display area of the display unit. Accordingly, the process 700 may intelligently adjust the level of output interference according to the nature of the user intent. This enables convenient access to the services of the digital assistant while reducing undesirable interference during consumption of the media content, thereby improving the overall user experience.
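
The escalating response policy can be summarized with a small dispatch table like the one below. The intent categories and presentation modes are labels invented for this sketch; they merely echo the cases listed above and are not names used by this disclosure.

```python
# Hypothetical sketch: match the intrusiveness of the output to the nature
# of the user intent, as outlined in the surrounding description.

def choose_presentation(intent_type: str, media_playing: bool) -> str:
    if intent_type == "adjust_state_or_setting":
        return "no_additional_output"    # just perform the action (block 726)
    if intent_type == "plain_text_info":
        return "text_overlay"            # brief text over the content (block 732)
    if media_playing:
        return "small_results_pane"      # compact interface over the content
    return "expanded_results_interface"  # larger interface when nothing is playing

print(choose_presentation("plain_text_info", media_playing=True))   # text_overlay
print(choose_presentation("media_search", media_playing=False))     # expanded_results_interface
```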

At block 724 of process 700, it may be determined whether the user intent includes a request to adjust a state or setting of an application on the media device. In response to determining that the user intent includes a request to adjust a state or setting of an application on the media device, block 726 may be executed. At block 726 of process 700, the state or settings of the application may be adjusted to meet the user intent.

In some examples, the status or setting may be associated with the displayed media content being played on the media device. For example, the request to adjust the state or settings of the application may include a request to control the media device to play media content. In particular, it may include a request to pause, resume, restart, stop, rewind, or fast forward play of displayed media content on a media device. It may also include a request to skip forward or backward (e.g., for a specified duration) in the media content in order to play a desired portion of the media content. Further, the request to adjust the state or settings of the application may include a request to turn on/off subtitles or closed captions (e.g., in a specified language) associated with the displayed media content, a request to increase/decrease the volume of audio associated with the displayed media content, a request to mute/unmute audio associated with the displayed media content, or a request to speed up/slow down the rate at which the displayed media content is played.
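
A sketch of how such playback-control intents could be dispatched is shown below. The MediaPlayer class and the intent dictionary format are hypothetical stand-ins for a device's actual playback and subtitle subsystems, used here only to illustrate the mapping from intent to action.

```python
# Hypothetical sketch: dispatch playback-control intents to a media player.

class MediaPlayer:
    def pause(self): print("paused")
    def resume(self): print("resumed")
    def seek(self, seconds): print(f"seek {seconds:+d}s")
    def set_subtitles(self, on, language="en"):
        print(f"subtitles {'on' if on else 'off'} ({language})")
    def change_volume(self, delta): print(f"volume {delta:+d}")

def handle_playback_intent(player: MediaPlayer, intent: dict):
    action = intent["action"]
    if action == "pause":
        player.pause()
    elif action == "resume":
        player.resume()
    elif action == "skip":
        player.seek(intent.get("seconds", 30))   # forward or backward
    elif action == "subtitles":
        player.set_subtitles(intent.get("on", True), intent.get("language", "en"))
    elif action == "volume":
        player.change_volume(intent.get("delta", 5))

handle_playback_intent(MediaPlayer(), {"action": "subtitles", "on": True, "language": "en"})
# -> subtitles on (en)
```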

Fig. 8E-8F illustrate an example of a user intent that includes a request to control the media device to play media content. In this example, the digital assistant may be invoked (e.g., at block 704) while the media content 802 is being played. The media content may initially be displayed without subtitles. The sampled audio data (e.g., at block 706) may include the user utterance "Turn on English subtitles." As shown in fig. 8E, a textual representation 816 of the user utterance may be displayed on the display unit. Based on the user utterance, it may be determined at block 710 that the user intent includes a request to turn on the display of English subtitles for the media content 802. Further, at block 724, it may be determined that this user intent is a request to adjust a state or setting of an application of the electronic device. In response to this determination, English subtitles for the media content 802 may be turned on. As illustrated by label 817 in fig. 8F, display of the English subtitles associated with the media content 802 may be initiated to satisfy the user intent.

In another illustrative example shown in fig. 8G-8H, the user utterance in the sampled audio data may be a natural language expression indicating that the user did not hear a portion of the audio associated with the media content. Specifically, as shown by the textual representation 820 in fig. 8G, the user utterance may be "What did he say?" In this example, it may be determined (e.g., at block 710) that the user intent includes a request to replay the portion of the media content corresponding to the portion of audio that the user did not hear. It may also be determined that the user intent includes a request to turn on closed captioning to assist with difficulty hearing the audio associated with the media content. Further, based on prosodic information in the user utterance, it may be determined that the user is frustrated or impatient, and thus it may be determined that the user intent includes a request to increase the volume of the audio associated with the media content, based on the user's mood. At block 724, it may be determined that these user intents are requests to adjust a state or setting of an application of the electronic device. In response to this determination, the media content may be rewound by a predetermined duration (e.g., 15 seconds) to a previous portion of the media content, and playback of the media content may resume from that previous portion (e.g., as indicated by label 822 in fig. 8H). Further, closed captioning may be turned on (e.g., as shown by label 824 in fig. 8H) before playback of the media content resumes from the previous portion. In addition, the volume of the audio associated with the media content may be increased before playback of the media content resumes from the previous portion.

It should be appreciated that closed captioning or subtitles associated with media content may be obtained from a service provider (e.g., a cable provider or a media subscription service). However, in examples where closed captioning or subtitles are not available from a service provider, the media device may generate closed captioning or subtitles to assist with difficulty hearing the audio associated with the media content. For example, prior to receiving the user utterance in the sampled audio data and while the media content is being played, speech in the audio associated with the media content may be continuously converted to text (e.g., using STT processing module 430) and stored in association with the media content. In response to a user request to replay a previous portion of the media content that the user did not hear, the text corresponding to the previous portion being replayed may be retrieved and displayed while that previous portion of the media content is being replayed.
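
One way to support this is to keep a rolling buffer of the converted text keyed by playback time, so that captions for any replayed span can be retrieved. The buffer layout and the 15-second replay window below are assumptions for illustration, not the mechanism specified by this disclosure.

```python
# Hypothetical sketch: buffer speech-to-text output with playback timestamps
# so that a missed passage can be replayed with generated captions.
from collections import deque

class CaptionBuffer:
    def __init__(self, max_entries=500):
        self._entries = deque(maxlen=max_entries)  # (timestamp_seconds, text)

    def add(self, timestamp_s: float, text: str):
        self._entries.append((timestamp_s, text))

    def captions_between(self, start_s: float, end_s: float):
        return [text for ts, text in self._entries if start_s <= ts <= end_s]

buffer = CaptionBuffer()
buffer.add(120.0, "I object, your honor.")
buffer.add(123.5, "On what grounds?")
# Replay the last 15 seconds from playback position 130 s with captions:
print(buffer.captions_between(130.0 - 15.0, 130.0))
```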

In some examples, the state or settings associated with the displayed media content may be adjusted without displaying an additional user interface for performing the adjustment or providing any text or graphics to indicate that the state or settings are being adjusted. For example, in the examples shown in fig. 8E-8H, the subtitles (or closed captions) may simply be opened without explicitly displaying text such as "closed captions," or without displaying a user interface for controlling the display of the subtitles. Further, the state or setting may be adjusted without outputting any audio associated with satisfying the user's intent. For example, in fig. 8E to 8H, subtitles (or closed captions) may be opened without outputting audio (e.g., a voice signal or a non-verbal audio signal) for confirming that the subtitles are opened. Thus, the requested action may be performed simply without causing additional audio or visual interference with the media content. In this way, process 700 may minimize interference with the user's consumption of media content while providing convenient access to digital assistant services, thereby improving the user experience.

In other examples, the request to adjust a state or setting of an application on the media device may include a request to navigate within a user interface (e.g., second user interface 818, third user interface 826, or a main menu user interface) of the media device. In one example, the request to navigate within the user interface may include a request to switch the focus of the user interface from a first object (e.g., a first media item) to a second object (e.g., a second media item) in the user interface. Fig. 8I to 8K show an illustrative example of such a request. As shown in fig. 8I, the displayed content may include a third user interface 826 having a plurality of media items organized into various categories (e.g., "romantic comedies," "romantic comedies starring Reese Witherspoon," and "movies starring Luke Wilson"). As indicated by the position of the cursor 828, the focus of the third user interface 826 may be on a first media item 830 categorized under "romantic comedies." The title of the second media item 832 may be "Legally Blonde," and it may be located under the category "romantic comedies starring Reese Witherspoon." As shown by the textual representation 834 in fig. 8J, the user utterance in the sampled audio data (e.g., at block 706) may be "Go to Legally Blonde." Based on the user utterance, it may be determined (e.g., at block 710) that the user intent is a request to switch the focus of the third user interface 826 from the first media item 830 to the second media item 832 titled "Legally Blonde." In response to determining (e.g., at block 724) that the user intent is a request to adjust a state or setting of an application of the electronic device, the focus of the third user interface 826 may be switched from the first media item 830 to the second media item 832. For example, as shown in FIG. 8K, the position of the cursor 828 may change from the first media item 830 to the second media item 832.

In another example, the request to navigate within the user interface may include a request to change the focus of the user interface to a particular category of results displayed in the user interface. For example, FIG. 8I includes media items associated with categories such as "romantic comedies," "romantic comedies starring Reese Witherspoon," and "movies starring Luke Wilson." Instead of "Go to Legally Blonde," the user utterance in the sampled audio data may be "Jump to the romantic comedies starring Reese Witherspoon." Based on the user utterance, it may be determined (e.g., at block 710) that "romantic comedies starring Reese Witherspoon" defines a category of media items displayed in the third user interface 826, and thus it may be determined that the user intent is a request to change the focus of the user interface to one or more media items associated with that category. In response to determining (e.g., at block 724) that the user intent is a request to adjust a state or setting of an application of the electronic device, the focus of the third user interface 826 may be shifted to one or more media items associated with the category. For example, as shown in FIG. 8K, the position of the cursor 828 may be shifted to the second media item 832 associated with "romantic comedies starring Reese Witherspoon."
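
Both kinds of focus changes can be sketched with a single lookup over the rows of the displayed interface. The row/category data structure and the longest-match rule are assumptions made for this example, not the matching rules of this disclosure.

```python
# Hypothetical sketch: move the user-interface focus to the media item or
# category named in the utterance.

def find_focus_target(rows, utterance: str):
    """rows: list of {"category": str, "items": [str, ...]}.
    Returns (row_index, item_index) or None if nothing matches."""
    text = utterance.lower()
    # Prefer the most specific (longest) category name present in the utterance.
    category_matches = [(len(row["category"]), i) for i, row in enumerate(rows)
                        if row["category"].lower() in text]
    if category_matches:
        _, row_index = max(category_matches)
        return (row_index, 0)
    for row_index, row in enumerate(rows):
        for item_index, title in enumerate(row["items"]):
            if title.lower() in text:
                return (row_index, item_index)
    return None

rows = [
    {"category": "romantic comedies", "items": ["The Proposal"]},
    {"category": "romantic comedies starring Reese Witherspoon",
     "items": ["Legally Blonde", "Sweet Home Alabama"]},
]
print(find_focus_target(rows, "Go to Legally Blonde"))  # (1, 0)
print(find_focus_target(rows, "Jump to the romantic comedies starring Reese Witherspoon"))  # (1, 0)
```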

In other examples, the request to navigate within the user interface of the media device may include a request to select an object in the user interface. Selection of the object may result in an action associated with the object being performed. For example, as shown in FIG. 8K, the position of the cursor 828 is on the second media item 832 titled "Legally Blonde." As shown in fig. 8L, the digital assistant may be invoked (e.g., at block 704), and the user utterance in the sampled audio data (e.g., at block 706) may be "Play this" (e.g., displayed as textual representation 836). Based on the user utterance, it may be determined (e.g., at block 710) that the user intent is a request to play a particular media item. In this example, the user utterance does not explicitly define or identify the particular media item to be played. In particular, the word "this" is ambiguous. However, the digital assistant may obtain contextual information to disambiguate the user intent. For example, it may be determined that the focus of the third user interface 826 was on the second media item 832 when the audio data was sampled. Based on this determination, the second media item 832 may be identified as the media item to be played. In response to determining (e.g., at block 724) that the user intent to play the second media item 832 is a request to adjust a state or setting of an application of the electronic device, an action for causing the second media item 832 to be played may be performed. For example, preview information for the second media item 832 may be displayed on the display unit. The preview information may include, for example, a brief synopsis, a list of cast members, a release date, a user rating, and the like. Additionally or alternatively, the second media item 832 may be played on the media device and the media content associated with the second media item 832 may be displayed on the display unit (e.g., as shown by the text 838 "Playing Legally Blonde" in FIG. 8M). It should be appreciated that in other examples, the media item to be selected may be explicitly identified. For example, instead of "Play this," the user utterance may expressly state "Play Legally Blonde," and a similar action for causing the second media item 832 to be played may be performed.

In other examples, the request to navigate within the user interface of the media device may include a request to view a particular user interface or application of the media device. For example, the user utterance in the sampled audio data may be "Go to the actor page," where the user intent includes a request to display a user interface associated with browsing media items according to a particular actor. In another example, the user utterance in the sampled audio data may be "Return to home," where the user intent includes a request to display the main menu user interface of the media device. In yet another example, the request to navigate within the user interface of the media device may include a request to launch an application on the electronic device. For example, the user utterance in the sampled audio data may be "Go to the iTunes Store," where the user intent includes a request to launch an iTunes Store application. It should be appreciated that other requests for adjusting the state or settings of applications on the media device are contemplated.

Referring again to block 724, it may be determined that the user intent does not include a request to adjust a state or setting of an application on the electronic device. For example, the user intent may instead be a request to present information related to one or more media items. In response to such a determination, one or more of blocks 728-746 may be performed. At block 728 of process 700, it may be determined whether the user intent is one of a plurality of predetermined request types. In some examples, the plurality of predetermined request types may be requests associated with plain text responses. More specifically, the plurality of predetermined request types may be requests for information that is predetermined to require only a plain text response. This is in contrast to requests that are predetermined to require responses that include media objects (e.g., images, animated objects, videos, etc.). In some examples, the plurality of predetermined request types may include requests for the current time at a particular location (e.g., "What time is it in Paris?"). One or more of blocks 730 through 732 may be performed in response to determining that the user intent is one of the plurality of predetermined request types.
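
The check of block 728 can be illustrated as a simple membership test over intent types that are predetermined to need only text. The type names below are invented labels echoing the examples in the text; the actual set of predetermined request types may differ.

```python
# Hypothetical sketch: decide whether an intent is one of the predetermined
# request types that can be satisfied with a plain text response.

PLAIN_TEXT_REQUEST_TYPES = {
    "current_time_at_location",   # "What time is it in Paris?"
    "cast_member_lookup",         # "Who is the actress?"
}

def needs_plain_text_only(intent_type: str) -> bool:
    return intent_type in PLAIN_TEXT_REQUEST_TYPES

print(needs_plain_text_only("current_time_at_location"))  # True  -> blocks 730-732
print(needs_plain_text_only("media_search"))              # False -> blocks 734-746
```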

At block 730 of process 700, results that at least partially satisfy the user's intent may be obtained. For example, results may be obtained from an external service (e.g., external service 120) by executing a task stream. At block 732 of process 700, the results obtained at block 730 may be displayed in text form on a display unit. Further, the results may be displayed in textual form without displaying any corresponding graphical or media-related items corresponding to the results.

Fig. 8M-8P show illustrative examples of blocks 728-732. As shown in FIG. 8M, the movie "Legally Blonde" may initially be playing on the media device and displayed on the display unit. While "Legally Blonde" is playing, the digital assistant may be invoked (e.g., at block 704), and the user utterance in the sampled audio data may be "Who is the actress?" For example, as shown in fig. 8N, a textual representation 840 of the user utterance may be displayed on the display unit. Based on the user utterance, it may be determined (e.g., at block 710) that the user intent includes a request to identify the lead actress of a particular media item. Since the user utterance does not specify any particular media item, the user intent may be ambiguous. However, based on the movie "Legally Blonde" being displayed when the audio data is sampled, it may be determined that the media item associated with the user intent is "Legally Blonde." In this example, it may be determined (e.g., at block 728) that the user intent is one of the plurality of predetermined request types. In particular, it may be determined that a plain text response may be provided to satisfy the user intent of identifying the lead actress in "Legally Blonde." In response to determining that the user intent is one of the plurality of predetermined request types, a search may be performed (e.g., at block 730) in a media-related database to obtain the result that the lead actress in the movie "Legally Blonde" is "Reese Witherspoon." As shown in fig. 8P, the plain text result 842 "Reese Witherspoon" may be displayed on the display unit to satisfy the user intent. The plain text result 842 may be overlaid on the displayed media content of "Legally Blonde." Further, the media content of "Legally Blonde" may continue to play while the plain text result 842 is displayed. By displaying the plain text result 842 (e.g., without displaying graphical results or additional user interfaces to satisfy the user intent), the user intent can be satisfied in an unobtrusive manner with minimal interference to the user's consumption of the media content. At the same time, the user is provided with access to the services of the digital assistant. This is desirable for improving the user experience.

Referring again to block 728, it may be determined that the user intent is not one of the plurality of predetermined request types. In particular, the user intent may be a type of request that is predetermined to require more than a textual result to satisfy. For example, the user intent may be a request to execute a media search query and display the media items corresponding to the media search query. In other examples, the user intent may be a request for information other than media items. For example, the user intent may be a request for information associated with sports teams (e.g., "How did the Lakers do in their last game?"). In response to determining that the user intent is not one of the plurality of predetermined request types, one or more of blocks 734-746 may be performed.

At block 734 of process 700, a second result that at least partially satisfies the user's intent may be obtained. Block 734 may be similar or identical to block 534 described above. In one example, the user intent may include a request to execute a media search query. In this example, the media search query may be executed at block 734 to obtain the second result. In particular, the second results may include media items corresponding to the media search query.

In some examples, the user intent may not be a media search query. For example, the user intent may be a request to provide a weather forecast for "Paris," France (e.g., "What is the weather forecast for Paris, France?"). In this example, the second results obtained at block 734 may include a 7-day weather forecast for "Paris," France. The second results may include non-media data that at least partially satisfies the user intent. Specifically, the 7-day weather forecast for "Paris," France may include textual data (e.g., dates, temperatures, and brief descriptions of the weather conditions) and graphical images (e.g., images of sunny, cloudy, windy, or rainy weather). Further, in some examples, the scope of the user intent may be expanded at block 710 to include a request for media items that at least partially satisfy the user intent. In these examples, the second results obtained at block 734 may further include one or more media items having media content that at least partially satisfies the user intent. For example, a media search query for the weather forecast of "Paris," France over the relevant time period may be performed at block 734, and one or more media items related to the weather forecast of "Paris," France may be obtained. The one or more media items may include, for example, video clips from a weather channel presenting the weather forecast for "Paris," France. In these examples, the non-media data and/or the one or more media items may be displayed in a user interface on the display unit (e.g., at block 738, block 742, or block 746, described below).

At block 736 of process 700, it may be determined whether the displayed content includes media content that is played on the electronic device. In some examples, the displayed content may be determined to not include media content that is played on the electronic device. For example, the displayed content may alternatively include a user interface, such as a main menu user interface or a third user interface (e.g., third user interface 826). The third user interface may occupy at least a majority of the display area of the display unit. Further, the third user interface may include previous results related to previous user requests received prior to detecting the user input at block 704. In accordance with a determination that the displayed content does not include media content, block 738 may be performed.

At block 738 of the process 700, a portion of the second result may be displayed in a third user interface on the display unit. In examples where the displayed content already includes the third user interface upon receiving the user input at block 704, the display of the previous results related to the previous user request may be replaced with the display of a portion of the second results in the third user interface. In examples where the displayed content does not include a third user interface (e.g., the displayed content includes a main menu user interface) upon receiving the user input at block 704, the third user interface may be displayed and the second result may be included in the displayed third user interface.

In some examples, it may be determined whether the second result includes a predetermined type of result. The predetermined type of result may be associated with a small portion of the display area of the display unit. The predetermined type of results may include, for example, results related to stock market or weather. It should be appreciated that in other examples, the predetermined type of result may vary. In response to determining that the second result includes a predetermined type of result, a portion of the second result may be displayed in a second user interface on the display unit. The second user interface may occupy a small portion of the display area of the display unit. In these examples, a portion of the second result may be displayed in the second user interface, although it is determined at block 736 that the displayed content does not include media content.
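A minimal sketch of the interface selection of blocks 736-738, including the predetermined-result-type exception described above, might look as follows; the result-type names and the item counts are assumptions made for illustration.

    # Sketch of the block 736/738 interface selection when no media content is
    # playing; the type names and the slice size are illustrative assumptions.

    SMALL_FOOTPRINT_RESULT_TYPES = {"weather", "stocks"}   # "predetermined type" examples

    def choose_interface_no_media(second_results, result_type):
        """Pick a user interface when the displayed content has no playing media."""
        if result_type in SMALL_FOOTPRINT_RESULT_TYPES:
            # Results that need little screen area can still use the smaller
            # second user interface even though nothing is playing.
            return {"interface": "second", "items": second_results[:3]}
        # Otherwise the third user interface, occupying most of the display area,
        # replaces any previous results it was showing.
        return {"interface": "third", "items": second_results}

    print(choose_interface_no_media([{"city": "Paris", "high": 21}], "weather"))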

Figs. 8Q-8S show exemplary examples of blocks 734-738. In this example, as shown in fig. 8Q, the displayed content may initially include the third user interface 826. The third user interface 826 may include previous results from a previous user request. In particular, the third user interface 826 includes media items 844 from a previously requested media search query. As shown in fig. 8R, while the third user interface 826 is displayed, the digital assistant may be invoked (e.g., at block 704). The user utterance in the sampled audio data may include "Show me movies featuring Luke Wilson." A textual representation 846 of the user utterance may be displayed on the display unit. In this example, it may be determined (e.g., at block 710) that the user intent is a request to perform a media search query for movies featuring Luke Wilson. The media search query may be executed (e.g., at block 734) to obtain second results. In particular, the second results may include media item 848, corresponding to movies featuring Luke Wilson. Further, additional results (e.g., media item 850) related to the user intent or to a previous user intent may be obtained. These additional results may be obtained in a manner similar to the manner in which the second results are obtained, as described at block 544.

In the present example of fig. 8Q-8S, the displayed content includes only the third user interface 826, and thus it may be determined (e.g., at block 736) that the displayed content does not include media content that is played on the electronic device. In response to the determination, the second result may be displayed in the third user interface 826. In particular, as shown in FIG. 8S, the display of media item 844 in third user interface 826 may be replaced with the display of media item 848 in third user interface 826. Further, a media item 850 may be displayed in the third user interface 826.

As shown in this example, the second result may be presented in the third user interface only after determining that the media content is not displayed on the display unit. This allows a wider range of results to be displayed in a larger area, thereby increasing the probability of meeting the user's actual intent. At the same time, interference with the user's consumption of the media content is avoided by ensuring that no media content is being displayed on the display unit before the second result is presented in the third user interface.

Referring again to block 736, the displayed content may include media content that is being played on the media device. In these examples, it may be determined that the displayed content includes media content playing on the media device. In accordance with this determination, one or more of blocks 740-746 may be performed.

At block 740 of process 700, it may be determined whether the media content being played may be paused. Examples of media content that may be paused may include on-demand media items, such as on-demand movies and television programs. Examples of media content that cannot be paused may include media programs that are broadcast or streamed and live media programs (e.g., sporting events, concerts, etc.). Thus, an on-demand media item may not include a broadcast or live program. In accordance with a determination at block 740 that the media content being played cannot be paused, block 742 may be performed. At block 742 of process 700, a second user interface having a portion of the second result may be displayed on the display unit. Block 742 may be similar to block 536 described above. The second user interface may be displayed while the media content is displayed. The display area occupied by the second user interface on the display unit may be smaller than the display area occupied by the media content on the display unit. In accordance with a determination that the media content being played may be paused, one or more of blocks 744-746 may be performed. At block 744 of process 700, the media content being played may be paused on the media device. At block 746 of process 700, a third user interface may be displayed having a portion of the second result. The third user interface may be displayed while the media content is paused.
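
One way to picture the pause-based routing of blocks 740-746 is the following Python sketch; the media item fields, interface labels, and slice size are assumptions, not the disclosed implementation.

    # Sketch of the blocks 740-746 decision; the dictionary fields are assumptions.

    def present_results_during_playback(media_item, second_results):
        """Choose how to surface results while media content is playing."""
        if not media_item.get("pausable", False):
            # Block 742: live or broadcast content cannot be paused, so results are
            # shown in the smaller second user interface while playback continues.
            return {"interface": "second", "pause": False, "items": second_results[:3]}
        # Blocks 744-746: on-demand content is paused and the larger third user
        # interface is shown, making room for related and alternative results.
        return {"interface": "third", "pause": True, "items": second_results}

    live_game = {"title": "Live basketball", "pausable": False}
    print(present_results_during_playback(live_game, [{"title": "Movie A"}]))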

Figs. 8T-8W show exemplary examples of blocks 740-746. As shown in fig. 8T, the media content 802 playing on the media device may be displayed on the display unit. While the media content 802 is displayed, the digital assistant may be invoked (e.g., at block 704). The user utterance in the sampled audio data may be "Show me movies featuring Luke Wilson." A textual representation 846 of the user utterance may be displayed on the display unit. As described above, it may be determined (e.g., at block 710) that the user intent is a request to obtain media items that are movies featuring Luke Wilson. The corresponding media search query may be executed (e.g., at block 734) to obtain the second results. The second results may include media item 848, corresponding to movies featuring Luke Wilson. In examples where it is determined (e.g., at block 740) that the media content 802 cannot be paused, the media item 848 may be displayed in the second user interface 818 while the media content 802 continues to be displayed on the display unit (e.g., fig. 8U). Displaying the media item 848 in the second user interface 818 may be desirable so that the media content 802 remains available for consumption while the media item 848 is displayed to satisfy the user intent. This prevents the user from missing any portion of the media content 802, which cannot be paused or played back. Alternatively, in examples where it is determined (e.g., at block 740) that the media content 802 can be paused, the media content 802 may be paused on the media device and the media item 848 may be displayed in the third user interface 826 on the display unit (e.g., fig. 8S). Displaying the third user interface 826 may be desirable so that a broader range of media items associated with various alternative user intents (e.g., media item 850) can be displayed with the requested media item (e.g., media item 848), thereby increasing the likelihood of satisfying the user's actual intent. At the same time, the media content 802 is paused so that the user does not miss any portion of it. By varying the user interface used to display the media item 848 based on whether the media content 802 can be paused, the user intent associated with the user utterance can be fully satisfied while reducing interference with the user's consumption of the media content 802. This may improve the overall user experience.

In some examples, as shown in fig. 8V, the displayed content may include a second user interface 818 in addition to the media content 802 playing on the media device. In these examples, the second user interface 818 may include media item 852 related to a previous user request (e.g., a request for romantic comedies starring Reese Witherspoon). While the media content 802 and the second user interface 818 are displayed, the digital assistant may be invoked (e.g., at block 704). As shown in fig. 8W, the sampled audio data may include the user utterance "Show me movies featuring Luke Wilson." A textual representation 846 of the user utterance may be displayed on the display unit. Based on the user utterance, it may be determined (e.g., at block 710) that the user intent is a request to obtain media items that are movies featuring Luke Wilson. The corresponding media search query may be executed (e.g., at block 734) to obtain a second result (e.g., media item 848). In these examples, the display of media item 852 in the second user interface 818 may be replaced with the display of media item 848 (e.g., fig. 8U).

Fig. 9 illustrates a process 900 for interacting with a digital assistant of a media system, according to various examples. Process 900 may be performed using one or more electronic devices implementing a digital assistant. For example, process 900 may be performed using one or more of system 100, media system 128, media device 104, user device 122, or digital assistant system 400 described above. It should be understood that some of the operations in process 900 may be combined, the order of some operations may be changed, and some operations may be omitted.

At block 902 of process 900, content may be displayed on a display unit. Block 902 may be similar or identical to block 502 described above. In some examples, the displayed content may include media content (e.g., movies, videos, television programs, video games, etc.). Additionally or alternatively, the displayed content may include a user interface. For example, the displayed content may include a first user interface having one or more exemplary natural language requests (e.g., as shown in fig. 6D-6E). In other examples, the displayed content may include a third user interface (e.g., third user interface 626) with results from previous user requests (e.g., previously requested media items). The third user interface may occupy at least a majority of the display area of the display unit.

At block 904 of process 900, user input may be detected while the contents of block 902 are displayed. This user input may be similar to or the same as the fifth user input described at block 558. In particular, user input may be detected on a remote control of the media device. For example, the user input may include a predetermined motion pattern on a touch-sensitive surface of the remote control device. In some examples, the user input may be detected via a second electronic device (e.g., device 122) different from the media device. The second electronic device may be configured to wirelessly control the media device. In response to detecting the user input, one or more of blocks 906-914 may be performed.

At block 906 of process 900, a virtual keyboard interface (e.g., virtual keyboard interface 646) may be displayed on the display unit. Block 906 may be similar or identical to block 562 described above. The virtual keyboard interface may be overlaid on at least a portion of the first user interface or the third user interface. Further, a search field (e.g., search field 644) may be displayed on the display unit. The virtual keyboard interface may be configured such that user input received via the virtual keyboard interface results in text entry in the search field.

At block 908 of the process 900, the selectable affordance may be caused to be displayed on the second electronic device (e.g., displayed on the touch screen 346 of the device 122). The second electronic device may be a different device than the remote control of the media device. Selection of the affordance may enable receipt of text input by the media device via a keyboard of the second electronic device. For example, selection of the affordance may cause a virtual keyboard interface (e.g., similar to virtual keyboard interface 646) to be displayed on the second electronic device. Input to the virtual keyboard interface of the second electronic device may cause corresponding text to be entered in a search field (e.g., search field 644).

At block 910 of process 900, text input may be received via a keyboard (e.g., a virtual keyboard interface) of the second electronic device. In particular, a user may enter text via a keyboard of the second electronic device, and the text input may be transmitted to and received by the media device. The text input may represent a user request. For example, the text input may be "Jurassic Park" which may represent a request to perform a search for media items associated with the search string "Jurassic Park".

At block 912 of process 900, results that at least partially satisfy the user request may be obtained. For example, a media search may be performed using the text input, and corresponding media items may be retrieved. In a particular example where the text input is "Jurassic Park," media items titled "Jurassic Park" or having the same actors or director as the movie "Jurassic Park" may be obtained. In another example where the text input is "Reese Witherspoon," media items featuring the actress Reese Witherspoon may be obtained.

At block 914 of process 900, a user interface may be displayed on the display unit. The user interface may include at least a portion of the results. For example, the user interface may include media items obtained as a result of the media search performed at block 912.
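
The typed-search path of process 900 (blocks 904-914) can be illustrated with the following sketch; the catalog contents and the substring-matching rule are invented for the example and are not the disclosed search algorithm.

    # End-to-end sketch of blocks 910-914: typed text arrives, a media search
    # runs, and a results interface is assembled. Data and matching are assumptions.

    CATALOG = [
        {"title": "Jurassic Park", "cast": ["Sam Neill", "Laura Dern"]},
        {"title": "Jurassic Park III", "cast": ["Sam Neill"]},
        {"title": "Walk the Line", "cast": ["Reese Witherspoon"]},
    ]

    def media_search(text_query):
        """Return catalog items whose title or cast matches the typed query (block 912)."""
        q = text_query.lower()
        return [item for item in CATALOG
                if q in item["title"].lower()
                or any(q in actor.lower() for actor in item["cast"])]

    def handle_typed_request(text_query):
        # Block 910: text arrives from the virtual keyboard or the second device.
        results = media_search(text_query)
        # Block 914: a user interface containing at least a portion of the results.
        return {"interface": "results", "items": results}

    print(handle_typed_request("Jurassic Park"))
    print(handle_typed_request("Reese Witherspoon"))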

While certain blocks of process 500, process 700, and process 900 are described above as being performed by a device or system (e.g., media device 104, user device 122, or digital assistant system 400), it should be appreciated that in some examples more than one device may be used to perform a block. For example, in blocks where a determination is made, a first device (e.g., media device 104) may obtain the determination from a second device (e.g., server system 108). Similarly, in blocks where content, objects, text, or a user interface is displayed, a first device (e.g., media device 104) may cause the content, objects, text, or user interface to be displayed on a second device (e.g., display unit 126).

5. Electronic device

According to some examples, fig. 10 illustrates a functional block diagram of an electronic device 1000 configured in accordance with the principles of the various described examples, for example, to voice-control media playback and to update the knowledge of a virtual assistant in real time. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software that carry out the principles of the various described examples. Those skilled in the art will appreciate that the functional blocks described in fig. 10 may be combined or separated into sub-blocks in order to implement the principles of the various described examples. Thus, the description herein optionally supports any possible combination, separation, or further definition of the functional blocks described herein.

As shown in fig. 10, the electronic device 1000 may include: an input unit 1003 (e.g., remote control 124, etc.) configured to receive user input such as tactile input and gesture input; an audio input unit 1004 (e.g., microphone 272, etc.) configured to receive audio data; a speaker unit 1006 (e.g., speaker 268, etc.) configured to output audio; and a communication unit 1007 (e.g., communication subsystem 224, etc.) configured to send and receive information to and from an external device via a network. In some examples, the electronic device 1000 may optionally include a display unit 1002 (e.g., display unit 126, etc.) configured to display media, interfaces, and other content. The electronic device 1000 may also include a processing unit 1008 coupled to the input unit 1003, the audio input unit 1004, the speaker unit 1006, the communication unit 1007, and the optional display unit 1002. In some examples, the processing unit 1008 may include a display enabling unit 1010, a detection unit 1012, a determination unit 1014, a sampling unit 1016, an output unit 1018, an execution unit 1020, an acquisition unit 1022, and a switching unit 1024.

According to some embodiments, the processing unit 1008 is configured to display content on a display unit (e.g., the display unit 1002 or a separate display unit) (e.g., with the display enabling unit 1010). The processing unit 1008 is further configured to detect a user input (e.g., with the detection unit 1012). The processing unit 1008 is further configured to determine whether the user input corresponds to the first input type (e.g., with the determining unit 1014). The processing unit 1008 is further configured to display a plurality of exemplary natural language requests on the display unit in accordance with a determination that the user input corresponds to the first input type (e.g., display enabling unit 1010). A plurality of exemplary natural language requests are contextually related to the displayed content, wherein receiving a user utterance corresponding to one of the plurality of exemplary natural language requests causes the digital assistant to perform a respective action.

In some examples, the user input is detected on a remote control of the electronic device. In some examples, the first input type includes pressing a button of the remote control and releasing the button for a predetermined duration. In some examples, the plurality of exemplary natural language requests are displayed on the display unit via a first user interface, and the first user interface is overlaid on the displayed content. In some examples, the displayed content includes media content, and the media content continues to play while the plurality of exemplary natural language requests are displayed.

In some examples, the processing unit 1008 is further configured to, in accordance with a determination that the user input corresponds to the first input type, display a visual indicator on the display unit (e.g., with the display enabling unit 1010) indicating that the digital assistant is not processing audio input.

In some examples, upon determining that the user input corresponds to the first input type, a plurality of exemplary natural language requests are displayed on the display unit after a predetermined amount of time. In some examples, each of the plurality of exemplary natural language requests is displayed separately at different times in a predetermined order.

In some examples, the processing unit 1008 is further configured to display a plurality of lists of exemplary natural language requests (e.g., with the display enabling unit 1010), wherein each list is displayed in turn at a different time.
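
The in-turn display of request lists could, for example, be driven by a simple rotation such as the following sketch; the example requests and the number of intervals are assumptions made only for illustration.

    # Sketch of cycling through lists of exemplary natural language requests,
    # each list shown in turn after a delay; requests and timing are examples.

    from itertools import islice, cycle

    EXAMPLE_REQUEST_LISTS = [
        ["Skip ahead 30 seconds", "Turn on closed captioning"],
        ["Show me comedies starring Reese Witherspoon", "What's the weather in Paris?"],
    ]

    def request_lists_to_show(rounds=4):
        """Yield the list to display at each interval, in a fixed repeating order."""
        return list(islice(cycle(EXAMPLE_REQUEST_LISTS), rounds))

    for i, shown in enumerate(request_lists_to_show()):
        print(f"interval {i}: {shown}")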

In some examples, the processing unit 1008 is further configured to determine whether the user input corresponds to the second input type (e.g., with the determining unit 1014), in accordance with a determination that the user input does not correspond to the first input type. The processing unit 1008 is further configured to sample the audio data (e.g., using the sampling unit 1016 and the audio input unit 1004) in accordance with a determination that the user input corresponds to the second input type. The processing unit 1008 is further configured to determine whether the audio data includes a user request (e.g., with the determining unit 1014). The processing unit 1008 is further configured to perform a task that at least partially satisfies the user request in accordance with a determination that the audio data comprises the user request (e.g., with the execution unit 1020).

In some examples, the second input type includes pressing a button of the electronic device remote control and holding the button for more than a predetermined duration.

In some examples, the processing unit 1008 is further configured to display a request for clarifying the user intent on the display unit (e.g., with the display enabling unit 1010) in accordance with a determination that the audio data does not contain the user request.

In some examples, the displayed content includes media content, and the media content continues to play on the electronic device while the audio data is sampled and while the task is being performed.

In some examples, the processing unit 1008 is further configured to output (e.g., with the output unit 1018) audio associated with the media content (e.g., using the speaker unit 1006). The processing unit 1008 is further configured to reduce the audio amplitude (e.g., with the output unit 1018) in accordance with a determination that the user input corresponds to the second input type.

In some examples, the task is performed without outputting speech related to the task from the electronic device. In some examples, the audio data is sampled upon detecting the user input. In some examples, the audio data is sampled for a predetermined duration after detecting the user input.

In some examples, the audio data is sampled via a first microphone on the electronic device remote control (e.g., audio input unit 1004). The processing unit 1008 is further configured to sample the background audio data via a second microphone on the remote control (e.g., a second audio input unit of the electronic device 1000) while sampling the audio data (e.g., with the sampling unit 1016 and the audio input unit 1004). The processing unit 1008 is further configured to use the background audio data to remove background noise in the audio data (e.g., with the output unit 1018).

In some examples, audio associated with the displayed content is output via an audio signal from the electronic device. The processing unit 1008 is further configured to use the audio signal to remove background noise in the audio data (e.g., with the output unit 1018).
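
One conventional way to use a reference signal (the second microphone's background audio, or the device's own audio output) to suppress unwanted sound is spectral subtraction. The NumPy sketch below illustrates that general idea only; it is not the disclosed noise-removal method, and the signals are synthetic stand-ins.

    # Illustrative spectral subtraction: suppress the reference spectrum from
    # the primary (user-utterance) signal while keeping the primary phase.

    import numpy as np

    def spectral_subtract(primary, reference, floor=0.0):
        """Suppress the reference signal's magnitude spectrum from the primary signal."""
        n = min(len(primary), len(reference))
        P = np.fft.rfft(primary[:n])
        R = np.fft.rfft(reference[:n])
        mag = np.maximum(np.abs(P) - np.abs(R), floor)            # subtract magnitudes, clamp
        return np.fft.irfft(mag * np.exp(1j * np.angle(P)), n)    # rebuild with primary phase

    t = np.linspace(0, 1, 8000, endpoint=False)
    speech = np.sin(2 * np.pi * 220 * t)        # stand-in for the user utterance
    noise = 0.5 * np.sin(2 * np.pi * 50 * t)    # stand-in for background/output audio
    print(spectral_subtract(speech + noise, noise)[:5])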

In some examples, the processing unit 1008 is further configured to display, on the display unit in response to detecting the user input, a visual cue for prompting the user to provide the verbal request (e.g., with the display enabling unit 1010).

In some examples, the processing unit 1008 is further configured to obtain (e.g., with the obtaining unit 1022) results that at least partially satisfy the user request. The processing unit 1008 is further configured to display a second user interface on the display unit (e.g., with the display enabling unit 1010). The second user interface includes a portion of the result, wherein at least a portion of the content continues to be displayed while the second user interface is displayed, and wherein a display area of the second user interface on the display unit is smaller than a display area of at least a portion of the content on the display unit. In some examples, the second user interface is overlaid on the displayed content.

In some examples, the portion of the result includes one or more media items. The processing unit 1008 is further configured to receive a selection of a media item of the one or more media items via the second user interface (e.g., with the detecting unit 1012). The processing unit 1008 is further configured to display media content associated with the selected media item on the display unit (e.g., with the display enabling unit 1010).

In some examples, the processing unit 1008 is further configured to detect the second user input (e.g., with the detecting unit 1012) while displaying the second user interface. The processing unit 1008 is further configured to stop displaying the second user interface (e.g., with the display enabling unit 1010) in response to detecting the second user input.

In some examples, the second user input is detected on a remote control of the electronic device. The second user input includes a first predetermined motion pattern on the touch-sensitive surface of the remote control.

In some examples, the processing unit 1008 is further configured to detect a third user input while displaying the second user interface (e.g., with the detecting unit 1012). The processing unit 1008 is further configured to replace display of the second user interface on the display unit with display of the third user interface (e.g., with the display enabling unit 1010) in response to detecting the third user input. The third user interface includes at least a portion of the results, and the third user interface occupies at least a majority of the display area of the display unit.

In some examples, the third user input is detected on a remote control of the electronic device, and the third user input includes a second predetermined motion pattern on the touch-sensitive surface of the remote control.

In some examples, the processing unit 1008 is further configured to, in response to detecting the third user input, obtain a second result that is different from the result (e.g., with the obtaining unit 1022). The second result at least partially satisfies the user request, and the third user interface includes at least a portion of the second result.

In some examples, the second result is based on a user request received prior to detecting the user input. In some examples, upon detecting the third user input, the focus of the second user interface is on an item of the portion of the results, and the second result is contextually related to that item.

In some examples, the displayed content includes media content. The processing unit 1008 is further configured to pause playing the media content on the electronic device (e.g., with the execution unit 1020) in response to detecting the third user input.

In some examples, at least a portion of the results include one or more media items. The processing unit 1008 is further configured to receive a selection of a media item of the one or more media items via a third user interface (e.g., with the detecting unit 1012). The processing unit 1008 is further configured to display media content associated with the media item on the display unit (e.g., with the display enabling unit 1010).

In some examples, the processing unit 1008 is further configured to detect a fourth user input associated with a direction on the display unit while displaying the third user interface (e.g., with the detection unit 1012). The processing unit 1008 is further configured to switch focus of the third user interface from a first item to a second item of the third user interface in response to detecting the fourth user input (e.g., with the switching unit 1024). The second item may be positioned in the direction relative to the first item.

In some examples, the processing unit 1008 is further configured to detect a fifth user input (e.g., with the detecting unit 1012) while displaying the third user interface. The processing unit 1008 is further configured to display the search field (e.g., with the display enabling unit 1010) in response to detecting the fifth user input. The processing unit 1008 is further configured to display on the display unit a virtual keyboard interface (e.g., with the display enabling unit 1010), wherein input received via the virtual keyboard interface results in text input in the search field.

In some examples, the processing unit 1008 is further configured to detect a sixth user input (e.g., with the detecting unit 1012) while displaying the third user interface. The processing unit 1008 is further configured to sample second audio data (e.g., with the sampling unit 1016 and the audio input unit 1004) in response to detecting the sixth user input. The second audio data comprises a second user request. The processing unit 1008 is further configured to determine whether the second user request is a request to refine the results of the user request (e.g., with the determining unit 1014). The processing unit 1008 is further configured to display a subset of the results via the third user interface (e.g., with the display enabling unit 1010) in accordance with a determination that the second user request is a request to refine the results of the user request.

In some examples, the subset of results is displayed at a top row of the third user interface. The processing unit 1008 is further configured to, in accordance with a determination that the second user request is not a request to refine the results of the user request, obtain (e.g., with the obtaining unit 1022) a third result that at least partially satisfies the second user request. The processing unit 1008 is further configured to display a portion of the third result via the third user interface (e.g., with the display enabling unit 1010). In some examples, the portion of the third result is displayed at a top row of the third user interface.

In some examples, the processing unit 1008 is further configured to obtain a fourth result that at least partially satisfies the user request or the second user request (e.g., with the obtaining unit 1022). The processing unit 1008 is further configured to display a portion of the fourth result via a third user interface (e.g., with the display enabling unit 1010).

In some examples, a portion of the fourth result is displayed at a row subsequent to a top row of the third user interface.

In some examples, upon detecting the sixth user input, the focus of the third user interface is on one or more items of the third user interface, and the fourth result is contextually relevant to the one or more items.

In some examples, the processing unit 1008 is further configured to detect a seventh user input (e.g., with the detecting unit 1012) while displaying the third user interface. The processing unit 1008 is further configured to stop displaying the third user interface (e.g., with the display enabling unit 1010) in response to detecting the seventh user input.

In some examples, the displayed content is media content, and the playing of the media content on the electronic device is paused in response to detecting the third user input. The processing unit 1008 is further configured to resume playing the media content on the electronic device (e.g., with the executing unit 1020) in response to detecting the seventh user input. In some examples, the seventh user input includes pressing a menu button of a remote control of the electronic device.

According to some embodiments, the processing unit 1008 is further configured to display content on the display unit (e.g., with the display enabling unit 1010). The processing unit 1008 is further configured to detect user input while displaying content (e.g., with the detection unit 1012). The processing unit 1008 is further configured to display a user interface on the display unit (e.g., with the display enabling unit 1010) in response to detecting the user input. The user interface includes a plurality of exemplary natural language requests that are contextually related to the displayed content, wherein receiving a user utterance corresponding to one of the plurality of exemplary natural language requests causes the digital assistant to perform a corresponding action.

In some examples, the displayed content includes media content. In some examples, the plurality of exemplary natural language requests includes a natural language request to modify one or more settings associated with the media content. In some examples, the media content continues to play while the user interface is displayed.

In some examples, the processing unit 1008 is further configured to output audio associated with the media content (e.g., with the output unit 1018), wherein an amplitude of the audio is not reduced in response to detecting the user input. In some examples, the displayed content includes a main menu user interface.

In some examples, the plurality of exemplary natural language requests includes an exemplary natural language request related to each of a plurality of core competencies of the digital assistant. In some examples, the displayed content includes a second user interface having results associated with a previous user request. In some examples, the plurality of exemplary natural language requests includes natural language requests for refining the results. In some examples, the user interface includes text instructions for invoking and interacting with the digital assistant. In some examples, the user interface includes a visual indicator indicating that the digital assistant is not receiving audio input. In some examples, the user interface is overlaid on the displayed content.

In some examples, the processing unit 1008 is further configured to reduce a brightness of the displayed content to highlight the user interface (e.g., with the display enabling unit 1010) in response to detecting the user input.

In some examples, the user input is detected on a remote control of the electronic device. In some examples, the user input includes pressing a button of the remote control device and releasing the button within a predetermined duration after pressing the button. In some examples, the button is configured to invoke a digital assistant. In some examples, the user interface includes text instructions for displaying a virtual keyboard interface.

In some examples, the processing unit 1008 is further configured to detect the second user input after displaying the user interface (e.g., with the detecting unit 1012). The processing unit 1008 is further configured to display a virtual keyboard interface on the display unit (e.g., with the display enabling unit 1010) in response to detecting the second user input.

In some examples, the processing unit 1008 is further configured to change the focus of the user interface to a search field on the user interface (e.g., with the display enabling unit 1010). In some examples, the search field is configured to receive a text search query via a virtual keyboard interface. In some examples, the virtual keyboard interface is not available to interact with a digital assistant. In some examples, the second user input includes a predetermined motion pattern on a touch-sensitive surface of a remote control device of the electronic device.

In some examples, the plurality of exemplary natural language requests are displayed a predetermined amount of time after detecting the user input. In some examples, the processing unit 1008 is further configured to display each of the plurality of exemplary natural language requests one at a time in a predetermined order (e.g., with the display enabling unit 1010). In some examples, the processing unit 1008 is further configured to replace the display of a previously displayed one of the plurality of exemplary natural language requests with a subsequent one of the plurality of exemplary natural language requests (e.g., with the display enabling unit 1010).

In some examples, the content includes a second user interface having one or more items. When user input is detected, the focus of the second user interface is on an item of the one or more items. A plurality of exemplary natural language requests are contextually related to the item of one or more items.

According to some embodiments, the processing unit 1008 is further configured to display content on the display unit (e.g., with the display enabling unit 1010). The processing unit 1008 is further configured to detect a user input (e.g., with the detection unit 1012). The processing unit 1008 is further configured to display one or more suggested examples of the natural language utterance (e.g., with the display enabling unit 1010) in response to detecting the user input. The one or more suggested examples are contextually relevant to the displayed content and, when spoken by the user, cause the digital assistant to perform a corresponding action.

In some examples, the processing unit 1008 is further configured to detect a second user input (e.g., with the detection unit 1012). The processing unit 1008 is further configured to sample the audio data (e.g., with the sampling unit 1016) in response to detecting the second user input. The processing unit 1008 is further configured to determine (e.g., with the determining unit 1014) whether the sampled audio data contains one of the one or more suggested examples of the natural language utterance. The processing unit 1008 is further configured to perform a corresponding action for the utterance (e.g., with the execution unit 1020) in accordance with a determination that the sampled audio data contains one of the one or more suggested examples of the natural language utterance.

According to some embodiments, the processing unit 1008 is further configured to display content on the display unit (e.g., with the display enabling unit 1010). The processing unit 1008 is further configured to detect user input while displaying content (e.g., with the detection unit 1012). The processing unit 1008 is further configured to sample audio data (e.g., with the sampling unit 1016) in response to detecting the user input. The audio data includes a user utterance representing a media search request. The processing unit 1008 is further configured to obtain a plurality of media items that satisfy the media search request (e.g., with the obtaining unit 1022). The processing unit 1008 is further configured to display at least a portion of the plurality of media items on the display unit via the user interface (e.g., with the display enabling unit 1010).

In some examples, the content continues to be displayed on the display unit while at least a portion of the plurality of media items are displayed. The display area occupied by the user interface is smaller than the display area occupied by the content.

In some examples, the processing unit 1008 is further configured to determine whether a number of media items in the plurality of media items is less than or equal to a predetermined number (e.g., with the determining unit 1014). In accordance with a determination that a number of media items in the plurality of media items is less than or equal to a predetermined number, at least a portion of the plurality of media items includes a plurality of media items.

In some examples, in accordance with a determination that a number of media items in the plurality of media items is greater than a predetermined number, a number of media items in at least a portion of the plurality of media items is equal to the predetermined number.

In some examples, each media item of the plurality of media items is associated with a relevance score relative to the media search request, and the relevance score of at least a portion of the plurality of media items is highest among the plurality of media items.

In some examples, each media item of the at least a portion of the plurality of media items is associated with a popularity rating, and the at least a portion of the plurality of media items is arranged in the user interface based on the popularity rating.
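
The selection of the portion of media items to display, as described in the preceding examples, can be pictured as follows; the predetermined number and the field names are assumptions made for illustration only.

    # Sketch: pick the most relevant items up to a predetermined number, then
    # arrange the displayed portion by popularity rating.

    PREDETERMINED_NUMBER = 6

    def initial_portion(media_items):
        """Select and order the portion of results shown in the smaller interface."""
        if len(media_items) <= PREDETERMINED_NUMBER:
            portion = list(media_items)
        else:
            # Keep the items with the highest relevance to the media search request.
            portion = sorted(media_items, key=lambda m: m["relevance"], reverse=True)
            portion = portion[:PREDETERMINED_NUMBER]
        # Arrange the displayed portion by popularity rating.
        return sorted(portion, key=lambda m: m["popularity"], reverse=True)

    items = [{"title": f"Movie {i}", "relevance": i, "popularity": 10 - i} for i in range(10)]
    print([m["title"] for m in initial_portion(items)])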

In some examples, the processing unit 1008 is further configured to detect a second user input (e.g., with the detecting unit 1012) while displaying at least a portion of the plurality of media items. The processing unit 1008 is further configured to expand the user interface (e.g., with the display enabling unit 1010) to occupy at least a majority of a display area of the display unit in response to detecting the second user input.

In some examples, the processing unit 1008 is further configured to determine, in response to detecting the second user input, whether a number of media items in the plurality of media items is less than or equal to a predetermined number (e.g., with the determining unit 1014). The processing unit 1008 is further configured to obtain a second plurality of media items that at least partially satisfy the media search request in accordance with a determination that the number of media items in the plurality of media items is less than or equal to the predetermined number, the second plurality of media items being different from the at least a portion of the plurality of media items. The processing unit 1008 is further configured to display the second plurality of media items on the display unit via the expanded user interface (e.g., with the display enabling unit 1010).

In some examples, the processing unit 1008 is further configured to determine whether the media search request includes more than one search parameter (e.g., with the determining unit 1014). In accordance with a determination that the media search request includes more than one search parameter, the second plurality of media items is organized in the expanded user interface in accordance with the more than one search parameter of the media search request.

In some examples, the processing unit 1008 is further configured to display, via the expanded user interface, at least a second portion of the plurality of media items in accordance with a determination that the number of media items in the plurality of media items is greater than the predetermined number (e.g., with the display enabling unit 1010). At least a second portion of the plurality of media items is different from at least a portion of the plurality of media items.

In some examples, the at least a second portion of the plurality of media items includes two or more media types, and the at least a second portion of the plurality of media items is organized in the expanded user interface according to each of the two or more media types.

In some examples, the processing unit 1008 is further configured to detect a third user input (e.g., with the detection unit 1012). The processing unit 1008 is further configured to cause the expanded user interface to scroll (e.g., with the display enabling unit 1010) in response to detecting the third user input. The processing unit 1008 is further configured to determine whether the expanded user interface has scrolled past a predetermined location on the expanded user interface (e.g., with the determining unit 1014). The processing unit 1008 is further configured to display at least a third portion of the plurality of media items on the expanded user interface (e.g., with the display enabling unit 1010) in response to determining that the expanded user interface has scrolled past the predetermined location on the expanded user interface. At least a third portion of the plurality of media items is organized on the expanded user interface in accordance with one or more media content providers associated with the third plurality of media items.

The operations described above with reference to figs. 5A-5I are optionally implemented by the components shown in figs. 1-3 and 4A-4B. For example, the display operations 502, 508, 514, 520, 524, 530, 536, 546, 556, 560, 562, 576, 582, 588, and 592, the detection operations 504, 538, 542, 550, 558, 566, and 570, the determination operations 506, 516, 522, 526, 528, 574, and 578, the sampling operations 518 and 572, the execution operations 532 and 584, the obtaining operations 534, 544, 580, 586, and 590, the pausing operations 540 and 568, the receiving operation 554, and the switching operations 552 and 564 may be implemented by one or more of the operating system 252, the GUI module 256, the application module 262, the digital assistant module 426, and the one or more processors 204, 404. Those skilled in the art will clearly know how other processes may be implemented based on the components shown in figs. 1-3 and 4A-4B.

According to some examples, fig. 11 illustrates a functional block diagram of an electronic device 1100 configured in accordance with the principles of the various described examples, for example, to voice-control media playback and to update the knowledge of a virtual assistant in real time. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software that carry out the principles of the various described examples. Those skilled in the art will appreciate that the functional blocks described in fig. 11 may be combined or separated into sub-blocks in order to implement the principles of the various described examples. Thus, the description herein optionally supports any possible combination, separation, or further definition of the functional blocks described herein.

As shown in fig. 11, the electronic device 1100 may include: an input unit 1103 (e.g., remote control 124, etc.) configured to receive user input such as tactile input and gesture input; an audio input unit 1104 (e.g., microphone 272, etc.) configured to receive audio data; a speaker unit 1106 (e.g., speaker 268, etc.) configured to output audio; and a communication unit 1107 (e.g., communication subsystem 224, etc.) configured to send and receive information to and from an external device via a network. In some examples, the electronic device 1100 may optionally include a display unit 1102 (e.g., display unit 126, etc.) configured to display media, interfaces, and other content. The electronic device 1100 may also include a processing unit 1108 coupled to the input unit 1103, the audio input unit 1104, the speaker unit 1106, the communication unit 1107, and the optional display unit 1102. In some examples, the processing unit 1108 may include a display enabling unit 1110, a detection unit 1112, a determination unit 1114, a sampling unit 1116, an output unit 1118, an execution unit 1120, an acquisition unit 1122, an identification unit 1124, and a transmission unit 1126.

According to some embodiments, the processing unit 1108 is configured to display content (e.g., with the display enabling unit 1110) on a display unit (e.g., the display unit 1102 or a separate display unit). The processing unit 1108 is further configured to detect user input while displaying the content (e.g., with the detection unit 1112). The processing unit 1108 is further configured to sample audio data (e.g., with the sampling unit 1116 and the audio input unit 1104) in response to detecting the user input. The audio data includes a user utterance. The processing unit 1108 is further configured to obtain a determination of a user intent corresponding to the user utterance (e.g., with the obtaining unit 1122). The processing unit 1108 is further configured to obtain a determination of whether the user intent includes a request to adjust a state or setting of an application on the electronic device (e.g., with the obtaining unit 1122). The processing unit 1108 is further configured to, in response to obtaining a determination that the user intent includes a request to adjust a state or setting of an application on the electronic device, adjust the state or setting of the application to satisfy the user intent (e.g., with the task execution unit 1120).

In some examples, the request to adjust the state or settings of an application on the electronic device includes a request to play a particular media item. Adjusting the state or settings of an application to meet a user's intent includes playing a particular media item.

In some examples, the displayed content includes a user interface with media items, and the user utterance does not explicitly define a particular media item to play. The processing unit 1108 is further configured to determine whether a focus of the user interface is located on a media item (e.g., with the determining unit 1114). The processing unit 1108 is further configured to identify that media item as the particular media item to play (e.g., with the identification unit 1124) in accordance with a determination that the focus of the user interface is located on the media item.

In some examples, the request to adjust the state or settings of the application on the electronic device includes a request to launch the application on the electronic device. In some examples, the displayed content includes media content played on the electronic device, and the status or setting relates to the media content played on the electronic device. In some examples, the request to adjust the state or settings of the application on the electronic device includes a request to fast forward or rewind media content played on the electronic device. In some examples, the request to adjust the state or settings of the application on the electronic device includes a request to jump forward or backward in the media content to play a particular portion of the media content. In some examples, the request to adjust the state or settings of the application on the electronic device includes a request to pause media content played on the electronic device. In some examples, the request to adjust the state or settings of the application on the electronic device includes a request to open or close subtitles for media content.
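
Playback-related adjustments such as those listed above amount to dispatching on the requested change and updating the player state. The Player class below is a stand-in invented for illustration; it is not an API of the disclosed media device, and the field names and intent keys are assumptions.

    # Sketch of adjusting an application's state or settings for playback intents
    # (pause, fast forward/rewind, jump, captions).

    class Player:
        def __init__(self):
            self.position = 0.0      # seconds into the media item
            self.paused = False
            self.captions = False

        def apply(self, intent):
            kind = intent["kind"]
            if kind == "pause":
                self.paused = True
            elif kind == "seek_relative":      # fast forward or rewind by an offset
                self.position = max(0.0, self.position + intent["seconds"])
            elif kind == "seek_absolute":      # jump to a particular portion
                self.position = intent["seconds"]
            elif kind == "captions":           # open or close subtitles
                self.captions = intent["on"]
            return self

    p = Player()
    p.apply({"kind": "seek_relative", "seconds": 30})
    p.apply({"kind": "captions", "on": True})
    print(p.position, p.captions)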

In some examples, the displayed content includes a user interface having a first media item and a second media item.

In some examples, the request to adjust the state or setting of the application on the electronic device includes a request to switch focus of the user interface from the first media item to the second media item. Adjusting the state or setting of the application to meet the user intent includes switching focus of the user interface from the first media item to the second media item.

In some examples, the displayed content includes media content being played on a media device. The user utterance is a natural language expression indicating that the user does not hear a portion of the audio associated with the media content. The request to adjust the state or settings of the application on the electronic device includes a request to replay a portion of the media content corresponding to a portion of audio not heard by the user. The processing unit 1108 is further configured to fast rewind the media content a predetermined amount toward a previous portion of the media content (e.g., with the task execution unit 1120); and resume playing the media content from the previous portion (e.g., with the task execution unit 1120).

In some examples, the processing unit 1108 is further configured to open closed captioning (e.g., with the task execution unit 1120) before resuming playing of the media content from the previous portion.

In some examples, the request to adjust the state or setting of the application on the electronic device further includes a request to increase the volume of audio associated with the media content. Adjusting the state or setting of the application also includes increasing the volume of audio associated with the media content before resuming playing the media content from the previous portion.

In some examples, speech in audio associated with the media content is converted to text. Adjusting the state or setting of the application also includes displaying a portion of the text when playing the media content is resumed from the previous portion.
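
The replay flow described above (rewind a predetermined amount, optionally open closed captioning and raise the volume, then resume) can be sketched as a single state update; the rewind amount, volume step, and state fields are assumptions made for illustration.

    # Sketch of the "I didn't hear that" flow: rewind, optionally enable captions
    # and boost the volume, then resume playback from the earlier portion.

    REWIND_SECONDS = 15.0

    def replay_unheard_portion(state, enable_captions=True, volume_boost=0.2):
        """Rewind and resume playback so the user can catch the missed dialogue."""
        state = dict(state)
        state["position"] = max(0.0, state["position"] - REWIND_SECONDS)
        if enable_captions:
            state["captions"] = True                                  # open closed captioning first
        state["volume"] = min(1.0, state["volume"] + volume_boost)    # raise dialogue volume
        state["paused"] = False                                       # resume from the previous portion
        return state

    print(replay_unheard_portion({"position": 600.0, "captions": False,
                                  "volume": 0.5, "paused": False}))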

In some examples, the processing unit 1108 is further configured to obtain a determination of a user emotion associated with the user utterance (e.g., utilizing the obtaining unit 1122). Determining a user intent based on the determined user emotion.

In some examples, the processing unit 1108 is further configured to obtain a determination of whether the user intent is one of a plurality of predetermined request types (e.g., with obtaining unit 1122) in response to obtaining a determination that the user intent does not include a request to adjust a state or setting of an application on the electronic device. The processing unit 1108 is further configured to, in response to obtaining a determination that the user intent is one of the plurality of predetermined request types, obtain a result that at least partially satisfies the user intent (e.g., with obtaining unit 1122), and display the result in textual form on the display unit (e.g., with display enabling unit 1110).

In some examples, the plurality of predetermined request types includes a request for a current time at a particular location. In some examples, the plurality of predetermined request types includes a request to present a joke. In some examples, the plurality of predetermined request types include a request for information about media content played on the electronic device. In some examples, the results in the form of text are overlaid on the displayed content. In some examples, the displayed content includes media content that is playing on the electronic device, and the media content continues to play while the results in text form are displayed.

In some examples, the processing unit 1108 is further configured to, in response to obtaining a determination that the user intent is not one of the plurality of predetermined request types, obtain second results that at least partially satisfy the user intent (e.g., with the obtaining unit 1122), and determine whether the displayed content includes media content that is playing on the electronic device (e.g., with the determining unit 1114). The processing unit 1108 is further configured to determine whether the media content can be paused in accordance with a determination that the displayed content includes media content (e.g., with the determining unit 1114). The processing unit 1108 is further configured to display a second user interface including a portion of the second results on the display unit (e.g., with the display enabling unit 1110) in accordance with a determination that the media content cannot be paused. The display area occupied by the second user interface on the display unit is smaller than the display area occupied by the media content on the display unit.

In some examples, the user intent includes a request for a weather forecast for a particular location. In some examples, the user intent includes a request for information associated with a sports team or athlete. In some examples, the user intent is not a media search query, and the second results include one or more media items having media content that at least partially satisfies the user intent. In some examples, the second results further include non-media data that at least partially satisfies the user intent. In some examples, the user intent is a media search query, and the second results include a plurality of media items corresponding to the media search query.

In some examples, the processing unit 1108 is further configured to, in accordance with a determination that the displayed content does not include media content playing on the electronic device, display a third user interface (e.g., with the display enabling unit 1110) on the display unit that includes a portion of the second result, wherein the third user interface occupies a majority of a display area of the display unit.

In some examples, the display content includes a main menu user interface.

In some examples, the displayed content includes a third user interface having previous results related to previous user requests received prior to detecting the user input. In accordance with a determination that the displayed content does not include media content that is played on the electronic device, display of the previous result in the third user interface is replaced with display of the second result.

In some examples, the processing unit 1108 is further configured to determine whether the displayed content includes a second user interface with previous results from previous user requests in accordance with a determination that the displayed content includes media content playing on the electronic device (e.g., with the determining unit 1114). In accordance with a determination that the displayed content includes a second user interface having a previous result from a previous user request, the previous result is replaced with the second result.

In some examples, the processing unit 1108 is further configured to, in accordance with a determination that the media content can be paused, pause playing the media content on the electronic device (e.g., with the task execution unit 1120), and display a third user interface on the display unit (e.g., with the display enabling unit 1110) that includes a portion of the second result, wherein the third user interface occupies a majority of a display area of the display unit.
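To make the preceding branching concrete, the following Swift sketch walks the same decision tree: predetermined request types yield a text overlay, unpausable media yields the compact second user interface, pausable media is paused before the expansive third user interface is shown, and displayed content with no playing media goes straight to the third user interface. All type and function names here are illustrative assumptions, not identifiers from the embodiments.

```swift
import Foundation

// Hypothetical model types; the names are illustrative, not from the embodiments.
struct SecondResult {
    let items: [String]
}

enum DisplayedContent {
    case mainMenu
    case playingMedia(isPausable: Bool)
}

enum ResultPresentation {
    case textOverlay                                    // predetermined request types
    case compactSecondUI(SecondResult)                  // media keeps playing, small overlay
    case expansiveThirdUI(SecondResult, pausedMedia: Bool)
}

/// Chooses how to present results for a resolved user intent, mirroring the
/// branching described above.
func presentation(isPredeterminedRequestType: Bool,
                  displayedContent: DisplayedContent,
                  secondResult: SecondResult) -> ResultPresentation {
    if isPredeterminedRequestType {
        // Text-form results are overlaid while any media continues to play.
        return .textOverlay
    }
    switch displayedContent {
    case .playingMedia(let isPausable):
        if isPausable {
            // Pause playback, then show the expansive third user interface.
            return .expansiveThirdUI(secondResult, pausedMedia: true)
        } else {
            // Keep playing; the second user interface occupies less display
            // area than the media content.
            return .compactSecondUI(secondResult)
        }
    case .mainMenu:
        // No media playing: the third user interface may occupy most of the display.
        return .expansiveThirdUI(secondResult, pausedMedia: false)
    }
}
```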

In some examples, the processing unit 1108 is further configured to transmit the audio data to a server to perform natural language processing (e.g., with the transmitting unit 1126 and using the communication unit 1107), and to indicate to the server that the audio data is associated with the media application (e.g., with the transmitting unit 1126). The indication biases natural language processing toward media-related user intent.

In some examples, the processing unit 1108 is further configured to transmit the audio data to a server to perform speech-to-text processing (e.g., with the transmitting unit 1126).

In some examples, the processing unit 1108 is further configured to indicate to the server that the audio data is associated with the media application (e.g., with the transmitting unit 1126). The indication biases speech-to-text processing toward media-related text results.
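A minimal sketch of how a client might attach the media-application indication when uploading sampled audio is shown below. The server endpoint, payload fields, and the "media_application" context value are assumptions made for this illustration; the embodiments do not specify a wire format.

```swift
import Foundation

// Illustrative request payload; no wire format is specified by the embodiments.
struct AssistantAudioRequest: Codable {
    let audioBase64: String
    // Indicates that the audio originated in a media application, biasing
    // speech-to-text and natural language processing toward media-related results.
    let context: String
}

/// Builds a POST request carrying the sampled audio and the media indication.
/// The field names and context value are assumptions made for this sketch.
func makeAssistantRequest(audioData: Data, serverURL: URL) -> URLRequest {
    let payload = AssistantAudioRequest(audioBase64: audioData.base64EncodedString(),
                                        context: "media_application")
    var request = URLRequest(url: serverURL)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(payload)
    return request
}
```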

In some examples, the processing unit 1108 is further configured to obtain a textual representation of the user utterance (e.g., with the obtaining unit 1122), where the textual representation is based on a previous user utterance received prior to sampling the audio data.

In some examples, the textual representation is based on a time at which a previous user utterance was received prior to sampling the audio data.
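One way such recency weighting could work is sketched below: a candidate transcription is scored by vocabulary overlap with previous utterances, with more recent utterances weighted more heavily. This is an illustrative heuristic under assumed names, not the scoring actually used by the described system.

```swift
import Foundation

/// One prior utterance together with the time it was received.
struct PreviousUtterance {
    let text: String
    let receivedAt: Date
}

/// Scores a candidate transcription by how much vocabulary it shares with
/// previous utterances, weighting recent utterances more heavily.
func contextScore(for candidate: String,
                  history: [PreviousUtterance],
                  now: Date = Date(),
                  halfLife: TimeInterval = 60) -> Double {
    let candidateWords = Set(candidate.lowercased().split(separator: " "))
    return history.reduce(0.0) { score, utterance in
        let age = now.timeIntervalSince(utterance.receivedAt)
        let recencyWeight = pow(0.5, age / halfLife)   // decays with elapsed time
        let words = Set(utterance.text.lowercased().split(separator: " "))
        let overlap = Double(candidateWords.intersection(words).count)
        return score + recencyWeight * overlap
    }
}
```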

In some examples, the processing unit 1108 is further configured to obtain (e.g., with obtaining unit 1122) a determination that the user intent does not correspond to one of a plurality of core competencies associated with the electronic device. The processing unit 1108 is further configured to cause the second electronic device to perform a task that facilitates satisfying the user's intent (e.g., with the task performing unit 1120).
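The hand-off described in the preceding paragraph can be sketched as follows; the competency set, the protocol, and the function names are assumptions introduced for illustration only.

```swift
import Foundation

// Illustrative competency set; the actual core competencies are device-specific.
enum Competency: Hashable {
    case mediaSearch, mediaPlayback, weather, sports
}

protocol SecondaryDevice {
    func perform(taskDescription: String)
}

/// If the resolved intent falls outside the device's core competencies,
/// hand the task to a second electronic device to help satisfy it.
func dispatch(intent: String,
              requiredCompetency: Competency,
              coreCompetencies: Set<Competency>,
              fallbackDevice: SecondaryDevice) -> Bool {
    guard coreCompetencies.contains(requiredCompetency) else {
        fallbackDevice.perform(taskDescription: intent)
        return false   // handled by the second device
    }
    return true        // handled locally
}
```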

In some examples, the processing unit 1108 is further configured to obtain a determination of whether the user utterance includes an ambiguous term (e.g., with the obtaining unit 1122). The processing unit 1108 is further configured to, in response to obtaining a determination that the user utterance includes an ambiguous term, obtain two or more candidate user intents based on the ambiguous term (e.g., with the obtaining unit 1122), and display the two or more candidate user intents on the display unit (e.g., with the display enabling unit 1110).

In some examples, the processing unit 1108 is further configured to, while displaying the two or more candidate user intents, receive a user selection of one of the two or more candidate user intents (e.g., with the detection unit 1112). A user intent is determined based on the user selection.

In some examples, the processing unit 1108 is further configured to detect a second user input (e.g., with the detection unit 1112). The processing unit 1108 is further configured to sample second audio data (e.g., with the sampling unit 1116) in response to detecting the second user input. The second audio data includes a second user utterance representing the user selection.

In some examples, the two or more candidate user intents are displayed without outputting speech associated with the two or more candidate user intents.
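The disambiguation flow described in the last few paragraphs can be sketched as below. The callback-based structure and the CandidateIntent type are assumptions made for illustration; the embodiments only require that the candidates be displayed (without speech output) and that a user selection determine the intent.

```swift
import Foundation

// Hypothetical candidate-intent type for illustration.
struct CandidateIntent {
    let description: String
}

/// If the utterance contains an ambiguous term, derive two or more candidate
/// intents and show them for silent selection; otherwise resolve directly.
/// Assumes `candidates` returns at least one option and `awaitSelection`
/// returns a valid index.
func resolveIntent(utterance: String,
                   ambiguousTerms: Set<String>,
                   candidates: (String) -> [CandidateIntent],
                   display: ([CandidateIntent]) -> Void,
                   awaitSelection: () -> Int) -> CandidateIntent {
    let words = Set(utterance.lowercased().split(separator: " ").map(String.init))
    if let ambiguous = words.first(where: { ambiguousTerms.contains($0) }) {
        let options = candidates(ambiguous)
        display(options)               // shown on the display unit, no speech output
        return options[awaitSelection()]
    }
    return CandidateIntent(description: utterance)
}
```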

According to some embodiments, the processing unit 1108 is further configured to display content on a display unit (e.g., the display unit 1102 or a separate display unit) (e.g., utilizing the display enabling unit 1110). The processing unit 1108 is further configured to detect user input while displaying content (e.g., with the detection unit 1112). The processing unit 1108 is further configured to display a virtual keyboard interface on the display unit (e.g., with the display enabling unit 1110) in response to detecting the user input. The processing unit 1108 is further configured to cause the selectable affordance to appear on a display of the second electronic device (e.g., with the task execution unit 1120). Selection of the affordance causes the electronic device to receive text input via a keyboard of a second electronic device (e.g., using communication unit 1107).

In some examples, the processing unit 1108 is further configured to receive text input via the keyboard of the second electronic device (e.g., with the detection unit 1112), wherein the text input represents a user request. The processing unit 1108 is further configured to obtain results that at least partially satisfy the user request (e.g., with the obtaining unit 1122), and display a user interface on the display unit (e.g., with the display enabling unit 1110), wherein the user interface includes at least a portion of the results.
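The typing hand-off between the media device and the second electronic device can be illustrated with the following sketch, in which a delegate relays text typed on the second device back to the media device. The protocol and class names are hypothetical.

```swift
import Foundation

/// Minimal sketch of the typing hand-off: after the affordance shown on the
/// second device is selected, text typed on its keyboard is forwarded to the
/// media device, which treats it as a user request.
protocol KeyboardAffordanceDelegate: AnyObject {
    func didReceive(textInput: String)
}

final class SecondDeviceKeyboardSession {
    weak var delegate: KeyboardAffordanceDelegate?

    // Called on the second device when the user selects the affordance and types.
    func userTyped(_ text: String) {
        delegate?.didReceive(textInput: text)
    }
}

final class MediaDeviceSearch: KeyboardAffordanceDelegate {
    func didReceive(textInput: String) {
        // Treat the relayed text as a user request and obtain results for it.
        print("Searching for: \(textInput)")
    }
}
```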

In some examples, the displayed content includes a second user interface having a plurality of exemplary natural language requests. In some examples, the displayed content includes media content. In some examples, the displayed content includes a third user interface having results from previous user requests, where the third user interface occupies at least a majority of the display area of the display unit. In some examples, the virtual keyboard interface is overlaid on at least a portion of the third user interface. In some examples, the user input is detected via a remote control of the electronic device, and the remote control and the second electronic device are different devices. In some examples, the user input includes a predetermined motion pattern on a touch-sensitive surface of the remote control device. In some examples, the user input is detected via a second electronic device.
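As an illustration of detecting a predetermined motion pattern on the remote control's touch-sensitive surface, the sketch below checks a sequence of touch samples for a mostly vertical downward swipe. Both the specific pattern and the thresholds are assumptions; the embodiments do not fix a particular gesture.

```swift
import Foundation

/// A sampled touch location on the remote's touch-sensitive surface,
/// normalized to the range 0...1 in each dimension.
struct TouchSample {
    let x: Double
    let y: Double
}

/// Illustrative check for one possible "predetermined motion pattern":
/// a mostly vertical downward swipe covering at least `minimumTravel`.
func isDownwardSwipe(_ samples: [TouchSample],
                     minimumTravel: Double = 0.3) -> Bool {
    guard let first = samples.first, let last = samples.last else { return false }
    let dy = last.y - first.y          // positive when the finger moves downward
    let dx = abs(last.x - first.x)
    return dy >= minimumTravel && dx < dy / 2
}
```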

The operations described above with reference to fig. 7A-7C and 9 are optionally implemented by the components shown in fig. 1-3 and 4A-4B. For example, the display operations 702, 716, 732, 736, 738, 742, 746, 902, 906, and 914, the detect operations 704, 718, 904, and 910, the determine operations 708, 710, 712, 714, 720, 724, 728, 736, and 740, the sample operation 706, the execute operations 722, 726, 744, and 908, the obtain operations 730, 734, and 912, and the switch operations 552 and 564 may be implemented by one or more of the operating systems 252 and 352, the GUI modules 256 and 356, the application modules 262 and 362, the digital assistant module 426, and the one or more processors 204, 304, and 404. It will be clear to those skilled in the art how other processes may be implemented based on the components shown in fig. 1-3 and 4A-4B.

According to some implementations, a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium) is provided that stores one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions for performing any of the methods described herein.

According to some implementations, there is provided an electronic device (e.g., a portable electronic device) comprising means for performing any of the methods described herein.

According to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes a processing unit configured to perform any of the methods described herein.

According to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes one or more processors and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for performing any of the methods described herein.

Exemplary methods, non-transitory computer-readable storage media, systems, and electronic devices are set forth in the following items:

1. A method for operating a digital assistant of a media system, the method comprising:

at an electronic device with memory and one or more processors:

displaying content on a display unit;

detecting a user input;

determining whether the user input corresponds to a first input type; and in accordance with a determination that the user input corresponds to the first input type:

displaying, on a display unit, a plurality of exemplary natural language requests that are contextually related to the displayed content, wherein receiving a user utterance corresponding to one of the plurality of exemplary natural language requests causes the digital assistant to perform a corresponding action.

2. The method of item 1, wherein the user input is detected on a remote control of the electronic device.

3. The method of item 2, wherein the first input type comprises pressing a button of the remote control and releasing the button within a predetermined duration.

4. The method of any of items 1-3, wherein the plurality of exemplary natural language requests are displayed on the display unit via a first user interface, and wherein the first user interface is overlaid on the displayed content.

5. The method of any of items 1-4, wherein the displayed content comprises media content, and wherein the media content continues to play while a plurality of exemplary natural language requests are displayed.

6. The method of any of items 1-5, further comprising:

in accordance with a determination that the user input corresponds to the first input type, displaying, on the display unit, a visual indicator for indicating that the digital assistant is not processing audio input.

7. The method of any of items 1-6, wherein the plurality of exemplary natural language requests are displayed on the display unit a predetermined amount of time after determining that the user input corresponds to the first input type.

8. The method of any of items 1-7, wherein each of the plurality of exemplary natural language requests is displayed separately in a predetermined order and at different times.

9. The method of any of items 1-8, wherein displaying a plurality of exemplary natural language requests comprises:

displaying multiple lists of exemplary natural language requests, wherein each list is displayed in turn at a different time.

10. The method of any of items 1-9, further comprising:

in accordance with a determination that the user input does not correspond to the first input type:

determining whether the user input corresponds to a second input type; and

in accordance with a determination that the user input corresponds to the second input type:

sampling the audio data;

determining whether the audio data contains a user request; and

in accordance with a determination that the audio data contains a user request, performing a task that at least partially satisfies the user request.

11. The method of item 10, wherein the second input type comprises pressing a button of a remote control of the electronic device and holding the button for more than a predetermined duration.

12. The method of any of items 10-11, further comprising:

in accordance with a determination that the audio data does not contain the user request, displaying, on the display unit, a request to clarify the user's intent.

13. The method of any of items 10-12, wherein the displayed content includes media content, and wherein the media content continues to play on the electronic device while the audio data is sampled and while the task is performed.

14. The method of item 13, further comprising:

outputting audio associated with the media content; and

in accordance with a determination that the user input corresponds to the second input type, reducing the amplitude of the audio.

15. The method of any of items 10-14, wherein the task is performed without outputting task-related speech from the electronic device.

16. The method of any of items 10 to 15, wherein the audio data is sampled upon detection of the user input.

17. The method of any of items 10 to 15, wherein the audio data is sampled for a predetermined duration after detecting the user input.

18. The method of any of items 10 to 17, wherein the audio data is sampled via a first microphone on a remote control of the electronic device, and further comprising:

while sampling the audio data, sampling background audio data via a second microphone on the remote control; and

using the background audio data to remove background noise in the audio data.

19. The method of any of items 10-18, wherein audio associated with the displayed content is output via an audio signal from an electronic device, and further comprising:

using the audio signal to remove background noise in the audio data.

20. The method of any of items 10-19, further comprising:

in response to detecting the user input, displaying, on the display unit, a visual cue for prompting the user to provide a verbal request.

21. The method of any of items 10-20, wherein the performed task comprises:

obtaining results that at least partially satisfy the user request; and

displaying a second user interface on the display unit, the second user interface including a portion of the result, wherein at least a portion of the content continues to be displayed while the second user interface is displayed, and wherein a display area of the second user interface on the display unit is smaller than a display area of at least a portion of the content on the display unit.

22. The method of item 21, wherein a second user interface is overlaid on the displayed content.

23. The method of any of items 21 to 22, wherein the portion of the results includes one or more media items, and further comprising:

receiving, via a second user interface, a selection of a media item of the one or more media items; and

displaying, on the display unit, media content associated with the selected media item.

24. The method of any of items 21 to 22, further comprising:

while displaying the second user interface, detecting a second user input; and

in response to detecting the second user input, ceasing to display the second user interface.

25. The method of item 24, wherein the second user input is detected on a remote control of the electronic device, and wherein the second user input comprises a first predetermined motion pattern on a touch-sensitive surface of the remote control.

26. The method of any of items 21 to 22, further comprising:

while displaying the second user interface, detecting a third user input; and

in response to detecting the third user input, replacing display of the second user interface with display of a third user interface on the display unit, the third user interface including at least a portion of the results, wherein the third user interface occupies at least a majority of the display area of the display unit.

27. The method of item 26, wherein a third user input is detected on a remote control of the electronic device, and wherein the third user input comprises a second predetermined motion pattern on a touch-sensitive surface of the remote control.

28. The method of any of items 26-27, further comprising:

in response to detecting the third user input:

obtaining a second result that is different from the result, wherein the second result at least partially satisfies the user request, and wherein the third user interface includes at least a portion of the second result.

29. The method of item 28, wherein the second result is based on a user request received before the user input is detected.

30. The method of any of items 28-29, wherein upon detecting the third user input, a focus of the second user interface is on an item that is part of the results, and wherein the second results are contextually related to the item.

31. The method of any of items 26-30, wherein the displayed content includes media content, and wherein in response to detecting a third user input, pausing playback of the media content on the electronic device.

32. The method of any of items 26 to 31, wherein at least a portion of the results include one or more media items, and further comprising:

receiving a selection of a media item of the one or more media items via a third user interface; and

displaying, on the display unit, media content associated with the media item.

33. The method of any of items 26 to 32, further comprising:

while displaying the third user interface, detecting a fourth user input associated with a direction on the display unit;

in response to detecting the fourth user input:

switching focus of the third user interface from the first item to a second item on the third user interface, the second item being positioned in a direction relative to the first item.

34. The method of any of items 26 to 33, further comprising:

while displaying the third user interface, detecting a fifth user input; and

in response to detecting the fifth user input:

displaying the search field; and

displaying a virtual keyboard interface on the display unit, wherein input received via the virtual keyboard interface results in text input in the search field.

35. The method of any of items 26 to 34, further comprising:

while displaying the third user interface, detecting a sixth user input; and

in response to detecting the sixth user input:

sampling second audio data, the second audio data comprising a second user request;

determining whether the second user request is a request for refining a result of the user request; and

In accordance with a determination that the second user request is a request for refining a result of the user request:

displaying a subset of the results via the third user interface.

36. The method of item 35, wherein the subset of results is displayed at a top row of a third user interface.

37. The method of any of items 35-36, further comprising:

in accordance with a determination that the second user request is not a request for refining a result of the user request:

obtaining a third result that at least partially satisfies the second user request; and

displaying a portion of the third result via the third user interface.

38. The method of item 37, wherein a portion of the third result is displayed at a top row of the third user interface.

39. The method of any of items 35 to 38, further comprising:

obtaining a fourth result that at least partially satisfies the user request or the second user request; and

displaying a portion of the fourth result via the third user interface.

40. The method of item 39, wherein a portion of the fourth result is displayed at a row subsequent to a top row of the third user interface.

41. The method of any of items 39-40, wherein upon detecting the sixth user input, a focus of the third user interface is located on one or more items of the third user interface, and wherein the fourth result is contextually relevant to the one or more items.

42. The method of any of items 26 to 41, further comprising:

while displaying the third user interface, detecting a seventh user input;

in response to detecting the seventh user input, ceasing to display the third user interface.

43. The method of item 42, wherein the displayed content is media content, wherein playing of the media content on the electronic device is paused in response to detecting the third user input, and wherein playing of the media content on the electronic device is resumed in response to detecting the seventh user input.

44. The method of any of items 42-43, wherein the seventh user input comprises pressing a menu button of a remote control of the electronic device.

45. A method for operating a digital assistant of a media system, the method comprising:

at an electronic device with memory and one or more processors:

displaying content on a display unit;

while displaying content, detecting a user input;

in response to detecting the user input:

displaying a user interface on the display unit, the user interface including a plurality of exemplary natural language requests that are contextually related to the displayed content, wherein receiving a user utterance corresponding to one of the plurality of exemplary natural language requests causes the digital assistant to perform a corresponding action.

46. The method of item 45, wherein the displayed content comprises media content.

47. The method of item 46, wherein the plurality of exemplary natural language requests includes natural language requests for modifying one or more settings associated with the media content.

48. The method of any of items 46-47, wherein the media content continues to play while the user interface is displayed.

49. The method of any of items 46-48, further comprising:

outputting audio associated with the media content, wherein an amplitude of the audio does not decrease in response to detecting the user input.

50. The method of item 45, wherein the displayed content comprises a main menu user interface.

51. The method of item 50, wherein the plurality of exemplary natural language requests includes an exemplary natural language request associated with each of a plurality of core competencies of the digital assistant.

52. The method of item 45, wherein the displayed content includes a second user interface having results associated with a previous user request.

53. The method of item 52, wherein the plurality of exemplary natural language requests includes natural language requests for refining the results.

54. The method of any of items 45-53, wherein the user interface includes text instructions for invoking and interacting with the digital assistant.

55. The method of any of items 45-54, wherein the user interface includes a visual indicator that the digital assistant is not receiving audio input.

56. The method of any of items 45-55, wherein a user interface is overlaid on the displayed content.

57. The method of any of items 45 to 56, further comprising:

in response to detecting the user input, reducing the brightness of the displayed content to highlight the user interface.

58. The method of any of items 45-57, wherein the user input is detected on a remote control of the electronic device.

59. The method of item 58, wherein the user input comprises pressing a button of the remote control device and releasing the button within a predetermined duration after pressing the button.

60. The method of item 59, wherein the button is configured to invoke a digital assistant.

61. The method of any of items 45-60, wherein the user interface includes text instructions for displaying a virtual keyboard interface.

62. The method of any of items 45 to 61, further comprising:

after displaying the user interface, detecting a second user input; and

in response to detecting the second user input, a virtual keyboard interface is displayed on the display unit.

63. The method of item 62, further comprising:

changing the focus of the user interface to a search field on the user interface.

64. The method of item 63, wherein the search field is configured to receive a text search query via a virtual keyboard interface.

65. The method of any of items 45-64, wherein the virtual keyboard interface cannot be used to interact with the digital assistant.

66. The method of any of items 45-65, wherein the second user input includes a predetermined motion pattern on a touch-sensitive surface of a remote control device of the electronic device.

67. The method of any of items 45-66, wherein a plurality of exemplary natural language requests are displayed a predetermined amount of time after detecting the user input.

68. The method of any of items 45-67, wherein displaying a plurality of exemplary natural language requests further comprises:

displaying each of the plurality of exemplary natural language requests one at a time and in a predetermined order.

69. The method of item 68, wherein displaying in order further comprises:

replacing display of a previously displayed one of the plurality of exemplary natural language requests with a subsequent one of the plurality of exemplary natural language requests.

70. The method of any of items 45-69, wherein the content includes a second user interface having one or more items, wherein upon detecting the user input, a focus of the second user interface is on an item of the one or more items, and wherein the plurality of exemplary natural language requests are contextually related to the item of the one or more items.

71. A method for operating a digital assistant of a media system, the method comprising:

at an electronic device with memory and one or more processors:

displaying content on a display unit;

detecting a user input; and

in response to detecting the user input:

displaying one or more suggested examples of the natural language utterance, the one or more suggested examples being contextually relevant to the displayed content and causing the digital assistant to perform a corresponding action when spoken by the user.

72. The method of item 71, further comprising:

detecting a second user input;

in response to detecting the second user input:

sampling the audio data;

determining whether the sampled audio data contains one of the one or more suggested examples of the natural language utterance; and

in accordance with a determination that the sampled audio data contains one of the one or more suggested examples of the natural language utterance, performing the corresponding action.

73. A method for operating a digital assistant of a media system, the method comprising:

at an electronic device with memory and one or more processors:

displaying content on a display unit;

while displaying content, detecting a user input;

in response to detecting the user input, sampling audio data, wherein the audio data comprises a user utterance;

obtaining a determination of a user intent corresponding to a user utterance;

obtaining a determination of whether the user intent includes a request to adjust a state or setting of an application on the electronic device; and

in response to a determination that the obtained user intent includes a request to adjust a state or setting of an application on the electronic device, adjusting the state or setting of the application to meet the user intent.

74. The method of item 73, wherein the request to adjust the state or setting of the application on the electronic device comprises a request to play a particular media item, and wherein adjusting the state or setting of the application to meet the user intent comprises playing the particular media item.

75. The method of item 74, wherein the displayed content includes a user interface having media items, wherein the user utterance does not explicitly define a particular media item to play, and further comprising:

determining whether a focus of a user interface is located on a media item; and

in accordance with a determination that the focus of the user interface is located on the media item, identifying the media item as the particular media item to be played.

76. The method of item 73, wherein the request to adjust the state or setting of the application on the electronic device comprises a request to launch the application on the electronic device.

77. The method of item 73, wherein the displayed content comprises media content playing on the electronic device, and wherein the status or setting relates to the media content playing on the electronic device.

78. The method of item 77, wherein the request to adjust the state or settings of the application on the electronic device comprises a request to fast forward or rewind media content being played on the electronic device.

79. The method of item 77, wherein the request to adjust the state or setting of the application on the electronic device comprises a request to jump forward or backward in the media content to play a particular portion of the media content.

80. The method of item 77, wherein the request to adjust the state or setting of the application on the electronic device comprises a request to pause playing media content on the electronic device.

81. The method of item 77, wherein the request to adjust the state or setting of an application on the electronic device comprises a request to open or close subtitles for the media content.

82. The method of item 73, wherein:

the displayed content includes a user interface having a first media item and a second media item;

the request to adjust the state or setting of the application on the electronic device includes a request to switch focus of the user interface from the first media item to the second media item; and

adjusting the state or setting of the application to meet the user intent includes switching focus of the user interface from the first media item to the second media item.

83. The method of item 73, wherein:

the displayed content includes media content being played on a media device;

the user utterance is a natural language expression indicating that the user did not hear a portion of audio associated with the media content;

the request to adjust the state or setting of the application on the electronic device comprises a request to play back a portion of the media content corresponding to the portion of audio not heard by the user; and

adjusting the state or setting of the application includes:

fast rewinding the media content a predetermined amount toward a previous portion of the media content; and

the media content is played back from the previous portion.

84. The method of item 83, wherein adjusting the state or setting of the application further comprises:

the closed captioning is opened before the playback of the media content is resumed from the previous portion.

85. The method of any of items 83-84, wherein:

the request to adjust the state or setting of the application on the electronic device further comprises a request to increase the volume of audio associated with the media content; and

adjusting the state or setting of the application further includes increasing the volume of audio associated with the media content before resuming playing the media content from the previous portion.

86. The method of any of items 83-84, wherein:

speech in audio associated with the media content is converted to text; and

adjusting the state or setting of the application further includes displaying a portion of the text when playback of the media content is resumed from the previous portion.

87. The method of any of items 73-85, wherein obtaining a determination of a user intent corresponding to a user utterance further comprises:

obtaining a determination of a user emotion associated with the user utterance, wherein the user intent is determined based on the determined user emotion.

88. The method of any of items 73-87, further comprising:

in response to obtaining a determination that the user intent does not include a request to adjust a state or setting of an application on the electronic device, obtaining a determination of whether the user intent is one of a plurality of predetermined request types; and

in response to obtaining a determination that the user intent is one of a plurality of predetermined request types:

obtaining a result that at least partially satisfies the user's intent; and

displaying the results in text form on the display unit.

89. The method of item 88, wherein the plurality of predetermined request types includes a request for a current time at a particular location.

90. The method of item 88, wherein the plurality of predetermined request types comprises a request to present a joke.

91. The method of item 88, wherein the plurality of predetermined request types includes a request for information about media content being played on an electronic device.

92. The method of any of items 88-91, wherein the results in text form are overlaid on the displayed content.

93. The method of any of items 88-92, wherein the displayed content includes media content that is playing on the electronic device, and wherein the media content continues to play while the textual results are displayed.

94. The method of any of items 88 to 93, further comprising:

in response to obtaining a determination that the user intent is not one of a plurality of predetermined request types:

obtaining a second result that at least partially satisfies the user's intent;

determining whether the displayed content includes media content that is playing on the electronic device; and

In accordance with a determination that the displayed content includes media content:

determining whether the media content can be paused; and

in accordance with a determination that the media content cannot be paused, displaying, on the display unit, a second user interface having a portion of the second result, wherein a display area occupied by the second user interface on the display unit is smaller than a display area occupied by the media content on the display unit.

95. The method of item 94, wherein the user intent comprises a request for a weather forecast at a particular location.

96. The method of item 94, wherein the user intent comprises a request for information associated with a sports team or athlete.

97. The method of any of items 94-96, wherein the user intent is not a media search query, and wherein the second results include one or more media items having media content that at least partially satisfies the user intent.

98. The method of item 97, wherein the second result further comprises non-media data that at least partially satisfies the user's intent.

99. The method of item 94, wherein the user intent is a media search query and the second result includes a plurality of media items corresponding to the media search query.

100. The method of any of items 94-99, further comprising:

in accordance with a determination that the displayed content does not include media content that is playing on the electronic device, displaying a third user interface on the display unit having a portion of the second result, wherein the third user interface occupies a majority of a display area of the display unit.

101. The method of item 100, wherein the displayed content comprises a main menu user interface.

102. The method of item 100, wherein:

the displayed content includes the third user interface having previous results related to previous user requests received prior to detecting the user input; and

in accordance with a determination that the displayed content does not include media content that is playing on the electronic device, display of the previous results in the third user interface is replaced with display of the second result.

103. The method of any of items 94-102, further comprising:

in accordance with a determination that the displayed content includes media content being played on the electronic device:

determining whether the displayed content includes a second user interface having a previous result from a previous user request, wherein, in accordance with a determination that the displayed content includes the second user interface having the previous result, the previous result is replaced with the second result.

104. The method of any of items 94-103, further comprising:

in accordance with a determination that the media content can be paused:

pausing the playing of the media content on the electronic device; and

displaying a third user interface having a portion of the second result on the display unit, wherein the third user interface occupies a majority of a display area of the display unit.

105. The method of any of items 73-104, further comprising:

transmitting the audio data to a server to perform natural language processing; and

indicating to the server that the audio data is associated with the media application, wherein the indication biases natural language processing towards media-related user intent.

106. The method of any of items 73 to 105, further comprising:

transmitting the audio data to a server to perform speech-to-text processing.

107. The method of item 106, further comprising:

indicating to the server that the audio data is associated with a media application, wherein the indication biases speech-to-text processing toward media-related text results.

108. The method of any of items 106-107, further comprising:

obtaining a textual representation of the user utterance, the textual representation being based on a previous user utterance received prior to sampling the audio data.

109. The method of item 108, wherein the text representation is based on a time at which a previous user utterance was received prior to sampling the audio data.

110. The method of any of items 73-109, further comprising:

obtaining a determination that the user intent does not correspond to one of a plurality of core competencies associated with the electronic device; and

causing the second electronic device to perform a task that facilitates satisfying the user's intent.

111. The method of any of items 73-110, wherein obtaining a determination of user intent further comprises:

obtaining a determination of whether the user utterance includes an ambiguous term;

in response to obtaining a determination that the user utterance includes an ambiguous term:

obtaining two or more candidate user intents based on the ambiguous term; and

displaying the two or more candidate user intents on the display unit.

112. The method of item 111, further comprising:

while displaying the two or more candidate user intents, receiving a user selection of one of the two or more candidate user intents, wherein the user intent is determined based on the user selection.

113. The method of item 112, wherein receiving a user selection further comprises:

Detecting a second user input; and

in response to detecting the second user input, second audio data is sampled, wherein the second audio data includes a second user utterance representing a user selection.

114. The method of any of items 111-113, wherein the two or more candidate user intents are displayed without outputting speech associated with the two or more candidate user intents.

115. A method for operating a digital assistant of a media system, the method comprising:

at an electronic device with memory and one or more processors:

displaying content on a display unit;

while displaying content, detecting a user input;

in response to detecting the user input, sampling audio data, wherein the audio data includes a user utterance representing a media search request;

obtaining a plurality of media items satisfying a media search request; and

displaying, via a user interface, at least a portion of the plurality of media items on the display unit.

116. The method of item 115, wherein the content continues to be displayed on the display unit while at least a portion of the plurality of media items are displayed, and wherein a display area occupied by the user interface is smaller than a display area occupied by the content.

117. The method of any of items 115-116, further comprising:

determining whether a number of media items in the plurality of media items is less than or equal to a predetermined number, wherein at least a portion of the plurality of media items comprises the plurality of media items in accordance with the determination that the number of media items in the plurality of media items is less than or equal to the predetermined number.

118. The method of item 117, wherein in accordance with a determination that a number of media items in the plurality of media items is greater than a predetermined number, a number of media items in at least a portion of the plurality of media items is equal to the predetermined number.

119. The method of any of items 115-118, wherein each media item of the plurality of media items is associated with a relevance score relative to the media search request, and wherein the relevance score of at least a portion of the plurality of media items is highest among the plurality of media items.

120. The method of any of items 115-119, wherein each media item of at least a portion of the plurality of media items is associated with a popularity rating, and wherein at least a portion of the plurality of media items is arranged in the user interface based on the popularity rating.

121. The method of any of items 115-120, further comprising:

while displaying at least a portion of the plurality of media items, detecting a second user input; and

in response to detecting the second user input, the user interface is expanded to occupy at least a majority of a display area of the display unit.

122. The method of item 121, further comprising:

in response to detecting the second user input:

determining whether a number of media items in the plurality of media items is less than or equal to a predetermined number; and

in accordance with a determination that the number of media items in the plurality of media items is less than or equal to a predetermined number:

obtaining a second plurality of media items that at least partially satisfy the media search request, the second plurality of media items being different from at least a portion of the media items; and

displaying, via the expanded user interface, the second plurality of media items on the display unit.

123. The method of item 122, further comprising:

determining whether the media search request includes more than one search parameter, wherein in accordance with a determination that the media search request includes more than one search parameter, the second plurality of media items are organized in the expanded user interface in accordance with the more than one search parameter of the media search request.

124. The method of any of items 122 to 123, further comprising:

in accordance with a determination that the number of media items in the plurality of media items is greater than a predetermined number:

displaying, via the expanded user interface, at least a second portion of the plurality of media items, wherein the at least a second portion of the plurality of media items is different from at least a portion of the plurality of media items.

125. The method of item 124, wherein the at least a second portion of the plurality of media items comprises two or more media types, and wherein the at least a second portion of the plurality of media items is organized in the expanded user interface according to each of the two or more media types.

126. The method of any of items 122-125, further comprising:

detecting a third user input;

in response to detecting the third user input, causing the expanded user interface to scroll;

determining whether the expanded user interface has scrolled beyond a predetermined location on the expanded user interface; and

in response to determining that the expanded user interface has scrolled beyond the predetermined location on the expanded user interface, displaying at least a third portion of the plurality of media items on the expanded user interface, wherein the at least a third portion of the plurality of media items is organized on the expanded user interface according to one or more media content providers associated with the at least a third portion of the plurality of media items.

127. A method for operating a digital assistant of a media system, the method comprising:

at an electronic device with memory and one or more processors:

displaying content on a display unit;

while displaying content, detecting a user input; and

in response to detecting the user input:

displaying a virtual keyboard interface on a display unit; and

causing a selectable affordance to appear on a display of the second electronic device, wherein selection of the affordance causes a text input to be received by the electronic device via a keyboard of the second electronic device.

128. The method of item 127, further comprising:

receiving text input via a keyboard of the second electronic device, the text input representing a user request;

obtaining results that at least partially satisfy the user request; and

displaying a user interface on the display unit, the user interface including at least a portion of the results.

129. The method of any of items 127-128, wherein the displayed content includes a second user interface having a plurality of exemplary natural language requests.

130. The method of item 129, wherein the displayed content comprises media content.

131. The method of any of items 127-128, wherein the displayed content includes a third user interface with results from previous user requests, the third user interface occupying at least a majority of a display area of the display unit.

132. The method of item 131, wherein the virtual keyboard interface is overlaid on at least a portion of the third user interface.

133. The method of any of items 127-132, wherein the user input is detected via a remote control of the electronic device, and wherein the remote control and the second electronic device are different devices.

134. The method of item 133, wherein the user input comprises a predetermined motion pattern on a touch-sensitive surface of the remote control device.

135. The method of any of items 127-132, wherein the user input is detected via a second electronic device.

136. A non-transitory computer readable storage medium containing computer executable instructions for performing the method of any of items 1-135.

137. A system, comprising:

the non-transitory computer readable storage medium of item 136; and

A processor configured to execute computer-executable instructions.

138. An apparatus comprising means for performing the method of any one of items 1 to 135.

139. An electronic device, comprising:

an input unit configured to receive a user input;

a processing unit coupled to the input unit, wherein the processing unit is configured to:

displaying content on a display unit;

detecting a user input via an input unit;

determining whether the user input corresponds to a first input type; and

in accordance with a determination that the user input corresponds to the first input type:

displaying, on a display unit, a plurality of exemplary natural language requests that are contextually related to the displayed content, wherein receiving a user utterance corresponding to one of the plurality of exemplary natural language requests causes the digital assistant to perform a corresponding action.

140. The electronic device of item 139, further comprising an audio input unit coupled to the processing unit, wherein the processing unit is further configured to:

in accordance with a determination that the user input does not correspond to the first input type:

determining whether the user input corresponds to a second input type; and

In accordance with a determination that the user input corresponds to the second input type:

sampling audio data using an audio input unit;

determining whether the audio data contains a user request; and

in accordance with a determination that the audio data contains a user request, performing a task that at least partially satisfies the user request.

141. The electronic device of any of items 139-140, wherein the processing unit is further configured to:

obtaining results that at least partially satisfy the user request; and

displaying a second user interface on the display unit, the second user interface including a portion of the result, wherein at least a portion of the content continues to be displayed while the second user interface is displayed, and wherein a display area of the second user interface on the display unit is smaller than a display area of at least a portion of the content on the display unit.

142. The electronic device of item 141, wherein the processing unit is further configured to:

detecting a second user input via the input unit while displaying the second user interface; and

in response to detecting the second user input, ceasing to display the second user interface.

143. The electronic device of any of items 141-142, wherein the processing unit is further configured to:

Detecting a third user input via the input unit while the second user interface is displayed; and

in response to detecting the third user input, replacing display of the second user interface with display of a third user interface on the display unit, the third user interface including at least a portion of the results, wherein the third user interface occupies at least a majority of the display area of the display unit.

144. The electronic device of item 143, wherein the processing unit is further configured to:

while displaying the third user interface, detecting, via the input unit, a fourth user input associated with a direction on the display unit; and

in response to detecting the fourth user input:

switching focus of the third user interface from the first item to a second item on the third user interface, the second item being positioned in a direction relative to the first item.

145. The electronic device of any of items 143-144, wherein the processing unit is further configured to:

detecting a fifth user input via the input unit while displaying the third user interface; and

in response to detecting the fifth user input:

displaying the search field; and

displaying a virtual keyboard interface on the display unit, wherein input received via the virtual keyboard interface results in text input in the search field.

146. The electronic device of any of items 143-145, wherein the processing unit is further configured to:

detecting a sixth user input via the input unit while displaying the third user interface; and

in response to detecting the sixth user input:

sampling second audio data, the second audio data comprising a second user request;

determining whether the second user request is a request for refining a result of the user request; and

in accordance with a determination that the second user request is a request to refine results of the user request:

a subset of the results is displayed via a third user interface.

147. An electronic device, comprising:

an input unit configured to receive a user input;

an audio input unit configured to receive audio data;

a processing unit coupled to the input unit and the audio input unit, wherein the processing unit is configured to:

displaying content on a display unit;

detecting a user input via an input unit while displaying content;

in response to detecting the user input, sampling audio data using an audio input unit, wherein the sampled audio data includes a user utterance;

obtaining a determination of a user intent corresponding to a user utterance;

Obtaining a determination of whether the user intent includes a request to adjust a state or setting of an application on the electronic device; and

in response to a determination that the obtained user intent includes a request to adjust a state or setting of an application on the electronic device, adjusting the state or setting of the application to meet the user intent.

148. The electronic device of item 147, wherein the processing unit is further configured to:

in response to obtaining a determination that the user intent does not include a request to adjust a state or setting of an application on the electronic device, obtaining a determination of whether the user intent is one of a plurality of predetermined request types; and

in response to obtaining a determination that the user intent is one of a plurality of predetermined request types:

obtaining a result that at least partially satisfies the user's intent; and

the results are displayed in text form on a display unit.

149. The electronic device of item 148, wherein the processing unit is further configured to:

in response to obtaining a determination that the user intent is not one of a plurality of predetermined request types:

Obtaining a second result that at least partially satisfies the user's intent;

determining whether the displayed content includes media content that is playing on the electronic device; and

in accordance with a determination that the displayed content includes media content:

determining whether the media content can be paused; and

in accordance with a determination that the media content cannot be paused, displaying, on the display unit, a second user interface having a portion of the second result, wherein a display area occupied by the second user interface on the display unit is smaller than a display area occupied by the media content on the display unit.

150. The electronic device of item 149, wherein the processing unit is further configured to:

in accordance with a determination that the displayed content does not include media content that is playing on the electronic device, displaying a third user interface on the display unit having a portion of the second result, wherein the third user interface occupies a majority of a display area of the display unit.

151. The electronic device of item 149, wherein the processing unit is further configured to:

in accordance with a determination that the media content can be paused:

pausing the playing of the media content on the electronic device; and

displaying a third user interface having a portion of the second result on the display unit, wherein the third user interface occupies a majority of a display area of the display unit.

152. An electronic device, comprising:

an input unit configured to receive a user input;

a processing unit coupled to the input unit, wherein the processing unit is configured to:

displaying content on a display unit;

detecting a user input via an input unit while displaying content; and

in response to detecting the user input:

displaying a virtual keyboard interface on a display unit; and

causing a selectable affordance to appear on a display of the second electronic device, wherein selection of the affordance causes a text input to be received by the electronic device via a keyboard of the second electronic device.

153. The electronic device of item 152, wherein the processing unit is further configured to:

receiving text input via a keyboard of the second electronic device, the text input representing a user request;

obtaining results that at least partially satisfy the user request; and

displaying a user interface on the display unit, the user interface including at least a portion of the results.
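Items 3, 11, and 59 above distinguish the first and second input types by whether a remote-control button is released within, or held beyond, a predetermined duration. A minimal Swift sketch of that classification follows; the 0.5 second threshold is an assumed placeholder, not a value taken from the embodiments.

```swift
import Foundation

enum InputType {
    case first    // press released within the predetermined duration
    case second   // press held beyond the predetermined duration
}

/// Classifies a remote-control button press by how long it was held: a short
/// press-and-release leads to display of exemplary natural language requests,
/// while a press-and-hold leads to sampling audio data.
func classify(pressDuration: TimeInterval,
              predeterminedDuration: TimeInterval = 0.5) -> InputType {
    return pressDuration <= predeterminedDuration ? .first : .second
}
```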

Although the above description uses terms such as "first," "second," etc. to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first user input may be named a second user input, and similarly a second user input may be named a first user input, without departing from the scope of the various described embodiments. The first user input and the second user input are both user inputs, but they are not the same user input.

The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Depending on the context, the term "if" may be interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]".

Furthermore, the foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the technology and its practical applications. Others skilled in the art are thereby enabled to best utilize the technology and the various embodiments, with various modifications as are suited to the particular use contemplated.

Although the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It is to be understood that such changes and modifications are to be considered as included within the scope of the disclosure and examples as defined by the following claims.

Further, in any of the various examples discussed herein, the various aspects may be personalized for a particular user. User data, including contacts, preferences, locations, favorite media, etc., can be used to interpret voice commands and facilitate user interaction with the various devices discussed herein. The various processes discussed herein may also be modified in various other ways according to user preferences, contacts, text, usage history, profile data, age-segment data, and the like. Further, such preferences and settings may be updated over time based on user interactions (e.g., frequently issued commands, frequently selected applications, etc.). The collection and use of user data, which may be obtained from various sources, may be utilized to improve the delivery of invitational content or any other content that may be of interest to the user. The present disclosure contemplates that, in some examples, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, telephone numbers, email addresses, home addresses, or any other identifying information.

The present disclosure recognizes that the use of such personal information data in the present technology may be useful to benefit the user. For example, the personal information data may be used to deliver targeted content that is of greater interest to the user. Thus, the use of such personal information data enables calculated control of the delivered content. In addition, the present disclosure also contemplates other uses for which personal information data is beneficial to a user.

The present disclosure also contemplates that entities responsible for the collection, analysis, disclosure, transmission, storage, or other use of such personal information data will comply with established privacy policies and/or privacy practices. In particular, such entities should implement and adhere to privacy policies and practices that are recognized as meeting or exceeding industry or government requirements for maintaining the privacy and security of personal information data. For example, personal information from a user should be collected for legitimate and reasonable uses by the entity and not shared or sold outside of these legitimate uses. In addition, such collection should occur only after receiving the informed consent of the user. In addition, such entities should take any required steps to secure and protect access to such personal information data, and to ensure that others who are able to access the personal information data comply with their privacy policies and procedures. In addition, such entities may subject themselves to third party evaluations to prove compliance with widely accepted privacy policies and practices.

Regardless of the foregoing, the present disclosure also contemplates examples in which a user selectively prevents use of, or access to, personal information data. That is, the present disclosure contemplates that hardware elements and/or software elements may be provided to prevent or block access to such personal information data. For example, in the case of an ad delivery service, the techniques of the present invention may be configured to allow a user to choose to "opt in" or "opt out" of participating in the collection of personal information data during registration with the service. In another example, the user may choose not to provide location information for the targeted content delivery service. As another example, the user may choose not to provide precise location information, but to permit transmission of location area information.

Thus, while this disclosure broadly covers the use of personal information data to implement one or more of the various disclosed examples, this disclosure also contemplates that the various examples may also be implemented without the need to access such personal information data. That is, the various examples of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content may be selected and delivered to a user by inferring preferences based on non-personal information data or an absolute minimum of personal information (e.g., content requested by a device associated with the user, other non-personal information available to a content delivery service, or publicly available information).

A system and process for operating a digital assistant in a media environment is disclosed. In one exemplary process, a primary set of media items may be displayed. An audio input comprising a media-related request may be received. A primary user intent corresponding to the media-related request may be determined. In accordance with a determination that the primary user intent includes a user intent to narrow the primary media search query, a second primary media search query corresponding to the primary user intent can be generated. The second primary media search query may be based on the media-related request and the primary media search query. A second primary media search query may be executed to obtain a second primary set of media items. The display of the primary media item group may be replaced with the display of the second primary media item group.
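
As a rough, non-limiting illustration of the narrowing step summarized above, the following Swift sketch treats a media search query as a set of parameter name/value pairs and forms the second primary query by merging the prior query's parameters with those parsed from the narrowing request. The representation and all names (MediaSearchQuery, narrowed(by:), etc.) are assumptions made only for this example.

```swift
// Minimal sketch of narrowing a primary media search query, assuming a
// query is just a bag of parameter name/value pairs (an assumption made
// here for illustration; the disclosure does not define this representation).
struct MediaSearchQuery {
    var parameters: [String: String]

    /// Combine this (primary) query with parameters parsed from a
    /// narrowing request to form the second primary query.
    func narrowed(by requestParameters: [String: String]) -> MediaSearchQuery {
        // New parameter values win on conflict; all prior constraints are kept.
        let merged = parameters.merging(requestParameters) { _, new in new }
        return MediaSearchQuery(parameters: merged)
    }
}

// Example: "Romantic comedies" narrowed by "just the ones with Reese Witherspoon".
let primary = MediaSearchQuery(parameters: ["genre": "romantic comedy"])
let secondPrimary = primary.narrowed(by: ["actor": "Reese Witherspoon"])
print(secondPrimary.parameters)
// ["genre": "romantic comedy", "actor": "Reese Witherspoon"]
```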

1. A non-transitory computer-readable medium storing instructions for operating a digital assistant of a media system, the instructions, when executed by one or more processors, causing the one or more processors to:

displaying the primary set of media items on the display;

in response to detecting the user input, receiving an audio input comprising a media-related request in natural language speech;

determining a primary user intent corresponding to the media-related request;

determining whether the primary user intent comprises a user intent to narrow a primary media search query with the primary group of media items; and

in accordance with a determination that the primary user intent comprises a user intent to narrow the primary media search query:

generating a second primary media search query corresponding to the primary user intent based on the media-related request and the primary media search query;

executing the second primary media search query to obtain a second primary set of media items; and

replacing the display of the primary set of media items with the display of the second primary set of media items.

2. The non-transitory computer-readable medium of item 1, wherein determining whether the primary user intent comprises a user intent to narrow the primary media search query comprises:

determining whether the media-related request includes a word or phrase corresponding to a user intent to narrow the primary media search query.

3. The non-transitory computer-readable medium of item 1, wherein the second primary media search query includes one or more parameter values defined in the media-related request and one or more parameter values of the primary media search query.

4. The non-transitory computer-readable medium of item 1, wherein the second primary media search query comprises a set of parameter values, and wherein the instructions further cause the one or more processors to:

identifying a core parameter value set from the set of parameter values, the core parameter value set having fewer parameter values than the set of parameter values;

generating one or more additional media search queries based on the set of core parameter values;

executing the one or more additional media search queries to obtain one or more additional groups of media items; and

displaying the one or more additional groups of media items on the display.
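
The following Swift sketch illustrates, in a non-normative way, one reading of item 4 above: score the parameters of the second primary query, keep the highest-scoring ones as the core parameter value set, and form additional queries by pairing the core set with other candidate attributes. The salience weights and candidate attributes are assumptions introduced for this example.

```swift
// Illustrative sketch of item 4's "core parameter values": keep only the
// highest-weight parameters of the query, then pair the core set with other
// candidate attributes to form additional media search queries.
struct WeightedParameter {
    let name: String
    let value: String
    let salience: Double   // assumed relevance weight, 0...1
}

func coreParameters(of all: [WeightedParameter], keeping count: Int) -> [WeightedParameter] {
    return Array(all.sorted { $0.salience > $1.salience }.prefix(count))
}

func additionalQueries(core: [WeightedParameter],
                       extraCandidates: [WeightedParameter]) -> [[WeightedParameter]] {
    // One additional query per candidate attribute appended to the core set.
    return extraCandidates.map { core + [$0] }
}

let parameters = [
    WeightedParameter(name: "genre", value: "action", salience: 0.9),
    WeightedParameter(name: "actor", value: "Jason Statham", salience: 0.7),
    WeightedParameter(name: "decade", value: "2010s", salience: 0.3),
]
let core = coreParameters(of: parameters, keeping: 2)
let extras = [WeightedParameter(name: "rating", value: "PG-13", salience: 0.5)]
print(additionalQueries(core: core, extraCandidates: extras).count)   // 1
```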

5. The non-transitory computer-readable medium of item 1, wherein the instructions further cause the one or more processors to:

in accordance with a determination that the primary user intent does not include a user intent to narrow the primary media search query:

determining whether the primary user intent comprises a user intent to perform a new media search query; and

in accordance with a determination that the primary user intent comprises a user intent to perform a new media search query:

generating a third primary media search query corresponding to the primary user intent based on the media-related request;

determining whether at least one media item corresponding to the third primary media search query is available; and

in accordance with a determination that at least one media item corresponding to the third primary media search query is available:

executing the third primary media search query to obtain a third primary set of media items; and

replacing the display of the primary media item group with the display of the third primary media item group.

6. The non-transitory computer-readable medium of item 5, wherein determining whether the primary user intent comprises a user intent to perform a new media search query further comprises:

determining whether the media-related request includes a word or phrase corresponding to a parameter value of one or more media items.

7. The non-transitory computer-readable medium of item 5, wherein executing the third primary media search query comprises identifying candidate media items associated with parameter values included in one or more media reviewers' comments about the identified candidate media items.

8. The non-transitory computer-readable medium of item 5, wherein the instructions further cause the one or more processors to:

in accordance with a determination that no media items correspond to the third primary media search query:

identifying the least relevant parameter values for the third primary media search query;

determining one or more alternative parameter values based on the identified least relevant parameter values;

executing one or more alternative primary media search queries using the one or more alternative parameter values to obtain a fourth primary media item group; and

replacing the display of the primary media item group with the display of the fourth primary media item group.
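
As an illustrative reading of item 8 above, the following Swift sketch handles an empty result set by identifying the lowest-relevance parameter of the failed query and issuing alternative queries in which that parameter is replaced by substitute values. The relevance scores and the table of substitutes are assumptions made for this sketch.

```swift
// Sketch of item 8: when a query returns nothing, drop its least relevant
// parameter and retry with alternative values for that parameter.
struct Parameter {
    let name: String
    let value: String
    let relevance: Double
}

func alternativeQueries(for query: [Parameter],
                        alternatives: [String: [String]]) -> [[Parameter]] {
    guard let weakest = query.min(by: { $0.relevance < $1.relevance }) else { return [] }
    let rest = query.filter { $0.name != weakest.name }
    // One alternative query per substitute value of the least relevant parameter.
    return (alternatives[weakest.name] ?? []).map { substitute in
        rest + [Parameter(name: weakest.name, value: substitute, relevance: weakest.relevance)]
    }
}

let failedQuery = [
    Parameter(name: "actor", value: "Jackie Chan", relevance: 0.9),
    Parameter(name: "decade", value: "1950s", relevance: 0.2),   // least relevant
]
let retries = alternativeQueries(for: failedQuery,
                                 alternatives: ["decade": ["1980s", "1990s"]])
print(retries.count)   // 2 alternative primary media search queries
```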

9. The non-transitory computer-readable medium of item 5, wherein the instructions further cause the one or more processors to:

in accordance with a determination that the primary user intent does not include a user intent to narrow the primary media search query:

determining one or more secondary user intents based on the primary user intent and one or more previous user intents corresponding to one or more previous media-related requests received prior to the media-related request;

generating one or more secondary media search queries corresponding to the one or more secondary user intents;

executing the one or more secondary media search queries to obtain one or more secondary media item groups; and

displaying the one or more secondary media item groups on the display.

10. The non-transitory computer-readable medium of item 9, wherein the instructions further cause the one or more processors to:

determining one or more combinations of the primary user intent and the one or more previous user intents, wherein each combination of the one or more combinations is associated with at least one media item, and wherein the one or more secondary user intents include the one or more combinations.

11. The non-transitory computer-readable medium of item 9, wherein the instructions further cause the one or more processors to:

receiving a media search history from a second electronic device, wherein the one or more secondary user intents are generated based on the media search history received from the second electronic device.

12. The non-transitory computer-readable medium of item 9, wherein:

upon receiving the audio input, a plurality of texts is displayed on the display;

the plurality of texts is associated with a plurality of media items displayed on the display upon receiving the audio input; and

the one or more secondary user intents are generated based on the displayed plurality of texts.

13. The non-transitory computer-readable medium of item 9, wherein the instructions further cause the one or more processors to:

determining a ranking score for each of the one or more secondary user intents, wherein the one or more secondary media item groups are displayed according to the ranking score for each of the one or more secondary user intents.

14. The non-transitory computer-readable medium of item 13, wherein the ranking score for each of the one or more secondary user intents is based on the times at which the media-related request and the one or more previous media-related requests were each received.
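
One possible concrete reading of items 13 and 14 above is sketched below in Swift: each secondary user intent carries the time of the request it was derived from, and a simple exponential decay gives more recently received requests a higher ranking score. The decay function is an assumption chosen only to make the idea concrete.

```swift
import Foundation

// Sketch of items 13-14: rank secondary user intents so that intents derived
// from more recently received requests score higher, and display the
// corresponding groups of media items in that order.
struct SecondaryIntent {
    let description: String
    let requestReceivedAt: Date    // time of the request this intent came from
}

func rankingScore(for intent: SecondaryIntent, now: Date = Date()) -> Double {
    let ageInMinutes = now.timeIntervalSince(intent.requestReceivedAt) / 60
    return exp(-ageInMinutes / 10)   // newer requests decay less
}

let intents = [
    SecondaryIntent(description: "action movies + Jackie Chan",
                    requestReceivedAt: Date(timeIntervalSinceNow: -30 * 60)),
    SecondaryIntent(description: "romantic comedies this decade",
                    requestReceivedAt: Date(timeIntervalSinceNow: -2 * 60)),
]
let ordered = intents.sorted { rankingScore(for: $0) > rankingScore(for: $1) }
print(ordered.map(\.description))
// Rows of secondary media items would be displayed in this order.
```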

15. The non-transitory computer-readable medium of item 5, wherein the instructions further cause the one or more processors to:

in accordance with a determination that the primary user intent does not include a user intent to perform a new media search query:

determining whether the primary user intent comprises a user intent to correct a portion of the primary media search query; and

in accordance with a determination that the primary user intent comprises a user intent to correct a portion of the primary media search query:

generating a fifth primary media search query corresponding to the primary user intent based on the media-related request and the primary media search query;

executing the fifth primary media search query to obtain a fifth primary set of media items; and

replacing the display of the primary media item group with the display of the fifth primary media item group.

16. The non-transitory computer-readable medium of item 15, wherein determining whether the primary user intent includes a user intent to correct a portion of the primary media search query comprises:

determining whether a phoneme sequence representing a portion of the media-related request is substantially similar to a phoneme sequence representing a portion of a previous media-related request, the previous media-related request corresponding to the primary media search query.
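
The phoneme comparison of item 16 above could, for illustration only, be approximated with a normalized edit distance between phoneme sequences, as in the Swift sketch below. Modeling phonemes as plain strings and using a fixed similarity threshold are assumptions made for this example, not the disclosed recognizer's method.

```swift
// Sketch of item 16: decide that the user is correcting part of an earlier
// request by comparing phoneme sequences with a normalized edit distance.
func editDistance(_ a: [String], _ b: [String]) -> Int {
    if a.isEmpty { return b.count }
    if b.isEmpty { return a.count }
    var previousRow = Array(0...b.count)
    for (i, phonemeA) in a.enumerated() {
        var currentRow = [i + 1]
        for (j, phonemeB) in b.enumerated() {
            let substitution = previousRow[j] + (phonemeA == phonemeB ? 0 : 1)
            let insertion = currentRow[j] + 1
            let deletion = previousRow[j + 1] + 1
            currentRow.append(min(substitution, insertion, deletion))
        }
        previousRow = currentRow
    }
    return previousRow[b.count]
}

func isLikelyCorrection(newRequest: [String], previousRequest: [String],
                        threshold: Double = 0.3) -> Bool {
    let distance = Double(editDistance(newRequest, previousRequest))
    let length = Double(max(newRequest.count, previousRequest.count))
    return length > 0 && distance / length <= threshold
}

// "Jackie Chan" versus a misrecognized "Jackie Chen": mostly identical phonemes.
let previous = ["JH", "AE", "K", "IY", "CH", "EH", "N"]
let corrected = ["JH", "AE", "K", "IY", "CH", "AE", "N"]
print(isLikelyCorrection(newRequest: corrected, previousRequest: previous))   // true
```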

17. The non-transitory computer-readable medium of item 15, wherein generating the fifth primary media search query comprises:

identifying a group of media items associated with a portion of the primary media search query that is not to be corrected, wherein the fifth primary media search query is generated based on one or more parameter values of the group of media items associated with the portion of the primary media search query that is not to be corrected.

18. The non-transitory computer-readable medium of item 15, wherein the instructions further cause the one or more processors to:

in accordance with a determination that the primary user intent comprises a user intent to correct a portion of the primary media search query:

excluding the primary media search query from consideration when determining a secondary user intent corresponding to the media-related request.

19. The non-transitory computer-readable medium of item 15, wherein the instructions further cause the one or more processors to:

in accordance with a determination that the primary user intent does not include a user intent to correct a portion of the primary media search query:

determining whether the primary user intent comprises a user intent to change a focus of a user interface displayed on the display, wherein the user interface comprises a plurality of media items; and

in accordance with a determination that the primary user intent comprises a user intent to change a focus of a user interface displayed on the display, changing the focus of the user interface from a first media item of the plurality of media items to a second media item of the plurality of media items.

20. The non-transitory computer-readable medium of item 19, wherein determining whether the primary user intent includes a user intent to change a focus of a user interface displayed on the display comprises:

determining whether the media-related request includes a word or phrase corresponding to a user intent to change a focus of a user interface displayed on the display.

21. The non-transitory computer-readable medium of item 19, wherein the user interface includes a plurality of texts corresponding to the plurality of media items in the user interface, and wherein determining whether the primary user intent includes a user intent to change a focus of a user interface displayed on the display is based on the plurality of texts.
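
As a non-normative sketch of items 19 through 21 above, the Swift example below changes the focused media item by matching the spoken request against the text displayed for each item; a case-insensitive substring match stands in for whatever matching the system actually performs.

```swift
import Foundation

// Sketch of items 19-21: move the user interface focus to the media item
// whose displayed text is named in the request, otherwise leave it unchanged.
struct DisplayedItem {
    let title: String
}

func newFocusIndex(request: String, items: [DisplayedItem], current: Int) -> Int {
    let lowered = request.lowercased()
    for (index, item) in items.enumerated() where lowered.contains(item.title.lowercased()) {
        return index   // move focus to the first item named in the request
    }
    return current     // otherwise keep the current focus
}

let items = [DisplayedItem(title: "Wild"), DisplayedItem(title: "Legally Blonde")]
print(newFocusIndex(request: "Go to Legally Blonde", items: items, current: 0))   // 1
```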

22. The non-transitory computer-readable medium of item 1, wherein the instructions further cause the one or more processors to:

upon receiving the audio input:

determining a preliminary user intent based on the received portion of the audio input;

identifying data needed to satisfy the preliminary user intent;

determining whether the data is stored on the media system when determining the preliminary user intent; and

in accordance with a determination that the data was not stored on the media system when the preliminary user intent was determined, retrieving the data.

23. A method for operating a digital assistant of a media system, the method comprising:

at one or more electronic devices comprising memory and one or more processors:

displaying the primary set of media items on the display;

in response to detecting the user input, receiving an audio input comprising a media-related request in natural language speech;

determining a primary user intent corresponding to the media-related request;

determining whether the primary user intent comprises a user intent to narrow a primary media search query with the primary group of media items; and

in accordance with a determination that the primary user intent comprises a user intent to narrow the primary media search query:

generating a second primary media search query corresponding to the primary user intent based on the media-related request and the primary media search query;

executing the second primary media search query to obtain a second primary set of media items; and

replacing the display of the primary set of media items with the display of the second primary set of media items.

24. The method of item 23, further comprising:

in accordance with a determination that the primary user intent does not include a user intent to narrow the primary media search query:

determining whether the primary user intent comprises a user intent to perform a new media search query;

in accordance with a determination that the primary user intent comprises a user intent to perform a new media search query:

generating a third primary media search query corresponding to the primary user intent based on the media-related request;

determining whether at least one media item corresponding to the third primary media search query is available; and

in accordance with a determination that at least one media item corresponding to the third primary media search query is available:

executing the third primary media search query to obtain a third primary set of media items; and

replacing the display of the primary media item group with the display of the third primary media item group.

25. The method of item 24, further comprising:

in accordance with a determination that the primary user intent does not include a user intent to narrow the primary media search query:

determining one or more secondary user intents based on the primary user intent and one or more previous user intents corresponding to one or more previous media-related requests received prior to the media-related request;

generating one or more secondary media search queries corresponding to the one or more secondary user intents;

executing the one or more secondary media search queries to obtain one or more secondary media item groups; and

displaying the one or more secondary media item groups on the display.

26. The method of item 25, further comprising:

determining one or more combinations of the primary user intent and the one or more previous user intents, wherein each combination of the one or more combinations is associated with at least one media item, and wherein the one or more secondary user intents include the one or more combinations.

27. The method of item 24, further comprising:

in accordance with a determination that the primary user intent does not include a user intent to perform a new media search query:

determining whether the primary user intent comprises a user intent to correct a portion of the primary media search query;

in accordance with a determination that the primary user intent comprises a user intent to correct a portion of the primary media search query:

generating a fifth primary media search query corresponding to the primary user intent based on the media-related request and the primary media search query;

executing the fifth primary media search query to obtain a fifth primary set of media items; and

replacing the display of the primary media item group with the display of the fifth primary media item group.

28. The method of item 27, further comprising:

in accordance with a determination that the primary user intent does not include a user intent to correct a portion of the primary media search query:

determining whether the primary user intent comprises a user intent to change a focus of a user interface displayed on the display, wherein the user interface comprises a plurality of media items; and

in accordance with a determination that the primary user intent comprises a user intent to change a focus of a user interface displayed on the display, changing the focus of the user interface from a first media item of the plurality of media items to a second media item of the plurality of media items.

29. An electronic device for operating a digital assistant for a media system, the device comprising:

one or more processors;

a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to:

displaying the primary set of media items on the display;

in response to detecting the user input, receiving an audio input comprising a media-related request in natural language speech;

determining a primary user intent corresponding to the media-related request;

determining whether the primary user intent comprises a user intent to narrow a primary media search query with the primary group of media items;

in accordance with a determination that the primary user intent comprises a user intent to narrow the primary media search query:

generating a second primary media search query corresponding to the primary user intent based on the media-related request and the primary media search query;

executing the second primary media search query to obtain a second primary set of media items; and

replacing the display of the primary set of media items with the display of the second primary set of media items.

An intelligent automated assistant for media search and playback.

This patent application claims priority to U.S. provisional patent application serial No. 62/215,575, entitled "Intelligent Automated Assistant for Media Search and Playback," filed on September 8, 2015, which is hereby incorporated by reference in its entirety for all purposes.

This patent application is related to the following co-pending patent applications: U.S. non-provisional patent application serial No. ________, "Intelligent Automated Assistant in a Media Environment," filed on December 8, 2015 (attorney docket No. 106842130800 (P25817US1)); U.S. non-provisional patent application serial No. 14/498,503, "Intelligent Automated Assistant for TV User Interactions" (attorney docket No. 106842065100 (P18133US1)), filed on September 26, 2014; and U.S. non-provisional patent application serial No. 14/498,391, "Real-time Digital Assistant Updates" (attorney docket No. 106842097900 (P22498US1)), filed on September 26, 2014, which are hereby incorporated by reference in their entirety for all purposes.

The present invention relates generally to intelligent automated assistants, and more particularly to intelligent automated assistants for media search and playback.

An intelligent automated assistant (or digital assistant) can provide an intuitive interface between a user and an electronic device. These assistants may allow users to interact with a device or system in spoken and/or textual form using natural language. For example, a user may access a service of an electronic device by providing spoken user input in a natural language form to a virtual assistant associated with the electronic device. The virtual assistant can perform natural language processing on the spoken user input to infer user intent and implement the user intent into a task. The tasks may then be performed by performing one or more functions of the electronic device, and in some examples, the relevant output may be returned to the user in a natural language form.

It may be desirable to integrate digital assistants in media environments (e.g., televisions, television set-top boxes, cable boxes, gaming devices, streaming media devices, digital video recorders, etc.) to assist users in performing tasks related to media consumption. For example, a digital assistant may be utilized to assist in searching for desired media content for consumption. However, users are often unaware of the particular media item they want to consume, and may spend considerable time browsing through media items to discover new interesting content. Furthermore, existing search interfaces may be complex and not user-friendly, which may further increase the time a user spends browsing media items before finally selecting a desired item for consumption.

A system and method for operating a digital assistant in a media environment is disclosed. In one exemplary process, the primary set of media items may be displayed on a display unit. In response to detecting the user input, an audio input may be received. The audio input may comprise a media related request in the form of natural language speech. A primary user intent corresponding to the media-related request may be determined. The process may determine whether the primary user intent includes a user intent to narrow the primary media search query corresponding to the primary media item group. In accordance with a determination that the primary user intent includes a user intent to narrow the primary media search query, a second primary media search query corresponding to the primary user intent can be generated. The second primary media search query may be based on the media-related request and the primary media search query. A second primary media search query may be executed to obtain a second primary set of media items. The display of the primary set of media items on the display unit may be replaced with the display of the second primary set of media items.

In the following description of the examples, reference is made to the accompanying drawings in which are shown, by way of illustration, specific examples that may be implemented. It is to be understood that other examples may be used and structural changes may be made without departing from the scope of the various examples.

The present disclosure relates to a system and process for operating a digital assistant in a media environment. In one exemplary process, a media search request in the form of a natural language utterance may be received. A primary user intent corresponding to the media search request may be determined. The primary set of media items may be obtained according to a primary user intent. The process may determine whether there are one or more previous user intents, where the one or more previous user intents correspond to one or more previous media search requests received prior to the media search request. In response to determining that one or more previous user intents exist, one or more secondary user intents may be determined based on the primary user intent and the one or more previous user intents. The one or more secondary user intents may be based on various other factors, such as media browsing history, related search attributes, and popular media attributes among multiple users. A plurality of secondary media item groups may be obtained, where each secondary media item group corresponds to a respective secondary user intent of the one or more secondary user intents. The retrieved primary set of media items and the plurality of secondary sets of media items may be displayed on the display unit via a user interface for selection by a user. The primary user intent and the secondary user intent may be intelligently determined, thereby increasing the probability of predicting the actual intent of the user. By providing various media items based on the primary user intent and the secondary user intent, the user may be more likely to encounter media items that are of interest to the user. This may be desirable to improve the user experience by reducing the amount of time spent browsing media items and subsequently increasing the amount of time spent enjoying the media content.
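
To make the secondary-intent derivation above concrete, the following Swift sketch combines a primary intent with each previous intent and keeps only the combinations for which at least one media item exists. Intents are modeled as attribute dictionaries and the existence check is stubbed; both are assumptions made for this illustration.

```swift
// Sketch of deriving secondary user intents by combining the primary intent
// with previous intents, keeping only combinations that have media results.
typealias Intent = [String: String]

func secondaryIntents(primary: Intent,
                      previous: [Intent],
                      hasResults: (Intent) -> Bool) -> [Intent] {
    // Each combination of the primary intent with a previous intent is kept
    // only if at least one media item satisfies the merged constraints.
    return previous
        .map { primary.merging($0) { primaryValue, _ in primaryValue } }
        .filter(hasResults)
}

let primaryIntent: Intent = ["actor": "Reese Witherspoon"]
let previousIntents: [Intent] = [["genre": "comedy"], ["genre": "documentary"]]
let combined = secondaryIntents(primary: primaryIntent, previous: previousIntents) { candidate in
    candidate["genre"] != "documentary"   // stand-in for a real media lookup
}
print(combined)   // [["actor": "Reese Witherspoon", "genre": "comedy"]]
```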

1. System and environment

Fig. 12 illustrates an exemplary system 100-1 for operating a digital assistant, according to various examples. The terms "digital assistant," "virtual assistant," "intelligent automated assistant," or "automatic digital assistant" may refer to any information processing system for interpreting natural language input in spoken and/or textual form to infer user intent and perform actions based on the inferred user intent. For example, to take action in accordance with the inferred user intent, the system may perform one or more of the following: identifying a task flow utilizing steps and parameters designed to achieve the inferred user intent; entering specific requirements from the inferred user intent into the task flow; executing the task flow by calling a program, method, service, Application Programming Interface (API), or the like; and generating an output response to the user in audible (e.g., speech) and/or visual form.
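
The inference-to-execution pipeline just described can be caricatured in a few lines of Swift: map an utterance to an intent, execute a task flow for that intent, and return a natural-language response. The toy prefix-based parser and all names used here are assumptions made for this sketch, not the system's actual natural language processing.

```swift
// Rough sketch of the intent-to-task-flow pipeline described above.
enum UserIntent {
    case searchMedia(query: String)
    case unknown
}

func inferIntent(from utterance: String) -> UserIntent {
    // Toy natural-language step: real systems use statistical models.
    if utterance.lowercased().hasPrefix("find") {
        return .searchMedia(query: String(utterance.dropFirst(5)))
    }
    return .unknown
}

func execute(_ intent: UserIntent) -> String {
    switch intent {
    case .searchMedia(let query):
        return "Here are media items matching \"\(query)\"."
    case .unknown:
        return "Sorry, I did not understand that."
    }
}

print(execute(inferIntent(from: "Find movies starring Reese Witherspoon")))
```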

In particular, the digital assistant may be capable of accepting user requests in the form of, at least in part, natural language commands, requests, statements, narratives, and/or inquiries. Typically, the user request may seek either an informational answer from the digital assistant or performance of a task by the digital assistant. A satisfactory response to a user request may be to provide the requested informational answer, to perform the requested task, or a combination of both. For example, a user may ask the digital assistant a question such as "What time is it in Paris?" The digital assistant may retrieve the requested information and answer, "It is 4:00 PM in Paris." The user may also request performance of a task, for example, "Find me movies starring Reese Witherspoon." In response, the digital assistant can execute the requested search query and display relevant movie titles for the user to select from. During the performance of a requested task, the digital assistant can sometimes interact with the user over a long period of time during a continuous conversation involving multiple exchanges of information. There are many other ways to interact with a digital assistant to request information or the performance of various tasks. In addition to providing textual responses and taking programmed actions, the digital assistant may also provide other visual or audio forms of response, such as responses in the form of speech, alerts, music, images, videos, animations, and the like. Further, as discussed herein, an exemplary digital assistant can control playback of media content (e.g., on a television set-top box) and display the media content or other information on a display unit (e.g., a television).

As shown in fig. 12, in some examples, the digital assistant may be implemented according to a client-server model. The digital assistant may include a client-side portion 102-1 (hereinafter "DA client 102-1") executing on the media device 104-1, and a server-side portion 106-1 (hereinafter "DA server 106-1") executing on the server system 108-1. Further, in some examples, the client-side portion may also execute on the user device 122-1. The DA client 102-1 may communicate with the DA server 106-1 over one or more networks 110-1. The DA client 102-1 may provide client-side functionality, such as user-oriented input and output processing, as well as communication with the DA server 106-1. The DA server 106-1 may provide server-side functionality for any number of DA clients 102-1 each residing on a respective device (e.g., media device 104-1 and user device 122-1).

Media device 104-1 may be any suitable electronic device configured to manage and control media content. For example, media device 104-1 may comprise a television set-top box, such as a cable box device, a satellite box device, a video player device, a video streaming device, a digital video recorder, a gaming system, a DVD player, a Blu-ray Disc™ player, a combination of such devices, and the like. As shown in FIG. 12, media device 104-1 may be part of a media system 128-1. In addition to the media device 104-1, the media system 128-1 may include a remote control 124-1 and a display unit 126-1. Media device 104-1 may display media content on display unit 126-1. The display unit 126-1 may be any type of display, such as a television display, monitor, projector, etc. In some examples, media device 104-1 may be connected to an audio system (e.g., an audio receiver) and speakers (not shown) that may be integrated with or separate from display unit 126-1. In other examples, display unit 126-1 and media device 104-1 may be incorporated together in a single device, such as a smart television with advanced processing capabilities and network connection capabilities. In such examples, the functionality of media device 104-1 may be performed as an application on a combined device.

In some examples, media device 104-1 may function as a media control center for multiple types and sources of media content. For example, media device 104-1 may facilitate user access to live television (e.g., over-the-air, satellite, or cable television). Thus, the media device 104-1 may include a cable tuner or a satellite tuner, among others. In some examples, media device 104-1 may also record television programs for later time-shifted viewing. In other examples, media device 104-1 may provide access to one or more streaming media services, such as access to cable-delivered video-on-demand programming, video, and music, and internet-delivered television programming, video, and music (e.g., from various free, paid, and subscription-based streaming services). In other examples, media device 104-1 may facilitate playback or display of media content from any other source, such as displaying photos from a mobile user device, playing videos from a coupled storage device, playing music from a coupled music player, and so forth. Media device 104-1 may also include various other combinations of the media control features discussed herein as desired. The media device 104-1 is described in detail below with reference to fig. 13.

The user device 122-1 may be any personal electronic device, such as a mobile phone (e.g., a smartphone), a tablet, a portable media player, a desktop computer, a laptop computer, a PDA, a wearable electronic device (e.g., digital glasses, a wristband, a watch, a brooch, an armband, etc.), and so forth. The user equipment 122-1 is described in detail below with reference to fig. 14.

In some examples, a user may interact with media device 104-1 through user device 122-1, remote control 124-1, or an interface element (e.g., a button, microphone, camera, joystick, etc.) integrated with media device 104-1. For example, voice input including a media-related query or command for a digital assistant may be received at user device 122-1 and/or remote control 124-1 and may be used to cause a media-related task to be performed on media device 104-1. Likewise, haptic commands for controlling media on media device 104-1 may be received at user device 122-1 and/or remote control 124-1 (as well as other devices not shown). Thus, the various functions of media device 104-1 may be controlled in various ways, giving the user a variety of options for controlling media content from multiple devices.

Examples of one or more communication networks 110-1 may include a Local Area Network (LAN) and a Wide Area Network (WAN), such as the Internet. The one or more communication networks 110-1 may be implemented using any known network protocol, including various wired or wireless protocols, such as, for example, Ethernet, Universal Serial Bus (USB), firewire, Global System for Mobile communications (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wi-Fi, Voice over Internet protocol (VoIP), Wi-MAX, or any other suitable communication protocol.

DA server 106-1 may include a client-facing input/output (I/O) interface 112-1, one or more processing modules 114-1, data and models 116-1, and an I/O interface 118-1 to external services. The client-facing I/O interface 112-1 may facilitate client-facing input and output processing of the DA server 106-1. The one or more processing modules 114-1 may utilize the data and models 116-1 to process speech input and determine user intent based on natural language input. Further, the one or more processing modules 114-1 may perform tasks based on the inferred user intent. In some examples, DA server 106-1 may communicate with external services 120-1 (such as, for example, telephone services, calendar services, information services, messaging services, navigation services, television programming services, streaming media services, media search services, etc.) over one or more networks 110-1 to complete tasks or obtain information. I/O interface 118-1 to external services may facilitate such communications.

The server system 108-1 may be implemented on one or more stand-alone data processing devices of a computer or a distributed network. In some examples, server system 108-1 may also employ various virtual devices and/or services of a third party service provider (e.g., a third party cloud service provider) to provide potential computing resources and/or infrastructure resources of server system 108-1.

While the digital assistant shown in fig. 12 may include both a client-side portion (e.g., DA client 102-1) and a server-side portion (e.g., DA server 106-1), in some examples, the functionality of the digital assistant may be implemented as a standalone application installed on a user device or a media device. Moreover, the division of functionality between the client portion and the server portion of the digital assistant may vary in different implementations. For example, in some examples, the DA client executing on the user device 122-1 or the media device 104-1 may be a thin client that provides only user-oriented input and output processing functions and delegates all other functions of the digital assistant to a backend server.

2. Media system

Fig. 13 illustrates a block diagram of a media system 128-1, in accordance with various examples. The media system 128-1 may include a media device 104-1 communicatively coupled to a display unit 126-1, a remote control 124-1, and speakers 268-1. Media device 104-1 may receive user input via remote control 124-1. Media content from media device 104-1 may be displayed on display unit 126-1.

In this example, as shown in FIG. 13, media device 104-1 may include a memory interface 202-1, one or more processors 204-1, and a peripheral interface 206-1. The various components in media device 104-1 may be coupled together by one or more communication buses or signal lines. Media device 104-1 may also include various subsystems and peripherals coupled to peripheral interface 206-1. The subsystems and peripheral devices may gather information and/or facilitate various functions of media device 104-1.

For example, media device 104-1 may include a communication subsystem 224-1. Communication functions may be facilitated by one or more wired and/or wireless communication subsystems 224-1, which may include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters.

In some examples, media device 104-1 may also include an I/O subsystem 240-1 coupled to peripheral interface 206-1. I/O subsystem 240-1 may include an audio/video output controller 270-1. Audio/video output controller 270-1 may be coupled to display unit 126-1 and speaker 268-1, or may be capable of otherwise providing audio and video output (e.g., via audio/video ports, wireless transmission, etc.). I/O subsystem 240-1 may also include a remote controller 242-1. The remote controller 242-1 is communicatively coupled to the remote control 124-1 (e.g., via a wired connection, Bluetooth, Wi-Fi, etc.).

The remote control 124-1 may include a microphone 272-1 for capturing audio data (e.g., voice input from a user), a button 274-1 for capturing tactile input, and a transceiver 276-1 for facilitating communication with the media device 104-1 via the remote controller 242-1. Further, remote control 124-1 may include a touch-sensitive surface 278-1, a sensor, or a group of sensors that accept input from a user based on tactile sensation and/or tactile contact. The touch-sensitive surface 278-1 and the remote controller 242-1 may detect contact (and any movement or interruption of the contact) on the touch-sensitive surface 278-1 and convert the detected contact (e.g., a gesture, a contact action, etc.) into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on the display unit 126-1. In some examples, the remote control 124-1 may also include other input mechanisms, such as a keyboard, joystick, or the like. In some examples, the remote control 124-1 may also include output mechanisms, such as lights, a display, a speaker, and the like. Input received at the remote control 124-1 (e.g., user speech, button presses, contact actions, etc.) may be communicated to the media device 104-1 via the remote controller 242-1. I/O subsystem 240-1 may also include one or more other input controllers 244-1. One or more other input controllers 244-1 may be coupled to other input/control devices 248-1, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointing devices (such as a stylus).

In some examples, media device 104-1 may also include a memory interface 202-1 coupled to memory 250-1. Memory 250-1 may include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device; portable computer diskette (magnetic); random Access Memory (RAM) (magnetic); read Only Memory (ROM) (magnetic); erasable programmable read-only memory (EPROM) (magnetic); portable optical disks such as CD, CD-R, CD-RW, DVD-R, or DVD-RW; or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like. In some examples, the non-transitory computer-readable storage medium of memory 250-1 may be used to store instructions (e.g., for performing part or all of the various processes described herein) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, instructions (e.g., for performing some or all of the various processes described herein) may be stored on a non-transitory computer-readable storage medium of server system 108-1, or may be divided between the non-transitory computer-readable storage medium of memory 250-1 and the non-transitory computer-readable storage medium of server system 108-1. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

In some examples, memory 250-1 may store an operating system 252-1, a communication module 254-1, a Graphical User Interface (GUI) module 256-1, a device built-in media module 258-1, a device external media module 260-1, and an application module 262-1. Operating system 252-1 may include instructions for handling basic system services and for performing hardware related tasks. Communication module 254-1 may facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. The graphical user interface module 256-1 may facilitate graphical user interface processing. The device built-in media module 258-1 may facilitate storage and playback of media content stored locally on the media device 104-1. The device external media module 260-1 may facilitate streaming playback or download of media content obtained from an external source (e.g., on a remote server, on the user device 122-1, etc.). In addition, the device external media module 260-1 may facilitate reception of broadcast and cable content (e.g., channel tuning). The application module 262-1 may facilitate various functions of media-related applications, such as web browsing, media processing, gaming, and/or other processes and functions.

As described herein, the memory 250-1 may also store client-side digital assistant instructions (e.g., in the digital assistant client module 264-1) and various user data 266-1 (e.g., user-specific vocabulary data, preference data, and/or other data such as a user's media search history, media watch lists, recently watched lists, favorite media items, etc.), for example, to provide client-side functionality of the digital assistant. User data 266-1 may also be used to perform speech recognition to support a digital assistant or for any other application.

In various examples, digital assistant client module 264-1 may be capable of accepting sound input (e.g., speech input), text input, touch input, and/or gesture input through various user interfaces of media device 104-1 (e.g., I/O subsystem 240-1, etc.). The digital assistant client module 264-1 can also provide output in audio (e.g., speech output), visual, and/or tactile forms. For example, the output may be provided as voice, sound, alarm, text message, menu, graphic, video, animation, vibration, and/or a combination of two or more of the foregoing. During operation, digital assistant client module 264-1 can use communication subsystem 224-1 to communicate with a digital assistant server (e.g., DA server 106-1).

In some examples, the digital assistant client module 264-1 may utilize various subsystems and peripherals to collect additional information related to the media device 104-1 from the surroundings of the media device 104-1 to establish a context associated with the user, current user interaction, and/or current user input. Such context may also include information from other devices, such as information from user device 122-1. In some examples, the digital assistant client module 264-1 may provide the contextual information or a subset thereof along with the user input to the digital assistant server to help infer the user's intent. The digital assistant can also use the contextual information to determine how to prepare and deliver the output to the user. The contextual information may also be used by the media device 104-1 or the server system 108-1 to support accurate speech recognition.

In some examples, contextual information accompanying the user input may include sensor information such as lighting, ambient noise, ambient temperature, distance to another object, and the like. The context information may also include information associated with the physical state of the media device 104-1 (e.g., device location, device temperature, power level, etc.) or the software state of the media device 104-1 (e.g., running process, installed applications, past and current network activities, background services, error logs, resource usage, etc.). The contextual information may also include information received from the user (e.g., voice input), information requested by the user, and information presented to the user (e.g., information currently or previously displayed by the media device). The contextual information may also include information associated with the state of the connected device or other devices associated with the user (e.g., content displayed on user device 122-1, playable content on user device 122-1, etc.). Any of these types of contextual information may be provided to DA server 106-1 (or for media device 104-1 itself) as contextual information related to user input.
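
Purely for illustration, the kinds of contextual information listed above could be grouped into a single structure that accompanies a user request, as in the Swift sketch below; the field names and types are assumptions made for this example rather than the disclosure's schema.

```swift
import Foundation

// Illustrative grouping of contextual information that could accompany a
// user request sent to the digital assistant server.
struct RequestContext {
    // Sensor information
    var ambientNoiseLevel: Double?
    // Physical and software state of the media device
    var devicePowerLevel: Double?
    var runningApplications: [String] = []
    // What the user currently sees on the display unit
    var displayedMediaTitles: [String] = []
    // State of connected devices associated with the user
    var connectedDeviceContent: [String: [String]] = [:]
    var capturedAt = Date()
}

let context = RequestContext(
    ambientNoiseLevel: 0.2,
    devicePowerLevel: 0.9,
    runningApplications: ["media-player"],
    displayedMediaTitles: ["Legally Blonde", "Wild"],
    connectedDeviceContent: ["user-device": ["photo-roll"]]
)
print(context.displayedMediaTitles.count)   // 2 on-screen items available as context
```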

In some examples, digital assistant client module 264-1 may selectively provide information (e.g., user data 266-1) stored on media device 104-1 in response to a request from DA server 106-1. Additionally or alternatively, this information may be used on the media device 104-1 itself to perform speech recognition and/or digital assistant functions. The digital assistant client module 264-1 may also elicit additional input from the user via a natural language dialog or other user interface upon request by the DA server 106-1. The digital assistant client module 264-1 may transmit additional input to the DA server 106-1 to assist the DA server 106-1 in intent inference and/or to satisfy the user intent expressed in the user request.

In various examples, memory 250-1 may include additional instructions or fewer instructions. Further, various functions of the media device 104-1 may be implemented in hardware and/or firmware, including in one or more signal processing circuits and/or application specific integrated circuits.

3. User equipment

Fig. 14 illustrates a block diagram of an exemplary user device 122-1, in accordance with various examples. As shown, the user device 122-1 may include a memory interface 302-1, one or more processors 304-1, and a peripheral interface 306-1. The various components in user device 122-1 may be coupled together by one or more communication buses or signal lines. User device 122-1 may also include various sensors, subsystems, and peripherals coupled to peripheral interface 306-1. The sensors, subsystems, and peripherals may collect information and/or facilitate various functions of user device 122-1.

For example, the user device 122-1 may include a motion sensor 310-1, a light sensor 312-1, and a proximity sensor 314-1 coupled to the peripheral interface 306-1 to facilitate orientation, lighting, and proximity sensing functions. One or more other sensors 316-1, such as a positioning system (e.g., GPS receiver), temperature sensor, biometric sensor, gyroscope, compass, accelerometer, etc., may also be connected to the peripheral interface 306-1 to facilitate related functions.

In some examples, camera subsystem 320-1 and optical sensor 322-1 may be used to facilitate camera functions, such as taking pictures and recording video clips. Communication functions may be facilitated by one or more wired and/or wireless communication subsystems 324-1, which may include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters. An audio subsystem 326-1 may be coupled to speaker 328-1 and microphone 330-1 to facilitate voice-enabled functions such as voice recognition, voice replication, digital recording, and telephony functions.

In some examples, the user device 122-1 may also include an I/O subsystem 340-1 coupled to the peripheral interface 306-1. The I/O subsystem 340-1 may include a touch screen controller 342-1 and/or one or more other input controllers 344-1. The touch screen controller 342-1 can be coupled to the touch screen 346-1. The touch screen 346-1 and touch screen controller 342-1 can, for example, detect contact and movement or breaks thereof using any of a number of touch sensitivity technologies, such as capacitive, resistive, infrared, surface acoustic wave, proximity sensor arrays, and the like. One or more other input controllers 344-1 may be coupled to other input/control devices 348-1, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointer devices (such as a stylus).

In some examples, the user device 122-1 may also include a memory interface 302-1 coupled to the memory 350-1. Memory 350-1 may include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device; portable computer diskette (magnetic); random Access Memory (RAM) (magnetic); read Only Memory (ROM) (magnetic); erasable programmable read-only memory (EPROM) (magnetic); portable optical disks such as CD, CD-R, CD-RW, DVD-R, or DVD-RW; or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like. In some examples, the non-transitory computer-readable storage medium of memory 350-1 may be used to store instructions (e.g., for performing part or all of the various processes described herein) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, instructions (e.g., for performing some or all of the various processes described herein) may be stored on a non-transitory computer-readable storage medium of server system 108-1, or may be divided between the non-transitory computer-readable storage medium of memory 350-1 and the non-transitory computer-readable storage medium of server system 108-1. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

In some examples, memory 350-1 may store an operating system 352-1, a communications module 354-1, a Graphical User Interface (GUI) module 356-1, a sensor processing module 358-1, a telephony module 360-1, and an application module 362-1. The operating system 352-1 may include instructions for handling basic system services and for performing hardware related tasks. The communication module 354-1 may facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. The graphical user interface module 356-1 may facilitate graphical user interface processing. Sensor processing module 358-1 may facilitate sensor-related processing and functions. The phone module 360-1 may facilitate phone-related processes and functions. Application modules 362-1 may facilitate various functions of user applications such as electronic messaging, web browsing, media processing, navigation, imaging, and/or other processes and functions.

As described herein, the memory 350-1 may also store client-side digital assistant instructions (e.g., stored in the digital assistant client module 364-1) as well as various user data 366-1 (e.g., user-specific vocabulary data, preference data, and/or other data such as a user's electronic address book, to-do list, shopping list, television program collection, etc.), for example, to provide client-side functionality of the digital assistant. User data 366-1 may also be used to perform speech recognition to support a digital assistant or for any other application. Digital assistant client module 364-1 and user data 366-1 may be similar or identical to digital assistant client module 264-1 and user data 266-1, respectively, as described above with reference to fig. 13.

In various examples, memory 350-1 may include additional instructions or fewer instructions. Further, various functions of the user device 122-1 may be performed in hardware and/or firmware, including in one or more signal processing and/or application specific integrated circuits.

In some examples, user device 122-1 may be configured to control various aspects of media device 104-1. For example, user device 122-1 may function as a remote control (e.g., remote control 124-1). User input received via user device 122-1 may be transmitted to media device 104-1 (e.g., using a communication subsystem) to cause media device 104-1 to perform corresponding actions. Further, user device 122-1 may be configured to receive instructions from media device 104-1. For example, media device 104-1 may hand over the task to user device 122-1 to execute and cause an object (e.g., a selectable affordance) to be displayed on user device 122-1.

It should be understood that the system 100-1 and the media system 128-1 are not limited to the components and configurations shown in fig. 12 and 13, and that the user device 122-1, the media device 104-1, and the remote control 124-1 are likewise not limited to the components and configurations shown in fig. 13 and 14. In various configurations according to various examples, system 100-1, media system 128-1, user device 122-1, media device 104-1, and remote control 124-1 may all include fewer components, or include other components.

4. Digital assistant system

Fig. 15A illustrates a block diagram of a digital assistant system 400-1, according to various examples. In some examples, the digital assistant system 400-1 may be implemented on a stand-alone computer system. In some examples, the digital assistant system 400-1 may be distributed across multiple computers. In some examples, some modules and functionality of a digital assistant can be divided into a server portion and a client portion, where the client portion is located on one or more user devices (e.g., device 104-1 or device 122-1) and communicates with the server portion (e.g., server system 108-1) over one or more networks, for example as shown in fig. 12. In some examples, digital assistant system 400-1 may be a specific implementation of server system 108-1 (and/or DA server 106-1) shown in fig. 12. It should be noted that the digital assistant system 400-1 is only one example of a digital assistant system, and that the digital assistant system 400-1 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or layout of components. The various components shown in fig. 15A may be implemented in hardware, software instructions for execution by one or more processors, firmware (including one or more signal processing integrated circuits and/or application specific integrated circuits), or a combination thereof.

The digital assistant system 400-1 may include a memory 402-1, one or more processors 404-1, an I/O interface 406-1, and a network communication interface 408-1. These components may communicate with each other via one or more communication buses or signal lines 410-1.

In some examples, the memory 402-1 may include a non-transitory computer-readable medium, such as high-speed random access memory and/or a non-volatile computer-readable storage medium (e.g., one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices).

In some examples, the I/O interface 406-1 may couple I/O devices 416-1 of the digital assistant system 400-1, such as a display, a keyboard, a touch screen, and a microphone, to the user interface module 422-1. The I/O interface 406-1 in conjunction with the user interface module 422-1 may receive user inputs (e.g., voice inputs, keyboard inputs, touch inputs, etc.) and process those inputs accordingly. In some examples, such as when the digital assistant is implemented on a standalone user device, the digital assistant system 400-1 may include any of the components and I/O communication interfaces described with respect to the device 104-1 or the device 122-1 in fig. 13 or 14, respectively. In some examples, digital assistant system 400-1 may represent a server portion of a digital assistant implementation and may interact with a user through a client-side portion located on a client device (e.g., device 104-1 or device 122-1).

In some examples, the network communication interface 408-1 may include one or more wired communication ports 412-1 and/or wireless transmit and receive circuitry 414-1. The one or more wired communication ports may receive and transmit communication signals via one or more wired interfaces, such as Ethernet, Universal Serial Bus (USB), FireWire, and the like. The wireless circuitry 414-1 may receive RF signals and/or optical signals from, and transmit RF signals and/or optical signals to, communication networks and other communication devices. The wireless communication may use any of a variety of communication standards, protocols, and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol. Network communication interface 408-1 may enable communication between digital assistant system 400-1 and other devices via networks such as the internet, an intranet, and/or a wireless network such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN).

In some examples, memory 402-1 or the computer-readable storage medium of memory 402-1 may store programs, modules, instructions, and data structures that include all or a subset of the following: an operating system 418-1, a communication module 420-1, a user interface module 422-1, one or more application programs 424-1, and a digital assistant module 426-1. In particular, memory 402-1 or the computer-readable storage medium of memory 402-1 may store instructions for performing process 800-1 described below. The one or more processors 404-1 may execute the programs, modules, and instructions and may read data from, or write data to, the data structures.

The operating system 418-1 (e.g., Darwin, RTXC, LINUX, UNIX, iOS, OS X, WINDOWS, or an embedded operating system such as VxWorks) may include various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and for facilitating communication between various hardware, firmware, and software components.

The communication module 420-1 may facilitate communications between the digital assistant system 400-1 and other devices over the network communication interface 408-1. For example, the communication module 420-1 may communicate with the communication subsystem (e.g., 224-1, 324-1) of an electronic device (e.g., 104-1, 122-1). The communication module 420-1 may also include various components for processing data received by the wireless circuitry 414-1 and/or the wired communication port 412-1.

User interface module 422-1 may receive commands and/or input from a user (e.g., from a keyboard, touch screen, pointing device, controller, and/or microphone) via the I/O interface 406-1 and generate user interface objects on a display. The user interface module 422-1 may also prepare outputs (e.g., voice, sound, animation, text, icons, vibrations, haptic feedback, lighting, etc.) and deliver them to the user via the I/O interface 406-1 (e.g., through a display, audio channels, speakers, a touchpad, etc.).

The application programs 424-1 may include programs and/or modules configured to be executed by the one or more processors 404-1. For example, if the digital assistant system 400-1 is implemented on a standalone user device, the application programs 424-1 may include user application programs, such as games, calendar application programs, navigation application programs, or email application programs. If the digital assistant system 400-1 is implemented on a server, the application programs 424-1 may include, for example, a resource management application, a diagnostic application, or a scheduling application.

Memory 402-1 may also store a digital assistant module 426-1 (or a server portion of a digital assistant). In some examples, digital assistant module 426-1 may include the following sub-modules, or a subset or superset thereof: an I/O processing module 428-1, a speech-to-text (STT) processing module 430-1, a natural language processing module 432-1, a dialog flow processing module 434-1, a task flow processing module 436-1, a service processing module 438-1, and a speech synthesis module 440-1. Each of these modules may have access to one or more of the following systems or data and models of the digital assistant module 426-1, or a subset or superset thereof: ontology 460-1, vocabulary index 444-1, user data 448-1, task flow model 454-1, service model 456-1, and Automatic Speech Recognition (ASR) system 431-1.

In some examples, using the processing modules, data, and models implemented in digital assistant module 426-1, the digital assistant may perform at least some of the following operations: converting speech input to text; identifying a user's intent expressed in a natural language input received from the user; actively eliciting and obtaining the information needed to fully infer the user's intent (e.g., by disambiguating words, names, intents, etc.); determining a task flow for fulfilling the inferred intent; and executing the task flow to fulfill the inferred intent.

In some examples, as shown in FIG. 15B, the I/O processing module 428-1 may interact with a user through the I/O devices 416-1 in FIG. 15A, or interact with an electronic device (e.g., device 104-1 or device 122-1) through the network communication interface 408-1 in FIG. 15A, to obtain user input (e.g., voice input) and provide a response to the user input (e.g., as voice output). The I/O processing module 428-1 may optionally obtain contextual information associated with the user input from the electronic device upon or shortly after receiving the user input. The contextual information may include user-specific data, vocabulary, and/or preferences related to the user input. In some examples, the contextual information also includes software and hardware states of the electronic device at the time the user request is received, and/or information related to the user's surroundings at the time the user request is received. In some examples, the I/O processing module 428-1 may also send follow-up questions to the user related to the user request and receive answers from the user. When a user request is received by the I/O processing module 428-1 and the user request includes speech input, the I/O processing module 428-1 may forward the speech input to the STT processing module 430-1 (or speech recognizer) for speech-to-text conversion.

STT processing module 430-1 can include one or more ASR systems (e.g., ASR system 431-1). The one or more ASR systems may process the speech input received through I/O processing module 428-1 to generate recognition results. Each ASR system may include a front-end speech preprocessor, which can extract representative features from the speech input. For example, the front-end speech preprocessor may perform a Fourier transform on the speech input to extract spectral features that characterize the speech input as a sequence of representative multi-dimensional vectors. Further, each ASR system may include one or more speech recognition models (e.g., acoustic models and/or language models) and may implement one or more speech recognition engines. Examples of speech recognition models may include hidden Markov models, Gaussian mixture models, deep neural network models, n-gram language models, and other statistical models. Examples of speech recognition engines may include dynamic time warping based engines and weighted finite-state transducer (WFST) based engines. The one or more speech recognition models and the one or more speech recognition engines may be used to process the extracted representative features of the front-end speech preprocessor to produce intermediate recognition results (e.g., phonemes, phoneme strings, and sub-words) and, ultimately, text recognition results (e.g., words, word strings, or sequences of symbols). In some examples, the voice input may be processed at least in part by a third-party service or on an electronic device (e.g., device 104-1 or device 122-1) to produce the recognition result. Once STT processing module 430-1 generates a recognition result that includes a text string (e.g., a word, a sequence of words, or a sequence of symbols), the recognition result may be passed to natural language processing module 432-1 for intent inference.
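
By way of illustration only, the following Python sketch shows how a front-end speech preprocessor of the kind described above might reduce sampled audio to a sequence of representative multi-dimensional vectors using a Fourier transform; the frame sizes, band count, function names, and test signal are illustrative assumptions rather than part of this disclosure.

    # Minimal sketch of a front-end speech preprocessor, assuming 16 kHz mono
    # PCM samples in a NumPy array. Parameters are illustrative.
    import numpy as np

    def extract_spectral_features(samples, frame_len=400, hop=160, n_bins=40):
        """Slice the waveform into overlapping frames and reduce each frame's
        FFT magnitude spectrum to a fixed-size feature vector."""
        frames = []
        for start in range(0, len(samples) - frame_len + 1, hop):
            frame = samples[start:start + frame_len] * np.hanning(frame_len)
            spectrum = np.abs(np.fft.rfft(frame))
            # Pool adjacent FFT bins into n_bins bands and take log energies.
            bands = np.array_split(spectrum, n_bins)
            frames.append(np.log1p(np.array([b.sum() for b in bands])))
        return np.stack(frames) if frames else np.empty((0, n_bins))

    if __name__ == "__main__":
        t = np.linspace(0, 1, 16000, endpoint=False)
        utterance = 0.1 * np.sin(2 * np.pi * 440 * t)   # stand-in for sampled audio
        print(extract_spectral_features(utterance).shape)  # (frames, 40)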

In some examples, one or more language models of the one or more ASR systems may be configured to be biased toward media-related results. In one example, a corpus of media-related text can be used to train the one or more language models. In another example, the ASR system may be configured to favor media-related recognition results. In some examples, the one or more ASR systems may include a static language model and a dynamic language model. The static language model may be trained using a general corpus of text, while the dynamic language model may be trained using user-specific text. For example, the dynamic language model may be generated using text corresponding to previous speech input received from the user. In some examples, the one or more ASR systems may be configured to generate recognition results based on the static language model and/or the dynamic language model. Further, in some examples, the one or more ASR systems may be configured to favor recognition results corresponding to the most recently received previous speech input.
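
A minimal sketch of how recognition results might be scored against an interpolation of a static language model and a dynamic, user-specific language model is given below; the bigram models, smoothing constants, and mixing weight are hypothetical and chosen only for illustration.

    # Illustrative combination of a static and a dynamic n-gram model when
    # scoring candidate recognition results.
    from collections import defaultdict

    def make_bigram_model(corpus_sentences):
        counts, totals = defaultdict(float), defaultdict(float)
        for sentence in corpus_sentences:
            words = ["<s>"] + sentence.lower().split()
            for prev, cur in zip(words, words[1:]):
                counts[(prev, cur)] += 1.0
                totals[prev] += 1.0
        # Add-0.1 smoothing over an assumed 1000-word vocabulary.
        return lambda prev, cur: (counts[(prev, cur)] + 0.1) / (totals[prev] + 0.1 * 1000)

    def score(candidate, static_lm, dynamic_lm, dynamic_weight=0.3):
        words = ["<s>"] + candidate.lower().split()
        s = 1.0
        for prev, cur in zip(words, words[1:]):
            s *= (1 - dynamic_weight) * static_lm(prev, cur) + dynamic_weight * dynamic_lm(prev, cur)
        return s

    static_lm = make_bigram_model(["play the movie", "what is the weather"])
    dynamic_lm = make_bigram_model(["play mad men", "find mad men episodes"])  # prior user speech
    for cand in ["play mad men", "play mad man"]:
        print(cand, score(cand, static_lm, dynamic_lm))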

More details regarding the speech-to-text process are described in U.S. utility patent application Serial No. 13/236,942, entitled "Consolidating Speech Recognition Results," filed September 20, 2011, the entire disclosure of which is incorporated herein by reference.

In some examples, STT processing module 430-1 may include a vocabulary of recognizable words and/or may access the vocabulary via a phonetic alphabet conversion module 431-1. Each vocabulary word may be associated with one or more candidate pronunciations of the word represented in a speech recognition phonetic alphabet. In particular, the vocabulary of recognizable words may include words associated with multiple candidate pronunciations. For example, the vocabulary may include the word "tomato" associated with two candidate pronunciations, such as an American English pronunciation and a British English pronunciation. Further, vocabulary words may be associated with custom candidate pronunciations based on previous speech input from the user. Such custom candidate pronunciations can be stored in STT processing module 430-1 and can be associated with a particular user via a user profile on the device. In some examples, candidate pronunciations for words may be determined based on the spelling of the word and one or more linguistic and/or phonetic rules. In some examples, candidate pronunciations may be manually generated, e.g., based on known canonical pronunciations.

In some examples, candidate pronunciations may be ranked based on their prevalence. For example, one candidate pronunciation of a word may be ranked higher than another because it is the more commonly used pronunciation (e.g., among all users, for users in a particular geographic area, or for any other suitable subset of users). In some examples, the candidate pronunciations may be ranked based on whether a candidate pronunciation is a custom candidate pronunciation associated with the user. For example, custom candidate pronunciations may be ranked higher than standard candidate pronunciations. This can be useful for recognizing proper nouns having unique pronunciations that deviate from the standard pronunciation. In some examples, a candidate pronunciation may be associated with one or more speech characteristics, such as a geographic origin, nationality, or ethnicity. For example, one candidate pronunciation of a word may be associated with the United States, whereas another candidate pronunciation of the same word may be associated with Great Britain. Further, the ranking of the candidate pronunciations may be based on one or more characteristics of the user (e.g., geographic origin, nationality, ethnicity, etc.) stored in a user profile on the device. For example, it may be determined from the user profile that the user is associated with the United States. Based on the user being associated with the United States, the candidate pronunciation associated with the United States may be ranked higher than the candidate pronunciation associated with Great Britain. In some examples, one of the ranked candidate pronunciations may be selected as a predicted pronunciation (e.g., the most likely pronunciation).
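
The following sketch illustrates one way such a ranking could be realized, ordering candidate pronunciations by whether they are custom, whether their region matches the user profile, and their overall prevalence; the phoneme strings, prevalence figures, and field names are illustrative assumptions.

    # Hypothetical ranking of candidate pronunciations for a vocabulary word.
    def rank_pronunciations(candidates, user_profile):
        def key(c):
            return (
                c.get("custom", False),                         # user-specific pronunciations first
                c.get("region") == user_profile.get("region"),  # then matching user characteristics
                c.get("prevalence", 0.0),                       # then overall prevalence
            )
        return sorted(candidates, key=key, reverse=True)

    tomato = [
        {"phonemes": "t ah m ey t ow", "region": "US", "prevalence": 0.7},
        {"phonemes": "t ah m aa t ow", "region": "GB", "prevalence": 0.3},
        {"phonemes": "t ow m ah t ow", "custom": True, "prevalence": 0.01},
    ]
    ranked = rank_pronunciations(tomato, {"region": "US"})
    print(ranked[0]["phonemes"])  # predicted pronunciation = highest-ranked candidate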

Upon receiving a speech input, STT processing module 430-1 may be used to determine the phonemes corresponding to the speech input (e.g., using an acoustic model), and may then attempt to determine words that match the phonemes (e.g., using a language model). For example, if STT processing module 430-1 first identifies a sequence of phonemes corresponding to a portion of the speech input, it may then determine, based on vocabulary index 444-1, that this sequence corresponds to the word "tomato."

In some examples, STT processing module 430-1 may use fuzzy matching techniques to determine the words in an utterance. Thus, for example, STT processing module 430-1 may determine that a particular sequence of phonemes corresponds to the word "tomato," even if that sequence of phonemes is not one of the candidate phoneme sequences for that word.
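
As an illustration, fuzzy matching of a decoded phoneme sequence against a lexicon could be performed with an edit-distance comparison, as in the following sketch; the lexicon entries and distance threshold are hypothetical.

    # Sketch of fuzzy matching a decoded phoneme sequence against a lexicon.
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            cur = [i]
            for j, y in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
            prev = cur
        return prev[-1]

    def fuzzy_lookup(phonemes, lexicon, max_dist=2):
        word, ref = min(lexicon.items(), key=lambda kv: edit_distance(phonemes, kv[1]))
        return word if edit_distance(phonemes, ref) <= max_dist else None

    lexicon = {"tomato": ["t", "ah", "m", "ey", "t", "ow"],
               "potato": ["p", "ah", "t", "ey", "t", "ow"]}
    print(fuzzy_lookup(["t", "ah", "m", "ey", "d", "ow"], lexicon))  # -> "tomato"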

The natural language processing module 432-1 ("natural language processor") of the digital assistant may take the sequence of words or symbols ("symbol sequence") generated by the STT processing module 430-1 and attempt to associate the symbol sequence with one or more "actionable intents" identified by the digital assistant. An "actionable intent" may represent a task that may be performed by a digital assistant and that may have an associated task flow implemented in the task flow model 454-1. The associated task flow may be a series of programmed actions and steps taken by the digital assistant to perform the task. The capability scope of the digital assistant may depend on the number and variety of task flows that have been implemented and stored in the task flow model 454-1, or in other words, on the number and variety of "actionable intents" that the digital assistant recognizes. However, the effectiveness of a digital assistant may also depend on the assistant's ability to infer the correct "executable intent or intents" from a user request expressed in natural language.

In some examples, the natural language processor 432-1 may receive context information associated with the user request (e.g., from the I/O processing module 428-1) in addition to the sequence of words or symbols obtained from the STT processing module 430-1. The natural language processing module 432-1 may optionally use context information to clarify, supplement, and/or further qualify information contained in the symbol sequence received from the STT processing module 430-1. The context information may include, for example: a user preference; hardware and/or software state of the user device; sensor information collected before, during, or shortly after a user request; previous interactions (e.g., conversations) between the digital assistant and the user, and so on. As described herein, contextual information may be dynamic and may vary with time, location, content of a conversation, and other factors.

In some examples, the natural language processing may be based on, for example, ontology 460-1. Ontology 460-1 may be a hierarchical structure containing a number of nodes, each node representing an "actionable intent" or an "attribute" related to one or more of the "actionable intents" or other "attributes". As described above, an "actionable intent" may represent a task that a digital assistant is capable of performing, i.e., that task is "actionable" or can be performed. An "attribute" may represent a parameter associated with a sub-aspect of an executable intent or another attribute. The connection between the actionable intent node and the property node in the ontology 460-1 may define how the parameters represented by the property node relate to the task represented by the actionable intent node.

In some examples, ontology 460-1 may be composed of actionable intent nodes and property nodes. Within ontology 460-1, each actionable intent node may be connected to one or more property nodes directly or through one or more intermediate property nodes. Similarly, each property node may be connected directly to one or more actionable intent nodes or through one or more intermediate property nodes. For example, as shown in FIG. 15C, ontology 460-1 may include a "media search" node (i.e., an actionable intent node). The attribute nodes "one or more actors," "media category," and "media title" may each be directly connected to the actionable intent node (i.e., the "media search" node). In addition, the attribute nodes "name," "age," "Ulmer scale ranking," and "nationality" may be child nodes of the attribute node "actor."

In another example, as shown in FIG. 15C, ontology 460-1 may also include a "weather search" node (i.e., another actionable intent node). The attribute nodes "date/time" and "location" may each be connected to the "weather search" node. It should be appreciated that, in some examples, one or more attribute nodes may be associated with two or more actionable intents. In these examples, the one or more attribute nodes may be connected to the respective nodes corresponding to the two or more actionable intents in ontology 460-1.

An actionable intent node, along with the concept nodes to which it is connected, may be described as a "domain." In the present discussion, each domain may be associated with a respective actionable intent, and may refer to the group of nodes (and the relationships among them) associated with the particular actionable intent. For example, the ontology 460-1 shown in FIG. 15C may include an example of a media domain 462-1 and an example of a weather domain 464-1 within ontology 460-1. The media domain 462-1 may include the actionable intent node "media search" and the attribute nodes "one or more actors," "media category," and "media title." The weather domain 464-1 may include the actionable intent node "weather search" and the attribute nodes "location" and "date/time." In some examples, ontology 460-1 may be composed of multiple domains. Each domain may share one or more attribute nodes with one or more other domains.
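
For illustration, the media and weather domains described above could be represented with a simple node structure such as the following sketch; the dictionary layout and field names are illustrative assumptions, not the ontology format used by the digital assistant.

    # Minimal sketch of an ontology of actionable-intent and attribute nodes.
    media_search = {
        "type": "actionable_intent",
        "name": "media search",
        "properties": ["one or more actors", "media category", "media title"],
    }
    weather_search = {
        "type": "actionable_intent",
        "name": "weather search",
        "properties": ["location", "date/time"],
    }
    ontology = {
        "media": {"intent": media_search, "properties": media_search["properties"]},
        "weather": {"intent": weather_search, "properties": weather_search["properties"]},
    }
    # A domain is the intent node plus the attribute nodes connected to it.
    for name, domain in ontology.items():
        print(name, "->", domain["intent"]["name"], domain["properties"])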

Although FIG. 15C shows two exemplary domains within ontology 460-1, other domains may include, for example, "athlete," "stock," "direction," "media setting," "sports team," "time," and "joke," etc. The domain "athlete" may be associated with the executable intent node "search for athlete information" and may further include attribute nodes such as "athlete name", "team to which the athlete belongs", and "athlete statistics".

In some examples, ontology 460-1 may include all domains (and thus executable intents) that the digital assistant is able to understand and act upon. In some examples, ontology 460-1 may be modified, such as by adding or removing entire domains or nodes or by modifying relationships between nodes within ontology 460-1.

In some examples, each node in ontology 460-1 may be associated with a set of words and/or phrases that are related to the attribute or actionable intent represented by the node. The respective set of words and/or phrases associated with each node may be the so-called "vocabulary" associated with the node. The respective set of words and/or phrases associated with each node may be stored in the vocabulary index 444-1 in association with the attribute or actionable intent represented by the node. For example, returning to FIG. 15C, the vocabulary associated with the node for the attribute "actor" may include words such as "A-list," "Reese Witherspoon," "Arnold Schwarzenegger," "Brad Pitt," and so forth. In another example, the vocabulary associated with the node for the actionable intent "weather search" may include words and phrases such as "weather," "how is the weather," "forecast," and the like. The vocabulary index 444-1 may optionally include words and phrases in different languages.

Natural language processing module 432-1 may receive the sequence of symbols (e.g., a text string) from STT processing module 430-1 and determine which nodes are implicated by the words in the sequence of symbols. In some examples, a word or phrase in the sequence of symbols may "trigger" or "activate" one or more nodes in ontology 460-1 if the word or phrase is found to be associated with those nodes (via vocabulary index 444-1). Based on the number and/or relative importance of the activated nodes, the natural language processing module 432-1 may select one of the actionable intents as the task that the user intends the digital assistant to perform. In some examples, the domain with the most "triggered" nodes may be selected. In some examples, the domain with the highest confidence value may be selected (e.g., based on the relative importance of its various triggered nodes). In some examples, the domain may be selected based on a combination of the number and importance of triggered nodes. In some examples, additional factors are also considered in selecting a node, such as whether the digital assistant has previously correctly interpreted a similar request from the user.
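
The following sketch illustrates the idea of selecting a domain by counting triggered nodes, where each token found in a node's vocabulary activates that node and the domains are compared by weighted activation; the vocabulary entries and weights are invented for illustration.

    # Hypothetical domain selection by weighted count of triggered nodes.
    vocabulary_index = {
        "media search": {"movie", "movies", "show", "actor", "jurassic", "park"},
        "weather search": {"weather", "forecast", "rain", "temperature"},
    }
    node_weight = {"media search": 1.0, "weather search": 1.0}

    def select_domain(token_sequence):
        scores = {}
        for domain, vocab in vocabulary_index.items():
            triggered = sum(1 for tok in token_sequence if tok.lower() in vocab)
            scores[domain] = triggered * node_weight[domain]
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

    print(select_domain("show me jurassic park movies".split()))  # -> media search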

The user data 448-1 may include user-specific information, such as user-specific vocabulary, user preferences, the user's address, the user's default and second languages, the user's contact list, and other short-term or long-term information for each user. In some examples, the natural language processing module 432-1 may use user-specific information to supplement the information contained in the user input to further define the user intent. For example, for the user request "how is the weather this week," the natural language processing module 432-1 may access user data 448-1 to determine where the user is located, rather than requiring the user to explicitly provide such information in the request.

Additional details of searching an ontology based on a symbolic string are described in U.S. utility patent application Serial No. 12/341,743, entitled "Method and Apparatus for Searching Using An Active Ontology," filed December 22, 2008, the entire disclosure of which is incorporated herein by reference.

In some examples, once the natural language processing module 432-1 identifies an actionable intent (or domain) based on the user request, the natural language processing module 432-1 may generate a structured query to represent the identified actionable intent. In some examples, the structured query may include parameters for one or more nodes within the domain of the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user may say "find other seasons of this TV series." In this case, the natural language processing module 432-1 may correctly identify the actionable intent as "media search" based on the user input. According to the ontology, a structured query for the "media" domain may include parameters such as {media actor}, {media category}, {media title}, and the like. In some examples, based on the speech input and the text derived from the speech input using STT processing module 430-1, the natural language processing module 432-1 may generate a partially structured query for the media search domain, where the partial structured query includes the parameter {media category = TV series}. However, in this example, the user's utterance contains insufficient information to complete the structured query associated with the domain. Therefore, other necessary parameters, such as {media title}, may not be specified in the structured query based on the currently available information. In some examples, the natural language processing module 432-1 may populate some parameters of the structured query with the received contextual information. For example, the TV series "Mad Men" may currently be playing on the media device. Based on this contextual information, the natural language processing module 432-1 may populate the {media title} parameter in the structured query with "Mad Men."
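
A minimal sketch of generating a partially structured query for the media search domain and filling a missing parameter from contextual information (such as the series currently playing) is shown below; the helper function and field names are hypothetical.

    # Hypothetical construction of a structured query with context fill-in.
    def build_structured_query(parsed_request, context):
        query = {"intent": "media search",
                 "media category": parsed_request.get("media category"),
                 "media title": parsed_request.get("media title"),
                 "media actor": parsed_request.get("media actor")}
        # Populate unspecified parameters from context (e.g., the show now playing).
        if query["media title"] is None and parsed_request.get("refers_to_current_media"):
            query["media title"] = context.get("now_playing")
        return {k: v for k, v in query.items() if v is not None}

    request = {"media category": "tv series", "refers_to_current_media": True}
    context = {"now_playing": "Mad Men"}
    print(build_structured_query(request, context))
    # {'intent': 'media search', 'media category': 'tv series', 'media title': 'Mad Men'}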

In some examples, the natural language processing module 432-1 may pass the structured query (including any completed parameters) to the task flow processing module 436-1 ("task flow processor"). The task flow processing module 436-1 may be configured to receive the structured query from the natural language processing module 432-1, complete the structured query if necessary, and perform the actions required to "complete" the user's ultimate request. In some examples, the various procedures necessary to complete these tasks may be provided in task flow model 454-1. In some examples, the task flow model 454-1 may include procedures for obtaining additional information from the user, as well as task flows for performing actions associated with the actionable intent.

As described above, to complete a structured query, the task flow processing module 436-1 may need to initiate an additional dialog with the user in order to obtain additional information and/or to disambiguate potentially ambiguous utterances. When such interaction is necessary, the task flow processing module 436-1 may invoke the dialog flow processing module 434-1 to engage in a dialog with the user. In some examples, the dialog flow processing module 434-1 may determine how (and/or when) to ask the user for additional information, and may receive and process the user's responses. Questions may be provided to the user, and answers may be received from the user, through the I/O processing module 428-1. In some examples, the dialog flow processing module 434-1 may present dialog output to the user via audio and/or visual output, and may receive input from the user via spoken or physical (e.g., clicking) responses. For example, the user may ask "what is the weather like in Paris?" When the task flow processing module 436-1 invokes the dialog flow processing module 434-1 to determine the "location" information for the structured query associated with the domain "weather search," the dialog flow processing module 434-1 may generate a question such as "which Paris?" to pass to the user. In addition, the dialog flow processing module 434-1 may cause affordances associated with "Paris, Texas" and "Paris, France" to be presented for user selection. Once an answer is received from the user, the dialog flow processing module 434-1 may populate the structured query with the missing information, or pass the information to the task flow processing module 436-1 to complete the missing information for the structured query.

Once the task flow processing module 436-1 has completed the structured query for the actionable intent, the task flow processing module 436-1 may proceed to perform the ultimate task associated with the actionable intent. Accordingly, the task flow processing module 436-1 may execute the steps and instructions in the task flow model 454-1 according to the specific parameters contained in the structured query. For example, the task flow model for the actionable intent of "media search" may include steps and instructions for performing a media search query to obtain relevant media items. For example, using a structured query such as {media search, media category = TV series, media title = Mad Men}, the task flow processing module 436-1 may perform the following steps: (1) performing a media search query using a media database to obtain relevant media items; (2) ranking the retrieved media items according to relevance and/or popularity; and (3) displaying the media items sorted according to relevance and/or popularity.
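
For illustration, the three steps of such a task flow could be sketched as follows, with a small in-memory list standing in for the media database; the data and function names are illustrative assumptions.

    # Hypothetical task flow for the "media search" actionable intent.
    MEDIA_DB = [
        {"title": "Mad Men", "category": "tv series", "popularity": 0.9},
        {"title": "Mad Max", "category": "movie", "popularity": 0.8},
        {"title": "Mad Men Specials", "category": "tv series", "popularity": 0.4},
    ]

    def run_media_search_task(structured_query):
        title = structured_query.get("media title", "").lower()
        category = structured_query.get("media category")
        # Step 1: execute the media search query against the database.
        hits = [m for m in MEDIA_DB
                if title in m["title"].lower() and (category is None or m["category"] == category)]
        # Step 2: rank the retrieved items (popularity as a proxy for relevance here).
        hits.sort(key=lambda m: m["popularity"], reverse=True)
        # Step 3: return the sorted items for display.
        return hits

    print(run_media_search_task({"media category": "tv series", "media title": "Mad Men"}))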

In some examples, the task flow processing module 436-1 may complete the task requested in the user input, or provide the informational answer requested in the user input, with the assistance of the service processing module 438-1 ("service processor"). For example, the service processing module 438-1 may act on behalf of the task flow processing module 436-1 to perform a media search, retrieve weather information, invoke or interact with applications installed on other user devices, and invoke or interact with third-party services (e.g., social networking websites, media review websites, media subscription services, etc.). In some examples, the protocols and APIs required by each service may be specified by a respective service model among the service models 456-1. The service processing module 438-1 may access the appropriate service model for a service and generate requests for the service in accordance with the protocols and APIs required by the service according to the service model.

For example, a third-party media search service may submit a service model specifying the necessary parameters for performing a media search and the APIs for communicating the values of the necessary parameters to the media search service. When requested by the task flow processing module 436-1, the service processing module 438-1 may establish a network connection with the media search service and send the necessary parameters for the media search (e.g., media actor, media type, media title) to the media search service's online interface in a format according to the API of the media search service.
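
The following sketch illustrates how a service processing step might validate the necessary parameters against a service model and format a request for a third-party media search service; the endpoint URL, parameter names, and JSON payload are invented placeholders, not an actual service's API.

    # Hypothetical request generation according to a third-party service model.
    import json
    import urllib.request

    SERVICE_MODEL = {
        "endpoint": "https://example-media-search.invalid/api/search",  # placeholder
        "required_parameters": ["media_actor", "media_type", "media_title"],
    }

    def build_service_request(parameters):
        missing = [p for p in SERVICE_MODEL["required_parameters"] if p not in parameters]
        if missing:
            raise ValueError(f"missing required parameters: {missing}")
        body = json.dumps(parameters).encode()
        return urllib.request.Request(SERVICE_MODEL["endpoint"], data=body,
                                      headers={"Content-Type": "application/json"})

    req = build_service_request({"media_actor": "any", "media_type": "movie", "media_title": "Mad Men"})
    print(req.full_url)  # the request would then be sent over the established connection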

In some examples, the natural language processing module 432-1, the dialog flow processing module 434-1, and the task flow processing module 436-1 may be used jointly and iteratively to infer and define the user's intent, to obtain information to further clarify and refine the user's intent, and to ultimately generate a response (i.e., output to the user or complete the task) to satisfy the user's intent. The generated response may be a dialog response to the speech input that at least partially satisfies the user's intent. Further, in some examples, the generated response may be output as a speech output. In these examples, the generated response may be sent to a speech synthesis module 440-1 (e.g., a speech synthesizer), where the response may be processed to synthesize the dialog response into speech form. In other examples, the generated response may be data content relevant to satisfying the user request in the voice input.

The speech synthesis module 440-1 may be configured to synthesize speech output for presentation to the user. The speech synthesis module 440-1 synthesizes speech output based on text provided by the digital assistant. For example, the generated dialog response may be in the form of a text string. The speech synthesis module 440-1 may convert the text string into audible speech output. The speech synthesis module 440-1 may use any suitable speech synthesis technique to generate speech output from text, including but not limited to: concatenative synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, articulatory synthesis, hidden Markov model (HMM) based synthesis, and sine wave synthesis. In some examples, the speech synthesis module 440-1 may be configured to synthesize individual words based on phoneme strings corresponding to the words. For example, a phoneme string may be associated with a word in the generated dialog response. The phoneme string may be stored in metadata associated with the word. The speech synthesis module 440-1 may be configured to directly process the phoneme string in the metadata to synthesize the word in speech form.

In some examples, instead of (or in addition to) using the speech synthesis module 440-1, speech synthesis may be performed on a remote device (e.g., server system 108-1), and the synthesized speech may be sent to the user device for output to the user. This may occur, for example, in some implementations where the output of the digital assistant is generated at a server system. And because server systems generally have more processing power or resources than a user device, it may be possible to obtain higher-quality speech output than would be practical with client-side synthesis.

More details regarding digital assistants can be found in U.S. utility patent application Serial No. 12/987,982, entitled "Intelligent Automated Assistant," filed January 10, 2011, and U.S. utility patent application Serial No. 13/251,088, entitled "Generating and Processing Task Items That Represent Tasks to Perform," filed September 30, 2011, the entire disclosures of both of which are incorporated herein by reference.

5. Process for operating a digital assistant in a media environment

Fig. 16A-16E illustrate a process 500-1 for operating a digital assistant for a media system, according to various examples. Process 500-1 may be performed using one or more electronic devices implementing a digital assistant. For example, the process 500-1 may be performed using one or more of the systems 100-1, 128-1, 104-1, 122-1, or 400-1 described above. Fig. 17A-17K depict screenshots displayed by a media device on a display unit at various stages of process 500-1, according to various examples. The process 500-1 is described below with simultaneous reference to fig. 16A-16E and 17A-17K. It should be understood that some operations in the process 500-1 may be combined, the order of some operations may be changed, and some operations may be omitted.

At block 502-1 of process 500-1 and referring to FIG. 17A, a primary media item group 604-1 may be displayed on a display unit. Each media item may correspond to particular media content (e.g., a movie, video, television program/series, video game, etc.). The primary set of media items 604-1 may be displayed in response to a previously received media search request. In some examples, the previously received media search request may be a spoken interaction with a digital assistant. In other examples, the previously received media search request may be a text interaction with a digital assistant received via a keyboard interface of the media device.

The primary set of media items 604-1 may be obtained by executing a primary media search query in accordance with a previously received media search request. In some examples, the primary media search query may be a structured search based on one or more parameter values defined in a previously received media search request. In these examples, each media item in the primary set of media items 604-1 may include one or more parameter values that match one or more parameter values defined in a previously received media search request. In other examples, the primary media search query may be a string search based on a text entry string of a previously received media search request. In these examples, each media item in the primary set of media items 604-1 may be associated with text that matches the text input string of a previously received media search request.

The media items 604-1 may share common attributes or parameter values corresponding to the previously received media search request. In the present example shown in fig. 17A, the previously received media search request may be a request for action movies of the last 10 years. The primary set of media items 604-1 may be retrieved to satisfy the previously received media search request. In this example, the primary media item group 604-1 may include action movies released in the last 10 years, such as "The Amazing Spider-Man 2," "Furious 7," and "Iron Man 3." Text 612-1 describing the attributes or parameter values corresponding to the previously received media search request may be displayed in association with the primary media item group 604-1.

As shown in FIG. 17A, the primary media item group 604-1 may be displayed via a user interface 602-1. The user interface 602-1 may be configured to enable the user to navigate through the media items in the user interface 602-1 and select a particular media item for consumption. In some examples, one or more secondary media item groups 606-1 may be displayed with the primary media item group 604-1 in the user interface 602-1. It should be appreciated that the secondary media item groups may not always be displayed. In some examples, user interface 602-1 may occupy at least a majority of the display area of the display unit. In other examples, the display unit may display media content (not shown) being played on the media device while the user interface 602-1 is displayed. In these examples, the display area occupied by user interface 602-1 on the display unit may be smaller than the display area occupied by the media content on the display unit. Further, in these examples, user interface 602-1 may not include the secondary media item groups 606-1. In particular, the only media items displayed via the user interface 602-1 may be the primary media item group 604-1.

Each of the displayed media items in the primary media item group 604-1 and the secondary media item group 606-1 may be associated with a parameter value for a parameter such as media type, media title, actor, media character, director, media release date, media duration, media quality rating, media popularity rating, and the like. In some examples, the one or more parameter values for each media item may be displayed as text on or adjacent to the respective media item via the user interface 602-1.

In this example, the one or more secondary media item groups 606-1 may be based on the primary media item group 604-1. In particular, the one or more secondary media item groups 606-1 may share common attributes or parameter values with the primary media item group 604-1. As shown in FIG. 17A, the secondary media item group 608-1 may be action movies and the secondary media item group 610-1 may be foreign action movies. Thus, in this example, both the primary media item group 604-1 and the secondary media item groups 606-1 may be associated with the action movie media category. It should be appreciated that in other examples, the secondary media item groups 606-1 may be based on parameter values derived from other information, such as previous media search requests or trending media items and categories.

At block 504-1 of process 500-1, a user input may be detected. The user input may be detected while the primary media item group 604-1 of block 502-1 is displayed. In some examples, the user input may be detected on a remote control of the media device (e.g., remote control 124-1). In particular, the user input may be a user interaction with the remote control, such as pressing a button (e.g., button 274-1) or making contact with a touch-sensitive surface of the remote control (e.g., touch-sensitive surface 278-1). In some examples, the user input may be detected via a second electronic device (e.g., device 122-1) configured to interact with the media device. The user input may be associated with invoking the digital assistant of the media device. In response to detecting the user input, one or more of blocks 506-1 through 510-1 may be performed.

At block 506-1 of process 500-1, audio input may be received. The audio input may include a media-related request. For example, in response to detecting the user input at block 504-1, audio input may be sampled via a microphone (e.g., microphone 272-1) of the media device. The sampled audio input may include the media-related request in the form of a user utterance. In some examples, the audio input including the media-related request may be received while at least a portion of the primary media item group 604-1 is being displayed. The media-related request may be in natural language form. In some examples, the media-related request may be underspecified, where not all of the information needed to satisfy the request is explicitly defined. For example, the media-related request may be: "Jack Ryan." In this example, the request does not explicitly specify whether it is a new media search request for movies featuring the character Jack Ryan, or a request to filter the currently displayed media items based on the character Jack Ryan.

In some examples, the media-related request may include one or more ambiguous terms. For example, the media-related request may be: "Which ones are good?" In this example, the media-related request includes the ambiguous term "ones," which is intended to refer to the media items being displayed (e.g., the primary media item group 604-1 and/or the secondary media item groups 606-1). Further, in this example, the media-related request uses an ambiguous term (e.g., "good") to define a parameter value (e.g., a user rating or critics' rating) for the requested media items.

The media-related request may define one or more parameter values associated with the media item. Examples of parameter values that may be defined in a media-related request include media type, media title, actors, media characters, media director, media release date, media duration, media quality rating, media popularity rating, and the like.

In some examples, the media-related request may be a media search request. In some examples, the media-related request may be a request to correct the primary media search query. In other examples, the media-related request may be a request to navigate through media items displayed on the user interface 602-1. In other examples, the media-related request may be a request to adjust a state or setting of an application of the media device.

Although in this example the media-related request is received as audio input, it should be understood that in other examples, the media-related request may be received as text input. In particular, at block 506-1, text input including a media-related request may be received via a keyboard interface instead of audio input. It should be appreciated that block 508-1 need not be performed in examples where the media-related request is received as text input. Instead, the primary user intent may be determined directly from the text input at block 510-1.

At block 508-1 of process 500-1, a textual representation of the media-related request may be determined. For example, the text representation may be determined by performing speech-to-text (STT) processing on the audio input received at block 506-1. In particular, an STT processing module (e.g., STT processing module 430-1) may be used to process the audio input to convert the media-related requests in the audio input into a textual representation. The text representation may be a token string representing a corresponding text string. In some examples, the text representation may be displayed on a display unit. In particular, the textual representation may be displayed in real-time upon receiving the audio input at block 506-1.

One or more language models may be used to determine the textual representation during STT processing. In some examples, the STT processing may be biased toward media-related textual results. In particular, the one or more language models used to determine the textual representation may be biased toward media-related textual results. For example, a corpus of media-related text can be used to train the one or more language models. Additionally or alternatively, the biasing may be achieved by more heavily weighting candidate textual results that are related to media. In this way, candidate textual results that are related to media may be ranked higher with the biasing than without it. The biasing may be desirable for increasing the accuracy of STT processing for media-related words or phrases (e.g., movie names, movie actors, etc.) in the media-related request. For example, without biasing toward media-related textual results, certain media-related words or phrases, such as "Jurassic Park," "Arnold Schwarzenegger," and "Shrek," may rarely be found in a typical corpus of text and thus may not be successfully recognized during STT processing.

As described above, text associated with the media items displayed at block 502-1 (e.g., the primary media item group 604-1 and the secondary media item groups 606-1) may be displayed via user interface 602-1. The text may describe one or more attributes or parameter values of each media item in the user interface 602-1. For example, the primary media item group 604-1 may include a media item corresponding to the movie "Iron Man 3." In this example, the displayed text may include the title "Iron Man 3," the actors "Robert Downey Jr." and "Gwyneth Paltrow," and the director "Shane Black." In some examples, a custom language model may be generated using the displayed text associated with the displayed media items. STT processing may then be performed using the custom language model to determine the textual representation. In particular, candidate text results from the custom language model may be given greater weight in determining the textual representation relative to candidate text results from other language models. It should be appreciated that, in some examples, not all of the attributes or parameter values associated with the primary media item group 604-1 and the secondary media item groups 606-1 may be displayed as text on the display unit. In these examples, the text of the attributes or parameter values of the primary media item group 604-1 and the secondary media item groups 606-1 that are not displayed on the display unit may also be used to generate the custom language model.
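
By way of illustration, a custom vocabulary could be built from the text of the displayed media items and used to boost candidate recognition results that it covers, as in the following sketch; the boost factor and data layout are illustrative assumptions.

    # Hypothetical custom language model derived from on-screen media item text.
    def build_custom_vocabulary(displayed_items):
        vocab = set()
        for item in displayed_items:
            for field in ("title", "actors", "director"):
                for value in item.get(field, []):
                    vocab.update(value.lower().split())
        return vocab

    def rescore(candidates, custom_vocab, boost=2.0):
        def weight(cand):
            text, base_score = cand
            covered = sum(1 for w in text.lower().split() if w in custom_vocab)
            return base_score * (boost ** covered)
        return sorted(candidates, key=weight, reverse=True)

    on_screen = [{"title": ["Iron Man 3"], "actors": ["Robert Downey Jr.", "Gwyneth Paltrow"],
                  "director": ["Shane Black"]}]
    vocab = build_custom_vocabulary(on_screen)
    print(rescore([("iron man three", 0.4), ("i am manthree", 0.5)], vocab)[0][0])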

In some examples, the predicted text may be determined using a text representation. For example, the language model may be used to predict one or more subsequent words based on a sequence of words in the textual representation. The predicted text may be determined upon receiving the audio input. Further, the predicted text may be displayed together with a text representation on the display unit. In particular, the predicted text may be displayed in real-time upon receiving the audio input at block 506-1.

The predicted text may be accepted by the user based on detecting an endpoint of the audio input. In some examples, the endpoint may be detected once the user input of block 504-1 is no longer detected. In other examples, the endpoint may be detected a predetermined duration after one or more audio characteristics of the audio input no longer satisfy predetermined criteria. It may be determined whether the endpoint of the audio input is detected after the predicted text is displayed. In accordance with a determination that the endpoint of the audio input is detected after the predicted text is displayed, it may be determined that the predicted text is accepted by the user. Specifically, at block 510-1, the textual representation and the accepted predicted text may be used to determine the primary user intent.
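
The following sketch illustrates the acceptance logic described above: a predicted continuation is appended to the transcription only if an endpoint is detected after the prediction is displayed; the continuation table is a hypothetical stand-in for a language model.

    # Hypothetical acceptance of predicted text based on endpoint detection.
    CONTINUATIONS = {("jurassic",): "park", ("mad",): "men", ("star",): "wars"}

    def predict_next(words):
        return CONTINUATIONS.get(tuple(words[-1:]))

    def finalize(transcript_words, endpoint_detected_after_display):
        predicted = predict_next(transcript_words)
        if predicted and endpoint_detected_after_display:
            # Endpoint detected after the prediction was displayed: treat it as accepted.
            return transcript_words + [predicted]
        return transcript_words

    print(finalize(["play", "jurassic"], endpoint_detected_after_display=True))   # ['play', 'jurassic', 'park']
    print(finalize(["play", "jurassic"], endpoint_detected_after_display=False))  # ['play', 'jurassic']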

In some examples, the one or more language models used to determine the textual representation may be configured to recognize media-related terms in multiple languages. In particular, media-related terms (e.g., media titles, actor names, etc.) may have unique translations in different languages. For example, the name of the actor "Arnold Schwarzenegger" has corresponding renderings in other languages, such as Chinese and Hindi. The one or more language models used to determine the textual representation may be trained using corpora of media-related text in various languages. Thus, the one or more language models may be configured to recognize the corresponding translations of media-related terms in the various languages.

At block 510-1 of process 500-1, a primary user intent corresponding to the media-related request may be determined. The primary user intent may be determined by performing natural language processing on the textual representation. In particular, a natural language processing module (e.g., natural language processing module 432-1) may be used to parse and process the textual representation to determine a number of candidate user intents corresponding to the media-related request. The candidate user intents may be ranked according to probability, and the candidate user intent having the highest probability may be determined to be the primary user intent.

Determining the primary user intent may include determining the relevant domain or actionable intent associated with the textual representation. In some examples, a media type associated with the media-related request may be determined at block 510-1, and the relevant domain or actionable intent may be determined based on the determined media type associated with the media-related request. For example, based on the media-related request "James Bond," the media type may be determined to be "movies/television shows," and the corresponding actionable intent or domain may be determined to be "find movies/television shows." In this example, the media-related request may be fulfilled by performing a media search for "James Bond" according to the media type "movies/television shows." In particular, a database of movies and television shows may be searched for the media character "James Bond" to fulfill the media-related request. In another example, based on the media-related request "Taylor Swift," the media type may be determined to be "music," and the corresponding actionable intent or domain may be determined to be "find music." In this example, the media-related request may be fulfilled by searching a music database (e.g., performing a search on the iTunes music service) for the singer "Taylor Swift."

In some examples, the natural language processing used to determine the primary user intent may be biased toward media-related user intents. In particular, the natural language processing module may be trained to identify media-related words and phrases (e.g., media titles, media categories, actors, MPAA film rating labels, etc.) that trigger media-related nodes in the ontology. For example, the natural language processing module may identify the phrase "Jurassic Park" in the textual representation as a movie title and, as a result, trigger a "media search" node in the ontology associated with the actionable intent of searching for media items. In some examples, the biasing may be implemented by restricting the nodes in the ontology to a predetermined set of media-related nodes. For example, the set of media-related nodes may be the nodes associated with the applications of the media device. Further, in some examples, the biasing may be implemented by weighting candidate user intents that are media-related more heavily than candidate user intents that are not media-related.

In some examples, the primary user intent may be obtained from a separate device (e.g., DA server 106-1). In particular, the audio data may be transmitted to the separate device to perform natural language processing. In these examples, the media device may indicate to the separate device (e.g., via data transmitted to the separate device with the sampled audio data) that the sampled audio data is associated with a media application. The indication may bias the natural language processing toward media-related user intents.

The natural language processing module may be further trained to recognize the meanings of media-related terms across various languages and regions. For example, the natural language processing module may recognize that "Arnold Schwarzenegger" and its renderings in other languages all refer to the same actor. In addition, movie titles may vary across languages and regions. For example, the U.S. movie "Live Free or Die Hard" is titled "Die Hard 4.0" in the United Kingdom. In another example, the U.S. movie "Top Gun" is titled "Love in the Skies" in Israel. Thus, the natural language processing module may be configured to recognize that "Top Gun" in English and "Love in the Skies" in Hebrew both refer to the same movie.

In some examples, the natural language processing module may be configured to identify an intended parameter value based on an ambiguous term in the media-related request. In particular, the natural language processing module may determine the strength of a connection (e.g., relevance, salience, semantic similarity, etc.) between the ambiguous term and one or more parameter values. The parameter value having the strongest connection to the ambiguous term may be determined to be the intended parameter value. For example, the media-related request may be: "Show me some good movies." The term "good" may be ambiguous because it does not explicitly define a particular parameter value. In this example, based on the strength of the connection with the term "good," the natural language processing module may determine that "good" refers to an average user rating greater than a predetermined value.
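
A minimal sketch of resolving an ambiguous term to its most strongly connected parameter value is shown below; the association strengths and parameter descriptions are invented for illustration.

    # Hypothetical mapping of an ambiguous term to an intended parameter value.
    ASSOCIATIONS = {
        "good": [("average user rating > 3.5", 0.8), ("release date within 1 year", 0.2)],
        "new":  [("release date within 1 year", 0.9), ("average user rating > 3.5", 0.1)],
    }

    def resolve_ambiguous_term(term):
        candidates = ASSOCIATIONS.get(term.lower())
        if not candidates:
            return None
        value, strength = max(candidates, key=lambda vs: vs[1])
        return value

    print(resolve_ambiguous_term("good"))  # -> 'average user rating > 3.5'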

In some examples, a preliminary user intent may be determined prior to determining the primary user intent. Determining the preliminary user intent may include determining an actionable intent or domain using a portion of the audio input received at block 506-1 (rather than the entire audio input). The process of determining the preliminary user intent may be less robust, and therefore faster, than determining the primary user intent. This may allow the preliminary user intent to be determined while the audio input is still being received. Determining the preliminary user intent may allow the data needed to satisfy the media-related request to be prefetched, thereby reducing the response time of the digital assistant. For example, the media-related request may be: "What is playing at 7 pm?" Based on the first portion of the request, "What is playing ...," the preliminary user intent may be determined to be "search channel programming." Based on the preliminary user intent, the data needed to satisfy the preliminary user intent may be identified. In particular, it may be determined that the user's subscription information will be needed to determine which channels are available to the user. The programs corresponding to those channels may then be determined. The digital assistant may initially determine whether the needed data is already stored on the media system or the digital assistant server. In accordance with a determination that the data is stored on the media system or the digital assistant server at the time the preliminary user intent is determined, the data may be retrieved while the primary user intent is being determined. In accordance with a determination that the data is not stored on the media system or the digital assistant server at the time the preliminary user intent is determined, the needed data may be obtained while the primary user intent is being determined. For example, the digital assistant may, without user intervention, automatically communicate with the user's subscription service provider and retrieve the channels available to the user.

As shown in FIG. 16A, block 510-1 of process 500-1 may include one or more of blocks 512-1 through 518-1. At block 512-1 of process 500-1, it may be determined whether the primary user intent includes a user intent to narrow the primary media search query corresponding to the primary media item group 604-1. In other words, it may be determined whether the media-related request of block 506-1 is a request to narrow a previously received media search request. In some examples, determining whether the primary user intent includes a user intent to narrow the primary media search query may include determining whether the media-related request includes a predetermined word or phrase corresponding to a user intent to narrow the primary media search query. The predetermined word or phrase may include one of a plurality of refinement terms. For example, the predetermined word or phrase may indicate an explicit request to narrow a previous media search request received prior to the media search request. Further, in some examples, the determination may be made based on a location of the predetermined word or phrase in the media-related request (e.g., at the beginning, middle, or end of the media-related request).

In the examples shown in FIGS. 17B-17C, the media-related request may be: "Only the ones with Jack Ryan." The textual representation 612-1 corresponding to the media-related request may be parsed during natural language processing to determine whether the media-related request includes a predetermined word or phrase corresponding to a user intent to narrow the primary media search query. Examples of predetermined words or phrases corresponding to a user intent to narrow the primary media search query may include "only," "filter by … …," "which," and so forth. In this example, based on the predetermined word "only" located at the beginning of the media-related request, it may be determined that the primary user intent includes a user intent to narrow the primary media search query corresponding to the primary media item group 604-1. In particular, it may be determined that the primary user intent is to narrow the search for action movies released in the last 10 years to include only media items having the character Jack Ryan. It should be appreciated that other techniques may be implemented to determine whether the primary user intent includes a user intent to narrow the primary media search query corresponding to the primary media item group 604-1. Further, it should be appreciated that the primary user intent may be based on one or more previous user intents corresponding to one or more previous media search requests received prior to the media search request of block 506-1.
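
A minimal sketch of the narrowing-intent check based on predetermined words or phrases and their position in the request; the phrase list is illustrative rather than the system's actual vocabulary:

```python
# Illustrative check for a "narrow the previous search" intent based on
# predetermined words or phrases and their position in the request.

NARROWING_PHRASES = ("only", "just the ones", "filter by", "which ones")

def is_narrowing_request(request_text):
    text = request_text.strip().lower()
    # Phrases at the beginning of the request are treated as explicit narrowing cues.
    return text.startswith(NARROWING_PHRASES)

print(is_narrowing_request("Only the ones with Jack Ryan"))   # True
print(is_narrowing_request("Show me some Jack Ryan movies"))  # False
```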

In accordance with a determination that the primary user intent includes a user intent to narrow the primary media search query corresponding to the primary media item group 604-1, one or more of blocks 520-1 through 534-1 may be performed.

At block 520-1 of the process 500-1, a second primary set of media items 612-1 may be obtained to satisfy the primary user intent. Block 520-1 may include generating a second primary media search query corresponding to the primary user intent. The second primary media search query may be based on the media-related request (e.g., "Only the ones with Jack Ryan") and the primary media search query (e.g., "action movies of the last 10 years"). In particular, the second primary media search query may include a set of parameter values. The set of parameter values can include one or more parameter values defined in the media-related request and one or more parameter values of the primary media search query. For example, the second primary media search query may be a query for searching for media items having the media type "movies," the media genre "action," the release date "last 10 years," and the media character "Jack Ryan." Alternatively, the second primary media search query may be a query for filtering the primary set of media items 604-1 and identifying only the media items of the set 604-1 having the media character "Jack Ryan." The second primary media search query may be generated by a natural language processing module (e.g., natural language processing module 432-1) based on the primary user intent.
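
For illustration, the combination of the previous query's parameter values with those of the narrowing request could look like the following sketch, with hypothetical parameter names:

```python
# A minimal sketch of combining the previous query's parameter values with
# those defined in the narrowing request to form the second primary query.

primary_query = {
    "media_type": "movie",
    "genre": "action",
    "release_date_range": "last 10 years",
}

narrowing_request_params = {
    "character": "Jack Ryan",
}

# The second primary media search query keeps the earlier constraints and
# adds the newly specified ones.
second_primary_query = {**primary_query, **narrowing_request_params}

print(second_primary_query)
# {'media_type': 'movie', 'genre': 'action',
#  'release_date_range': 'last 10 years', 'character': 'Jack Ryan'}
```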

Block 520-1 may also include executing the second primary media search query to obtain the second primary set of media items 612-1. The second primary media search query may be executed by searching one or more media databases for media items that satisfy the parameter value requirements of the second primary media search query. Each media item in the second primary set of media items can be associated with a set of parameter values. The set of parameter values may include one or more parameter values of the primary media search query and one or more parameter values defined in the media-related request of block 506-1. Further, each media item in the second primary set of media items 612-1 can be associated with a relevance score. The relevance score may indicate a likelihood that the media item satisfies the primary user intent. For example, a higher relevance score may indicate a higher likelihood that the media item satisfies the primary user intent. The second primary media search query may be executed by a task flow processing module (e.g., task flow processing module 436-1).

In the example where the primary set of media items 604-1 was obtained by performing a string search based on a previously received media search request (e.g., received via a keyboard interface), the second primary media search query may be performed by searching the primary set of media items 604-1 for media items that satisfy the parameter value requirements defined in the media-related request (e.g., "Jack Ryan"). In particular, the parameter values associated with the primary media item group 604-1 may first be obtained. The second primary set of media items 612-1 may then be obtained by performing a structured search using the obtained parameter values and based on the parameter values defined in the media-related request.

At block 522-1 of process 500-1, a second primary media item group 612-1 may be displayed on the display unit via the user interface 602-1. In particular, as shown in FIG. 17C, the display of the primary media item group 604-1 on the display unit may be replaced with the display of the second primary media item group 612-1. The second primary set of media items 612-1 may be displayed according to the relevance score associated with each media item. For example, referring to FIG. 17C, the second primary set of media items 612-1 may be arranged in descending order of relevance scores from left to right on the user interface 602-1.

At block 524-1 of the process 500-1, additional groups of media items may be obtained. Additional sets of media items may be acquired to provide the user with alternative options that may be relevant to the primary user intent. As shown in FIG. 16B, block 524-1 may include blocks 526-1 through 532-1.

At block 526-1 of process 500-1, a core set of parameter values associated with the second primary set of media items 612-1 may be identified. The core set of parameter values may be identified from the set of parameter values in the second primary media search query. In particular, non-salient parameter values in the set of parameter values may be identified and disregarded. After disregarding the non-salient parameter values, the remaining parameter values in the set may be identified as the core set of parameter values. The non-salient parameter values may be predetermined parameter values such as, for example, a media release date range, a media type, a media provider, a media quality rating, free or paid media, live or on-demand media, and the like. The core set of parameter values may have fewer parameter values than the set of parameter values.

In the example of FIG. 17C, the set of parameter values in the second primary media search query includes the parameter values "action movie," "last 10 years," and "Jack Ryan." In this example, the parameter value "last 10 years" may be identified as a non-salient parameter value (e.g., a media release date range) and disregarded. Thus, the remaining parameter values "action movie" and "Jack Ryan" may be identified as the core set of parameter values.

At block 528-1 of process 500-1, one or more additional parameter values may be identified. One or more additional parameter values may be identified based on information that may reflect the user's media consumption interests. For example, one or more additional parameter values may be identified based on a user's media selection history, a user's media search history, or media items in a user's watch list. Additionally or alternatively, one or more additional parameter values may be identified based on media selection histories of multiple users, which may indicate parameter values for media items that are currently most popular among users of the media device. In some examples, the method of identifying one or more additional parameter values may be similar to the method of determining other relevant parameter values described at block 560-1.

Returning to the example of FIG. 17C, it may be determined that action movies starring Ben Affleck are popular among users of media devices. In addition, it may be determined that the user has recently searched for or selected movies starring Ben Affleck. Thus, in this example, "Ben Affleck" may be identified as one of the one or more additional parameter values.

At block 530-1 of process 500-1, one or more additional media search queries may be generated. The additional media search queries may be based on the core set of parameter values identified at block 526-1. Further, the additional media search queries may be based on the one or more additional parameter values identified at block 528-1. For example, in FIG. 17C, the one or more additional media search queries may include a search for action movies having the character Jack Ryan (the core set of parameter values) and a search for action movies starring Ben Affleck (an additional parameter value identified at block 528-1).
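
A short sketch, under assumed parameter names, of how the core set of parameter values might be identified (block 526-1) and combined with an additional parameter value to form additional queries (blocks 528-1 through 530-1):

```python
# Hypothetical sketch: identify the core parameter values by dropping
# non-salient ones, then build additional search queries from the core set
# and from an additional parameter value (e.g., one derived from popularity).

NON_SALIENT_PARAMS = {"release_date_range", "media_provider", "media_quality"}

second_primary_query = {
    "genre": "action",
    "release_date_range": "last 10 years",
    "character": "Jack Ryan",
}

def core_parameter_values(query):
    """Keep only the salient parameter values."""
    return {k: v for k, v in query.items() if k not in NON_SALIENT_PARAMS}

core = core_parameter_values(second_primary_query)
additional_value = {"actor": "Ben Affleck"}  # identified from selection history / popularity

additional_queries = [
    core,                                          # action movies with the character Jack Ryan
    {"genre": core["genre"], **additional_value},  # action movies starring Ben Affleck
]

print(core)
print(additional_queries)
```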

Blocks 526-1 through 530-1 may be performed by a natural language processing module (e.g., natural language processing module 432-1). In particular, the natural language processing module may use the core set of parameter values (identified at block 526-1) and the one or more additional parameter values (identified at block 528-1) to determine one or more additional user intents. The natural language processing module may then generate the one or more additional media search queries (e.g., the structured queries described above with reference to FIG. 15B) based on the one or more additional user intents.

At block 532-1 of process 500-1, the one or more additional media search queries of block 530-1 may be performed. For example, the one or more additional media search queries may be executed by searching one or more media databases for media items that satisfy the additional media search queries. The media database used may be based on the type of media being searched. For example, a music database may be used for media search queries involving music, and a movie/television program database may be used for media search queries involving movies/television programs. Accordingly, one or more additional groups of media items 614-1 may be obtained by performing the one or more additional media search queries of block 530-1. Specifically, in FIG. 17C, the additional media item group 616-1 (e.g., the movies "Patriot Games," "Clear and Present Danger," etc.) may be obtained by searching for action movies having the character Jack Ryan, and the additional media item group 618-1 (e.g., the movies "The Sum of All Fears," "Daredevil," etc.) may be obtained by searching for action movies starring Ben Affleck. Block 532-1 may be performed by a task flow processing module (e.g., task flow processing module 436-1).

It should be appreciated that certain aspects of block 524-1 described above may similarly apply to block 546-1 or block 562-1.

At block 534-1 of process 500-1, one or more additional groups of media items may be displayed on the display unit. For example, as shown in FIG. 17C, additional sets of media items 616-1 and 618-1 may be displayed via the user interface 602-1. The additional sets of media items 616-1 and 618-1 may be used to provide the user with additional options that may interest the user. It may be desirable to increase the likelihood that a user will find and select media items for consumption without having to request additional searches, which may reduce browsing time and improve the user experience.

The manner in which the groups of media items are displayed may reflect the likelihood that the respective user intent corresponds to the user's actual intent. For example, as shown in FIG. 17C, the second primary group of media items is associated with the primary user intent (the user intent most likely to reflect the actual user intent) and is displayed in the top row of the user interface 602-1. The one or more additional groups of media items 616-1 and 618-1 are associated with additional user intents (user intents less likely to reflect the actual user intent) and are displayed in one or more subsequent rows below the top row in user interface 602-1. Moreover, the additional user intent associated with the additional media item group 616-1 may be more likely to reflect the actual user intent than the additional user intent associated with the additional media item group 618-1. Thus, in this example, the additional media item group 618-1 may be displayed in a row below the additional media item group 616-1. Although in this example the groups of media items are displayed in rows, it should be appreciated that in other examples other display layouts may be implemented.

Referring again to block 512-1, in accordance with a determination that the primary user intent does not include a user intent to narrow the primary media search query, one or more of blocks 514-1 through 518-1 or blocks 536-1 through 548-1 may be performed.

At block 514-1 of process 500-1, it may be determined whether the primary user intent includes a user intent to perform a new media search query. In some examples, the determination may be made based on explicit words or phrases in the media-related request. In particular, it may be determined whether the media-related request includes a predetermined word or phrase corresponding to a user intent to execute a new media search query, such as "show me," "find," "search," "show other … … movies," and the like. Further, in some examples, the determination may be made based on a location of the word or phrase in the media-related request (e.g., at the beginning, middle, or end of the media-related request). In a particular example, the media-related request may be: "Show me some Jack Ryan movies." Based on the phrase "show me" at the beginning of the media-related request, it may be determined that the primary user intent is to perform a new media search query for Jack Ryan movies.

In the absence of an explicit word or phrase indicating the user intent (e.g., "show me," "find," "search," etc.), the determination at block 514-1 may be based on a word or phrase corresponding to a parameter value of one or more media items. For example, as shown in FIG. 17D, the media-related request may be: "Jack Ryan." In this example, the media-related request does not include any explicit indication of whether the user intends to narrow the primary media search query or perform a new search. However, the digital assistant may recognize that "Jack Ryan" corresponds to a parameter value of one or more media items. In particular, "Jack Ryan" may be determined to be a media character associated with a plurality of electronic books and movies. Based on these parameter values, the primary user intent may be determined to be performing a new media search query for electronic books and movies having the character Jack Ryan. Other examples of words or phrases corresponding to parameter values of one or more media items may include "Tom Cruise," "Jurassic Park," "spy movies," "Sean Connery," "cartoons," "Frozen," and so forth.
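
One way to sketch this two-step determination, with an invented cue list and parameter-value table, is:

```python
# Illustrative determination of whether a request expresses a new-search
# intent: either an explicit cue word appears at the start of the request,
# or the request matches a known media parameter value (e.g., a character,
# actor, or title). The vocabularies are hypothetical.

NEW_SEARCH_CUES = ("show me", "find", "search for")
KNOWN_PARAMETER_VALUES = {
    "jack ryan": ("character", "Jack Ryan"),
    "tom cruise": ("actor", "Tom Cruise"),
    "frozen": ("title", "Frozen"),
}

def is_new_search_request(request_text):
    text = request_text.strip().lower()
    if text.startswith(NEW_SEARCH_CUES):
        return True
    # No explicit cue: fall back to recognizing a bare parameter value.
    return text in KNOWN_PARAMETER_VALUES

print(is_new_search_request("Show me some Jack Ryan movies"))  # True (explicit cue)
print(is_new_search_request("Jack Ryan"))                      # True (parameter value)
print(is_new_search_request("Only the ones with Jack Ryan"))   # False
```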

In accordance with a determination that the primary user intent includes a user intent to perform a new media search query, one or more of blocks 536-1 through 548-1 may be performed. At block 536-1 of the process 500-1, a third primary set of media items may be obtained according to the primary user intent. Block 536-1 may be similar to block 520-1. In particular, block 536-1 may include generating a third primary media search query based on the media-related request. The third primary media search query may correspond to the primary user intent to execute a new media search query. In particular, the third primary media search query may include one or more parameter values defined in the media-related request. For example, referring to FIG. 17D, the generated third primary media search query may be a query for searching for media items having the media character "Jack Ryan".

Block 536-1 may also include executing the third primary media search query to obtain a third primary media item group 620-1. The third primary media search query may be executed by searching one or more media databases for media items that satisfy parameter value requirements of the third primary media search query. Each media item in the third primary set of media items 620-1 can include one or more parameter values defined in the media-related request. In particular, in this example, each media item in the third primary media item group 620-1 can include "Jack Ryan" as the media character.

In some examples, the third primary media search query may be executed according to a media type associated with the media-related request. As described above, a media type associated with the media-related request may be determined at block 510-1 along with the primary user intent. The application or database used to execute the third primary media search query may be specific to the determined media type. In one example, if the media type is determined to be music, the third primary media search query may be executed using a music search application and/or a music database (e.g., an iTunes Store application) instead of, for example, a movie database.

In some examples, the media-related request may be associated with more than one media type. For example, the media-related request "Frozen" may be associated with several media types such as movies/television programs, music (e.g., soundtracks), and electronic books. When the third primary media search query is executed, a plurality of candidate media items associated with the various media types may be retrieved from one or more media databases. Each media item may be associated with a relevance score. The relevance score may indicate a degree of relevance of the respective media item to the third primary media search query. Further, the relevance score may be specific to the media database from which the candidate media item was obtained. In some examples, to compare media items from different databases based on the same criteria, normalized ranking may be performed on the plurality of candidate media items. In particular, the relevance scores may be normalized across the one or more media databases, and the normalized relevance scores may be used to perform a normalized ranking of the candidate media items. For example, a general media search application or database (e.g., Spotlight on Apple OS X or iOS) may be used to execute the third primary media search query. The general media search application or database may be a service external to the digital assistant. Using the general media search application or database, relevant media items may be obtained from various sources or databases (e.g., the iTunes Store, the App Store, iBooks, media items stored on a user device, etc.), and the relevant media items may be ranked based on the normalized relevance scores. The media items may then be sorted and displayed according to the normalized ranking for selection by the user at block 540-1.
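
A minimal sketch of normalized ranking across databases with different scoring scales; the sources, scores, and scale maxima are invented for the example:

```python
# Hypothetical sketch of normalized ranking: relevance scores from different
# media databases are rescaled to a common 0..1 range before the candidate
# media items are merged and sorted.

candidates = {
    "movies_db": [("Frozen (film)", 87.0), ("Frozen II (film)", 73.0)],  # scored 0..100
    "music_db":  [("Frozen (soundtrack)", 4.6)],                         # scored 0..5
    "ibooks_db": [("Frozen (e-book)", 0.55)],                            # scored 0..1
}

MAX_SCORES = {"movies_db": 100.0, "music_db": 5.0, "ibooks_db": 1.0}

def normalize(items, max_score):
    return [(title, score / max_score) for title, score in items]

merged = []
for source, items in candidates.items():
    merged.extend(normalize(items, MAX_SCORES[source]))

# Sort all candidates by their normalized relevance score, highest first.
merged.sort(key=lambda item: item[1], reverse=True)
print(merged)
```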

The one or more databases used to obtain the third primary set of media items may include information derived from various sources. In some examples, the one or more databases may include information from one or more media commentators. Media commentary may be authored by, for example, professional media critics, journalists, bloggers, social media service users, and the like. In an illustrative example, one or more media commentator reviews may include a phrase such as "car chase" to describe a movie such as "Bullitt," "The Bourne Identity," or "Fast Five." The phrase "car chase" can be extracted from the one or more media commentator reviews as a parameter value, and the parameter value can be associated with one or more of these movies in the media database. Thus, for the media-related request "show me movies with great car chases," the corresponding third primary media search query generated may be a search for movies having the parameter value "car chase." As such, candidate media items such as "Bullitt," "The Bourne Identity," or "Fast Five" may be obtained when searching the one or more databases.

In other examples, the one or more databases may include information derived from closed captioning of various movies, videos, or television programs. In particular, one or more parameter values may be extracted based on the closed captioning. For example, the closed captions for movies such as "Bullitt," "The Bourne Identity," or "Fast Five" may include several instances of the caption "[tires screeching]" to indicate the sound associated with a car chase. Based on these captions, one or more of the movies may be associated with the parameter value "car chase" in the media database. Thus, when the third primary media search query is executed, candidate media items associated with the parameter value may be identified (e.g., "Bullitt," "The Bourne Identity," "Fast Five," etc.).
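
For illustration, the derivation of a parameter value from closed-caption text might be sketched as follows; the caption pattern and occurrence threshold are assumptions:

```python
# Illustrative extraction of a parameter value from closed-caption text:
# if sound-effect captions associated with car chases occur often enough,
# the movie is tagged with the "car chase" parameter value.

import re

CHASE_CAPTION_PATTERN = re.compile(r"\[(tires screeching|engine revving)\]", re.IGNORECASE)
MIN_OCCURRENCES = 3

def derive_parameter_values(captions):
    hits = sum(1 for line in captions if CHASE_CAPTION_PATTERN.search(line))
    return {"car chase"} if hits >= MIN_OCCURRENCES else set()

sample_captions = [
    "[tires screeching]",
    "Get in the car!",
    "[engine revving]",
    "[tires screeching]",
]
print(derive_parameter_values(sample_captions))  # {'car chase'}
```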

In some examples, the media-related request may be a media search request based on the media item in focus in the user interface 602-1. For example, upon receiving the media-related request at block 506-1, the cursor 609-1 of the user interface 602-1 may be positioned over the media item 611-1. It may be determined whether the media-related request is a request to obtain a set of alternative media items similar to media item 611-1. In one example, the media-related request may be: "More like this." In this example, it may be determined that "this" refers to media item 611-1 based on the location context of the cursor 609-1. Thus, it may be determined that the media-related request is a request to obtain a set of alternative media items similar to media item 611-1. In response to determining that the media-related request is a request to obtain a set of alternative media items similar to media item 611-1, the third primary set of media items may be obtained at block 536-1, where each media item in the third primary set of media items includes one or more parameter values of media item 611-1. For example, media item 611-1 may be the action movie "Crouching Tiger, Hidden Dragon." In this case, the resulting third primary set of media items may include media items that share one or more parameter values with the movie. In particular, the resulting third primary set of media items may, for example, include movies directed by Ang Lee, movies that include martial arts scenes, or movies starring Chow Yun-Fat, Michelle Yeoh, or Zhang Ziyi.

It should be appreciated that certain aspects of block 536-1 may similarly apply to block 520-1, block 524-1, block 546-1, block 562-1, or block 566-1.

At block 538-1 of the process 500-1, it may be determined whether at least one media item corresponding to the third primary media search query may be acquired. When the third primary media search query is executed at block 536-1, the number of media items that were obtained (or that could be obtained) by the search query may be determined. If the number of media items retrieved is one or more, it may be determined that at least one media item corresponding to the third primary media search query may be retrieved. For example, a third primary media search query for the media-related request "Jack Ryan" may return at least the movies "Patriot Games" and "Clear and Present Danger". Thus, in this example, it may be determined that at least one media item corresponding to the third primary media search query may be obtained. In accordance with a determination that at least one media item corresponding to the third primary media search query is available, block 540-1 may be performed. As will become apparent in the following description, the determination at block 538-1 may be desirable to ensure that the third primary media search query performed at block 536-1 retrieves at least one media item. This may prevent the occurrence of a situation where no media items are displayed for the media search request and may save the user from having to provide another media search request, which improves the user experience.

At block 540-1 of process 500-1, the third primary media item group 620-1 may be displayed on the display unit via the user interface 602-1. In particular, as shown in FIG. 17E, the display of the primary media item group 604-1 on the display unit may be replaced with the display of the third primary media item group 620-1. Block 540-1 may be similar to block 522-1. The third primary set of media items 620-1 may be displayed according to the relevance score associated with each media item. For example, referring to FIG. 17E, the third primary set of media items 620-1 may be arranged in descending order of relevance scores from left to right on the user interface 602-1.

Referring again to block 538-1, in some examples, it may be determined that at least one media item corresponding to the third primary media search query cannot be obtained. For example, the media-related request or the corresponding textual representation from the STT processing may define incorrect parameter values or parameter values that differ from those actually intended by the user. In one such example, as shown in FIG. 17F, the media-related request may be "Jackie Chan and Chris Rucker." In this example, no media items may be obtained by executing the third primary media search query corresponding to the media-related request, and thus it may be determined that at least one media item corresponding to the third primary media search query cannot be obtained. In other examples, the media-related request may define incompatible parameter values, such as "Jackie Chan" and "Spiderman" or "graphic violence" and "suitable for children." In accordance with a determination that at least one media item corresponding to the third primary media search query cannot be obtained, blocks 542-1 through 548-1 may be performed to present alternative results that may satisfy the user's actual intent.

At block 542-1 of the process 500-1, the least relevant parameter value of the third primary media search query may be identified. In particular, a prominence score for each parameter value in the third primary media search query may be determined based on factors such as the popularity of media items having the parameter value, the frequency of occurrence of the parameter value in previous media search requests, or the frequency of occurrence of the parameter value in the population of media items. The least relevant parameter value may be identified as the parameter value with the lowest prominence score. For example, between the parameter values "Jackie Chan" and "Chris Rucker," the parameter value "Chris Rucker" may have a lower prominence score because Chris Rucker is a soccer player whereas Jackie Chan is a popular actor, and Jackie Chan is thus associated with more media items and previous media search queries than Chris Rucker. Accordingly, in this example, the parameter value "Chris Rucker" may be determined to be the least relevant parameter value.

At block 544-1 of process 500-1, one or more alternative parameter values may be determined. The one or more alternative parameter values may be determined based on the identified least relevant parameter value. For example, fuzzy string matching may be performed between the identified least relevant parameter value and a plurality of media-related parameter values in a data structure. In particular, a parameter value in the data structure having the shortest edit distance, within a predetermined threshold, may be determined to be an alternative parameter value. For example, based on fuzzy string matching, it may be determined that the parameter value "Chris Tucker" has the shortest edit distance to "Chris Rucker" among the plurality of media-related parameter values in the data structure. Thus, in this example, "Chris Tucker" may be determined to be an alternative parameter value.
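
A minimal sketch of this fuzzy matching step, using a standard Levenshtein edit distance over an invented list of known parameter values:

```python
# Sketch of determining an alternative parameter value by fuzzy string
# matching: the known parameter value with the smallest edit distance
# (within a threshold) to the least relevant value is chosen.

KNOWN_PARAMETER_VALUES = ["Chris Tucker", "Chris Rock", "Jackie Chan", "Chris Pratt"]
MAX_EDIT_DISTANCE = 3

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def alternative_parameter_value(least_relevant):
    best = min(KNOWN_PARAMETER_VALUES, key=lambda v: edit_distance(least_relevant, v))
    return best if edit_distance(least_relevant, best) <= MAX_EDIT_DISTANCE else None

print(alternative_parameter_value("Chris Rucker"))  # Chris Tucker
```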

Additionally or alternatively, the one or more alternative parameter values may be determined based on other parameter values in the third primary media search query (e.g., parameter values other than the least relevant parameter value). In particular, parameter values that are closely related to the other parameter values in the third primary media search query may be determined. For example, based on the existence of multiple media items starring "Jackie Chan" that have the parameter values "action movie" and "martial arts," it may be determined that parameter values such as "action movie" and "martial arts" are closely related to the parameter value "Jackie Chan".

At block 546-1 of process 500-1, a fourth primary set of media items may be obtained to satisfy the primary user intent. Block 546-1 may be similar to block 520-1. In particular, one or more alternative primary media search queries may be generated. The one or more alternative primary media search queries may be generated using the one or more alternative parameter values determined at block 544-1. For example, in FIGS. 17F-17G, where the media-related request is "Jackie Chan and Chris Rucker" and the alternative parameter value is determined to be "Chris Tucker," the alternative primary media search query may be a search for media items having the parameter values "Jackie Chan" and "Chris Tucker." Thus, in this example, the least relevant parameter value is replaced by an alternative parameter value that is more likely to reflect the user's actual intent. The one or more alternative primary media search queries may then be executed to obtain the fourth primary set of media items 628-1. In the present example of searching for media items having the parameter values "Jackie Chan" and "Chris Tucker," the fourth primary group of media items 628-1 may comprise movies such as "Rush Hour," "Rush Hour 2," or "Rush Hour 3".

At block 548-1 of the process 500-1, a fourth primary set of media items 628-1 may be displayed on the display unit via the user interface 602-1. Block 548-1 may be similar to block 522-1. In particular, as shown in FIG. 17G, the display of the primary media item group 604-1 on the display unit may be replaced with the display of a fourth primary media item group 628-1.

At block 550-1 of the process 500-1, it may be determined whether one or more previous user intents exist. The one or more previous user intents may correspond to one or more previous media-related requests received prior to the media-related request of block 506-1. Examples of prior media-related requests may include previously received media-related requests corresponding to the primary media search query and the primary media item group 604-1 of block 502-1. The determination may be made based on analyzing a history of previous user intents stored on a media device (e.g., media device 104-1) or a server (e.g., DA server 106-1). In some examples, only prior user intents within the relevant time range are considered when determining whether one or more prior user intents exist. The relevant time range may refer to a predetermined time range prior to receiving the media-related request of block 506-1. In other examples, the relevant time range may be based on an interactive session with the digital assistant. In particular, the media-related request of block 506-1 may be part of a sequence of media-related requests for an interactive session with a digital assistant. In these examples, the relevant time range may be from the time of the interactive session initiation to the time of the interactive session termination. It may be determined whether the interactive session contains one or more previous media-related requests received prior to the media-related request of block 506-1. If the interactive session contains one or more previous media-related requests, it may be determined that one or more previous user intents exist. Thus, the one or more previous user intents and the primary user intent may be associated with the same interactive session with the digital assistant. Conversely, if the interactive session does not contain one or more previous media-related requests, it may be determined that one or more previous user intents do not exist. In response to determining that there are one or more previous user intents, block 552-1 may be performed. Alternatively, block 560-1 may be performed in response to determining that one or more previous user intents do not exist.

At block 552-1 of process 500-1, one or more secondary user intents may be determined. One or more secondary user intents may be determined based on the primary user intent of block 510-1 and the one or more previous user intents determined to be present at block 550-1. In particular, the one or more secondary user intents may include a combination of the primary user intent and one or more previous user intents. In some examples, one or more previous user intents may be determined based on a history of media-related requests by the user on the media device.

Returning to the example of FIGS. 17D-17E, the primary user intent may be an intent to search for media items having the character "Jack Ryan". In one example, a first previous user intent may be an intent to search for action movies of the last 10 years. Further, a second previous user intent may be an intent to search for media items starring Ben Affleck. Thus, a secondary user intent may be a combination of two or more of these user intents. In particular, one secondary user intent may be a combination of the primary user intent and the first previous user intent (e.g., a user intent to search for action movies of the last 10 years having the character Jack Ryan). Another secondary user intent may be a combination of the first previous user intent and the second previous user intent (e.g., a user intent to search for action movies of the last 10 years starring Ben Affleck). Block 552-1 may be performed using a natural language processing module of the media device (e.g., natural language processing module 432-1). As shown in FIG. 16D, block 552-1 may include blocks 554-1 through 560-1.

At block 554-1 of the process 500-1, an incorrect user intent of the one or more previous user intents may be identified. In particular, one or more previous user intents may be analyzed to determine whether any incorrect user intent is included. The previous user intent may be determined to be incorrect if the previous user intent is explicitly or implicitly indicated as incorrect by a subsequent previous user intent. For example, the one or more previous user intents may include user intents corresponding to the following sequence of previous media related requests:

[A] "show me some movies of James Bond. "

[B] "only those movies that Daniel Smith is required to play. "

[C] "not, my means Daniel Craig. "

In this example, based on the explicit phrase "No, I mean … …," it may be determined that the previous user intent associated with request [ C ] is an intent to correct the previous user intent associated with request [ B ]. Thus, in this example, it may be determined that the previous user intent associated with request [ B ], received prior to request [ C ], is incorrect. It should be appreciated that in other examples, request [ C ] may implicitly indicate that request [ B ] is incorrect. For example, request [ C ] may simply be "Daniel Craig." Based on the similarity of the string "Daniel Craig" to "Daniel Smith" and the greater relevance associated with the parameter value "Daniel Craig" as opposed to "Daniel Smith," it may be determined that the previous user intent associated with request [ C ] is an intent to correct the previous user intent associated with request [ B ].

In other examples, a previous user intent may be determined to be incorrect based on a user selection of a media item that is inconsistent with the previous user intent. For example, the previous request may be: "Show me videos made by Russell Simmons." In response to this previous request, a primary set of media items comprising videos made by Russell Simmons may have been displayed for selection by the user. In addition, additional groups of media items related to the previous request may have been displayed with the primary group of media items. In this example, it may be determined that the user selected a media item in the additional groups of media items made by "Richard Simmons" rather than "Russell Simmons." Based on the user selection of a media item that is inconsistent with the previous user intent of searching for videos made by Russell Simmons, it may be determined that the previous user intent is incorrect. In other words, it may be determined that the correct user intent should be to search for videos made by "Richard Simmons" rather than "Russell Simmons".

In accordance with a determination that the one or more previous user intents include an incorrect previous user intent, the incorrect previous user intent may not be used to determine the one or more secondary user intents. In particular, the incorrect previous user intent may be excluded and thus may not be used to generate the combinations of user intents used to determine the one or more secondary user intents at block 556-1. However, in some examples, the corrected user intent may be used to generate combinations of user intents and determine the one or more secondary user intents. For example, in the examples described above, the corrected previous user intent associated with "Daniel Craig" (e.g., searching for James Bond movies starring Daniel Craig) and the corrected previous user intent associated with "Richard Simmons" (e.g., searching for videos made by Richard Simmons) may each be used to determine the one or more secondary user intents.

At block 556-1 of process 500-1, a plurality of user intent combinations may be generated based on the primary user intent and the one or more previous user intents. In an illustrative example, the media device may have received a sequence of media-related requests in which the primary user intent is associated with request [ G ] and the one or more previous user intents are associated with requests [ D ] through [ F ].

[D] "main act of movie Keanu Reeves. "

[E] "a program containing a violent picture. "

[F] "Movies suitable for children."

[G] "cartoon. "

In this example, the plurality of user intent combinations may include any combination of the primary user intent and the one or more previous user intents associated with requests [ D ] through [ G ]. One exemplary user intent combination may be to search for movies starring Keanu Reeves that contain graphic violence (e.g., based on a combination of requests [ D ] and [ E ]). Another exemplary user intent combination may be to search for cartoon movies suitable for children (e.g., based on a combination of requests [ F ] and [ G ]).

At block 558-1 of process 500-1, incompatible user intent combinations may be excluded. In particular, incompatible user intent combinations may be identified, and the one or more secondary user intents may not be determined based on the identified incompatible user intent combinations. In some examples, an incompatible user intent combination may be a user intent combination that does not correspond to any media item. In particular, for each user intent combination, a respective media search may be performed. If a particular media search does not obtain any media items, the corresponding user intent combination may be determined to be an incompatible user intent combination. For example, a user intent combination may be based on requests [ E ] and [ F ] described above. In this example, a corresponding media search may be performed for movies suitable for children that contain graphic violence. However, such a media search may not return any media items. Thus, in this example, the user intent combination based on requests [ E ] and [ F ] may be determined to be an incompatible user intent combination. It should be appreciated that in other examples, different predetermined thresholds may be established for determining incompatible user intent combinations. For example, a user intent combination that does not correspond to more than a predetermined number of media items may be determined to be incompatible.

In other examples, incompatible user intent combinations may be determined based on the parameter values associated with the user intent combinations. In particular, certain parameter values may be predetermined to be incompatible. For example, the parameter value "graphic violence" may be predetermined to be incompatible with the parameter value "suitable for children." Thus, a user intent combination containing two or more parameter values that are predetermined to be incompatible may be determined to be an incompatible user intent combination. Furthermore, certain parameters may be predetermined to require a single value. For example, the parameters "media title," "media type," and "Motion Picture Association of America film rating" may each be associated with no more than one parameter value in a user intent combination. In particular, a combination of a first user intent to search for a movie and a second user intent to search for a song would be an incompatible combination. Thus, if a user intent combination contains more than one parameter value for a parameter that is predetermined to require a single value, the user intent combination may be determined to be incompatible. Incompatible user intent combinations may be excluded such that these combinations are not used to determine the one or more secondary user intents at block 552-1. In particular, the one or more secondary user intents do not include any incompatible user intent combinations. It may be desirable to remove incompatible user intent combinations from consideration in order to increase the relevance of the media items displayed for user selection.

One or more secondary user intents may be determined based on a combination of remaining user intents that are not determined to be incompatible. In particular, the user intents in each remaining user intent combination may be merged to generate one or more secondary user intents. Further, each user intent of the remaining user intent combinations may be associated with at least one media item (or at least a predetermined number of media items). In some examples, the one or more secondary intents may include one or more remaining user intent combinations.
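
The combination and exclusion steps of blocks 556-1 and 558-1 could be sketched as follows; the incompatibility rules and single-value parameters are assumptions made for the example:

```python
# Hypothetical sketch of forming user-intent combinations and excluding the
# incompatible ones. Each intent is a set of (parameter, value) pairs.

from itertools import combinations

intents = {
    "D": {("actor", "Keanu Reeves")},
    "E": {("content", "graphic violence")},
    "F": {("audience", "children")},
    "G": {("media_type", "cartoon")},
}

INCOMPATIBLE_VALUES = {frozenset({"graphic violence", "children"})}
SINGLE_VALUE_PARAMS = {"media_type", "media_title", "mpaa_rating"}

def is_compatible(combined):
    values = {v for _, v in combined}
    if any(pair <= values for pair in INCOMPATIBLE_VALUES):
        return False
    # A single-value parameter may not carry more than one value in a combination.
    for param in SINGLE_VALUE_PARAMS:
        if sum(1 for p, _ in combined if p == param) > 1:
            return False
    return True

secondary_intents = []
for a, b in combinations(intents, 2):
    combined = intents[a] | intents[b]
    if is_compatible(combined):
        secondary_intents.append((a + b, combined))

for name, combined in secondary_intents:
    print(name, sorted(combined))
# DE, DF, DG, EG, FG survive; EF (graphic violence + children) is excluded.
```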

Returning to the example above with requests [ D ] through [ G ], a secondary user intent of the one or more secondary user intents may comprise a combination of the primary user intent (e.g., the primary user intent associated with request [ G ]) and a previous user intent of the one or more previous user intents (e.g., the previous user intent associated with request [ F ]). For example, this secondary user intent may be a media search for cartoon movies suitable for children. Additionally, a secondary user intent of the one or more secondary user intents may include a combination of two or more previous user intents of the one or more previous user intents (e.g., the previous user intents associated with requests [ D ] and [ E ]). For example, this secondary user intent may be a media search for movies starring Keanu Reeves that contain graphic violence.

At block 560-1 of process 500-1, one or more secondary user intents may be generated based on other related parameter values. The one or more secondary user intents determined at block 560-1 may be in addition to or in lieu of the one or more secondary intents determined at block 552-1. Other relevant parameter values may be based on information other than the user's media search history on the media device. In particular, the information used to determine other relevant parameter values may reflect the user's media interests and habits, whereby the user's actual intent may be reasonably predicted.

In some examples, other relevant parameter values may be based on a user's media selection history on the media device. In particular, other relevant parameter values may include parameter values associated with media items previously selected by the user for consumption (e.g., selected prior to receiving the media-related request at block 506-1). In some examples, other relevant parameter values may be based on a media viewing list of the user on the media device. The media viewing list may be a user-defined list of media items that are of interest to the user or that wish to be consumed in the near future. Accordingly, parameter values associated with a user selection history or user media viewing lists may reflect a user's media interests or habits. In some examples, other relevant parameters may be based on a user's media search history on a device external to the media device. In particular, a history of media related searches performed on an external media device (e.g., user device 122-1) may be obtained from the external media device. These media related searches may be web page searches, iTunes store searches, local media file searches on the device, and the like. Thus, other relevant parameter values may include parameter values derived from the media related search history of the external media device.

In some examples, other relevant parameter values may be based on the media item in focus by the user interface. For example, referring to FIG. 17A, upon receiving a media-related request at block 506-1, a cursor 609-1 may be positioned over the media item 611-1. Accordingly, it may be determined that upon receiving the media-related request at block 506-1, the focus of the user interface 602-1 is located on the media item 611-1. In this example, other related parameter values may be contextually related to the media item 611-1. In particular, other related parameter values may include one or more parameter values for the media item 611-1. In some examples, upon receiving a media-related request at block 506-1, other related parameter values may be based on text associated with the media item displayed on the display unit. For example, in FIG. 17A, upon receiving a media-related request at block 506-1, a plurality of text associated with the primary media item group 604-1 and the secondary media item group 606-1 may be displayed on the display unit. The plurality of texts may describe parameter values of the associated media item. Thus, other relevant parameter values may include one or more parameter values described by the plurality of texts.

It should be appreciated that other information internal or external to the media device may be used to determine other relevant parameter values. For example, in some examples, other relevant parameter values may be determined in a similar manner to the additional parameter values identified at block 528-1.

A ranking score may be determined for each of the one or more secondary user intents of blocks 552-1 and 560-1. The ranking score may represent a likelihood that the secondary user intent corresponds to the user's actual user intent. In some examples, a higher ranking score may represent a higher likelihood that the respective secondary user intent corresponds to the actual user intent. As described below, the ranking score may be determined based on similar information used to derive one or more secondary user intents.

In some examples, the ranking score for each of the one or more secondary user intents may be determined based on a media-related request history (e.g., media search history) of the user or users. In particular, the ranking score may be determined based on the time and order in which each of the media-related requests and one or more previous media-related requests were received. Secondary user intents based on more recently received media search requests may be more likely to have a higher ranking score than secondary user intents based on earlier received media related requests. For example, in the example of requests [ D ] through [ G ] above, request [ G ] may be the most recently received media-related request, and request [ D ] may be the earliest received media-related request. In this example, the secondary user intent based on request [ G ] may be more likely to have a higher ranking score than the secondary user intent based on request [ D ].

Further, the ranking score may be based on the frequency of occurrence of parameter values in the media-related request history of the user or of multiple users. For example, if the parameter value "Keanu Reeves" appears more frequently than the parameter value "graphic violence" in the user's media-related request history or in the media-related request histories of multiple users, a secondary user intent containing the parameter value "Keanu Reeves" may be more likely to have a higher ranking score than a secondary user intent containing the parameter value "graphic violence".

In some examples, a ranking score for each of the one or more secondary user intents may be determined based on a selection history of the user or users. The user selection history may include a list of media items previously selected for consumption by the user or users. A secondary user intent that includes parameter values for one or more previously selected media items may be more likely to have a higher ranking score than a secondary user intent that does not include parameter values for any previously selected media items. Additionally, secondary user intents that include parameter values for more recently selected media items may be more likely to have a higher ranking score than secondary user intents that include parameter values for earlier selected media items. Further, secondary user intents with parameter values that appear more frequently in the previously selected media items may be more likely to have a higher ranking score than secondary user intents with parameter values that appear less frequently in the previously selected media items.

In some examples, a ranking score for each of the one or more secondary user intentions may be determined based on a media watch list of the user or users. For example, a secondary user intent that includes parameter values for one or more media items on a media watch list may be more likely to have a higher ranking score than a secondary user intent that does not include parameter values for any media items on the media watch list.
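
A minimal sketch of a ranking score that combines the signals described above; the weights and normalization are illustrative assumptions rather than the actual scoring function:

```python
# Hypothetical ranking score for a secondary user intent, combining request
# recency, parameter-value frequency in the search history, and overlap with
# the selection history and watch list. Weights are invented for the sketch.

def ranking_score(intent_params,          # set of parameter values in the secondary intent
                  request_recency,        # 0..1, 1 = based on the newest request
                  search_history_counts,  # parameter value -> occurrences in search history
                  selected_values,        # values of previously selected media items
                  watch_list_values):     # values of media items on the watch list
    frequency = sum(search_history_counts.get(v, 0) for v in intent_params)
    selection_overlap = len(intent_params & selected_values)
    watch_overlap = len(intent_params & watch_list_values)
    return (0.4 * request_recency
            + 0.3 * min(frequency / 10.0, 1.0)
            + 0.2 * min(selection_overlap, 1)
            + 0.1 * min(watch_overlap, 1))

score = ranking_score(
    intent_params={"Jack Ryan", "action"},
    request_recency=1.0,
    search_history_counts={"Jack Ryan": 3, "action": 6},
    selected_values={"action"},
    watch_list_values=set(),
)
print(round(score, 2))  # 0.87
```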

At block 562-1 of process 500-1, one or more secondary media item groups may be obtained. Block 562-1 may be similar to block 520-1. In particular, one or more secondary media search queries corresponding to the one or more secondary user intents of block 552-1 and/or block 560-1 may be generated. The one or more secondary media search queries may be executed to obtain the one or more secondary media item groups. For example, referring again to FIG. 17E, a first secondary media search query may be generated and executed to obtain the secondary media item group 624-1 of action movies of the last 10 years having the character Jack Ryan. In addition, a second secondary media search query may be generated and executed to obtain the secondary media item group 626-1 of action movies of the last 10 years starring Ben Affleck.

At block 564-1 of process 500-1, the one or more secondary media item groups may be displayed on the display unit. Block 564-1 may be similar to block 534-1. As shown in FIG. 17E, the third primary media item group 620-1 may be displayed in the top row of the user interface 602-1. The secondary media item groups 624-1 and 626-1 may be displayed in subsequent rows below the top row in the user interface 602-1. Each of the subsequent rows may correspond to a secondary user intent of the one or more secondary user intents of block 552-1 and/or block 560-1.

One or more secondary media item groups may be displayed according to a ranking score of the corresponding one or more secondary user intents. In particular, the secondary media item group corresponding to the secondary user intent with the higher ranking score may be displayed more prominently (e.g., in a higher row closer to the top row) than the secondary media item group corresponding to the secondary user intent with the lower ranking score.

Referring again to block 510-1, in accordance with a determination that the primary user intent does not include a user intent to perform a new media search query, one or more of blocks 516-1 through 518-1 may be performed. At block 516-1 of process 500-1, it may be determined whether the primary user intent includes a user intent to correct a portion of the primary media search query. The determination may be made based on explicit words or phrases that indicate a user intent to correct a portion of the primary media search query. In particular, it may be determined whether the media-related request includes a predetermined word or phrase indicating a user intent to correct a portion of the primary media search query. For example, referring to FIGS. 17H-17I, the media-related request may be: "No, I mean adventure movies." In this example, based on the explicit phrase "No, I mean … …" appearing at the beginning of the media-related request, it may be determined that the primary user intent includes a user intent to correct a portion of the primary media search query. In particular, the primary user intent may be determined to be a user intent to correct the primary media search query from searching for action movies of the last 10 years to searching for adventure movies of the last 10 years. Other examples of predetermined words or phrases that indicate a user intent to correct a portion of the primary media search query may include "no," "I mean," "wrong," and so forth.

In other examples, the determination at block 516-1 may be made based on a similarity between a parameter value in the media-related request and a parameter value in the primary media search query. For example, in one example, the previously received media-related request associated with the primary media search query may be: "Jackie Chan and Chris Rucker," and the media-related request may be: "Chris Tucker." Based on the determined edit distance between the parameter value "Chris Tucker" and the parameter value "Chris Rucker" being less than a predetermined value, it may be determined that the primary user intent comprises a user intent to correct the parameter value "Chris Rucker" in the primary media search query to "Chris Tucker." Additionally or alternatively, the phoneme sequences representing "Chris Rucker" and "Chris Tucker" may be compared. Based on the phoneme sequence representing "Chris Tucker" being substantially similar to the phoneme sequence representing "Chris Rucker," it may be determined that the primary user intent includes a user intent to correct "Chris Rucker" in the primary media search query to "Chris Tucker".

Further, the prominence of the parameter value "Chris Rucker" and the parameter value "Chris Tucker" with respect to the parameter value "Jackie Chan" may be compared. In particular, a media search may be performed using the parameter value "Jackie Chan" to identify a group of media items related to Jackie Chan. The prominence of "Chris Rucker" and "Chris Tucker" relative to "Jackie Chan" may be based on the number of media items in the group of media items related to Jackie Chan that are associated with each of the two parameter values. For example, it may be determined that "Chris Tucker" is associated with significantly more media items in the group of media items related to Jackie Chan than "Chris Rucker" is. Therefore, it can be determined that the prominence of "Chris Tucker" with respect to "Jackie Chan" is significantly greater than that of "Chris Rucker." Based on this comparative prominence, it may be determined that the primary user intent includes a user intent to correct "Chris Rucker" in the primary media search query to "Chris Tucker".
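
For illustration, the prominence comparison relative to a co-occurring parameter value might be sketched as follows, over a tiny invented media catalog:

```python
# Illustrative comparison of two candidate parameter values by their
# prominence relative to a co-occurring value ("Jackie Chan"): the candidate
# associated with more media items in the related group wins.

JACKIE_CHAN_MEDIA = [
    {"title": "Rush Hour",    "cast": {"Jackie Chan", "Chris Tucker"}},
    {"title": "Rush Hour 2",  "cast": {"Jackie Chan", "Chris Tucker"}},
    {"title": "Rush Hour 3",  "cast": {"Jackie Chan", "Chris Tucker"}},
    {"title": "Police Story", "cast": {"Jackie Chan"}},
]

def prominence(candidate, related_items):
    """Number of related media items associated with the candidate value."""
    return sum(1 for item in related_items if candidate in item["cast"])

candidates = ["Chris Rucker", "Chris Tucker"]
best = max(candidates, key=lambda c: prominence(c, JACKIE_CHAN_MEDIA))
print(best)  # Chris Tucker (the more prominent value relative to Jackie Chan)
```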

In accordance with a determination that the primary user intent includes a user intent to correct a portion of the primary media search query, when one or more secondary user intents associated with the media-related request are determined (e.g., block 552-1), previous user intents associated with the primary media search query may be removed from consideration. For example, when determining one or more secondary user intents, previous user intents associated with previously received media-related requests "Jackie Chan and Chris Rucker" may be removed from consideration. Conversely, user intent associated with corrected media-related requests "Jackie Chan and Chris Tucker" may be considered when determining one or more secondary user intents.

Additionally, in accordance with a determination that the primary user intent includes a user intent to correct a portion of the primary media search query, one or more of blocks 566-1 through 568-1 may be performed. At block 566-1 of process 500-1, a fifth primary set of media items may be obtained. Block 566-1 may be similar to block 520-1. In particular, a fifth primary media search query may be generated that corresponds to the primary user intent. The fifth primary media search query may be based on the media-related request and the primary media search query. In particular, a portion of the primary media search query may be corrected in accordance with the media-related request to generate the fifth primary media search query. Returning to the example where the primary media search query is to search for media items starring "Jackie Chan" and "Chris Rucker" and the media-related request is "Chris Tucker," the primary media search query may be corrected to generate a fifth primary media search query that searches for media items starring "Jackie Chan" and "Chris Tucker." The fifth primary media search query may then be executed to obtain the fifth primary set of media items.
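
A corrected query may be formed by replacing only the disputed parameter value and leaving the remainder of the primary media search query unchanged. The following sketch assumes a hypothetical dictionary representation of a structured media search query; it is illustrative only and is not the query format required by the examples above.

```python
from copy import deepcopy

# Hypothetical structured form of the primary media search query.
primary_query = {"media_type": "movies", "actors": ["Jackie Chan", "Chris Rucker"]}

def correct_parameter(query: dict, old_value: str, new_value: str) -> dict:
    """Return a new query with `old_value` replaced by `new_value`."""
    corrected = deepcopy(query)
    for field, values in corrected.items():
        if isinstance(values, list) and old_value in values:
            values[values.index(old_value)] = new_value
    return corrected

fifth_primary_query = correct_parameter(primary_query, "Chris Rucker", "Chris Tucker")
# fifth_primary_query == {"media_type": "movies",
#                         "actors": ["Jackie Chan", "Chris Tucker"]}
# The corrected query is then executed to obtain the fifth primary set of media items.
```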

At block 568-1 of the process 500-1, the fifth primary set of media items may be displayed on the display unit via a user interface (e.g., user interface 602-1). In particular, the display of the primary media item group (e.g., primary media item group 604-1) may be replaced with the display of the fifth primary media item group. Block 568-1 may be similar to block 522-1. Further, in some examples, blocks 550-1 through 564-1 may be performed to retrieve and display one or more secondary media item groups along with the fifth primary media item group to provide additional options to the user.

Referring again to block 516-1, in accordance with a determination that the primary user intent does not include a user intent to correct a portion of the primary media search query, block 518-1 may be performed. At block 518-1 of process 500-1, it may be determined whether the primary user intent includes a user intent to change the focus of a user interface (e.g., user interface 602-1) displayed on the display unit. The user interface may include a plurality of media items. In some examples, the determination at block 518-1 may be made based on explicit words or phrases in the media-related request that correspond to a user intent to change the user interface focus. In one example, the media-related request may be: "Go to The Dark Knight." In this example, it may be determined that the phrase "go to ..." is a predetermined phrase that corresponds to a user intent to change the user interface focus. Other examples of predetermined words or phrases that correspond to a user intent to change the user interface focus may include "select," "move to," "jump to," "play," "purchase," and the like. Based on the predetermined words or phrases, it may be determined that the primary user intent includes a user intent to change the focus of the user interface.
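
For illustration, the explicit-phrase determination may be sketched as a check of the request against a set of trigger phrases. The phrase list and helper function below are non-exhaustive assumptions rather than the actual vocabulary used by the digital assistant.

```python
# Hypothetical, non-exhaustive phrases signalling a change of user interface focus.
FOCUS_CHANGE_PHRASES = ("go to", "select", "move to", "jump to", "play", "purchase")

def is_focus_change_request(request: str) -> bool:
    """True if the media-related request contains a focus-change trigger phrase."""
    text = request.lower()
    return any(text.startswith(phrase) or f" {phrase} " in text
               for phrase in FOCUS_CHANGE_PHRASES)

print(is_focus_change_request("Go to The Dark Knight"))                  # True
print(is_focus_change_request("Action movies from the last 10 years"))   # False
```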

In other examples, the determination at block 518-1 may be made implicitly based on text corresponding to the media items displayed in the user interface. For example, referring to FIG. 17A, media items 604-1 and 606-1 may be associated with text describing one or more parameter values of media items 604-1 and 606-1. In particular, the text may describe parameter values of the media items 604-1 and 606-1, such as media title, actors, release date, and so forth. As described above, at least a portion of the text may be displayed on the user interface 602-1 in conjunction with the corresponding media item. The determination at block 518-1 may be made based on the text describing one or more parameter values of the media items 604-1 and 606-1. In this example, the media item 613-1 may be the movie "The Dark Knight," and the text may include the media title "The Dark Knight" associated with the media item 613-1. Based on determining that the parameter value "The Dark Knight" defined in the media-related request matches the media title "The Dark Knight" of the text associated with the media item 613-1, it may be determined that the primary user intent includes a user intent to change the focus of the user interface 602-1 from the media item 611-1 to the media item 613-1. It should be appreciated that, in some examples, the displayed text may not include all parameter values of the media items displayed via the user interface 602-1. In these examples, the determination at block 518-1 may also be based on parameter values of the displayed media items that are not described in the displayed text.
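
Resolving which displayed media item the request refers to may be sketched as a comparison between a parameter value in the request and the text associated with each displayed item. The item identifiers, titles, and matching rule below are hypothetical placeholders.

```python
# Hypothetical text associated with the media items currently shown in the interface.
displayed_items = {
    "611-1": {"title": "Batman Begins", "actors": ["Christian Bale"]},
    "613-1": {"title": "The Dark Knight", "actors": ["Christian Bale"]},
}

def find_focus_target(requested_value: str, items: dict) -> str | None:
    """Return the identifier of the displayed item whose text matches the request."""
    wanted = requested_value.casefold()
    for item_id, text in items.items():
        values = [text["title"], *text["actors"]]
        if any(wanted == value.casefold() for value in values):
            return item_id
    return None

print(find_focus_target("The Dark Knight", displayed_items))  # "613-1"
```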

In accordance with a determination that the primary user intent comprises a user intent to change the focus of the user interface, block 570-1 may be performed. At block 570-1 of process 500-1, the focus of the user interface may be changed from the first media item to the second media item. For example, referring to FIG. 17K, the position of cursor 609-1 of user interface 602-1 may change from media item 611-1 to media item 613-1. In some examples, changing the focus of the user interface 602-1 may include selecting a media item. For example, media item 613-1 may be selected at block 570-1. Selection of the media item 613-1 may cause information associated with the media item 613-1 to be displayed (e.g., movie preview information). Additionally or alternatively, selecting the media item 613-1 may cause media content associated with the media item 613-1 to be played on the media device and to be displayed on the display unit.

Although some of the blocks of process 500-1 are described above as being performed by a device or system (e.g., media device 104-1, user device 122-1, or digital assistant system 400-1), it should be appreciated that in some examples, more than one device may be used to perform the blocks. For example, in a block where a determination is made, a first device (e.g., media device 104-1) may obtain the determination from a second device (e.g., server system 108-1). Thus, in some examples, making a determination may refer to obtaining a determination. Similarly, in a block displaying content, objects, text, or a user interface, a first device (e.g., media device 104-1) may cause the content, objects, text, or user interface to be displayed on a second device (e.g., display unit 126-1). Thus, in some examples, displaying may refer to causing a display.

Further, it should be appreciated that, in some examples, items displayed in a user interface (e.g., media items, text, objects, graphics, etc.) may also refer to items that are included in the user interface but are not directly visible to the user. For example, displayed items in the user interface may be made visible to the user by scrolling to an appropriate area of the user interface.

5. Electronic device

According to some examples, fig. 18 illustrates a functional block diagram of an electronic device 700-1 configured in accordance with the principles of the various described examples, for example, to voice-control media playback and to update the knowledge of a virtual assistant in real time. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the various described examples. Those skilled in the art will appreciate that the functional blocks described in fig. 18 may be combined or separated into sub-blocks in order to implement the principles of the various described examples. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.

As shown in fig. 18, the electronic device 700-1 may include: an input unit 703-1 (e.g., a remote controller 124-1 or the like) configured to receive user input such as tactile input, gesture input, and text input; an audio receiving unit 704-1 configured to receive audio data (e.g., a microphone 272-1, etc.); a speaker unit 706-1 (e.g., speaker 268-1, etc.) configured to output audio; and a communication unit 707-1 (e.g., communication subsystem 224-1, etc.) configured to transmit and receive information from an external device via a network. In some examples, electronic device 700-1 may optionally include a display unit 702-1 (e.g., display unit 126-1, etc.) configured to display media, user interfaces, and other content. In some examples, the display unit 702-1 may be located external to the electronic device 700-1. The electronic device 700-1 may also include a processing unit 708-1 coupled to the input unit 703-1, the audio receiving unit 704-1, the speaker unit 706-1, the communication unit 707-1, and the optional display unit 702-1. In some examples, the processing unit 708-1 may include a display enabling unit 710-1, a detection unit 712-1, a determination unit 714-1, an audio reception enabling unit 716-1, an obtaining unit 718-1, an identifying unit 720-1, a receiving unit 722-1, an excluding unit 724-1, and a generating unit 726-1.

According to some embodiments, the processing unit 708-1 is configured to display (e.g., with the display enabling unit 710-1) the primary set of media items on the display unit (e.g., with the display unit 702-1 or a separate display unit). The processing unit 708-1 is further configured to detect a user input (e.g., with the detection unit 712-1). The processing unit 708-1 is further configured to receive audio input at the audio receiving unit 704-1 (e.g., with the audio reception enabling unit 716-1) in response to detecting the user input. The audio input contains a media-related request in the form of natural language speech. The processing unit 708-1 is further configured to determine a primary user intent corresponding to the media-related request (e.g., with the determining unit 714-1). The processing unit 708-1 is further configured to determine whether the primary user intent includes a user intent to narrow the primary media search query corresponding to the primary media item group (e.g., with the determining unit 714-1). The processing unit 708-1 is further configured to, in accordance with the determination that the primary user intent includes a user intent to narrow the primary media search query, generate a second primary media search query corresponding to the primary user intent based on the media-related request and the primary media search query (e.g., with the obtaining unit 718-1), and execute the second primary media search query to obtain a second primary set of media items (e.g., with the identifying unit 720-1). The processing unit 708-1 is further configured to replace the display of the primary media item group on the display unit with the display of the second primary media item group (e.g., with the display enabling unit 710-1).

In some examples, determining whether the primary user intent includes narrowing the user intent of the primary media search query includes determining whether the media-related request includes a word or phrase corresponding to the user intent narrowing the primary media search query.

In some examples, the second primary media search query includes one or more parameter values defined in the media-related request and one or more parameter values of the primary media search query. In some examples, the second primary set of media items is obtained based on the primary set of media items.
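
For illustration, narrowing the primary media search query may be sketched as adding the parameter values defined in the media-related request to those of the primary media search query, and then filtering the primary set of media items rather than searching from scratch. The query shape, field names, and sample items below are hypothetical.

```python
# Hypothetical structured queries.
primary_query = {"genre": ["action"], "decade": ["2010s"]}
request_params = {"actor": ["Jackie Chan"]}

def narrow_query(primary: dict, extra: dict) -> dict:
    """Second primary query: union of the primary query's and the request's parameters."""
    narrowed = {k: list(v) for k, v in primary.items()}
    for param, values in extra.items():
        narrowed.setdefault(param, []).extend(values)
    return narrowed

second_primary_query = narrow_query(primary_query, request_params)

def matches(item: dict, query: dict) -> bool:
    return all(set(values) <= set(item.get(param, [])) for param, values in query.items())

# Because the narrowed query only adds constraints, the second primary set of
# media items can be obtained by filtering the primary set of media items.
primary_set = [
    {"title": "Chinese Zodiac", "genre": ["action"], "decade": ["2010s"], "actor": ["Jackie Chan"]},
    {"title": "Skyfall", "genre": ["action"], "decade": ["2010s"], "actor": ["Daniel Craig"]},
]
second_primary_set = [item for item in primary_set if matches(item, second_primary_query)]
print([item["title"] for item in second_primary_set])  # ["Chinese Zodiac"]
```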

In some examples, the second primary media search query includes a set of parameter values. The processing unit 708-1 is further configured to identify a core parameter value set from the parameter value set (e.g., with the identifying unit 720-1) having fewer parameter values than the parameter value set. Processing unit 708-1 is further configured to generate one or more additional media search queries based on the set of core parameter values (e.g., with obtaining unit 718-1). The processing unit 708-1 is further configured to execute the one or more additional media search queries to obtain one or more additional sets of media items (e.g., with the obtaining unit 718-1). The processing unit 708-1 is further configured to display the one or more additional sets of media items on the display unit (e.g., with the display enabling unit 710-1).

In some examples, processing unit 708-1 is further configured to identify one or more additional parameter values based on media selection histories of the plurality of users (e.g., with identifying unit 720-1). One or more additional media search queries are generated using the one or more additional parameter values.

In some examples, the second primary group of media items is displayed on the display unit at a top row of the user interface and the one or more additional groups of media items are displayed on the display unit at one or more subsequent rows of the user interface.

In some examples, the processing unit 708-1 is further configured to determine whether the primary user intent includes a user intent to perform a new media search query in accordance with a determination that the primary user intent does not include a user intent to narrow the primary media search query (e.g., with the determining unit 714-1). The processing unit 708-1 is further configured to, in accordance with a determination that the primary user intent includes a user intent to execute a new media search query, generate a third primary media search query corresponding to the primary user intent based on the media-related request (e.g., with the obtaining unit 718-1), determine whether at least one media item corresponding to the third primary media search query is available for retrieval (e.g., with the determining unit 714-1). The processing unit 708-1 is further configured to, in accordance with a determination that at least one media item corresponding to the third primary media search query is available for retrieval, execute the third primary media search query to retrieve a third primary set of media items (e.g., with the retrieval unit 718-1), and replace display of the primary set of media items on the display unit with display of the third primary set of media items (e.g., with the display enabling unit 710-1).

In some examples, determining whether the primary user intent includes a user intent to perform the new media search query further includes determining whether the media-related request includes a word or phrase corresponding to the user intent to perform the new media search query. In some examples, determining whether the primary user intent includes a user intent to perform the new media search query further includes determining whether the media-related request includes a word or phrase corresponding to a parameter value of the one or more media items.

In some examples, the processing unit 708-1 is further configured to execute a third primary media search query (e.g., with the obtaining unit 718-1) that includes performing a normalized ranking on a plurality of candidate media items, wherein the plurality of candidate media items includes a plurality of media types.

In some examples, determining the primary user intent includes determining a media type associated with the media-related request, wherein the third primary media search query is executed according to the determined media type.

In some examples, executing the third primary media search query includes identifying candidate media items associated with parameter values included in one or more media critic reviews of the identified candidate media items.

In some examples, performing the third primary media search query includes identifying candidate media items associated with parameter values that are derived from closed captioning information of the identified candidate media items.

In some examples, the processing unit 708-1 is further configured to identify the least relevant parameter values for the third primary media search query in accordance with a determination that there are no media items corresponding to the third primary media search query (e.g., with the identifying unit 720-1). The processing unit 708-1 is further configured to determine one or more alternative parameter values based on the identified least relevant parameter values (e.g., with the determining unit 714-1). The processing unit 708-1 is further configured to execute the one or more alternative primary media search queries using the one or more alternative parameter values to obtain a fourth primary set of media items (e.g., with the obtaining unit 718-1). The processing unit 708-1 is further configured to replace the display of the primary media item group on the display unit with the display of the fourth primary media item group (e.g., with the display enabling unit 710-1).

In some examples, the processing unit 708-1 is further configured to, in accordance with a determination that the primary user intent does not include a user intent to narrow the primary media search query, determine one or more secondary user intents based on the primary user intent and one or more previous user intents corresponding to one or more previous media-related requests received prior to the media-related request (e.g., with the determining unit 714-1). The processing unit 708-1 is further configured to generate one or more secondary media search queries corresponding to the one or more secondary user intents (e.g., with the obtaining unit 718-1). The processing unit 708-1 is further configured to execute the one or more secondary media search queries to obtain one or more secondary media item groups (e.g., with the obtaining unit 718-1). The processing unit 708-1 is further configured to display the one or more secondary media item groups on the display unit (e.g., with the display enabling unit 710-1).

In some examples, the one or more previous media-related requests include previous media-related requests corresponding to the primary group of media items.

In some examples, the processing unit 708-1 is further configured to determine one or more combinations of the primary user intent and the one or more previous user intents (e.g., with the determining unit 714-1), wherein each of the one or more combinations is associated with at least one media item, and wherein the one or more secondary user intents include the one or more combinations.

In some examples, the one or more previous user intents and the primary user intent are associated with the same interactive session with the digital assistant. In some examples, one or more secondary user intents are generated based on a user's media search history on one or more electronic devices. In some examples, the one or more secondary user intents are generated based on a media selection history of the user on the one or more electronic devices.

In some examples, the processing unit 708-1 is further configured to receive (e.g., via the communication unit) the media search history from the second electronic device (e.g., with the receiving unit 722-1). One or more secondary user intents are generated based on the media search history received from the second electronic device.

In some examples, the one or more secondary user intents are generated based on a media watch list of the user on the one or more electronic devices. In some examples, a plurality of texts associated with a plurality of media items displayed on the display unit is displayed on the display unit when the audio input is received, and the one or more secondary user intents are generated based on the displayed plurality of texts.

In some examples, the processing unit 708-1 is further configured to determine a ranking score for each of the one or more secondary user intents (e.g., with the determining unit 714-1), wherein the one or more secondary media item groups are displayed according to the ranking score for each of the one or more secondary user intents.

In some examples, the ranking score for each of the one or more secondary user intents is based on the times at which the media-related request and the one or more previous media-related requests were received. In some examples, the ranking score for each of the one or more secondary user intents is based on a media search history of the user on the one or more electronic devices. In some examples, the ranking score for each of the one or more secondary user intents is based on a media selection history of the user on the one or more electronic devices. In some examples, the ranking score for each of the one or more secondary user intents is based on a media watch list of the user on the one or more electronic devices.
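
For illustration, such a ranking score may be sketched as a weighted combination of the signals mentioned above (recency, media search history, media selection history, and media watch list). The weights and the scoring form below are purely illustrative assumptions; the examples above do not specify particular values.

```python
from dataclasses import dataclass

@dataclass
class SecondaryIntent:
    description: str
    recency: float            # 1.0 for the newest request, decreasing for older ones
    search_history_hits: int  # how often similar searches appear in the history
    selection_history_hits: int
    watch_list_hits: int

# Hypothetical weights; the description above does not specify particular values.
WEIGHTS = {"recency": 0.4, "search": 0.3, "selection": 0.2, "watch": 0.1}

def ranking_score(intent: SecondaryIntent) -> float:
    return (WEIGHTS["recency"] * intent.recency
            + WEIGHTS["search"] * intent.search_history_hits
            + WEIGHTS["selection"] * intent.selection_history_hits
            + WEIGHTS["watch"] * intent.watch_list_hits)

intents = [
    SecondaryIntent("action movies with Jackie Chan", 1.0, 3, 1, 0),
    SecondaryIntent("comedies from the last 10 years", 0.5, 1, 0, 1),
]
# Secondary media item groups are displayed in order of descending ranking score.
for intent in sorted(intents, key=ranking_score, reverse=True):
    print(intent.description, round(ranking_score(intent), 2))
```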

In some examples, the processing unit 708-1 is further configured to determine, in accordance with a determination that the primary user intent does not include a user intent to execute the new media search query, whether the primary user intent includes a user intent to correct a portion of the primary media search query (e.g., with the determining unit 714-1). The processing unit 708-1 is further configured to, in accordance with a determination that the primary user intent includes a user intent to correct a portion of the primary media search query, generate a fifth primary media search query corresponding to the primary user intent based on the media-related request and the primary media search query (e.g., with the obtaining unit 718-1). The processing unit 708-1 is further configured to execute the fifth primary media search query to obtain a fifth primary set of media items (e.g., with the obtaining unit 718-1). The processing unit 708-1 is further configured to replace the display of the primary media item group on the display unit with the display of the fifth primary media item group (e.g., with the display enabling unit 710-1).

In some examples, determining whether the primary user intent includes a user intent to correct a portion of the primary media search query includes determining whether the media-related request includes a word or phrase corresponding to the user intent to correct a portion of the primary media search query. In some examples, determining whether the primary user intent includes a user intent to correct a portion of the primary media search query includes determining whether a sequence of phonemes representing a portion of a media-related request is substantially similar to a sequence of phonemes representing a portion of a previous media-related request corresponding to the primary media search query.

In some examples, generating the fifth primary media search query includes identifying a group of media items associated with the portion of the primary media search query that is not to be corrected, wherein the fifth primary media search query is generated based on one or more parameter values of the group of media items associated with the portion of the primary media search query that is not to be corrected.

In some examples, the processing unit 708-1 is further configured to, in accordance with the determination that the primary user intent includes a user intent to correct a portion of the primary media search query, exclude the primary media search query from consideration when determining a secondary user intent corresponding to the media-related request (e.g., with the excluding unit 724-1).

In some examples, the processing unit 708-1 is further configured to determine (e.g., with the determining unit 714-1) whether the primary user intent includes a user intent to change a focus of a user interface displayed on the display unit in accordance with a determination that the primary user intent does not include a user intent to correct a portion of the primary media search query, wherein the user interface includes a plurality of media items. The processing unit 708-1 is further configured to change the focus of the user interface from a first media item of the plurality of media items to a second media item of the plurality of media items (e.g., with the display enabling unit 710-1) in accordance with the determination that the primary user intent includes a user intent to change the focus of the user interface displayed on the display unit.

In some examples, determining whether the primary user intent includes a user intent to change a focus of a user interface displayed on the display unit includes determining whether the media-related request includes a word or phrase corresponding to the user intent to change the focus of the user interface displayed on the display unit.

In some examples, the user interface includes a plurality of texts corresponding to a plurality of media items in the user interface, and wherein determining whether the primary user intent includes a user intent to change a focus of the user interface displayed on the display unit is based on the plurality of texts.

In some examples, the processing unit 708-1 is further configured to determine a textual representation of the media-related request (e.g., with the determining unit 714-1) and display the textual representation on the display unit (e.g., with the display enabling unit 710-1). In some examples, the textual representation is determined using one or more language models. In some examples, one or more language models favor media-related textual results. In some examples, the one or more language models are configured to identify media-related text in multiple languages.

In some examples, a plurality of media items and text associated with the plurality of media items are displayed on a display unit. The processing unit 708-1 is further configured to generate a second language model using text associated with the plurality of media items (e.g., with the generating unit 726-1), wherein the text representation is determined using the second language model.

In some examples, the processing unit 708-1 is further configured to determine the predicted text using the text representation (e.g., with the determining unit 714-1), and display the predicted text with the text representation on the display unit (e.g., with the display enabling unit 710-1).

In some examples, the predicted text is determined based on text displayed on the display unit when the audio input is received.

In some examples, the processing unit 708-1 is further configured to determine whether an end point of the audio input is detected after the predicted text is displayed (e.g., with the determining unit 714-1), wherein the text representation and the predicted text are used to determine the preliminary user intent in accordance with the determination that the end point of the audio input is detected after the predicted text is displayed.

In some examples, the processing unit 708-1 is further configured to determine a preliminary user intent based on the received portion of the audio input when the audio input is received (e.g., with the determining unit 714-1), identify data needed to satisfy the preliminary user intent (e.g., with the identifying unit 720-1), determine whether the data is stored on the one or more electronic devices when the preliminary user intent is determined (e.g., with the determining unit 714-1), and obtain the data (e.g., with the obtaining unit 718-1) in accordance with the determination that the data is not stored on the one or more electronic devices when the preliminary user intent is determined.

According to some embodiments, the processing unit 708-1 is configured to receive a media search request from a user (e.g., at the input unit 703-1 or the audio receiving unit 704-1 and using the receiving unit 722-1 or the audio reception enabling unit 716-1) in the form of natural language speech. The processing unit 708-1 is further configured to determine a primary user intent corresponding to the media search request (e.g., with the determining unit 714-1), and retrieve the primary set of media items in accordance with the primary user intent. The processing unit 708-1 is further configured to determine whether one or more previous user intents exist (e.g., with the determining unit 714-1), where the one or more previous user intents correspond to one or more previous media search requests received prior to the media search request. The processing unit 708-1 is further configured to, in response to determining that one or more previous user intents exist, determine one or more secondary user intents based on the primary user intent and the one or more previous user intents (e.g., with the determining unit 714-1). The processing unit 708-1 is further configured to retrieve a plurality of secondary media item groups (e.g., with the retrieving unit 718-1), wherein each secondary media item group corresponds to a respective secondary user intent of the one or more secondary user intents. The processing unit 708-1 is further configured to display the primary media item group and the plurality of secondary media item groups (e.g., with the display enabling unit 710-1).

In some examples, determining the primary user intent further includes determining whether the media search request includes an explicit request to narrow a previous media search request received prior to the media search request. In accordance with a determination that the media search request includes an explicit request to narrow the previous media search request, the primary user intent is determined from the media search request and at least one of the one or more previous user intents.

In some examples, the primary user intent is determined from the media search request in response to determining that the media search request does not include an explicit request to narrow a previous media search request.

In some examples, the media search request is part of an interactive session with a digital assistant. Determining whether one or more previous user intents exist further includes determining whether the interactive session includes one or more previous media search requests received prior to the media search request, wherein the one or more previous media search requests correspond to the one or more previous user intents. In accordance with a determination that the interactive session contains one or more previous media search requests received prior to the media search request, one or more previous user intents are determined. In accordance with a determination that the interactive session does not contain one or more previous media search requests received prior to the media search request, it is determined that one or more previous user intents do not exist.

In some examples, the processing unit 708-1 is further configured to display the primary set of media items (e.g., with the display enabling unit 710-1) in response to determining that one or more previous user intents do not exist.

In some examples, a secondary user intent of the one or more secondary user intents includes a combination of a primary user intent and a previous user intent of the one or more previous user intents.

In some examples, the secondary user intent of the one or more secondary user intents includes a combination of a first previous user intent of the one or more previous user intents and a second previous user intent of the one or more previous user intents.

In some examples, determining the one or more secondary user intents further includes generating a plurality of combinations of the primary user intent and the one or more previous user intents.

In some examples, determining the one or more secondary user intents further includes determining whether the plurality of combinations includes a combination that cannot be merged. In accordance with a determination that the plurality of combinations include combinations of user intents that cannot be merged, the one or more secondary user intents do not include combinations that cannot be merged.

In some examples, combinations that cannot be merged include more than one value of a parameter that requires a single value.
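
Generating candidate combinations and discarding those that cannot be merged may be sketched as follows. The sketch treats each user intent as a mapping from parameter names to sets of values and assumes, purely for illustration, that "media_type" is a parameter requiring a single value.

```python
from itertools import combinations

# Hypothetical parameter sets for the primary intent and two previous intents.
primary_intent = {"genre": {"action"}, "media_type": {"movie"}}
previous_intents = [
    {"actor": {"Jackie Chan"}, "media_type": {"movie"}},
    {"media_type": {"TV show"}, "decade": {"2010s"}},
]

SINGLE_VALUED = {"media_type"}  # assumed parameters that admit only one value

def merge(a: dict, b: dict) -> dict | None:
    """Merge two intents; return None if they cannot be merged."""
    merged = {k: set(v) for k, v in a.items()}
    for param, values in b.items():
        merged.setdefault(param, set()).update(values)
        if param in SINGLE_VALUED and len(merged[param]) > 1:
            return None  # more than one value for a single-valued parameter
    return merged

all_intents = [primary_intent, *previous_intents]
secondary_intents = [m for x, y in combinations(all_intents, 2)
                     if (m := merge(x, y)) is not None]
# Combinations involving both "movie" and "TV show" are dropped because they
# conflict on the single-valued media_type parameter.
print(secondary_intents)
```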

In some examples, determining the one or more secondary user intents further includes determining whether the one or more previous user intents include an incorrect user intent. In accordance with a determination that the one or more previous user intents include an incorrect user intent, the one or more secondary user intents are not based on the incorrect user intent.

In some examples, determining whether the one or more previous user intents include an incorrect user intent includes determining whether the one or more previous user intents include a third user intent that corrects a fourth user intent of the one or more previous user intents. In accordance with a determination that the one or more previous user intents include a third user intent that corrects a fourth user intent of the one or more previous user intents, it is determined that the one or more previous user intents include an incorrect user intent, wherein the fourth user intent is determined to be the incorrect user intent.

In some examples, determining whether the one or more previous user intents include an incorrect user intent includes determining whether the one or more previous user intents include a fifth user intent associated with a user selection of a media item, wherein the media item is inconsistent with the fifth user intent. In accordance with a determination that the one or more previous user intents include such a fifth user intent, it is determined that the one or more previous user intents include an incorrect user intent, wherein the fifth user intent is determined to be the incorrect user intent.

In some examples, the processing unit 708-1 is further configured to determine whether the plurality of combinations includes combinations associated with less than a predetermined number of media items (e.g., with the determining unit 714-1). In accordance with a determination that the plurality of combinations includes combinations associated with less than a predetermined number of media items, the one or more secondary user intents do not include combinations associated with less than the predetermined number of media items.

In some examples, the processing unit 708-1 is further configured to determine a ranking score for each of the one or more secondary user intents (e.g., with the determining unit 714-1), wherein the plurality of secondary media item groups are displayed according to the ranking score for each of the one or more secondary user intents.

In some examples, a ranking score for each of the one or more secondary user intents is determined based on an order of receipt of the media search request and the one or more previous media search requests. In some examples, the ranking score for each of the one or more secondary user intentions is determined based on a selection history of the user, the selection history including media items previously selected by the user. In some examples, a ranking score for each of the one or more secondary user intents is determined based on the media search history of the user.

In some examples, the primary media item group is displayed at a top row of the user interface, the plurality of secondary media item groups are displayed in a subsequent row of the user interface, the subsequent row being below the top row, and each of the subsequent rows corresponds to a respective secondary user intent of the one or more secondary user intents.

In some examples, the subsequent rows are ordered according to a ranking score of each of the one or more secondary user intents.

According to some embodiments, the processing unit 708-1 is configured to receive the first media search request (e.g., at the input unit 703-1 or the audio receiving unit 704-1 and with the receiving unit 722-1 or the audio reception enabling unit 716-1). The processing unit 708-1 is further configured to retrieve the first set of media items that satisfy the media search request (e.g., with the retrieving unit 718-1). The processing unit 708-1 is further configured to display the first set of media items on the display unit via the user interface (e.g., with the display enabling unit). While displaying at least a portion of the first set of media items, the processing unit 708-1 is further configured to receive a second media search request (e.g., at the input unit 703-1 or the audio receiving unit 704-1 and with the receiving unit 722-1 or the audio reception enabling unit 716-1) and obtain a determination of whether the second media search request is a request to narrow the first media search request (e.g., with the obtaining unit 718-1). The processing unit 708-1 is further configured to, in response to obtaining the determination that the second media search request is a request to narrow the first media search request, obtain a second set of media items that satisfies the second media search request (e.g., with the obtaining unit 718-1), the second set of media items being a subgroup of the plurality of media items, and replace, via the user interface, display of at least a portion of the first set of media items on the display unit with display of at least a portion of the second set of media items (e.g., with the display enabling unit 710-1).

In some examples, each media item in the second group of media items is associated with one or more parameter values of the first media search request and one or more parameter values of the second media search request.

In some examples, the processing unit 708-1 is further configured to display the media content on the display unit (e.g., with the display enabling unit 710-1) while displaying the first group of media items and while displaying at least a portion of the second group of media items.

In some examples, the user interface occupies at least a majority of the display area of the display unit. The processing unit 708-1 is further configured to retrieve a third set of media items that at least partially satisfies the second media search request (e.g., with the retrieving unit 718-1), wherein the second set of media items and the third set of media items are different. The processing unit 708-1 is further configured to display at least a portion of the third set of media items on the display unit via the user interface (e.g., with the display enabling unit 710-1).

In some examples, each media item in the third set of media items is associated with at least one parameter value defined in the first media search request or the second media search request. In some examples, at least a portion of the second group of media items is displayed at a top row of the user interface, and wherein at least a portion of the third group of media items is displayed at one or more subsequent rows on the user interface.

In some examples, when the second media search request is received, the focus of the user interface is on the media items of the first media item group, and the third media item group is contextually related to the media items of the first media item group.

In some examples, obtaining a determination of whether the second media search request is a request to narrow the first media search request includes obtaining a determination of whether the second media search request contains one of a plurality of refinement terms.

In some examples, the second media search request is in natural language form. In some examples, the second media search request defines parameter values using ambiguous terms.

In some examples, the processing unit 708-1 is further configured to identify the parameter values based on the strength of the connection between the ambiguous term and the parameter value using natural language processing (e.g., with the identifying unit 720-1).
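
For illustration, mapping an ambiguous term onto a concrete parameter value based on connection strength may be sketched as a lookup of precomputed association scores. The terms, parameter values, and scores below are hypothetical placeholders.

```python
# Hypothetical association scores between ambiguous terms and parameter values,
# e.g. derived from co-occurrence in user requests and media metadata.
CONNECTION_STRENGTH = {
    "good": {"average_user_rating > 4": 0.8, "critic_score > 80": 0.6},
    "short": {"duration < 30 min": 0.9, "duration < 90 min": 0.5},
    "recent": {"release_date within 1 year": 0.7, "release_date within 5 years": 0.4},
}

def resolve_ambiguous_term(term: str) -> str | None:
    """Return the parameter value most strongly connected to the term."""
    scores = CONNECTION_STRENGTH.get(term.lower())
    if not scores:
        return None
    return max(scores, key=scores.get)

print(resolve_ambiguous_term("good"))   # "average_user_rating > 4"
print(resolve_ambiguous_term("short"))  # "duration < 30 min"
```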

In some examples, each media item in the first group of media items is associated with a quality rating, and the second media search request defines a parameter value associated with the quality rating. In some examples, each media item in the first group of media items is associated with a duration, and wherein the second media search request defines a parameter value associated with the duration.

In some examples, each media item in the first group of media items is associated with a popularity rating, and the second media search request defines a parameter value associated with the popularity rating.

In some examples, each media item in the first set of media items is associated with a release date, and the second media search request defines a parameter value associated with the release date.

In some examples, the processing unit 708-1 is further configured to, in response to obtaining a determination that the second media search request is not a request to narrow the first media search request, obtain a fourth media item group that satisfies the second media search request (e.g., with the obtaining unit 718-1), the fourth media item group being different from the first media item group, and replace, via the user interface, display of at least a portion of the first media item group on the display unit with display of at least a portion of the fourth media item group (e.g., with the display enabling unit 710-1).

In some examples, each media item in the fourth group of media items is associated with one or more parameters defined in the second media search request.

In some examples, the processing unit 708-1 is further configured to display the media content on the display unit (e.g., with the display enabling unit 710-1) while displaying the first group of media items and while displaying at least a portion of the fourth group of media items.

In some examples, the user interface occupies at least a majority of the display area of the display unit. The processing unit 708-1 is further configured to obtain a fifth set of media items (e.g., with the obtaining unit 718-1), wherein each media item in the fifth set of media items is associated with one or more parameters defined in the first media search request and one or more parameters defined in the second media search request. The processing unit 708-1 is further configured to display the fifth set of media items on the display unit via the user interface (e.g., with the display enabling unit 710-1).

In some examples, when the second media search request is received, the focus of the user interface is located on a second media item of the first media item group, and one or more media items of the fifth set of media items include a parameter value associated with the second media item of the first media item group.

In some examples, when the second media search request is detected, the focus of the user interface is located on a third media item of the first media item group. The processing unit 708-1 is further configured to, in response to obtaining a determination that the second media search request is not a request to narrow the first media search request, obtain a determination of whether the second media search request is a request to obtain a set of alternative media items similar to a third media item in the first set of media items (e.g., with obtaining unit 718-1). The processing unit 708-1 is further configured to, in response to determining that the second media search request is a request to retrieve a set of alternative media items similar to a third media item in the first media item group, retrieve a sixth media item group (e.g., with the retrieval unit 718-1), wherein each media item in the sixth media item group is associated with one or more parameter values of the third media item, and display the sixth media item group on the display unit via the user interface (e.g., with the display enabling unit 710-1).

In some examples, the first group of media items is obtained by performing a string search based on the first media search request, and the second group of media items is obtained by performing a structured search based on one or more parameter values defined in the second media search request.

In some examples, a first media search request is received via a keyboard interface and a second media search request is received in natural language speech. In some examples, a structured search is performed using the first group of media items.

The operations described above with reference to fig. 16A to 16E are optionally implemented by the components shown in fig. 12 to 14 and 15A to 15B. For example, the display operation 502-1 and the other operations of process 500-1 are optionally implemented by the application module 262-1, the I/O processing module 428-1, the STT processing module 430-1, the natural language processing module 432-1, the task flow processing module 436-1, the service processing module 438-1, or one or more of the processors 204-1 and 404-1. Those skilled in the art will clearly know how other processes may be implemented based on the components shown in fig. 12-14 and 15A-15B.

According to some implementations, a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium) is provided that stores one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions for performing any of the methods described herein.

According to some implementations, there is provided an electronic device (e.g., a portable electronic device) comprising means for performing any of the methods described herein.

According to some implementations, an electronic device (e.g., a portable electronic device) is provided that includes a processing unit configured to perform any of the methods described herein.

According to some implementations, there is provided an electronic device (e.g., a portable electronic device) comprising one or more processors and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for performing any of the methods described herein.

Although the above description uses terms such as "first," "second," etc. to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first user input may be named a second user input, and similarly, a second user input may be named a first user input, without departing from the scope of various described embodiments.

The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Depending on the context, the term "if" may be interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting." Similarly, depending on the context, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]."

Furthermore, the foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the technology and its practical applications. Those skilled in the art are thus well able to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.

Although the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the present disclosure and examples as defined by the following claims.

Further, in any of the various examples discussed herein, the various aspects may be personalized for a particular user. User data, including contacts, preferences, locations, favorite media, and the like, can be used to interpret voice commands and facilitate user interaction with the various devices discussed herein. The various processes discussed herein may also be modified in various other ways based on user preferences, contacts, text, usage history, profile data, demographic data, and the like. Further, such preferences and settings may be updated over time based on user interactions (e.g., frequently spoken commands, frequently selected applications, etc.). The collection and use of user data available from various sources can be used to improve the delivery to users of invitational content or any other content that may be of interest to them. The present disclosure contemplates that, in some examples, such gathered data may include personal information data that uniquely identifies or may be used to contact or locate a specific person. Such personal information data may include demographic data, location-based data, telephone numbers, email addresses, home addresses, or any other identifying information.

The present disclosure recognizes that the use of such personal information data in the present technology can be used to benefit the user. For example, the personal information data may be used to deliver targeted content that is of greater interest to the user. Accordingly, the use of such personal information data enables calculated control of the delivered content. In addition, the present disclosure also contemplates other uses for which personal information data is beneficial to the user.

The present disclosure also contemplates that the entities responsible for the collection, analysis, disclosure, transmission, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and adhere to privacy policies and practices that are recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. For example, personal information from a user should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. In addition, such collection should occur only after receiving the informed consent of the user. In addition, such entities should take any needed steps to safeguard and protect access to such personal information data and to ensure that others with access to the personal information data adhere to their privacy policies and procedures. In addition, such entities may subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.

Notwithstanding the foregoing, the present disclosure also contemplates examples in which a user selectively blocks the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware elements and/or software elements may be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology may be configured to allow a user to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for the service. In another example, the user may choose not to provide location information for targeted content delivery services. As another example, the user may choose not to provide precise location information, but to permit the transmission of location area information.

Thus, while this disclosure broadly covers the use of personal information data to implement one or more of the various disclosed examples, this disclosure also contemplates that the various examples may also be implemented without the need to access such personal information data. That is, the various examples of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content may be selected and delivered to a user by inferring preferences based on non-personal information data or an absolute minimum amount of personal information, such as the content being requested by a device associated with the user, other non-personal information available to the content delivery service, or publicly available information.

A system and process for controlling television user interaction using a virtual assistant is disclosed. The virtual assistant can interact with a television set-top box to control content shown on the television. A voice input for a virtual assistant can be received from a device having a microphone. User intent may be determined from the speech input, and the virtual assistant may perform tasks according to the user's intent, including causing media to be played back on the television. The virtual assistant interaction may be shown on the television in an interface that expands or contracts to occupy a minimum amount of space when conveying the desired information. Multiple devices associated with multiple displays may be used to determine user intent from speech input and to convey information to the user. In some examples, virtual assistant query suggestions may be provided to a user based on media content shown on a display.

1. A method for controlling television interactions using a virtual assistant, the method comprising:

at an electronic device:

receiving a voice input from a user;

determining media content based on the speech input;

displaying a first user interface having a first size, wherein the first user interface includes one or more selectable links to the media content;

receiving a selection of one of the one or more selectable links; and

in response to the selection, displaying a second user interface having a second size larger than the first size, wherein the second user interface includes the media content associated with the selection.

2. The method of item 1, wherein the first user interface expands into the second user interface in response to the selection.

3. The method of item 1, wherein the first user interface is overlaid on media content being played.

4. The method of item 1, wherein the second user interface is overlaid on the media content being played.

5. The method of item 1, wherein the speech input comprises a query and the media content comprises results of the query.

6. The method of item 5, wherein the first user interface includes a link to a result of the query in addition to the one or more selectable links to the media content.

7. The method of item 1, further comprising:

in response to the selection, playing the media content associated with the selection.

8. The method of item 1, wherein the media content comprises a sporting event.

9. The method of item 1, wherein the second user interface includes a description of the media content associated with the selection.

10. The method of item 1, wherein the first user interface includes a link to purchase media content.

11. The method of item 1, further comprising:

receiving additional voice input from the user, wherein the additional voice input comprises a query associated with the displayed content;

determining a response to the query associated with the displayed content based on metadata associated with the displayed content; and

in response to receiving the additional voice input, displaying a third user interface, wherein the third user interface includes the determined response to the query associated with the displayed content.

12. The method of item 1, further comprising:

receiving an indication to initiate receipt of a speech input; and

in response to receiving the indication, displaying a readiness confirmation.

13. The method of item 1, further comprising:

in response to receiving the voice input, displaying a listening confirmation.

14. The method of item 1, further comprising:

displaying a transcription of the voice input.

15. The method of item 1, wherein the electronic device comprises a television.

16. The method of item 1, wherein the electronic device comprises a television set-top box.

17. The method of item 1, wherein the electronic device comprises a remote control.

18. The method of item 1, wherein the electronic device comprises a mobile phone.

19. The method of item 1, wherein the one or more selectable links in the first user interface include moving images associated with the media content.

20. The method of item 19, wherein the moving image associated with the media content comprises a live feed of the media content.

21. The method of item 1, further comprising:

determining whether the currently displayed content includes a moving image or a control menu;

in response to determining that the currently displayed content includes moving images, selecting a small size as the first size of the first user interface; and

in response to determining that the currently displayed content includes a control menu, selecting a large size larger than the small size as the first size of the first user interface.

22. The method of item 1, further comprising:

determining alternative media content for display based on one or more of user preferences, program popularity, and status of live sporting events; and

displaying a notification that includes the determined alternative media content.

23. A non-transitory computer-readable storage medium comprising computer-executable instructions to:

receiving a voice input from a user;

determining media content based on the voice input;

displaying a first user interface having a first size, wherein the first user interface includes one or more selectable links to the media content;

receiving a selection of one of the one or more selectable links; and

in response to the selection, displaying a second user interface having a second size larger than the first size, wherein the second user interface includes the media content associated with the selection.

24. The non-transitory computer-readable storage medium of item 23, wherein the first user interface expands into the second user interface in response to the selection.

25. The non-transitory computer readable storage medium of item 23, wherein the first user interface is overlaid on media content being played.

26. The non-transitory computer readable storage medium of item 23, wherein the second user interface is overlaid on the media content being played.

27. The non-transitory computer-readable storage medium of item 23, wherein the voice input comprises a query and the media content comprises results of the query.

28. The non-transitory computer-readable storage medium of item 27, wherein the first user interface includes a link to a result of the query that is external to the one or more selectable links to the media content.

29. A system for controlling television interactions using virtual assistants, the system comprising:

one or more processors;

a memory; and

one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:

receiving a voice input from a user;

determining media content based on the voice input;

displaying a first user interface having a first size, wherein the first user interface includes one or more selectable links to the media content;

receiving a selection of one of the one or more selectable links; and

in response to the selection, displaying a second user interface having a second size larger than the first size, wherein the second user interface includes the media content associated with the selection.

30. The system of item 29, wherein the first user interface expands into the second user interface in response to the selection.

31. The system of item 29, wherein the first user interface is overlaid on media content being played.

32. The system of item 29, wherein the second user interface is overlaid on the media content being played.

33. The system of item 29, wherein the voice input comprises a query and the media content comprises results of the query.

34. The system of item 33, wherein the first user interface includes a link to a result of the query, the link being external to the one or more selectable links to the media content.

An intelligent automated assistant for television user interaction.

This patent application claims priority to U.S. provisional patent application serial No. 62/019,312, entitled "INTELLIGENT AUTOMATED ASSISTANT FOR TV USER INTERACTIONS," filed on June 30, 2014, which is hereby incorporated by reference in its entirety for all purposes.

This patent application is also related to the following co-pending provisional patent application: U.S. patent application serial No. 62/019,292, "Real-time Digital Assistant Knowledge Updates" (attorney docket No. 106843097900 (P22498USP1)), filed on June 30, 2014, which is hereby incorporated by reference in its entirety.

The present invention relates generally to controlling television user interactions, and more particularly to processing speech for a virtual assistant to control television user interactions.

An intelligent automated assistant (or virtual assistant) provides an intuitive interface between a user and an electronic device. These assistants may allow users to interact with a device or system using natural language in spoken and/or text form. For example, a user may access a service of an electronic device by providing spoken user input in natural language to a virtual assistant associated with the electronic device. The virtual assistant can perform natural language processing on the spoken user input to infer the user's intent and act on that intent by performing tasks. The tasks may be carried out by executing one or more functions of the electronic device, and, in some examples, relevant output may be returned to the user in natural language form.

Although mobile phones (e.g., smartphones), tablets, and the like have benefited from virtual assistant control, many other user devices still lack such a convenient control mechanism. For example, user interactions with media control devices (e.g., televisions, television set-top boxes, cable boxes, gaming devices, streaming media devices, digital video recorders, etc.) can be complex and difficult to learn. Furthermore, with the growing number of media sources available through these devices (e.g., over-the-air television, television subscription services, streaming video services, cable video-on-demand services, network-based video services, etc.), finding desired media content to consume can be cumbersome for some users, and even unwieldy in the face of a vast amount of content. As a result, many media control devices may provide a poor user experience that is frustrating to many users.

A system and process for controlling television interactions using virtual assistants is disclosed. In one example, a voice input may be received from a user. The media content may be determined based on the speech input. A first user interface having a first size may be displayed, and the first user interface may include a selectable link to media content. A selection of one of the selectable links may be received. In response to the selection, a second user interface may be displayed, the second user interface having a second size larger than the first size, and the second user interface may include media content associated with the selection.

In another example, a voice input may be received from a user at a first device having a first display. User intent may be determined from the voice input based on content displayed on the first display. Media content may be determined based on the user intent. The media content may be played on a second device associated with a second display.

In another example, a voice input may be received from a user, and the voice input may include a query associated with content shown on a television display. The user intent of the query may be determined based on the viewing history of the content and/or media content shown on the television display. Results of the query may be displayed based on the determined user intent.

In another example, media content may be displayed on a display. Input may be received from a user. The virtual assistant query may be determined based on the media content and/or the viewing history of the media content. The virtual assistant query can be displayed on the display.

In the following description of the examples, reference is made to the accompanying drawings in which are shown, by way of illustration, specific examples that may be implemented. It is to be understood that other examples may be used and structural changes may be made without departing from the scope of the various examples.

The present invention relates to a system and process for controlling television user interaction using a virtual assistant. In one example, a virtual assistant can be used to interact with a media control device, such as a television set-top box that controls content shown on a television display. Voice input for the virtual assistant may be received using a mobile user device or a remote control with a microphone. User intent may be determined from the voice input, and the virtual assistant may perform tasks according to the user intent, including causing media to be played back on a connected television and controlling any other functions of a television set-top box or similar device (e.g., managing video recordings, searching for media content, navigating menus, etc.).

The virtual assistant interaction may be shown on a connected television or other display. In one example, media content may be determined based on speech input received from a user. A first user interface having a first small size may be displayed, the first user interface including a selectable link to the determined media content. Upon receiving a selection of a media link, a second user interface having a second, larger size may be displayed, the second user interface including media content associated with the selection. In other examples, the interface for communicating virtual assistant interactions may expand or contract to occupy a minimum amount of space when communicating the desired information.

In some examples, multiple devices associated with multiple displays may be used to determine user intent from speech input and convey information to the user in different ways. For example, a voice input may be received from a user at a first device having a first display. The user intent may be determined from the speech input based on content displayed on the first display. The media content may be determined based on the user intent and may be played on a second device associated with the second display.

The television display content may also be used as a contextual input for determining user intent from the speech input. For example, a voice input may be received from a user, the voice input including a query associated with content shown on a television display. The user intent of the query may be determined based on the content shown on the television display and the media content viewing history on the television display (e.g., disambiguating the query based on the people in the television program being played). The results of the query may then be displayed based on the determined user intent.

In some examples, virtual assistant query suggestions may be provided to a user (e.g., to familiarize the user with available commands, suggest interesting content, etc.). For example, media content may be displayed on a display and input may be received from a user requesting virtual assistant query suggestions. Virtual assistant query suggestions (e.g., suggesting queries related to a television program being played) may be determined based on the media content shown on the display and the viewing history of the media content shown on the display. The suggested virtual assistant queries may then be displayed on the display.

Using a virtual assistant to control television user interaction according to various examples discussed herein may provide an effective and enjoyable user experience. By using a virtual assistant capable of receiving natural language queries or commands, a user may interact with the media control device simply and intuitively. Available functionality (including meaningful query suggestions based on playing content) can be suggested to the user as needed, which can help the user learn control capabilities. In addition, available media can be easily accessed using intuitive verbal commands. However, it should be understood that many other advantages may also be realized in accordance with the various examples discussed herein.

FIG. 19 illustrates an exemplary system 100-2 for controlling television user interactions using a virtual assistant. It should be appreciated that controlling television user interaction as discussed herein is merely one example of using a display technology to control media and is used for reference only; the concepts discussed herein may be used generally to control media content interactions on any of a variety of devices and associated displays (e.g., a monitor, a laptop display, a desktop computer display, a mobile user device display, a projector display, etc.). Thus, the term "television" may refer to any type of display associated with any of a variety of devices. Further, the terms "virtual assistant," "digital assistant," "intelligent automated assistant," or "automatic digital assistant" may refer to any information processing system that can interpret natural language input in spoken and/or textual form to infer user intent and perform actions based on the inferred user intent. For example, to act on the inferred user intent, the system may perform one or more of the following: identifying a task flow with steps and parameters designed to achieve the inferred user intent; inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, and the like; and generating an output response to the user in audible (e.g., speech) and/or visual form.
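
By way of illustration only, the following Python sketch shows one way the intent-to-task-flow sequence described above could be organized (identify a task flow, fill in its parameters, execute it, and generate an output response). The class and function names, domains, and parameter keys are hypothetical and are not drawn from this disclosure.

```python
# Minimal sketch of acting on an inferred user intent; names are hypothetical,
# not the implementation described in this disclosure.
from dataclasses import dataclass, field


@dataclass
class UserIntent:
    domain: str                       # e.g., "media_playback" or "media_search"
    parameters: dict = field(default_factory=dict)


def act_on_intent(intent: UserIntent) -> str:
    """Identify a task flow for the intent, fill in its parameters,
    execute it, and generate a response for the user."""
    if intent.domain == "media_playback":
        title = intent.parameters.get("title", "the requested media")
        # Executing the task flow could call a player API, a streaming
        # service, or a channel tuner on the media control device.
        return f"Playing {title} on the television."
    if intent.domain == "media_search":
        query = intent.parameters.get("query", "")
        return f"Here are results for '{query}'."
    return "Sorry, I can't help with that yet."


print(act_on_intent(UserIntent("media_playback", {"title": "the football game"})))
```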

The virtual assistant may be capable of accepting user requests at least partially in the form of natural language commands, requests, statements, narratives, and/or queries. Typically, a user request seeks either an informational answer from the virtual assistant or the performance of a task by the virtual assistant (e.g., causing particular media to be displayed). A satisfactory response to a user request may include providing the requested informational answer, performing the requested task, or a combination of the two. For example, a user may ask the virtual assistant a question, such as: "Where am I right now?" Based on the user's current location, the virtual assistant may answer: "You are in Central Park." The user may also request the performance of a task, for example: "Please remind me to call Mom at 4 p.m. today." In response, the virtual assistant can acknowledge the request and then create an appropriate reminder item in the user's electronic calendar. During the performance of a requested task, the virtual assistant can sometimes interact with the user over an extended period of time in a continuous dialog involving multiple exchanges of information. There are many other ways to interact with a virtual assistant to request information or the performance of various tasks. In addition to providing verbal responses and taking programmed actions, the virtual assistant can also provide responses in other visual or audio forms (e.g., as text, alerts, music, videos, animations, etc.). Further, as described herein, an exemplary virtual assistant can control playback of media content (e.g., a video playing on a television) and cause information to be displayed on a display.

An example of a virtual assistant is described in U.S. utility patent application serial No. 12/987,982, entitled "Intelligent Automated Assistant," filed on January 10, 2011, the entire disclosure of which is incorporated herein by reference.

As shown in fig. 19, in some examples, the virtual assistant may be implemented according to a client-server model. The virtual assistant can include a client-side portion executing on the user device 102-2 and a server-side portion executing on the server system 110-2. A client-side portion may also execute on television set-top box 104-2 in conjunction with remote control 106-2. The user device 102-2 may include any electronic device, such as a mobile phone (e.g., a smartphone), a tablet, a portable media player, a desktop computer, a laptop computer, a PDA, a wearable electronic device (e.g., digital glasses, a wristband, a watch, a brooch, an armband, etc.), and so forth. The television set-top box 104-2 may include any media control device, such as a cable box, a satellite box, a video player, a video streaming device, a digital video recorder, a gaming system, a DVD player, a Blu-ray Disc™ player, a combination of such devices, and the like. Television set-top box 104-2 may be connected to display 112-2 and speakers 111-2 via a wired or wireless connection. Display 112-2 (with or without speakers 111-2) may be any type of display, such as a television display, a monitor, a projector, or the like. In some examples, television set-top box 104-2 may be connected to an audio system (e.g., an audio receiver), and speakers 111-2 may be separate from display 112-2. In other examples, display 112-2, speakers 111-2, and television set-top box 104-2 may be incorporated together into a single device, such as a smart television with advanced processing and network connection capabilities. In such examples, the functions of television set-top box 104-2 may be executed as an application on the combined device.
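
As a rough, non-authoritative sketch of the client-server split described above, a thin client on a user device or set-top box might simply forward captured audio to a virtual assistant server and receive a structured reply. The endpoint, transport, and payload format below are assumptions for illustration only.

```python
# Illustrative thin-client sketch: forward captured speech audio to a
# hypothetical virtual assistant server endpoint and parse its JSON reply.
import json
from urllib import request


def send_voice_input(audio_bytes: bytes, server_url: str) -> dict:
    """Forward raw audio to a virtual assistant server and return its reply."""
    req = request.Request(
        server_url,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```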

In some examples, television set-top box 104-2 may function as a media control center for media content of multiple types and from multiple sources. For example, the television set-top box 104-2 may facilitate user access to live television (e.g., over-the-air, satellite, or cable television). Accordingly, television set-top box 104-2 may include a cable tuner, a satellite tuner, or the like. In some examples, television set-top box 104-2 may also record television programs for later time-shifted viewing. In other examples, television set-top box 104-2 may provide access to one or more streaming media services, such as cable-delivered video-on-demand programming, video, and music, and internet-delivered television programming, video, and music (e.g., from various free, paid, and subscription streaming services). In other examples, television set-top box 104-2 may facilitate playback or display of media content from any other source, such as displaying photos from a mobile user device, playing videos from a coupled storage device, playing music from a coupled music player, and so forth. Television set-top box 104-2 may also include various other combinations of the media control features discussed herein, as desired.

User device 102-2 and television set-top box 104-2 may communicate with server system 110-2 over one or more networks 108-2, which may include the internet, an intranet, or any other public or private network, wired or wireless. Additionally, the user device 102-2 may communicate with the television set-top box 104-2 via the network 108-2 or directly via any other wired or wireless communication mechanism (e.g., Bluetooth, Wi-Fi, radio frequency, infrared transmission, etc.). As shown, the remote control 106-2 may communicate with the television set-top box 104-2 using any type of communication means, such as a wired connection or any type of wireless communication (e.g., Bluetooth, Wi-Fi, radio frequency, infrared transmission, etc.), including via the network 108-2. In some examples, a user may interact with television set-top box 104-2 through user device 102-2, remote control 106-2, or an interface element (e.g., a button, microphone, camera, joystick, etc.) integrated within television set-top box 104-2. For example, voice input may be received at user device 102-2 and/or remote control 106-2, including a media-related query or command for the virtual assistant, and may be used to cause media-related tasks to be performed on television set-top box 104-2. Likewise, haptic commands for controlling media on television set-top box 104-2 may be received at user device 102-2 and/or remote control 106-2 (as well as other devices not shown). Accordingly, various functions of television set-top box 104-2 may be controlled in various ways, thereby providing a user with a variety of options for controlling media content from multiple devices.

The client-side portion of the exemplary virtual assistant executing on user device 102-2 and/or television set-top box 104-2 with remote control 106-2 may provide client-side functionality, such as user-oriented input and output processing and communication with server system 110-2. Server system 110-2 may provide server-side functionality for any number of clients residing on respective user devices 102-2 or respective television set-top boxes 104-2.

The server system 110-2 may include one or more virtual assistant servers 114-2, which may include a client-facing I/O interface 122-2, one or more processing modules 118-2, a data and model store 120-2, and an I/O interface 116-2 to external services. Client-facing I/O interface 122-2 may facilitate client-facing input and output processing for virtual assistant server 114-2. The one or more processing modules 118-2 may utilize the data and model store 120-2 to determine a user's intent based on natural language input and may perform task execution based on the inferred user intent. In some examples, the virtual assistant server 114-2 may communicate with external services 124-2 (such as a telephone service, a calendar service, an information service, a messaging service, a navigation service, a television programming service, a streaming media service, etc.) over one or more networks 108-2 for completing tasks or obtaining information. An I/O interface 116-2 to an external service may facilitate such communication.
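
The following is a schematic sketch, with hypothetical names, of how the server-side components described above (client-facing I/O, processing modules backed by a data and model store, and an interface to external services) could fit together; it is not the disclosed implementation.

```python
# Schematic server-side layout; all names and the intent model are placeholders.
class VirtualAssistantServer:
    def __init__(self, models, external_services):
        self.models = models                          # data and model store
        self.external_services = external_services    # e.g., TV programming service

    def handle_request(self, utterance_text: str, context: dict) -> dict:
        """Client-facing entry point: infer intent, then complete the task."""
        intent = self.infer_intent(utterance_text, context)
        if intent["domain"] == "tv_listings":
            listings = self.external_services["tv_programming"].search(
                intent["parameters"])
            return {"type": "media_results", "items": listings}
        return {"type": "text", "message": "Unsupported request."}

    def infer_intent(self, text: str, context: dict) -> dict:
        # In practice this would use the natural language models in
        # self.models together with the supplied context information.
        return {"domain": "tv_listings", "parameters": {"query": text}}
```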

The server system 110-2 may be implemented on one or more stand-alone data processing devices or a distributed network of computers. In some examples, server system 110-2 may employ various virtual devices and/or services of a third-party service provider (e.g., a third-party cloud service provider) to provide the underlying computing resources and/or infrastructure resources of server system 110-2.

While the functionality of the virtual assistant shown in fig. 19 includes both a client-side portion and a server-side portion, in some examples, the functionality of the assistant (or speech recognition and media control in general) may be implemented as a standalone application installed on a user device, television set-top box, smart television, or the like. Further, the division of functionality between the client portion and the server portion of the virtual assistant can be different in different examples. For example, in some examples, the client executing on user device 102-2 or television set-top box 104-2 may be a thin client that provides only user-oriented input and output processing functions and delegates all other functions of the virtual assistant to a backend server.

Fig. 20 illustrates a block diagram of an example user device 102-2, in accordance with various examples. As shown, the user device 102-2 may include a memory interface 202-2, one or more processors 204-2, and a peripheral interface 206-2. The various components in the user equipment 102-2 may be coupled together by one or more communication buses or signal lines. User device 102-2 may also include various sensors, subsystems, and peripherals coupled to peripheral interface 206-2. The sensors, subsystems, and peripherals may collect information and/or facilitate various functions of user device 102-2.

For example, the user device 102-2 may include a motion sensor 210-2, a light sensor 212-2, and a proximity sensor 214-2 coupled to the peripheral interface 206-2 to facilitate orientation, lighting, and proximity sensing functions. One or more other sensors 216-2, such as a positioning system (e.g., a GPS receiver), a temperature sensor, a biometric sensor, a gyroscope, a compass, an accelerometer, and the like, may also be connected to the peripheral interface 206-2 to facilitate related functions.

In some examples, camera subsystem 220-2 and optical sensor 222-2 may be used to facilitate camera functions, such as taking pictures and recording video clips. Communication functions may be facilitated by one or more wired and/or wireless communication subsystems 224-2, which may include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters. An audio subsystem 226-2 may be coupled to a speaker 228-2 and a microphone 230-2 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions.

In some examples, the user device 102-2 may also include an I/O subsystem 240-2 coupled to the peripheral interface 206-2. I/O subsystem 240-2 may include a touchscreen controller 242-2 and/or one or more other input controllers 244-2. The touch screen controller 242-2 may be coupled to a touch screen 246-2. The touch screen 246-2 and touch screen controller 242-2 can detect contact and movement or breaks thereof, for example, using any of a number of touch sensitivity technologies, such as capacitive, resistive, infrared, surface acoustic wave, proximity sensor arrays, and the like. Other input controllers 244-2 may be coupled to other input/control devices 248-2, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointing devices (such as a stylus).

In some examples, the user device 102-2 may also include a memory interface 202-2 coupled to the memory 250-2. Memory 250-2 may include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device; portable computer diskette (magnetic); random Access Memory (RAM) (magnetic); read Only Memory (ROM) (magnetic); erasable programmable read-only memory (EPROM) (magnetic); portable optical disks such as CD, CD-R, CD-RW, DVD-R, or DVD-RW; or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like. In some examples, the non-transitory computer-readable storage medium of memory 250-2 may be used to store instructions (e.g., for performing part or all of the various processes described herein) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, instructions (e.g., for performing some or all of the various processes described herein) may be stored on a non-transitory computer-readable storage medium of server system 110-2, or may be divided between the non-transitory computer-readable storage medium of memory 250-2 and the non-transitory computer-readable storage medium of server system 110-2. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

In some examples, memory 250-2 may store an operating system 252-2, a communication module 254-2, a graphical user interface module 256-2, a sensor processing module 258-2, a telephone module 260-2, and an application program 262-2. Operating system 252-2 may include instructions for handling basic system services and for performing hardware related tasks. Communication module 254-2 may facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. Graphical user interface module 256-2 may facilitate graphical user interface processing. Sensor processing module 258-2 may facilitate sensor-related processing and functions. Phone module 260-2 may facilitate phone-related processes and functions. The application modules 262-2 may facilitate various functions of user applications, such as electronic messaging, web browsing, media processing, navigation, imaging, and/or other processes and functions.

As described herein, the memory 250-2 may also store client-side virtual assistant instructions (e.g., stored in the virtual assistant client module 264-2) as well as various user data 266-2 (e.g., user-specific vocabulary data, preference data, and/or other data such as a user's electronic address book, to-do list, shopping list, television program collection, etc.), for example, to provide client-side functionality of the virtual assistant. User data 266-2 may also be used to perform speech recognition in support of a virtual assistant or for any other application.

In various examples, virtual assistant client module 264-2 may be capable of accepting voice input (e.g., speech input), text input, touch input, and/or gesture input through various user interfaces of user device 102-2 (e.g., I/O subsystem 240-2, audio subsystem 226-2, etc.). Virtual assistant client module 264-2 can also provide output in audio (e.g., speech output), visual, and/or tactile forms. For example, the output may be provided as voice, sound, alerts, text messages, menus, graphics, video, animation, vibration, and/or a combination of two or more of the foregoing. During operation, virtual assistant client module 264-2 can use communication subsystem 224-2 to communicate with a virtual assistant server.

In some examples, virtual assistant client module 264-2 may utilize various sensors, subsystems, and peripheral devices to gather additional information from the surroundings of user device 102-2 to establish a context associated with the user, the current user interaction, and/or the current user input. Such context may also include information from other devices, such as information from television set-top box 104-2. In some examples, virtual assistant client module 264-2 can provide the context information, or a subset thereof, along with the user input to the virtual assistant server to help infer the intent of the user. The virtual assistant can also use the context information to determine how to prepare and deliver the output to the user. The context information may also be used by the user device 102-2 or the server system 110-2 to support accurate speech recognition.

In some examples, contextual information accompanying the user input may include sensor information such as lighting, ambient noise, ambient temperature, images or video of the surrounding environment, distance to another object, and the like. The context information may also include information associated with a physical state of the user device 102-2 (e.g., device orientation, device location, device temperature, power level, velocity, acceleration, motion pattern, cellular signal strength, etc.) or a software state of the user device 102-2 (e.g., running process, installed programs, past and current network activities, background services, error logs, resource usage, etc.). The contextual information may also include information associated with the status of the connected device or other devices associated with the user (e.g., media content displayed by television set-top box 104-2, media content available to television set-top box 104-2, etc.). Any of these types of contextual information may be provided to the virtual assistant server 114-2 (or for the user device 102-2 itself) as contextual information associated with the user input.
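
One possible way to package the contextual information described above into a payload accompanying user input is sketched below; the field names are illustrative assumptions only.

```python
# Illustrative assembly of a context payload sent with the user's input.
def build_context(device_state: dict, set_top_box_state: dict) -> dict:
    return {
        "sensor": {
            "ambient_noise_db": device_state.get("ambient_noise_db"),
            "lighting": device_state.get("lighting"),
        },
        "device": {
            "orientation": device_state.get("orientation"),
            "power_level": device_state.get("power_level"),
            "foreground_app": device_state.get("foreground_app"),
        },
        "connected_devices": {
            "now_playing": set_top_box_state.get("now_playing"),
            "available_media": set_top_box_state.get("available_media"),
        },
    }
```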

In some examples, virtual assistant client module 264-2 may selectively provide information (e.g., user data 266-2) stored on user device 102-2 in response to a request from virtual assistant server 114-2 (or the virtual assistant client module may be used on user device 102-2 itself to perform speech recognition and/or virtual assistant functions). Virtual assistant client module 264-2 can also elicit additional input from the user via a natural language dialog or other user interface upon request by virtual assistant server 114-2. Virtual assistant client module 264-2 can transmit additional input to virtual assistant server 114-2 to help virtual assistant server 114-2 make intent inferences and/or satisfy the user intent expressed in the user request.

In various examples, memory 250-2 may include additional instructions or fewer instructions. Further, various functions of user device 102-2 may be performed in hardware and/or firmware, including in one or more signal processing and/or application specific integrated circuits.

Fig. 21 shows a block diagram of an exemplary television set-top box 104-2 in a system 300-2 for controlling television user interaction. System 300-2 may include a subset of the elements of system 100-2. In some examples, system 300-2 may perform certain functions alone, and may also operate with other elements of system 100-2 to perform other functions. For example, elements of system 300-2 may handle certain media control functions (e.g., playback of locally stored media, recording functions, channel tuning, etc.) without interacting with server system 110-2, and system 300-2 may handle other media control functions (e.g., playback of remotely stored media, download media content, make certain virtual assistant queries, etc.) in conjunction with other elements of server system 110-2 and system 100-2. In other examples, elements of system 300-2 may perform functions of larger system 100-2, including accessing external services 124-2 over a network. It should be appreciated that the functionality may be divided between the local device and the remote server device in a variety of other ways.

As shown in fig. 21, in one example, television set-top box 104-2 may include a memory interface 302-2, one or more processors 304-2, and a peripheral interface 306-2. The various components in television set-top box 104-2 may be coupled together by one or more communication buses or signal lines. The television set-top box 104-2 may also include various subsystems and peripherals coupled to the peripheral interface 306-2. The subsystems and peripherals may gather information and/or facilitate various functions of television set-top box 104-2.

For example, television set-top box 104-2 may include a communication subsystem 324-2. Communication functions may be facilitated by one or more wired and/or wireless communication subsystems 324-2, which may include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters.

In some examples, the television set-top box 104-2 may also include an I/O subsystem 340-2 coupled to the peripheral interface 306-2. The I/O subsystem 340-2 may include an audio/video output controller 370-2. The audio/video output controller 370-2 may be coupled to the display 112-2 and speakers 111-2, or may be capable of otherwise providing audio and video output (e.g., via an audio/video port, wireless transmission, etc.). The I/O subsystem 340-2 may also include a remote controller 342-2. The remote controller 342-2 may be communicatively coupled (e.g., via a wired connection, Bluetooth, Wi-Fi, etc.) to the remote control 106-2. The remote control 106-2 may include a microphone 372-2 for capturing audio input (e.g., voice input from a user), one or more buttons 374-2 for capturing tactile input, and a transceiver 376-2 for facilitating communication with the television set-top box 104-2 via the remote controller 342-2. The remote control 106-2 may also include other input mechanisms such as a keyboard, joystick, touchpad, and the like. The remote control 106-2 may also include output mechanisms such as lights, a display, a speaker, and the like. Inputs received at remote control 106-2 (e.g., user speech, button presses, etc.) may be communicated to television set-top box 104-2 via remote controller 342-2. The I/O subsystem 340-2 may also include one or more other input controllers 344-2. One or more other input controllers 344-2 may be coupled to other input/control devices 348-2, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointer devices (such as a stylus).

In some examples, television set-top box 104-2 may also include a memory interface 302-2 coupled to memory 350-2. Memory 350-2 may include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device; portable computer diskette (magnetic); random Access Memory (RAM) (magnetic); read Only Memory (ROM) (magnetic); erasable programmable read-only memory (EPROM) (magnetic); portable optical disks such as CD, CD-R, CD-RW, DVD-R, or DVD-RW; or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like. In some examples, the non-transitory computer-readable storage medium of memory 350-2 may be used to store instructions (e.g., for performing part or all of the various processes described herein) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, instructions (e.g., for performing some or all of the various processes described herein) may be stored on a non-transitory computer-readable storage medium of server system 110-2, or may be divided between the non-transitory computer-readable storage medium of memory 350-2 and the non-transitory computer-readable storage medium of server system 110-2. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

In some examples, memory 350-2 may store an operating system 352-2, a communications module 354-2, a graphical user interface module 356-2, a device built-in media module 358-2, a device external media module 360-2, and application programs 362-2. The operating system 352-2 may include instructions for handling basic system services and for performing hardware related tasks. The communication module 354-2 may facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. Graphical user interface module 356-2 may facilitate graphical user interface processing. The on-device media module 358-2 may facilitate storage and playback of media content stored locally on the television set-top box 104-2 as well as other media content available locally (e.g., cable channel tuning). The device external media module 360-2 may facilitate streaming playback or download of media content stored remotely (e.g., on a remote server, on the user device 102-2, etc.). The application modules 362-2 may facilitate various functions of user applications such as electronic messaging, web browsing, media processing, gaming, and/or other processes and functions.

As described herein, the memory 350-2 may also store client-side virtual assistant instructions (e.g., stored in the virtual assistant client module 364-2) as well as various user data 366-2 (e.g., user-specific vocabulary data, preference data, and/or other data such as a user's electronic address book, to-do list, shopping list, television program collection, etc.), for example, to provide client-side functionality of the virtual assistant. User data 366-2 may also be used to perform speech recognition to support a virtual assistant or for any other application.

In various examples, virtual assistant client module 364-2 can accept voice input (e.g., speech input), text input, touch input, and/or gesture input through various user interfaces of television set-top box 104-2 (e.g., I/O subsystem 340-2, etc.). Virtual assistant client module 364-2 can also provide output in audio (e.g., speech output), visual, and/or tactile forms. For example, the output may be provided as voice, sound, alarm, text message, menu, graphics, video, animation, vibration, and/or a combination of two or more of the foregoing. During operation, virtual assistant client module 364-2 may use communication subsystem 324-2 to communicate with a virtual assistant server.

In some examples, virtual assistant client module 364-2 may utilize various subsystems and peripherals to gather additional information from the surroundings of television set-top box 104-2 to establish a context associated with the user, current user interaction, and/or current user input. Such context may also include information from other devices, such as information from user device 102-2. In some examples, virtual assistant client module 364-2 may provide the context information, or a subset thereof, along with the user input to the virtual assistant server to help infer the user's intent. The virtual assistant can also use the context information to determine how to prepare and deliver the output to the user. The contextual information may also be used by the television set-top box 104-2 or the server system 110-2 to support accurate speech recognition.

In some examples, contextual information accompanying the user input may include sensor information such as lighting, ambient noise, ambient temperature, distance to another object, and the like. The contextual information may also include information associated with the physical state of the television set-top box 104-2 (e.g., device location, device temperature, power level, etc.) or the software state of the television set-top box 104-2 (e.g., running process, installed applications, past and current network activities, background services, error logs, resource usage, etc.). The context information may also include information associated with the state of the connected device or other devices associated with the user (e.g., content displayed on user device 102-2, playable content on user device 102-2, etc.). Any of these types of contextual information may be provided to virtual assistant server 114-2 (or for television set-top box 104-2 itself) as contextual information associated with the user input.

In some examples, virtual assistant client module 364-2 may selectively provide information (e.g., user data 366-2) stored on television set-top box 104-2 in response to a request from virtual assistant server 114-2 (or the virtual assistant client module may be on television set-top box 104-2 itself for performing speech recognition and/or virtual assistant functions). Virtual assistant client module 364-2 may also elicit additional input from the user via a natural language dialog or other user interface upon request by virtual assistant server 114-2. Virtual assistant client module 364-2 may transmit additional input to virtual assistant server 114-2 to help virtual assistant server 114-2 make intent inferences and/or satisfy the user intent expressed in the user request.

In various examples, memory 350-2 may include additional instructions or fewer instructions. Further, various functions of television set-top box 104-2 may be performed in hardware and/or firmware, including in one or more signal processing and/or application specific integrated circuits.

It should be understood that system 100-2 and system 300-2 are not limited to the components and configurations shown in fig. 19 and 21, and that user device 102-2, television set-top box 104-2, and remote control 106-2 are likewise not limited to the components and configurations shown in fig. 20 and 21. In various configurations according to various examples, system 100-2, system 300-2, user device 102-2, television set-top box 104-2, and remote control 106-2 may all include fewer components, or include other components.

In general, reference is made throughout this disclosure to a "system," which may include one or more elements of system 100-2 and/or system 300-2. For example, a typical system referred to herein may include at least television set-top box 104-2 receiving user input from remote control 106-2 and/or user device 102-2.

Fig. 22A-22E illustrate an exemplary voice input interface 484-2 that may be shown on a display (e.g., display 112-2) to convey voice input information to a user. In one example, the voice input interface 484-2 may be shown over video 480-2, which may include any moving images or paused video. For example, video 480-2 may include live television, a video that is playing, a streaming movie, playback of a recorded program, and so forth. Voice input interface 484-2 may be configured to occupy a minimal amount of space so as not to significantly interfere with the user's viewing of video 480-2.

In one example, the virtual assistant can be triggered to listen for voice input containing a command or query (or to begin recording voice input for subsequent processing, or to begin processing voice input in real time). Listening may be triggered in various ways, including indications such as: the user presses a physical button on remote control 106-2, the user presses a physical button on user device 102-2, the user presses a virtual button on user device 102-2, the user speaks a trigger phrase that can be recognized by an always-listening device (e.g., saying "Hey, Assistant" to start listening for a command), the user performs a gesture that can be detected by a sensor (e.g., motioning in front of a camera), and so on. In another example, the user may press and hold a physical button on the remote control 106-2 or the user device 102-2 to initiate listening. In other examples, the user may press and hold a physical button on remote control 106-2 or user device 102-2 while speaking the query or command, and may release the button when finished. Various other indications may also be received to initiate receipt of speech input from a user.
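
A minimal sketch of the listening triggers described above (a button press or press-and-hold, an always-listening trigger phrase, or a detected gesture) follows; the event structure and names are assumptions, not part of the disclosed design.

```python
# Illustrative trigger detection; event fields are hypothetical.
def should_start_listening(event: dict) -> bool:
    if event.get("type") == "button" and event.get("action") in ("press", "hold"):
        return True
    if event.get("type") == "speech" and event.get("text", "").lower().startswith(
            "hey, assistant"):
        return True
    if event.get("type") == "gesture" and event.get("name") == "wave_at_camera":
        return True
    return False
```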

In response to receiving an indication to listen for voice input, voice input interface 484-2 may be displayed. FIG. 22A shows notification area 482-2 expanding upward from the bottom portion of display 112-2. Upon receiving an indication to listen for voice input, a voice input interface 484-2 may be displayed in notification area 482-2 and, as shown, the interface may slide upward in an animated fashion from the bottom edge of the viewing area of display 112-2. FIG. 22B shows voice input interface 484-2 after sliding up into view. The voice input interface 484-2 may be configured to occupy a minimal amount of space at the bottom of the display 112-2 to avoid significantly interfering with the video 480-2. In response to receiving an indication to listen for voice input, a readiness confirmation 486-2 may be displayed. Readiness confirmation 486-2 may include a microphone symbol as shown, or may include any other image, icon, animation, or symbol to communicate that the system (e.g., one or more elements of system 100-2) is ready to capture voice input from the user.

When the user begins speaking, the listening confirmation 487-2 shown in FIG. 22C can be displayed to confirm that the system is capturing speech input. In some examples, listening confirmation 487-2 may be displayed in response to receiving a voice input (e.g., capturing speech). In other examples, the readiness confirmation 486-2 may be displayed for a predetermined amount of time (e.g., 500 milliseconds, 1 second, 3 seconds, etc.), after which the listening confirmation 487-2 may be displayed. The listening confirmation 487-2 may include a waveform symbol as shown, or may include an active waveform animation that moves (e.g., changes frequency) in response to the user's speech. In other examples, listening confirmation 487-2 may include any other image, icon, animation, or symbol to communicate that the system is capturing voice input from the user.

Upon detecting that the user has finished speaking (e.g., based on a pause, a speech interpretation indicating the end of a query, or any other endpoint detection method), the processing confirmation 488-2 shown in FIG. 22D may be displayed to confirm that the system has finished capturing the voice input and is processing it (e.g., interpreting the voice input, determining user intent, and/or performing an associated task). The processing confirmation 488-2 may include an hourglass symbol as shown, or may include any other image, icon, animation, or symbol to communicate that the system is processing the captured voice input. In another example, the processing confirmation 488-2 may include an animation of a rotating circle or of a colored/glowing point moving around a circle.
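
The confirmation sequence of FIGS. 22B-22D can be thought of as a simple state machine (ready, listening, processing, command received). The sketch below is illustrative only and uses hypothetical state and event names.

```python
# Illustrative state machine for the voice input confirmation sequence.
from enum import Enum, auto


class VoiceUIState(Enum):
    READY = auto()       # microphone symbol (readiness confirmation)
    LISTENING = auto()   # waveform animation (listening confirmation)
    PROCESSING = auto()  # hourglass or spinner (processing confirmation)
    RECEIVED = auto()    # command receipt confirmation / transcription


def next_state(state: VoiceUIState, event: str) -> VoiceUIState:
    transitions = {
        (VoiceUIState.READY, "speech_detected"): VoiceUIState.LISTENING,
        (VoiceUIState.LISTENING, "endpoint_detected"): VoiceUIState.PROCESSING,
        (VoiceUIState.PROCESSING, "interpretation_done"): VoiceUIState.RECEIVED,
    }
    return transitions.get((state, event), state)
```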

After the captured speech input is interpreted as text (or in response to successfully converting the speech input to text), a command receipt confirmation 490-2 and/or a transcription 492-2 shown in FIG. 22E may be displayed to confirm that the system has received and interpreted the speech input. The transcription 492-2 may include a transcription of the received speech input (e.g., "What sporting events are currently in progress?"). In some examples, the transcription 492-2 can slide upward from the bottom of display 112-2 in an animated fashion, can be displayed temporarily (e.g., for a few seconds) in the position shown in FIG. 22E, and can then slide upward to the top of voice input interface 484-2 before disappearing from view (e.g., as though the text scrolls upward and eventually out of view). In other examples, the transcription may not be displayed, and the user's command or query may be processed and the associated task performed without displaying the transcription (e.g., a simple channel change may be performed immediately without displaying a transcription of the user's speech).

In other examples, the voice transcription may be performed in real time as the user speaks. As each word is transcribed, it can be displayed in voice input interface 484-2, for example, next to the listening confirmation 487-2. After the user finishes speaking, a command receipt confirmation 490-2 may be displayed briefly before the task associated with the user's command is performed.

Further, in other examples, command receipt confirmation 490-2 may convey information about the command received and understood. For example, for a simple request to change to another channel, a logo or number associated with the channel may be displayed briefly (e.g., for a few seconds) as command receipt confirmation 490-2 while the channel changes. In another example, for a request to pause a video (e.g., video 480-2), a pause symbol (e.g., two vertical parallel bars) may be displayed as command receipt confirmation 490-2. The pause symbol may remain on the display until, for example, the user performs another action (e.g., issues a play command to resume playback). Symbols, logos, animations, and the like may likewise be displayed for any other command (e.g., symbols for fast reverse, fast forward, stop, play, etc.). Thus, command receipt confirmation 490-2 may be used to convey command-specific information.
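
A brief sketch of mapping an understood command to a command-specific receipt confirmation, as described above; the command types and symbols are placeholders, not the disclosed design.

```python
# Illustrative mapping from an understood command to a confirmation symbol.
def receipt_confirmation(command: dict) -> str:
    if command.get("type") == "channel_change":
        return f"channel logo/number: {command.get('channel')}"
    symbols = {"pause": "||", "play": ">", "fast_forward": ">>", "fast_reverse": "<<"}
    return symbols.get(command.get("type"), "checkmark")
```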

In some examples, voice input interface 484-2 may be hidden after receiving a user query or command. For example, voice input interface 484-2 may be animated to slide downward until it disappears from view at the bottom of display 112-2. Voice input interface 484-2 may be hidden when no further information needs to be displayed to the user. For example, for common or direct commands (e.g., change to channel ten, change to the sports channel, play, pause, fast forward, fast reverse, etc.), voice input interface 484-2 may be hidden immediately after confirming receipt of the command, and the associated task or tasks may be performed immediately. While the various examples herein show and describe interfaces at the bottom or top edge of a display, it should be understood that any of the various interfaces may be located at other positions around the display. For example, voice input interface 484-2 may appear from a side edge of display 112-2, in the center of display 112-2, in a corner of display 112-2, and so on. Similarly, the various other interface examples described herein may be arranged in a variety of different orientations at a variety of different locations on a display. Further, while the various interfaces described herein are shown as opaque, any of the various interfaces may be transparent or may otherwise allow images (blurred or clear) to be viewed through the interface (e.g., overlaying the interface content on the media content without completely obscuring the underlying media content).

In other examples, the results of the query may be displayed within voice input interface 484-2 or in a different interface. FIG. 23 shows an exemplary media content interface 510-2 on video 480-2 with exemplary results of the transcribed query of FIG. 22E. In some examples, the results of the virtual assistant query may include media content instead of or in addition to text content. For example, the results of the virtual assistant query may include television programs, videos, music, and so on. Some results may include media immediately available for playback, while other results may include media available for purchase, and so on.

As shown, media content interface 510-2 may be larger in size than voice input interface 484-2. In one example, the voice input interface 484-2 may have a first, smaller size to accommodate voice input information, while the media content interface 510-2 may have a second, larger size to accommodate query results, which may include text, still images, and moving images. In this way, the interface for communicating virtual assistant information may be scaled in size according to the content to be communicated, thereby limiting the occupied screen real estate (e.g., minimally blocking other content, such as video 480-2).
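
The size-to-content behavior described above might be expressed, purely for illustration, as a function that chooses an interface height from the kind of content to be conveyed; the content kinds and fractions below are arbitrary assumptions.

```python
# Illustrative sizing of the assistant interface to the content it conveys.
def interface_height(content_kind: str, screen_height: int) -> int:
    if content_kind == "voice_input":      # transcription/confirmations only
        return screen_height // 8
    if content_kind == "media_results":    # text plus still or moving images
        return screen_height // 3
    return screen_height // 2              # detailed or expanded views
```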

As shown, media content interface 510-2 may include (as results of the virtual assistant query) selectable video link 512-2, selectable text link 514-2, and additional content link 513-2. In some examples, a link may be selected by navigating a focus, cursor, or the like to the particular element and selecting it using a remote control (e.g., remote control 106-2). In other examples, a voice command to the virtual assistant may be used to select a link (e.g., watch that football game, show details about the basketball game, etc.). The selectable video link 512-2 may include still or moving images and may be selectable to cause playback of the associated video. In one example, selectable video link 512-2 may include a playing video of the associated video content. In another example, selectable video link 512-2 may comprise a live feed of a television channel. For example, selectable video link 512-2 may include a live feed of a football game on a sports channel as a result of a virtual assistant query regarding sporting events currently in progress on television. The selectable video link 512-2 may also include any other video, animation, image, or the like (e.g., a triangular play symbol). In addition, link 512-2 may link to any type of media content, such as movies, television shows, sporting events, music, and so forth.

Selectable text link 514-2 may include textual content associated with selectable video link 512-2 or may include a textual representation of the results of the virtual assistant query. In one example, selectable text link 514-2 may include a description of media derived from a virtual assistant query. For example, selectable text link 514-2 may include the name of a television program, the title of a movie, a description of a sporting event, a television channel name or number, and so forth. In one example, selection of text link 514-2 may cause playback of the associated media content. In another example, selection of the text link 514-2 may provide additional detailed information about the media content or other virtual assistant query results. Additional content links 513-2 may link to and cause to be displayed additional results of the virtual assistant query.

While certain media content examples are shown in FIG. 23, it should be understood that any type of media content may be included as a result of a virtual assistant query for media content. For example, the media content returned as virtual assistant results may include videos, television programs, music, television channels, and so forth. Additionally, in some examples, a selectable category filter may be provided in any of the interfaces described herein to allow a user to filter search or query results or the displayed media options. For example, a selectable filter may be provided to filter results by type (e.g., movie, music album, book, television program, etc.). In other examples, selectable filters may include category descriptors or content descriptors (e.g., comedy, interview, a specific program, etc.). In other examples, a selectable filter may include a time frame (e.g., this week, last year, etc.). It should be appreciated that a filter may be provided in any of the various interfaces described herein to allow a user to filter results based on categories relevant to the displayed content (e.g., filter by type if the media results are of different types, filter by category if the media results are of different categories, filter by time if the media results are from different times, etc.).
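
As a rough illustration of how such a selectable filter might be applied to a set of returned media items, the following sketch uses hypothetical types and field names (MediaItem, ResultFilter, and the fields shown are illustrative assumptions, not elements of the embodiments above):

```swift
// Hypothetical media result model; the fields are illustrative only.
struct MediaItem {
    let title: String
    let type: String      // e.g., "movie", "tv_show", "music_album"
    let category: String  // e.g., "comedy", "sports", "interview"
    let releaseYear: Int
}

// A selectable filter chosen by the user in a results interface.
enum ResultFilter {
    case byType(String)
    case byCategory(String)
    case since(year: Int)
}

// Apply a single filter to the displayed results.
func apply(_ filter: ResultFilter, to items: [MediaItem]) -> [MediaItem] {
    switch filter {
    case .byType(let type):
        return items.filter { $0.type == type }
    case .byCategory(let category):
        return items.filter { $0.category.lowercased() == category.lowercased() }
    case .since(let year):
        return items.filter { $0.releaseYear >= year }
    }
}

// Example: keep only comedies among mixed query results.
let results = [
    MediaItem(title: "Late Night Laughs", type: "tv_show", category: "comedy", releaseYear: 2015),
    MediaItem(title: "Championship Game", type: "tv_show", category: "sports", releaseYear: 2015)
]
print(apply(.byCategory("comedy"), to: results).map { $0.title })  // ["Late Night Laughs"]
```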

In other examples, the media content interface 510-2 may include a reformulation of the user query in addition to the media content results. For example, a reformulation of the user query may be displayed above the media content results (above selectable video link 512-2 and selectable text link 514-2). In the example of FIG. 23, such a reformulation of the user query may include the following: "Here are some sporting events that are currently in progress." Other text introducing the media content results may be displayed as well.

In some examples, after displaying any interface (including interface 510-2), the user may initiate capture of additional speech input using a new query (which may or may not be related to a previous query). The user query may include a command to act on an interface element, such as a command to select video link 512-2. In another example, the user speech may include a query associated with displayed content, such as displayed menu information, a video being played (e.g., video 480-2), and so forth. Responses to such queries may be determined based on the information shown (e.g., the text displayed) and/or metadata associated with the content displayed (e.g., metadata associated with the video being played). For example, a user may query the media results shown in the interface (e.g., interface 510-2) and may search metadata associated with the media to provide answers or results. Such answers or results may then be provided in another interface or within the same interface (e.g., in any of the interfaces discussed herein).

As described above, in one example, additional detailed information about the media content may be displayed in response to selection of text link 514-2. FIGS. 24A and 24B illustrate an exemplary media detail interface 618-2 on video 480-2 after text link 514-2 is selected. To provide additional detailed information, in one example, the media content interface 510-2 may be expanded into media detail interface 618-2, as illustrated by interface expansion transition 616-2 of FIG. 24A. Specifically, as shown in FIG. 24A, the selected content may be expanded in size and additional textual information may be provided by expanding the interface upward on display 112-2 to occupy more of the screen real estate. The interface may be expanded to accommodate the additional detailed information desired by the user. In this way, the size of the interface may be proportional to the amount of content the user desires, thereby minimizing the screen real estate that is occupied while still conveying the desired content.

FIG. 24B shows the details interface 618-2 after it is fully expanded. As shown, the details interface 618-2 may have a larger size than the media content interface 510-2 or the voice input interface 484-2 to accommodate the desired detailed information. The details interface 618-2 may include detailed media information 622-2, which may include various detailed information associated with the media content or another result of the virtual assistant query. Detailed media information 622-2 may include a program title, a program description, a program start time, a channel, an episode summary, a movie description, actor names, character names, sports event participants, producer names, director names, or any other detailed information associated with the virtual assistant query results.

In one example, the details interface 618-2 may include a selectable video link 620-2 (or another link for playing media content). Selectable video link 620-2 may include a larger version of the corresponding selectable video link 512-2. Thus, selectable video link 620-2 may include still or moving images and may be selectable to cause playback of the associated video. Selectable video link 620-2 may include a playing video of the associated video content, a live feed of a television channel (e.g., a live feed of a football game on a sports channel), and so forth. Selectable video link 620-2 may also include any other video, animation, image, or the like (e.g., a triangular play symbol).

As described above, the video may be played in response to selection of a video link (such as video link 620-2 or video link 512-2). Fig. 25A and 25B illustrate exemplary media transition interfaces that may be displayed in response to selection of a video link (or other command to play video content). As shown, video 480-2 may be replaced with video 726-2. In one example, video 726-2 may be expanded to replace or overlay video 480-2, as shown by interface extension transition 724-2 in FIG. 25A. The result of the transition may include the expanded media interface 728-2 of FIG. 25B. As with the other interfaces, the size of the extended media interface 728-2 may be sufficient to provide the user with the desired information; here, expansion to fill the display 112-2 may be included. Thus, the extended media interface 728-2 may be larger than any other interface, as the desired information may include the media content being played across the entire display. Although not shown, in some examples, descriptive information may briefly be overlaid (e.g., along the bottom of the screen) on the video 726-2. Such descriptive information may include the name of the associated program, video, channel, etc. The descriptive information may then be hidden from view (e.g., after a few seconds).

FIGS. 26A and 26B illustrate an exemplary voice input interface 836-2 that may be shown on display 112-2 to convey voice input information to a user. In one example, voice input interface 836-2 may be displayed on menu 830-2. The menu 830-2 can include various media options 832-2, and the voice input interface 836-2 can similarly be displayed on any other type of menu (e.g., content menu, category menu, control menu, settings menu, program menu, etc.). In one example, voice input interface 836-2 may be configured to occupy a relatively large amount of the screen real estate of display 112-2. For example, voice input interface 836-2 may be larger than voice input interface 484-2 discussed above. In one example, the size of the voice input interface to be used (e.g., smaller interface 484-2 or larger interface 836-2) may be determined based on context. When the background content includes moving images, for example, a small-sized voice input interface (e.g., interface 484-2) may be displayed. On the other hand, when the background content includes a still image (e.g., a paused video) or a menu, for example, a large-sized voice input interface (e.g., interface 836-2) may be displayed. In this way, if the user is watching video content, a smaller voice input interface may be displayed that only minimally occupies screen real estate, whereas if the user is navigating a menu or viewing a paused video or other still image, a larger voice input interface may be displayed that can convey more information by occupying additional screen real estate. Other interfaces discussed herein may likewise be sized differently based on context.
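
One way to express the size selection just described, under the simplifying assumption that the relevant context is only whether the background content is moving video, a still image, or a menu (the enum and function names below are illustrative, not taken from the figures):

```swift
// Hypothetical background content categories and interface sizes.
enum BackgroundContent {
    case movingVideo       // e.g., a playing video such as a live program
    case stillImage        // e.g., a paused video frame or a photo
    case menu              // e.g., a content, settings, or control menu
}

enum VoiceInterfaceSize {
    case small             // minimally occupies screen real estate
    case large             // conveys more information
}

// Pick an interface size based on what is currently displayed.
func interfaceSize(for background: BackgroundContent) -> VoiceInterfaceSize {
    switch background {
    case .movingVideo:
        // Avoid interfering with content the user is actively watching.
        return .small
    case .stillImage, .menu:
        // More screen real estate can be used without interrupting playback.
        return .large
    }
}

print(interfaceSize(for: .movingVideo)) // small
print(interfaceSize(for: .menu))        // large
```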

As described above, the virtual assistant can be triggered to listen for voice input containing a command or query (or to begin recording voice input for subsequent processing, or to begin processing voice input in real time). Listening may be triggered in various ways, including by indications such as the following: the user presses a physical button on remote control 106-2, the user presses a physical button on user device 102-2, the user presses a virtual button on user device 102-2, the user speaks a trigger phrase recognizable by an always-listening device (e.g., speaking "hey, assistant" to start listening for a command), the user performs a gesture detectable by a sensor (e.g., a motion in front of a camera), and so on. In another example, the user may press and hold a physical button on remote control 106-2 or user device 102-2 to initiate listening. In other examples, the user may press and hold a physical button on remote control 106-2 or user device 102-2 while speaking the query or command, and may release the button when finished. Various other indications may also be received to initiate receipt of speech input from a user.

In response to receiving an indication to listen for voice input, voice input interface 836-2 may be displayed on menu 830-2. FIG. 26A shows a large notification area 834-2 that extends upward from a bottom portion of display 112-2. Upon receiving an indication to listen for voice input, voice input interface 836-2 may be displayed in the large notification area 834-2, and as shown, the interface may slide upward in an animated fashion from the bottom edge of the viewing area of display 112-2. In some examples, a background menu, paused video, still image, or other background content may shrink in the z-direction and/or move backward (as if moving further into display 112-2) while the overlapping interface is displayed (e.g., in response to receiving an indication to listen for voice input). The background interface shrink transition 831-2 and associated inward-pointing arrows show how background content (e.g., menu 830-2) can be contracted (narrowing the displayed menus, images, text, etc.). This may provide a visual effect in which the background content appears to move away from the user, revealing a new foreground interface (e.g., interface 836-2). FIG. 26B shows a contracted background interface 833-2 that includes a contracted (narrowed) version of menu 830-2. As shown, the contracted background interface 833-2 (which may include a border) may appear farther away from the user as focus is shifted to the foreground interface 836-2. When an overlapping interface is displayed, the background content (including background video content) in any of the other examples discussed herein may similarly contract and/or move backward in the z-direction.

FIG. 26B shows the voice input interface 836-2 after it has slid up into view. As described above, various confirmations may be displayed while voice input is being received. Although not shown here, the voice input interface 836-2 may similarly display a readiness confirmation 486-2, a listening confirmation 487-2, and/or a larger version 488-2 of the listening confirmation, in a manner similar to the voice input interface 484-2 discussed above with reference to FIGS. 22B, 22C, and 22D.

As shown in FIG. 26B, a command receipt confirmation 838-2 (similar to the smaller-sized command receipt confirmation 490-2 discussed above) may be shown to confirm the speech input received and interpreted by the system. A transcription 840-2 may also be shown, and may include a transcription of the received speech input (e.g., "how is the weather in New York?"). In some examples, the transcription 840-2 may slide upward in an animated fashion from the bottom of display 112-2, may be temporarily displayed (e.g., for a few seconds) in the position shown in FIG. 26B, and may then slide upward to the top of the voice input interface 836-2 before disappearing from view (e.g., as if the text scrolls upward and eventually leaves the view). In other examples, the transcription may not be displayed, and the user's command or query may be processed and the associated task performed without displaying the transcription.

In other examples, the voice transcription may be performed in real-time as the user speaks. When a word is transcribed, it can be displayed in the speech input interface 836-2. For example, a word may be displayed next to the larger version of listening confirmation 487-2 described above. After the user has finished speaking, a command receipt confirmation 838-2 may be briefly displayed prior to performing the task associated with the user command.

Further, in other examples, command receipt acknowledgement 838-2 may convey information about the command received and understood. For example, for a simple request to tune to a particular channel, a logo or number associated with the channel may be displayed momentarily as a command receipt confirmation 838-2 (e.g., a few seconds) when the channel is tuned. In another example, for a request to select a displayed menu item (e.g., one of media options 832-2), an image associated with the selected menu item may be displayed as a command receipt confirmation 838-2. Thus, command receipt confirmation 838-2 may be used to communicate command specific information.

In some examples, the voice input interface 836-2 may be hidden after receiving a user query or command. For example, the voice input interface 836-2 may be animated to slide downward until it leaves the view from the bottom of the display 112-2. The voice input interface 836-2 may be hidden without requiring further information to be displayed to the user. For example, for a general or direct command (e.g., change to channel ten, change to a sports channel, play the movie, etc.), the voice input interface 836-2 may be hidden immediately after confirming receipt of the command, and may immediately perform the associated task or tasks.

In other examples, the results of the query may be displayed within voice input interface 836-2 or in a different interface. FIG. 27 shows an exemplary virtual assistant results interface 942-2 on menu 830-2 (specifically, on contracted background interface 833-2) with exemplary results for the transcribed query of FIG. 26B. In some examples, the virtual assistant query results may include a textual answer, such as textual answer 944-2. The results of the virtual assistant query may also include media content that addresses the user query, such as the content associated with selectable video link 946-2 and purchase link 948-2. Specifically, in this example, the user may request weather information for a specified location, New York. The virtual assistant may provide a textual answer 944-2 that directly answers the user query (e.g., indicating that the weather looks good and providing temperature information). Instead of or in addition to the textual answer 944-2, the virtual assistant may provide a selectable video link 946-2 along with a purchase link 948-2 and associated text. The media associated with links 946-2 and 948-2 may also respond to the user query. Here, the media associated with links 946-2 and 948-2 may include a ten-minute clip of weather information for the specified location (specifically, a five-day weather forecast for New York from a television channel called the weather forecast channel).

In one example, the clip that resolves the user query may include a time-stamped portion of previously-played content (which may be obtained from a recording or from a streaming service). In one example, the virtual assistant can identify such content based on user intent associated with the voice input and by searching for detailed information about available media content (e.g., metadata including recorded programs, and detailed timing information, or detailed information about streaming content). In some examples, a user may not have access rights to certain content or may not have a subscription. In this case, the content may be provided for purchase, for example, via purchase link 948-2. Upon selection of the purchase link 948-2 or the video link 946-2, the cost of the content may be automatically deducted from or otherwise accounted for in the user's account.

FIG. 28 illustrates an exemplary process 1000-2 for using a virtual assistant to control television interactions and using a different interface to display associated information. At block 1002-2, a speech input may be received from a user. For example, voice input may be received at user device 102-2 or remote control 106-2 of system 100-2. In some examples, the voice input (or a data representation of some or all of the voice input) may be transmitted to and received by server system 110-2 and/or television set-top box 104-2. In response to a user initiating receipt of a voice input, various notifications may be displayed on a display, such as display 112-2. For example, a ready acknowledgement, a listening acknowledgement, a processing acknowledgement, and/or a command receipt acknowledgement may be displayed as discussed above with reference to fig. 22A-22E. Further, the received user speech input may be transcribed and the transcription may be displayed.

Referring again to the process 1000-2 of FIG. 28, at block 1004-2, media content may be determined based on the speech input. For example, media content that addresses a user query directed at the virtual assistant can be determined (e.g., by searching available media content, etc.). For example, the media content associated with transcription 492-2 of FIG. 22E ("which sporting events are now in progress?") may be determined. Such media content may include live sporting events shown on one or more television channels available for viewing by the user.

At block 1006-2, a first user interface having a first size of selectable media links may be displayed. For example, a media content interface 510-2 having a selectable video link 512-2 and a selectable text link 514-2 may be displayed on the display 112-2, as shown in FIG. 23. As described above, the media content interface 510-2 may have a smaller size to avoid interfering with the background video content.

At block 1008-2, a selection of one of the links may be received. For example, selection of one of links 512-2 and/or link 514-2 may be received. At block 1010-2, a second user interface having a second, larger size of media content associated with the selection may be displayed. For example, a details interface 618-2 with a selectable video link 620-2 and detailed media information 622-2 may be displayed, as shown in FIG. 24B. As described above, the details interface 618-2 may be of a larger size to convey additional detailed media information as desired. Similarly, upon selection of video link 620-2, extended media interface 728-2 may be displayed along with video 726-2, as shown in FIG. 25B. As described above, the extended media interface 728-2 may have a larger size to provide the desired media content to the user. In this way, the various interfaces discussed herein can be sized to accommodate desired content (including interfaces that expand to a larger size or contract to a smaller size), while on the other hand occupying limited screen real estate. Thus, process 1000-2 may be used to control television interactions using a virtual assistant and display associated information using a different interface.

In another example, an interface displayed over a control menu may have a larger size than an interface displayed over background video content. For example, as shown in FIG. 26B, the voice input interface 836-2 may be displayed over menu 830-2, and as shown in FIG. 27, the assistant results interface 942-2 may be displayed over menu 830-2, whereas as shown in FIG. 23, the smaller media content interface 510-2 may be displayed over video 480-2. In this way, the size of an interface (e.g., the amount of screen real estate occupied by the interface) may be determined, at least in part, by the type of background content.

FIG. 29 illustrates exemplary television media content on user device 102-2, which user device 102-2 may comprise a mobile phone with a touch screen 246-2 (or another display), a tablet, a remote control, and so forth. Fig. 29 shows interface 1150-2 including a television listing having a plurality of television programs 1152-2. Interface 1150-2 may, for example, correspond to a particular application on user device 102-2, such as a television control application, a television content listing application, an internet application, and so forth. In some examples, content shown on user device 102-2 (e.g., on touch screen 246-2) may be used to determine user intent from speech input related to the content, and the user intent may be used to cause the content to be played or displayed on another device and display (e.g., on television set-top box 104-2 and display 112-2 and/or speaker 111-2). For example, content shown in interface 1150-2 on user device 102-2 may be used to disambiguate a user request and determine a user intent from the voice input, and may then use the determined user intent to play or display media via television set-top box 104-2.

FIG. 30 illustrates exemplary television control using a virtual assistant. FIG. 30 illustrates an interface 1254-2 that can include a virtual assistant interface formatted as a conversational dialog between the assistant and the user. For example, interface 1254-2 may include an assistant greeting 1256-2 that prompts the user to make a request. Subsequently received user speech can then be transcribed, such as transcribed user speech 1258-2, thereby displaying the back-and-forth conversation. In some examples, interface 1254-2 may appear on user device 102-2 in response to a trigger that initiates receipt of voice input (e.g., a button press, a key phrase, etc.).

In one example, a user request to play content via television set-top box 104-2 (e.g., on display 112-2 and speaker 111-2) may include an ambiguous reference to some of the content shown on user device 102-2. For example, the transcribed user speech 1258-2 includes a reference to "that" football game ("play that football game"). The particular football game desired may not be clear from the speech input alone. However, in some examples, the content shown on user device 102-2 may be used to disambiguate the user request and determine user intent. In one example, content shown on user device 102-2 prior to the user's request (e.g., prior to interface 1254-2 appearing on touch screen 246-2) may be used to determine user intent (as may content appearing within interface 1254-2, such as previous queries and results). In the illustrated example, the content shown in interface 1150-2 of FIG. 29 may be used to determine user intent from the command to play "that" football game. The television listing of television programs 1152-2 includes a variety of different programs, one of which is entitled "football" and appears on channel five. The appearance of the football listing may be used to determine the user's intent in saying "that" football game. In particular, the user's reference to "that" football game may be resolved to the football program that appears in the television listing of interface 1150-2. Thus, the virtual assistant may cause playback of the particular football game desired by the user (e.g., by causing television set-top box 104-2 to tune to the appropriate channel and display the game).
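
A minimal sketch of this kind of listing-based disambiguation, assuming the displayed listing is available as a simple array of program entries (the structures and the substring matching rule are illustrative assumptions only):

```swift
import Foundation

// Hypothetical entry in a displayed television listing.
struct ListedProgram {
    let title: String
    let channel: Int
}

// Resolve an ambiguous reference such as "that football game" by looking for
// a displayed program whose title matches the referenced term.
func resolveReference(_ referencedTerm: String, in displayed: [ListedProgram]) -> ListedProgram? {
    let needle = referencedTerm.lowercased()
    return displayed.first { $0.title.lowercased().contains(needle) }
}

let listing = [
    ListedProgram(title: "Evening News", channel: 3),
    ListedProgram(title: "Football", channel: 5),
    ListedProgram(title: "Cooking Show", channel: 8)
]

if let match = resolveReference("football", in: listing) {
    // The assistant could now tune the set-top box to channel 5.
    print("Resolved to \(match.title) on channel \(match.channel)")
}
```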

In other examples, the user may reference the television program shown in interface 1150-2 in various other ways (e.g., program on channel eight, news, drama program, advertisement, first program, etc.) and may similarly determine the user intent based on the displayed content. It should be appreciated that metadata associated with the displayed content (e.g., television program description), fuzzy matching techniques, synonym matching, and the like may also be used in conjunction with the displayed content to determine user intent. For example, the term "advertisement" may be matched (e.g., using synonyms and/or fuzzy matching techniques) with the description "pay-per-view" to determine user intent from a request to display "advertisement". Likewise, the description of a particular television program may be analyzed in determining the user's intent. For example, the term "law" may be identified in the detailed description of the court show, and the user intent may be determined from the user request to view the "law" program based on the detailed description associated with the content shown in interface 1150-2. Thus, the displayed content and data associated therewith may be used to disambiguate the user request and determine the user intent.

FIG. 31 shows exemplary picture and video content on a user device 102-2, which may include a mobile phone with a touch screen 246-2 (or another display), a tablet, a remote control, and so forth. FIG. 31 shows an interface 1360-2 including a list of photos and videos. The interface 1360-2 may, for example, correspond to a particular application on the user device 102-2, such as a media content application, a file navigation application, a storage application, a remote storage management application, a camera application, and so forth. As shown, interface 1360-2 may include a video 1362-2, an album 1364-2 (e.g., a group of multiple photographs), and photographs 1366-2. As discussed above with reference to FIGS. 29 and 30, the content shown on the user device 102-2 may be used to determine a user intent from a speech input associated with the content. The user may then be interested in having the content played back or displayed on another device and display (e.g., on television set-top box 104-2 and display 112-2 and/or speaker 111-2). For example, content shown in interface 1360-2 on user device 102-2 may be used to disambiguate user requests and to determine user intent from voice input, and may then use the determined user intent to play or display media via television set-top box 104-2.

FIG. 32 illustrates exemplary media display control using a virtual assistant. FIG. 32 illustrates an interface 1254-2 that can include a virtual assistant interface formatted as a conversational dialog between the assistant and the user. As shown, interface 1254-2 can include an assistant greeting 1256-2 that prompts the user to make a request. The user's speech may then be transcribed within the conversation, as shown in the example of FIG. 32. In some examples, interface 1254-2 may appear on user device 102-2 in response to a trigger that initiates receipt of voice input (e.g., a button press, a key phrase, etc.).

In one example, a user request to play media content or display media via television set-top box 104-2 (e.g., on display 112-2 and speaker 111-2) may include an ambiguous reference to some of the content shown on user device 102-2. For example, the transcribed user speech 1468-2 includes a reference to "that" video ("show that video"). The specific video referenced may not be clear from the speech input alone. However, in some examples, the content shown on user device 102-2 may be used to disambiguate the user request and determine user intent. In one example, content shown on user device 102-2 prior to the user's request (e.g., prior to interface 1254-2 appearing on touch screen 246-2) may be used to determine user intent (as may content appearing within interface 1254-2, such as previous queries and results). In the example of user speech 1468-2, the content shown in interface 1360-2 of FIG. 31 may be used to determine user intent from the command to display "that" video. The list of photos and videos in interface 1360-2 includes a variety of different photos and videos, including video 1362-2, album 1364-2, and photos 1366-2. Because only one video (e.g., video 1362-2) appears in interface 1360-2, the presence of video 1362-2 in interface 1360-2 can be used to determine the user's intent in saying "that" video. Specifically, the user's reference to "that" video may be resolved to video 1362-2 (entitled "graduation video") appearing in interface 1360-2. Thus, the virtual assistant can cause video 1362-2 to be played back (e.g., by causing video 1362-2 to be transmitted from user device 102-2 or remote storage to television set-top box 104-2 and causing playback to begin).

In another example, transcribed user speech 1470-2 includes a reference to "that" album ("play a slide show of that album"). The specific photo album referenced may not be clear from the voice input alone. The content shown on user device 102-2 may again be used to disambiguate the user request. In particular, the content shown in interface 1360-2 of FIG. 31 may be used to determine user intent from the command to play a slide show of "that" album. The list of photos and videos in interface 1360-2 includes album 1364-2. The presence of album 1364-2 in interface 1360-2 may be used to determine the user's intent in saying "that" album. Specifically, the user's reference to "that" album may be resolved to album 1364-2 (entitled "graduation album") appearing in interface 1360-2. Thus, in response to user speech 1470-2, the virtual assistant may cause a slide show including photos from album 1364-2 to be displayed (e.g., by causing the photos of album 1364-2 to be transmitted from user device 102-2 or remote storage to television set-top box 104-2 and causing a slide show of the photos to begin).

In yet another example, the transcribed user speech 1472-2 includes a reference to the "last" photo ("show the last photo on the kitchen television"). The specific photograph referenced may not be clear from the speech input alone. The content shown on user device 102-2 may again be used to disambiguate the user request. In particular, the content shown in interface 1360-2 of FIG. 31 may be used to determine user intent from the command to display the "last" photo. The list of photos and videos in interface 1360-2 includes two separate photos 1366-2. The occurrence of the photos 1366-2 in interface 1360-2 (and in particular the order in which the photos 1366-2 occur within the interface) can be used to determine the user's intent in saying the "last" photo. Specifically, the user's reference to the "last" photo may be resolved to the photo 1366-2 (dated June 21, 2014) appearing at the bottom of interface 1360-2. Thus, in response to user speech 1472-2, the virtual assistant may cause the last photograph 1366-2 shown in interface 1360-2 to be displayed (e.g., by causing that last photograph 1366-2 to be transmitted from user device 102-2 or remote storage to television set-top box 104-2 and causing the photograph to be displayed).
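
The positional resolution described for the "last" photo can be sketched as a simple lookup over items in the order they appear on screen (the types and titles below are hypothetical):

```swift
// Hypothetical displayed media entry, kept in on-screen order.
struct DisplayedItem {
    let title: String
    let kind: String   // "photo", "video", "album"
}

// Resolve ordinal references such as "first photo" or "last photo"
// using the order in which items appear in the interface.
func resolveOrdinal(_ ordinal: String, kind: String, in displayed: [DisplayedItem]) -> DisplayedItem? {
    let matching = displayed.filter { $0.kind == kind }
    switch ordinal {
    case "first": return matching.first
    case "last":  return matching.last
    default:      return nil
    }
}

let onScreen = [
    DisplayedItem(title: "Graduation Video", kind: "video"),
    DisplayedItem(title: "Graduation Album", kind: "album"),
    DisplayedItem(title: "Photo (June 20, 2014)", kind: "photo"),
    DisplayedItem(title: "Photo (June 21, 2014)", kind: "photo")
]

// "Show the last photo" resolves to the photo at the bottom of the list.
print(resolveOrdinal("last", kind: "photo", in: onScreen)?.title ?? "no match")
```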

In other examples, the user may reference the media content shown in interface 1360-2 in various other ways (e.g., the last two photos, all the videos, all the photos, the graduation album, the graduation video, the photos from June 21, etc.), and user intent may similarly be determined based on the displayed content. It should be appreciated that metadata associated with the displayed content (e.g., timestamps, location information, titles, descriptions, etc.), fuzzy matching techniques, synonym matching, and the like may also be used in conjunction with the displayed content to determine user intent. Thus, the displayed content and the data associated therewith may be used to disambiguate the user request and determine the user intent.

It should be understood that any type of content displayed in any application interface of any application program may be used to determine user intent. For example, images displayed on a web page in an Internet browser application may be referenced in the speech input and the displayed web page content may be analyzed to identify the desired image. Similarly, music tracks in a music list in a music application may be referenced in speech input by title, genre, artist, band name, etc., and the displayed content in the music application (and in some examples, associated metadata) may be used to determine user intent from the speech input. The determined user intent may then be used to cause the media to be displayed or played back via another device (e.g., via television set-top box 104-2), as described above.

In some examples, user identification, user authentication, and/or device authentication may be employed to determine whether media control may be enabled, determine media content available for display, determine access permissions, and the like. For example, it may be determined whether a particular user device (e.g., user device 102-2) is authorized to control media on, for example, television set-top box 104-2. User devices may be authorized based on registration, pairing, trust determination, passwords, security issues, system settings, and the like. In response to determining that a particular user device is authorized, an attempt to control television set-top box 104-2 may be allowed (e.g., media content may be played in response to determining that the requesting device is authorized to control the media). Instead, media control commands or requests from unauthorized devices may be ignored and/or users of these devices may be prompted to register their devices for control of a particular television set-top box 104-2.

In another example, a particular user may be identified and personal data associated with the user may be used to determine the user's intent of the request. For example, the user may be identified based on a voice input, such as by voice recognition using a user voiceprint. In some examples, a user may speak a particular phrase that is analyzed for speech recognition. In other examples, voice recognition may be used to analyze a voice input request for a virtual assistant to identify a speaker. The user may also be identified based on the source of the voice input sample (e.g., on the user's personal device 102-2). The user may also be identified based on a password, menu selection, and the like. The voice input received from the user may then be interpreted based on the personal data of the identified user. For example, the user intent of the voice input may be determined based on previous requests from the user, media content owned by the user, media content stored on the user device, user preferences, user settings, user demographics (e.g., language used, etc.), user profile information, user payment methods, or various other personal information associated with a particular identified user. For example, voice input referencing a favorites list or the like may be disambiguated based on personal data, and a personal favorites list of a user may be identified. Voice inputs referencing "my" photos, "my" videos, "my" programs, etc. may also be disambiguated based on user recognition to correctly identify photos, videos, and shows (e.g., photos stored on personal user devices, etc.) associated with the recognized user. Similarly, voice input requesting purchase of content may be disambiguated to determine that the payment method of the identified user (and not another user) should be paid for the purchase.

In some examples, user authentication may be used to determine whether to allow a user to access media content, purchase media content, and so forth. For example, voice recognition may be used to verify the identity of a particular user (e.g., using the user's voiceprint) to allow the user to make purchases using the user's payment method. Similarly, passwords and the like may be used to authenticate the user to allow purchase. In another example, voice recognition may be used to verify the identity of a particular user to determine whether the user is allowed to view a particular program (e.g., a program with a particular parental guidance rating, a movie with a particular age suitability rating, etc.). For example, a child's request for a particular program may be denied based on voice recognition indicating that the requestor is not an authorized user (e.g., a parent) capable of viewing such content. In other examples, speech recognition may be used to determine whether a user has access to particular subscription content (e.g., to limit access to premium channel content based on speech recognition). In some examples, a user may speak a particular phrase that is analyzed for speech recognition. In other examples, voice recognition may be used to analyze a voice input request for a virtual assistant to identify a speaker. Thus, certain media content may be played in response to first determining that a user is authorized in any of a variety of ways.
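
A minimal sketch of such gating, assuming the speaker has already been identified (for example, by voiceprint) and that content ratings reduce to a simple numeric threshold (all fields and values below are placeholders, not part of the embodiments above):

```swift
// Hypothetical identified user and content rating model.
struct IdentifiedUser {
    let name: String
    let maximumAllowedRating: Int   // e.g., derived from parental controls
    let hasPremiumSubscription: Bool
}

struct RequestedContent {
    let title: String
    let rating: Int                 // higher means more restricted
    let requiresPremium: Bool
}

// Decide whether playback should proceed for the identified speaker.
func isPlaybackAuthorized(_ user: IdentifiedUser, for content: RequestedContent) -> Bool {
    if content.rating > user.maximumAllowedRating { return false }              // parental restriction
    if content.requiresPremium && !user.hasPremiumSubscription { return false } // subscription check
    return true
}

let child = IdentifiedUser(name: "Child", maximumAllowedRating: 7, hasPremiumSubscription: false)
let show = RequestedContent(title: "Late-Night Drama", rating: 16, requiresPremium: false)
print(isPlaybackAuthorized(child, for: show)) // false: the request can be denied
```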

FIG. 33 illustrates an exemplary virtual assistant interaction with results on a mobile user device and a media display device. In some examples, the virtual assistant may provide information and control on more than one device, such as on user device 102-2 and on television set-top box 104-2. Further, in some examples, a virtual assistant interface for controls and information on user device 102-2 may also be used to issue requests to control media on television set-top box 104-2. Thus, the virtual assistant system may determine whether to display results or perform tasks on user device 102-2 or television set-top box 104-2. In some examples, when user device 102-2 is employed to control television set-top box 104-2, the space occupied by the virtual assistant interface on the display (e.g., display 112-2) associated with television set-top box 104-2 may be minimized by displaying information on user device 102-2 (e.g., on touch screen 246-2). In other examples, the virtual assistant information may be displayed on display 112-2 alone, or the virtual assistant information may be displayed on both user device 102-2 and display 112-2.

In some examples, it may be determined whether the results of the virtual assistant query should be displayed directly on user device 102-2 or on display 112-2 associated with television set-top box 104-2. In one example, in response to determining that the user intent of the query includes a request for information, an informational response may be displayed on the user device 102-2. In another example, in response to determining that the user intent of the query includes a request to play media content, the media content responsive to the query may be played via television set top box 104-2.
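
Under the assumption that the user intent has already been classified as either an informational request or a media playback request, this routing decision can be sketched as follows (the enum cases and names are illustrative):

```swift
// Hypothetical intent classification and display targets.
enum QueryIntent {
    case informational      // e.g., "Who is playing in that game?"
    case playMedia          // e.g., "Change to that game."
}

enum DisplayTarget {
    case userDevice         // e.g., the phone or tablet running the assistant interface
    case televisionDisplay  // e.g., the display attached to the set-top box
}

// Choose where the assistant's response should appear.
func target(for intent: QueryIntent) -> DisplayTarget {
    switch intent {
    case .informational:
        // A textual answer fits in the assistant interface on the user device.
        return .userDevice
    case .playMedia:
        // Media content is played via the set-top box on the larger display.
        return .televisionDisplay
    }
}

print(target(for: .informational))  // userDevice
print(target(for: .playMedia))      // televisionDisplay
```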

FIG. 33 illustrates a virtual assistant interface 1254-2 that includes an example of a conversational dialog between the virtual assistant and a user. The assistant greeting 1256-2 can prompt the user for a request. In the first query, the transcribed user speech 1574-2 (which may also be typed in or otherwise input) includes a request for an informational answer associated with displayed media content. In particular, the transcribed user speech 1574-2 asks who is playing in a football game, which may be displayed, for example, in an interface on user device 102-2 (e.g., listed in interface 1150-2 of FIG. 29) or on display 112-2 (e.g., listed in interface 510-2 of FIG. 23, or playing as video 726-2 on display 112-2 in FIG. 25B). The user intent of the transcribed user speech 1574-2 may be determined based on the displayed media content. For example, the particular football game in question may be identified based on what is shown on user device 102-2 or on display 112-2. The user intent of the transcribed user speech 1574-2 may include obtaining an informational answer detailing the teams playing in the football game identified based on the displayed content. In response to determining that the user intent includes a request for an informational answer, the system may determine to display the response within interface 1254-2 (rather than on display 112-2), as shown in FIG. 33. In some examples, the response to the query may be determined based on metadata associated with the displayed content (e.g., based on a description of the football game in a television listing). As shown, assistant response 1576-2 may thus be displayed in interface 1254-2 on touch screen 246-2 of user device 102-2, identifying the Alpha team and the Zeta team as the teams playing the game. Thus, in some examples, an informational response may be displayed within interface 1254-2 on user device 102-2 based on determining that the query includes an informational request.

However, the second query in interface 1254-2 includes a media request. Specifically, the transcribed user speech 1578-2 requests that the displayed media content be changed to "the game". The user intent of the transcribed user speech 1578-2 may be determined based on the displayed content (e.g., to identify which game the user desires), such as the games listed in interface 510-2 of FIG. 23, the games listed in interface 1150-2 of FIG. 29, the game referenced in a previous query (e.g., in the transcribed user speech 1574-2), and so forth. Thus, the user intent of the transcribed user speech 1578-2 may include changing the displayed content to a particular game (here, the football game between team Alpha and team Zeta). In one example, the game may be displayed on user device 102-2. However, in other examples, the game may be shown via television set-top box 104-2 based on the query including a request to play media content. In particular, in response to determining that the user intent includes a request to play media content, the system may determine to display the media content results on display 112-2 via television set-top box 104-2 (rather than within interface 1254-2 of FIG. 33). In some examples, a response or paraphrase confirming the virtual assistant's intended action may be shown in interface 1254-2 or on display 112-2 (e.g., "changing to the football game").

FIG. 34 illustrates an exemplary virtual assistant interaction with media results on a media display device and a mobile user device. In some examples, the virtual assistant may provide access to media on both user device 102-2 and television set-top box 104-2. Further, in some examples, a virtual assistant interface for media on user device 102-2 may also be used to issue requests for media on television set-top box 104-2. Thus, the virtual assistant system may determine whether to display the media results on user device 102-2 or on display 112-2 via television set-top box 104-2.

In some examples, whether the media is displayed on user device 102-2 or on display 112-2 may be determined based on the format of the media results, user preferences, default settings, express commands in the request itself, and so forth. For example, the format of the queried media results may be used (e.g., absent specific instructions) to determine on which device to display the media results by default. Television programming may be more suitable for display on a television, large-format video may be more suitable for display on a television, thumbnail photos may be more suitable for display on a user device, small-format web video may be more suitable for display on a user device, and various other media formats may be more suitable for display on a relatively larger television screen or a relatively smaller user device display. Thus, in response to determining (e.g., based on the media format) that the media content should be displayed on a particular display, the media content may be displayed on that display by default.
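
A rough sketch of such a format-based default, assuming each result carries a coarse format label (the categories below are illustrative, and a user preference, setting, or express command could override the result):

```swift
// Hypothetical coarse media formats used to pick a default display.
enum MediaFormat {
    case televisionProgram
    case largeFormatVideo
    case thumbnailPhotos
    case smallFormatWebVideo
}

enum DefaultDisplay {
    case television
    case userDevice
}

// Pick a default display purely from the media format; a user preference,
// default setting, or express command in the request would override this.
func defaultDisplay(for format: MediaFormat) -> DefaultDisplay {
    switch format {
    case .televisionProgram, .largeFormatVideo:
        return .television
    case .thumbnailPhotos, .smallFormatWebVideo:
        return .userDevice
    }
}

print(defaultDisplay(for: .televisionProgram)) // television
print(defaultDisplay(for: .thumbnailPhotos))   // userDevice
```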

FIG. 34 illustrates virtual assistant interface 1254-2 where an example of a query relates to media content that is playing or being displayed. The assistant greeting 1256-2 can prompt the user for a request. In the first query, the transcribed user speech 1680-2 includes a request to display a football game. As in the examples discussed above, the user intent of the transcribed user speech 1680-2 may be determined based on the displayed content (e.g., to identify which game the user desires), such as the games listed in interface 510-2 of FIG. 23, the games listed in interface 1150-2 of FIG. 29, the games referenced in a previous query, and so forth. Thus, the user intent of the transcribed user speech 1680-2 may include displaying a particular football game that may be played, for example, on a television. In response to determining that the user intent includes a request to display media formatted for television (e.g., a televised football game), the system may automatically determine that the desired media is displayed on display 112-2 via television set-top box 104-2 (rather than on user device 102-2 itself). The virtual assistant system may then cause the television set-top box 104-2 to tune to and display the football game on the display 112-2 (e.g., by performing the necessary tasks and/or sending appropriate commands).

However, in the second query, transcribed user speech 1682-2 includes a request to display a picture of team members (e.g., a picture of "Alpha team"). As in the examples discussed above, the user intent of the transcribed user speech 1682-2 may be determined. The user intent of the transcribed user speech 1682-2 may include performing a search (e.g., a web search) on pictures associated with the "Alpha team" and displaying the resulting pictures. In response to determining that the user intent includes a request to display media that may be presented in thumbnail format or media associated with a network search or other non-specific media that does not have a particular format, the system may automatically determine to display the desired media results on the touchscreen 246-2 of the user device 102-2 in the interface 1254-2 (rather than displaying the resulting picture on the display 112-2 via the television set-top box 104-2). For example, as shown, thumbnail photo 1684-2 may be displayed within interface 1254-2 on user device 102-2 in response to a user query. Thus, the virtual assistant system can cause media in a particular format or media that can be presented in a particular format (e.g., in a set of thumbnails) to be displayed on the user device 102-2 by default.

It should be appreciated that in some examples, a football game referenced in the user speech 1680-2 may be displayed on the user device 102-2 and the photograph 1684-2 may be displayed on the display 112-2 via the television set-top box 104-2. However, the default device for display may be automatically determined based on the media format, thereby simplifying the user's media commands. In other examples, a default device for displaying the requested media content may be determined based on user preferences, default settings, a device most recently used to display the content, speech recognition that identifies the user and the device associated with the user, and so on. For example, a user may set preferences or may set default configurations to display certain types of content (e.g., videos, slides, television programs, etc.) on display 112-2 and other types of content (e.g., thumbnails, photographs, network videos, etc.) on touch screen 246-2 of user device 102-2 via television set-top box 104-2. Similarly, preferences or default configurations may be set to respond to certain queries by displaying content on one device or another. In another example, all content may be displayed on the user device 102-2 unless otherwise indicated by the user.

In other examples, the user query may include a command to display content on a particular display. For example, the user speech 1472-2 of FIG. 32 includes a command to display a photograph on a kitchen television. Thus, rather than displaying the photograph on user device 102-2, the system may cause the photograph to be displayed on a television display associated with the user's kitchen. In other examples, the user may indicate which display device to use in a variety of other ways (e.g., on a television, on a large screen, in a living room, in a bedroom, on a my tablet, on a my phone, etc.). Thus, the display device used to display the media content results of the virtual assistant query can be determined in a number of different ways.

FIG. 35 illustrates exemplary proximity-based media device control. In some examples, a user may have multiple televisions and television set-top boxes within the same home or on the same network. For example, a home may have a television and set-top box in a living room, another in a bedroom, and yet another in a kitchen. In other examples, multiple set-top boxes may be connected to the same network, such as a common network in an apartment or office building. Although the user may pair, connect, or otherwise authorize remote control 106-2 and user device 102-2 for a particular set top box to avoid unauthorized access, in other examples, the remote control and/or user device may be used to control more than one set top box. A user may control a set-top box in a bedroom, living room, and kitchen, for example, using a single user device 102-2. A user may also control his set-top box in his own apartment, for example, using a single user device 102-2, and control the set-top boxes of neighbors in a neighbor apartment (e.g., sharing content from user device 102-2 with neighbors, such as displaying a slide show of photos stored on user device 102-2 on the neighbors' televisions). Because a user may control multiple different set top boxes using a single user device 102-2, the system may determine which set top box of the multiple set top boxes to send commands to. Also, because the home may have multiple remote controls 106-2 that can operate multiple set top boxes, the system may similarly determine which set top box of the multiple set top boxes to send commands to.

In one example, the proximity of the device may be used to determine which set top box of the plurality of set top boxes to send a command to (or on which display to display the requested media content). Proximity may be determined between user device 102-2 or remote control 106-2 and each of a plurality of set-top boxes. The issued command may then be sent to the nearest set-top box (or the requested media content may be displayed on the nearest display). Proximity may be determined (or at least estimated) in any of a number of ways, such as time-of-flight measurements (e.g., using radio frequency), bluetooth LE, electronic pulse signals, proximity sensors, acoustic path measurements, and so forth. The measured or estimated distances may then be compared and a command may be issued to the device with the shortest distance (e.g., the closest set top box).
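
Whatever measurement technique is used, the final step reduces to comparing the estimated distances and sending the command to the nearest device, as in the following sketch (device names and distance values are placeholders):

```swift
// Hypothetical set-top box with an estimated distance to the commanding device.
struct SetTopBox {
    let name: String
    let estimatedDistanceMeters: Double  // e.g., from time-of-flight or signal strength
}

// Choose the set-top box closest to the user device or remote control.
func nearestBox(from boxes: [SetTopBox]) -> SetTopBox? {
    return boxes.min { $0.estimatedDistanceMeters < $1.estimatedDistanceMeters }
}

let candidates = [
    SetTopBox(name: "Living Room", estimatedDistanceMeters: 2.5),
    SetTopBox(name: "Bedroom", estimatedDistanceMeters: 9.0),
    SetTopBox(name: "Kitchen", estimatedDistanceMeters: 6.5)
]

// A command issued without naming a device would go to the nearest box.
print(nearestBox(from: candidates)?.name ?? "none") // Living Room
```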

FIG. 35 illustrates a multi-device system 1790-2 including a first set top box 1792-2 having a first display 1786-2 and a second set top box 1794-2 having a second display 1788-2. In one example, the user may issue a command from user device 102-2 to display media content (e.g., without having to specify where or on which device to display it). A distance 1795-2 to the first set top box 1792-2 and a distance 1796-2 to the second set top box 1794-2 may then be determined (or estimated). As shown, distance 1796-2 may be greater than distance 1795-2. Based on proximity, the command from user device 102-2 may be issued to the first set top box 1792-2, which is the closest device and most likely matches the user's intent. In some examples, a single remote control 106-2 may also be used to control more than one set top box. The device to be controlled at a given time may be determined based on proximity. A distance 1797-2 to the second set top box 1794-2 and a distance 1798-2 to the first set top box 1792-2 may be determined (or estimated). As shown, distance 1798-2 may be greater than distance 1797-2. Based on proximity, the command from remote control 106-2 may be sent to the second set top box 1794-2, which is the closest device and most likely matches the user's intent. The distance measurements may be refreshed periodically or with each command to accommodate, for example, the user moving to a different room and desiring to control a different device.

It should be understood that the user may specify a different device for a command, overriding proximity in some cases. For example, a list of available display devices may be displayed on user device 102-2 (e.g., listing the first display 1786-2 and the second display 1788-2 by name, assigned room, etc., or listing the first set top box 1792-2 and the second set top box 1794-2 by name, assigned room, etc.). The user may select one of the devices from the list, and commands may then be sent to the selected device. A request for media content issued at user device 102-2 may then be handled by displaying the desired media on the selected device. In other examples, the user may name the desired device as part of a spoken command (e.g., show the game on the kitchen television, change to the cartoon channel in the living room, etc.).

In other examples, a default device for displaying the requested media content may be determined based on status information associated with the particular device. For example, it may be determined whether a headset (or headphones) is attached to user device 102-2. In response to determining that the headphones are attached to user device 102-2 when a request to display media content is received, the requested content may be displayed on user device 102-2 by default (e.g., assuming the user is consuming content on user device 102-2 instead of on a television). In response to determining that the headset is not attached to user device 102-2 when the request to display media content is received, the requested content may be displayed on user device 102-2 or on the television according to any of the various determination methods discussed herein. Other device status information, such as ambient lighting around user device 102-2 or set top box 104-2, proximity of other devices to user device 102-2 or set top box 104-2, orientation of user device 102-2 (e.g., landscape orientation is more likely to indicate a desire to view on user device 102-2), display status of set top box 104-2 (e.g., in sleep mode), time since last interaction on a particular device, or any of a variety of other status indicators for user device 102-2 and/or set top box 104-2, may similarly be used to determine whether requested media content should be displayed on user device 102-2 or set top box 104-2.
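
The headphone example can be sketched as one more input to the default-display decision; other status indicators mentioned above could be added in the same way (the struct and enum below are hypothetical):

```swift
// Hypothetical status flag relevant to choosing a default display.
struct DeviceStatus {
    let headphonesAttached: Bool
}

enum PlaybackTarget {
    case userDevice
    case decideByOtherMethods   // e.g., format, proximity, or preferences discussed above
}

// Prefer the user device when headphones are attached, since the user is
// likely consuming content on that device rather than on the television.
func playbackTarget(for status: DeviceStatus) -> PlaybackTarget {
    return status.headphonesAttached ? .userDevice : .decideByOtherMethods
}

print(playbackTarget(for: DeviceStatus(headphonesAttached: true)))  // userDevice
print(playbackTarget(for: DeviceStatus(headphonesAttached: false))) // decideByOtherMethods
```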

FIG. 36 shows an exemplary process 1800-2 for controlling television interactions using a virtual assistant and a plurality of user devices. At block 1802-2, a speech input may be received from a user at a first device having a first display. For example, voice input may be received from a user at user device 102-2 or remote control 106-2 of system 100-2. In some examples, the first display may include the touch screen 246-2 of the user device 102-2 or a display associated with the remote control 106-2.

At block 1804-2, a user intent may be determined from the speech input based on the content displayed on the first display. For example, content (such as the television programs 1152-2 in interface 1150-2 of FIG. 29 or the photos and videos in interface 1360-2 of FIG. 31) may be analyzed and used to determine the user intent of the voice input. In some examples, the user may reference content shown on the first display in an ambiguous manner, and the reference may be disambiguated by analyzing the content shown on the first display to resolve the reference (e.g., to determine the user's intent in saying "that" video, "that" album, "that" game, etc.), as discussed above with reference to FIGS. 30 and 32.

Referring again to process 1800-2 of FIG. 36, at block 1806-2, media content may be determined based on the user intent. For example, particular videos, photos, photo albums, television programs, sporting events, music tracks, and the like may be identified based on the user intent. In the examples of FIGS. 29 and 30 discussed above, the particular football game shown on channel five may be identified, for example, based on the user intent referring to "that" football game shown in interface 1150-2 of FIG. 29. In the examples of FIGS. 31 and 32 discussed above, a particular video 1362-2 entitled "graduation video," a particular photo album 1364-2 entitled "graduation album," or a particular photo 1366-2 may be identified based on the user intent determined from the voice input in the example of FIG. 32.

Referring again to the process 1800-2 of FIG. 36, at block 1808-2, the media content may be played on a second device associated with the second display. For example, the determined media content may be played on display 112-2 with speaker 111-2 via television set-top box 104-2. Playing media content may include tuning to a particular television channel, playing a particular video, displaying a photo slideshow, displaying a particular photo, playing a particular audio track, etc., on television set-top box 104-2 or another device.

In some examples, it may be determined whether a response to a voice input for the virtual assistant should be displayed on a first display associated with a first device (e.g., user device 102-2) or on a second display associated with a second device (e.g., television set-top box 104-2). For example, as discussed above with reference to FIGS. 33 and 34, informational responses or media content suited for display on a smaller screen may be displayed on user device 102-2, while media responses or media content suited for display on a larger screen may be displayed on a display associated with set-top box 104-2. As discussed above with reference to FIG. 35, in some examples, the distance between user device 102-2 and multiple set-top boxes may be used to determine which set-top box should play the media content or receive the command. Similarly, various other determinations may be made to provide a convenient and user-friendly experience in which multiple devices may interact.

In some examples, just as the content shown on user device 102-2 may be used to inform interpretation of speech input as described above, the content shown on display 112-2 may likewise be used to inform interpretation of speech input. In particular, content shown on a display associated with television set-top box 104-2 may be used along with metadata associated with the content to determine user intent from voice input, to disambiguate user queries, to respond to content-related queries, and the like.

FIG. 37 shows an exemplary speech input interface 484-2 with a virtual assistant query concerning the video 480-2 shown in the background (as described above). In some examples, the user query may include questions about the media content shown on display 112-2. For example, transcript 1916-2 includes a query requesting identification of the actresses ("who are those actresses?"). The content shown on display 112-2 (along with metadata or other descriptive information about the content) may be used to determine user intent from voice input related to the content, and may also be used to determine responses to queries (including informational responses as well as media responses that provide media selections to the user). For example, video 480-2, a description of video 480-2, a list of characters and actors for video 480-2, rating information for video 480-2, classification information for video 480-2, and a variety of other descriptive information associated with video 480-2 may be used to disambiguate a user request and determine a response to a user query. The associated metadata may include, for example, identification information for character 1910-2, character 1912-2, and character 1914-2 (e.g., the name of each character and the name of the actress playing the character). The metadata for any other content may similarly include a title, a description, a list of characters, a list of actors, a list of players, a category, a producer name, a director name, or a display schedule associated with the content shown on the display, or a viewing history of media content on the display (e.g., recently displayed media).

In one example, a user query for the virtual assistant may include an ambiguous reference to something shown on display 112-2. For example, transcript 1916-2 includes a reference to "those" actresses ("who are those actresses?"). The particular actresses the user is asking about may not be clear from the voice input alone. However, in some examples, the content and associated metadata shown on display 112-2 may be used to disambiguate the user request and determine the user intent. In the illustrated example, the content shown on display 112-2 may be used to determine user intent from the reference to "those" actresses. In one example, the television set-top box 104-2 may identify the content being played and the details associated with that content. In this case, the television set-top box 104-2 may identify the title of video 480-2 as well as various descriptive content. In other examples, television shows, sporting events, or other content may be shown, which may be combined with associated metadata to determine user intent. Additionally, in any of the various examples discussed herein, speech recognition results and intent determination may give higher weight to terms associated with the displayed content than to alternatives. For example, the names of actors playing on-screen characters may be weighted higher while those actors appear on the screen (or while a program in which they appear is playing), which may provide accurate speech recognition and intent determination for likely user requests associated with the displayed content.
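
As a rough illustration of the weighting described above, the sketch below re-ranks hypothetical recognition or intent candidates, boosting any candidate that mentions a term drawn from the displayed content's metadata. The candidate texts, scores, and boost factor are assumptions, not values from the system.

```python
# Hypothetical candidate scores and names; the boost factor is an assumption.
def rerank_candidates(candidates, on_screen_terms, boost=1.5):
    """Re-rank recognition/intent candidates, weighting those that mention
    terms drawn from the currently displayed content's metadata."""
    reranked = []
    for text, score in candidates:
        lowered = text.lower()
        if any(term.lower() in lowered for term in on_screen_terms):
            score *= boost
        reranked.append((text, score))
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)

on_screen = ["Jennifer Jones", "Blanche", "Their Show"]
hypotheses = [("who plays blanch", 0.40), ("who plays blanche", 0.38)]
print(rerank_candidates(hypotheses, on_screen)[0][0])  # "who plays blanche"
```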

In one example, the list of characters and/or actors associated with video 480-2 may be used to identify all or the most prominent actresses appearing in video 480-2, which may include actresses 1910-2, 1912-2, and 1914-2. The identified actresses may be returned as a possible result (including fewer or more actresses if the metadata resolution is coarse). However, in another example, the metadata associated with video 480-2 may include an identification of which actors and actresses appear on the screen at a given time, and the actresses appearing at the time of the query may be determined from that metadata (e.g., specifically identifying actresses 1910-2, 1912-2, and 1914-2). In yet another example, a facial recognition application may be used to identify actresses 1910-2, 1912-2, and 1914-2 from the image shown on display 112-2. In other examples, various other metadata associated with video 480-2 and various other identification methods may be used to identify the user's likely intent in referring to "those" actresses.
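
The following sketch illustrates one way the time-coded metadata approach described above could be combined with a fallback to the full cast list; the metadata field names and structure are assumptions for illustration only.

```python
# Sketch under assumed metadata shapes; field names are illustrative, not from the system.
def actresses_on_screen(metadata, playback_seconds):
    """Return actresses believed to be on screen at the query time.

    Prefers time-coded appearance metadata; falls back to the full cast list
    when per-scene data is unavailable."""
    appearances = metadata.get("appearances")  # e.g. [{"actress": ..., "start": ..., "end": ...}]
    if appearances:
        return [a["actress"] for a in appearances
                if a["start"] <= playback_seconds <= a["end"]]
    return [c["actress"] for c in metadata.get("cast", [])]

video_metadata = {
    "cast": [{"character": "Blanche", "actress": "Jennifer Jones"},
             {"character": "Julia", "actress": "Elizabeth Arnold"},
             {"character": "Melissa", "actress": "Whitney Davidson"}],
    "appearances": [{"actress": "Jennifer Jones", "start": 100, "end": 220},
                    {"actress": "Whitney Davidson", "start": 180, "end": 260}],
}
print(actresses_on_screen(video_metadata, 200))  # ['Jennifer Jones', 'Whitney Davidson']
```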

In some examples, the content shown on display 112-2 may change during the process of submitting a query and determining a response. In such cases, the viewing history of the media content may be used to determine user intent and to determine responses to queries. For example, if video 480-2 moves to another view (e.g., with other characters) before a response to the query is generated, the results of the query (e.g., the characters displayed on the screen when the user initiated the query) may be determined based on what the user was viewing when the query was spoken. In some cases, the user may pause the playing media to issue a query, and the content shown at the time of the pause may be used along with associated metadata to determine user intent and a response to the query.

Given the determined user intent, the results of the query may be provided to the user. FIG. 38 illustrates an exemplary assistant response interface 2018-2 that includes an assistant response 2020-2, which may include a response determined from the query of transcript 1916-2 of FIG. 37. As shown, assistant response 2020-2 may include a list of the name of each actress and her associated character in video 480-2 ("Actress Jennifer Jones plays the character Blanche; actress Elizabeth Arnold plays the character Julia; and actress Whitney Davidson plays the character Melissa."). The actresses and characters listed in response 2020-2 may correspond to characters 1910-2, 1912-2, and 1914-2 appearing on display 112-2. As described above, in some examples, the content shown on display 112-2 may change during the process of submitting a query and determining a response. Thus, response 2020-2 may include information about content or people that may no longer appear on display 112-2.

As with other interfaces displayed on display 112-2, the assistant response interface 2018-2 may occupy a minimal amount of screen real estate while providing sufficient space to convey the desired information. In some examples, as with other text displayed in interfaces on display 112-2, assistant response 2020-2 may scroll up from the bottom of display 112-2 to the position shown in FIG. 38, be displayed for an amount of time (e.g., a delay based on the length of the response), and then scroll up out of view. In other examples, interface 2018-2 may slide down out of view after a delay.

FIGS. 39 and 40 illustrate another example of determining user intent and responding to a query based on content displayed on display 112-2. FIG. 39 illustrates an exemplary voice input interface 484-2 containing a virtual assistant query for media content associated with video 480-2. In some examples, the user query may include a request for media content associated with the media shown on display 112-2. For example, a user may request other movies, television programs, sporting events, and the like associated with a particular piece of media based on, for example, its characters, actors, categories, etc. For example, transcript 2122-2 includes a query requesting other media featuring the actress in video 480-2, referenced by the name of her character in video 480-2 (e.g., asking what else "Blanche" has appeared in). The content shown on display 112-2 (along with metadata or other descriptive information about the content) may again be used to determine user intent from voice input related to the content, and may also be used to determine a response to the query (an informational response or a response resulting in a media selection).

In some examples, the user query for the virtual assistant may include ambiguous references using character names, actor names, program names, team member names, and so forth. Such references may be difficult to resolve accurately without the context of the content shown on display 112-2 and its associated metadata. For example, transcript 2122-2 includes a reference to a character named "Blanche" from video 480-2. The particular actress or other individual the user is asking about may not be clear from the voice input alone. However, in some examples, the content and associated metadata shown on display 112-2 may be used to disambiguate the user request and determine the user intent. In the illustrated example, the content and associated metadata shown on display 112-2 may be used to determine the user intent from the character name "Blanche". In this case, the list of characters associated with video 480-2 may be used to determine that "Blanche" likely refers to the character "Blanche" in video 480-2. In another example, detailed metadata and/or facial recognition may be used to determine that a character named "Blanche" appears on the screen (or appeared on the screen when the user query was initiated), so that the actress associated with that character is the most likely subject of the user's query. For example, it may be determined that characters 1910-2, 1912-2, and 1914-2 appear on display 112-2 (or appeared on display 112-2 when the user query was initiated), and their associated character names may then be referenced to determine the user intent of the query referencing the character Blanche. The actor list may then be used to identify the actress playing Blanche, and a search may be conducted to identify other media in which the identified actress appears.
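
A minimal sketch of the resolution chain described above is shown below: a spoken character name is resolved to an actress using the displayed video's cast list, and a catalog is then searched for other titles featuring that actress. The catalog and cast structures are hypothetical.

```python
# Illustrative only: the catalog and cast structures are assumptions.
def other_media_for_character(character_name, current_video, catalog):
    """Resolve a character name to an actress using the displayed video's cast
    list, then return other catalog titles featuring that actress."""
    actress = next((c["actress"] for c in current_video["cast"]
                    if c["character"].lower() == character_name.lower()), None)
    if actress is None:
        return []
    return [title for title, cast in catalog.items()
            if actress in cast and title != current_video["title"]]

video = {"title": "Their Show",
         "cast": [{"character": "Blanche", "actress": "Jennifer Jones"}]}
catalog = {"Movie A": ["Jennifer Jones"], "Movie B": ["Jennifer Jones"],
           "Movie C": ["Someone Else"], "Their Show": ["Jennifer Jones"]}
print(other_media_for_character("Blanche", video, catalog))  # ['Movie A', 'Movie B']
```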

Given the determined user intent (e.g., the resolution of the character reference "Blanche") and the determined query result (e.g., other media associated with the actress playing "Blanche"), a response may be provided to the user. FIG. 40 illustrates an exemplary assistant response interface 2224-2 including an assistant text response 2226-2 and selectable video links 2228-2, which may be responsive to the query of transcript 2122-2 of FIG. 39. As shown, assistant text response 2226-2 may include a rephrasing of the user request that introduces the selectable video links 2228-2. Assistant text response 2226-2 may also include an indication of how the user query was disambiguated (specifically, that the actress Jennifer Jones was identified as playing the character Blanche in video 480-2). Such a rephrasing can confirm to the user that the virtual assistant correctly interpreted the user query and is providing the desired results.

Assistant response interface 2224-2 may also include selectable video links 2228-2. In some examples, various types of media content may be provided as results of a virtual assistant query, including movies (e.g., movie A and movie B of interface 2224-2). The media content displayed as a result of the query may include media available for consumption by the user (for free, for purchase, or as part of a subscription). The user may select the displayed media to view or consume the resulting content. For example, the user may select one of the selectable video links 2228-2 (e.g., using a remote control, a voice command, etc.) to view one of the other movies in which actress Jennifer Jones appears. In response to selection of one of the selectable video links 2228-2, the video associated with the selection may be played, replacing video 480-2 on display 112-2. Thus, the displayed media content and associated metadata may be used to determine user intent from the voice input, and in some examples, playable media may be provided as a result.

It should be appreciated that the user may reference actors, team members, characters, locations, teams, sporting event details, movie topics, or various other information associated with the displayed content when forming the query, and the virtual assistant system may similarly disambiguate such requests and determine user intent based on the displayed content and associated metadata. Likewise, it should be understood that in some examples, the results may include media suggestions associated with the query, such as movies, television shows, or sporting events associated with the person who is the subject of the query (whether or not the user specifically requests such media content).

Further, in some examples, the user query may include a request for information associated with the media content itself, such as a query regarding characters, episodes, movie plot, previous scenes, and so forth. As with the examples discussed above, the displayed content and associated metadata may be used to determine user intent from such queries and to determine responses. For example, a user may request a description of a character (e.g., "what has Blanche done in this movie?"). The virtual assistant system may then identify the requested information about the character, such as a character description or role, from metadata associated with the displayed content (e.g., "Blanche is one of a group of lawyers and is known as a troublemaker in Hartford"). Similarly, a user may request an episode summary (e.g., "what happened in the previous episode?"), and the response may be identified from metadata associated with the displayed content, such as an episode synopsis.

In some examples, the content displayed on display 112-2 may include menu content, and such menu content may similarly be used to determine user intent from voice input and to determine responses to user queries. FIGS. 41A-41B show exemplary pages of the program menu 830-2. FIG. 41A shows a first page of media options 832-2, and FIG. 41B shows a second page of media options 832-2 (which may be a continuation of a content listing that extends beyond a single page).

In one example, a user request to play content may include an ambiguous reference to something shown on display 112-2 in menu 830-2. For example, a user viewing menu 830-2 may request to view "that" football game, "that" basketball game, the vacuum cleaner advertisement, the legal program, and so forth. The particular program desired may not be clear from the speech input alone. However, in some examples, the content shown on display 112-2 may be used to disambiguate the user request and determine the user intent. In the illustrated example, the media options in menu 830-2 (and, in some examples, metadata associated with the media options) can be used to determine user intent from commands that include an ambiguous reference. For example, "that" football game may be resolved to the football game on the sports channel. "That" basketball game may be resolved to the basketball game on the college sports channel. The vacuum cleaner advertisement may be resolved to a paid programming show (e.g., based on metadata associated with the program describing a vacuum cleaner). The legal program may be resolved to a courtroom drama based on metadata associated with the program and/or synonym matching, fuzzy matching, or other matching techniques. Thus, the presence of the various media options 832-2 in menu 830-2 on display 112-2 may be used to disambiguate a user request.
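
The matching described above can be sketched as scoring each displayed menu option against the spoken request using word overlap, a synonym table, and fuzzy string similarity. The menu entries, descriptions, synonym table, and scoring weights below are illustrative assumptions.

```python
# A minimal sketch; the menu entries, descriptions, and synonym table are assumptions.
import difflib

SYNONYMS = {"legal": ["courtroom", "law", "attorney"],
            "football": ["soccer"]}  # hypothetical synonym table

def resolve_menu_reference(request, menu_options):
    """Match an ambiguous spoken reference against displayed menu options using
    title/description matching, synonyms, and fuzzy string matching."""
    request_l = request.lower()
    best, best_score = None, 0.0
    for option in menu_options:
        text = (option["title"] + " " + option.get("description", "")).lower()
        score = difflib.SequenceMatcher(None, request_l, text).ratio()
        for word in request_l.split():
            if word in text:
                score += 0.5
            for syn in SYNONYMS.get(word, []):
                if syn in text:
                    score += 0.5
        if score > best_score:
            best, best_score = option, score
    return best

menu = [{"title": "Football game", "description": "live on the sports channel"},
        {"title": "Paid programming", "description": "vacuum cleaner infomercial"},
        {"title": "Courtroom drama", "description": "legal series"}]
print(resolve_menu_reference("play that legal show", menu)["title"])  # Courtroom drama
```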

In some examples, the displayed menu may be navigated using a cursor, joystick, arrow, button, gesture, or the like. In such cases, focus may be displayed on the selected item. For example, a menu item that is selected and has focus may be emphasized by displaying the selected item in bold, underlined, outlined by a border, in a size larger than other menu items, shaded, inverted, illuminated, and/or with any other feature. For example, selected media option 2330-2 in fig. 41A may have focus as the currently selected media option and be displayed with a large size, underlined font, and a border.

In some examples, a request to play content or select a menu item may include an ambiguous reference to the menu item having focus. For example, a user viewing menu 830-2 shown in FIG. 41A may request that "that" program be played (e.g., "play that program"). Similarly, the user may issue various other commands associated with the menu item having focus, such as play, delete, hide, remind me to watch, record, and so forth. The particular menu item or program desired may not be clear from the voice input alone. However, the content shown on display 112-2 may be used to disambiguate the user request and determine the user intent. In particular, the fact that the selected media option 2330-2 has focus in menu 830-2 may be used to identify the intended media subject of any of the following commands: a command referencing "that" item, a command without a subject (e.g., play, delete, hide, etc.), or any other ambiguous command referencing the media content with focus. Thus, the menu item with focus can be used to determine user intent from the speech input.
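
A brief sketch of resolving such subjectless or "that"-style commands to the focused menu item follows; the command wording and the focus flag are assumptions for the example.

```python
# Sketch only; the command set and the focus flag are illustrative assumptions.
def resolve_command_target(command, menu_items):
    """Apply a command such as 'play', 'record', or 'play that program' to the
    menu item that currently has focus when no explicit target is named."""
    focused = next((item for item in menu_items if item.get("has_focus")), None)
    explicit = [item for item in menu_items
                if item["title"].lower() in command.lower()]
    return explicit[0] if explicit else focused

items = [{"title": "Sports Tonight", "has_focus": False},
         {"title": "Their Show", "has_focus": True}]
print(resolve_command_target("record that program", items)["title"])  # Their Show
```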

Just as the viewing history of media content may be used to disambiguate a user request (e.g., content that was displayed when the user initiated the request but has since passed), previously displayed menu or search result content may similarly be used to disambiguate a later user request after the user has navigated on (e.g., to a later menu or later search results). For example, FIG. 41B shows a second page of menu 830-2 with additional media options 832-2. The user may proceed to the second page shown in FIG. 41B but refer back to the content shown on the first page shown in FIG. 41A (e.g., media options 832-2 shown in FIG. 41A). For example, having moved to the second page of menu 830-2, the user may request to view "that" football game, "that" basketball game, or the legal program, all of which are media options 832-2 that were recently displayed on the previous page of menu 830-2. Such references may be ambiguous, but the recently displayed menu content from the first page of menu 830-2 may be used to determine user intent. In particular, the recently displayed media options 832-2 of FIG. 41A may be analyzed to identify the particular football game, basketball game, or courtroom drama referenced in the ambiguous example requests. In some examples, results may be biased based on how recently the content was displayed (e.g., the most recently viewed results page is weighted more heavily than pages viewed earlier). In this way, the viewing history of content recently shown on display 112-2 may be used to determine user intent. It should be understood that any recently displayed content may be used, such as previously displayed search results, previously displayed programs, previously displayed menus, and the like. This may allow users to return to content they have seen before without having to find and navigate to the particular view in which they saw it.
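
One way to realize the recency bias described above is to score candidate items from recently displayed pages with an exponentially decaying weight on how long ago each page was shown. The sketch below is illustrative only; the half-life constant and history format are assumptions.

```python
# Illustrative recency weighting; the decay constant and history format are assumptions.
import time

def resolve_from_history(request, display_history, half_life_seconds=120.0):
    """Score items from recently displayed pages, biasing toward items shown
    more recently (e.g., the current page over an earlier page)."""
    now = time.time()
    best, best_score = None, 0.0
    for shown_at, items in display_history:
        recency = 0.5 ** ((now - shown_at) / half_life_seconds)
        for item in items:
            overlap = sum(1 for w in request.lower().split()
                          if w in item["title"].lower())
            score = overlap * recency
            if score > best_score:
                best, best_score = item, score
    return best

history = [(time.time() - 300, [{"title": "College basketball game"}]),
           (time.time() - 5, [{"title": "Late night talk show"}])]
print(resolve_from_history("watch that basketball game", history)["title"])
# College basketball game (matches even though it was on an earlier page)
```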

In other examples, various display cues shown in a menu or results list on display 112-2 may be used to disambiguate a user request and determine user intent. FIG. 42 shows an exemplary media menu divided into categories, where one category (movies) has focus. FIG. 42 illustrates a category interface 2440-2 that may include a rotating carousel interface of categorized media options, including TV options 2442-2, movie options 2444-2, and music options 2446-2. As shown, the music category is only partially displayed, and the carousel interface may be shifted to display additional content to the right (e.g., as indicated by the arrow), as if the media were rotating in a carousel. In the illustrated example, the movie category has focus, indicated by an underlined title and a border, but focus may be indicated in any of a number of other ways (e.g., making the category larger than the other categories so it appears closer to the user, glowing, etc.).

In some examples, a request to play content or select a menu item may include an ambiguous reference to a menu item within a group of items (e.g., a category). For example, a user viewing category interface 2440-2 may request that a football program be played ("play the football program"). The particular menu item or program desired may not be clear from the voice input alone. Further, the query could be resolved to more than one program displayed on display 112-2. For example, the request for a football program might refer to a football game listed in the television program category or a football movie listed in the movie category. The content shown on display 112-2, including display cues, may be used to disambiguate the user request and determine user intent. In particular, the fact that the movie category has focus in category interface 2440-2 can be used to identify the particular football program desired, which is likely to be the football movie because focus is on the movie category. Thus, a media category (or any other media grouping) having focus as shown on display 112-2 may be used to determine user intent from voice input. It should also be understood that the user may make various other requests associated with categories, such as requests to display certain categories of content (e.g., show me comedy movies, show me horror movies, etc.).

In other examples, the user may reference a menu or media item shown on display 112-2 in various other ways, and may similarly determine user intent based on the displayed content. It should be appreciated that metadata associated with the displayed content (e.g., television program description, movie description, etc.), fuzzy matching techniques, synonym matching, etc. may also be combined with the displayed content to determine user intent from the speech input. Thus, various forms of user requests (including natural language requests) may be accommodated, and user intent may be determined according to various examples discussed herein.

It should be appreciated that the content displayed on the display 112-2 may be used alone or in combination with content displayed on the user device 102-2 or on a display associated with the remote control 106-2 in determining the user's intent. Likewise, it should be understood that the virtual assistant query may be received at any of the various devices communicatively coupled to the television set-top box 104-2, and that the content displayed on the display 112-2 may be used to determine user intent regardless of which device receives the query. The query results may likewise be displayed on display 112-2 or on another display (e.g., on user device 102-2).

Additionally, in any of the various examples discussed herein, the virtual assistant system may navigate menus and select menu options without requiring the user to open the menu and navigate to the menu item manually. For example, an options menu may appear after selecting media content or a menu button (such as selecting movie option 2444-2 in FIG. 42). Menu options may include alternatives to simply playing the media, such as setting a reminder to view the media later, creating a recording of the media, adding the media to favorites, hiding the media from further views, and so forth. The user may issue a virtual assistant command while viewing content that has an associated menu or submenu options, where navigation to that menu or submenu would otherwise be required to make a selection. For example, a user viewing the category interface 2440-2 of FIG. 42 may issue any menu command associated with movie option 2444-2 without manually opening the associated menu. For example, the user may request to add the football movie to a favorites list, record the nightly news, and set a reminder to watch movie B without navigating to the menus or submenus associated with those media options in which such commands might otherwise be available. Thus, the virtual assistant system may navigate menus and submenus to execute commands on behalf of the user, regardless of whether those menu options appear on display 112-2. This may simplify user requests and reduce the number of clicks or selections the user must make to achieve a desired menu function.

FIG. 43 illustrates an exemplary process 2500-2 for controlling television interaction using media content shown on a display and a viewing history of media content. At block 2502-2, a voice input may be received from a user, the voice input comprising a query associated with content shown on a television display. For example, the voice input may include a query about characters, actors, movies, television programs, sporting events, team members, etc. appearing on display 112-2 of system 100-2 (shown via television set-top box 104-2). Transcript 1916-2 of FIG. 37 includes, for example, a query associated with the actresses shown in video 480-2 on display 112-2. Similarly, transcript 2122-2 of FIG. 39 includes a query associated with a character in video 480-2 shown on display 112-2. The voice input may also include a query associated with menu or search content appearing on display 112-2, such as a query to select a particular menu item or to obtain information about a particular search result. For example, the displayed menu content may include media options 832-2 of menu 830-2 in FIGS. 41A and 41B. The displayed menu content may also include TV options 2442-2, movie options 2444-2, and/or music options 2446-2 appearing in category interface 2440-2 of FIG. 42.

Referring again to process 2500-2 of FIG. 43, at block 2504-2, user intent for the query may be determined based on the content shown and the viewing history of media content. For example, user intent may be determined based on displayed or recently displayed scenes of television programs, sporting events, movies, and the like. User intent may also be determined based on displayed or recently displayed menu or search content. The displayed content may also be analyzed along with metadata associated with the content to determine user intent. For example, the content shown and described with reference to FIGS. 37, 39, 41A, 41B, and 42 may be used alone or in combination with metadata associated with the displayed content to determine user intent.

At block 2506-2, results of the query may be displayed based on the determined user intent. For example, results similar to the assistant response 2020-2 in the assistant response interface 2018-2 shown in FIG. 38 may be displayed on the display 112-2. In another example, text and selectable media may be provided as results, such as the assistant text response 2226-2 and the selectable video link 2228-2 in the assistant response interface 2224-2 shown in FIG. 40. As another example, displaying the query results may include displaying or playing the selected media content (e.g., playing the selected video on display 112-2 via television set-top box 104-2). Thus, user intent may be determined from speech input in various ways using the displayed content and associated metadata as context.

In some examples, the user may be provided with virtual assistant query suggestions, for example, to inform the user of available queries, to suggest content the user may like, to teach the user how to use the system, to encourage the user to find additional media content to consume, and so forth. In some examples, the query suggestions may include general suggestions of possible commands (e.g., find comedies, show me the television guide, search for action movies, turn on closed captioning, etc.). In other examples, the query suggestions may include targeted suggestions related to the displayed content (e.g., add this program to a watch list, share this program via social media, show me the soundtrack for this movie, show me the book the guest is selling, show me a trailer for the movie the guest is promoting, etc.), user preferences (e.g., closed captioning usage, etc.), content the user owns, content stored on the user's device, notifications, alerts, a viewing history of media content (e.g., recently displayed menu items, recently displayed scenes of a show, actors who recently appeared, etc.), and so forth. The suggestions may be displayed on any device, including on display 112-2 via television set-top box 104-2, on user device 102-2, or on a display associated with remote control 106-2. Additionally, the suggestions may be determined based on which devices are nearby and/or in communication with television set-top box 104-2 at a particular time (e.g., suggesting content from the devices of users watching the television in the room at that time). In other examples, the suggestions may be determined based on various other contextual information, including time of day, crowd-sourced information (e.g., popular programs being watched at a given time), live programming (e.g., live sporting events), a viewing history of media content (e.g., the last several programs watched, a recently viewed set of search results, a recently viewed set of media options, etc.), or any of a variety of other contextual information.

FIG. 44 illustrates an exemplary suggestion interface 2650-2 that includes content-based virtual assistant query suggestions 2652-2. In one example, query suggestions may be provided in an interface (such as interface 2650-2) in response to input received from a user requesting suggestions. Input requesting query suggestions may be received, for example, from user device 102-2 or remote control 106-2. In some examples, the input may include a button press, a button double-click, a menu selection, a voice command (e.g., "show me some suggestions," "what can you do for me," "what are some options," etc.) received at user device 102-2 or remote control 106-2, and so forth. For example, a user may double-click a physical button on remote control 106-2 to request query suggestions, or may double-click a physical or virtual button on user device 102-2 to request query suggestions while viewing an interface associated with television set-top box 104-2.

The suggestion interface 2650-2 may be displayed over moving images, such as video 480-2, or may be displayed over any other background content (e.g., a menu, a still image, a paused video, etc.). As with other interfaces discussed herein, the suggestion interface 2650-2 may slide up from the bottom of display 112-2 in an animated fashion and may occupy a minimal amount of space while still conveying the desired information, in order to limit interference with video 480-2 in the background. In other examples, a larger suggestion interface may be provided when the background content is static (e.g., a paused video, a menu, an image, etc.).

In some examples, the virtual assistant query suggestions may be determined based on the media content being displayed or a viewing history of media content (e.g., movies, television shows, sporting events, recently viewed programs, recently viewed menus, recently viewed movie scenes, recent scenes of a television episode being played, etc.). For example, FIG. 44 illustrates content-based suggestions 2652-2 that may be determined based on the displayed video 480-2 shown in the background, where characters 1910-2, 1912-2, and 1914-2 appear on display 112-2. Metadata associated with the displayed content (e.g., descriptive details of the media content) may also be used to determine query suggestions. The metadata may include various information associated with the displayed content, including a program title, a character list, an actor list, an episode description, a team roster, team rankings, a program synopsis, movie details, plot descriptions, a director name, a producer name, actor appearance times, sporting event schedules, sports scores, categories, season episode listings, related media content, or various other associated information. For example, the metadata associated with video 480-2 may include the names of characters 1910-2, 1912-2, and 1914-2 and the actresses playing those characters. The metadata may also include a plot description of video 480-2, a description of a previous or next episode (where video 480-2 is an episode in a television series), and so on.

FIG. 44 illustrates various content-based suggestions 2652-2 that may be shown in the suggestion interface 2650-2 based on video 480-2 and metadata associated with video 480-2. For example, the character 1910-2 of video 480-2 may be named "Blanche," and the character name may be used to formulate a query suggestion for information about the character Blanche or the actress playing the character (e.g., "who is the actress playing Blanche?"). Character 1910-2 may be identified from metadata associated with video 480-2 (e.g., a character list, an actor list, times associated with actor appearances, etc.). In other examples, facial recognition may be used to identify the actresses and/or characters appearing on display 112-2 at a given time. Various other query suggestions associated with characters in the media itself may be provided, such as queries related to character roles, profile information, relationships with other characters, and so forth.

In another example, an actor or actress appearing on display 112-2 may be identified (e.g., based on metadata and/or facial recognition), and query suggestions associated with that actor or actress may be provided. Such query suggestions may include one or more of the roles played, performance awards, age, other media in which they appear, biography, family members, relationships, or any of a variety of other details about the actor or actress. For example, character 1914-2 may be played by an actress named Whitney Davidson, and the name Whitney Davidson may be used to formulate a query suggestion to identify other movies, television programs, or other media in which the actress Whitney Davidson appears (e.g., "what else is Whitney Davidson in?").

In other examples, details about the program may be used to formulate query suggestions. Query suggestions may be formulated using episode synopses, episode descriptions, episode lists, episode titles, series titles, and the like. For example, a suggestion may be provided to describe events that occurred in a previous episode of the television program (e.g., "what happened in the previous episode?"), to which the virtual assistant system may respond with a synopsis of the previous episode, identified based on the episode (and its associated metadata) currently displayed on display 112-2. In another example, a suggestion may be provided to set a recording for the next episode, which may be accomplished by the system identifying the next episode based on the currently playing episode shown on display 112-2. As another example, a suggestion may be provided to obtain information about the current episode or program appearing on display 112-2, and the title of the program obtained from the metadata may be used to formulate the query suggestion (e.g., "what is this episode of 'Their Show' about?" or "what is 'Their Show' about?").

In another example, query suggestions may be formulated using categories, classifications, ratings, awards, descriptions, and the like associated with the displayed content. For example, video 480-2 may correspond to a television program described as a comedy with female leads. A query suggestion can be formulated based on this information to identify other programs with similar characteristics (e.g., "find me other comedies with female leads"). In other examples, the suggestions may be determined based on user subscriptions, content available for playback (e.g., content on television set-top box 104-2, content on user device 102-2, content available for streaming, etc.), and so on. For example, possible query suggestions may be filtered based on whether informational or media results are available. Query suggestions that would not result in playable media content or an informational answer may be excluded, and/or query suggestions with readily available informational answers or playable media content may be provided (or weighted more heavily when determining which suggestions to provide). Thus, the displayed content and associated metadata may be used in a variety of ways to determine query suggestions.
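
The sketch below illustrates, under assumed metadata fields and suggestion templates, how content-based suggestions might be generated and then filtered by an availability check; the `is_answerable` callback stands in for the catalog and subscription checks described above.

```python
# Sketch with assumed metadata fields; the suggestion templates are illustrative.
def suggest_queries(metadata, is_answerable):
    """Formulate query suggestions from displayed-content metadata and keep only
    those expected to yield playable media or an informational answer."""
    candidates = []
    for member in metadata.get("cast", []):
        candidates.append(f"Who is the actress playing {member['character']}?")
        candidates.append(f"What else is {member['actress']} in?")
    if metadata.get("previous_episode"):
        candidates.append("What happened in the previous episode?")
    if metadata.get("genre"):
        candidates.append(f"Find me other {metadata['genre']} shows")
    return [query for query in candidates if is_answerable(query)]

show = {"cast": [{"character": "Blanche", "actress": "Jennifer Jones"}],
        "previous_episode": "Episode 202", "genre": "comedy"}
# A stand-in availability check; a real system would consult content catalogs.
print(suggest_queries(show, is_answerable=lambda q: "Find" not in q))
```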

FIG. 45 illustrates an exemplary selection interface 2754-2 for confirming the selection of a suggested query. In some examples, the user may select a displayed query suggestion by speaking the query, selecting the query with a button, navigating to the query with a cursor, and so forth. In response to the selection, the selected suggestion may be briefly displayed in a confirmation interface (such as selection interface 2754-2). In one example, the selected suggestion 2756-2 may be animated to move from wherever it appears in the suggestion interface 2650-2 to the position next to the command receipt confirmation 490-2 shown in FIG. 45 (e.g., as indicated by the arrow), and the other, unselected suggestions may be hidden from the display.

FIGS. 46A-46B illustrate an exemplary virtual assistant answer interface 2862-2 based on a selected query. In some examples, informational answers to the selected query may be displayed in an answer interface, such as answer interface 2862-2. When transitioning from the suggestion interface 2650-2 or the selection interface 2754-2, a transition interface 2858-2 as shown in FIG. 46A may be shown. In particular, as the next content scrolls upward from the bottom of display 112-2, the previously displayed content within the interface may scroll upward and out of the interface. For example, the selected suggestion 2756-2 may slide or scroll up until it disappears at the top edge of the virtual assistant interface, and the assistant result 2860-2 may slide or scroll up from the bottom of display 112-2 until it reaches the position shown in FIG. 46B.

Answer interface 2862-2 may include informational answers and/or media results responsive to the selected query suggestion (or responsive to any other query). For example, assistant result 2860-2 may be determined and provided in response to the selected query suggestion 2756-2. In particular, in response to a request for a summary of the previous episode, the previous episode may be identified based on the displayed content, and an associated description or synopsis may be identified and provided to the user. In the illustrated example, the assistant result 2860-2 can describe the previous episode of the program corresponding to video 480-2 on display 112-2 (e.g., "In episode 203 of 'Their Show', Blanche was invited to give a guest lecture for a university psychology course, and Julia and Melissa showed up unexpectedly, causing a commotion."). Informational answers and media results (e.g., selectable video links) may also be presented in any other manner discussed herein, or the results may be presented in various other ways (e.g., speaking answers aloud, playing content immediately, displaying an animation, displaying an image, etc.).

In another example, notifications or alerts may be used to determine virtual assistant query suggestions. FIG. 47 shows a media content notification 2964-2 (although any notification may be considered in determining suggestions) and a suggestion interface 2650-2 with both notification-based suggestions 2966-2 and content-based suggestions 2652-2 (which may include some of the same suggestions discussed above with reference to FIG. 44). In some examples, the content of the notification may be analyzed to identify names, titles, topics, actions, etc. of relevant media. In the illustrated example, notification 2964-2 includes an alert notifying the user that alternative media content is available for display; specifically, a sporting event is live, and the state of the game may be of interest to the user (e.g., "five minutes left in the game, and the Zeta and Alpha teams are tied"). In some examples, the notification may be temporarily displayed at the top of display 112-2. The notification may slide down from the top of display 112-2 (as indicated by the arrow) to the position shown in FIG. 47, be displayed for a period of time, and then slide back up to the top of display 112-2 and disappear.

The notification or alert may notify the user of various information, such as available alternative media content (e.g., an alternative to what is currently shown on display 112-2), available live television programming, newly downloaded media content, recently added subscription content, suggestions received from friends, media received from another device, and so forth. Notifications may also be personalized for the household or for an identified user viewing the media (e.g., identified based on user authentication using account selection, voice recognition, a password, etc.). In one example, the system may interrupt the display and show a notification based on content that is likely to be desired, such as displaying notification 2964-2 for a user likely to want the notified content based on the user's profile information, favorite teams, preferred sports, viewing history, and so forth. For example, sports scores, game status, time remaining, etc. may be obtained from sports data feeds, news sources, social media discussions, and the like, and may be used to identify possible alternative media content about which to notify the user.

In other examples, popular media content (e.g., content popular among many users) may be surfaced via an alert or notification to suggest an alternative to the currently viewed content (e.g., to notify the user of a popular show, or of a show that has just begun in one of the user's favorite categories or is otherwise available for viewing). In the illustrated example, the user may follow one or both of the Zeta and Alpha teams (or may follow football or a particular sport, league, etc.). The system may determine that available live content matches the user's preferences (e.g., a game on another channel matches the user's preferences, the game has little time remaining, and the score is close). The system may then determine to alert the user to the likely desired content via notification 2964-2. In some examples, the user may select notification 2964-2 (or a link within notification 2964-2) (e.g., using a remote control button, a cursor, a voice request, etc.) to switch to the suggested content.
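
A hedged sketch of the kind of decision described above (a followed team, a close score, little time remaining) is shown below; the thresholds and feed format are assumptions rather than values used by the system.

```python
# A hedged sketch; thresholds and the feed format are assumptions, not system values.
def should_notify(user_profile, live_game):
    """Decide whether to interrupt with an alternative-content notification,
    e.g. a close game involving a followed team that is almost over."""
    follows_team = bool(set(user_profile["followed_teams"])
                        & {live_game["home_team"], live_game["away_team"]})
    close_score = abs(live_game["home_score"] - live_game["away_score"]) <= 3
    ending_soon = live_game["minutes_remaining"] <= 5
    return follows_team and close_score and ending_soon

profile = {"followed_teams": ["Zeta"]}
game = {"home_team": "Zeta", "away_team": "Alpha",
        "home_score": 21, "away_score": 21, "minutes_remaining": 5}
print(should_notify(profile, game))  # True
```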

Virtual assistant query suggestions can be determined based on the notification by analyzing the notification content to identify terms, names, titles, topics, actions, etc. of relevant media. The identified information may then be used to formulate appropriate virtual assistant query suggestions, such as notification-based suggestions 2966-2 based on notification 2964-2. For example, a notification may be displayed about the exciting conclusion of a live sporting event. Then, if the user requests query suggestions, a suggestion interface 2650-2 may be displayed that includes query suggestions to view the sporting event, query team statistics, or look up content related to the notification (e.g., "switch to the Zeta/Alpha game," "what are the Zeta team's statistics," "what other football games are on," etc.). Various other query suggestions may likewise be determined and provided to the user based on the particular terms of interest identified in the notification.

Virtual assistant query suggestions related to media content may also be determined from content on a user device (e.g., content for consumption via television set-top box 104-2), and suggestions may also be provided on the user device. In some examples, playable device content may be identified on a user device connected to or in communication with television set-top box 104-2. FIG. 48 shows user device 102-2 with exemplary photo and video content in interface 1360-2. It may be determined what content is available for playback on the user device or what content the user may want to play back. For example, playable media 3068-2 may be identified based on an active application (e.g., a photo and video application) or based on stored content, whether or not that content is displayed in interface 1360-2 at a given time (e.g., in some examples content may be identified based on the active application, while in other examples it may be identified even when not displayed). Playable media 3068-2 may include, for example, video 1362-2, photo album 1364-2, and photos 1366-2, each of which may include personal user content that may be transmitted to television set-top box 104-2 for display or playback. In other examples, any photos, videos, music, game interfaces, application interfaces, or other media content stored or displayed on user device 102-2 may be identified and used to determine query suggestions.

Where playable media 3068-2 is identified, virtual assistant query suggestions may be determined and provided to the user. FIG. 49 illustrates an exemplary television assistant interface 3170-2 on user device 102-2 that contains virtual assistant query suggestions based on playable user device content and based on video content shown on a separate display (e.g., display 112-2 associated with television set-top box 104-2). Television assistant interface 3170-2 may include a virtual assistant interface dedicated to interacting with media content and/or television set-top box 104-2. The user may request query suggestions on user device 102-2 by, for example, double-clicking a physical button while viewing interface 3170-2. Other inputs may similarly be used to indicate a request for query suggestions. As shown, assistant greeting 3172-2 can introduce the provided query suggestions (e.g., "here are some suggestions for controlling your television experience").

The virtual assistant query suggestions provided on the user device 102-2 may include suggestions based on various source devices as well as general suggestions. For example, device-based suggestions 3174-2 may include query suggestions based on content stored on user device 102-2 (including content displayed on user device 102-2). The content-based suggestion 2652-2 may be based on content displayed on a display 112-2 associated with the television set-top box 104-2. General suggestions 3176-2 may include general suggestions that may not be associated with particular media content or a particular device with media content.

Device-based suggestions 3174-2 may be determined, for example, based on playable content (e.g., videos, music, photos, game interfaces, application interfaces, etc.) identified on user device 102-2. In the illustrated example, the device-based suggestions 3174-2 may be determined based on the playable media 3068-2 shown in FIG. 48. For example, given that photo album 1364-2 is identified as playable media 3068-2, the details of photo album 1364-2 may be used to formulate a query suggestion. The system may identify the content as an album of photos that can be displayed as a slideshow and may then (in some cases) use the title of the album to formulate a query suggestion to show a slideshow of that particular album (e.g., "show a slideshow of the 'graduation album' from your photos"). In some examples, the suggestion may include an indication of the source of the content (e.g., "from your photos," "from Jennifer's phone," "from Daniel's tablet," etc.). The suggestion may also reference particular content using other details, such as a suggestion to view photos taken after a particular date (e.g., "show photos from June 21st"). In another example, video 1362-2 may be identified as playable media 3068-2, and the title of the video (or other identifying information) may be used to formulate a query suggestion to play the video (e.g., "play the 'graduation video' from your videos").
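
As an illustration of formulating device-based suggestions from identified playable content, including a source indication, consider the following sketch; the content records and suggestion wording are assumptions.

```python
# Illustrative only; the content records and suggestion wording are assumptions.
def device_based_suggestions(playable_items, source_label="your phone"):
    """Turn playable content found on a connected user device into query
    suggestions, including an indication of the content's source."""
    suggestions = []
    for item in playable_items:
        if item["kind"] == "album":
            suggestions.append(
                f'Show a slideshow of "{item["title"]}" from {source_label}')
        elif item["kind"] == "video":
            suggestions.append(f'Play "{item["title"]}" from {source_label}')
    return suggestions

items = [{"kind": "album", "title": "Graduation Album"},
         {"kind": "video", "title": "Graduation Video"}]
for s in device_based_suggestions(items, source_label="Jake's phone"):
    print(s)
```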

In other examples, content available on other connected devices may be identified and used to formulate virtual assistant query suggestions. For example, content from each of two user devices 102-2 connected to a common television set top box 104-2 may be identified and used to formulate virtual assistant query suggestions. In some examples, the user may select which content is made visible to the system for sharing, and other content may be hidden from the system so that it is not included in the query suggestions or otherwise made available for playback.

The content-based suggestions 2652-2 shown in interface 3170-2 of FIG. 49 may be determined, for example, based on content displayed on display 112-2 associated with television set-top box 104-2. In some examples, the content-based suggestions 2652-2 may be determined in the same manner as described above with reference to FIG. 44. In the illustrated example, the content-based suggestions 2652-2 shown in FIG. 49 may be based on the video 480-2 shown on display 112-2 (e.g., as shown in FIG. 44). In this way, virtual assistant query suggestions can be derived based on content displayed or available on any number of connected devices. In addition to the targeted suggestions, general suggestions 3176-2 may be predetermined and provided (e.g., show me the guide, what sporting events are on, what is playing on channel three, etc.).

FIG. 50 illustrates an exemplary suggestion interface 2650-2, shown on display 112-2 associated with television set-top box 104-2, containing connected device-based suggestions 3275-2 and content-based suggestions 2652-2. In some examples, the content-based suggestions 2652-2 may be determined in the same manner as described above with reference to FIG. 44. As described above, virtual assistant query suggestions can be formulated based on content on any number of connected devices, and suggestions can be provided on any number of connected devices. FIG. 50 shows connected device-based suggestions 3275-2 that may be derived from content on user device 102-2. For example, playable content may be identified on user device 102-2, such as the photo and video content displayed as playable media 3068-2 in interface 1360-2 of FIG. 48. The identified playable content on user device 102-2 may then be used to formulate suggestions that may be displayed on display 112-2 associated with television set-top box 104-2. In some examples, the connected device-based suggestions 3275-2 may be determined in the same manner as the device-based suggestions 3174-2 described above with reference to FIG. 49. Further, as described above, in some examples, identifying source information may be included in a suggestion, such as the "in Jake's phone" shown in connected device-based suggestions 3275-2. Accordingly, virtual assistant query suggestions provided on one device may be derived based on content from another device (e.g., displayed content, stored content, etc.). It should be understood that the connected devices may include remote storage devices accessible to television set-top box 104-2 and/or user device 102-2 (e.g., accessing media content stored in the cloud to formulate suggestions).

It should be appreciated that any combination of virtual assistant query suggestions from various sources may be provided in response to a request for suggestions. For example, suggestions from various sources may be combined randomly, or may be presented based on popularity, user preferences, selection history, and so forth. Further, queries can be determined in various other ways and presented based on various other factors (such as query history, user preferences, query popularity, etc.). Additionally, in some examples, query suggestions may be cycled automatically by replacing displayed suggestions with new alternative suggestions after a delay. It should also be appreciated that a user may select a suggestion displayed on any interface by, for example, tapping a touchscreen, speaking the query, selecting the query with navigation keys, selecting the query with a button, selecting the query with a cursor, etc., and an associated response (e.g., an informational and/or media response) may then be provided.

In any of the various examples, the virtual assistant query suggestions may also be filtered based on available content. For example, possible query suggestions that would return media content unavailable to the user (e.g., content requiring a cable subscription the user lacks) or that may have no associated informational answer may be disqualified as suggestions and prevented from being displayed. On the other hand, possible query suggestions that would result in immediately playable media content to which the user has access may be weighted more heavily or otherwise favored for display relative to other possible suggestions. In this way, the availability of media content for viewing by the user may also be used to determine which virtual assistant query suggestions to display.

Additionally, in any of the various examples, preloaded query answers may be provided instead of or in addition to suggestions (e.g., in suggestion interface 2650-2). Such preloaded query answers may be selected and provided based on personal usage and/or the current context. For example, a user watching a particular program may tap a button, double-click a button, long-press a button, etc. to receive suggestions. Instead of or in addition to query suggestions, context-based information may be automatically provided, such as identifying the song or soundtrack being played (e.g., "this song is Performance Piece"), identifying cast members of the currently playing episode (e.g., "actress Janet Quinn plays Genevieve"), identifying similar media (e.g., "program Q is similar to this one"), or providing the results of any of the other queries discussed herein.

Further, the user may be provided with an affordance (e.g., a selectable rating scale) in any of the various interfaces to rate media content and thereby inform the virtual assistant of user preferences. In other examples, the user may speak rating information as a natural language command (e.g., "I love this," "I don't like this program," etc.). In still other examples, various other functional and informational elements may be provided in any of the various interfaces shown and described herein. For example, an interface may also include links to important functions and locations, such as search links, purchase links, media links, and the like. In another example, an interface may also include recommendations of what to watch next based on the currently playing content (e.g., selecting similar content). As yet another example, an interface may also include recommendations of what to watch next based on personalized tastes and/or recent activity (e.g., selecting content based on user ratings, user-entered preferences, recently watched programs, etc.). As another example, an interface can also include instructions for user interaction (e.g., "press and hold to talk to the virtual assistant," "tap once to get suggestions," etc.). In some examples, providing preloaded answers, suggestions, and the like may provide a pleasant user experience while making content readily accessible to a wide variety of users (e.g., users of all skill levels, regardless of language or other control impediments).

FIG. 51 illustrates an exemplary process 3300-2 of suggesting virtual assistant interactions (e.g., virtual assistant queries) for controlling media content. At block 3302-2, media content may be displayed on the display. For example, as shown in FIG. 44, video 480-2 may be displayed on display 112-2 via television set-top box 104-2, or interface 1360-2 may be displayed on touch screen 246-2 of user device 102-2 as shown in FIG. 48. At block 3304-2, input may be received from a user. The input may include a request for a virtual assistant query suggestion. The input may include a button press, a button double click, a menu selection, a spoken query for suggestions, and the like.

At block 3306-2, a virtual assistant query may be determined based on the media content and/or the viewing history of the media content. For example, a virtual assistant query may be determined based on a displayed program, menu, application, media content list, notification, and the like. In one example, the content-based suggestion 2652-2 may be determined based on the video 480-2 and associated metadata, as described with reference to FIG. 44. In another example, notification-based suggestion 2966-2 may be determined based on notification 2964-2, as described with reference to FIG. 47. In yet another example, the device-based suggestion 3174-2 may be determined based on playable media 3068-2 on the user device 102-2, as described with reference to fig. 48 and 49. In other examples, the connected device based suggestion 3275-2 may be determined based on the playable media 3068-2 on the user device 102-2, as described with reference to FIG. 50.

Referring again to the process 3300-2 of FIG. 51, at block 3308-2, the virtual assistant query may be displayed on the display. For example, the determined query suggestions may be displayed as shown and described with reference to fig. 44, 45, 47, 49, and 50. As described above, query suggestions can be determined and displayed based on various other information. Further, virtual assistant query suggestions provided on one display may be derived based on content from another device having another display. Accordingly, targeted virtual assistant query suggestions can be provided to a user, thereby assisting the user in understanding possible queries and providing desired content suggestions, among other benefits.
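
The overall flow of process 3300-2 can be summarized with a small sketch. The suggestion heuristics and data shapes below are assumptions made for illustration; the actual determination may draw on any of the sources described above.

```python
def suggest_queries(displayed_content, viewing_history, notification=None, max_suggestions=3):
    """Determine candidate virtual assistant queries from the displayed content,
    the viewing history, and any pending notification (all shapes are illustrative)."""
    suggestions = []
    if notification is not None:
        suggestions.append(f"Switch to {notification}")
    if displayed_content.get("type") == "video":
        title = displayed_content["title"]
        suggestions.append(f"Who stars in {title}?")
        suggestions.append(f"What else has the lead actor of {title} been in?")
    if viewing_history:
        suggestions.append(f"Play the next episode of {viewing_history[-1]}")
    return suggestions[:max_suggestions]

def handle_suggestion_request(displayed_content, viewing_history, notification=None):
    """Blocks 3304-2 to 3308-2: on a suggestion request, determine queries and display them."""
    for query in suggest_queries(displayed_content, viewing_history, notification):
        print("SUGGESTION:", query)

handle_suggestion_request({"type": "video", "title": "Program Q"},
                          ["Show A"], notification="the game on channel 5")
```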

Further, in any of the various examples discussed herein, the various aspects may be personalized for a particular user. User data, including contacts, preferences, locations, favorite media, etc., can be used to interpret voice commands and facilitate user interaction with the various devices discussed herein. The various processes discussed herein may also be modified in various other ways based on user preferences, contacts, text, usage history, profile data, demographic data, and the like. Further, such preferences and settings may be updated over time based on user interactions (e.g., frequently spoken commands, frequently selected applications, etc.). The collection and use of user data available from various sources can be used to improve the delivery to users of invitational content or any other content that may be of interest to them. The present disclosure contemplates that, in some examples, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, telephone numbers, email addresses, home addresses, or any other identifying information.

The present disclosure recognizes that the use of such personal information data in the present technology can be used to the benefit of users. For example, the personal information data may be used to deliver targeted content that is of greater interest to the user. Accordingly, the use of such personal information data enables calculated control of the delivered content. Further, the present disclosure also contemplates other uses for which personal information data is beneficial to the user.

The present disclosure also contemplates that entities responsible for the collection, analysis, disclosure, transmission, storage, or other use of such personal information data will comply with established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. In addition, such collection should occur only after receiving the informed consent of the users. Additionally, such entities should take any needed steps to safeguard and secure access to such personal information data and to ensure that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.

Despite the foregoing, the present disclosure also contemplates examples in which a user selectively blocks the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements may be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology may be configured to allow a user to "opt in" or "opt out" of participation in the collection of personal information data during registration for the service. In another example, the user may choose not to provide location information for targeted content delivery services. As yet another example, the user may choose not to provide precise location information, but to permit the transmission of location zone information.

Thus, while this disclosure broadly covers the use of personal information data to implement one or more of the various disclosed examples, this disclosure also contemplates that the various examples may also be implemented without access to such personal information data. That is, the various examples of the present technology are not rendered inoperable by the lack of all or a portion of such personal information data. For example, content may be selected and delivered to a user by inferring preferences based on non-personal information data or an absolute minimum amount of personal information (e.g., the content being requested by the device associated with the user, other non-personal information available to the content delivery service, or publicly available information).

According to some examples, fig. 52 illustrates a functional block diagram of an electronic device 3400-2 configured according to the principles of various described examples to control television interactions and display associated information using different interfaces, for example, using a virtual assistant. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software that perform the principles of the various described examples. Those skilled in the art will appreciate that the functional blocks described in fig. 52 may be combined or separated into sub-blocks in order to implement the principles of the various described examples. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.

As shown in FIG. 52, electronic device 3400-2 may include a display unit 3402-2 (e.g., display 112-2, touchscreen 246-2, etc.) configured to display media, interfaces, and other content. The electronic device 3400-2 may also include an input unit 3404-2 configured to receive information, such as voice input, tactile input, gesture input, and the like (e.g., a microphone, a receiver, a touch screen, buttons, and the like). The electronic device 3400-2 may also include a processing unit 3406-2 coupled to the display unit 3402-2 and the input unit 3404-2. In some examples, the processing unit 3406-2 may include a voice input receiving unit 3408-2, a media content determining unit 3410-2, a first user interface display unit 3412-2, a selection receiving unit 3414-2, and a second user interface display unit 3416-2.

Processing unit 3406-2 may be configured to receive voice input from a user (e.g., via input unit 3404-2). The processing unit 3406-2 may be further configured to determine media content based on the speech input (e.g., using the media content determination unit 3410-2). The processing unit 3406-2 may be further configured to display using a first user interface having a first size (e.g., on the display unit 3402-2 using the first user interface display unit 3412-2), wherein the first user interface includes one or more selectable links to media content. The processing unit 3406-2 may be further configured to receive a selection of one of the one or more selectable links (e.g., from the input unit 3404-2 using the selection receiving unit 3414-2). The processing unit 3406-2 may be further configured to display, in response to the selection, (e.g., on the display unit 3402-2 using the second user interface display unit 3416-2) a second user interface having a second size, the second size being larger than the first size, wherein the second user interface includes media content associated with the selection.
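
A simplified sketch of this two-interface flow follows; the `search` callable, the dictionary fields, and the size labels are placeholders standing in for the speech-driven media determination and display units described above.

```python
def show_first_interface(voice_query, search):
    """Determine media from the speech input and build a small first interface
    with selectable links (field names are illustrative)."""
    results = search(voice_query)
    return {"size": "small", "links": {r["title"]: r for r in results}}

def show_second_interface(first_interface, selected_title):
    """On selection, expand into a larger second interface containing the media."""
    media = first_interface["links"][selected_title]
    return {"size": "large", "media": media}

def fake_search(query):
    # Stand-in for the media content determination step
    return [{"title": "Weather in Paris (video)", "id": "w1"},
            {"title": "Five-day forecast", "id": "w2"}]

first = show_first_interface("what's the weather like in Paris?", fake_search)
second = show_second_interface(first, "Weather in Paris (video)")
print(first["size"], "->", second["size"], second["media"]["id"])
```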

In some examples, in response to a selection (e.g., of selection receiving unit 3414-2), the first user interface (e.g., of first user interface display unit 3412-2) is expanded into the second user interface (e.g., of second user interface display unit 3416-2). In other examples, the first user interface is overlaid on the content being displayed. In one example, the second user interface is overlaid on the content being displayed. In another example, the voice input (e.g., from the voice input receiving unit 3408-2 of the input unit 3404-2) comprises a query and the media content (e.g., of the media content determination unit 3410-2) comprises results of the query. In yet another example, the first user interface includes a link to the query result that is in addition to one or more selectable links to the media content. In other examples, the query includes a query about weather, and the first user interface includes a link to media content associated with the query about weather. In another example, the query includes a location and the link to the media content associated with the query for weather includes a link to a portion of the media content associated with weather at the location.

In some examples, in response to a selection, the processing unit 3406-2 may be configured to play media content associated with the selection. In one example, the media content comprises a movie. In another example, the media content includes television programming. In another example, the media content includes a sporting event. In some examples, the second user interface (e.g., of second user interface display unit 3416-2) includes a description of the media content associated with the selection. In other examples, the first user interface includes a link to purchase media content.

The processing unit 3406-2 may be further configured to receive additional voice input from the user (e.g., via the input unit 3404-2), where the additional voice input includes a query associated with the displayed content. The processing unit 3406-2 may be further configured to determine a response to a query associated with the displayed content based on the metadata associated with the displayed content. The processing unit 3406-2 may be further configured to display (e.g., on the display unit 3402-2) a third user interface in response to receiving the additional speech input, wherein the third user interface includes the determined response to the query associated with the displayed content.

The processing unit 3406-2 may be further configured to receive an indication to initiate receipt of the speech input (e.g., via the input unit 3404-2). The processing unit 3406-2 may be further configured to display (e.g., on the display unit 3402-2) a readiness confirmation in response to receiving the indication. The processing unit 3406-2 may be further configured to display a listening confirmation in response to receiving the voice input. The processing unit 3406-2 may be further configured to detect an end of the voice input and, in response to detecting the end of the voice input, display a processing confirmation. In some examples, processing unit 3406-2 may be further configured to display a transcription of the speech input.

In some examples, electronic device 3400-2 comprises a television. In other examples, electronic device 3400-2 comprises a television set-top box. In other examples, electronic device 3400-2 includes a remote control. In other examples, electronic device 3400-2 comprises a mobile phone.

In one example, one or more selectable links in the first user interface (e.g., of the first user interface display unit 3412-2) include moving images associated with the media content. In some examples, the moving images associated with the media content include live feeds of the media content. In other examples, the one or more selectable links in the first user interface include still images associated with the media content.

In some examples, the processing unit 3406-2 may be further configured to determine whether the currently displayed content includes moving images or a control menu; in response to determining that the currently displayed content includes moving images, select a small size as the first size of the first user interface (e.g., of the first user interface display unit 3412-2); and in response to determining that the currently displayed content includes a control menu, select a large size, larger than the small size, as the first size of the first user interface (e.g., of the first user interface display unit 3412-2). In other examples, processing unit 3406-2 may be further configured to determine alternative media content for display based on one or more of user preferences, program popularity, and status of a live sporting event, and display a notification including the determined alternative media content.
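
The size-selection rule can be sketched as follows; the content labels and the default branch are assumptions made for illustration.

```python
def first_interface_size(currently_displayed):
    """Pick a smaller first interface over moving images (to limit obstruction)
    and a larger one over a control menu; the labels are illustrative."""
    if currently_displayed == "moving_images":
        return "small"
    if currently_displayed == "control_menu":
        return "large"
    return "small"  # conservative default for unrecognized content

assert first_interface_size("moving_images") == "small"
assert first_interface_size("control_menu") == "large"
```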

Fig. 53 illustrates, in accordance with some examples, a functional block diagram of an electronic device 3500-2 configured in accordance with the principles of various described examples to control television interaction, e.g., using a virtual assistant and a plurality of user devices. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software that perform the principles of the various described examples. Those skilled in the art will appreciate that the functional blocks described in fig. 53 may be combined or separated into sub-blocks in order to implement the principles of the various described examples. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.

As shown in fig. 53, electronic device 3500-2 can include a display unit 3502-2 (e.g., display 112-2, touch screen 246-2, etc.) configured to display media, interfaces, and other content. The electronic device 3500-2 can also include an input unit 3504-2 configured to receive information, such as voice input, tactile input, gesture input, and the like (e.g., microphone, receiver, touch screen, buttons, and the like). The electronic device 3500-2 can also include a processing unit 3506-2 coupled to the display unit 3502-2 and the input unit 3504-2. In some examples, the processing unit 3506-2 may include the voice input receiving unit 3508-2, the user intent determination unit 3510-2, the media content determination unit 3512-2, and the media content playing unit 3514-2.

The processing unit 3506-2 can be configured to receive voice input from a user at a first device (e.g., device 3500-2) having a first display (e.g., in some examples, display unit 3502-2) (e.g., from input unit 3504-2 with voice input receiving unit 3508-2). The processing unit 3506-2 may be further configured to determine a user intent of the voice input based on the content displayed on the first display (e.g., using the user intent determination unit 3510-2). The processing unit 3506-2 may be further configured to determine media content based on the user intent (e.g., using the media content determination unit 3512-2). The processing unit 3506-2 may be further configured to play the media content (e.g., using the media content playing unit 3514-2) on a second device associated with a second display (e.g., the display unit 3502-2 in some examples).
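
A minimal sketch of this first-device/second-device flow is shown below; the intent representation, the `resolve_media` helper, and the `SetTopBox` class are hypothetical stand-ins for the units described above.

```python
class SetTopBox:
    """Hypothetical second device associated with the television display."""
    def play(self, media):
        print("Playing on the television:", media)

def handle_voice_on_first_device(voice_input, first_display_content, resolve_media, second_device):
    """Interpret the voice input in the context of what the first display shows,
    then play the resolved media on the second device."""
    intent = {"request": voice_input, "context": first_display_content}  # intent from displayed content
    media = resolve_media(intent)                                        # determine media from the intent
    second_device.play(media)                                            # play on the second display's device
    return media

handle_voice_on_first_device(
    "show that on the TV",
    {"app": "photo_album", "visible_item": "vacation album"},
    lambda intent: intent["context"]["visible_item"],
    SetTopBox(),
)
```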

In one example, the first device includes a remote control. In another example, the first device comprises a mobile phone. In another example, the first device comprises a tablet computer. In some examples, the second device comprises a television set-top box. In other examples, the second display comprises a television.

In some examples, the content displayed on the first display includes an application interface. In one example, the voice input (e.g., from voice input receiving unit 3508-2 of input unit 3504-2) includes a request to display media associated with an application interface. In one example, the media content includes media associated with an application interface. In another example, the application interface includes an album and the media includes one or more photos in the album. In yet another example, the application interface includes a list of one or more videos and the media includes one of the one or more videos. In other examples, the application interface includes a list of television programs and the media includes television programs in the list of television programs.

In some examples, the processing unit 3506-2 may be further configured to determine whether the first device is authorized; wherein the media content is played on the second device in response to determining that the first device is authorized. The processing unit 3506-2 may be further configured to identify a user based on the voice input and determine a user intent of the voice input based on data associated with the identified user (e.g., using the user intent determination unit 3510-2). The processing unit 3506-2 may be further configured to determine whether the user is authorized based on the voice input; wherein the media content is played on the second device in response to determining that the user is an authorized user. In one example, determining whether the user is authorized includes analyzing the voice input using voice recognition.
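
As an illustrative sketch of the authorization gate described here, where `identify_speaker` stands in for a voice-recognition step and is an assumption rather than the recognition method used by the system:

```python
def play_if_authorized(voice_input, identify_speaker, authorized_users, second_device, media):
    """Play on the second device only if the speaker is recognized as an authorized user."""
    user = identify_speaker(voice_input)          # e.g., speaker identification on the voice sample
    if user is not None and user in authorized_users:
        second_device.play(media)
        return True
    return False                                  # unauthorized: do not play on the second device
```

A device-level check could be substituted for the speaker check when only device authorization is required.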

In other examples, the processing unit 3506-2 may be further configured to display information associated with the media content on the first display of the first device in response to determining that the user intent includes a request for the information. The processing unit 3506-2 may be further configured to play the media content on the second device in response to determining that the user intent includes a request to play the media content.

In some examples, the voice input includes a request to play content on the second device, and the media content is played on the second device in response to the request to play content on the second device. The processing unit 3506-2 may be further configured to determine whether the determined media content should be displayed on the first display or the second display based on the media format, user preferences, or default settings. In some examples, in response to determining that the determined media content should be displayed on the second display, the media content is displayed on the second display. In other examples, the media content is displayed on the first display in response to determining that the determined media content should be displayed on the first display.

In other examples, the processing unit 3506-2 may be further configured to determine a proximity of each of two or more devices (including the second device and a third device). In some examples, media content is played on the second device associated with the second display based on a proximity of the second device relative to a proximity of the third device. In some examples, determining the proximity of each of the two or more devices includes determining the proximity based on Bluetooth LE.

In some examples, processing unit 3506-2 may be further configured to display a list of display devices including a second device associated with the second display, and receive a selection of the second device in the list of display devices. In one example, media content is displayed on the second display in response to receiving a selection of the second device. The processing unit 3506-2 may be further configured to determine whether a headset is attached to the first device. The processing unit 3506-2 may be further configured to display the media content on the first display in response to a determination that the headset is attached to the first device. The processing unit 3506-2 may be further configured to display the media content on the second display in response to a determination that the headset is not attached to the first device. In other examples, processing unit 3506-2 may be further configured to determine alternative media content for display based on one or more of user preferences, program popularity, and status of live sporting events, and to display a notification including the determined alternative media content.
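
The display-routing heuristics described in the last few paragraphs can be combined into one sketch; the tie-breaking order and the Bluetooth LE distance field are assumptions made for illustration.

```python
def choose_display(media_format, user_preference, headphones_attached,
                   nearby_devices, default="second"):
    """Route playback to the first (handheld) or second (television) display.

    Heuristics sketched from the description: headphones keep playback on the
    first display; otherwise prefer an explicit user preference, then the media
    format, then the closest candidate second-screen device, then the default.
    The ordering is an assumption, not the described system's rule.
    """
    if headphones_attached:
        return ("first", None)
    if user_preference in ("first", "second"):
        chosen = user_preference
    elif media_format == "audio_only":
        chosen = "first"
    else:
        chosen = default
    if chosen == "second" and nearby_devices:
        # pick the closest device, e.g. by a Bluetooth LE proximity estimate
        closest = min(nearby_devices, key=lambda d: d["distance_m"])
        return ("second", closest["name"])
    return (chosen, None)

print(choose_display("video", None, False,
                     [{"name": "Living room TV", "distance_m": 2.1},
                      {"name": "Bedroom TV", "distance_m": 7.4}]))
```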

According to some examples, fig. 54 illustrates a functional block diagram of an electronic device 3600-2 that is configured in accordance with the principles of various described examples to control television interaction, e.g., using media content displayed on a display and a viewing history of the media content. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software that perform the principles of the various described examples. Those skilled in the art will appreciate that the functional blocks described in fig. 54 may be combined or separated into sub-blocks in order to implement the principles of the various described examples. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.

As shown in FIG. 54, the electronic device 3600-2 can include a display unit 3602-2 (e.g., display 112-2, touch screen 246-2, etc.) configured to display media, interfaces, and other content. The electronic device 3600-2 may also include an input unit 3604-2 configured to receive information, such as voice input, tactile input, gesture input, and so forth (e.g., microphone, receiver, touch screen, buttons, and so forth). The electronic device 3600-2 may also include a processing unit 3606-2 coupled to the display unit 3602-2 and the input unit 3604-2. In some examples, the processing unit 3606-2 may include a voice input receiving unit 3608-2, a user intent determination unit 3610-2, and a query result display unit 3612-2.

The processing unit 3606-2 may be configured to receive voice input from a user (e.g., from the input unit 3604-2 with the voice input receiving unit 3608-2), where the voice input includes a query associated with content displayed on a television display (e.g., in some examples, the display unit 3602-2). The processing unit 3606-2 may be further configured to determine a user intent of the query based on one or more of content and media content viewing history shown on the television display (e.g., using the user intent determination unit 3610-2). The processing unit 3606-2 may be further configured to display results of the query based on the determined user intent (e.g., using the query result display unit 3612-2).

In one example, a voice input is received at a remote control. In another example, a voice input is received at a mobile phone. In some examples, the results of the query are displayed on a television display. In another example, the content shown on the television display comprises a movie. In yet another example, the content shown on the television display includes a television program. In yet another example, the content shown on the television display includes a sporting event.

In some examples, the query includes a request for information about a person associated with content shown on a television display, and the results of the query (e.g., of query result display unit 3612-2) include information about the person. In one example, the results of the query include media content associated with the person. In another example, the media content includes one or more of a movie, a television program, or a sporting event associated with the person. In some examples, the query includes a request for information about a character in content shown on a television display, and the results of the query include information about the character or information about an actor who plays the character. In one example, the results of the query include media content associated with the actor who plays the character. In another example, the media content includes one or more of a movie, a television program, or a sporting event associated with the actor who plays the character.

In some examples, the processing unit 3606-2 may be further configured to determine a result of the query based on metadata associated with content shown on a television display or media content viewing history. In one example, the metadata includes one or more of a title, description, list of people, list of actors, list of players, category of players, or display schedule associated with the content shown on the television display or media content viewing history. In another example, the content shown on the television display includes a list of media content, and the query includes a request to display one of the items in the list. In yet another example, the content shown on the television display also includes an item in the media content list that has focus, and determining the user intent of the query (e.g., using user intent determination unit 3610-2) includes identifying the item that has focus. In some examples, the processing unit 3606-2 may be further configured to determine the user intent of the query based on a menu or search content recently displayed on the television display (e.g., using the user intent determination unit 3610-2). In one example, the content shown on the television display includes a page of the listed media, and the most recently displayed menu or search content includes a previous page of the listed media. In another example, the content shown on the television display includes one or more categories of media, and one of the one or more categories of media has focus. In one example, the processing unit 3606-2 may be further configured to determine a user intent of the query based on one of the one or more media categories that has focus (e.g., using the user intent determination unit 3610-2). In another example, the categories of media include movies, television programs, and music. In other examples, the processing unit 3606-2 may be further configured to determine alternative media content for display based on one or more of user preferences, program popularity, and status of a live sporting event, and display a notification including the determined alternative media content.
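
A toy sketch of resolving a query against the on-screen item that currently has focus and its metadata follows; the data structures and the keyword matching are illustrative assumptions only.

```python
def resolve_query_against_screen(query, on_screen_items, focused_index, metadata):
    """Resolve references like "play that one" or "who is in this?" against the
    item that currently has focus and its associated metadata."""
    focused = on_screen_items[focused_index]
    info = metadata.get(focused, {})
    if "play" in query.lower():
        return {"action": "play", "item": focused}
    if "who" in query.lower():
        return {"action": "answer", "item": focused, "cast": info.get("cast", [])}
    return {"action": "search", "text": query}

items = ["Program Q", "Movie R", "Match S"]
meta = {"Movie R": {"cast": ["Janet Quinn"]}}
print(resolve_query_against_screen("who is in this?", items, 1, meta))
```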

According to some examples, fig. 55 illustrates a functional block diagram of an electronic device 3700-2 that is configured in accordance with the principles of various described examples, for example, to suggest virtual assistant interactions for controlling media content. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software that perform the principles of the various described examples. Those skilled in the art will appreciate that the functional blocks described in fig. 55 may be combined or separated into sub-blocks in order to implement the principles of the various described examples. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.

As shown in FIG. 55, an electronic device 3700-2 can include a display unit 3702-2 (e.g., display 112-2, touch screen 246-2, etc.) that is configured to display media, interfaces, and other content. The electronic device 3700-2 can also include an input unit 3704-2 that is configured to receive information, such as voice input, tactile input, gesture input, and so forth (e.g., microphone, receiver, touch screen, buttons, and so forth). The electronic device 3700-2 may also include a processing unit 3706-2 coupled to the display unit 3702-2 and the input unit 3704-2. In some examples, processing unit 3706-2 may include media content display unit 3708-2, input receiving unit 3710-2, query determination unit 3712-2, and query display unit 3714-2.

The processing unit 3706-2 may be configured to display media content on a display (e.g., the display unit 3702-2) (e.g., using the media content display unit 3708-2). The processing unit 3706-2 may be further configured to receive input from the user (e.g., from the input unit 3704-2 using the input receiving unit 3710-2). The processing unit 3706-2 may be further configured to determine one or more virtual assistant queries based on one or more of the media content and the media content viewing history (e.g., using the query determining unit 3712-2). The processing unit 3706-2 may be further configured to display the one or more virtual assistant queries on the display (e.g., using the query display unit 3714-2).

In one example, input is received from a user on a remote control. In another example, input is received from a user on a mobile phone. In some examples, one or more virtual assistant queries are overlaid on the moving image. In another example, the input includes double-clicking a button. In one example, the media content comprises a movie. In another example, the media content includes television programming. In yet another example, the media content includes a sporting event.

In some examples, the one or more virtual assistant queries include queries about people appearing in the media content. In other examples, the one or more virtual assistant queries include queries about characters appearing in the media content. In another example, the one or more virtual assistant queries include queries for media content associated with people appearing in the media content. In some examples, the media content or media content viewing history includes a set of television programs, and the one or more virtual assistant queries include queries about another set of television programs. In some examples, the media content or media content viewing history includes a collection of television programs, and the one or more virtual assistant queries include a request to set a reminder to view or record a subsequent episode of the media content. In yet another example, the one or more virtual assistant queries include queries for descriptive details of the media content. In one example, the descriptive details include one or more of a program title, a list of people, a list of actors, an episode description, a team roster, a team ranking, or a program summary.

In some examples, the processing unit 3706-2 may be further configured to receive a selection of one of the one or more virtual assistant queries. The processing unit 3706-2 may be further configured to display the results of the selected one of the one or more virtual assistant queries. In one example, determining the one or more virtual assistant queries includes determining the one or more virtual assistant queries based on one or more of query history, user preferences, or query popularity. In another example, determining the one or more virtual assistant queries includes determining the one or more virtual assistant queries based on media content available for viewing by the user. In yet another example, determining the one or more virtual assistant queries includes determining the one or more virtual assistant queries based on the received notification. In yet another example, determining the one or more virtual assistant queries includes determining the one or more virtual assistant queries based on the active application. In other examples, the processing unit 3706-2 may be further configured to determine alternative media content for display based on one or more of user preferences, program popularity, and the status of a live sporting event, and display a notification including the determined alternative media content.

Although examples have been fully described with reference to the accompanying drawings, it is noted that various changes and modifications will be apparent to those skilled in the art (e.g., modifying any of the systems or processes discussed herein in accordance with the concepts described herein in connection with any other system or process discussed herein). It is to be understood that such changes and modifications are to be considered as included within the scope of the various examples as defined by the appended claims.

A system and process for updating virtual assistant media knowledge in real time is disclosed. The virtual assistant knowledge can be updated with timely information associated with the media being played (e.g., sporting events, television shows, etc.). A data feed may be received that includes data correlating events to specific times in a media stream. A user request may be received based on a voice input and may be associated with an event in a media stream or program. In response to receiving the request, the media stream may be cued to begin playback at a time in the media stream associated with the event referenced in the request. In another example, a response to a user request may be generated based on data related to an event. The response may then be delivered to the user (e.g., read aloud, displayed, etc.).

1. A method for voice control of media playback, the method comprising:

at an electronic device:

receiving a data feed, wherein the data feed comprises data related to an event, the event associated with a time in a media stream;

receiving a user request based on a voice input, wherein the user request is associated with the event; and

in response to receiving the user request, causing the media stream to begin playback at the time in the media stream associated with the event.

2. The method of item 1, further comprising:

the user request is interpreted based on the currently playing media.

3. The method of item 1, further comprising:

the user request is interpreted based on a current playback position of currently playing media.

4. The method of item 1, further comprising:

interpreting the user request based on one or more of: an on-screen actor, an on-screen team member, a list of participants, a list of actors in a program, a list of characters in a program, or a team list.

5. The method of item 1, wherein the media stream comprises a sporting event, and wherein the data related to the event comprises one or more of: characteristics of players, scores, penalties, statistics, or event indicators.

6. The method of item 1, wherein the media stream comprises a prize awards ceremony, and wherein the data related to the event comprises one or more of: characteristics of participants, a performance description, or a prize awarding ceremony indicator.

7. The method of item 1, wherein the media stream comprises a television program, and wherein the data related to the event comprises one or more of: a show description or a program segment indicator.

8. The method of item 1, wherein the user request comprises a request for highlights in the media stream.

9. The method of item 1, further comprising:

causing continuous playback of a plurality of segments of the media stream in response to receiving the user request.

10. The method of item 1, wherein causing playback of the media stream comprises causing media to be played back on a playback device other than the electronic device.

11. The method of item 10, further comprising:

interpreting the user request based on information displayed by the electronic device.

12. The method of item 10, further comprising:

interpreting the user request based on information displayed by the playback device.

13. The method of item 1, wherein the data related to the event comprises closed caption text.

14. The method of item 13, further comprising:

determining the time in the media stream associated with the event based on the closed caption text.

15. The method of item 1, wherein the data related to the event comprises one or more of: secondary screen experience data, secondary camera view data, or social network feed data.

16. The method of item 1, further comprising:

receiving a bookmark indication from the user, wherein the bookmark corresponds to a particular playback position in the media stream.

17. The method of item 16, further comprising:

receiving a user request for sharing the bookmark; and

in response to receiving the user request to share the bookmark, causing reminder information associated with the particular playback position to be transmitted to a server.

18. The method of item 1, further comprising:

interpreting the user request based on one or more of: a user's favorite team, a user's favorite sport, a user's favorite team member, a user's favorite actor, a user's favorite television program, a user's geographic location, user demographic information, a user's viewing history, or a user's subscription data.

19. A non-transitory computer-readable storage medium comprising computer-executable instructions to:

receiving a data feed, wherein the data feed comprises data related to an event, the event associated with a time in a media stream;

receiving a user request based on a voice input, wherein the user request is associated with the event; and

in response to receiving the user request, causing the media stream to begin playback at the time in the media stream associated with the event.

20. A system for voice control of media playback, the system comprising:

one or more processors;

a memory; and

one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:

receiving a data feed, wherein the data feed comprises data related to an event, the event associated with a time in a media stream;

receiving a user request based on a voice input, wherein the user request is associated with the event; and

in response to receiving the user request, causing the media stream to begin playback at the time in the media stream associated with the event.

21. A method for integrating information into digital assistant knowledge, the method comprising:

at an electronic device:

receiving a data feed, wherein the data feed comprises data related to an event, the event associated with a time in a media stream;

receiving a user request based on voice input from a user, wherein the user request is associated with the event;

generating a response to the user request based on the data related to the event; and

causing the response to be delivered.

22. The method of item 21, wherein generating the response further comprises generating the response based on currently playing media.

23. The method of item 21, wherein generating the response further comprises generating the response based on a current playback position of currently playing media.

24. The method of item 21, wherein generating the response further comprises generating the response based on media content previously consumed by the user.

25. The method of item 21, wherein generating the response further comprises generating the response based on one or more of: an on-screen actor, an on-screen team member, a list of participants, a list of actors in a program, or a team list.

26. The method of item 21, further comprising:

in response to the user request including a request for information synchronized with a current playback position of currently playing media, generating the response based on data synchronized with the current playback position, wherein the data synchronized with the current playback position does not include data associated with a time after the current playback position; and

generating the response based on live data in response to the user request including a request for live information.

27. The method of item 21, wherein causing the response to be delivered comprises causing the response to be displayed or played on a playback device other than the electronic device.

28. The method of item 21, wherein causing the response to be delivered comprises causing the response to be delivered to a playback device other than the electronic device.

Real-time digital assistant knowledge updates.

This patent application claims priority to U.S. Provisional Application Serial No. 62/019,292, entitled "REAL-TIME DIGITAL ASSISTANT KNOWLEDGE UPDATES," filed June 30, 2014, which is hereby incorporated by reference herein in its entirety for all purposes.

This patent application is also related to the following co-pending provisional application: U.S. Patent Application Serial No. 62/019,312, "Intelligent Automated Assistant for TV User Interactions" (Attorney Docket No. 106843065100 (P18133USP1)), filed June 30, 2014, which is hereby incorporated by reference in its entirety.

The present invention relates generally to voice control of television user interactions, and more particularly to real-time updating of virtual assistant media knowledge.

An intelligent automated assistant (or virtual assistant) provides an intuitive interface between a user and an electronic device. These assistants may allow users to interact with a device or system in speech and/or text form using natural language. For example, a user may access a service of an electronic device by providing spoken user input in a natural language form to a virtual assistant associated with the electronic device. The virtual assistant can perform natural language processing on the spoken user input to infer user intent and implement the user intent into a task. The tasks may then be performed by performing one or more functions of the electronic device, and in some examples, the relevant output may be returned to the user in a natural language form.

Although mobile phones (e.g., smartphones), tablets, and the like have benefited from virtual assistant control, many other user devices still lack such a convenient control mechanism. For example, user interaction with media control devices (e.g., televisions, television set-top boxes, cable boxes, gaming devices, streaming media devices, digital video recorders, etc.) can be complex and difficult to learn. Furthermore, with the growing number of media sources that may be provided through these devices (e.g., over-the-air television, television subscription services, streaming video services, cable video-on-demand services, network-based video services, etc.), finding desired media content to consume can be cumbersome for some users and even unwieldy given the vast amount of available content. In addition, coarse time-shifting and cueing controls can make it difficult for a user to reach desired content, such as a particular moment in a television program. Obtaining timely information associated with live media content can also present difficulties. As a result, many media control devices may provide a poor user experience that can be frustrating to many users.

Systems and processes for updating virtual assistant media knowledge in real time are disclosed. In one example, the virtual assistant knowledge can be updated with timely information associated with the media being played. A data feed may be received that includes data associating an event with a particular time in a media stream. A user request may be received based on voice input and may be associated with an event in a media stream or program. In response to receiving the request, the media stream may be cued to begin playback at the time in the media stream associated with the event referenced in the request.

In another example, timely information may be integrated into digital assistant knowledge to provide answers to queries related to a current event. A data feed may be received that includes data associating events with particular times in a media stream. A user request may be received based on voice input from a user, and the user request may be associated with one of the events. A response to the user request may be generated based on data associated with the event. The response may then be delivered to the user in various ways (e.g., spoken aloud, displayed on a television, displayed on a mobile user device, etc.).

In the following description of the examples, reference is made to the accompanying drawings in which are shown, by way of illustration, specific examples that may be implemented. It is to be understood that other examples may be used and structural changes may be made without departing from the scope of the various examples.

The invention relates to a system and a method for updating virtual assistant media knowledge in real time. Real-time virtual assistant knowledge updates can, for example, enable precise voice control of television user interactions and provide accurate virtual assistant responses to media-related queries in a timely manner. In one example, a virtual assistant can be used to interact with a media control device, such as a television set-top box that controls content shown on a television display. Voice input for the virtual assistant may be received using a mobile user device or a remote control with a microphone. User intent may be determined from the voice input, and the virtual assistant may perform tasks according to the user intent, including causing media to be played back on a connected television and controlling any other function of a television set-top box or similar device (e.g., causing live media content playback, causing recorded media content playback, managing video recordings, searching for media content, menu navigation, etc.).

In one example, the virtual assistant knowledge may be updated with timely information, or even real-time information, associated with the media being played (e.g., sporting events, television shows, etc.). A data feed may be received that includes data associating an event with a particular time in a media stream. For example, the data feed may indicate that a goal was scored at a certain time in a televised football game. In another example, the data feed may indicate that the host of a television program delivered a monologue at a certain time in the program. A user request may be received based on voice input and may be associated with an event in a media stream or program. In response to receiving the request, the media stream may be cued to begin playback at the time in the media stream associated with the event referenced in the request.
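
A minimal sketch of this cueing behavior is shown below. The feed entries, the keyword-overlap matching, and the `Player.seek` interface are assumptions for illustration; the described system may associate requests with events in any suitable way.

```python
def cue_playback_for_event(data_feed, user_request, player):
    """Find the feed event that best matches the request and cue the media stream
    to start playback at the associated time (matching is a naive keyword overlap)."""
    request_words = set(user_request.lower().split())
    best = max(
        data_feed,
        key=lambda e: len(request_words & set(e["description"].lower().split())),
        default=None,
    )
    if best is not None:
        player.seek(best["time_s"])   # begin playback at the time associated with the event
    return best

class Player:
    def seek(self, seconds):
        print(f"Seeking to {seconds} s")

feed = [
    {"time_s": 1834, "description": "goal scored by the home team"},
    {"time_s": 2410, "description": "penalty called on the visiting team"},
]
cue_playback_for_event(feed, "show me the goal", Player())
```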

In another example, timely or real-time information may be integrated into the digital assistant knowledge to provide answers to queries related to a current event. A data feed may be received that includes data associating an event with a particular time in a media stream. A user request may be received based on voice input from a user, and the user request may be associated with one of the events. A response to the user request may be generated based on data associated with the event. The response may then be delivered to the user in various ways (e.g., spoken aloud, displayed on a television, displayed on a mobile user device, etc.).
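
The following sketch illustrates generating an answer from event data, including the distinction drawn in the items above between information synchronized with the current playback position and live information; the data shapes and the simple scoring logic are assumptions.

```python
def answer_from_event_data(question, data_feed, current_position_s, live=False):
    """Generate an answer from feed data; when the question is synchronized with
    the current playback position, ignore events after that position (to avoid
    revealing later developments), otherwise use all live data."""
    visible = data_feed if live else [e for e in data_feed if e["time_s"] <= current_position_s]
    if "score" in question.lower():
        goals = [e for e in visible if e["type"] == "goal"]
        return f"{len(goals)} goal(s) so far"
    return "No answer available from the event data"

feed = [
    {"time_s": 1834, "type": "goal", "description": "goal scored by the home team"},
    {"time_s": 3620, "type": "goal", "description": "late equalizer"},
]
print(answer_from_event_data("what's the score?", feed, current_position_s=2000))  # time-shifted: 1 goal
print(answer_from_event_data("what's the score?", feed, 2000, live=True))          # live: 2 goals
```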

According to various examples discussed herein, updating virtual assistant knowledge with timely media information may provide an effective and enjoyable user experience. By using a virtual assistant that is capable of receiving natural language queries or commands associated with media content, a user may interact with the media control device simply and intuitively. Real-time virtual assistant knowledge updates can, for example, enable precise voice control of television user interactions and provide accurate virtual assistant responses to media-related queries in a timely manner. In addition, intuitive verbal commands related to the displayed media can be used to easily access desired portions or scenes of the media. However, it should be understood that many other advantages may also be realized in accordance with the various examples discussed herein.

FIG. 56 illustrates an exemplary system 100-3 for providing real-time updates to voice control and virtual assistant knowledge for media playback. It should be appreciated that voice control of media playback on a television, as discussed herein, is merely one example of controlling media using a particular display technology and is used for reference only; the concepts discussed herein may be used generally to control media content interaction on any of a variety of devices and associated displays (e.g., a monitor, a laptop display, a desktop computer display, a mobile user device display, a projector display, etc.). Thus, the term "television" may refer to any type of display associated with any of a variety of devices. Further, the terms "virtual assistant," "digital assistant," "intelligent automated assistant," or "automatic digital assistant" may refer to any information processing system that can interpret natural language input in spoken and/or textual form to infer user intent and perform actions based on the inferred user intent. For example, to act on the inferred user intent, the system may perform one or more of the following: identifying a task flow with steps and parameters designed to accomplish the inferred user intent; inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, and the like; and generating output responses to the user in an audible (e.g., speech) and/or visual form.

The virtual assistant may be capable of accepting user requests at least partially in the form of natural language commands, requests, statements, narratives, and/or queries. Typically, a user request seeks either an informational answer from the virtual assistant or performance of a task by the virtual assistant (e.g., causing particular media to be displayed). A satisfactory response to a user request may include providing the requested informational answer, performing the requested task, or a combination of both. For example, a user may pose a question to the virtual assistant, such as: "Where am I right now?" Based on the user's current location, the virtual assistant may answer: "You are in Central Park." The user may also request the performance of a task, for example: "Please remind me to call Mom at 4 p.m. today." In response, the virtual assistant can acknowledge the request and then create an appropriate reminder item in the user's electronic calendar. During the performance of a requested task, the virtual assistant can sometimes interact with the user over a long period of time in a continuous dialogue involving multiple exchanges of information. There are many other ways of interacting with a virtual assistant to request information or the performance of various tasks. In addition to providing verbal responses and taking programmed actions, the virtual assistant can also provide responses in other visual or audio forms (e.g., as text, alerts, music, videos, animations, etc.). Further, as described herein, an exemplary virtual assistant can control the playback of media content (e.g., a video playing on a television) and cause information to be displayed on a display.

An example of a virtual assistant is described in U.S. Utility Patent Application Serial No. 12/987,982, entitled "Intelligent Automated Assistant," filed January 10, 2011, the entire disclosure of which is incorporated herein by reference.

As shown in FIG. 56, in some examples, the virtual assistant may be implemented according to a client-server model. The virtual assistant can include a client-side portion executing on the user device 102-3 and a server-side portion executing on the server system 110-3. A client-side portion, which may be integrated with the remote control 106-3, may also execute on the television set-top box 104-3. The user device 102-3 may include any electronic device, such as a mobile phone (e.g., a smartphone), a tablet computer, a portable media player, a desktop computer, a laptop computer, a PDA, a wearable electronic device (e.g., digital glasses, a wristband, a watch, a brooch, an armband, etc.), or the like. The television set-top box 104-3 may comprise any media control device, such as a cable box, a satellite box, a video player, a video streaming device, a digital video recorder, a gaming system, a DVD player, a Blu-ray Disc™ player, a combination of such devices, or the like. The television set-top box 104-3 may be connected to the display 112-3 and speakers 111-3 via a wired connection or a wireless connection. Display 112-3 (with or without speakers 111-3) may be any type of display, such as a television display, monitor, projector, etc. In some examples, television set-top box 104-3 may be connected to an audio system (e.g., an audio receiver) and speaker 111-3 may be separate from display 112-3. In other examples, display 112-3, speaker 111-3, and television set-top box 104-3 may be incorporated together into a single device, such as a smart television with advanced processing capabilities and network connection capabilities. In such an example, the functionality of television set-top box 104-3 may be performed as an application on the combined device.

In some examples, television set-top box 104-3 may function as a media control center for media content of multiple types and sources. For example, the television set-top box 104-3 may facilitate user access to live television (e.g., broadcast, satellite, or cable television). Accordingly, television set-top box 104-3 may include a cable tuner, a satellite tuner, or the like. In some examples, television set-top box 104-3 may also record television programs for later time-shifted viewing. In other examples, television set-top box 104-3 may provide access to one or more streaming media services, such as cable-delivered video-on-demand programming, video, and music, and internet-delivered television programming, video, and music (e.g., from various free, paid, and subscription streaming services). In other examples, television set-top box 104-3 may facilitate playback or display of media content from any other source, such as displaying photos from a mobile user device, playing videos from a coupled storage device, playing music from a coupled music player, and so forth. Television set-top box 104-3 may also include various other combinations of the media control features discussed herein as desired.

User device 102-3 and television set-top box 104-3 may communicate with server system 110-3 over one or more networks 108-3, which may include the internet, an intranet, or any other public or private network, wired or wireless. Additionally, the user device 102-3 may communicate with the television set-top box 104-3 through the network 108-3 or directly through any other wired or wireless communication mechanism (e.g., Bluetooth, Wi-Fi, radio frequency, infrared transmission, etc.). As shown, the remote control 106-3 may communicate with the television set-top box 104-3 using any type of communication means, such as a wired connection or any type of wireless communication (e.g., Bluetooth, Wi-Fi, radio frequency, infrared transmission, etc.), including via the network 108-3. In some examples, a user may interact with television set-top box 104-3 through user device 102-3, remote control 106-3, or an interface element (e.g., a button, microphone, camera, joystick, etc.) integrated within television set-top box 104-3. For example, voice input may be received at user device 102-3 and/or remote control 106-3, including a media-related query or command for the virtual assistant, and may be used to cause media-related tasks to be performed on television set-top box 104-3. Likewise, haptic commands for controlling media on television set-top box 104-3 may be received at user device 102-3 and/or remote control 106-3 (as well as other devices not shown). Accordingly, various functions of television set-top box 104-3 may be controlled in various ways, thereby providing a user with a variety of options for controlling media content from multiple devices.

The client-side portion of the exemplary virtual assistant executing on user device 102-3 and/or television set-top box 104-3 with remote control 106-3 may provide client-side functionality, such as user-oriented input and output processing and communication with server system 110-3. Server system 110-3 may provide server-side functionality for any number of clients residing on respective user devices 102-3 or respective television set-top boxes 104-3.

The server system 110-3 may include one or more virtual assistant servers 114-3, which may include a client-facing I/O interface 122-3, one or more processing modules 118-3, data and model storage 120-3, and an I/O interface 116-3 to external services. Client-facing I/O interface 122-3 may facilitate client-facing input and output processing for virtual assistant server 114-3. The one or more processing modules 118-3 may utilize the data and model store 120-3 to determine a user's intent based on natural language input and may perform task execution based on the inferred user intent. In some examples, the virtual assistant server 114-3 may communicate with external services 124-3 (such as a telephone service, a calendar service, an information service, a messaging service, a navigation service, a television programming service, a streaming media service, etc.) over one or more networks 108-3 for completing tasks or obtaining information. An I/O interface 116-3 to an external service may facilitate such communication.

The server system 110-3 may be implemented on one or more stand-alone data processing devices or a distributed network of computers. In some examples, server system 110-3 may employ various virtual devices and/or services of third-party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of server system 110-3.

While the functionality of the virtual assistant shown in fig. 56 includes both a client-side portion and a server-side portion, in some examples, the functionality of the assistant (or speech recognition and media control in general) may be implemented as a standalone application installed on a user device, television set-top box, smart television, or the like. Further, the division of functionality between the client portion and the server portion of the virtual assistant can be different in different examples. For example, in some examples, the client executing on user device 102-3 or television set-top box 104-3 may be a thin client that provides only user-oriented input and output processing functions and delegates all other functions of the virtual assistant to a backend server.

Fig. 57 illustrates a block diagram of an exemplary user device 102-3, in accordance with various examples. As shown, the user device 102-3 may include a memory interface 202-3, one or more processors 204-3, and a peripheral interface 206-3. The various components in the user equipment 102-3 may be coupled together by one or more communication buses or signal lines. User device 102-3 may also include various sensors, subsystems, and peripherals coupled to peripheral interface 206-3. The sensors, subsystems, and peripherals may gather information and/or facilitate various functions of user device 102-3.

For example, the user device 102-3 may include a motion sensor 210-3, a light sensor 212-3, and a proximity sensor 214-3 coupled to the peripheral interface 206-3 to facilitate orientation, lighting, and proximity sensing functions. One or more other sensors 216-3, such as a positioning system (e.g., GPS receiver), temperature sensor, biometric sensor, gyroscope, compass, accelerometer, etc., may also be connected to the peripheral interface 206-3 to facilitate related functions.

In some examples, camera subsystem 220-3 and optical sensor 222-3 may be used to facilitate camera functions, such as taking pictures and recording video clips. Communication functions may be facilitated by one or more wired and/or wireless communication subsystems 224-3, which may include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters. The audio subsystem 226-3 may be coupled to a speaker 228-3 and a microphone 230-3 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions.

In some examples, the user device 102-3 may also include an I/O subsystem 240-3 coupled to the peripheral interface 206-3. I/O subsystem 240-3 may include a touchscreen controller 242-3 and/or one or more other input controllers 244-3. The touch screen controller 242-3 may be coupled to the touch screen 246-3. The touch screen 246-3 and touch screen controller 242-3 may detect contact and movement or breaks thereof, for example, using any of a number of touch sensitivity technologies, such as capacitive, resistive, infrared, surface acoustic wave, proximity sensor arrays, and the like. Other input controllers 244-3 may be coupled to other input/control devices 248-3, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointing devices (such as a stylus).

In some examples, the user device 102-3 may also include a memory interface 202-3 coupled to the memory 250-3. Memory 250-3 may include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device; portable computer diskette (magnetic); random Access Memory (RAM) (magnetic); read Only Memory (ROM) (magnetic); erasable programmable read-only memory (EPROM) (magnetic); portable optical disks such as CD, CD-R, CD-RW, DVD-R, or DVD-RW; or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like. In some examples, the non-transitory computer-readable storage medium of memory 250-3 may be used to store instructions (e.g., for performing part or all of the various processes described herein) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, instructions (e.g., for performing some or all of the various processes described herein) may be stored on a non-transitory computer-readable storage medium of server system 110-3, or may be divided between the non-transitory computer-readable storage medium of memory 250-3 and the non-transitory computer-readable storage medium of server system 110-3. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

In some examples, memory 250-3 may store operating system 252-3, communication module 254-3, graphical user interface module 256-3, sensor processing module 258-3, telephone module 260-3, and application programs 262-3. Operating system 252-3 may include instructions for handling basic system services and for performing hardware related tasks. Communication module 254-3 may facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. The graphical user interface module 256-3 may facilitate graphical user interface processing. The sensor processing module 258-3 may facilitate sensor-related processing and functions. The phone module 260-3 may facilitate phone-related processes and functions. The application modules 262-3 may facilitate various functions of user applications, such as electronic messaging, web browsing, media processing, navigation, imaging, and/or other processes and functions.

As described herein, the memory 250-3 may also store client-side virtual assistant instructions (e.g., stored in the virtual assistant client module 264-3) as well as various user data 266-3 (e.g., user-specific vocabulary data, preference data, and/or other data such as a user's electronic address book, to-do list, shopping list, television program collection, etc.), for example, to provide client-side functionality of the virtual assistant. User data 266-3 may also be used to perform speech recognition in support of a virtual assistant or for any other application.

In various examples, virtual assistant client module 264-3 may be capable of accepting sound input (e.g., speech input), text input, touch input, and/or gesture input through various user interfaces of user device 102-3 (e.g., I/O subsystem 240-3, audio subsystem 226-3, etc.). Virtual assistant client module 264-3 can also provide output in audio (e.g., speech output), visual, and/or tactile forms. For example, the output may be provided as voice, sound, alarm, text message, menu, graphics, video, animation, vibration, and/or a combination of two or more of the foregoing. During operation, virtual assistant client module 264-3 can use communication subsystem 224-3 to communicate with a virtual assistant server.

In some examples, virtual assistant client module 264-3 may utilize various sensors, subsystems, and peripheral devices to gather additional information from the surroundings of user device 102-3 to establish a context associated with the user, current user interaction, and/or current user input. Such context may also include information from other devices, such as information from television set-top box 104-3. In some examples, virtual assistant client module 264-3 can provide the context information, or a subset thereof, along with the user input to the virtual assistant server to help infer the intent of the user. The virtual assistant can also use the context information to determine how to prepare and deliver the output to the user. The context information may also be used by the user device 102-3 or the server system 110-3 to support accurate speech recognition.

In some examples, contextual information accompanying the user input may include sensor information such as lighting, ambient noise, ambient temperature, images or video of the surrounding environment, distance to another object, and the like. The context information may also include information associated with a physical state of the user device 102-3 (e.g., device orientation, device location, device temperature, power level, velocity, acceleration, motion pattern, cellular signal strength, etc.) or a software state of the user device 102-3 (e.g., running process, installed programs, past and current network activities, background services, error logs, resource usage, etc.). The contextual information may also include information associated with the status of the connected device or other devices associated with the user (e.g., media content displayed by television set-top box 104-3, media content available to television set-top box 104-3, etc.). Any of these types of contextual information may be provided to the virtual assistant server 114-3 (or for the user device 102-3 itself) as contextual information associated with the user input.
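As one illustration, such contextual information might be bundled with a sampled utterance before it is sent to the virtual assistant server. The following Python sketch is purely illustrative; the device methods and field names (e.g., `light_sensor.read()`, `now_playing()`) are assumptions rather than any defined interface.

```python
import json
import time

def build_context_payload(device, set_top_box):
    """Collect sensor, device-state, and connected-device context (illustrative only)."""
    return {
        "timestamp_utc": time.time(),
        "sensor": {
            "ambient_light": device.light_sensor.read(),          # hypothetical accessor
            "ambient_noise_db": device.microphone.noise_level(),  # hypothetical accessor
        },
        "device_state": {
            "orientation": device.motion_sensor.orientation(),
            "power_level": device.power_level(),
        },
        "connected_devices": {
            "set_top_box_now_playing": set_top_box.now_playing(),
            "set_top_box_available_media": set_top_box.available_media_ids(),
        },
    }

def send_utterance_with_context(http_session, url, audio_bytes, device, set_top_box):
    """Send the sampled audio together with its context to the assistant server."""
    payload = {
        "audio_hex": audio_bytes.hex(),
        "context": build_context_payload(device, set_top_box),
    }
    return http_session.post(url, data=json.dumps(payload),
                             headers={"Content-Type": "application/json"})
```

The server side could then use the `context` portion of the payload both to aid speech recognition and to help infer the user's intent, as described above.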

In some examples, virtual assistant client module 264-3 may selectively provide information (e.g., user data 266-3) stored on user device 102-3 in response to a request from virtual assistant server 114-3 (or the virtual assistant client module may be used on user device 102-3 itself to perform speech recognition and/or virtual assistant functions). Virtual assistant client module 264-3 can also elicit additional input from the user via a natural language dialog or other user interface upon request by virtual assistant server 114-3. Virtual assistant client module 264-3 can communicate additional input to virtual assistant server 114-3 to help virtual assistant server 114-3 make intent inferences and/or satisfy the user intent expressed in the user request.

In various examples, memory 250-3 may include additional instructions or fewer instructions. Further, various functions of user device 102-3 may be performed in hardware and/or firmware, including in one or more signal processing and/or application specific integrated circuits.

Fig. 58 shows a block diagram of an exemplary television set-top box 104-3 in a system 300-3 for providing voice control of media playback. System 300-3 may include a subset of the elements of system 100-3. In some examples, system 300-3 may perform certain functions alone and may operate with other elements of system 100-3 to perform other functions. For example, elements of system 300-3 may handle certain media control functions (e.g., playback of locally stored media, recording functions, channel tuning, etc.) without interacting with server system 110-3, and system 300-3 may handle other media control functions (e.g., playback of remotely stored media, downloading media content, handling certain virtual assistant queries, etc.) in conjunction with server system 110-3 and other elements of system 100-3. In other examples, elements of system 300-3 may perform the functions of the larger system 100-3, including accessing external services 124-3 over a network. It should be appreciated that the functionality may be divided between the local device and the remote server device in a variety of other ways.

As shown in fig. 58, in one example, the television set-top box 104-3 may include a memory interface 302-3, one or more processors 304-3, and a peripheral interface 306-3. The various components in television set-top box 104-3 may be coupled together by one or more communication buses or signal lines. The television set-top box 104-3 may also include various subsystems and peripherals coupled to the peripheral interface 306-3. The subsystems and peripherals may gather information and/or facilitate various functions of television set-top box 104-3.

For example, television set-top box 104-3 may include a communication subsystem 324-3. Communication functions may be facilitated by one or more wired and/or wireless communication subsystems 324-3, which may include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters.

In some examples, the television set-top box 104-3 may also include an I/O subsystem 340-3 coupled to the peripheral interface 306-3. The I/O subsystem 340-3 may include an audio/video output controller 370-3. The audio/video output controller 370-3 may be coupled to the display 112-3 and speakers 111-3, or may otherwise provide audio and video output (e.g., via an audio/video port, wireless transmission, etc.). The I/O subsystem 340-3 may also include a remote controller 342-3. The remote controller 342-3 may be communicatively coupled (e.g., via a wired connection, Bluetooth, Wi-Fi, etc.) to the remote control 106-3. The remote control 106-3 may include a microphone 372-3 for capturing audio input (e.g., voice input from a user), one or more buttons 374-3 for capturing tactile input, and a transceiver 376-3 for facilitating communication with the television set-top box 104-3 via the remote controller 342-3. The remote control 106-3 may also include other input mechanisms such as a keyboard, joystick, touchpad, and the like. The remote control 106-3 may also include output mechanisms such as lights, a display, a speaker, and the like. Inputs received at the remote control 106-3 (e.g., user speech, button presses, etc.) may be communicated to the television set-top box 104-3 via the remote controller 342-3. The I/O subsystem 340-3 may also include one or more other input controllers 344-3. The one or more other input controllers 344-3 may be coupled to other input/control devices 348-3, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointing devices (such as a stylus).

In some examples, the television set-top box 104-3 may also include a memory interface 302-3 coupled to the memory 350-3. Memory 350-3 may include any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device; portable computer diskette (magnetic); random Access Memory (RAM) (magnetic); read Only Memory (ROM) (magnetic); erasable programmable read-only memory (EPROM) (magnetic); portable optical disks such as CD, CD-R, CD-RW, DVD-R, or DVD-RW; or flash memory such as compact flash cards, secure digital cards, USB memory devices, memory sticks, and the like. In some examples, the non-transitory computer-readable storage medium of memory 350-3 may be used to store instructions (e.g., for performing part or all of the various processes described herein) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, instructions (e.g., for performing some or all of the various processes described herein) may be stored on a non-transitory computer-readable storage medium of server system 110-3, or may be divided between the non-transitory computer-readable storage medium of memory 350-3 and the non-transitory computer-readable storage medium of server system 110-3. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

In some examples, the memory 350-3 may store an operating system 352-3, a communication module 354-3, a graphical user interface module 356-3, an on-device media module 358-3, a device-external media module 360-3, and application programs 362-3. The operating system 352-3 may include instructions for handling basic system services and for performing hardware-related tasks. The communication module 354-3 may facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. Graphical user interface module 356-3 may facilitate graphical user interface processing. The on-device media module 358-3 may facilitate storage and playback of media content stored locally on the television set-top box 104-3, as well as other media content available locally (e.g., cable channel tuning). The device-external media module 360-3 may facilitate streaming playback or download of media content stored remotely (e.g., on a remote server, on the user device 102-3, etc.). The application modules 362-3 may facilitate various functions of user applications, such as electronic messaging, web browsing, media processing, gaming, and/or other processes and functions.

As described herein, the memory 350-3 may also store client-side virtual assistant instructions (e.g., stored in the virtual assistant client module 364-3) as well as various user data 366-3 (e.g., user-specific vocabulary data, preference data, and/or other data such as a user's electronic address book, to-do list, shopping list, television program collection, etc.), for example, to provide client-side functionality of the virtual assistant. User data 366-3 may also be used to perform speech recognition to support a virtual assistant or for any other application.

In various examples, virtual assistant client module 364-3 can accept voice input (e.g., speech input), text input, touch input, and/or gesture input through various user interfaces of television set-top box 104-3 (e.g., I/O subsystem 340-3, etc.). Virtual assistant client module 364-3 can also provide output in audio (e.g., speech output), visual, and/or tactile forms. For example, the output may be provided as voice, sound, alarm, text message, menu, graphics, video, animation, vibration, and/or a combination of two or more of the foregoing. During operation, virtual assistant client module 364-3 may use communication subsystem 324-3 to communicate with a virtual assistant server.

In some examples, virtual assistant client module 364-3 may utilize various subsystems and peripherals to gather additional information from the surroundings of television set-top box 104-3 to establish a context associated with the user, current user interaction, and/or current user input. Such context may also include information from other devices, such as information from user device 102-3. In some examples, virtual assistant client module 364-3 may provide the context information, or a subset thereof, along with the user input to the virtual assistant server to help infer the user's intent. The virtual assistant can also use the context information to determine how to prepare and deliver the output to the user. The contextual information may also be used by the television set-top box 104-3 or the server system 110-3 to support accurate speech recognition.

In some examples, contextual information accompanying the user input may include sensor information such as lighting, ambient noise, ambient temperature, distance to another object, and the like. The contextual information may also include information associated with the physical state of the television set-top box 104-3 (e.g., device location, device temperature, power level, etc.) or the software state of the television set-top box 104-3 (e.g., running process, installed applications, past and current network activities, background services, error logs, resource usage, etc.). The context information may also include information associated with the state of the connected device or other devices associated with the user (e.g., content displayed on user device 102-3, playable content on user device 102-3, etc.). Any of these types of contextual information may be provided to virtual assistant server 114-3 (or for television set-top box 104-3 itself) as contextual information associated with the user input.

In some examples, virtual assistant client module 364-3 may selectively provide information (e.g., user data 366-3) stored on television set-top box 104-3 in response to a request from virtual assistant server 114-3 (or the virtual assistant client module may be on television set-top box 104-3 itself for performing speech recognition and/or virtual assistant functions). Virtual assistant client module 364-3 may also elicit additional input from the user via a natural language dialog or other user interface upon request by virtual assistant server 114-3. Virtual assistant client module 364-3 may transmit additional input to virtual assistant server 114-3 to help virtual assistant server 114-3 make intent inferences and/or satisfy the user intent expressed in the user request.

In various examples, memory 350-3 may include additional instructions or fewer instructions. Further, various functions of television set-top box 104-3 may be performed in hardware and/or firmware, including in one or more signal processing and/or application specific integrated circuits.

It should be understood that the system 100-3 and the system 300-3 are not limited to the components and configurations shown in fig. 56 and 58, and that the user device 102-3, the television set-top box 104-3, and the remote control 106-3 are likewise not limited to the components and configurations shown in fig. 57 and 58. In various configurations according to various examples, system 100-3, system 300-3, user device 102-3, television set-top box 104-3, and remote control 106-3 may all include fewer components, or include other components.

Throughout this disclosure, reference is made generally to a "system," which may include one or more elements of system 100-3 and/or system 300-3. For example, a typical system referred to herein may include a television set-top box 104-3 that receives user input from a remote control 106-3 and/or user device 102-3.

In some examples, the virtual assistant query may include a request to cue particular media to a particular time. For example, a user may want to see a particular play in a game, a particular performance during a show, a particular scene in a movie, etc. To process such a query, the virtual assistant system may determine a user intent associated with the query, identify relevant media responsive to the query, and cue the media for playback at an appropriate time according to the user request (e.g., cueing a game to begin playback just before a goal is scored). Detailed media information may be incorporated into the virtual assistant knowledge base to support various media-related queries. For example, detailed media information may be incorporated into the data and models 120-3 of the virtual assistant server 114-3 of the system 100-3 to support a particular media query. In some examples, detailed media information may also be obtained from a service 124-3 external to the system 100-3.

A system capable of responding to such user requests, however, may require incorporating real-time or near real-time media data into the virtual assistant's knowledge. For example, a live sporting event may include various points of interest that a user may wish to see. In addition, the video that the user is currently watching may include many points of interest that the user may reference in a query. Similarly, a television show may include popular scenes that the user may want to cue for playback or share with friends, special guest appearances, widely discussed moments, and so forth. Various other media content may likewise include points of interest relevant to the user (e.g., music, network-based video clips, etc.). Thus, according to various examples herein, detailed and timely media data can be incorporated into the virtual assistant knowledge to support various user requests associated with media, including near real-time requests for content and media-related information.

Fig. 59 illustrates an exemplary process 400-3 for voice control of media playback, including incorporation of detailed and/or timely media data, according to various examples. At block 402-3, a data feed including an event associated with a time in a media stream may be received. The data feed may be received from a variety of different sources in any of a number of different forms. For example, the data feed may include a table associating events in particular media with times, a database in which the times and events are related, a text file associating events in particular media with times, an information server providing times in response to event requests, and so forth. The data feed may come from a variety of different sources, such as an external service 124-3 of the system 100-3. In some examples, the data feed may be provided by an organization associated with the particular media, such as a sports league that provides detailed sporting event information, a video provider that provides detailed video and scene information, a sports data aggregator that extracts data from multiple sports data sources, and so forth. In other examples, the data feed may be obtained by analyzing media content (such as analyzing actor appearances, closed caption text, scene changes, etc.). In other examples, data feeds may be obtained from social media, such as moments that are widely discussed in a program, frequently referenced events in a game, and so forth. Thus, the term data feed as used herein may refer to various forms of data, including data that may be mined from the media itself.

FIG. 60 illustrates an exemplary data feed 510-3 associating events in a media stream 512-3 with particular times 514-3 in the media stream. It should be understood that fig. 60 is provided for purposes of illustration, and that the data feed 510-3 may take various other forms (e.g., text file, table file, information server data, database, message, informational feed, etc.). Media stream 512-3 may include any type of playable media, such as a sporting event, video, television program, music, and so forth. In the example of FIG. 60, media stream 512-3 may include a televised hockey game. Summary information or other descriptive details of the particular media, whether or not associated with a particular time, may be included in the data feed 510-3 (e.g., in a header or the like). In the illustrated example, descriptive summary information is provided in a first box at 5:01 (UTC), including the media title (e.g., "hockey game"), media description ("team A versus team B at Ice Arena"), and media source (e.g., television "channel 7"). Various other descriptive information may be similarly provided, and information may be provided in specific fields for reference (e.g., a title field may include a title, a source field may include a television channel or internet address, etc.). In addition to the information shown in fig. 60, various other media information, such as a list of players on the teams in the game, a list of actors appearing in an episode, producers, directors, performing artists, and the like, may be obtained. Various summary and descriptive information may be incorporated into the virtual assistant knowledge and used to support related queries.
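For illustration only, the kind of information shown in fig. 60 might be represented in memory roughly as follows. This is a minimal sketch; the class names, field names, and event-type strings are assumptions rather than a defined feed format.

```python
from dataclasses import dataclass, field
from datetime import time

@dataclass
class MediaStreamEvent:
    time_utc: time     # media stream time 514-3
    event_type: str    # e.g., "face_off", "penalty", "power_play", "goal", "end_of_period"
    details: dict      # event-specific details (players involved, penalty type, etc.)

@dataclass
class DataFeed:
    title: str
    description: str
    source: str
    events: list = field(default_factory=list)

feed = DataFeed(
    title="Hockey game",
    description="Team A versus team B at Ice Arena",
    source="Channel 7",
    events=[
        MediaStreamEvent(time(5, 7), "face_off", {"period": 1}),
        MediaStreamEvent(time(5, 18), "penalty",
                         {"player": "Player X", "against": "Player Z", "minutes": 2}),
        MediaStreamEvent(time(5, 19), "power_play", {"team": "Team A"}),
        MediaStreamEvent(time(5, 21), "goal",
                         {"team": "Team A", "scorer": "Player M", "assist": "Player Q"}),
        MediaStreamEvent(time(5, 31), "end_of_period", {"period": 1}),
    ],
)
```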

As shown, the data feed 510-3 may include media stream events 516-3 related to media stream times 514-3. The media stream time 514-3 may be specified in a variety of different ways, including using Coordinated Universal Time (abbreviated as "UTC"), the local time of the user, the time at a virtual assistant server, the time at a media source (e.g., a sports venue), or various other time zones. In other examples, the media stream time 514-3 may be provided as progress from the beginning of the media content (e.g., from the beginning of a movie, episode, sporting event, audio track, etc.). In other examples, media stream time 514-3 may be provided as a game clock time or the like. In any of the various examples, it should be appreciated that media stream time 514-3 may include precise time designations, such as seconds, milliseconds, or even finer gradations. For ease of reference, examples of media stream time 514-3 are provided herein with UTC hour and minute designations, although seconds, milliseconds, or even finer gradations may also be used.
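Because the feed might express media stream time 514-3 as UTC wall-clock time, as progress from the start of the content, or as a game clock, a receiving system could normalize these to a single representation, such as seconds from the start of the stream. A minimal sketch under that assumption (the function name and offsets are illustrative):

```python
from datetime import datetime, timezone

def utc_to_stream_offset(event_time_utc: datetime, stream_start_utc: datetime) -> float:
    """Convert a UTC media stream time to seconds from the start of the stream."""
    return (event_time_utc - stream_start_utc).total_seconds()

# Example: an event logged at 5:21 UTC in a stream whose header entry is at 5:01 UTC.
# The calendar date is arbitrary and only needed to build datetime objects.
start = datetime(2016, 1, 1, 5, 1, tzinfo=timezone.utc)
event = datetime(2016, 1, 1, 5, 21, tzinfo=timezone.utc)
assert utc_to_stream_offset(event, start) == 20 * 60  # 1200 seconds into the stream
```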

The media stream events 516-3 may include various events or points of interest in the media stream 512-3. In a sporting event, for example, media stream events 516-3 may include plays, penalties, goals, segments of the game (e.g., a period, a quarter, a half, etc.), the players in the game at a given time (e.g., the batter at the plate, the players on the ice, the quarterback on the field, the kicker on the field, etc.), and so forth. In a television program (e.g., a situation comedy, a talk show, etc.), media stream events 516-3 may include a title sequence, characters appearing, actors appearing (e.g., with on-screen time designations), events within the program's plot (e.g., a particular scene), guest appearances, guest performances, monologues, commercial breaks, and so forth. In an awards show (e.g., movie awards, theater awards, etc.), media stream events 516-3 may include monologues, award presentations, winner acceptance speeches, artists' performances, commercial breaks, and so forth. In a radio program, media stream events 516-3 may include songs, guest speakers, discussion topics, and the like. It should therefore be appreciated that various events or points of interest may be identified in any of a variety of media types, and those events may be associated with particular times in the media.

In other examples, points of interest or events may be identified based on social media, popular viewpoints, voting, and the like. For example, popular reviews on a social media network associated with particular media (e.g., a live sporting event) may be used to identify possible points of interest and approximate times of occurrence (e.g., shortly before the first review of a topic). In another example, the viewer may indicate the point of interest by marking a time in the media (e.g., using a button on a remote control, verbal request, virtual button, etc.). As another example, points of interest may be identified from users who share media with others (such as sharing video clips from a portion of a media stream). Thus, the media stream event 516-3 in the data feed 510-3 may be identified from a media provider, a user, a social network discussion, and various other sources.

In the example of FIG. 60, the data feed 510-3 may include media stream events 516-3 associated with events in the hockey game. For example, the face-off at the beginning of the first period may occur at 5:07 (UTC), and the data feed 510-3 may include an associated media stream event 516-3 at the particular media stream time 514-3 of that event. At 5:18 (UTC), player X may be penalized for a stick infraction against player Z, resulting in a two-minute penalty. Penalty details (e.g., penalty type, players involved, penalty duration, etc.) can be included in the media stream event 516-3 associated with the penalty at that particular media stream time 514-3. At 5:19 (UTC), team A may begin a power play, and a media stream event 516-3 associating the power play with the particular media stream time 514-3 may be included. As shown, various other media stream events 516-3 may likewise be included and associated with particular media stream times 514-3. The details of different events may vary, and some or all of the information may be incorporated into the virtual assistant knowledge. For example, the details of a goal may include the scoring player and any assisting players. Details of the end of a power play may include which team lost the man advantage, how many shots were taken during the power play, and an indication that the teams have returned to full strength. Details of an on-screen player may include the player's coordinate position on the screen. Additionally, the media stream events 516-3 may include period designations for the game, such as the end of the first period at 5:31 (UTC).

In other examples, various other media stream events 516-3 with additional detailed information may be included in the data feed 510-3 and/or determined from the media stream 512-3 itself. For example, the players on the ice can be associated with the media stream time 514-3, score changes can be associated with the media stream time 514-3, breaks in play can be associated with the media stream time 514-3, and so forth. In addition, various other details may be included for a particular event or may be associated with the media stream, such as various statistics, player information, participant information (e.g., officials, coaches, etc.), game progress indicators, and so forth. As such, the data feed 510-3 may include detailed textual descriptions of the various events 516-3 that occur in the media stream 512-3 at various times 514-3.

It should be appreciated that knowledge of the media stream events 516-3 and media stream times 514-3 may be incorporated into the virtual assistant knowledge base without the media stream 512-3 itself being received. In some examples, the information of data feed 510-3 may be received by virtual assistant server 114-3 without media stream 512-3 in order to incorporate that information into the virtual assistant knowledge (e.g., into data and models 120-3). Media stream 512-3, on the other hand, may be provided directly to user device 102-3, television set-top box 104-3, or another user device. As described below, in some examples, virtual assistant knowledge of media events 516-3 may be used to cue playback of media stream 512-3 on a user device (e.g., on user device 102-3, television set-top box 104-3, etc.) and to respond to other virtual assistant queries. In other examples, media stream 512-3, portions of media stream 512-3, and/or metadata associated with media stream 512-3 may be received by virtual assistant server 114-3 and incorporated into the knowledge base of the virtual assistant.

Referring again to process 400-3 in FIG. 59, at block 404-3, a spoken user request associated with an event in the media stream may be received. As described above, voice input may be received from a user in various ways, such as via user device 102-3, remote control 106-3, or another user device in system 100-3. The voice input for the virtual assistant can include various user requests, including requests associated with media and/or events within particular media. For example, the user request may include a reference to any of the media stream events 516-3 discussed herein, such as a query associated with the hockey game events shown in fig. 60. In some examples, the user request may include a request to cue media to a particular point of interest. For example, the user may request to view a fight in the hockey game (e.g., "Show me the fight between player Y and player Q"), jump to the start of a period (e.g., "Jump to the face-off in the first period"), view a goal (e.g., "Show me player M's goal"), view the result of a particular penalty (e.g., "Show me the penalty called on player X"), and so forth.

Referring again to the process 400-3 of FIG. 59, at block 406-3, the media stream may be played back beginning at a time in the media stream associated with the event in the user request. For example, knowledge from data feed 510-3 incorporated into the virtual assistant knowledge base may be used to determine a particular time in the media stream associated with a user's request for particular content. Fig. 61 shows an exemplary virtual assistant query response cueing video playback based on an event in the media stream that is responsive to the query. In the illustrated example, the user may be watching display 112-3 with content controlled by television set-top box 104-3. The user may be watching video 620-3, which may include the hockey game associated with the data feed 510-3 described above. As discussed with reference to block 404-3 of process 400-3, the user may then request to view particular media content associated with an event. For example, the user may request to view a goal (e.g., "Show me that goal again," "Show me player M's goal," "Show me team A's goal," "Show me the goal in the first period," "Show me the first goal in the A/B hockey game," "Play back the last goal," etc.).

In response to the user request, a particular time in the media stream (e.g., in video 620-3) that is responsive to the user's request may be determined. In this example, using the knowledge from the data feed 510-3 in fig. 60 incorporated into the virtual assistant knowledge base, the system can identify the goal scored by team A's player M, assisted by player Q, at 5:21 (UTC), as shown in fig. 60. The system may then move the timeline of video 620-3 to the correct time to display the desired content. In this example, the system may move the timeline of video 620-3 to begin playback at cue time 624-3 indicated on playback indicator 622-3. As shown, cue time 624-3 may be different than live time 626-3 (e.g., the time associated with a live television or other live content stream). In some examples, the cue time 624-3 may correspond to the media stream time 514-3 associated with the corresponding media stream event 516-3. In other examples, the cue time 624-3 may be earlier or later than the media stream time 514-3, depending on how the media stream event 516-3 is associated with the media stream time 514-3. For example, the cue time 624-3 may be thirty seconds, one minute, two minutes, or another amount earlier than the corresponding media stream time 514-3 to allow the user to see the play immediately before the goal is scored. In some examples, the data feed 510-3 may include precise time designations for when to begin playback of a particular event (e.g., designating when a player begins the rush that leads to a goal, designating when a foul first occurs, etc.). Accordingly, video 620-3 may be played for the user starting at cue time 624-3 in response to the user's virtual assistant request.
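The cueing behavior described above might be implemented along the following lines. This is an illustrative sketch only: the `player.seek`/`player.play` interface, the event attributes, and the thirty-second lead-in are assumptions rather than a defined implementation.

```python
LEAD_IN_SECONDS = 30  # begin playback slightly before the event itself

def cue_to_event(player, events, wanted_type, **criteria):
    """Seek playback to just before the first event matching the request.

    `events` is assumed to be a chronologically ordered list of objects with
    `event_type`, `details` (a dict), and `offset_s` (seconds from stream start).
    """
    for event in events:
        if event.event_type != wanted_type:
            continue
        if any(event.details.get(key) != value for key, value in criteria.items()):
            continue
        # Cue time 624-3: the matched event's time minus a short lead-in.
        player.seek(max(event.offset_s - LEAD_IN_SECONDS, 0.0))
        player.play()
        return event
    return None

# e.g., "Show me player M's goal":
#   cue_to_event(player, feed_events, "goal", scorer="Player M")
```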

In some examples, video 620-3 may replace another video shown on display 112-3 or may be retrieved for playback in response to a user request. For example, a user watching other content may issue a request to watch the last goal scored in the hockey game on another channel (e.g., "Show me the last goal scored in the hockey game on channel seven," "Show me the last goal of the A/B hockey game," "Show me the first goal in the Ice Arena game," etc.). As described above, if the user request cannot be resolved to particular media, the virtual assistant may prompt for more information or confirmation as needed (e.g., "Do you mean the hockey game between team A and team B at the Ice Arena on channel 7?"). It should be appreciated that the video 620-3 may be played on the user device 102-3 or any other device, and the virtual assistant may similarly cue the video 620-3 on the user device 102-3 or another device to the cue time 624-3 (e.g., based on a particular user command, based on the device on which the user is viewing video 620-3, based on the source of the user request, etc.).

In some examples, the user request for the virtual assistant may include an ambiguous reference to content shown by television set-top box 104-3 on display 112-3 or on touch screen 246-3 of user device 102-3. For example, a request related to the video 620-3 shown on the display 112-3 in FIG. 61 may include an ambiguous reference to the on-screen player 628-3 or the on-screen player 630-3. The particular player the user is asking about or referring to may not be clear from the voice input alone. In another example, user requests that are ambiguous from the speech input alone may include other references. For example, a request to view a team roster may be ambiguous without knowing which particular game the user is watching; a request to watch the next goal may be ambiguous without knowing which particular game the user is watching; and so on. Accordingly, the content shown on display 112-3 and associated metadata (e.g., from data feed 510-3 or otherwise) may be used to disambiguate user requests and determine user intent. For example, on-screen actors, on-screen players, game rosters, cast lists, team rosters, and the like may be used to interpret the user request.

In the illustrated example, the content shown on display 112-3 and associated metadata may be used to determine the user's intent from a reference to "the goalkeeper," "that player," "eight," "he," "M," a nickname, or any other reference related to the particular game and/or a particular player on the screen. For example, as described above, the data feed 510-3 may include indications of which players appear on the screen at a particular time, which players are involved in a particular event, which players are on the ice at a particular time, and so on. At the time associated with fig. 61, for example, knowledge incorporated into the virtual assistant knowledge base from the data feed 510-3 may indicate that player M (e.g., on-screen player 628-3) and the goalkeeper (e.g., on-screen player 630-3) are on screen at that particular time, are on the ice at that time, are playing in that game, or are at least likely to be on screen or otherwise related to that particular time. Requests referencing "the goalkeeper," "that player," "eight," "he," "M," a nickname, or the like may then be disambiguated based on this information.

For example, a request to view the goalkeeper's most recent save (e.g., "Show me the goalkeeper's last save") may be resolved by determining that the goalkeeper in question corresponds to on-screen player 630-3 (rather than a backup goalkeeper or the other team's goalkeeper), and that goalkeeper's name or other identifying information may be used to identify content responsive to the user query (e.g., that goalkeeper's most recent save in the current game, that goalkeeper's most recent save in a previous game, etc.). In another example, based on the data feed 510-3 and associated metadata, a request to view number eight's next goal (e.g., "Show me number eight's next goal") may be resolved to the particular player (e.g., on-screen player 628-3) with the jersey number eight or the nickname "eight." Content responsive to the query (e.g., player M's next goal in this game, player M's next goal in a subsequent game, etc.) may then be identified based on the identifying information of the player corresponding to "eight." In other examples, content shown on display 112-3 or on user device 102-3 may be analyzed to interpret the user request in other ways. For example, the on-screen players 628-3 and 630-3 may be identified using facial recognition, image recognition (e.g., identifying a jersey number), or the like, in order to interpret the associated user request. It should be appreciated that the response to the user request may include an informational response and/or a media content response, and the response may be displayed on any device (e.g., display 112-3, touch screen 246-3, etc.).
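As a sketch of this kind of disambiguation, the on-screen player metadata for the current playback time could be matched against the spoken reference. The data layout and the limited set of reference forms handled here are illustrative assumptions, not the system's actual resolution logic.

```python
def resolve_player_reference(reference, on_screen_players):
    """Resolve an ambiguous reference ("the goalkeeper", "eight", "M") against
    the players known to be on screen at the current playback time.

    Each entry of `on_screen_players` is assumed to look like:
    {"name": "Player M", "number": 8, "position": "forward", "nicknames": ["M"]}
    """
    ref = reference.strip().lower()
    spelled_numbers = {"eight": 8, "nine": 9, "ten": 10}  # tiny illustrative table
    for player in on_screen_players:
        names = [player["name"].lower()] + [n.lower() for n in player.get("nicknames", [])]
        if ref in names:
            return player
        if ref in ("goalie", "goalkeeper") and player.get("position") == "goaltender":
            return player
        if ref.isdigit() and int(ref) == player.get("number"):
            return player
        if spelled_numbers.get(ref) == player.get("number"):
            return player
    return None  # fall back to prompting the user for clarification
```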

While various examples have been provided herein, it should be understood that a user may refer to a player (as well as an actor, a character, etc.) in a variety of different ways, all of which may be disambiguated according to the examples discussed herein. For example, a user may reference a name (e.g., first name, last name, full name, nickname, etc.), a number, a position, a team, a role on the field (e.g., "the backup quarterback"), a game-specific identifier (e.g., the starting pitcher, a substitute, a relief pitcher, the closer, etc.), playing experience (e.g., a rookie, a first-year player, a second-year player, etc.), a role on the team (e.g., the captain, an alternate captain, etc.), a style of play (e.g., physical, fast, etc.), a previous team, a college (e.g., "the quarterback from Q university"), statistics (e.g., "the player with a hat trick in the game," "the penalty on the team's leading scorer," etc.), biographical information (e.g., "player O's son," etc.), appearance (e.g., tall, short, skin tone, clothing, etc.), sponsorship (e.g., "the driver of the hardware store car"), and so forth.

In other examples, the user request for the virtual assistant may include an ambiguous reference based on the current playback position of content shown by television set-top box 104-3 on display 112-3 or on touch screen 246-3 of user device 102-3. For example, the user may reference the "next" goal, the "previous" penalty, the "next" advertisement, the "most recent" performance, the "next" actor appearance, and so on. The user's intent (e.g., the particular desired content) may not be clear from the speech input alone. However, in some examples, the current playback position in the media stream may be used to disambiguate the user request and determine the user intent. For example, a media stream time indicating the current playback position can be sent to the virtual assistant system and used by the virtual assistant system to interpret the user request.

Fig. 62 shows a media stream 512-3 in which exemplary media stream events 516-3 occur before and after a current playback position 732-3, which may be used to interpret a user query (e.g., to disambiguate the user request and determine the user intent). As shown, the live time 626-3 may be later than the current playback position 732-3, and in some examples, the media stream 512-3 may include a recording of content that is no longer live. Given the current playback position 732-3 as shown, various references to media stream events 516-3, such as "next" and "previous" events, may be interpreted. For example, a user request to view the previous or most recent goal (e.g., "Show me the most recent goal") may be ambiguous based on the voice input alone, but the current playback position 732-3 may be used to interpret the request (e.g., to resolve the reference "most recent") and identify the previous goal 734-3 as the desired media stream event 516-3. In another example, a user request to view the next penalty (e.g., "Show me the next penalty") may be ambiguous based on the voice input alone, but the current playback position 732-3 may be used to interpret the request (e.g., to resolve the reference "next") and identify the next penalty 738-3 as the desired media stream event 516-3. The current playback position 732-3 may similarly be used to interpret requests for the previous penalty 736-3 and the next goal 740-3, as well as various other positional references (e.g., the next two goals, the last three penalties, etc.).
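A sketch of resolving such positional references against the current playback position follows; the event attributes and the function signature are illustrative assumptions.

```python
def resolve_relative_event(events, current_offset_s, event_type, direction="next", ordinal=1):
    """Resolve "next"/"previous" event references relative to the playback position.

    `events` is assumed to be sorted by `offset_s` (seconds from stream start).
    `ordinal` supports requests like "the next two" (ordinal=2 returns the second match).
    """
    if direction == "next":
        matches = [e for e in events
                   if e.event_type == event_type and e.offset_s > current_offset_s]
        return matches[ordinal - 1] if len(matches) >= ordinal else None
    # "previous" / "most recent"
    matches = [e for e in events
               if e.event_type == event_type and e.offset_s <= current_offset_s]
    return matches[-ordinal] if len(matches) >= ordinal else None

# "Show me the most recent goal" -> resolve_relative_event(events, pos, "goal", "previous")
# "Show me the next penalty"     -> resolve_relative_event(events, pos, "penalty", "next")
```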

FIG. 63 illustrates an exemplary data feed 810-3 associating events in a media stream 812-3 with particular times 514-3 in the media stream. The data feed 810-3 may include features similar to those of the data feed 510-3 described above, and the data feed 810-3 may similarly be received at block 402-3 and used to cause media playback at block 406-3 of the process 400-3 discussed above. In the example of FIG. 63, media stream 812-3 may include a televised awards show. In other examples, similar media streams may include internet-based awards shows, radio shows, art shows, and the like. Summary information or other descriptive details of the particular media, whether or not associated with a particular time, may be included in the data feed 810-3 (e.g., in a header or the like). In the illustrated example, descriptive summary information is provided in a first box at 10:59 (UTC), including a media title (e.g., "movie awards"), a media description ("annual movie awards hosted by comedian Whitney Davidson"), and a media source (e.g., airing on television "channel 31"). Various other descriptive information may be similarly provided, and information may be provided in specific fields for reference (e.g., a title field may include a title, a source field may include a television channel or internet address, etc.). In addition to the information shown in fig. 63, various other media information, such as participant names, performance descriptions, awards to be presented, and the like, may be obtained. Various summary and descriptive information may be incorporated into the virtual assistant knowledge and used to support related queries.

As shown, the data feed 810-3 may include media stream events 516-3 related to media stream times 514-3, which may be similar to the events 516-3 and times 514-3 discussed above with reference to FIG. 60. The media stream events 516-3 in data feed 810-3 may include various events or points of interest in media stream 812-3. For example, in an awards show (e.g., movie awards, theater awards, etc.) such as media stream 812-3, media stream events 516-3 may include monologues, award presentations, winner acceptance speeches, participant appearances, performance descriptions, commercial breaks, and the like.

In other examples, points of interest or events may be identified based on social media, popular viewpoints, voting, and the like. For example, popular reviews on a social media network associated with a particular media (e.g., a bonus show live) may be used to identify possible points of interest and approximate times of occurrence (e.g., shortly before the first review of the subject). In another example, the viewer may indicate the point of interest by marking a time in the media (e.g., using a button on a remote control, verbal request, virtual button, etc.). As another example, points of interest may be identified from users who share media with others (such as sharing video clips from a portion of a media stream). Thus, the media stream event 516-3 in the data feed 810-3 may be identified from a media provider, a user, a social network discussion, and various other sources.

In the example of FIG. 63, the data feed 810-3 may include media stream events 516-3 associated with events in the awards show. For example, an opening monologue by a comedian named Whitney Davidson may occur at 11:00 (UTC), and the data feed 810-3 may include the associated media stream event 516-3 at the particular media stream time 514-3 of that event. At 11:08 (UTC), actors named Jane Doe and John Richards may present the award for best costume design to a winning designer named Jennifer Lane. Award presentation details (e.g., the award name, the presenters, the winner, etc.) may be included in the media stream event 516-3 associated with the award presentation at that particular media stream time 514-3. At 11:10 (UTC), the best costume design winner may give an acceptance speech, and a media stream event 516-3 with associated details (e.g., type of award, award winner, presenters, etc.) may be included at that time. At 11:12 (UTC), a singer named David Holmes may give a musical performance of a song entitled "Unforgettable," and a media stream event 516-3 with associated details may be included at the corresponding time 514-3. As shown, various other media stream events 516-3 may likewise be included and associated with particular media stream times 514-3. The details of different events may vary, and some or all of the information may be incorporated into the virtual assistant knowledge.

In other examples, various other media stream events 516-3 with additional detailed information may be included in the data feed 810-3 and/or determined from the media stream 812-3 itself. For example, an actor or participant who is appearing on the screen may be associated with media stream time 514-3. Such information may originate from the provided data or may be derived by analyzing the media stream 812-3 (e.g., using facial recognition, etc.). In addition, various other details may be included in a particular event or may be associated with a media stream, such as various statistical information, participant information (e.g., audience, producer, director, etc.), and so forth. As such, the data feed 810-3 may include detailed text descriptions of various events 516-3 that occurred in the media stream 812-3 at various times 514-3. As described above, this information may be incorporated into the knowledge base of the virtual assistant and used to prompt the video in response to a user request, such as in accordance with the user request discussed above with reference to block 406-3 of process 400-3.

FIG. 64 illustrates an exemplary data feed 910-3 associating events in a media stream 912-3 with particular times 514-3 in the media stream. The data feed 910-3 may include features similar to those of the data feed 510-3 and the data feed 810-3 described above, and the data feed 910-3 may similarly be received at block 402-3 and used to cause media playback at block 406-3 of the process 400-3 discussed above. In the example of FIG. 64, media stream 912-3 may include a television program, such as a situation comedy. In other examples, similar media streams may include game shows, news shows, talk shows, art shows, quiz shows, reality shows, dramas, soap operas, and the like. Summary information or other descriptive details of the particular media, whether or not associated with a particular time, may be included in the data feed 910-3 (e.g., in a header or the like). In the illustrated example, descriptive summary information is provided in a first box at 14:00 (UTC), including a media title (e.g., "television program"), a media description ("a situation comedy starring actors Jane Holmes (character A) and David Doe (character B)"), and a media source (e.g., streamed from a network source). Various other descriptive information may be similarly provided, and information may be provided in specific fields for reference (e.g., a title field may include a title, a source field may include a television channel or internet address, etc.). In addition to the information shown in fig. 64, various other media information, such as producers, directors, hosts, participant names, participant characters, actors, plot descriptions, guests, and the like, may be obtained. Various summary and descriptive information may be incorporated into the virtual assistant knowledge and used to support related queries.

As shown, the data feed 910-3 may include media stream events 516-3 related to media stream times 514-3, which may be similar to the events 516-3 and times 514-3 discussed above with reference to FIG. 60. The media stream events 516-3 in the data feed 910-3 may include various events or points of interest in media stream 912-3. For example, in a television program (e.g., a television show, a news program, a talk show, etc.) such as media stream 912-3, media stream events 516-3 may include performance descriptions (e.g., a scene description, performers appearing, etc.), program segment indicators (e.g., a monologue, an opening title sequence, a guest appearance, an award presentation segment, etc.), commercial breaks, and so forth.

In other examples, points of interest or events may be identified based on social media, popular opinion, votes, and the like. For example, popular comments on a social media network associated with particular media (e.g., a new episode of a popular sitcom, a late-night talk show, etc.) may be used to identify possible points of interest and their approximate times of occurrence (e.g., shortly before the first comment on the topic). In another example, viewers may indicate points of interest by marking a time in the media (e.g., using a button on a remote control, a verbal request, a virtual button, etc.). As another example, points of interest may be identified from users sharing media with others (such as sharing a video clip from a portion of a media stream). Thus, the media stream events 516-3 in the data feed 910-3 may be identified from media providers, users, social network discussions, and various other sources.

In the example of FIG. 64, the data feed 910-3 may include media stream events 516-3 associated with events in a sitcom television program. For example, the opening credits may begin at 14:01 (UTC), and the data feed 910-3 may include an associated media stream event 516-3 at the particular media stream time 514-3 of that event. At 14:03 (UTC), in a scene of the program, two characters may argue over a parking space. Details of the scene or of that moment in the episode (e.g., on-screen characters, on-screen actors, a description of what happened, etc.) may be included in a media stream event 516-3 associated with that particular media stream time 514-3. At 14:06 (UTC), a guest may appear on the program and sing a song, and a media stream event 516-3 with associated details may be included at the corresponding time 514-3. As shown, various other media stream events 516-3 may likewise be included and associated with particular media stream times 514-3. The details of different events may vary, and some or all of the information may be incorporated into virtual assistant knowledge.
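
For illustration only, the following sketch (Python, with hypothetical field names and a placeholder date; not a format defined by this disclosure) shows one way the sitcom feed of FIG. 64 could be represented as a descriptive header plus timestamped event records that a virtual assistant could merge into its knowledge.

```python
from datetime import datetime, timezone

# Hypothetical representation of data feed 910-3: a header with descriptive
# summary information plus media stream events keyed to UTC times.
data_feed_910 = {
    "header": {
        "time": datetime(2014, 1, 1, 14, 0, tzinfo=timezone.utc),  # placeholder date
        "title": "Television Program",
        "description": "Situation comedy with Jane Holmes (character A) and David Doe (character B)",
        "source": "network stream",
    },
    "events": [
        {"time": datetime(2014, 1, 1, 14, 1, tzinfo=timezone.utc),
         "description": "Opening credits"},
        {"time": datetime(2014, 1, 1, 14, 3, tzinfo=timezone.utc),
         "description": "Characters A and B argue over a parking space"},
        {"time": datetime(2014, 1, 1, 14, 6, tzinfo=timezone.utc),
         "description": "Guest appears and sings a song"},
    ],
}

def events_matching(feed, keyword):
    """Return (time, description) pairs whose description mentions the keyword."""
    return [(e["time"], e["description"])
            for e in feed["events"] if keyword.lower() in e["description"].lower()]

print(events_matching(data_feed_910, "guest"))
```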

In other examples, various other media stream events 516-3 with additional detailed information may be included in the data feed 910-3 and/or determined from the media stream 912-3 itself. For example, an actor or participant appearing on screen may be associated with a media stream time 514-3. Such information may originate from the provided data or may be derived by analyzing the media stream 912-3 (e.g., using facial recognition, etc.). In addition, various other details may be included in a particular event or may be associated with a media stream, such as various statistical information, participant information (e.g., audience, producer, director, etc.), and so forth. As such, the data feed 910-3 may include detailed textual descriptions of various events 516-3 that occurred in the media stream 912-3 at various times 514-3. As described above, this information may be incorporated into the knowledge base of the virtual assistant and used to cue video playback in response to a user request, such as the user requests discussed above with reference to block 406-3 of process 400-3.

In any of the various examples discussed herein, additional virtual assistant knowledge may be derived from closed caption text associated with particular media content. For example, the information in any of the data feeds discussed herein may be supplemented by, or derived from, closed caption text. Additional media stream events 516-3 (e.g., identifying when a particular phrase is spoken, identifying when a particular person speaks, etc.) may be added at media stream times 514-3 based on closed caption text associated with particular times in the media playback. In addition, closed caption text may be used to disambiguate user requests and determine user intent according to the various examples discussed herein (e.g., based on spoken names).

FIG. 65 illustrates exemplary closed caption text 1054-3 associated with a particular time in the video 1050-3, which may be used to respond to a virtual assistant query. In the illustrated example, the closed caption interface 1052-3 may include closed caption text 1054-3 at the current playback position 1056-3 of the video 1050-3 shown on the display 112-3. At the current playback position 1056-3, the characters 1060-3, 1062-3, and 1064-3 may appear on screen, and some of them may be speaking the text shown in the closed caption text 1054-3. The closed caption text 1054-3 may be associated with the current playback position 1056-3 when deriving information for virtual assistant knowledge. In some examples, the time offset 1058-3 may be used as a reference (e.g., the closed caption text 1054-3 may appear two minutes into the video 1050-3, or similarly, the corresponding speech may be spoken two minutes into the video 1050-3).

Various information may be derived from the closed caption text 1054-3, and some of that information may be associated with the time offset 1058-3 as a particular media stream event 516-3. For example, spoken names may be used to infer that a person is present on screen at a particular time. The spoken name "Blanche" may be used, for example, to infer that a character named "Blanche" may appear on screen at or near the time offset 1058-3 in the video 1050-3. The derived information may then be used to respond to user requests associated with the character name "Blanche" or with the corresponding actress identified from metadata (e.g., "Show me the scene where Blanche appears"). In another example, spoken phrases may be identified and associated with the particular times at which they were spoken. A particular catchphrase may be identified, for example, as being spoken at or near the time offset 1058-3 in the video 1050-3. The derived information is then available to respond to user requests associated with that spoken phrase (e.g., "Show me the scene where Blanche says her catchphrase"). Thus, closed caption text can be analyzed and associated with particular times, and the combination can be incorporated into virtual assistant knowledge to respond to relevant user requests.
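
As a rough sketch of how such caption-derived events might be produced, the following example assumes closed caption cues are available as (time offset, text) pairs and simply scans them for names of interest; the cue data and function names are hypothetical.

```python
import re

# Hypothetical caption cues: (offset in seconds from start of video, caption text).
captions = [
    (115.0, "Blanche, you're not going to believe this."),
    (120.0, "Well, Blanche always says that."),
    (300.5, "And now for something completely different."),
]

def caption_events(captions, names):
    """Infer 'person likely on screen' events from spoken names in caption text."""
    events = []
    for offset, text in captions:
        for name in names:
            if re.search(rf"\b{re.escape(name)}\b", text, re.IGNORECASE):
                events.append({"offset": offset,
                               "description": f"{name} referenced; likely on screen"})
    return events

for event in caption_events(captions, ["Blanche"]):
    print(event)
```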

It should be understood that information can be derived from closed caption text 1054-3 whether or not it is shown in an interface such as interface 1052-3. For example, closed caption text may be analyzed without actually playing the corresponding video, and the times may be derived from metadata associated with the closed captions. Further, while closed captions are shown on the display 112-3 in FIG. 65, it should be understood that closed captions may be analyzed to derive virtual assistant knowledge at a server or another device, with or without actual playback of the associated video.

As described above, speech input received from a user may be ambiguous. In addition to the above-described information that may be used to interpret a user request (e.g., on-screen team member, on-screen actor, playback position, etc.), various other contextual information may also be used to interpret a user request. For example, personal information about the user may be used to interpret the user request. The user may be identified based on voice recognition, logging in to the device, entering a password, using a particular account, selecting profile information (e.g., age and gender), and so forth. The user request may then be interpreted using the user-specific data of the identified user (or particular household). Such user-specific data may include user favorite teams, user favorite sports, user favorite team members, user favorite actors, user favorite television shows, user geographic location, user demographic characteristics, user viewing history, user subscription data, and the like. Additionally, the user-specific data (or family-specific data) may include a media content viewing history that reflects commonly viewed programs, commonly viewed sporting events, categories of preferences, and the like. Further, in some examples, generic age and gender data may be inferred from the user's speech (e.g., based on pitch, wording, etc.), which may then be used to bias the results according to profile information (e.g., bias words, shows, names, query results, etc. based on possible preferences of age and gender profiles).

In some examples, the user request may specifically reference user-specific data. For example, a user may reference "my team" (e.g., "How is my team doing?"). The reference "my team" may then be resolved, using the user-specific data, to the specific sports team designated as the user's favorite team. In other examples, user-specific data may be used to bias speech recognition and user intent determination (e.g., inferring that a particular user may be asking about a particular actor based on a recently viewed movie in which that actor appears). For example, names of actors or team members whom the user likes, views, or is otherwise associated with may be identified in the user-specific data and used in the speech recognition and intent determination processes to bias the results in favor of those actor or team member names. This may help accurately recognize unusual names, names that sound like other words or other names, and so forth.
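
The following sketch illustrates both ideas under simple assumptions: a hypothetical user profile dictionary, a naive string substitution for "my team," and a hypothetical list of recognition candidates whose scores are nudged toward familiar names.

```python
# Hypothetical user-specific data for the identified user.
user_profile = {
    "favorite_team": "Team A",
    "favorite_players": ["Player M", "Jennifer"],
}

def resolve_personal_reference(request_text, profile):
    """Resolve references such as 'my team' using user-specific data."""
    lowered = request_text.lower()
    if "my team" in lowered:
        start = lowered.index("my team")
        return (request_text[:start] + profile["favorite_team"]
                + request_text[start + len("my team"):])
    return request_text

def bias_candidates(candidates, profile):
    """Boost recognition candidates that mention names the user is associated with."""
    boosted = []
    for text, score in candidates:
        for name in profile["favorite_players"]:
            if name.lower() in text.lower():
                score += 0.1  # small bias toward familiar names
        boosted.append((text, score))
    return sorted(boosted, key=lambda c: c[1], reverse=True)

print(resolve_personal_reference("How is my team doing?", user_profile))
print(bias_candidates([("show me gener for", 0.55), ("show me Jennifer", 0.50)], user_profile))
```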

In addition to the various other context sources discussed herein for accurately recognizing speech input and interpreting user requests, information from multiple devices associated with a user may be used as context for accurate speech recognition and determining user intent. For example, a user watching television (e.g., on display 112-3) may also consume content on another device (e.g., on user device 102-3). The user request may then be interpreted using the content from both devices.

FIG. 66A shows the television display 112-3 with video 1150-3 displayed. FIG. 66B illustrates the user device 102-3 with the touchscreen 246-3 showing a displayed image 1170-3 and displayed text 1172-3. A user request referencing content from either device may be received (e.g., via remote control 106-3 or user device 102-3). For example, the user may request that "Jennifer's" last goal be shown. The reference to "Jennifer" may be ambiguous based on the speech input alone. However, the displayed text 1172-3 may be used to disambiguate the request and identify Jennifer as a team member appearing in the content shown on the user device 102-3. Video content responsive to the request can then be identified based on that particular team member, and the content can be played for the user. Responsive content may be provided on the display 112-3 or on the user device 102-3 (e.g., based on particular commands, user preferences, etc.).
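
A minimal sketch of this kind of cross-device disambiguation follows; the context dictionaries and the name-matching heuristic are hypothetical and only illustrate resolving an ambiguous first name against text displayed on either device.

```python
# Hypothetical context gathered from the two devices.
tv_context = {"now_playing": "Soccer match", "on_screen_players": ["Player M"]}
user_device_context = {"displayed_text": "Jennifer Smith leads the league in goals"}

def resolve_name(request_text, contexts):
    """Match a capitalized name in the request against names visible on any device."""
    words = request_text.replace("'s", "").split()
    for context in contexts:
        for value in context.values():
            text = " ".join(value) if isinstance(value, list) else str(value)
            for word in words:
                if word.istitle() and word in text:
                    # Return the full name surrounding the matched word, if present.
                    tokens = text.split()
                    for first, second in zip(tokens, tokens[1:]):
                        if first == word and second.istitle():
                            return f"{first} {second}"
                    return word
    return None

print(resolve_name("Show Jennifer's last goal", [tv_context, user_device_context]))
# -> "Jennifer Smith"
```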

In another example, names associated with the video 1150-3 in FIG. 66A and names associated with the displayed image 1170-3 and the displayed text 1172-3 in FIG. 66B may be used in the speech recognition process to bias the results toward likely name candidates or to identify names that might otherwise be difficult to recognize. For example, the user request may include a name that may be ambiguous, but the user intent may be accurately identified using the names associated with the content displayed on either device. In other examples, lists of actors, award presenters, performers, producers, directors, participants, penalties, sports terms, and the like associated with content displayed on either device may similarly be used to improve speech recognition accuracy and determine user intent.

In some examples, the image 1170-3 displayed in FIG. 66B may comprise a moving image or video. For example, the content shown in FIG. 66B may include secondary screen experience data (e.g., data and video intended to accompany another program), secondary camera view data (e.g., video of a particular program from a selectable alternative view or vantage point relative to the primarily displayed video), and so forth. Such information may be used to improve speech recognition accuracy and determine user intent in a manner similar to that described above. Further, whether or not it is shown on a separate user device, secondary screen experience data, secondary camera view data, and the like may be received and used as part of a data feed to identify relevant points of interest and associated times in a media stream. For example, the secondary screen experience may include descriptions of game highlights. Those descriptions may be included in the virtual assistant knowledge as related media stream events with associated media stream times and may be used to respond to user requests. Similarly, the secondary camera view data may be included in the virtual assistant knowledge as related media stream events identifying particular media stream times at which alternative camera content is available (which may be used, for example, to respond to certain user requests).

As described above, media playback may begin from a particular cue time in response to certain user requests. In some examples, multiple segments of one or more media streams may be played back in succession in response to some user requests. For example, the user may request to view the highlights of a game, all goals in a game, all fights in a game, all appearances of a particular actor in a program, all scenes of a particular character in a program, the opening monologue of each of a plurality of talk shows, the award presentation segment of each of a plurality of award shows, the best moments of a show, or various other media segments of one or more programs. In the same manner as described above, the particular times associated with the desired events may be identified in one or more programs, and playback may begin with the first segment and continue with the other identified segments in succession. In some examples, highlights, best moments, and the like may be determined based on bookmark popularity, social media discussion, playback counts, and so forth. The end of each segment may be identified in various ways, such as by a commercial break, another media event in the related media stream, a default playback duration, a particular endpoint entry in the media event details, and so forth. In this way, the user may request, for example, a highlight reel of particular content they want to see, and the system may automatically identify the desired highlights and play them back in succession (or provide them in any other order, make them available for optional playback, etc.).
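
The sketch below illustrates the consecutive-playback idea under stated assumptions: hypothetical event records with optional end times, and a default segment length used when no endpoint is provided.

```python
DEFAULT_SEGMENT_SECONDS = 30.0

# Hypothetical media stream events matching a request such as "show me all the goals".
matching_events = [
    {"start": 1200.0, "end": None, "description": "Goal by Player M"},
    {"start": 2350.0, "end": 2395.0, "description": "Goal by Player X"},
]

def build_playlist(events, default_length=DEFAULT_SEGMENT_SECONDS):
    """Turn matching events into (start, end) segments to be played back in succession."""
    segments = []
    for event in sorted(events, key=lambda e: e["start"]):
        end = event["end"] if event["end"] is not None else event["start"] + default_length
        segments.append((event["start"], end))
    return segments

def play_segments(segments):
    for start, end in segments:
        # A real system would cue the player here; this sketch just reports the plan.
        print(f"Play from {start:.0f}s to {end:.0f}s")

play_segments(build_playlist(matching_events))
```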

In some examples, a user may want to share a particular segment of a media stream with a friend, family member, or the like. In one example, a user may indicate a bookmark location in a media stream that corresponds to a particular playback position in the media stream. The customized bookmark location may then be transmitted to a server and shared with friends through social networks, messages, other television set-top boxes 104-3, other user devices 102-3, and the like. The user may indicate the bookmark using a physical button, a virtual button, voice input, or any other input of remote control 106-3 and/or user device 102-3. For example, a user may direct a request to the virtual assistant system to bookmark a certain media segment and send it to a contact in the user's address book (e.g., "bookmark this and send it to Corey"). The system may then identify the particular media segment (e.g., a media identifier and a UTC reference, offset, etc.) and transmit it to the desired contact. In some examples, the user may identify both a start position and an end position of the desired segment. In other examples, the user may reference and share a particular media stream event (e.g., "share this goal with Jordan," "send this performance to Susan," etc.). In other examples, bookmarks and media stream events may be shared through a social network or the like.
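
As an illustration, the following sketch packages a bookmark as a media identifier plus playback offsets and stubs out the transmission step; the field names and functions are hypothetical.

```python
from datetime import datetime, timezone

def make_bookmark(media_id, playback_offset_seconds, end_offset_seconds=None):
    """Package a media identifier and playback position (and optional end) for sharing."""
    return {
        "media_id": media_id,
        "start_offset": playback_offset_seconds,
        "end_offset": end_offset_seconds,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

def share_bookmark(bookmark, contact_name):
    """Stub for transmitting the bookmark to a server for delivery to a contact."""
    print(f"Sending {bookmark['media_id']} @ {bookmark['start_offset']}s to {contact_name}")

share_bookmark(make_bookmark("soccer-match-123", 1200.0), "Corey")
```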

As described above, in response to a media-related virtual assistant query, the system may cue video playback and/or respond with an informational answer (e.g., by displaying a textual response, speaking a response aloud, etc., on display 112-3 or user device 102-3). In some examples, the various data feeds and other information discussed herein for cueing video playback may be used in a similar manner to determine informational responses to user requests. FIG. 67 illustrates an exemplary process 1200-3 for integrating information into digital assistant knowledge and responding to user requests. At block 1202-3, a data feed including data related to an event associated with a time in a media stream may be received. The data feed may include any of the data feeds discussed herein, with their corresponding media stream events 516-3, such as the data feed 510-3 discussed with reference to FIG. 60, the data feed 810-3 discussed with reference to FIG. 63, and the data feed 910-3 discussed with reference to FIG. 64.

Referring again to the process 1200-3 in FIG. 67, at block 1204-3, a spoken user request associated with an event in the data feed may be received. The user may request information about any media stream event, the currently playing media, on-screen team members, on-screen actors, and so forth. For example, a user may request identification of the player who scored (e.g., "Who scored that goal?").

At block 1206-3, a response to the user request may be generated based on data related to the event (e.g., data from any of the data feeds discussed herein). Any of the media stream events 516-3 discussed herein may be searched, for example, to obtain informational responses to various queries (e.g., the various query examples mentioned above with reference to block 1204-3). In some examples, the response may be generated based on the currently playing media (e.g., a program being played, a paused program, a program shown on screen, etc.). For example, a user request referencing the currently playing media may be ambiguous based on the voice input alone. The currently playing media may be used to disambiguate the user request and determine user intent by resolving references to the current content. For example, a user may request a list of actors for "this" program (e.g., "Who is in this show?"). However, the reference to "this" may be resolved using the currently playing program, and the user intent may be identified. For example, if the television program example of FIG. 64 is being played, the user query may be answered by identifying the actors Jane Holmes and David Doe using the summary information listed at 14:00 (UTC).
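
A minimal sketch of resolving "this" program against the currently playing media and answering from the feed's summary information follows; the data structures are hypothetical.

```python
# Hypothetical currently playing media, with summary information from its data feed header.
now_playing = {
    "title": "Television Program",
    "actors": ["Jane Holmes", "David Doe"],
}

def answer_actor_query(request_text, current_media):
    """Resolve 'this show/program' to the currently playing media and list its actors."""
    if current_media is None:
        return "I'm not sure which program you mean."
    lowered = request_text.lower()
    if "this" in lowered and ("show" in lowered or "program" in lowered):
        return f"{current_media['title']} stars {' and '.join(current_media['actors'])}."
    return None

print(answer_actor_query("Who is in this show?", now_playing))
```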

In other examples, the response may be generated based on the current playback position of the currently playing media and/or media content previously consumed by the user. For example, a user may request identification of the player who was just shown scoring, and may refer to "that" goal in the request (e.g., "Who scored that goal?"). The current playback position of the currently playing media may be used to determine the user's intent, and a response may be generated by resolving "that" goal to the most recent goal displayed to the user, regardless of whether other goals appear later in the media stream. In the example of FIG. 62, the current playback position 732-3 may be used to resolve "that" goal to the previous goal 734-3, and the content of the corresponding media stream event may be used to answer the query. Specifically, player M may be identified as having scored the most recent goal seen by the user. As discussed above with reference to FIG. 62, the current playback position may also be used to determine user intent from various other ambiguous references (e.g., next, previous, etc.), and the identified media stream event information may be used to formulate a response to the query.
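
The following sketch shows one way "that" goal could be resolved to the most recent goal event at or before the current playback position; the event records and offsets are hypothetical.

```python
# Hypothetical goal events, offsets in seconds from the start of the media stream.
goal_events = [
    {"offset": 900.0, "description": "Goal by Player M (Team A)"},
    {"offset": 2400.0, "description": "Goal by Player X (Team B)"},
]

def most_recent_event(events, playback_position):
    """Return the latest event at or before the current playback position, if any."""
    candidates = [e for e in events if e["offset"] <= playback_position]
    return max(candidates, key=lambda e: e["offset"], default=None)

# With the playback position between the two goals, "that goal" resolves to the first one.
print(most_recent_event(goal_events, playback_position=1500.0)["description"])
```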

Additionally, in some examples, users may want to time-shift their viewing experience and delay learning live or updated information. For example, a user may begin viewing a sporting event after it has begun, or even after it has ended. However, the user may want to experience the entire game as if it were live. In this case, the available virtual assistant knowledge may be filtered to reference only information available as of the current playback position and to avoid referencing information from points after the current playback position. For example, referring again to the example of FIG. 62, assuming the user is viewing at the current playback position 732-3, the system may refrain from including the next goal 740-3 in a response. The user may request, for example, the score at the current playback position 732-3 (e.g., "What is the score so far?"). In response, the system may provide a score based on previously viewed events (e.g., the previous goal 734-3) while excluding events (e.g., the next goal 740-3) after the current playback position 732-3.
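
A sketch of this playback-position filtering follows: only events at or before the current playback position are tallied, so a score query does not reveal later events. The event records are hypothetical.

```python
# Hypothetical scoring events with offsets (seconds) and the team that scored.
scoring_events = [
    {"offset": 900.0, "team": "Team A"},
    {"offset": 2400.0, "team": "Team B"},   # occurs after the current playback position
]

def score_so_far(events, playback_position):
    """Tally only events at or before the playback position to avoid spoilers."""
    tally = {}
    for event in events:
        if event["offset"] <= playback_position:
            tally[event["team"]] = tally.get(event["team"], 0) + 1
    return tally

print(score_so_far(scoring_events, playback_position=1500.0))  # {'Team A': 1}
```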

In some examples, the user request may specify (e.g., by saying "so far," "up to now," "at this point in the game," etc.) that the response information should be synchronized with the current playback position, or may specify (e.g., by saying "live," "updated," "current," etc.) that the response information should be the most recently updated information available. In other examples, settings, user preferences, or the like may determine whether the response includes the most updated information or, alternatively, only information synchronized with the playback position. Further, in some examples, alerts, notifications, messages, social media feed entries, and the like that may be associated with a particular game (e.g., based on terms, names, etc.) may be held back as desired and delivered only after the user reaches the playback position in the associated content corresponding to each message. For example, when the user is watching a sporting event on a delay, a message from a friend commenting on the live event (e.g., for delivery on user device 102-3 or any other device) may be intentionally delayed until the user reaches the point corresponding to the time the message was sent, at which point the message may be delivered to the user. In this way, the entire experience of watching a sporting event (or consuming any other media) may be time-shifted as desired (e.g., to avoid spoiling the results).
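
The time-shifted delivery idea can be sketched as a simple queue keyed by the live-event offset each message refers to; messages are released only once the viewer's playback position has passed that offset. The structures below are hypothetical.

```python
# Hypothetical queued messages, each tagged with the live-event offset it refers to.
pending_messages = [
    {"event_offset": 950.0, "text": "What a goal!"},
    {"event_offset": 2450.0, "text": "Unbelievable comeback!"},
]

def deliverable_messages(messages, playback_position):
    """Release only messages whose corresponding moment has already been watched."""
    ready = [m for m in messages if m["event_offset"] <= playback_position]
    remaining = [m for m in messages if m["event_offset"] > playback_position]
    return ready, remaining

ready, pending_messages = deliverable_messages(pending_messages, playback_position=1500.0)
for message in ready:
    print("Deliver:", message["text"])
```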

In other examples, the response may be generated based on content shown by the television set-top box 104-3 on the display 112-3, content shown on the touchscreen 246-3 of the user device 102-3, and/or metadata associated with any of the displayed content. For example, the response may be generated based on an on-screen actor, an on-screen team member, a list of participants, a list of actors in a program, a list of teams, and so forth. As discussed above with reference to FIGS. 61, 66A, and 66B, various information may be derived from the displayed content and associated metadata, and this information may be used to disambiguate a user request, determine user intent, and generate a response to the user request. For example, a response to a user request to identify an on-screen player (e.g., "Who is that player?") may be generated based on the displayed content and associated metadata. In the example of FIG. 61, on-screen player 628-3 may be identified as player M, for example, using the media stream events near cue time 624-3 (e.g., near the goal by team A). In another example, image processing may be used to identify the jersey number of the on-screen player 628-3, and player M may then be identified from the team list.
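
Two small sketches of on-screen player identification follow, using hypothetical data: one looks up a media stream event near the cue time, and the other maps a detected jersey number against a team roster.

```python
# Hypothetical data: events near cue times, and a team roster keyed by jersey number.
events = [{"offset": 1190.0, "description": "Goal by Player M (Team A)", "player": "Player M"}]
roster = {10: "Player M", 7: "Player X"}

def player_near_cue(events, cue_time, window=30.0):
    """Find a player mentioned in an event within a small window around the cue time."""
    for event in events:
        if abs(event["offset"] - cue_time) <= window:
            return event.get("player")
    return None

def player_from_jersey(roster, jersey_number):
    """Map a jersey number (e.g., from image processing) to a player name."""
    return roster.get(jersey_number)

print(player_near_cue(events, cue_time=1200.0))   # Player M
print(player_from_jersey(roster, 10))             # Player M
```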

Referring again to the process 1200-3 in FIG. 67, at block 1208-3, the response determined at block 1206-3 may be caused to be delivered. In some examples, delivering the response may include causing the response to be displayed or played on display 112-3, on user device 102-3, or on another device via television set-top box 104-3. For example, a textual response and/or a media response may be displayed or played in a virtual assistant interface on the device. In another example, delivering the response may include transmitting the response information (e.g., from a server) to television set-top box 104-3, user device 102-3, or another device. In other examples, the user may request identification of information within an image or video (e.g., "Which one is Jennifer?"), and the identification may be delivered in response. Accordingly, process 1200-3 may be used to respond to a variety of user queries in a variety of ways by employing timely data incorporated into the virtual assistant knowledge base.

Further, in any of the various examples discussed herein, the various aspects may be personalized for a particular user. User data, including contacts, preferences, location, favorite media, and the like, can be used to interpret voice commands and facilitate user interaction with the various devices discussed herein. The various processes discussed herein may also be modified in various other ways based on user preferences, contacts, text, usage history, profile data, age group data, and the like. Further, such preferences and settings may be updated over time based on user interactions (e.g., frequently spoken commands, frequently selected applications, etc.). The collection and use of user data available from various sources can be used to improve the delivery to users of invitational content or any other content that may be of interest to them. The present disclosure contemplates that, in some instances, such collected data may include personal information data that uniquely identifies, or can be used to contact or locate, a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, home addresses, or any other identifying information.

The present disclosure recognizes that the use of such personal information data in the present technology can be used to benefit users. For example, the personal information data can be used to deliver targeted content that is of greater interest to the user. Accordingly, use of such personal information data enables calculated control of the delivered content. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure.

The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities should take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.

Despite the foregoing, the present disclosure also contemplates examples in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services. In another example, users can select not to provide location information for targeted content delivery services. In yet another example, users can select not to provide precise location information, but permit the transfer of location zone information.

Therefore, although the present disclosure broadly covers use of personal information data to implement one or more of the various disclosed examples, the present disclosure also contemplates that the various examples can be implemented without the need for accessing such personal information data. That is, the various examples of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with the user, other non-personal information available to the content delivery services, or publicly available information.

FIG. 68 illustrates a functional block diagram of an electronic device 1300-3 configured in accordance with the principles of the various described examples, for example, to voice-control media playback and update virtual assistant knowledge in real time, according to some examples. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the various described examples. It is understood by persons of skill in the art that the functional blocks described in FIG. 68 can be combined or separated into sub-blocks to implement the principles of the various described examples. Therefore, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.

As shown in FIG. 68, the electronic device 1300-3 may include a display unit 1302-3 (e.g., display 112-3, touch screen 246-3, etc.) configured to display media, interfaces, and other content. The electronic device 1300-3 may also include an input unit 1304-3 configured to receive information, such as voice input, tactile input, gesture input, media information, data feeds, media, and so forth (e.g., microphone, receiver, touch screen, buttons, server, and so forth). The electronic device 1300-3 may also include a processing unit 1306-3 coupled to the display unit 1302-3 and the input unit 1304-3. In some examples, processing unit 1306-3 may include a data feed receiving unit 1308-3, a user request receiving unit 1310-3, and a media playback unit 1312-3.

The processing unit 1306-3 may be configured to receive a data feed (e.g., from the input unit 1304-3 using the data feed receiving unit 1308-3), wherein the data feed includes data related to an event associated with a time in the media stream. The processing unit 1306-3 may be further configured to receive a user request based on the voice input (e.g., from the input unit 1304-3 using the user request receiving unit 1310-3), wherein the user request is associated with an event. Processing unit 1306-3 may be further configured to, in response to receiving the user request, cause the media stream to begin playback (e.g., on display unit 1302-3) at a time in the media stream associated with the event (e.g., using media playback unit 1312-3).
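
For illustration, the functional units described above can be sketched as plain classes that cooperate to cue playback at an event time matching a request; the class names mirror the unit labels, but the implementation details are hypothetical.

```python
class DataFeedReceivingUnit:
    def __init__(self):
        self.events = []

    def receive(self, feed):
        # Merge timed events from the data feed into local knowledge.
        self.events.extend(feed.get("events", []))


class UserRequestReceivingUnit:
    def receive(self, speech_text):
        # A real system would run speech recognition and intent determination here.
        return {"text": speech_text}


class MediaPlaybackUnit:
    def play_from(self, media_id, offset_seconds):
        print(f"Playing {media_id} from {offset_seconds:.0f}s")


class ProcessingUnit:
    """Ties the units together: on a request, cue playback at the matching event time."""
    def __init__(self):
        self.feed_unit = DataFeedReceivingUnit()
        self.request_unit = UserRequestReceivingUnit()
        self.playback_unit = MediaPlaybackUnit()

    def handle(self, media_id, speech_text):
        request = self.request_unit.receive(speech_text)
        # Naive keyword match between the request and event descriptions.
        for event in self.feed_unit.events:
            if any(word in event["description"].lower()
                   for word in request["text"].lower().split()):
                self.playback_unit.play_from(media_id, event["offset"])
                return
        print("No matching event found.")


unit = ProcessingUnit()
unit.feed_unit.receive({"events": [{"offset": 900.0, "description": "Goal by Player M"}]})
unit.handle("soccer-match-123", "show me the goal")
```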

In some examples, the processing unit 1306-3 may be further configured to interpret the user request based on currently playing media. In other examples, the processing unit 1306-3 may be further configured to interpret the user request based on a current playback position of the currently playing media. In other examples, the processing unit 1306-3 may be further configured to interpret the user request based on one or more of an on-screen actor, an on-screen team member, a list of participants, a list of actors in the program, a list of people in the program, or a list of teams. In some examples, the media stream comprises a sporting event, and the data related to the event comprises one or more of player characteristics (e.g., name, nickname, number, position, team, on-field position, experience, style, biographical information, etc.), scores, penalties, statistical information, or segment indicators (e.g., a quarter, an inning, a half, a period, a pit stop, a down, a play, etc.). In other examples, the media stream comprises an awards show, and the data related to the event comprises one or more of participant characteristics (e.g., name, nickname, character name, biographical information, etc.), a performance description, or an award presentation indicator. In other examples, the media stream comprises a television program, and the data related to the event comprises one or more of a performance description or a program segment indicator.

In one example, the user request (e.g., of the user request receiving unit 1310-3) includes a request for highlights in a media stream. In some examples, processing unit 1306-3 may be further configured to cause the plurality of segments of the media stream to be played back in succession in response to receiving the request. In other examples, playing the media stream includes playing the media on a playback device other than the electronic device. In some examples, the electronic device comprises a server, a set-top box, a remote control, a smartphone, or a tablet. In other examples, the playback device includes a set-top box, a smart phone, a tablet, or a television. The processing unit 1306-3 may be further configured to interpret the user request based on information displayed by the electronic device. The processing unit 1306-3 may be further configured to interpret the user request based on information displayed by the playback device.

In some examples, the data related to the event includes closed caption text. The processing unit 1306-3 may be further configured to determine the time associated with the event in the media stream based on the closed caption text. In one example, the data related to the event includes one or more of secondary screen experience data, secondary camera view data, or social network feed data. The processing unit 1306-3 may be further configured to receive an indication of a bookmark from the user, where the bookmark corresponds to a particular playback position in the media stream. The processing unit 1306-3 may be further configured to receive a user request to share the bookmark and, in response to receiving the user request to share the bookmark, cause information associated with the particular playback position to be transmitted to a server. The processing unit 1306-3 may be further configured to interpret the user request based on one or more of the user's favorite teams, the user's favorite sports, the user's favorite team members, the user's favorite actors, the user's favorite television shows, the user's geographic location, the user's demographic characteristics, the user's viewing history, or the user's subscription data.

According to some examples, FIG. 69 illustrates a functional block diagram of an electronic device 1400-3 configured in accordance with the principles of the various described examples, for example, to integrate information into digital assistant knowledge and respond to user requests. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the various described examples. It is understood by persons of skill in the art that the functional blocks described in FIG. 69 can be combined or separated into sub-blocks to implement the principles of the various described examples. Therefore, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.

As shown in FIG. 69, the electronic device 1400-3 may include a display unit 1402-3 (e.g., display 112-3, touch screen 246-3, etc.) configured to display media, interfaces, and other content. The electronic device 1400-3 may also include an input unit 1404-3 configured to receive information, such as voice input, tactile input, gesture input, media information, data feeds, media, and so forth (e.g., microphone, receiver, touch screen, buttons, server, and so forth). The electronic device 1400-3 may also include a processing unit 1406-3 coupled to the display unit 1402-3 and the input unit 1404-3. In some examples, the processing unit 1406-3 may include a data feed receiving unit 1408-3, a user request receiving unit 1410-3, a response generating unit 1412-3, and a response delivery unit 1414-3.

The processing unit 1406-3 may be configured to receive a data feed (e.g., from the input unit 1404-3 using the data feed receiving unit 1408-3), where the data feed includes data related to an event associated with a time in the media stream. The processing unit 1406-3 may be further configured to receive a user request based on voice input from a user (e.g., from the input unit 1404-3 using the user request receiving unit 1410-3), where the user request is associated with the event. The processing unit 1406-3 may be further configured to generate a response to the user request based on the data related to the event (e.g., using the response generating unit 1412-3). The processing unit 1406-3 may be further configured to cause the response to be delivered (e.g., using the response delivery unit 1414-3).

In some examples, generating the response (e.g., using response generating unit 1412-3) further includes generating the response based on the currently playing media. In other examples, generating the response (e.g., using response generation unit 1412-3) further includes generating the response based on a current playback position of the currently playing media. In other examples, generating the response (e.g., using the response generation unit 1412-3) further includes generating the response based on media content previously consumed by the user. In some examples, generating the response (e.g., using response generation unit 1412-3) further includes generating the response based on one or more of an on-screen actor, an on-screen team member, a list of participants, a list of actors in the program, or a list of teams.

In some examples, the processing unit 1406-3 may be further configured to, in response to the user request including a request for information synchronized with a current playback position of the currently playing media, generate the response based on data synchronized with the current playback position, wherein the data synchronized with the current playback position does not include data associated with times after the current playback position; and, in response to the user request including a request for live information, generate the response based on live data. In some examples, causing the response to be delivered (e.g., using response delivery unit 1414-3) includes causing the response to be displayed or played on a playback device other than the electronic device. In other examples, causing the response to be delivered (e.g., using response delivery unit 1414-3) includes causing the response to be delivered to a playback device other than the electronic device. In some examples, the electronic device comprises a server, a set-top box, a remote control, a smartphone, or a tablet. In other examples, the playback device includes a set-top box, a smartphone, a tablet, or a television. In some examples, the processing unit 1406-3 may be further configured to interpret the user request based on information displayed by the electronic device. In other examples, the processing unit 1406-3 may be further configured to interpret the user request based on information displayed by the playback device.

Although examples have been fully described with reference to the accompanying drawings, it is noted that various changes and modifications will be apparent to those skilled in the art (e.g., modifying any of the systems or processes discussed herein in accordance with the concepts described herein in connection with any other system or process discussed herein). It is to be understood that such changes and modifications are to be considered as included within the scope of the various examples as defined by the appended claims.
