Electronic device and control method thereof

Document No.: 516392 · Publication date: 2021-05-28

Reading note: This technology, "Electronic device and control method thereof", was designed and created by 赵钟明 and 郑泰雄 on 2020-10-20. Its main content: an electronic device including a communication interface, a display, and a processor, the processor configured to: obtain feature information of content received from a source device through the communication interface, and transmit the obtained feature information to an external server; receive identification information of the content from the external server, the identification information being obtained based on the transmitted feature information; obtain, based on a predetermined mode of the display being turned on according to a signal received from the source device through the communication interface, information on a first time point at which the predetermined mode is turned on; and obtain, based on the predetermined mode of the display being turned off, information on a second time point at which the predetermined mode is turned off after the first time point.

1. An electronic device, comprising:

a communication interface;

a display; and

a processor configured to:

obtain feature information of content received from a source device through the communication interface, and transmit the obtained feature information to an external server;

receive identification information of the content from the external server, the identification information being obtained based on the transmitted feature information;

obtain, based on a predetermined mode of the display being turned on according to a signal received from the source device through the communication interface, information on a first time point at which the predetermined mode is turned on;

obtain, based on the predetermined mode of the display being turned off, information on a second time point at which the predetermined mode is turned off after the first time point; and

obtain information related to content displayed through the display based on the received identification information, the obtained information on the first time point, and the obtained information on the second time point.

2. The electronic device of claim 1, wherein the processor is further configured to:

identify a title of the displayed content based on the identification information received from the external server; and

identify a reproduction period of the content whose title is identified, based on the obtained information on the first time point and the obtained information on the second time point.

3. The electronic device of claim 1, wherein the processor is further configured to: obtain, from among a plurality of pieces of identification information received from the external server, identification information of the content based on feature information obtained between the first time point and the second time point.

4. The electronic device of claim 1, wherein the signal received from the source device includes control information that allows the electronic device to turn the predetermined mode on or off based on a type of the content received from the source device.

5. The electronic device of claim 4, wherein the predetermined mode is an Automatic Low Latency Mode (ALLM).

6. The electronic device of claim 4, wherein the control information is provided from the source device to the electronic device when the electronic device supports a predetermined or higher version of the HDMI standard.

7. The electronic device of claim 4, wherein the control information is provided from the source device to the electronic device based on the type of content being game content.

8. The electronic device of claim 1, wherein the feature information comprises any one or any combination of video feature information and audio feature information, and

wherein the processor is further configured to:

capture an image of content currently being viewed, among the content received from the source device, at a predetermined time interval;

obtain the video feature information based on pixel values of the captured image;

obtain frequency information of an acoustic signal of the content currently being viewed at the predetermined time interval; and

obtain the audio feature information based on the obtained frequency information.

9. The electronic device of claim 8, wherein, in response to a number of pieces of identification information of content obtained based on the video feature information being greater than one, the audio feature information is additionally used to obtain identification information corresponding to the content currently being viewed from among the obtained pieces of identification information.

10. The electronic device of claim 8, wherein the processor is further configured to obtain the video feature information from a predetermined number of most recently captured images among the captured images.

11. A method of controlling an electronic device, the method comprising:

obtaining feature information of content received from a source device, and transmitting the obtained feature information to an external server;

receiving identification information of the content from the external server, the identification information being obtained based on the transmitted feature information;

obtaining, based on a predetermined mode of a display of the electronic device being turned on according to a signal received from the source device, information on a first time point at which the predetermined mode is turned on;

obtaining, based on the predetermined mode of the display being turned off, information on a second time point at which the predetermined mode is turned off after the first time point; and

obtaining information about the displayed content based on the received identification information, the obtained information on the first time point, and the obtained information on the second time point.

12. The method of claim 11, wherein obtaining information related to the content comprises:

identifying a title of the displayed content based on the identification information received from the external server; and

identifying a reproduction time of the content whose title is identified, based on the obtained information on the first time point and the obtained information on the second time point.

13. The method of claim 11, further comprising: obtaining, from among a plurality of pieces of identification information received from the external server, identification information of the content based on feature information obtained between the first time point and the second time point.

14. The method of claim 11, wherein the signal received from the source device includes control information for allowing the electronic device to turn on or off the predetermined mode based on a type of content received from the source device.

15. The method of claim 14, wherein the predetermined mode is an Automatic Low Latency Mode (ALLM).

Technical Field

The present disclosure relates to an electronic apparatus that obtains information related to content being displayed and a control method thereof.

Background

In the related art, there are various methods for obtaining information related to content being displayed. However, the related art technology is mainly applied to content in which predetermined frames are continuously displayed, such as a movie or a television series.

For example, a television may identify the title of a television show or movie currently being displayed by comparing several frames of content currently being displayed with frames stored in a database.

However, unlike the content in which predetermined frames are continuously displayed, it is difficult to specify game content by comparison with frames stored in the database, because various game images may be displayed according to user operations.

Disclosure of Invention

An electronic apparatus that identifies a title and a reproduction period of content currently being displayed, and a control method thereof are provided.

Additional aspects will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the presented embodiments.

According to an aspect of the present disclosure, there is provided an electronic device including a communication interface, a display, and a processor configured to: obtain feature information of content received from a source device through the communication interface, and transmit the obtained feature information to an external server; receive identification information of the content from the external server, the identification information being obtained based on the transmitted feature information; obtain, based on a predetermined mode of the display being turned on according to a signal received from the source device through the communication interface, information on a first time point at which the predetermined mode is turned on; obtain, based on the predetermined mode of the display being turned off, information on a second time point at which the predetermined mode is turned off after the first time point; and obtain information related to the content displayed through the display based on the received identification information, the obtained information on the first time point, and the obtained information on the second time point.

The processor may be further configured to: identify a title of the displayed content based on the identification information received from the external server; and identify a reproduction period of the content whose title is identified, based on the obtained information on the first time point and the obtained information on the second time point.
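As an illustration, the title and reproduction-period logic above can be sketched as follows. This is a minimal sketch; the class and method names are hypothetical assumptions, not code disclosed by the patent.

```python
# Hypothetical sketch of the reproduction-period logic described above.
class ModeSessionTracker:
    """Tracks when a predetermined mode (e.g. ALLM) turns on and off."""

    def __init__(self):
        self.first_time_point = None   # time point when the mode turned on
        self.second_time_point = None  # time point when the mode turned off

    def on_mode_on(self, timestamp):
        # Store information on the first time point, when the mode is turned on.
        self.first_time_point = timestamp
        self.second_time_point = None

    def on_mode_off(self, timestamp):
        # Store information on the second time point, after the first time point.
        if self.first_time_point is not None and timestamp >= self.first_time_point:
            self.second_time_point = timestamp

    def reproduction_period(self):
        # The reproduction period of the content whose title was identified.
        if self.first_time_point is None or self.second_time_point is None:
            return None
        return self.second_time_point - self.first_time_point

tracker = ModeSessionTracker()
tracker.on_mode_on(100.0)    # e.g. game content starts; source signals mode on
tracker.on_mode_off(3700.0)  # game content ends; source signals mode off
print(tracker.reproduction_period())  # 3600.0
```

Combined with the title obtained from the server, the period between the two time points yields the information related to the displayed content.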

The processor may be further configured to obtain, from among a plurality of pieces of identification information received from the external server, identification information of the content based on feature information obtained between the first time point and the second time point.

The signal received from the source device may include control information for allowing the electronic device to turn on or off the predetermined mode based on the type of content received from the source device.

The predetermined mode may be an Automatic Low Latency Mode (ALLM).

The control information may be provided from the source device to the electronic device when the electronic device supports a predetermined or higher version of the HDMI standard.

The control information may be provided from the source device to the electronic device based on the type of the content being game content.

The feature information may include any one or any combination of video feature information and audio feature information, and the processor may be further configured to: capture an image of content currently being viewed, among the content received from the source device, at a predetermined time interval; obtain the video feature information based on pixel values of the captured image; obtain frequency information of an acoustic signal of the content currently being viewed at the predetermined time interval; and obtain the audio feature information based on the obtained frequency information.

In response to the number of pieces of identification information of the content obtained based on the video feature information being greater than one, the audio feature information may additionally be used to obtain the identification information corresponding to the content currently being viewed from among the obtained pieces of identification information.
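The fallback just described can be sketched as follows. The index dictionaries, fingerprint strings, and function name are illustrative assumptions only, not the patent's actual matching scheme.

```python
def identify_content(video_fp, audio_fp, video_index, audio_index):
    """Return the single title matching the video fingerprint; if more than
    one title matches (e.g. the same video reproduced on several channels),
    additionally use the audio fingerprint to pick the content being viewed."""
    candidates = video_index.get(video_fp, [])
    if len(candidates) == 1:
        return candidates[0]
    if len(candidates) > 1:
        audio_matches = set(audio_index.get(audio_fp, []))
        for title in candidates:
            if title in audio_matches:
                return title
    return None  # no identification information obtained

# Hypothetical server-side indexes mapping fingerprints to titles.
video_index = {"VFP123": ["Channel A Broadcast", "Channel B Broadcast"]}
audio_index = {"AFP456": ["Channel B Broadcast"]}
print(identify_content("VFP123", "AFP456", video_index, audio_index))
# Channel B Broadcast
```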

The processor may be further configured to obtain video feature information from a most recently captured predetermined number of images among the captured images.

According to an aspect of the present disclosure, there is provided a method of controlling an electronic device, the method including: obtaining feature information of content received from a source device, and transmitting the obtained feature information to an external server; receiving identification information of the content from the external server, the identification information being obtained based on the transmitted feature information; obtaining, based on a predetermined mode of a display of the electronic device being turned on according to a signal received from the source device, information on a first time point at which the predetermined mode is turned on; obtaining, based on the predetermined mode of the display being turned off, information on a second time point at which the predetermined mode is turned off after the first time point; and obtaining information about the displayed content based on the received identification information, the obtained information on the first time point, and the obtained information on the second time point.

Obtaining the information related to the content may include: identifying a title of the displayed content based on the identification information received from the external server; and identifying a reproduction time of the content whose title is identified, based on the obtained information on the first time point and the obtained information on the second time point.

The method may further include obtaining, from among a plurality of pieces of identification information received from the external server, identification information of the content based on feature information obtained between the first time point and the second time point.

The signal received from the source device may include control information for allowing the electronic device to turn on or off the predetermined mode based on the type of content received from the source device.

The predetermined mode may be an Automatic Low Latency Mode (ALLM).

The control information may be provided from the source device to the electronic device when the electronic device supports a predetermined or higher version of the HDMI standard.

The control information may be provided from the source device to the electronic device based on the type of the content being game content.

The feature information may include any one or any combination of video feature information and audio feature information, and obtaining the feature information may include: capturing an image of content currently being viewed, among the content received from the source device, at a predetermined time interval; obtaining the video feature information based on pixel values of the captured image; obtaining frequency information of an acoustic signal of the content currently being viewed at the predetermined time interval; and obtaining the audio feature information based on the obtained frequency information.

In response to the number of pieces of identification information of the content obtained based on the video feature information being greater than one, the audio feature information may additionally be used to obtain the identification information corresponding to the content currently being viewed from among the obtained pieces of identification information.

Obtaining the video feature information may include obtaining the video feature information from a predetermined number of most recently captured images among the captured images.

According to an aspect of the disclosure, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor of an electronic device, cause the at least one processor to: identify whether a predetermined mode of a display of the electronic device is turned on based on a signal received from a source device; obtain a first time point at which the predetermined mode of the display is turned on, based on the predetermined mode of the display being identified as on; obtain feature information from the received signal; transmit the obtained feature information to an external server; receive, from the external server, identification information of content corresponding to the transmitted feature information; obtain a second time point at which the predetermined mode of the display is turned off after the first time point, based on the predetermined mode of the display being identified as off; and obtain information related to the content displayed through the display from a portion of the received identification information, the portion corresponding to a period from the obtained first time point to the obtained second time point.

The predetermined mode may include an Automatic Low Latency Mode (ALLM), and the information of the content may include one or both of a title and a genre.

Drawings

The above and other aspects, features and advantages of some embodiments of the disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a view schematically illustrating a configuration of an electronic system according to an embodiment;

FIG. 2 is a block diagram for explaining an operation of an electronic device according to an embodiment;

FIG. 3 is a block diagram for explaining a configuration of an electronic device according to an embodiment;

FIG. 4 is a block diagram for explaining an operation between an electronic device and a server according to an embodiment;

FIG. 5 is a view for explaining an operation of distinguishing content in a case where the same video is reproduced on a plurality of channels, according to an embodiment;

FIG. 6 is a flowchart illustrating a process of using audio feature information when content is not identified using only video feature information, according to an embodiment;

FIG. 7 is a sequence diagram of an electronic device, a source device, and a server according to an embodiment;

FIG. 8 is a view for explaining identification information based on a time point at which feature information is obtained, according to an embodiment; and

FIG. 9 is a flowchart for explaining a method of controlling an electronic device according to an embodiment.

Detailed Description

Hereinafter, the present disclosure will be described in detail with reference to the accompanying drawings.

The present disclosure will be described in detail after explaining terms used in the specification in brief.

Terms used in the embodiments of the present disclosure are selected as general terms used as widely as possible in consideration of functions in the present disclosure, but they may be changed according to intentions of those skilled in the art, precedent examples, appearance of new technology, and the like. In addition, there are also terms arbitrarily selected by the applicant in cases where their meanings will be described in detail in the description of the present disclosure. Accordingly, terms used in the present disclosure may be defined based on their meanings as well as the entire disclosure, rather than based on simple names of the terms.

The embodiments of the present disclosure can be modified and include various embodiments, which will be illustrated in the accompanying drawings and described in the specification in detail. It is to be understood, however, that this does not limit the scope of the embodiments and includes all modifications, equivalents, and/or alternatives included within the spirit and scope of the disclosure. In describing the present disclosure, a detailed description of the related art may be omitted when it is determined that the detailed description may unnecessarily obscure the gist of the present disclosure.

Unless specifically defined otherwise, singular expressions may cover plural expressions. It will be understood that terms such as "including" or "consisting of" are used herein to specify the presence of stated features, quantities, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, quantities, steps, operations, elements, components, or combinations thereof.

Expressions such as "A and/or B" may be understood to mean "A", "B", or "A and B".

The expressions "first", "second", and the like, as used in this disclosure, may denote various elements, regardless of order and/or importance, and may be used to distinguish one element from another without limiting the elements.

If an element (e.g., a first element) is described as being "operatively or communicatively coupled" or "connected" to another element (e.g., a second element), it is understood that the element may be connected to the other element, either directly or through yet another element (e.g., a third element).

Terms such as "module" or "unit" in the present disclosure may perform at least one function or operation, and may be implemented as hardware, software, or a combination of hardware and software. Further, rather than each of the plurality of "modules," "units," etc. being implemented in separate hardware, these components may be integrated in at least one module and implemented in at least one processor. In the present disclosure, the term "user" may refer to a person using an electronic device or a device using an electronic device (e.g., an artificial intelligence electronic device).

Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings, so that those skilled in the art can easily implement and use the embodiments in the technical field of the present disclosure. However, the present disclosure may be embodied in various different forms and is not limited to the embodiments described herein. In addition, in the drawings, parts irrelevant to the description may be omitted for clarity of description of the present disclosure, and the same reference numerals are used for the same parts throughout the specification.

Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.

Fig. 1 is a view for schematically illustrating a configuration of an electronic system according to an embodiment.

Referring to fig. 1, an electronic system 1000 according to an embodiment of the present disclosure may include an electronic device 100, a source device 200, and a server 300.

The electronic device 100 may be a display device that receives an image signal from the source device 200 and displays content including the received image signal. For example, the electronic apparatus 100 may be implemented in various forms including a display, such as a TV, a smart phone, a tablet PC, a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop computer, a netbook computer, a PDA, a Portable Multimedia Player (PMP), an MP3 player, a medical device, a camera, or a wearable device.

The source device 200 may be a device that transmits a source image and a signal including information about the source image to the electronic device 100. For example, the source device 200 may be implemented in various forms, such as a game console (e.g., XBOX™ or PLAYSTATION™), a smart phone, a tablet PC, a desktop PC, a laptop PC, a netbook computer, a wearable device, a set-top box, or a USB storage device.

The server 300 may be a device that stores and manages information about various contents. For example, the server 300 may generate and store characteristic information of the content. The feature information may refer to unique information for distinguishing the corresponding content from other content, and may include, for example, video feature information generated based on a video signal and audio feature information generated based on an audio signal. This will be described in detail below.

If the electronic device 100 does not have information about the content currently being displayed, the electronic device 100 may obtain feature information of that content and transmit the feature information to the server 300. The server 300 may then identify which content is currently being displayed on the electronic device 100 by comparing the received feature information with the information stored in the server 300, and may transmit the resulting identification information to the electronic device 100. In addition, the electronic device 100 may obtain reproduction time information of the content currently being displayed based on information included in the signal received from the source device 200.
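The exchange just described can be sketched as follows. The fingerprint database, the fingerprint strings, and the function names are assumptions for illustration, not the actual device-server protocol.

```python
# Server side: compare received feature information with stored entries
# (a plain dict stands in for the server 300's content database).
def server_identify(feature_info, fingerprint_db):
    return fingerprint_db.get(feature_info)

# Device side: transmit feature information, receive identification information.
def device_obtain_content_info(current_fingerprint, fingerprint_db):
    identification = server_identify(current_fingerprint, fingerprint_db)
    return identification if identification is not None else "unknown content"

db = {"fp:0a1b2c": "Some Game Title"}
print(device_obtain_content_info("fp:0a1b2c", db))  # Some Game Title
print(device_obtain_content_info("fp:ffffff", db))  # unknown content
```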

An embodiment of obtaining information related to content being reproduced by the electronic device 100 as described above will be described in detail below.

Fig. 2 is a block diagram for explaining the operation of the electronic device according to the embodiment.

Referring to fig. 2, the electronic device 100 may include a communication interface 110, a display 120, and a processor 130.

The communication interface 110 may include a circuit and is an element capable of communicating with the source device 200 and the server 300.

The communication interface 110 may communicate with the source device 200 and the server 300 based on a wired or wireless communication method.

According to an embodiment, if the communication interface 110 communicates with the outside through a wired communication method, the communication interface 110 may be implemented as a port provided in the electronic device 100. For example, the communication interface 110 may be implemented as an HDMI port that communicates with the source device 200; in this case, the source device 200 may also include an HDMI port, and the electronic device 100 and the source device 200 may communicate with each other through their respective HDMI ports and a High Definition Multimedia Interface (HDMI) cable connecting them. However, the communication interface 110 is not limited thereto and may also communicate with the source device 200 through a Low Voltage Differential Signaling (LVDS) cable, a Digital Visual Interface (DVI) cable, a D-subminiature (D-SUB) cable, a Video Graphics Array (VGA) cable, a V-by-One cable, or an optical cable.

According to another embodiment, the communication interface 110 may communicate with the source device 200 and the server 300 through wireless communication. In this case, the communication interface 110 may include a Wi-Fi module, a bluetooth module, an Infrared (IR) module, a Local Area Network (LAN) module, an ethernet module, and the like. Each communication module may be implemented as a hardware chip. In addition to the above-described communication method, the wireless communication module may include at least one communication chip that performs communication based on various wireless communication standards such as Zigbee, Universal Serial Bus (USB), mobile industry processor interface camera serial interface (MIPI CSI), third generation (3G), third generation partnership project (3GPP), Long Term Evolution (LTE), LTE-advanced (LTE-a), fourth generation (4G), fifth generation (5G), and the like. However, this is one embodiment, and the communication interface 110 may use at least one communication module among various communication modules.

The communication interface for communicating with the source device 200 and the communication interface for communicating with the server 300 may be implemented as different interfaces from each other. For example, the communication interface 110 may include a first communication interface 110-1 to communicate with the source device 200 and a second communication interface 110-2 to communicate with the server 300. In this case, the first communication interface 110-1 may communicate with the source device 200 through wired communication, and the second communication interface 110-2 may communicate with the server 300 through wireless communication. The first communication interface 110-1 may be implemented as an HDMI port, but is not limited thereto.

The first communication interface 110-1 may receive an image signal from the source device 200. The image signal herein may include content and a signal including information related to the content. The second communication interface 110-2 may transmit the characteristic information of the content to the server 300 and receive the identification information of the content obtained based on the characteristic information from the server 300. This will be described in detail below.

The display 120 is an element that displays content received from the source device 200.

The display 120 may be implemented as various displays such as a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED), a liquid crystal on silicon (LCoS), a Digital Light Processing (DLP), a Quantum Dot (QD) display panel, a quantum dot light emitting diode (QLED), and a micro Light Emitting Diode (LED).

The display 120 may be implemented as a touch screen type having a layered structure with a touch panel. The touch screen may be configured to detect a touch input pressure in addition to a touch input position and area.

The processor 130 may be electrically connected to the memory and control the operation of the electronic device 100.

According to an embodiment of the present disclosure, the processor 130 may receive content from the source device 200 through the communication interface 110. In some cases, the electronic device 100 may receive information related to the content (e.g., its title, genre, or reproduction time period) from the source device 200; however, the present disclosure will be described on the assumption that such clear information cannot be received from the source device 200, and that the electronic device 100 therefore obtains information related to the content currently being reproduced or to be reproduced on its own.

The processor 130 may obtain the feature information of the content received from the source device 200 and transmit the feature information to the external server 300. The feature information may refer to unique information that allows the corresponding content to be distinguished from other content, and may include any one or any combination of video feature information and audio feature information. The video feature information may be information extracted from the video signal, which does not include audio information, and may be video fingerprint information. Likewise, the audio feature information may be information extracted from the audio signal, and may be audio fingerprint information.

The video fingerprint information may be character string information generated based on pixel values of one frame included in the video signal. Such a character string changes depending on the pixel values at each pixel position; therefore, the same character string can be generated only from the same frame. Thus, a video fingerprint may serve as feature information capable of distinguishing the corresponding frame from other frames.

The audio fingerprint information may be character string information generated based on audio information included in a portion of the audio signal. Such a character string varies depending on the frequency information; therefore, the same character string can be generated only from the same acoustic signal. Thus, an audio fingerprint may serve as feature information that allows the corresponding portion of the audio signal to be distinguished from other audio.

The processor 130 may capture an image of content currently being viewed among the content received from the source device 200 at intervals of a predetermined period of time. For example, the processor 130 may capture frames of content displayed by the display 120 at 500ms intervals. The processor 130 may then obtain video feature information based on the pixel values of the captured image.

For example, the processor 130 may divide all pixels in the captured image (i.e., one captured frame) into blocks including n × m pixels. The processor 130 may then calculate pixel values for some or all of the divided blocks. The processor 130 may generate a character string corresponding to the calculated pixel value, and the generated character string may be a video fingerprint. The processor 130 may obtain the video fingerprint, i.e., the video feature information, by the above-described method.
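As an illustrative sketch (not the claimed implementation), the block-based fingerprint generation described above may be expressed as follows; the block size and the quantization of each block mean into a letter are assumptions made for illustration.

```python
from typing import List

def video_fingerprint(frame: List[List[int]], block_h: int = 2, block_w: int = 2) -> str:
    """Generate a character-string video fingerprint from one frame.

    The frame (a grid of grayscale pixel values, 0-255) is divided into
    blocks of block_h x block_w pixels; the mean pixel value of each
    block is quantized and mapped to a letter. The block size and the
    A..Z quantization are illustrative assumptions.
    """
    chars = []
    rows, cols = len(frame), len(frame[0])
    for r in range(0, rows - block_h + 1, block_h):
        for c in range(0, cols - block_w + 1, block_w):
            block = [frame[r + i][c + j]
                     for i in range(block_h) for j in range(block_w)]
            mean = sum(block) // len(block)                 # representative pixel value
            chars.append(chr(ord('A') + mean * 26 // 256))  # quantize to A..Z
    return ''.join(chars)
```

Because the string depends only on the pixel values, identical frames always yield identical strings, and differing frames generally yield different strings.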

The processor 130 may obtain the video feature information from a predetermined number of the most recently captured images among a plurality of images captured at the predetermined time intervals. For example, if video feature information were obtained from all of the plurality of images captured at the predetermined time intervals and transmitted to the server 300, the recognition accuracy regarding the content currently reproduced on the display 120 might be improved, but the processing load of the processor 130 would be unnecessarily increased. Accordingly, considering the real-time nature of the content currently being reproduced, the processor 130 may obtain the video feature information from the minimum number of images needed to identify the content, namely the most recently captured images. For example, if the predetermined number is 3, the processor 130 may obtain video feature information from only the three most recently captured images.

In addition, the processor 130 may obtain an acoustic signal of the content currently being viewed at predetermined time intervals. For example, the processor 130 may obtain frequency information of the acoustic signal of the content output through the speaker at intervals of 500 ms. The processor 130 may then obtain audio feature information based on the obtained frequency information. The processor 130 may analyze the waveform of the obtained frequency information and generate a character string corresponding to the analyzed waveform. The character string generated as described above may be an audio fingerprint. In the same manner as the video feature information, the processor 130 may obtain the audio feature information from a predetermined number of the most recently obtained pieces of frequency information among the frequency information obtained at the predetermined time intervals.
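A minimal sketch of turning per-band frequency information into an audio fingerprint string, assuming (for illustration only) that each rise or fall between adjacent frequency bands is encoded as one bit:

```python
def audio_fingerprint(band_energies: list) -> str:
    """Generate a character-string audio fingerprint from the energies
    of the frequency bands of one audio segment. Encoding the rise or
    fall between adjacent bands as a bit is an illustrative assumption."""
    bits = ''.join('1' if band_energies[i + 1] > band_energies[i] else '0'
                   for i in range(len(band_energies) - 1))
    return format(int(bits, 2), 'x')  # compact hexadecimal character string
```

As with the video case, identical acoustic signals produce identical band energies and therefore identical strings, while different commentary or sound tracks generally do not.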

If more than one piece of identification information of the content is obtained based on the video feature information, the audio feature information may be additionally used to obtain the identification information corresponding to the content currently being viewed among the pieces of identification information. This will be described in detail with reference to Figs. 5 and 6.

The processor 130 may transmit the feature information obtained as described above to the server 300.

In an example, the processor 130 may transmit the feature information to the server 300 at predetermined time intervals. The transmission interval may be the same as the time interval at which the feature information is obtained, but is not limited thereto; the feature information may also be transmitted to the server 300 at a time interval different from the time interval at which the feature information is obtained.

However, the disclosure is not limited thereto; if a content recognition request is input by a user or a feature information transmission request signal is received from the server 300, the processor 130 may transmit the obtained feature information to the server 300.

Then, the server 300 may compare the feature information transmitted from the electronic device 100 with the feature information stored in the database of the server 300. For example, if video fingerprint information corresponding to three frames included in one content is transmitted from the electronic device 100, the server 300 may search the database for content including the three pieces of video fingerprint information. In an example, the server 300 may identify content including any one or any combination of the three pieces of video fingerprint information as candidate content. In this case, if the identified candidate contents are different from each other, the server 300 may identify the content having the greater number of video fingerprints matching the three pieces of video fingerprint information as the content corresponding to the video fingerprint information transmitted from the electronic device 100. For example, if movie contents different from each other are a first installment and a second installment of the same series, their introduction images may be identical to each other. Accordingly, the server 300 may identify, among the plurality of identified candidate contents, the content having the greater number of matching video fingerprints as the content corresponding to the feature information transmitted from the electronic device 100. Accordingly, the accuracy of identifying the content corresponding to the video fingerprint transmitted by the electronic device 100 may be improved.

In addition, since the server 300 may not store video fingerprints of all frames of one content, the server 300 may not find content matching all of the video fingerprints transmitted from the electronic device 100. Accordingly, the server 300 may search for content having at least one fingerprint matching any of the plurality of video fingerprints transmitted from the electronic device 100 and identify that content as candidate content. In subsequent processing, as described above, the content having the greater number of matching video fingerprints may be identified as the content corresponding to the video fingerprint information transmitted from the electronic device 100.
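The candidate-selection logic of the two preceding paragraphs (any shared fingerprint makes a candidate; among candidates, the one with the most matches wins) can be sketched as follows; the database layout is an assumption.

```python
def identify_content(query_fps, database):
    """Pick the content whose stored fingerprints best match the query.

    database: mapping of content title -> set of stored fingerprint
    strings (an assumed layout). Any content sharing at least one
    fingerprint with the query becomes a candidate; among candidates,
    the one with the largest number of matches is returned.
    """
    matches = {title: len(set(query_fps) & fps)
               for title, fps in database.items()
               if set(query_fps) & fps}
    if not matches:
        return None  # no candidate shares even one fingerprint
    return max(matches, key=matches.get)
```

With two installments sharing an introduction fingerprint, the installment also matching the later frames accumulates more matches and is selected.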

In other words, the server 300 may identify content having a high degree of similarity by comparing the feature information transmitted from the electronic device 100 with the information stored in the database. The server 300 may obtain information about the identified content from the database and transmit the obtained identification information of the content to the electronic device 100. The identification information of the content may include title information, genre information, production year information, production country information, cast information, and the like of the content.

The processor 130 may receive, from the server 300, the identification information of the content obtained based on the feature information.

Processor 130 may receive signals from source device 200 through communication interface 110. The signal here may be a signal including content and control information. The content may be a source image provided by the source device 200, and the control information may include instruction information for changing setting information of the display 120.

For example, the signal received from the source device 200 may include control information for allowing the electronic device 100 to turn on or off a predetermined mode based on the type of content provided by the source device 200. If the type of content is game content, the predetermined mode may be an Auto Low Latency Mode (ALLM). The ALLM may be a mode in which a response to an input is displayed on the display relatively quickly. Unlike content such as movies or television shows, game content may require real-time user manipulation. Accordingly, a response to the user manipulation needs to be quickly reflected on the display 120, and therefore, the response time to a user input needs to be relatively short. Accordingly, if the content transmitted to the electronic device 100 is recognized as game content, the source device 200 may include control information for turning on the ALLM in the signal transmitted to the electronic device 100.

Such control information may be provided to an electronic apparatus 100 supporting a predetermined version of the HDMI standard or higher. For example, the control information may be provided to the electronic apparatus 100 supporting the HDMI 2.0 standard or higher. In this case, the electronic device 100 and the source device 200 may include HDMI 2.0 ports. However, the disclosure is not limited thereto, and if the version supported by the electronic device 100 is lower than HDMI 2.0, the ALLM may not be turned on in the electronic device 100 even if the control information is provided from the source device 200.

Here, HDMI 2.0 is a standard optimized for an ultra-high-resolution environment called 4K or UHD (ultra high definition). HDMI 2.0 supports a maximum bandwidth of 18 Gbps and can transmit smoothly moving images at 60 Hz with a maximum resolution of 4,096 × 2,160 (2160p).

The HDMI standard may include an information block called a Vendor Specific Data Block (VSDB), and the VSDB may include audio/video delay information, CEC physical address information, color bit information, maximum TMDS frequency information, and the like. The color bit information herein may refer to color information, and the maximum Transition Minimized Differential Signaling (TMDS) frequency information may refer to resolution information.

If the type of the content transmitted to the electronic device 100 is identified as game content, the source device 200 may include, in the VSDB, Auto Low Latency Mode (ALLM) control information for adjusting the latency.

Each version of the HDMI port may have backward compatibility. Thus, a source device 200 of a higher standard may be connected to an electronic device 100 of a lower standard, and vice versa. However, in this case, both devices may use only the functions corresponding to the lower standard. In an example, even if the source device 200 supports HDMI 2.0, when the electronic device 100 supports the HDMI 1.4 standard, only the functions of HDMI 1.4 may be used. Therefore, in order to turn on the ALLM on the electronic device 100, the standards of both the electronic device 100 and the source device 200 may need to be at least HDMI 2.0.

If the predetermined mode of the display 120 is turned on according to the signal received from the source device 200, the processor 130 may obtain information on a first time point when the predetermined mode is turned on. The processor 130 may store information about a first point in time when the predetermined mode is turned on in the memory.

In addition, the processor 130 may obtain information on a second time point when the predetermined mode is turned off after the first time point. The processor 130 may store information in the memory about a second point in time when the predetermined mode is turned off.

In an example, the information on the first time point when the predetermined mode is turned on and the information on the second time point when the predetermined mode is turned off may be time information of the corresponding time points. For example, the information related to the first time point may be time information of the first time point, such as 16:30 on September 1, 2019. In another example, the information related to the first time point may be the starting point of a stopwatch for measuring the time period. In this case, the information on the second time point when the predetermined mode is turned off may refer to the time period itself. For example, the first time point may be the time point at which the stopwatch starts, which is 0, and the second time point may be the time point at which the stopwatch ends, which is, for example, 2 hours and 30 minutes.
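Under the first example (time information stored for both points), the on-period could be recovered as sketched below; the timestamp format is an illustrative assumption.

```python
from datetime import datetime

def viewing_seconds(first_point: str, second_point: str) -> float:
    """Return how many seconds the predetermined mode stayed on, given
    the stored time information of the first and second time points.
    The "YYYY-MM-DD HH:MM" format is an illustrative assumption."""
    fmt = "%Y-%m-%d %H:%M"
    on = datetime.strptime(first_point, fmt)
    off = datetime.strptime(second_point, fmt)
    return (off - on).total_seconds()
```

In the stopwatch example, no subtraction is needed: the second-point information already is the 2-hour-30-minute period.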

In an example, the predetermined mode may be turned off based on a signal including control information for turning off the predetermined mode.

In another example, a signal including control information for turning on a predetermined mode may be periodically transmitted from the source device 200 to the electronic device 100 from a first time point, and the predetermined mode may be turned off if the control information for turning on the predetermined mode is no longer transmitted.
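The second example, in which the mode turns off once the periodic "mode-on" control information stops arriving, amounts to a timeout check; a sketch follows, with the class name and timeout value as assumptions.

```python
class ModeWatchdog:
    """Treat the predetermined mode as on while periodic 'mode-on'
    signals keep arriving, and as off once no signal has arrived
    within the timeout. Class name and timeout are assumptions."""

    def __init__(self, timeout_s: float = 1.0):
        self.timeout_s = timeout_s
        self.last_signal = None  # time of the most recent 'mode-on' signal

    def on_signal(self, now_s: float) -> None:
        self.last_signal = now_s

    def is_on(self, now_s: float) -> bool:
        return (self.last_signal is not None
                and now_s - self.last_signal <= self.timeout_s)
```

The moment `is_on` first returns false after having returned true would then be recorded as the second time point.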

Accordingly, the processor 130 may identify the information about the first time point when the predetermined mode is turned on and the information about the second time point when the predetermined mode is turned off. In other words, the processor 130 may identify the time period for which the predetermined mode is turned on.

The processor 130 may obtain information related to the content displayed through the display 120 based on the identification information, the information related to the first time point, and the information related to the second time point.

The information related to the content here may include any one or any combination of the title, genre, or reproduction time period of the content.

The processor 130 may identify the title of the content displayed through the display 120 based on the identification information. The identification information here may be information obtained by the server 300 by comparing the feature information received from the electronic device 100 with the feature information stored in the server 300, and may be information transmitted to the electronic device 100 by the server 300. In addition to the title of the content, the processor 130 may identify genre information, production year information, production country information, cast information, and the like based on the identification information.

In addition, the processor 130 may identify the reproduction period of the identified content based on the information about the first time point and the information about the second time point. The processor 130 may recognize, based on the identification information, that the identified content starts to be reproduced at the first time point and ends at the second time point. Accordingly, the processor 130 may obtain the total time information during which the content is reproduced.

The processor 130 may obtain the identification information of the content from among the plurality of pieces of identification information received from the server 300 based on the feature information obtained between the first time point and the second time point. For example, if feature information is obtained before the first time point and transmitted to the server 300, the identification information received from the server 300 does not relate to the content reproduced through the display 120 between the first time point and the second time point. Thus, the processor 130 may identify, as the content reproduced between the first time point and the second time point, only the content corresponding to the feature information obtained between the first time point and the second time point. This will be described in detail with reference to Fig. 8.
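The filtering described above, which keeps only identification information whose feature information was obtained within the on-period, can be sketched as follows; the record layout is an assumption.

```python
def content_between(records, first_point, second_point):
    """records: list of (time_obtained, identification_info) pairs,
    where time_obtained is when the feature information was captured
    (the pair layout is an illustrative assumption). Only information
    whose feature information was obtained between the first and
    second time points is kept."""
    return [info for t, info in records if first_point <= t <= second_point]
```

Feature information captured before the mode turned on (e.g., a menu screen) is thereby excluded from the viewing record.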

Although it is described that the processor 130 obtains the feature information of the content currently being reproduced, the disclosure is not limited thereto; the processor 130 may obtain the feature information of content to be reproduced and receive the identification information related thereto from the server 300.

Fig. 3 is a block diagram for explaining the configuration of an electronic device according to the embodiment.

Referring to fig. 3, electronic device 100 may include a communication interface 110, a display 120, a processor 130, a memory 140, an audio output interface 150, and a user input interface 160. Detailed description about components of the configuration shown in fig. 3 overlapping with the configuration shown in fig. 2 will not be repeated.

The processor 130 may control the operation of the electronic device 100 using various programs stored in the memory 140. The processor 130 may include a graphics processor 132 for graphics processing corresponding to the image. The processor 130 may be implemented as a system on chip (SoC) including a core and a Graphics Processing Unit (GPU) 132. The processor 130 may be a single-core, dual-core, triple-core, quad-core, or other multi-core processor.

The processor 130 may include a main Central Processing Unit (CPU) 131, a GPU 132, and a Neural Processing Unit (NPU) 133.

The main CPU 131 can perform booting using the O/S stored in the memory 140 by accessing the memory 140. The main CPU 131 can perform various operations using various programs, contents, data, and the like stored in the memory 140. According to an embodiment, the main CPU 131 may copy a program in the memory 140 to the RAM and access the RAM to execute the corresponding program according to instructions stored in the ROM.

The GPU 132 may correspond to a high-performance processor for graphics processing, and may be a special-purpose electronic circuit designed to rapidly manipulate and alter memory to accelerate the generation of images in a frame buffer to be output on a screen. The GPU 132 may also be referred to as a Visual Processing Unit (VPU).

The NPU 133 may correspond to an AI chipset (or an AI processor), and may be an AI accelerator. The NPU 133 may correspond to a processor chip optimized for deep neural networks. The NPU 133 may correspond to a processor that executes a deep learning model instead of the GPU 132, or a processor that executes a deep learning model together with the GPU 132.

The memory 140 may be electrically connected to the processor 130 and store data used in the embodiments.

The memory 140 may be implemented in the form of a memory embedded in the electronic device 100 or in the form of a memory detachable from the electronic device 100, according to the data storage purpose. For example, data for operating the electronic device 100 may be stored in a memory embedded in the electronic device 100, and data for extended functions of the electronic device 100 may be stored in a memory detachable from the electronic device 100. The memory embedded in the electronic device 100 may be implemented as any one or any combination of the following: volatile memory (e.g., dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), etc.) and non-volatile memory (e.g., one-time programmable ROM (OTPROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), mask ROM, flash memory (e.g., NAND flash or NOR flash), hard disk drive, or solid state drive (SSD)). The memory detachable from the electronic device 100 may be implemented in the form of a memory card (e.g., Compact Flash (CF), Secure Digital (SD), Micro Secure Digital (Micro-SD), Mini Secure Digital (Mini-SD), extreme digital (xD), multimedia card (MMC), etc.), an external memory (e.g., USB memory) connectable to a USB port, or the like.

According to an embodiment, the memory 140 may store feature information obtained by the processor 130. The memory 140 may store any one or any combination of video fingerprints or audio fingerprints.

The memory 140 may store information related to the first time point and information related to the second time point. In addition, the memory 140 may also store information about a point in time at which the characteristic information is obtained.

The audio output interface 150 may be an element for outputting an audio signal corresponding to a video signal. The audio output interface 150 may be implemented as a speaker and may also be implemented as an external element of the electronic device 100, such as a soundbar.

The user input interface 160 may be an element for receiving various user instructions and information. The processor 130 may perform functions corresponding to user instructions input through the user input interface 160 and store information input through the user input interface 160 in the memory 140.

The user input interface 160 may include a microphone for receiving a user instruction as voice, or may be implemented as the display 120 for receiving a user instruction by touch.

In addition, the user input interface 160 may receive a user instruction or a signal including information related to an operation from a separate control device for controlling the electronic device 100.

Fig. 4 is a block diagram for explaining an operation between an electronic device and a server according to the embodiment.

The electronic device 100 may include a signal reception module, a Video Fingerprint (VFP) obtaining module, an Audio Fingerprint (AFP) obtaining module, an HDMI information processing module, and a content information obtaining module. The above modules may be stored in the memory 140 and may be loaded to the processor 130 and executed according to the control of the processor 130.

The signal receiving module may receive an image signal from the source device 200. The image signal may include content and control information related to the content. The signal receiving module may divide the image signal received from the source device 200 into a video signal and an audio signal. Then, the signal reception module may transmit the video signal to the video fingerprint obtaining module, transmit the audio signal to the audio fingerprint obtaining module, and transmit the control information to the HDMI information processing module.

The video fingerprint obtaining module may obtain video fingerprints from the video signal at predetermined time intervals. The video fingerprint obtaining module may obtain a video fingerprint including a character string based on pixel values included in the video signal. The obtained video fingerprint information may be transmitted to the content information obtaining module.

The audio fingerprint obtaining module may obtain the audio fingerprint from the audio signal at predetermined time intervals. The audio fingerprint obtaining module may obtain an audio fingerprint including a character string based on frequency information of an acoustic signal included in the audio signal. The obtained audio fingerprint information may be transmitted to the content information obtaining module.

The HDMI information processing module may monitor whether the ALLM is turned on in the electronic device 100 according to the control information transmitted from the source device 200 based on a predetermined version of the HDMI standard or higher. The HDMI information processing module may transmit information about the first time point when the ALLM is turned on to the content information obtaining module.

In addition, the HDMI information processing module may monitor whether the turned-on ALLM is turned off. The HDMI information processing module may transmit, to the content information obtaining module, information on the second time point when the ALLM is turned off after the first time point.

The content information obtaining module may store the information about the first time point and the information about the second time point, which are transmitted from the HDMI information processing module, in the memory 140, and recognize that the predetermined type of content is reproduced between the first time point and the second time point. In an example, the content information obtaining module may identify that the game content is reproduced between a first point in time and a second point in time.

The content information obtaining module may transmit the video fingerprint information and the audio fingerprint information transmitted from the video fingerprint obtaining module and the audio fingerprint obtaining module to the server 300. In an example, the content information obtaining module may transmit any one or any combination of the most recently received video fingerprint information or audio fingerprint information to the server 300. However, it is not limited thereto, and the content information obtaining module may transmit all video fingerprint information and audio fingerprint information transmitted from the video fingerprint obtaining module and the audio fingerprint obtaining module to the server 300.

Operations of the signal reception module, the Video Fingerprint (VFP) obtaining module, the Audio Fingerprint (AFP) obtaining module, the HDMI information processing module, and the content information obtaining module may be performed by the processor 130, and the above-described modules may be collectively referred to as the processor 130. In addition, names of modules and types of modules are examples, and the modules may be implemented to have various names and various types of modules.

The server 300 may include a matching module, an indexing module, a database, and an image input module.

The server 300 may identify content reproduced by the electronic device 100 based on any one or any combination of video fingerprints or audio fingerprints transmitted from the electronic device 100.

The matching module may determine content that matches any one or any combination of the video fingerprint or the audio fingerprint transmitted from the electronic device 100. For example, the matching module may identify content that matches any one or any combination of video fingerprints or audio fingerprints transmitted from the electronic device 100 based on the video fingerprint information and the audio fingerprint information stored in the database. A match may refer to a case where a video fingerprint is the same as or similar to another video fingerprint, and a case where an audio fingerprint is the same as or similar to another audio fingerprint.

The database may store any one or any combination of video fingerprints or audio fingerprints generated with respect to the at least one content. For example, the database may store video or audio fingerprints of game content, or video or audio fingerprints generated with respect to a real-time broadcast service.

The indexing module may index each of the video and the audio. The indexing module may generate a video fingerprint based on the video signal of the image signal transmitted through the image input module and generate an audio fingerprint based on the audio signal of the transmitted image signal. The generation of the video fingerprint and the audio fingerprint may refer to the extraction of the video fingerprint and the audio fingerprint.

The indexing module may index and store each of the generated video and audio fingerprints in a database.

The image input module may receive a content signal including a game content-related signal and a broadcast service signal. The image input module may divide the received signal into a video signal and an audio signal. The image input module may transmit the divided video signal and audio signal to the index module. The game content-related signal or the broadcast service signal may be a signal transmitted from the source device 200 or an external device.

In the case of real-time broadcasting, the real-time broadcast service signal may be transmitted to the image input module of the server 300 before it is transmitted to the electronic device 100. Therefore, the video fingerprint and the audio fingerprint may be obtained from the image signal including the broadcast service signal and stored in the database before the electronic device 100 receives the signal. Accordingly, even if a content identification request related to the real-time broadcast is received from the electronic device 100, the server 300 may identify the content corresponding to the real-time broadcast based on any one or any combination of the video fingerprint or the audio fingerprint stored in the database.

Fig. 5 is a view for explaining an operation of discriminating contents in the case of reproducing the same video on a plurality of channels according to the embodiment.

As shown in fig. 5, it is assumed that videos of contents reproduced on a plurality of channels are the same. In this case, the electronic device 100 may obtain video fingerprint information obtained from channel 5, channel 7, and channel 11 and transmit them to the server 300. Since a plurality of video fingerprints transmitted from the electronic device 100 are the same, the server 300 may recognize the content of the channel as one content.

However, assuming that the contents reproduced on channel 5, channel 7, and channel 11 are soccer games, for example, the contents can be identified as different contents because the channel numbers on which the contents are reproduced, the commentators, and the like are different from each other. Accordingly, an embodiment of more accurately identifying content using audio fingerprint information in this case will be described in detail below with reference to Fig. 6.

Fig. 6 is a flowchart for explaining a process of using audio feature information if content is not identified using only video feature information according to an embodiment.

The electronic device 100 may receive identification information related to the content from the server 300 (S610). It may be determined whether more than one piece of identification information is received from the server 300; that is, it may be recognized whether a plurality of identical pieces of identification information are received from the server 300 (S620). If only one piece of identification information is received from the server 300 (S620 — No), the electronic device 100 may recognize that the identification information relates to one content and identify the content being reproduced based on the corresponding identification information (S630).

If a plurality of pieces of identification information are received from the server 300 and the plurality of pieces of identification information are identical (S620-YES), the electronic device 100 may obtain Audio Fingerprint (AFP) information (S640). This is because the pieces of identification information can be distinguished using the audio fingerprint information.

The electronic device 100 may transmit the obtained audio fingerprint information to the server 300 (S650). For example, assuming that the contents reproduced on channel 5, channel 7, and channel 11 are a soccer game, the audio signals of channel 5, channel 7, and channel 11 may be different from each other because the commentators may be different, even if the video signals of channel 5, channel 7, and channel 11 are the same. Accordingly, the electronic device 100 may transmit the audio fingerprint information of each channel to the server 300.

The server 300 may obtain pieces of content identification information different from each other by comparing the plurality of pieces of audio fingerprint information transmitted from the electronic device 100 with the information stored in the database. For example, the server 300 may obtain the broadcasting station, channel number, and cast information corresponding to each piece of audio fingerprint information and transmit them to the electronic device 100.

The electronic apparatus 100 may receive a plurality of pieces of identification information different from each other from the server 300 (S660) and recognize the content (S670).

The case where the electronic device 100 transmits the audio fingerprint information to the server 300 after receiving the identification information corresponding to the video fingerprint from the server 300 is described above, but the disclosure is not limited thereto. The electronic device 100 may transmit the video fingerprints and the audio fingerprints related to channel 5, channel 7, and channel 11 together, and if the plurality of video fingerprints transmitted from the electronic device 100 are recognized to be the same, the server 300 may obtain, using the audio fingerprints, the identification information of the contents different from each other corresponding to each piece of audio fingerprint information and transmit the identification information to the electronic device 100.
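The overall Fig. 6 flow — identify by video fingerprint first, and fall back to the audio fingerprint only when several identical pieces of identification information come back — may be sketched as below; the matcher callables stand in for the server 300 lookups and are assumptions.

```python
def identify_with_fallback(vfp, afp, match_video, match_audio):
    """Sketch of S610-S670: match_video(vfp) returns the pieces of
    identification information found for the video fingerprint; if
    more than one comes back, match_audio(afp) disambiguates them."""
    results = match_video(vfp)          # S610: identification info for the VFP
    if len(results) <= 1:               # S620-No: zero or one piece of info
        return results[0] if results else None
    return match_audio(afp)             # S640-S670: disambiguate via the AFP
```

The audio lookup only runs in the ambiguous case, keeping the common single-result path cheap.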

Fig. 7 is a sequence diagram of an electronic device, a source device, and a server according to an embodiment.

The source device 200 may identify the type of content to be transmitted to the electronic device 100 (S705), and transmit an image signal to the electronic device 100 (S710). The source device 200 may include control information for turning on the ALLM in the image signal transmitted to the electronic device 100 if the content transmitted to the electronic device 100 is recognized as game content.

The electronic device 100 may receive the image signal from the source device 200 (S715). The electronic device 100 may identify, based on the control information included in the image signal, whether the predetermined mode is turned on (S720). When it is determined that the predetermined mode is turned on (S720 — yes), the electronic device 100 may store information about a first time point when the predetermined mode is turned on (S725).

In addition, when receiving the image signal from the source device 200, the electronic device 100 may divide the content included in the image signal into a video signal and an audio signal, and periodically obtain the feature information from the video signal and the audio signal (S730). The electronic device 100 may obtain video fingerprint information from a video signal and audio fingerprint information from an audio signal.

Then, the electronic device 100 may transmit the obtained feature information for identifying the information related to the content to the server 300 (S735).

Before the electronic device 100 does, the server 300 may receive an image signal from the source device 200 or an external device (S740). The server 300 may divide the content included in the image signal into a video signal and an audio signal and generate video feature information and audio feature information (S745). The generated feature information may be stored as video feature information and audio feature information (S750).

Then, when the feature information is transmitted from the electronic device 100 (S735), the server 300 may match (compare) the video feature information transmitted from the electronic device 100 with the information of the database (S755). The server 300 may identify whether more than one piece of identification information is obtained based on the video feature information (S760). If more than one piece of identification information is identified (S760 — yes), in other words, if the identification information does not point to a single content, the server 300 may match (compare) the audio feature information transmitted from the electronic device 100 with the information of the database (S765). Even a plurality of pieces of identification information can be narrowed down to one piece by using the audio feature information, and thus the server 300 may identify the one matching content and obtain the identification information related thereto (S770). Then, the server 300 may transmit the identification information of the matched content to the electronic device 100 (S775).
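The fallback from video matching to audio matching in steps S755 to S775 can be sketched as follows. The in-memory dictionaries and fingerprint strings are illustrative assumptions, not the server's actual storage format.

```python
# Sketch of the server-side matching flow (S755-S775), assuming simple
# in-memory databases keyed by fingerprint; all names are illustrative only.
VIDEO_DB = {
    "vfp_soccer": ["ch5_game", "ch6_game", "ch11_game"],  # same video on 3 channels
}
AUDIO_DB = {
    "afp_commentator_a": "ch5_game",
    "afp_commentator_b": "ch6_game",
    "afp_commentator_c": "ch11_game",
}

def identify(video_fp, audio_fp):
    candidates = VIDEO_DB.get(video_fp, [])  # S755: match video fingerprint
    if len(candidates) == 1:                 # S760: exactly one match
        return candidates[0]
    if len(candidates) > 1:                  # S760 - yes: ambiguous
        content = AUDIO_DB.get(audio_fp)     # S765: fall back to audio
        if content in candidates:            # S770: one matching content
            return content
    return None                              # no identification possible

print(identify("vfp_soccer", "afp_commentator_b"))  # -> ch6_game
```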

The electronic device 100 may obtain information on a second time point when the predetermined mode is turned off and store the information on the second time point (S780). Step S780 is shown after the reception of the identification information from the server 300 in Fig. 7, but this is merely one embodiment; step S780 may be performed at any point in time after step S730 in which the electronic device 100 obtains the feature information.

The electronic device 100 may obtain information related to the content based on the identification information received from the server 300 and the stored information related to the first time point and the second time point (S785).

The electronic device 100 may identify the type and title of the content based on the identification information, and may identify the reproduction period of the content based on the information about the first time point and the second time point.

Fig. 8 is a view for explaining identification information based on a time point when feature information is obtained according to an embodiment.

The electronic device 100 may determine the type of content reproduced between the first time point when the predetermined mode is turned on and the second time point when the predetermined mode is turned off. In this case, if a plurality of distinct pieces of identification information are received between the first time point and the second time point, the electronic device 100 may identify which identification information corresponds to the content actually reproduced between the first time point and the second time point.

For example, the electronic device 100 may obtain the feature information from the content A at a time point t1 before the first time point and transmit the obtained feature information to the server 300. The server 300 may identify the matched content A based on the feature information and transmit identification information of the content A to the electronic device 100. Assume that the electronic device 100 receives the identification information of the content A from the server 300 at a time point t2 after the first time point.

In addition, the electronic device 100 may obtain the feature information from the content B at a time point t3 after the first time point and transmit the obtained feature information to the server 300. The server 300 may identify the matched content B based on the feature information and transmit identification information of the content B to the electronic device 100. Assume that the electronic device 100 receives the identification information of the content B from the server 300 at a time point t4 before the second time point.

In other words, the electronic device 100 may receive the plurality of pieces of identification information from the server 300 between the first time point and the second time point. In this case, the electronic device 100 may identify, among the plurality of pieces of identification information received from the server 300, the identification information corresponding to the feature information obtained between the first time point and the second time point (the identification information received at time point t4). The electronic device 100 may recognize that the content corresponding to the identification information received at time point t4 is reproduced between the first time point and the second time point, and may identify the reproduction time period of the content B.

In addition, even if t4 (i.e., the time point at which the identification information of the content B is received) is after the second time point, the electronic device 100 may recognize that the content B is reproduced between the first time point and the second time point because the feature information of the content B is obtained between the first time point and the second time point.
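The selection rule illustrated in Fig. 8 keys on when the feature information was captured, not on when the server's reply arrives. A minimal sketch, with assumed timestamps and record layout:

```python
# Select, among identification results received from the server, the one whose
# feature information was captured between the first time point (mode on) and
# the second time point (mode off). Record layout is an assumption.
def select_content(records, t_on, t_off):
    """records: list of (capture_time, identification_info) pairs.

    capture_time is when the fingerprint was obtained, not when the
    identification info arrived - so a reply received after t_off (like t4
    in Fig. 8) still counts if its fingerprint was captured in the window."""
    for captured_at, info in records:
        if t_on <= captured_at <= t_off:
            return info
    return None

records = [(10, "content A"), (35, "content B")]   # captured at t1=10, t3=35
print(select_content(records, t_on=20, t_off=60))  # -> content B
```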

Fig. 9 is a flowchart for explaining a method of controlling an electronic device according to an embodiment.

When receiving the content from the source device 200, the electronic device 100 may obtain feature information of the received content and transmit the feature information to the server 300 (S910).

Here, the feature information may include any one or any combination of video feature information and audio feature information. The video characteristic information may be implemented as a video fingerprint and the audio characteristic information may be implemented as an audio fingerprint.

The electronic device 100 may capture an image of the content currently being viewed, among the received content, at predetermined time intervals, and obtain video feature information based on pixel values of the captured image.
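As an illustration of deriving feature information from pixel values, a simple average hash over a downscaled frame is sketched below. Production video fingerprints are considerably more robust; this is only a toy stand-in.

```python
# A rough sketch of video feature information from pixel values of a captured
# frame, using a simple average hash (illustrative only).
def video_fingerprint(pixels):
    """pixels: 2-D list of grayscale values for a downscaled captured frame."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: brighter than the frame average or not.
    return "".join("1" if p > mean else "0" for p in flat)

frame = [[200, 10], [180, 30]]   # toy 2x2 "capture"
print(video_fingerprint(frame))  # -> "1010"
```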

In addition, the electronic device 100 may obtain frequency information of the acoustic information of the content currently being viewed at predetermined time intervals, and obtain audio feature information based on the obtained frequency information.
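A comparable toy sketch for the audio side: compute the magnitude spectrum of a short window and keep the indices of the strongest frequency bins. Real audio fingerprints (e.g., peak-pair hashing) are more elaborate; the DFT and window here are illustrative only.

```python
# Rough sketch of audio feature information from frequency content:
# DFT magnitudes of a short window, fingerprinted by the strongest bins.
import cmath
import math

def dft_magnitudes(samples):
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]  # keep the non-redundant half

def audio_fingerprint(samples, top=2):
    mags = dft_magnitudes(samples)
    # Fingerprint: indices of the strongest frequency bins, strongest first.
    return tuple(sorted(range(len(mags)), key=lambda k: -mags[k])[:top])

# 8-sample toy window dominated by bin 1 (exactly one cycle of a cosine).
window = [math.cos(2 * math.pi * t / 8) for t in range(8)]
print(audio_fingerprint(window))
```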

Here, if more than one piece of identification information of the content is obtained based on the video feature information, the audio feature information may additionally be used to obtain, from among the plurality of pieces of identification information, the identification information corresponding to the content currently being viewed.

The electronic device 100 may obtain video feature information from a predetermined number of the most recently captured images among the plurality of images captured at predetermined time intervals, and transmit the video feature information to the server 300.
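Keeping only a predetermined number of recent captures can be sketched with a bounded buffer; the window size here is an assumed parameter.

```python
# Sketch of retaining only a predetermined number of the most recent captures
# (window size is an assumed parameter) for fingerprinting.
from collections import deque

WINDOW = 3  # predetermined number of recent captures

recent = deque(maxlen=WINDOW)  # older captures fall off automatically
for capture_id in ["img1", "img2", "img3", "img4", "img5"]:
    recent.append(capture_id)

print(list(recent))  # only the predetermined number of most recent images
```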

The electronic device 100 may receive identification information of the content obtained based on the feature information from the server 300 (S920).

The electronic device 100 may obtain, from among the plurality of pieces of identification information received from the server 300, the identification information of the content based on the feature information obtained between the first time point and the second time point.

If the predetermined mode of the electronic device 100 is turned on according to the signal received from the source device 200, the electronic device 100 may obtain information on a first time point when the predetermined mode is turned on (S930).

The signal received from the source device 200 may include control information for allowing the electronic device 100 to turn on or off a predetermined mode based on the type of content provided from the source device 200.

The control information may be provided from the source device 200 if the type of the content is game content, and the predetermined mode may be an Automatic Low Latency Mode (ALLM).

The electronic device 100 may obtain information on a second time point when the predetermined mode is turned off after the first time point (S940).
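Steps S930 and S940 amount to latching timestamps on the mode's on/off transitions. A minimal sketch, with the signal format abstracted into a single assumed `allm_on` flag:

```python
# Sketch of recording the first/second time points from mode on/off
# transitions in received control information. `allm_on` stands in for the
# flag the source device sets for game content; it is an assumption, not
# the actual signal format.
import time

class ModeTracker:
    def __init__(self, clock=time.time):
        self.clock = clock
        self.mode_on = False
        self.first_time_point = None   # when the predetermined mode turned on
        self.second_time_point = None  # when it turned off again

    def on_signal(self, allm_on):
        if allm_on and not self.mode_on:    # S930: mode turned on
            self.mode_on = True
            self.first_time_point = self.clock()
        elif not allm_on and self.mode_on:  # S940: mode turned off
            self.mode_on = False
            self.second_time_point = self.clock()

fake_clock = iter([100.0, 160.0])
tracker = ModeTracker(clock=lambda: next(fake_clock))
tracker.on_signal(True)    # game content starts -> first time point = 100.0
tracker.on_signal(False)   # game content ends  -> second time point = 160.0
print(tracker.second_time_point - tracker.first_time_point)  # -> 60.0
```

The elapsed time between the two latched points is then the reproduction period used in step S950.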

The electronic device 100 may obtain information related to the displayed content based on the obtained identification information, the information related to the first time point, and the information related to the second time point (S950).

The electronic device 100 may identify the title of the displayed content based on the identification information, and may identify the reproduction period of the identified content based on the information about the first time point and the second time point.

The operation of each step is described above, and thus, a detailed description thereof will not be repeated.

The method according to the embodiments of the present disclosure described above may be implemented in the form of an application that can be installed in a related art electronic device.

In addition, the method according to the embodiment of the present disclosure described above may be simply implemented by software update or hardware update in the electronic device of the related art.

Further, the above-disclosed embodiments may be performed by an embedded server provided in the electronic device or an external server of the electronic device.

In accordance with embodiments of the present disclosure, the above-described embodiments may be implemented as software including instructions stored in a machine (e.g., computer) readable storage medium. The machine is a device that invokes instructions stored in a storage medium and operates in accordance with the invoked instructions, and may include an electronic device in accordance with the disclosed embodiments. In the case where instructions are executed by a processor, the processor may perform functions corresponding to the instructions directly or using other elements under the control of the processor. The instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, a "non-transitory" storage medium is tangible and may not include a signal, and the term does not distinguish between the case where data is stored semi-permanently and the case where data is stored temporarily in the storage medium.

Additionally, in accordance with embodiments of the present disclosure, methods in accordance with the above-disclosed embodiments may be provided for inclusion in a computer program product. The computer program product may be exchanged as an item between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), or distributed online through an application store (e.g., PlayStore™). In the case of online distribution, at least a part of the computer program product may be at least temporarily stored in, or temporarily generated in, a storage medium such as a memory of a server of the manufacturer, a server of the application store, or a relay server.

According to the embodiments of the present disclosure, the above-described embodiments may be implemented in a recording medium readable by a computer or the like using software, hardware, or a combination thereof. In some cases, the embodiments described in this specification may be implemented as a processor itself. According to an implementation by software, embodiments such as the processes and functions described in this specification may be implemented as separate software modules. Each software module may perform one or more of the functions and operations described in this specification.

Computer instructions for performing processing operations in accordance with embodiments of the present disclosure described above may be stored in a non-transitory computer readable medium. When executed by a processor, the computer instructions stored in such a non-transitory computer-readable medium may enable the machine to perform processing operations in accordance with the embodiments described above.

A non-transitory computer-readable medium is not a medium that stores data for a short period of time (such as a register, a cache, or a memory), but refers to a medium that stores data semi-permanently and is readable by a machine. Examples of non-transitory computer-readable media include a CD, a DVD, a hard disk, a Blu-ray disc, a USB memory device, a memory card, and a ROM.

In addition, each of the elements (e.g., modules or programs) according to the above-described embodiments may include a single entity or a plurality of entities, and in the above-described embodiments, some of the above-described sub-elements may be omitted or other sub-elements may also be included. Alternatively or additionally, some elements (e.g., modules or programs) may be integrated into one entity to perform the same or similar functions as performed by each respective element prior to integration. Operations performed by a module, program, or other element may be performed in parallel, in an iterative or heuristic manner, or at least some of the operations may be performed in a different order, omitted, or different operations may be added, depending on the embodiment.

While the embodiments of the present disclosure have been illustrated and described, the present disclosure is not limited to the foregoing embodiments, and it will be apparent to those skilled in the art to which the present disclosure pertains that various modifications may be made without departing from the gist of the disclosure as claimed in the appended claims. Such modifications should not be construed independently of the technical idea or perspective of the present disclosure.

As described above, according to an embodiment of the present disclosure, an electronic device may identify a title of content currently being displayed.

In addition, if the content reproduced by the electronic device is game content, the electronic device may recognize a game time of the game content that the user actually plays.
