Method and apparatus for displaying image

Publication No.: 1345785 · Publication date: 2020-07-21

Note: This technology, "Method and apparatus for displaying image", was created on 2019-01-15 by 高超, 解晶, 思磊, and 郭鹤. Abstract: Embodiments of the present disclosure disclose methods and apparatus for displaying images. One embodiment of the method comprises: determining a selected time point at which a user adjusts the playing progress of a target video, wherein the target video comprises a key frame set; determining a target key frame from the key frame set, wherein the difference between the time point corresponding to the target key frame and the selected time point meets a first preset condition; decoding the video frame corresponding to the selected time point based on the target key frame to obtain a decoded video frame; and displaying the decoded video frame in a first target display area. This embodiment improves the flexibility of displaying video frames and helps improve the efficiency with which a user locates and processes video frames.

1. A method for displaying an image, comprising:

determining a selected time point at which a user adjusts the playing progress of a target video, wherein the target video comprises a key frame set;

determining a target key frame from the key frame set, wherein the difference between the time point corresponding to the target key frame and the selected time point meets a first preset condition;

decoding the video frame corresponding to the selected time point based on the target key frame to obtain a decoded video frame;

displaying the decoded video frame in a first target display area.

2. The method of claim 1, wherein the determining the selected time point at which the user adjusts the playing progress of the target video comprises at least one of:

in response to detecting that the dwell time of a control point for adjusting the playing progress of the target video at the current time point is greater than or equal to a preset time threshold, determining the current time point as the selected time point;

and in response to detecting that the user no longer manipulates the control point, determining the time point currently corresponding to the control point as the selected time point.

3. The method of claim 1 or 2, wherein prior to the determining the selected time point at which the user adjusts the playing progress of the target video, the method further comprises:

detecting, in real time, the time point adjusted to while the user adjusts the playing progress of the target video;

and determining a target time point based on the time point detected in real time, and displaying the video frame corresponding to the determined target time point in a second target display area.

4. The method of claim 3, wherein the determining a target time point based on the time point detected in real time comprises:

determining, from the time points respectively corresponding to the key frames in the key frame set, a time point whose distance from the detected time point meets a second preset condition as the target time point.

5. The method of claim 3, wherein the determining a target time point based on the time point detected in real time comprises:

selecting a time point from a target time period in which the detected time point is located as the target time point, wherein the target time period is one of a set of time periods obtained by dividing the playing duration of the target video based on the key frame set.

6. The method of claim 3, wherein the determining a target time point based on the time point detected in real time comprises:

acquiring processing capability information of a target processor, wherein the target processor is used for processing video frames included in the target video, and the processing capability information is used for representing the capability of the target processor to process information;

and periodically determining the time point detected in real time as the target time point according to a preset time interval corresponding to the processing capability information.

7. The method of claim 3, wherein the determining a target time point based on the time point detected in real time and displaying the video frame corresponding to the determined target time point in a second target display area comprises:

determining the currently detected time point as a target time point;

performing the following displaying step: decoding the video frame corresponding to the determined target time point to obtain a decoded video frame for display in the second target display area; and determining whether the second target display area includes the resulting decoded video frame;

in response to determining that the second target display area includes the resulting decoded video frame, re-determining the most recently detected time point as the target time point, and continuing to perform the displaying step using the re-determined target time point.

8. An apparatus for displaying an image, comprising:

a first determining unit configured to determine a selected time point at which a user adjusts a playing progress of a target video, wherein the target video includes a set of key frames;

a second determining unit, configured to determine a target key frame from the key frame set, wherein a difference between a time point corresponding to the target key frame and the selected time point meets a first preset condition;

a decoding unit configured to decode the video frame corresponding to the selected time point based on the target key frame to obtain a decoded video frame;

a display unit configured to display the decoded video frame in a first target display area.

9. The apparatus of claim 8, wherein the first determining unit comprises at least one of:

a first determining module configured to determine the current time point as the selected time point in response to detecting that the dwell time of a control point for adjusting the playing progress of the target video at the current time point is greater than or equal to a preset time threshold;

a second determining module configured to determine the time point currently corresponding to the control point as the selected time point in response to detecting that the user no longer manipulates the control point.

10. The apparatus of claim 8 or 9, wherein the apparatus further comprises:

a detection unit configured to detect, in real time, the time point adjusted to while the user adjusts the playing progress of the target video;

a second determining unit configured to determine a target time point based on the time point detected in real time, and to display the video frame corresponding to the determined target time point in a second target display area.

11. The apparatus of claim 10, wherein the second determining unit is further configured to:

determine, from the time points respectively corresponding to the key frames in the key frame set, a time point whose distance from the detected time point meets a second preset condition as the target time point.

12. The apparatus of claim 10, wherein the second determining unit is further configured to:

select a time point from a target time period in which the detected time point is located as the target time point, wherein the target time period is one of a set of time periods obtained by dividing the playing duration of the target video based on the key frame set.

13. The apparatus of claim 10, wherein the second determining unit comprises:

an obtaining module configured to obtain processing capability information of a target processor, wherein the target processor is used for processing video frames included in the target video, and the processing capability information is used for representing the capability of the target processor to process information;

and a third determining module configured to periodically determine the time point detected in real time as the target time point according to a preset time interval corresponding to the processing capability information.

14. The apparatus of claim 10, wherein the second determining unit comprises:

a fourth determination module configured to determine a currently detected time point as a target time point;

a display module configured to perform the following displaying step: decoding the video frame corresponding to the determined target time point to obtain a decoded video frame for display in the second target display area; and determining whether the second target display area includes the resulting decoded video frame;

a fifth determining module configured to re-determine the most recently detected time point as the target time point in response to determining that the second target display area includes the resulting decoded video frame, and to continue performing the displaying step using the re-determined target time point.

15. A terminal device, comprising:

one or more processors;

a storage device having one or more programs stored thereon,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.

16. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.

Technical Field

Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and apparatus for displaying an image.

Background

With the development of internet technology, people increasingly use terminals such as mobile phones and tablet computers to watch and record videos. Video frames included in a video can be processed with existing image processing techniques; for example, special effects may be added to a recorded video. Generally, in order for a user to accurately locate a video frame to be processed in a video, the existing approach is to pre-process the video by inserting more key frames among its video frames. Since key frames decode faster, inserting more key frames allows the user to quickly preview the video by dragging the progress bar and thereby locate the video frame to be processed.

Disclosure of Invention

The embodiment of the application provides a method and a device for displaying an image.

In a first aspect, an embodiment of the present application provides a method for displaying an image, where the method includes: determining a selected time point at which a user adjusts the playing progress of a target video, wherein the target video comprises a key frame set; determining a target key frame from the key frame set, wherein the difference between the time point corresponding to the target key frame and the selected time point meets a first preset condition; decoding the video frame corresponding to the selected time point based on the target key frame to obtain a decoded video frame; and displaying the decoded video frame in a first target display area.

In some embodiments, determining the selected time point at which the user adjusts the playing progress of the target video includes at least one of: determining the current time point as the selected time point in response to detecting that the dwell time of a control point for adjusting the playing progress of the target video at the current time point is greater than or equal to a preset time threshold; and determining the time point currently corresponding to the control point as the selected time point in response to detecting that the user no longer manipulates the control point.

In some embodiments, before determining the selected time point at which the user adjusts the playing progress of the target video, the method further comprises: detecting, in real time, the time point adjusted to while the user adjusts the playing progress of the target video; determining a target time point based on the time point detected in real time; and displaying the video frame corresponding to the determined target time point in a second target display area.

In some embodiments, determining the target time point based on the time point detected in real time comprises: determining, from the time points respectively corresponding to the key frames in the key frame set, a time point whose distance from the detected time point meets a second preset condition as the target time point.

In some embodiments, determining the target time point based on the time point detected in real time comprises: selecting a time point from a target time period in which the detected time point is located as the target time point, wherein the target time period is one of a set of time periods obtained by dividing the playing duration of the target video based on the key frame set.
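As an illustrative sketch of this implementation (not part of the claimed subject matter), the playing duration can be partitioned into periods delimited by the key-frame time points, and the period containing the detected time point then yields a candidate target time point, for instance the period's start, which is a key frame. The function names and the choice of the period's start are assumptions made here for illustration:

```python
def partition_by_keyframes(keyframe_times, duration):
    # Split [0, duration) into periods delimited by the key-frame time points.
    bounds = sorted(set(keyframe_times) | {0}) + [duration]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def target_period(periods, detected_time):
    # Return the time period that contains the detected time point.
    for start, end in periods:
        if start <= detected_time < end:
            return (start, end)
    return None
```

For key frames at 0.0, 2.0, and 5.0 seconds in a 10-second video, `partition_by_keyframes([0.0, 2.0, 5.0], 10.0)` yields `[(0.0, 2.0), (2.0, 5.0), (5.0, 10.0)]`, and a detected time point of 3.7 falls in the period `(2.0, 5.0)`, whose start could serve as the target time point.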

In some embodiments, determining the target time point based on the time point detected in real time comprises: acquiring processing capability information of a target processor, wherein the target processor is used for processing video frames included in the target video, and the processing capability information is used for representing the capability of the target processor to process information; and periodically determining the time point detected in real time as the target time point according to a preset time interval corresponding to the processing capability information.
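One way to read this variant is as rate limiting: the stronger the processor, the shorter the sampling interval, so detected time points are promoted to target time points more often. The mapping from capability to interval below is a made-up illustration; the disclosure only says the interval corresponds to the processing capability information:

```python
def interval_for_capability(capability):
    # Hypothetical mapping: higher capability -> shorter sampling interval (s).
    return {"high": 0.1, "medium": 0.25, "low": 0.5}[capability]

def make_periodic_sampler(interval):
    # Accept a detected time point as a target at most once per interval.
    last_accepted = [float("-inf")]
    def accept(now):
        if now - last_accepted[0] >= interval:
            last_accepted[0] = now
            return True
        return False
    return accept
```

With `accept = make_periodic_sampler(interval_for_capability("low"))`, calls at wall-clock times 0.0, 0.3, and 0.6 seconds return True, False, and True respectively, so only every other detected time point becomes a target time point on this slow processor.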

In some embodiments, determining a target time point based on the time point detected in real time, and displaying a video frame corresponding to the determined target time point in a second target display area, includes: determining the currently detected time point as the target time point; performing the following displaying step: decoding the video frame corresponding to the determined target time point to obtain a decoded video frame for display in the second target display area, and determining whether the second target display area includes the resulting decoded video frame; and, in response to determining that the second target display area includes the resulting decoded video frame, re-determining the most recently detected time point as the target time point, and continuing to perform the displaying step using the re-determined target time point.
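This displaying step amounts to a loop that always decodes the most recently detected time point and discards any stale intermediate ones. A minimal sketch, with the decoder modeled as a callable and the second target display area as a list, both assumptions for illustration only:

```python
def preview_loop(pending_times, decode, display_area):
    # pending_times: queue of detected time points, newest last.
    while pending_times:
        target = pending_times[-1]   # most recently detected time point
        pending_times.clear()        # older detected points are superseded
        frame = decode(target)       # decode the frame for the target point
        display_area.append(frame)   # "display" in the second target area
    return display_area
```

If the user drags through time points 1.0 and 2.5 before the first decode finishes, only the frame for 2.5 is decoded and shown, matching the re-determination behavior described above.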

In a second aspect, an embodiment of the present application provides an apparatus for displaying an image, the apparatus including: a first determining unit configured to determine a selected time point at which a user adjusts the playing progress of a target video, wherein the target video includes a key frame set; a second determining unit configured to determine a target key frame from the key frame set, wherein the difference between the time point corresponding to the target key frame and the selected time point meets a first preset condition; a decoding unit configured to decode the video frame corresponding to the selected time point based on the target key frame to obtain a decoded video frame; and a display unit configured to display the decoded video frame in a first target display area.

In some embodiments, the first determining unit comprises at least one of: a first determining module configured to determine the current time point as the selected time point in response to detecting that the dwell time of a control point for adjusting the playing progress of the target video at the current time point is greater than or equal to a preset time threshold; and a second determining module configured to determine the time point currently corresponding to the control point as the selected time point in response to detecting that the user no longer manipulates the control point.

In some embodiments, the apparatus further comprises: a detection unit configured to detect, in real time, the time point adjusted to while the user adjusts the playing progress of the target video; and a second determining unit configured to determine a target time point based on the time point detected in real time and to display the video frame corresponding to the determined target time point in a second target display area.

In some embodiments, the second determining unit is further configured to: determine, from the time points respectively corresponding to the key frames in the key frame set, a time point whose distance from the detected time point meets a second preset condition as the target time point.

In some embodiments, the second determining unit is further configured to: select a time point from a target time period in which the detected time point is located as the target time point, wherein the target time period is one of a set of time periods obtained by dividing the playing duration of the target video based on the key frame set.

In some embodiments, the second determining unit comprises: an acquisition module configured to acquire processing capability information of a target processor, wherein the target processor is used for processing video frames included in the target video, and the processing capability information is used for representing the capability of the target processor to process information; and a third determining module configured to periodically determine the time point detected in real time as the target time point according to a preset time interval corresponding to the processing capability information.

In some embodiments, the second determining unit comprises: a fourth determining module configured to determine the currently detected time point as the target time point; a display module configured to perform the following displaying step: decoding the video frame corresponding to the determined target time point to obtain a decoded video frame for display in the second target display area, and determining whether the second target display area includes the resulting decoded video frame; and a fifth determining module configured to re-determine the most recently detected time point as the target time point in response to determining that the second target display area includes the resulting decoded video frame, and to continue performing the displaying step using the re-determined target time point.

In a third aspect, an embodiment of the present application provides a terminal device, where the terminal device includes: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.

In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.

According to the method and apparatus for displaying an image provided by the embodiments of the present application, the selected time point at which the user adjusts the playing progress of the target video is determined, a target key frame is determined from the key frame set included in the target video, the video frame corresponding to the selected time point is decoded based on the target key frame, and the decoded video frame is displayed in the first target display area.

Drawings

Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:

FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;

FIG. 2 is a flow diagram of one embodiment of a method for displaying an image according to an embodiment of the present application;

FIG. 3 is a schematic illustration of an application scenario of a method for displaying an image according to an embodiment of the present application;

FIG. 4 is a flow diagram of yet another embodiment of a method for displaying an image according to an embodiment of the present application;

FIG. 5 is a flowchart of determining a target time point in a method for displaying an image according to an embodiment of the present application;

FIG. 6 is a schematic diagram illustrating an embodiment of an apparatus for displaying an image according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of a computer system suitable for implementing a terminal device according to an embodiment of the present application.

Detailed Description

The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.

It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.

Fig. 1 illustrates an exemplary system architecture 100 to which the method for displaying an image or the apparatus for displaying an image of the embodiments of the present application may be applied.

As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a video playing application, a web browser application, social platform software, etc., may be installed on the terminal devices 101, 102, 103.

The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting video playback, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. This is not particularly limited herein.

The server 105 may be a server providing various services, such as a background video server providing support for video playing on the terminal devices 101, 102, 103. The background video server can send videos to the terminal equipment and also can receive videos from the terminal equipment.

It should be noted that the method for displaying an image provided in the embodiment of the present application is generally executed by the terminal devices 101, 102, and 103, and accordingly, the apparatus for displaying an image is generally disposed in the terminal devices 101, 102, and 103.

The server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. This is not particularly limited herein.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. The system architecture described above may not include a server and a network in the case where the video to be processed does not need to be obtained remotely.

With continued reference to FIG. 2, a flow 200 of one embodiment of a method for displaying an image in accordance with the present application is shown. The method for displaying an image includes the steps of:

Step 201, determining a selected time point at which the user adjusts the playing progress of the target video.

In this embodiment, an execution body of the method for displaying an image (e.g., a terminal device shown in fig. 1) may determine a selected time point at which a user adjusts the playing progress of a target video. The target video may be a video from which a video frame is to be selected for display in the first target display area. As an example, the target video may be a video acquired remotely over a wireless or wired connection, or a video stored locally in advance (for example, a video recorded by the user using the execution body). It should be noted that the target video in this embodiment is typically a compressed video, for example, a video obtained by compressing an original video using an existing H.26x coding standard.

The target video includes a set of key frames. A key frame (also called an I frame) is a frame that completely retains image data in the compressed video; decoding a key frame requires only the image data of that frame itself.

In practice, the compressed video may also include P frames and B frames. A P frame (also called a difference frame) includes data characterizing the difference between the current frame and the preceding key frame (or P frame); when decoding, the data included in the current frame must be superimposed on a previously buffered image to generate the final image. That is, a P frame does not include complete image data, only data characterizing the differences from the previous frame. A B frame (also known as a bidirectional difference frame) includes data characterizing the differences between the current frame and both the preceding and following frames. That is, when decoding a B frame, both the image before the current frame and the image after it are acquired, and the final image is obtained by superimposing the preceding and following images on the data included in the current frame. A compressed video usually includes multiple key frames, with multiple P frames and B frames between them. For example, for a video frame sequence arranged in the order IBBPBBPBBP, the images corresponding to the B frames and P frames can be decoded based on the I frame.
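The decoding dependency described above can be illustrated with a toy model: to reconstruct a given frame, everything from the nearest preceding I frame up to that frame must be decoded. This simplification ignores that a real B frame may also reference a following frame; the function is illustrative only, not the patented method:

```python
def frames_to_decode(frame_types, target_index):
    # frame_types: e.g. list("IBBPBBPBBP"). Returns the indices that must be
    # decoded, from the nearest preceding I frame up to the target frame.
    start = target_index
    while start > 0 and frame_types[start] != "I":
        start -= 1
    return list(range(start, target_index + 1))
```

For the sequence IBBPBBPBBP, displaying frame 5 requires decoding frames 0 through 5, whereas if frame 5 were itself an I frame, it alone would suffice; this is why sparse key frames make arbitrary seeking expensive.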

The selected time point is a playing time point selected by the user when adjusting the playing progress of the target video. Generally, the user may drag the progress bar of the target video or slide on a displayed video frame to adjust the playing progress. It should be noted that the user may adjust the playing progress with a device such as a mouse, and when the execution body includes a touch screen, the user may also adjust the playing progress by sliding a finger on the screen.

In some optional implementations of this embodiment, the execution body may determine the selected time point in at least one of the following manners:

Manner one: determining the current time point as the selected time point in response to detecting that the dwell time, at the current time point, of a control point for adjusting the playing progress of the target video is greater than or equal to a preset time threshold. The control point may be a point displayed on the screen (for example, a point on the progress bar representing the current playing progress) or a point that is not displayed (for example, the point where a finger contacts the screen when the user slides a finger over the video picture displayed on the screen); the user may drag the control point by touching it, clicking with a mouse, or the like, to adjust the playing progress. The time threshold may be a duration preset by a technician, such as 2 seconds or 5 seconds.

Manner two: determining the time point currently corresponding to the control point as the selected time point in response to detecting that the user no longer manipulates the control point. Specifically, the execution body may detect in real time whether the user is clicking or touching the control point. For example, when it is detected that the user lifts a finger or releases a mouse button, it is determined that the control point is no longer manipulated, and the current playing time point is determined as the selected time point.
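Manner one above can be sketched as follows, with the drag trajectory given as (wall-clock time, playback position) pairs; the sample format and the 2-second threshold are illustrative assumptions, not fixed by the disclosure:

```python
DWELL_THRESHOLD = 2.0  # hypothetical preset time threshold, in seconds

def selected_time_point(samples, threshold=DWELL_THRESHOLD):
    # samples: (wall_clock, playback_position) pairs observed while dragging.
    # Returns the playback position once the control point has stayed at the
    # same position for at least `threshold` seconds, else None.
    if not samples:
        return None
    last_clock, last_pos = samples[-1]
    dwell_start = last_clock
    for clock, pos in reversed(samples):
        if pos != last_pos:
            break
        dwell_start = clock  # earliest moment the point was at last_pos
    if last_clock - dwell_start >= threshold:
        return last_pos
    return None
```

For example, if the control point reaches playback position 5.0 at wall-clock time 1 and is still there at time 4, the dwell time of 3 seconds exceeds the threshold and 5.0 becomes the selected time point.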

Step 202, determining a target key frame from the key frame set.

In this embodiment, the execution body of the method for displaying an image may determine a target key frame from the key frame set based on the selected time point determined in step 201. The difference between the time point corresponding to the target key frame and the selected time point meets a first preset condition. The first preset condition may include at least one of the following: the time point corresponding to the key frame precedes the selected time point and its difference from the selected time point is minimal; the time point corresponding to the key frame precedes the selected time point and its difference from the selected time point is less than or equal to a preset time difference threshold.
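Both variants of the first preset condition can be sketched with a binary search over the sorted key-frame time points. The `bisect` usage is standard Python; the `max_gap` parameter modeling the preset time difference threshold is an assumption made for illustration:

```python
import bisect

def find_target_keyframe(keyframe_times, selected_time, max_gap=None):
    # keyframe_times must be sorted ascending. Returns the latest key-frame
    # time at or before selected_time; if max_gap is given, additionally
    # require the gap to be at most max_gap. Returns None if no match.
    i = bisect.bisect_right(keyframe_times, selected_time) - 1
    if i < 0:
        return None
    t = keyframe_times[i]
    if max_gap is not None and selected_time - t > max_gap:
        return None
    return t
```

With key frames at 0.0, 2.0, 4.0, and 6.0 seconds, a selected time point of 5.3 yields the target key frame at 4.0; under a 1.0-second difference threshold the same query yields no match, since the gap is 1.3 seconds.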

Step 203, decoding the video frame corresponding to the selected time point based on the target key frame to obtain a decoded video frame.

In this embodiment, based on the target key frame determined in step 202, the execution body of the method for displaying an image may decode the video frame corresponding to the selected time point to obtain a decoded video frame. Specifically, as an example, assume that the target video includes key frames, P frames, and B frames: if the video frame corresponding to the selected time point is a key frame, it may be decoded in the manner of decoding a key frame; if it is a P frame or a B frame, it may be decoded in the manner of decoding a P frame or B frame based on the target key frame. It should be noted that decoding of I frames, P frames, and B frames is a well-known technology that is widely studied and applied at present, and is not described here again.
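This step can be sketched as sequentially decoding from the target key frame forward to the frame at the selected time point. The `decode` callback, taking the coded frame and the previously reconstructed image, is a stand-in for a real decoder and an assumption of this sketch:

```python
def decode_frame_at(coded_frames, keyframe_index, target_index, decode):
    # Decode forward from the target key frame to the frame at the selected
    # time point; each step may reference the previously reconstructed image.
    image = None
    for i in range(keyframe_index, target_index + 1):
        image = decode(coded_frames[i], image)
    return image
```

A toy decoder that simply concatenates frame labels shows the dependency chain: reconstructing the third frame of ["I0", "P1", "B2", "P3"] from the key frame at index 0 touches I0, P1, and B2 in order.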

Step 204, displaying the decoded video frame in the first target display area.

In this embodiment, the execution body may display the decoded video frame obtained in step 203 in the first target display area. The first target display area may be a display area for displaying the decoded video frame; for example, it may be the area of the screen in which the video is played, or another area on the screen (for example, a window in which the user processes the decoded video frame).

Generally, since the decoding time of a key frame is less than that of P frames and B frames, in order to enable the user to quickly and accurately preview the video frame corresponding to the selected playing time point, the existing method usually pre-processes the video in advance to add more key frames or to set every video frame as a key frame, which takes considerable time. With the steps above, the video does not need to be pre-processed in advance; it suffices to determine the time point selected by the user and display the video frame corresponding to that time point, which improves the efficiency of video processing.

With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for displaying an image according to the present embodiment. In the application scenario of fig. 3, a user has previously recorded a video 302 via a terminal device 301, where the video 302 includes a predetermined set of key frames (the marked frames in the figure). The user wants to extract a video frame from the video 302 for processing (e.g., adding a special effect). The user adjusts the playing progress of the video 302 by dragging its progress bar. When the user's finger leaves the screen, the terminal device 301 determines the playing time point last dragged to as the selected time point (for example, the playing time point corresponding to the dragged point 305 shown in the figure); the video frame corresponding to the selected time point is frame 3022 in the figure. Then, the terminal device 301 determines, from the key frame set, a target key frame 3021 whose corresponding time point differs from the selected time point in a way that meets a first preset condition (e.g., the corresponding time point precedes the selected time point and the difference is minimal). Next, the terminal device 301 decodes the video frame 3022 based on the target key frame 3021 to obtain a decoded video frame 303, and displays the decoded video frame 303 in a first target display area 304 (for example, the window in which the video is played).

According to the method provided by this embodiment of the disclosure, the selected time point at which the user adjusts the playing progress of the target video is determined; a target key frame is determined from the key frame set included in the target video; the video frame corresponding to the selected time point is decoded based on the target key frame; and finally the decoded video frame is displayed in the first target display area. Thus, when the user adjusts the playing progress of the video, the video frame corresponding to the selected time point can be previewed using the existing key frames, without adding more key frames to the video in advance. This saves the time otherwise spent adding key frames, improves the flexibility of displaying video frames, and improves the efficiency of locating and processing video frames.

With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for displaying an image is shown. The flow 400 of the method for displaying an image comprises the steps of:

step 401, detecting the adjusted time point in real time in the process of adjusting the playing progress of the target video by the user.

In this embodiment, an execution body of the method for displaying an image (e.g., the terminal device shown in fig. 1) may detect, in real time, the time point adjusted to while the user adjusts the playing progress of the target video. The target video may be a video acquired from a remote device over a wireless or wired connection, or a video stored locally in advance (for example, a video recorded by the user with the execution body). The way the user adjusts the playing progress of the target video may be the same as described in the embodiment of fig. 2, and is not described again here.

The target video includes a set of key frames. A key frame (also called an I frame) is a frame whose image data is retained in full in the compressed video; decoding a key frame requires only the image data of that frame itself.

The adjusted time point is the playing time point detected in real time while the user adjusts the playing progress of the target video. Generally, the user may drag the progress bar of the target video, or slide on the displayed video picture, to adjust the playing progress. It should be noted that the user may adjust the playing progress with an input device such as a mouse; when the execution body includes a touch screen, the user may also adjust it by sliding a finger on the screen.

Step 402, determining a target time point based on the real-time detected time point, and displaying a video frame corresponding to the determined target time point in a second target display area.

In this embodiment, the execution body may determine the target time point according to various methods based on the time point detected in real time, and display a video frame corresponding to the determined target time point in the second target display area. Wherein, the second target display area may be a display area for previewing the video frame corresponding to the target time point. The second target display region may be a display region (for example, a preview window) different from the first target display region, or may be the same display region as the first target display region (for example, a display region in which the target video is played).

In some optional implementations of this embodiment, the executing entity may determine the target time point according to the following steps:

From the time points respectively corresponding to the key frames in the key frame set, a time point whose distance to the detected time point meets a second preset condition is determined as the target time point. Specifically, the second preset condition may include at least one of the following: the distance between the time point corresponding to the key frame and the detected time point is minimal; the distance between the time point corresponding to the key frame and the detected time point is less than or equal to a preset distance threshold. Here the distance is the absolute value of the difference between the two time points, so the target time point may fall before or after the detected time point. Because key frames are fast to decode, they can be displayed in real time while the user adjusts the playing progress of the target video, helping the user judge the currently adjusted playing progress.
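This nearest-key-frame selection can be sketched as follows (an illustrative sketch; the function name and the optional threshold parameter are assumptions):

```python
def nearest_keyframe_time(keyframe_times, detected_time, max_distance=None):
    """Second preset condition: choose the key-frame time point whose
    absolute distance to the detected time point is minimal, optionally
    also requiring that distance to be within max_distance."""
    best = min(keyframe_times, key=lambda t: abs(t - detected_time))
    if max_distance is not None and abs(best - detected_time) > max_distance:
        return None  # no key frame close enough to preview
    return best
```

Note that, unlike the target key frame used for final decoding, the chosen key frame may lie either before or after the detected time point, since only the absolute distance matters here.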

In some optional implementations of this embodiment, the executing entity may determine the target time point according to the following steps:

A time point is selected as the target time point from the target time period in which the detected time point lies. The target time period belongs to a set of time periods obtained by dividing the playing time of the target video based on the key frame set. As an example, assume the target video includes N key frames (N is a positive integer, and the first frame is a key frame); the entire playing time of the target video may then be divided at the time point corresponding to each key frame, yielding N time periods (i.e., the time period set). The execution body may determine the period containing the detected time point as the target period and select a time point from it in various ways, for example, the time point at the middle of the target period, or a randomly selected time point.
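The period-based variant can be sketched as follows (assuming, as in the example above, that the first frame is a key frame, and choosing the midpoint of the target period; names are illustrative):

```python
from bisect import bisect_right

def period_midpoint(keyframe_times, duration, detected_time):
    """Divide the playing time [0, duration) into periods at each
    key-frame time point, locate the period containing detected_time,
    and return its midpoint as the target time point."""
    bounds = list(keyframe_times) + [duration]
    i = max(bisect_right(keyframe_times, detected_time) - 1, 0)
    return (bounds[i] + bounds[i + 1]) / 2
```

With N key frames this yields exactly N periods, each starting at a key frame, so every detected time point maps to a stable representative frame for preview.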

In some optional implementations of this embodiment, the executing entity may determine the target time point according to the following steps:

First, processing capability information of a target processor is acquired. Specifically, the execution body may acquire the processing capability information remotely or locally. The target processor may be a processor provided on the execution body and configured to process the video frames included in the target video. The processing capability information characterizes the target processor's ability to process information (e.g., processing speed, cache size), and may include, but is not limited to, at least one of: the model of the target processor, its clock frequency, its number of cores, and so on.

Then, at a preset time interval corresponding to the processing capability information, the time point detected in real time is periodically determined as the target time point. As an example, the correspondence between processing capability information and time intervals may be characterized by a correspondence table containing multiple entries of each. The execution body may look up the time interval corresponding to the determined processing capability information in the table, and periodically determine the detected time point as the target time point at that interval. It should be understood that the weaker the processing capability represented by the processing capability information (for example, the lower the clock frequency), the larger the corresponding time interval, so that when the target processor's processing capability is low, the number of times it must process video frames is reduced and its load is lightened.
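This capability-based throttling can be sketched as follows (the table values, class name, and capability labels are hypothetical; only the pattern of looking up an interval and sampling at that rate comes from the description above):

```python
import time

# Hypothetical correspondence table: weaker processors get a longer
# interval, reducing how often a preview frame must be decoded.
CAPABILITY_TO_INTERVAL = {"high": 0.05, "medium": 0.15, "low": 0.40}

class ScrubSampler:
    """Periodically promotes the most recently detected time point to
    the target time point, at an interval chosen from the processor's
    processing capability information."""
    def __init__(self, capability, clock=time.monotonic):
        self.interval = CAPABILITY_TO_INTERVAL[capability]
        self.clock = clock
        self.last_emit = None

    def on_detected(self, detected_time):
        now = self.clock()
        if self.last_emit is None or now - self.last_emit >= self.interval:
            self.last_emit = now
            return detected_time  # becomes the target time point
        return None               # skipped: too soon since last target
```

Injecting the clock makes the sampler easy to test and keeps the throttling policy independent of how time points are detected.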

In some alternative implementations of this embodiment, as shown in fig. 5, step 402 may be performed as follows:

step 4021, determining the currently detected time point as a target time point.

Specifically, in general, when the execution body detects that the user begins to adjust the playing progress of the target video through the control point described in the embodiment of fig. 2, the first-detected time point is determined as the target time point.

Step 4022, the following display steps are performed: decoding the video frame corresponding to the determined target time point to obtain a decoded video frame for displaying in a second target display area; it is determined whether the second target display area includes the resulting decoded video frame.

Specifically, the execution body may decode the video frame corresponding to the determined target time point to obtain a decoded video frame. If that video frame is a key frame, it may be decoded as a key frame; if it is a P frame or a B frame, it may be decoded as a P frame or B frame. In the latter case, the execution body generally first takes, as the key frame for decoding, the key frame whose corresponding time point precedes the determined target time point and is closest to it, and then decodes the P frame or B frame based on that key frame to obtain the decoded video frame.

Then, the execution body may determine whether the decoded video frame is displayed in the second target display area, and if so, determine that the second target display area includes the video frame corresponding to the determined target time point.

Step 4023, in response to determining that the second target display area includes the resulting decoded video frame, re-determining the most recently detected time point as the target time point, and continuing the above displaying step (i.e., step 4022) using the re-determined target time point.

Specifically, while the playing progress of the target video is being adjusted, the detected time point changes in real time. In response to determining that the decoded video frame obtained in step 4022 has been displayed in the second target display area, the execution body may again determine the most recently detected time point as the target time point, and then execute step 4022 again with it.

In general, in response to determining that the second target display area does not include the video frame corresponding to the determined target time point (i.e., that frame has not yet been decoded and displayed), the execution body may wait for its decoding to complete.

Because only one video frame is processed at a time during steps 4022 and 4023, and not every video frame needs to be decoded, this implementation avoids the stalling (e.g., the control point pausing mid-drag) that would be caused by processing a large number of video frames, and can adapt to processors with different processing capabilities.
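The one-frame-at-a-time loop of steps 4021–4023 can be sketched as follows (a single-threaded illustration; `decode_and_show` is a hypothetical callback that decodes and displays the frame for a given time point):

```python
class PreviewLoop:
    """Sketch of steps 4021-4023: decode at most one preview frame at a
    time; when the decoded frame reaches the display area, re-sample the
    most recently detected time point and decode again, dropping any
    intermediate time points detected while decoding was in progress."""
    def __init__(self, decode_and_show):
        self.decode_and_show = decode_and_show
        self.busy = False
        self.latest = None

    def on_detected(self, time_point):
        self.latest = time_point
        if not self.busy:
            self._drain()

    def _drain(self):
        self.busy = True
        while self.latest is not None:
            target, self.latest = self.latest, None
            self.decode_and_show(target)  # step 4022: decode and display
        self.busy = False                 # step 4023: nothing newer pending
```

Time points that arrive while a frame is still being decoded simply overwrite `latest`, so only the newest one is decoded next; this is what keeps slower processors from falling behind the drag gesture.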

Step 403, determining a selected time point at which the user adjusts the playing progress of the target video.

In this embodiment, step 403 is substantially the same as step 201 in the corresponding embodiment of fig. 2, and is not described herein again.

Step 404, determining a target key frame from the key frame set.

In this embodiment, step 404 is substantially the same as step 202 in the corresponding embodiment of fig. 2, and is not described herein again.

And 405, decoding the video frame corresponding to the selected time point based on the target key frame to obtain a decoded video frame.

In this embodiment, step 405 is substantially the same as step 203 in the corresponding embodiment of fig. 2, and is not described herein again.

Step 406, displaying the decoded video frame in the first target display area.

In this embodiment, step 406 is substantially the same as step 204 in the corresponding embodiment of fig. 2, and is not described herein again.

As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for displaying an image in this embodiment highlights the step of previewing video frames while the user adjusts the playing progress of the target video. The scheme described in this embodiment can therefore preview video frames in real time during the adjustment, without adding key frames to the target video in advance, further improving the flexibility of displaying video frames and helping the user accurately locate the video frame corresponding to the selected time point.

With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for displaying an image, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.

As shown in fig. 6, the apparatus 600 for displaying an image of the present embodiment includes: a first determining unit 601 configured to determine a selected time point at which a user adjusts a playing progress of a target video, wherein the target video includes a set of key frames; a second determining unit 602, configured to determine a target key frame from the key frame set, where a difference between a time point corresponding to the target key frame and a selected time point meets a first preset condition; a decoding unit 603 configured to decode the video frame corresponding to the selected time point based on the target key frame, to obtain a decoded video frame; a display unit 604 configured to display the decoded video frame in the first target display area.

In this embodiment, the first determining unit 601 may determine the selected time point at which the user adjusts the playing progress of the target video. The target video is the video from which a video frame is to be selected for display in the first target display area. As an example, it may be a video obtained remotely over a wireless or wired connection, or a video stored locally in advance (for example, a video recorded by the user with the apparatus 600). It should be noted that the target video in this embodiment is typically a compressed video, for example, a video obtained by compressing an original video with an existing H.26x encoding standard.

The target video includes a set of key frames. A key frame (also called an I frame) is a frame whose image data is retained in full in the compressed video; decoding a key frame requires only the image data of that frame itself.

The selected time point is the playing time point the user selects when adjusting the playing progress of the target video. As an example, the user may drag the progress bar of the target video or slide on the displayed video picture to adjust the playing progress. It should be noted that the user may adjust the playing progress with an input device such as a mouse; when the apparatus 600 includes a touch screen, the user may also adjust it by sliding a finger on the screen.

In this embodiment, the second determining unit 602 may determine, from the key frame set, a target key frame whose corresponding time point differs from the selected time point in a way that meets a first preset condition. The first preset condition may include at least one of the following: the time point corresponding to the key frame precedes the selected time point and the difference from the selected time point is minimal; the time point corresponding to the key frame precedes the selected time point and the difference from the selected time point is less than or equal to a preset time difference threshold.

In this embodiment, the decoding unit 603 may decode the video frame corresponding to the selected time point, based on the target key frame, to obtain a decoded video frame. As an example, assume the target video includes key frames, P frames, and B frames. If the video frame corresponding to the selected time point is itself a key frame, it may be decoded directly as a key frame; if it is a P frame or a B frame, it may be decoded as a P frame or B frame, starting from the target key frame. Decoding methods for I frames, P frames, and B frames are well-known techniques and are not described again here.

In this embodiment, the display unit 604 may display the decoded video frame obtained by the decoding unit 603 in the first target display area. The first target display area may be a display area for displaying the decoded video frame, for example, the first target display area may be an area where a video is played on a screen, or another area located on the screen (for example, a window for a user to process the decoded video frame).

Generally, because key frames take less time to decode than P frames and B frames, existing methods typically pre-process the video in advance, adding more key frames or setting every frame as a key frame, so that the user can quickly and accurately preview the video frame corresponding to a selected playing time point; this pre-processing takes considerable time. With the units above, no such pre-processing is needed: it suffices to determine the time point selected by the user and display the corresponding video frame, which improves the efficiency of video processing.

In some optional implementations of this embodiment, the first determining unit 601 includes at least one of: a first determining module (not shown in the figures) configured to determine the current time point as a selected time point in response to detecting that the staying time of the control point for adjusting the playing progress of the target video at the current time point is greater than or equal to a preset time threshold; and a second determining module (not shown in the figure) configured to determine the time point corresponding to the control point currently as the selected time point in response to detecting that the user no longer manipulates the control point.

In some optional implementations of this embodiment, the apparatus 600 may further include: a detecting unit (not shown in the figure) configured to detect a time point adjusted to in real time during the process that the user adjusts the playing progress of the target video; a second determining unit (not shown in the figure) configured to determine a target time point based on the time point detected in real time, and to display a video frame corresponding to the determined target time point in a second target display area.

In some optional implementations of this embodiment, the second determining unit 602 may be further configured to: and determining time points, the distances between which and the detected time points meet a second preset condition, from the time points respectively corresponding to the key frames included in the key frame set as target time points.

In some optional implementations of this embodiment, the second determining unit 602 may be further configured to: and selecting the time point as a target time point from a target time period in which the detected time point is positioned, wherein the target time period is a time period in a time period set obtained by dividing the playing time of the target video based on the key frame set.

In some optional implementations of this embodiment, the second determining unit 602 may include: an obtaining module (not shown in the figure) configured to obtain processing capability information of a target processor, wherein the target processor is used for processing video frames included in a target video, and the processing capability information is used for representing the capability of the target processor for processing information; and a third determining module (not shown in the figure) configured to periodically determine the time point detected in real time as the target time point according to a preset time interval corresponding to the processing capability information.

In some optional implementations of this embodiment, the second determining unit 602 may include: a fourth determination module (not shown in the drawings) configured to determine the currently detected time point as the target time point; a display module (not shown in the figures) configured to perform the following display steps: decoding the video frame corresponding to the determined target time point to obtain a decoded video frame for displaying in a second target display area; determining whether the second target display area includes the resulting decoded video frame; a fifth determining module (not shown in the figures) configured to re-determine the most recently detected time point as the target time point in response to determining that the second target display area comprises the resulting decoded video frame, and to continue the displaying step with the re-determined target time point.

According to the apparatus provided by this embodiment of the disclosure, the selected time point at which the user adjusts the playing progress of the target video is determined; a target key frame is determined from the key frame set included in the target video; the video frame corresponding to the selected time point is decoded based on the target key frame; and finally the decoded video frame is displayed in the first target display area. Thus, when the user adjusts the playing progress of the video, the video frame corresponding to the selected time point can be previewed using the existing key frames, without adding more key frames to the video in advance. This saves the time otherwise spent adding key frames, improves the flexibility of displaying video frames, and improves the efficiency of locating and processing video frames.

Referring now to fig. 7, shown is a block diagram of a terminal device 700 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.

As shown in fig. 7, the terminal device 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the terminal device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.

Generally, the following may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 707 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication device 709 may allow the terminal device 700 to communicate wirelessly or by wire with other devices to exchange data.

In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present disclosure.

It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.

The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device. The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: determining a selected time point for adjusting the playing progress of a target video by a user, wherein the target video comprises a key frame set; determining a target key frame from the key frame set, wherein the difference between a time point corresponding to the target key frame and a selected time point meets a first preset condition; decoding the video frame corresponding to the selected time point based on the target key frame to obtain a decoded video frame; the decoded video frame is displayed in a first target display area.

Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation on the unit itself, for example, the first determination unit may also be described as a "unit that determines a selected point in time at which the user adjusts the playing progress of the target video".

The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
