Video editing method and related device

Document No.: 142690 | Publication date: 2021-10-22

Abstract: This technique, "Video editing method and related device," was designed and created by 廖宇辰, 尤嘉华, and 王志超 on 2021-08-03. In the video editing method and related apparatus provided by this application, the electronic device determines at least one video segment to be edited from a source video; acquires a first frame image of each video segment in the first segment set; then determines, according to a display strategy, a timestamp at which each first frame image is to be displayed; and finally displays the first frame images according to those timestamps. Because this method allows the edited playing effect to be previewed without re-encoding the video segments to be edited, it can improve the efficiency of video editing.

1. A video editing method applied to an electronic device, the method comprising:

determining a first segment set to be edited from a source video, wherein the first segment set comprises at least one video segment to be edited;

acquiring a first frame image of each video segment in the first segment set;

determining, according to a display strategy, a timestamp at which the first frame image is to be displayed;

and displaying the first frame image according to the determined display timestamp.

2. The video editing method of claim 1, wherein the method further comprises:

and encoding the first frame image into a target video according to the display timestamp of the first frame image.

3. The video editing method of claim 1, wherein said acquiring a first frame image of each video segment in the first segment set comprises:

determining whether a plurality of target segments to be merged exist in the first segment set, the target segments being video segments whose offset positions lie in the same group of pictures or whose intervals overlap;

when the plurality of target segments exist in the first segment set, merging the plurality of target segments into a video segment covering continuous pictures of the source video, to obtain a second segment set;

acquiring a second frame image of each video segment in the second segment set;

and acquiring the first frame image from the second frame images according to the correspondence between each video segment in the first segment set and each video segment in the second segment set.

4. The video editing method of claim 3, wherein said acquiring a second frame image of each video segment in the second segment set comprises:

acquiring the offset positions corresponding to the key frames of the groups of pictures in the source video;

adjusting the offset position of each video segment in the second segment set to the offset position corresponding to the key frame of the corresponding group of pictures;

and decoding each video segment in the second segment set according to the adjusted offset position, to obtain a second frame image of each video segment in the second segment set.

5. The video editing method of claim 1, wherein the determining a first set of segments to be edited from a source video comprises:

acquiring the offset position of the at least one video segment to be edited in the source video;

and determining the first segment set from the source video according to the offset position.

6. The video editing method of claim 1, wherein the first segment set comprises associated video segments, the associated video segments comprising a first segment and a second segment having an overlapping interval, and wherein displaying the first frame image according to the display timestamp comprises:

when the first segment is played earlier than the second segment, displaying the first frame image corresponding to the first segment according to its display timestamp, then caching the first frame image corresponding to the second segment; and, when the second segment needs to be displayed, reading the cached first frame image for display;

when the second segment is played earlier than the first segment, displaying the first frame image corresponding to the second segment according to its display timestamp, and caching the first frame image corresponding to the first segment; and, when the first segment needs to be displayed, reading the cached first frame image for display.

7. A video editing apparatus applied to an electronic device, the video editing apparatus comprising:

a video acquisition module, configured to determine a first segment set to be edited from a source video, where the first segment set comprises at least one video segment to be edited;

a video processing module, configured to acquire a first frame image of each video segment in the first segment set;

the video processing module being further configured to determine, according to a display strategy, a timestamp at which the first frame image is to be displayed;

and a video output module, configured to display the first frame image according to the determined display timestamp.

8. An electronic device, comprising a processor and a memory, the memory storing a computer program that, when executed by the processor, implements the video editing method of any of claims 1-6.

9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the video editing method of any one of claims 1-6.

10. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the video editing method of any of claims 1-6.

Technical Field

The present application relates to the field of video editing, and in particular, to a video editing method and a related apparatus.

Background

In recent years, users are no longer satisfied with playing a video from beginning to end at a constant or variable speed, or in reverse from end to beginning; instead, they play different segments of a source video at varied speeds or in reverse order, so as to obtain playing effects with greater artistic and visual appeal.

However, video coding technology uses inter-frame information to compress image data efficiently, which results in poor random access within the video and therefore makes it difficult to present motion effects in real time. At present, in order to preview the motion effect of a video, the original video is usually encoded into a video carrying the motion effect, and the re-encoded video is then played through a video player to preview the effect.

The inventors have found through research that this approach requires editing the source video into a video with the motion effect; the whole process needs to decode the video and then re-encode it, which requires a large amount of computation.

Disclosure of Invention

In order to overcome at least one of the above deficiencies in the prior art, an embodiment of the present application provides a video editing method and a related apparatus, which specifically include:

in a first aspect, this embodiment provides a video editing method applied to an electronic device, where the method includes:

determining a first segment set to be edited from a source video, wherein the first segment set comprises at least one video segment to be edited;

acquiring a first frame image of each video segment in the first segment set;

determining, according to a display strategy, a timestamp at which the first frame image is to be displayed;

and displaying the first frame image according to the determined display timestamp.

In a second aspect, this embodiment provides a video editing apparatus applied to an electronic device, the video editing apparatus including:

a video acquisition module, configured to determine a first segment set to be edited from a source video, where the first segment set comprises at least one video segment to be edited;

a video processing module, configured to acquire a first frame image of each video segment in the first segment set;

the video processing module being further configured to determine, according to a display strategy, a timestamp at which the first frame image is to be displayed;

and a video output module, configured to display the first frame image according to the determined display timestamp.

In a third aspect, this embodiment provides an electronic device, which includes a processor and a memory, where the memory stores a computer program, and the computer program, when executed by the processor, implements the video editing method.

In a fourth aspect, the present embodiment provides a computer-readable storage medium storing a computer program, which when executed by a processor, implements the video editing method.

In a fifth aspect, the present embodiments provide a computer program product comprising computer programs/instructions which, when executed by a processor, implement the video editing method.

Compared with the prior art, the method has the following beneficial effects:

in the video editing method and the related apparatus provided by this embodiment, the electronic device determines at least one video segment to be edited from the source video; acquires a first frame image of each video segment in the first segment set; then determines, according to a display strategy, a timestamp at which the first frame image is to be displayed; and finally displays the first frame image according to that timestamp. Because this method can preview the edited playing effect without re-encoding the video segments to be edited, it can improve the efficiency of video editing.

Drawings

In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.

Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;

fig. 2 is a schematic flowchart of a video editing method according to an embodiment of the present application;

FIG. 3 is a schematic view of video segment merging provided in the present application;

FIG. 4 is a schematic diagram illustrating adjustment of offset positions of video clips according to an embodiment of the present application;

fig. 5 is a schematic structural diagram of a video editing apparatus according to an embodiment of the present application;

fig. 6 is a schematic diagram of data flow of each module in the video editing apparatus according to the embodiment of the present application.

Icon: 120-a memory; 130-a processor; 140-a communication device; 201-video acquisition module; 202-a video processing module; 203-video output module.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.

Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

In the description of the present application, it is noted that the terms "first", "second", "third", and the like are used merely for distinguishing between descriptions and are not intended to indicate or imply relative importance. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises that element.

It should be understood that the operations of the flow diagrams may be performed out of order, and steps that have no logical dependency on one another may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.

In the related art, to preview the playing effect of an edited video, the edited video first needs to be encoded, and the video player then plays the re-encoded video for previewing. This process requires a large amount of computation.

In view of the above, in order to at least partially solve the technical problems in the prior art, this embodiment provides a video editing method applied to an electronic device. In this method, the electronic device determines, according to a display strategy, the display timestamp of each frame image corresponding to the video segments to be edited, and displays each frame image according to the re-determined timestamp, so that the edited playing effect can be previewed without re-encoding the frame images into a video.

The electronic device may be a server or a user terminal. When the electronic device is a server, the specific type thereof may be, but is not limited to, a Web server, an FTP (File Transfer Protocol) server, a data processing server, and the like. In addition, the server may be a single server or a server group. The set of servers can be centralized or distributed (e.g., the servers can be a distributed system). In some embodiments, the server 100 may be local or remote to the user terminal. In some embodiments, the server 100 may be implemented on a cloud platform; by way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud (community cloud), a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof. In some embodiments, the server 100 may be implemented on an electronic device having one or more components.

When the electronic device is a user terminal, the user terminal may be, but is not limited to, a mobile terminal, a tablet computer, a laptop computer, or a built-in device in a motor vehicle, etc., or any combination thereof. In some embodiments, the mobile terminal may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include smart lighting devices, control devices for smart electrical devices, smart monitoring devices, smart televisions, smart cameras, or walkie-talkies, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, or a Point of Sale (POS) device, or the like, or any combination thereof.

The embodiment also provides a structural schematic diagram of the electronic device. As shown in fig. 1, the electronic device includes a memory 120, a processor 130, and a communication device 140.

The memory 120, processor 130, and communication device 140 are electrically connected to each other directly or indirectly to enable data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.

The memory 120 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 120 is used for storing a program, and the processor 130 executes the program after receiving an execution instruction.

The communication device 140 is used for transmitting and receiving data through a network. Specific types of the network may include a wired network, a wireless network, a fiber-optic network, a telecommunication network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network may include one or more network access points. For example, the network may include wired or wireless network access points, such as base stations and/or network switching nodes, through which one or more components of the service request processing system may connect to the network to exchange data and/or information.

The processor 130 may be an integrated circuit chip having signal processing capabilities, and may include one or more processing cores (e.g., a single-core processor or a multi-core processor). Merely by way of example, the processor may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.

The following describes the video editing method in detail with reference to the flowchart of fig. 2. As shown in fig. 2, the method includes:

step S101, a first segment set to be edited is determined from a source video, wherein the first segment set comprises at least one video segment to be edited.

In this embodiment, the number of source videos may be one or more. When there is one source video, at least one video segment is cut from the source video, and the video segments are spliced into the target video according to the display strategy. When there are multiple source videos, the video segments cut from the multiple source videos are spliced into the target video according to the display strategy.

The display strategy referred to in this embodiment may be, but is not limited to, one or a combination of playing manners such as the playing order among the video segments, the playing speed of the video segments, and reverse playing.

As one implementation, the electronic device may obtain an offset position of at least one video segment to be edited in the source video; a first set of segments is determined from the source video based on the offset position.

For example, the electronic device may provide a corresponding editing interface, and in response to a segment selection operation of a user in the editing interface, obtain an offset position of at least one video segment to be edited in the source video; then, a first set of segments is determined from the source video based on the offset position.
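The patent gives no code for this step; as a rough illustration, the first segment set could be represented as plain offset intervals in the source video. In the sketch below, `Segment` and `build_first_segment_set` are hypothetical names, and the half-open `[start_ms, end_ms)` millisecond convention is an assumption that matches the interval notation used later in this document:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    """A video segment to be edited, identified by its half-open offset
    interval [start_ms, end_ms) in the source video."""
    start_ms: int
    end_ms: int
    reverse: bool = False  # part of the display strategy: play backwards

def build_first_segment_set(offsets):
    """Build the first segment set from user-selected (start_ms, end_ms) pairs,
    e.g. offsets gathered from a segment-selection operation in an editing UI."""
    segments = []
    for start_ms, end_ms in offsets:
        if not 0 <= start_ms < end_ms:
            raise ValueError(f"invalid offset interval [{start_ms}, {end_ms})")
        segments.append(Segment(start_ms, end_ms))
    return segments
```

A display strategy (order, speed, reverse) would then be attached on top of these intervals; only the offsets themselves come from the source video.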

Step S102, a first frame image of each video clip in a first clip set is obtained.

For each video segment, the electronic device decodes the video segment to obtain its first frame images. It should be noted that the source video is data obtained by efficiently compressing image data with a video coding technique and therefore cannot be displayed directly; the source video must first be decoded to restore the image data of each video segment (i.e., the first frame images in this embodiment) before it can be shown by the display device.

Step S103, determining, according to the display strategy, a timestamp at which the first frame image is to be displayed.

Since each video segment is cut from a different position of the source video, the normal playing sequence of the video segments is disturbed; therefore, the timestamp of each frame image needs to be re-determined according to the display strategy, and the timestamps of the frame images are mapped onto a continuous time axis, so that the player can play them according to those timestamps.
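As an illustration of this remapping step, the sketch below places each segment's frame timestamps onto one continuous preview timeline, honoring the per-segment order, speed, and reverse-play elements of a display strategy. The function name `remap_timestamps`, the 40 ms frame interval (25 fps), and the dictionary fields are all assumptions for illustration, not the patent's API:

```python
def remap_timestamps(segments, frame_interval_ms=40):
    """Map each segment's frames onto one continuous preview timeline.

    segments: list of dicts like {"start_ms": int, "end_ms": int,
              "speed": float, "reverse": bool}, already in display order.
    Returns a list of (source_pts_ms, display_pts_ms) pairs: the frame's
    original timestamp in the source video, and its new timestamp on the
    continuous time axis used for preview.
    """
    timeline = []
    cursor = 0.0  # position on the continuous preview time axis
    for seg in segments:
        # Frame timestamps of this segment in the source video.
        pts = list(range(seg["start_ms"], seg["end_ms"], frame_interval_ms))
        if seg.get("reverse"):
            pts.reverse()  # reverse playing: emit source frames backwards
        # Faster playback shortens the display interval between frames.
        step = frame_interval_ms / seg.get("speed", 1.0)
        for src in pts:
            timeline.append((src, round(cursor)))
            cursor += step
    return timeline
```

Because only timestamps are rewritten, the decoded frames themselves need no re-encoding before preview, which is the efficiency gain this method claims.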

It should be noted that if the first frame images were encoded into a video and then played, a large amount of computation would be required, and the amount of computation is positively correlated with the resolution of the video. For a mobile platform with limited computing resources, previewing the edited effect by encoding would take a long time, so the user would have to wait a long time after each edit or modification.

Step S104, displaying the first frame image according to the determined display timestamp.

With this design, the electronic device determines at least one video segment to be edited from the source video; acquires a first frame image of each video segment in the first segment set; then determines, according to the display strategy, the display timestamp of each first frame image; and finally displays the first frame images according to those timestamps. Because the edited playing effect can be previewed without re-encoding the video segments to be edited, the efficiency of video editing is improved.

Further, the electronic device displays the first frame image according to its display timestamp; if the display effect meets the user's requirements, the user can input a video encoding instruction through a preset operation provided by the electronic device, and the electronic device, upon receiving and responding to the instruction, encodes the first frame images into the target video according to the re-determined timestamps.

To reduce repeated decoding when acquiring the first frame images, the electronic device first determines whether a plurality of target segments to be merged exist in the first segment set, the target segments being video segments whose offset positions lie in the same group of pictures or whose intervals overlap; when such target segments exist in the first segment set, the target segments are merged into a video segment covering continuous pictures of the source video, to obtain a second segment set.

Further, the electronic device acquires a second frame image of each video segment in the second segment set, and acquires the first frame images from the second frame images according to the correspondence between each video segment in the first segment set and each video segment in the second segment set.

As shown in fig. 3, assume that the playing duration of the source video is 0-10 s (10000 ms), and that the video segments to be edited in the first segment set are segment A [500 ms, 1500 ms), segment B [1500 ms, 3000 ms), segment C [1800 ms, 2500 ms), segment D [5500 ms, 6500 ms), and segment E [7500 ms, 9000 ms); here, segment A [500 ms, 1500 ms) indicates that the offset position of video segment A in the source video is 500 ms to 1500 ms, and likewise for the other segments.

As shown in fig. 3, this embodiment provides a display strategy for the above five video segments, in which the symbol "←" denotes reverse playing. That is, the display order of the five video segments is segment A, segment C, segment B, segment E, and segment D, with segment C and segment E displayed in reverse.

Assume that the source video contains one Group of Pictures (GOP) per 1 s interval. A group of pictures consists of the frame images between two key frames (also known as I frames) in the source video, and the number of frame images in a group of pictures is positively correlated with the picture quality of the source video. Note that the electronic device must start from a key frame when decoding a video segment.

In view of this, the electronic device determines, from the first segment set, a plurality of target segments to be merged: video segments whose offset positions lie in the same group of pictures or whose intervals overlap.

Based on these screening criteria for target segments, since the end point of segment A's offset position and the start point of segment C's offset position in fig. 3 both lie in the second group of pictures, segment A and segment C are merged into a segment of 500 ms to 2500 ms; this merged segment in turn overlaps with segment B. Therefore, as shown in fig. 3, the electronic device merges segment A, segment B, and segment C in the first segment set into a video segment of 500 ms to 3000 ms, obtaining the second segment set, which can be seen in fig. 3.
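The merging rule just described (merge segments whose intervals overlap, or whose adjacent end/start offsets fall in the same group of pictures) can be sketched as follows. `merge_segments` is a hypothetical helper, and the fixed 1000 ms GOP length mirrors the example of fig. 3 rather than being a general property of video:

```python
def merge_segments(segments, gop_ms=1000):
    """Merge target segments in the first segment set into a second set.

    segments: list of (start_ms, end_ms) offset pairs in the source video.
    Two segments are merged when their intervals overlap, or when the end
    of one and the start of the next lie inside the same group of pictures
    (assumed here to be fixed 1000 ms windows, as in fig. 3).
    Returns the merged segment set, sorted by start offset.
    """
    merged = []
    for start, end in sorted(segments):
        if merged:
            prev_start, prev_end = merged[-1]
            same_gop = prev_end // gop_ms == start // gop_ms
            overlap = start < prev_end
            if same_gop or overlap:
                # Extend the previous merged segment instead of decoding twice.
                merged[-1] = (prev_start, max(prev_end, end))
                continue
        merged.append((start, end))
    return merged
```

Running it on the five segments of fig. 3 yields [(500, 3000), (5500, 6500), (7500, 9000)], matching the merged 500 ms-3000 ms segment described above.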

Considering that a video segment must be decoded starting from a key frame, and that bidirectionally predicted frames (also known as B frames) may be present, the electronic device acquires the offset positions of the groups of pictures in the source video; adjusts the offset position of each video segment in the second segment set to the offset position corresponding to the key frame of the corresponding group of pictures; and decodes each video segment in the second segment set according to the adjusted offset position, to obtain a second frame image of each video segment in the second segment set.

The description continues with segment A [500 ms, 1500 ms), segment B [1500 ms, 3000 ms), segment C [1800 ms, 2500 ms), segment D [5500 ms, 6500 ms), and segment E [7500 ms, 9000 ms). As shown in fig. 4, α denotes the continuous time axis [0 ms, 5000 ms] obtained after rearranging, under the display strategy, the frame images decoded from segments A to E; the mapping between each video segment and the time axis α is shown in fig. 4 by the dashed lines perpendicular to the time axis α. That is, the offset position of segment A on the time axis α is [0 ms, 1000 ms]; that of segment B is [1000 ms, 2500 ms]; that of segment C is [1300 ms, 2000 ms]; that of segment D is [2500 ms, 4000 ms]; and that of segment E is [4000 ms, 5000 ms].

As shown in fig. 4, since segment A [500 ms, 1500 ms), segment B [1500 ms, 3000 ms), and segment C [1800 ms, 2500 ms) are merged into a video segment whose offset position is [500 ms, 3000 ms), and the offset position 500 ms does not coincide with the offset position of a key frame of a group of pictures, the electronic device adjusts the start point of the offset position [500 ms, 3000 ms) to the offset position of the key frame of the group of pictures to which it belongs, so the adjusted offset position becomes [0 ms, 3000 ms). Similarly, for segment D [5500 ms, 6500 ms), the offset position of the key frame of the group of pictures containing offset 5500 ms is 5000 ms, and the offset position of the key frame of the next group of pictures after offset 6500 ms is 7000 ms, so the adjusted offset position of segment D becomes [5000 ms, 7000 ms). By analogy, the offset position of segment E [7500 ms, 9000 ms) is adjusted to [7000 ms, 9000 ms). Because the display strategy for [7000 ms, 9000 ms) is reverse playing, it must be buffered; to reduce memory usage, this embodiment splits it into two parts, [8000 ms, 9000 ms) and [7000 ms, 8000 ms). The video segments with adjusted offset positions are then mapped onto the time axis β shown in fig. 4, where the adjusted offset positions are indicated by the dashed lines extending along the time axis.
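The key-frame alignment step can be sketched as below, assuming key frames at every 1000 ms as in the example. `align_to_keyframes` is an illustrative name, and the rule — move the start back to the key frame at or before it, and the end forward to the next key frame unless it already sits on one — is inferred from the worked example above:

```python
import bisect

def align_to_keyframes(segments, keyframe_offsets_ms):
    """Snap each merged segment to decodable boundaries.

    Decoding must start at a key frame (I frame), so each segment's start
    offset is moved back to the key frame at or before it, and its end
    offset is moved forward to the next key frame unless it already falls
    on one. keyframe_offsets_ms must be sorted, e.g. [0, 1000, 2000, ...].
    """
    aligned = []
    for start, end in segments:
        # Key frame at or before the start offset.
        i = max(bisect.bisect_right(keyframe_offsets_ms, start) - 1, 0)
        new_start = keyframe_offsets_ms[i]
        # First key frame at or after the end offset (unchanged if on one).
        j = bisect.bisect_left(keyframe_offsets_ms, end)
        new_end = keyframe_offsets_ms[j] if j < len(keyframe_offsets_ms) else end
        aligned.append((new_start, new_end))
    return aligned
```

On the merged set of fig. 3 this reproduces the adjusted offsets given above: [0 ms, 3000 ms), [5000 ms, 7000 ms), and [7000 ms, 9000 ms).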

In addition, to improve memory utilization, after a video segment has been displayed, its first frame images are cleared from memory to reclaim the occupied space. Note that segment C in fig. 4 has an overlapping interval with segment B (segment B contains segment C), and segment C is played earlier than segment B.

Therefore, to avoid repeatedly decoding the same video segments and thereby improve the efficiency of previewing the edited video, when the first segment set comprises associated video segments, where the associated video segments comprise a first segment and a second segment having an overlapping interval, the electronic device displays the first frame images according to the re-determined timestamps as follows:

when the playing sequence of the first segment is earlier than that of the second segment, the electronic equipment caches the first frame image corresponding to the second segment after displaying the first frame image corresponding to the first segment according to the redetermined timestamp; and when the second segment needs to be displayed, reading the cached first frame image for displaying.

When the second segment is played earlier than the first segment, the electronic device displays the first frame image corresponding to the second segment according to the re-determined timestamp, and caches the first frame image corresponding to the first segment; when the first segment needs to be displayed, the cached first frame image is read for display. In the example of fig. 4, segment B corresponds to the first segment and segment C corresponds to the second segment.
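A minimal sketch of the caching behavior described above, with decoded frames keyed by their source offset; `FrameCache` and its methods are illustrative names, not the patent's module names:

```python
class FrameCache:
    """Cache decoded frames of an overlapping segment so the overlap is
    decoded only once.

    Frames are keyed by their source offset (ms). When the later of two
    overlapping segments is due for display, its frames are read from the
    cache instead of being decoded again; once a segment has been shown
    and is no longer needed, its entries are released to reclaim memory.
    """

    def __init__(self):
        self._frames = {}

    def put(self, source_pts_ms, frame):
        self._frames[source_pts_ms] = frame

    def get(self, source_pts_ms):
        # Returns None on a cache miss, signalling that decoding is needed.
        return self._frames.get(source_pts_ms)

    def release(self, pts_list):
        # Clear displayed frames to recycle the occupied memory space.
        for pts in pts_list:
            self._frames.pop(pts, None)
```

In the fig. 4 example, the frames of segment C (the overlap with segment B) would be `put` while segment C plays and `get` when segment B reaches the same source offsets, then released once segment B has been displayed.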

Based on the same inventive concept as the video editing method, this embodiment further provides a corresponding apparatus.

The present embodiment further provides a video editing apparatus, which includes at least one functional module that can be stored in the memory 120 in software form. As shown in fig. 5, divided by function, the video editing apparatus may include:

the video obtaining module 201 is configured to determine a first segment set to be edited from a source video, where the first segment set includes at least one video segment to be edited.

In this embodiment, the video acquiring module 201 is configured to implement step S101 in fig. 2, and for the detailed description of the video acquiring module 201, refer to the detailed description of step S101.

The video processing module 202 is configured to obtain a first frame image of each video clip in the first clip set.

The video processing module 202 is further configured to determine a timestamp of the first frame image when being displayed according to the display policy.

In this embodiment, the video processing module 202 is configured to implement steps S102 to S103 in fig. 2, and for detailed description of the video processing module 202, refer to detailed description of steps S102 to S103.

The video output module 203 is configured to display the first frame image according to the timestamp of the first frame image when being displayed.

In this embodiment, the video output module 203 is configured to implement step S104 in fig. 2, and for a detailed description of the video output module 203, refer to a detailed description of step S104.

As shown in fig. 6, as a possible implementation, the video processing module 202 includes a segment processing module, a decapsulation module, a decoding module, a decoded frame buffer module, and a decoded frame control module.

The video output module 203 includes a display frame buffer module, a display frame control module, and a frame selection module.

The segment processing module is configured to parse the first segment set and merge segments with overlapping intervals and/or adjacent segments, obtaining the video segments that need to be realized by positioning (i.e., a second segment set); to compute the continuous timeline α, increasing from 0, formed by concatenating these video segments; to output the video segments and the timeline α information to the decapsulation module; to output the buffered segments with modified times to the display frame control module; and to output the timeline α to the frame selection module.
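The merging and timeline construction performed by the segment processing module can be sketched as follows. This is a simplified illustration under the assumption that segments are half-open millisecond intervals; the function names are illustrative, not part of this description.

```python
def merge_segments(segments):
    """Merge overlapping or adjacent [start, end) intervals."""
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1]:      # overlaps or touches the last one
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(s) for s in merged]

def build_timeline_alpha(merged):
    """Concatenate merged segments into a continuous timeline α increasing from 0."""
    mapping, cursor = [], 0
    for start, end in merged:
        mapping.append({"source": (start, end),
                        "alpha": (cursor, cursor + end - start)})
        cursor += end - start
    return mapping

# segments A, B, C, D from the example above
segs = [(500, 1500), (1500, 3000), (1800, 2500), (5500, 6500)]
merged = merge_segments(segs)           # [(500, 3000), (5500, 6500)]
alpha = build_timeline_alpha(merged)    # α spans [0, 2500) then [2500, 3500)
```

The resulting mapping is what lets every frame of the edited result be addressed by a single monotonically increasing α timestamp, regardless of where its source segment sits in the original file.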

The decapsulation module is configured to parse the video source to obtain the timestamp of each key frame in the video, and to extend each input video segment so that it spans from the offset corresponding to the key frame of the group of pictures (GOP) containing the segment start time to the offset corresponding to the key frame of the next group of pictures after the segment end time. These extended segments are then concatenated into a continuous timeline β increasing from 0, with the extended portions distinguished from the valid portions, and this information together with the timeline α information is output to the decoded frame control module. In addition, the decapsulation module is further configured to position into and decapsulate the video source segment by segment according to the extended segments, modify the timestamp of each decapsulated video packet to map it onto timeline β, and send the packets to the decoding module for decoding.

The decoding module is configured to decode the video packets input by the decapsulation module and output the decoded frame images to the decoded frame buffer module.

The decoded frame buffer module is configured to buffer the frame images output by the decoding module.

The decoded frame control module is configured to, according to the information output by the decapsulation module, discard the frame images belonging to the extended portions of the interval segments buffered in the decoded frame buffer module, reverse the frame images of the interval segments that need to be played in reverse order, map the timestamps of the valid frame images onto the timeline α computed by the segment processing module, and then output them to the display frame buffer module.
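A minimal sketch of this control step follows, representing each decoded frame simply by its timestamp on timeline β. The per-segment α origin of 0 and the pairing of display slots with source frames are simplifying assumptions for illustration.

```python
def control_decoded_frames(frames_beta, valid_interval, reverse=False):
    """Drop frames in the extended portion, optionally reverse the display
    order, and map the survivors to (alpha_slot, source_ts) pairs."""
    start, end = valid_interval
    valid = [ts for ts in frames_beta if start <= ts < end]  # discard extended part
    if reverse:
        valid = valid[::-1]  # reverse-order display strategy
    # the i-th frame to display occupies the i-th valid slot on timeline α
    slots = sorted(ts - start for ts in valid)
    return list(zip(slots, valid))

# GOP-extended decode covers [0ms, 3000ms); only [500ms, 3000ms) is valid
decoded = list(range(0, 3000, 500))
print(control_decoded_frames(decoded, (500, 3000)))
# [(0, 500), (500, 1000), (1000, 1500), (1500, 2000), (2000, 2500)]
print(control_decoded_frames(decoded, (500, 3000), reverse=True)[0])
# (0, 2500): the last valid source frame is shown first
```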

The display frame buffer module is configured to buffer the frame images from the decoded frame buffer module after they are processed by the decoded frame control module.

The frame selection module is configured to convert the current preview clock into a timestamp on timeline α (in the case of variable speed, the current preview clock is first converted according to the speed curve) and to retrieve the corresponding frame image from the display frame buffer module according to that timestamp. In addition, this module outputs the time progress information to the display frame control module.
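The clock conversion performed by the frame selection module might look like the following sketch. Modeling the speed curve as piecewise-constant speed factors is an assumption for illustration; the actual curve shape is not specified in this description.

```python
def preview_clock_to_alpha(clock_ms, speed_curve):
    """Convert the preview clock to a timestamp on timeline α.
    speed_curve: list of (duration_ms_on_preview_clock, speed_factor) pieces."""
    alpha_ts, remaining = 0.0, clock_ms
    for duration, speed in speed_curve:
        if remaining <= duration:
            return alpha_ts + remaining * speed
        alpha_ts += duration * speed
        remaining -= duration
    return alpha_ts + remaining  # past the end of the curve: assume speed 1.0

# hypothetical curve: 2x speed for the first second of preview time, then 1x
curve = [(1000, 2.0), (1000, 1.0)]
print(preview_clock_to_alpha(500, curve))   # 1000.0 (half a second at 2x)
print(preview_clock_to_alpha(1500, curve))  # 2500.0
```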

The display frame control module is configured to control the display frame buffer module according to the time progress information output by the frame selection module, discarding frame images that no longer need to be buffered once playback has reached the corresponding progress.

The embodiment also provides an electronic device, which includes a processor and a memory, where the memory stores a computer program, and when the computer program is executed by the processor, the video editing method is implemented.

The present embodiment also provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the video editing method is implemented.

The present embodiment also provides a computer program product comprising a computer program/instructions which, when executed by a processor, implement the video editing method.

To sum up, in the video editing method and the related apparatus provided by the embodiment of the application, the electronic device determines at least one video segment to be edited from the source video; acquiring a first frame image of each video clip in the first clip set; then, determining a time stamp of the first frame image when the first frame image is displayed according to a display strategy; and finally, displaying the first frame image according to the time stamp of the first frame image when the first frame image is displayed. According to the method, the edited playing effect can be previewed without recoding the video segment to be edited, so that the video editing efficiency can be improved.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.

The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all such changes or substitutions are included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
