Video data production method and device, electronic equipment and computer readable medium

Document No.: 1144529 · Publication date: 2020-09-11

Reading note: this disclosure, "Video data production method and device, electronic equipment and computer readable medium", was created by 李卫国 on 2020-06-24. Abstract: The present disclosure provides a video data production method in the technical field of computer and video image processing. The method includes: in response to a tag insertion operation triggered by a user, obtaining a timestamp of a currently played video image in an original video; acquiring tag information added by the user for the video image; and associating and integrating the tag information, the timestamp, and the video data of the original video to generate integrated video data carrying the tag information. The method makes it convenient to add tag information while preserving the integrity of the original video; when the integrated video data is watched again, the tag positions can be located quickly and accurately, which shortens search time, improves learning efficiency, and thereby improves the user experience. The present disclosure also provides a video data production apparatus, an electronic device, and a computer-readable medium.

1. A method for producing video data, comprising:

in response to a tag insertion operation triggered by a user, obtaining a timestamp of a currently played video image in an original video;

acquiring tag information added by the user for the video image;

and associating and integrating the tag information, the timestamp, and the video data of the original video to generate integrated video data carrying the tag information.

2. The method of claim 1, wherein the acquiring tag information added by the user for the video image comprises:

acquiring, through a tag entry module, the tag information added by the user for the video image.

3. The method of claim 1, wherein before the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the integrated video data carrying the tag information, the method further comprises:

acquiring tag auxiliary information, wherein the tag auxiliary information describes the tag and defines its usage rights;

and wherein the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the integrated video data carrying the tag information comprises:

associating and integrating the tag information, the timestamp, the tag auxiliary information, and the video data of the original video to generate the integrated video data carrying the tag information.

4. The method of claim 3, wherein the tag auxiliary information comprises at least one of user information, user configuration information, and an identifier of the original video.

5. The method of claim 4, wherein the user information comprises a user account and/or an identifier of a terminal device used by the user, and the user configuration information comprises user permission information.

6. The method according to any one of claims 1 to 5, wherein after the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the integrated video data carrying the tag information, the method further comprises:

in response to a play instruction from a user, parsing the integrated video data to obtain all tags and tag information in the integrated video data;

displaying all the tags on a playing page;

and displaying, based on a tag selected by the user, the tag information corresponding to the selected tag.

7. The method of claim 6, wherein after the displaying, based on the tag selected by the user, of the tag information corresponding to the selected tag, the method further comprises:

receiving modification information from the user for the tag, and updating the tag information based on the modification information;

and associating and integrating the updated tag information, the timestamp, the tag auxiliary information, and the video data of the original video to generate new integrated video data carrying the updated tag information.

8. The method according to any one of claims 1 to 5, wherein after the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the integrated video data carrying the tag information, the method further comprises:

sharing the integrated video data to a sharing platform, so that other users on the sharing platform can obtain the integrated video data.

9. The method according to any one of claims 1 to 5, wherein before the obtaining, in response to the tag insertion operation triggered by the user, of the timestamp of the currently played video image in the original video, the method further comprises:

screening video resources based on video usage information to obtain the original video, wherein the video usage information comprises one or more of a play count, a replay rate, user comments, and a number of likes of the video.

10. The method according to any one of claims 1 to 5, wherein the tag information comprises a mark and/or a note.

11. An apparatus for producing video data, comprising:

a trigger module configured to trigger a tag insertion operation in response to a trigger instruction from a user;

a first acquisition module configured to acquire a timestamp of a currently played video image in an original video;

a second acquisition module configured to acquire tag information added by the user for the video image;

and an association module configured to associate and integrate the tag information, the timestamp, and the video data of the original video to generate integrated video data carrying the tag information.

12. An electronic device, comprising:

one or more processors;

a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-10;

and one or more I/O interfaces connected between the processors and the memory and configured to enable information interaction between the processors and the memory.

13. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-10.

Technical Field

The embodiments of the present disclosure relate to the technical field of computer and video image processing, and in particular to a video data production method and apparatus, an electronic device, and a computer-readable medium.

Background

With the optimization of the network environment and the popularization of mobile smart devices, mobile terminals have become the main way for people to obtain information. Video expresses information more intuitively and vividly than text, so it has become an important carrier of information and is widely shared.

When watching videos, especially knowledge-based videos or videos with rich content, users often want to place marks or notes at certain video nodes for repeated watching or learning. However, existing video players can only locate a video node for repeated viewing through fast playback or by manually dragging the progress bar, which is slow and imprecise.

Summary

The embodiments of the present disclosure provide a video data production method and apparatus, an electronic device, and a computer-readable medium.

In a first aspect, an embodiment of the present disclosure provides a video data production method, including:

in response to a tag insertion operation triggered by a user, obtaining a timestamp of a currently played video image in an original video;

acquiring tag information added by the user for the video image;

and associating and integrating the tag information, the timestamp, and the video data of the original video to generate integrated video data carrying the tag information.

In some embodiments, the acquiring tag information added by the user for the video image includes:

acquiring, through a tag entry module, the tag information added by the user for the video image.

In some embodiments, before the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the integrated video data carrying the tag information, the method further includes:

acquiring tag auxiliary information, wherein the tag auxiliary information describes the tag and defines its usage rights;

and the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the integrated video data carrying the tag information includes:

associating and integrating the tag information, the timestamp, the tag auxiliary information, and the video data of the original video to generate the integrated video data carrying the tag information.

In some embodiments, the tag auxiliary information includes at least one of user information, user configuration information, and an identifier of the original video.

In some embodiments, the user information includes a user account and/or an identifier of the terminal device used by the user, and the user configuration information includes user permission information.

In some embodiments, after the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the integrated video data carrying the tag information, the method further includes:

in response to a play instruction from a user, parsing the integrated video data to obtain all tags and tag information in the integrated video data;

displaying all the tags on a playing page;

and displaying, based on a tag selected by the user, the tag information corresponding to the selected tag.

In some embodiments, after the displaying, based on the tag selected by the user, of the tag information corresponding to the selected tag, the method further includes:

receiving modification information from the user for the tag, and updating the tag information based on the modification information;

and associating and integrating the updated tag information, the timestamp, the tag auxiliary information, and the video data of the original video to generate new integrated video data carrying the updated tag information.

In some embodiments, after the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the integrated video data carrying the tag information, the method further includes:

sharing the integrated video data to a sharing platform, so that other users on the sharing platform can obtain the integrated video data.

In some embodiments, before the obtaining, in response to the tag insertion operation triggered by the user, of the timestamp of the currently played video image in the original video, the method further includes:

screening video resources based on video usage information to obtain the original video, where the video usage information includes one or more of a play count, a replay rate, user comments, and a number of likes of the video.

In some embodiments, the tag information includes a mark and/or a note.

In a second aspect, an embodiment of the present disclosure provides an apparatus for producing video data, including:

a trigger module configured to trigger a tag insertion operation in response to a trigger instruction from a user;

a first acquisition module configured to acquire a timestamp of a currently played video image in an original video;

a second acquisition module configured to acquire tag information added by the user for the video image;

and an association module configured to associate and integrate the tag information, the timestamp, and the video data of the original video to generate integrated video data carrying the tag information.

In a third aspect, an embodiment of the present disclosure provides an electronic device, including:

one or more processors;

a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to perform any of the video data production methods described above;

one or more I/O interfaces connected between the processor and the memory and configured to enable information interaction between the processor and the memory.

In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium having a computer program stored thereon, where the computer program, when executed by a processor, implements any of the video data production methods described above.

In the video data production method provided by the embodiments of the present disclosure, a timestamp of a currently played video image in an original video is obtained in response to a tag insertion operation triggered by a user; tag information added by the user for the video image is acquired; and the tag information, the timestamp, and the video data of the original video are associated and integrated to generate integrated video data carrying the tag information. Tag information can thus be added directly into the data of the original video while the user is watching it, which is convenient to operate and preserves the integrity of the original video. When the user watches the integrated video data again, the tag positions can be located quickly and accurately, which shortens search time, improves learning efficiency, and improves the user experience.

Drawings

The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. The above and other features and advantages will become more apparent to those skilled in the art by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:

FIG. 1 is a flowchart of a video data production method according to an embodiment of the present disclosure;

FIG. 2 is a schematic diagram of a tag edit page according to an embodiment of the present disclosure;

FIG. 3 is a flowchart of another video data production method according to an embodiment of the present disclosure;

FIG. 4 is a flowchart of another video data production method according to an embodiment of the present disclosure;

FIG. 5 is a flowchart of yet another video data production method according to an embodiment of the present disclosure;

FIG. 6 is a flowchart of still another video data production method according to an embodiment of the present disclosure;

FIG. 7 is a schematic block diagram of a video data production apparatus according to an embodiment of the present disclosure;

FIG. 8 is a block diagram of an electronic device according to an embodiment of the present disclosure.

Detailed Description

To help those skilled in the art better understand the technical solutions of the present disclosure, the video data production method and apparatus, electronic device, and computer-readable medium provided by the present disclosure are described in detail below with reference to the accompanying drawings.

Example embodiments are described more fully hereinafter with reference to the accompanying drawings; they may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey its scope to those skilled in the art.

Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.

As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

For knowledge-based videos, a user may want to tag or annotate certain video images (video nodes) in the original video for later repeated viewing or learning. To improve learning efficiency, the user then only needs to focus on the video images at the tag positions instead of replaying the entire original video. At the same time, keeping the complete original video serves further needs, for example letting other users obtain both the complete original video and the tags added by the current user.

In an existing approach, an index table is built from the video tags and the start and stop times of the target video segments in the original video, and the index table is stored together with the tags of the original video as a video note. The video note and the original video are therefore two separate files. To consult a video note, the start and stop times of the target video segment are first looked up in the index table, and the original video is then searched according to the video tag and those times to obtain the target segment. Keeping the video note separate from the original video responds slowly: while the original video plays, the user's notes for it cannot be obtained directly but must be looked up in the separate note file, and the note file alone does not provide the complete original video, which degrades the user experience.

In a first aspect, the embodiments of the present disclosure provide a video data production method for producing integrated video data carrying video tags, so that a user can conveniently and quickly locate a desired video image in the original video.

FIG. 1 is a flowchart of a video data production method according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes:

Step 101: in response to a tag insertion operation triggered by a user, obtain a timestamp of the currently played video image in an original video.

The original video is original video data published by a video resource publisher and playable on a terminal, for example a movie, a video courseware, a documentary, or a recorded learning video. Its video format may be any format supported by the terminal, such as MPEG, AVI, MOV, ASF, or WMV.

A video image is a video frame in the original video. The currently played video image is the video image shown on the display screen while the original video is playing.

For example, when the original video is a documentary and the user is interested in a certain video image while playing it, the user can insert a tag at that video image, so that a later replay can jump directly to it. Alternatively, if the user is interested in a certain video segment, a tag can be inserted at the start position of the segment, so that a later replay can jump directly to the start of the segment and begin playing there.

Likewise, when the original video is a video courseware, the user can insert a tag at the start position of a video segment of interest, so that later playback can jump straight to that position, or insert a tag at a video image of interest, so that a later replay can jump directly to that video image.

In some embodiments, during playback of the original video, the user may trigger the tag insertion operation by pressing a button on the playing page, performing a gesture, or issuing a voice command. When the terminal is a computer, the trigger operation can be performed with a mouse, a keyboard, or the like. With a mouse, for example, a preset operation button is clicked, where the click may be a single or double click. With a keyboard, a preset shortcut key may be pressed; the shortcut may be any key or combination of keys, and neither the way it is set nor its type is limited here.

When the terminal is a mobile terminal or a terminal with a touch function, the trigger operation may be performed by touch, for example by tapping or sliding a preset button.

The currently played video image is the image shown on the display screen of the terminal at the current moment, and the timestamp is the time node of that video image within the original video. For example, if the terminal is playing a video courseware for chapter X of a math class and the video image at 9:30 (9 minutes 30 seconds) is currently displayed, the timestamp corresponding to that video image is 9:30.
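As a concrete illustration of step 101, the sketch below shows how a player might capture the timestamp when the trigger fires. It is a minimal sketch in Python; the Player class and its current_position_ms field are hypothetical stand-ins, since the disclosure does not prescribe any player API.

```python
from dataclasses import dataclass

@dataclass
class Player:
    video_id: str             # identifier of the original video (hypothetical field)
    current_position_ms: int  # playback position of the displayed frame, in ms

def on_insert_tag_triggered(player: Player) -> dict:
    """Step 101: capture the time node of the currently played video image."""
    return {
        "video_id": player.video_id,
        "timestamp_ms": player.current_position_ms,
    }
```

For the courseware example above, the frame at 9:30 would yield a timestamp_ms of 570000.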

Step 102: acquire tag information added by the user for the video image.

The tag information includes a mark, a learning note, viewing impressions, and the like. A mark is equivalent to a bookmark and merely identifies the video image. A learning note is an annotation the user adds for the video image; the annotation can be an explanation of, a question about, or a summary of the content of that video image, or of the video image together with the segment of video preceding it.

For example, the user summarizes the content of the 30 seconds before the time node 9:30, i.e., the courseware content from 9:00 to 9:30, and attaches the summary as a tag at the time node 9:30. In some embodiments, the tag information is added directly onto the video image, or attached externally to the edge area of the video image.

In some embodiments, the user adds a tag to a video image by invoking a tag entry module, which may be a tag control embedded in the player program. For example, after the user operates an activation button, the tag entry module is activated, a tag edit page is displayed on the terminal's screen, and the user can input and edit content on that page.

FIG. 2 is a schematic diagram of a tag edit page according to an embodiment of the present disclosure. As shown in FIG. 2, the tag edit page includes a tag number area 21, where information such as a tag number and a tag name can be entered, and a tag content edit area 22, where information such as notes can be entered. In both areas, entered content can be deleted, copied, pasted, and so on.

In some embodiments, the tag entry module is an application installed on the terminal, such as a notepad or notes application, which the player invokes. When the user touches the activation button, the application is called and its interface is shown on the display screen. For example, if a notepad application is linked to the player and the user slides the activation button, the notepad is called, its interface is displayed, and the user can edit the tag content in it. After editing is finished, the user can tap a finish button, and the tag content is automatically associated with the timestamp and the video data of the original video.

In some embodiments, when the user activates the tag insertion operation, the activated tag entry module or the called editing application may occupy the entire display screen or only part of it.
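Whatever entry module is used, the result is a small record tying the edited content to the timestamp. A minimal sketch, assuming the fields visible on the tag edit page of FIG. 2 (number, name, note content); the field names are illustrative, not fixed by the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Tag:
    number: int        # entered in the tag number area (21)
    name: str          # optional tag name, also from area 21
    content: str       # note text from the tag content edit area (22)
    timestamp_ms: int  # time node of the tagged video image
```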

Step 103: associate and integrate the tag information, the timestamp, and the video data of the original video to generate integrated video data carrying the tag information.

The integrated video data contains not only the video data of the original video but also the tag information and the timestamp; the timestamp is associated with the tag information, and both are associated with the video data of the original video. Association means adding the tag information into the video data of the original video and linking it to the timestamp, so that the tag information, the timestamp, and the video data of the original video are merged into one body of data. When a tag is activated, the player can jump directly to the position of its timestamp and play the corresponding video image.

In this embodiment, the tag information, the timestamp, and the video data of the original video are merged into the integrated video data through a data model, and the integrated video data can be regarded as the original video enriched with additional information; in other words, the integrated video data is a single file. When the integrated video data is played, the player can parse it directly, display all tagged time nodes according to their timestamps, and let the user view the tag information by clicking the corresponding time node. The data model may be any model capable of associating and integrating the tag information and the timestamp with the video data of the original video; this embodiment places no limit on it.
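Since the disclosure leaves the data model open, the following is only one possible sketch of step 103: append a JSON block of tags plus a small footer to the original video bytes, so that the result is a single file whose original video stream stays intact and whose tags a player can locate without re-encoding anything. The IVD1 marker and the layout are invented for illustration.

```python
import json
import struct

MAGIC = b"IVD1"  # hypothetical marker identifying integrated video data

def integrate(video_bytes: bytes, tags: list[dict]) -> bytes:
    """Step 103: associate tags (each carrying a timestamp_ms) with the video."""
    block = json.dumps({"tags": tags}).encode("utf-8")
    # Footer = marker + tag-block length, so the block can be located from the
    # end of the file while the original video bytes remain untouched.
    footer = MAGIC + struct.pack(">I", len(block))
    return video_bytes + block + footer

def parse(data: bytes) -> tuple[bytes, list[dict]]:
    """Split integrated video data back into original video bytes and tags."""
    if data[-8:-4] != MAGIC:
        return data, []  # a plain original video carries no tags
    (length,) = struct.unpack(">I", data[-4:])
    return data[:-8 - length], json.loads(data[-8 - length:-8])["tags"]
```

A player that understands the footer can jump straight to a tag's timestamp_ms, while the complete original video remains at the front of the file, which is one way to read the integrity claim above.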

In some embodiments, the player displays the tagged time nodes with preset icons, such as cartoon figures, animal figures, pointer figures, or time figures. A time figure shows hours, minutes, and seconds. In some embodiments, if the duration of the integrated video data is under one hour, the time figure shows only minutes and seconds; if it exceeds one hour, the time figure shows hours, minutes, and seconds.
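Under that rule, a time figure might be rendered as in this small sketch (illustrative only):

```python
def format_node(timestamp_ms: int, duration_ms: int) -> str:
    """Render a tagged time node; minutes:seconds when the video is under one hour."""
    s = timestamp_ms // 1000
    if duration_ms < 3_600_000:                    # under one hour
        return f"{s // 60}:{s % 60:02d}"           # e.g. 570000 ms -> "9:30"
    return f"{s // 3600}:{s % 3600 // 60:02d}:{s % 60:02d}"  # h:mm:ss
```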

In the video data production method provided by this embodiment, a timestamp of the currently played video image in the original video is obtained in response to a tag insertion operation triggered by the user; tag information added by the user for the video image is acquired; and the tag information, the timestamp, and the video data of the original video are associated and integrated into integrated video data carrying the tag information. Because the integrated video data is a single file, it is easy to store and share and can be loaded and buffered quickly during playback. Moreover, tag information can be added directly into the data of the original video while the user watches it, which is convenient and preserves the integrity of the original video; on repeated viewing, the tag positions can be located quickly and accurately, shortening search time, improving learning efficiency, and improving the user experience.

FIG. 3 is a flowchart of another video data production method according to an embodiment of the present disclosure. As shown in FIG. 3, the method includes:

Step 301: in response to a tag insertion operation triggered by a user, obtain a timestamp of the currently played video image in an original video.

The descriptions of the original video, the video image, the timestamp, the tag, and the ways of triggering the tag insertion operation are the same as those for step 101 of the above embodiment, and are not repeated here for brevity.

Step 302: acquire tag information added by the user for the video image.

The tag information, and the manner of adding it through the tag entry module, are the same as described for step 102 of the above embodiment, and are not repeated here.

Step 303: acquire tag auxiliary information.

The tag auxiliary information is information that explains the tag and defines usage rights. For example, the tag auxiliary information includes at least one of user information, user configuration information, and an identifier of the original video.

In some embodiments, the user information includes the user's account and/or an identifier of the terminal device used by the user. The user account distinguishes the user watching the original video, or the user who added the tag information; it may be the account used in the player, a user account on the server storing the original video, or the account used to log in to the terminal. The identifier of the terminal device likewise distinguishes tagging users: when a terminal device corresponds to a user, that user can be distinguished by the device identifier.

In some embodiments, the user configuration information is permission information added by the tagging user for the original video and includes user permission information, which restricts what other users may do. For example, when adding tag information, the user may set that user A can view all tag information while user B can only view marks and cannot view notes; or that user C can view odd-numbered tags while user D can view even-numbered tags.

In some embodiments, the identifier of the original video is unique and distinguishes the original video; the corresponding original video can be obtained through this identifier.
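Putting step 303 together, the tag auxiliary information might be structured as below. The permission scheme mirrors the examples in the text (user A sees everything, user B only marks); every field name and value here is illustrative.

```python
aux_info = {
    "user": {
        "account": "creator_account",   # who added the tags (hypothetical value)
        "device_id": "TERMINAL-001",    # identifier of the terminal used
    },
    "permissions": {
        # viewer account -> what that viewer may see
        "user_a": {"marks": True, "notes": True},
        "user_b": {"marks": True, "notes": False},
    },
    "original_video_id": "math-course-chapter-x",  # unique video identifier
}
```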

Step 304: associate and integrate the tag information, the timestamp, the video data of the original video, and the tag auxiliary information to generate integrated video data carrying the tag information.

In this embodiment, the tag information, the timestamp, the tag auxiliary information, and the video data of the original video are merged into the integrated video data through a data model, and the integrated video data can be regarded as the original video enriched with additional information; it is a single file. When played, the player can parse it directly, display all tagged time nodes by timestamp, and let the user view the tag information by clicking the corresponding time node. The data model may be any model capable of this association and integration; this embodiment places no limit on it.

It will be appreciated that, because the original video identifier is unique, integrated video data can be distinguished by it. When the user shares the integrated video data to a sharing platform, other users can find the corresponding integrated video data through the original video identifier, identify its producer through the user information, and obtain their playback rights according to the user permission information.

Step 305: store the integrated video data.

In some embodiments, the user may store the integrated video data in a local storage medium, at the source of the original video, or on a third-party server, as needed.

FIG. 4 is a flowchart of another video data production method according to an embodiment of the present disclosure. As shown in FIG. 4, the method includes:

Step 401: in response to a tag insertion operation triggered by a user, obtain a timestamp of the currently played video image in an original video.

The descriptions of the original video, the video image, the timestamp, the tag, and the ways of triggering the tag insertion operation are the same as those for step 101, and are not repeated here.

Step 402: acquire tag information added by the user for the video image.

The tag information, and the manner of adding it through the tag entry module, are the same as described for step 102, and are not repeated here.

Step 403: acquire tag auxiliary information.

The tag auxiliary information, including the user information, the user configuration information, and the identifier of the original video, is the same as described for step 303, and is not repeated here.

Step 404: associate and integrate the tag information, the timestamp, the video data of the original video, and the tag auxiliary information to generate integrated video data carrying the tag information.

In some embodiments, the integrated video data includes the tag information, the timestamp, the video data of the original video, and the tag auxiliary information, with the tag information, the timestamp, and the tag auxiliary information associated with the video data of the original video.

It will be appreciated that, because the original video identifier is unique, integrated video data can be distinguished by it. When the user shares the integrated video data to a sharing platform, other users can find the corresponding integrated video data through the original video identifier, identify its producer through the user information, and obtain their playback rights according to the user permission information.

In some embodiments, the user may store the integrated video data in a local storage medium, at the source of the original video, or on a third-party server, as needed.

Step 405: share the integrated video data to a sharing platform, so that other users on the sharing platform can obtain the integrated video data.

In some embodiments, the user shares the integrated video data to a sharing platform and, through that platform, with friends or others. The sharing platform may be the platform the user is currently logged in to, or a third-party platform different from it.

When other users obtain the integrated video data through the sharing platform, the player parses it, determines those users' rights from the user permission information in the tag auxiliary information, and plays the integrated video data according to those rights.
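Building on the auxiliary-information sketch from step 303, such a permission check might filter which tags a given viewer sees; the rights layout is the illustrative one used there, not a structure fixed by the disclosure.

```python
def visible_tags(tags: list[dict], aux_info: dict, viewer: str) -> list[dict]:
    """Keep only the tags that the viewer's permission information allows."""
    rights = aux_info["permissions"].get(viewer)
    if rights is None:
        return []  # no rights granted to this viewer
    return [
        t for t in tags
        if (t.get("content") and rights["notes"])       # notes need note rights
        or (not t.get("content") and rights["marks"])   # bare marks need mark rights
    ]
```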

In some embodiments, when the user plays the integrated video data, all time nodes with inserted tags can be shown on the playing page so that the user can locate them quickly; the user can also modify the tag information.

FIG. 5 is a flowchart of yet another video data production method according to an embodiment of the present disclosure. As shown in FIG. 5, the method includes:

Step 501: in response to a play instruction from a user, determine whether the integrated video data carries tags.

After receiving the user's play instruction, the player determines whether the integrated video data carries tags. In some embodiments, the player makes this determination from the tag data in the integrated video data.

Step 502: parse the integrated video data to obtain all tags and tag information in the integrated video data.

In this embodiment, when the integrated video data contains tags, the integrated video data is parsed to obtain all the tags and tag information it carries.

Step 503: display all the tags on the playing page.

The playing page is all or part of the display page of the terminal. For example, when the terminal's display page shows multiple windows, the playing page may occupy part of it. When the display page shows only the player, the playing page may occupy the whole display page, or still only part of it.

In this embodiment, displaying all the tags on the playing page lets the user locate a desired position quickly and accurately, which shortens search time, improves efficiency, and improves the user experience.
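Reusing parse() from the container sketch in the first embodiment, steps 501 to 503 might look like the following; the print call merely stands in for whatever the playing page actually renders.

```python
def on_play(data: bytes) -> bytes:
    """Steps 501-503: on a play instruction, surface any tags before playback."""
    video_bytes, tags = parse(data)          # step 502: parse the integrated data
    for t in tags:                           # step 501: a non-empty list means tags exist
        # step 503: display every tag on the playing page (placeholder rendering)
        print(f'{t["timestamp_ms"]} ms: {t.get("name", "mark")}')
    return video_bytes                       # handed to the player for playback
```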

Step 504: display, based on the tag selected by the user, the tag information corresponding to that tag.

In some embodiments, the user may select, by touch, the tag whose tag information should be further displayed. For example, when the user taps a tag icon, the tag information corresponding to that icon is shown on the display page.

Step 505: receive the user's modification information for the tag, and update the tag information based on the modification information.

In some embodiments, if the user needs to modify tag information, the user can tap a modify button to enter the tag entry module and make the change. In other embodiments, when the user taps the tag icon, the tag information is displayed directly in the tag entry module, so the user can modify and update it in place.

Step 506: associate and integrate the updated tag information, the timestamp, the tag auxiliary information, and the video data of the original video to generate new integrated video data.
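Steps 505 and 506 might be combined as below, reusing parse() and integrate() from the earlier container sketch; the tag auxiliary information is omitted for brevity, and the number field is the illustrative one from the tag-record sketch.

```python
def update_tag(data: bytes, tag_number: int, new_content: str) -> bytes:
    """Steps 505-506: apply a modification and regenerate the integrated data."""
    video_bytes, tags = parse(data)
    for t in tags:
        if t["number"] == tag_number:
            t["content"] = new_content       # step 505: update the tag information
    return integrate(video_bytes, tags)      # step 506: new integrated video data
```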

Step 507: store the updated integrated video data and/or share it on a sharing platform.

In some embodiments, the user may store the updated integrated video data in a local storage medium, at the source of the original video, or on a third-party server, as needed; or share it on a sharing platform; or do both at the same time.

FIG. 6 is a flowchart of still another video data production method according to an embodiment of the present disclosure. As shown in FIG. 6, the method includes:

Step 601: screen video resources based on video usage information to obtain an original video.

The video usage information includes one or more of the play count, the replay rate, the user comments, and the number of likes of a video.

For a video data production platform whose users want to obtain learning material from the network, a background big-data analysis module can analyze the video usage information of candidate video material and select valuable original video material based on the analysis results, which reduces unnecessary waste of resources.
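One way such an analysis module might score and screen candidates is sketched below; the weights, the field names, and the assumption that every input has been pre-normalized to a 0-1 scale are all illustrative, since the disclosure fixes no formula.

```python
def screen(videos: list[dict], top_n: int = 10) -> list[dict]:
    """Step 601: keep the most valuable candidates as original videos."""
    def score(v: dict) -> float:
        return (0.4 * v["plays"]          # play count, normalized to 0-1
              + 0.3 * v["replay_rate"]    # fraction of repeat views
              + 0.2 * v["likes"]          # likes, normalized to 0-1
              + 0.1 * v["comments"])      # comment volume, normalized to 0-1
    return sorted(videos, key=score, reverse=True)[:top_n]
```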

Step 602: in response to a tag insertion operation triggered by a user, obtain a timestamp of the currently played video image in the original video.

The descriptions of the video image, the timestamp, the tag, and the ways of triggering the tag insertion operation are the same as those for step 101, and are not repeated here.

Step 603: acquire tag information added by the user for the video image.

The tag information, and the manner of adding it through the tag entry module, are the same as described for step 102, and are not repeated here.

Step 604: acquire tag auxiliary information.

The tag auxiliary information, including the user information, the user configuration information, and the identifier of the original video, is the same as described for step 303, and is not repeated here.

Step 605: associate and integrate the tag information, the timestamp, the video data of the original video, and the tag auxiliary information to generate integrated video data carrying the tag information.

In some embodiments, the integrated video data includes the tag information, the timestamp, the video data of the original video, and the tag auxiliary information, with the tag information, the timestamp, and the tag auxiliary information associated with the video data of the original video.

Step 606: share the integrated video data to a sharing platform, so that other users on the sharing platform can obtain the integrated video data.

In some embodiments, the user shares the integrated video data to a sharing platform and, through that platform, with friends or others. The sharing platform may be the platform the user is currently logged in to, or a third-party platform different from it.

Step 607: when playing the integrated video data, parse it to obtain all tags and tag information in the integrated video data.

In this embodiment, when the integrated video data contains tags, the integrated video data is parsed to obtain all the tags and tag information it carries.

Step 608: display all the tags on the playing page.

The playing page is all or part of the display page of the terminal, as described for step 503, and is not described again here.

Step 609: display, based on the tag selected by the user, the tag information corresponding to that tag.

In some embodiments, the user may select, by touch, the tag whose tag information should be further displayed, as described for step 504.

Step 610: receive the user's modification information for the tag, and update the tag information based on the modification information.

The ways of modifying and updating the tag information are the same as described for step 505, and are not repeated here.

Step 611: associate and integrate the updated tag information, the timestamp, the tag auxiliary information, and the video data of the original video to generate new integrated video data.

Step 612: store the updated integrated video data and/or share it on a sharing platform.

The video data production method provided by this embodiment, like the embodiments above, generates integrated video data as a single file that is easy to store and share and quick to load and buffer during playback. Tag information is added directly into the data of the original video with convenient operation and without harming the integrity of the original video, and on repeated viewing the tag positions can be located quickly and accurately, shortening search time, improving learning efficiency, and improving the user experience.

In a second aspect, an embodiment of the present disclosure provides an apparatus for producing video data. FIG. 7 is a schematic block diagram of a video data production apparatus according to an embodiment of the present disclosure. As shown in FIG. 7, the apparatus includes:

a trigger module 701 configured to trigger a tag insertion operation in response to a trigger instruction from a user.

The ways in which the user can trigger the tag insertion operation (a button, a gesture, voice, a mouse, a keyboard, or touch) are the same as described for step 101, and are not repeated here.

A first obtaining module 702, configured to obtain a timestamp of a currently playing video image in an original video.

Wherein, the timestamp refers to the time node of the video image in the original video, and the currently played video image refers to the image shown on the display screen of the terminal at the current moment. For example, if the terminal is playing a video courseware for chapter X of a math class, and the image shown on the display screen at the current moment is the frame at 9:30 (9 minutes 30 seconds) of the courseware, then the timestamp corresponding to that video image is 9:30.

A second obtaining module 703, configured to obtain tag information that is added by the user for the video image.

The tag information includes a mark, a learning note, after-viewing impressions, and the like. The mark is equivalent to a bookmark and is only used to flag the video image. The learning note is an annotation added by the user for the video image; the annotation may be an explanation of, or a question about, the content of a given video image, or a summary. Alternatively, the annotation may be a summary or explanation of the video image together with the segment of video preceding it.

In some embodiments, the user may add a tag to a video image by invoking a tag entry module embedded in the player program; the tag entry module may be a tag control. For example, after the user operates an activation button, the tag entry module is activated, a tag editing page is displayed on the display screen of the terminal, and the user can input and edit content on that page.

In some embodiments, the second obtaining module 703 is the tag entry module. The tag entry module is an application installed on the terminal, such as a writing-pad or note-taking application, and the application is associated with the player. When the user touches the activation button, the application installed on the terminal is invoked, and the display screen shows the application's interface. For example, when a writing-pad application is associated with the player, if the user slides the activation button, the writing pad is invoked and the display screen shows its interface, on which the user can edit the tag content. After finishing the edit, the user can click a finish button, and the tag content is automatically associated with the timestamp and the video data of the original video.

In some embodiments, when the user activates the operation of inserting a tag, the activated tag entry module, or the invoked editable application, may occupy the entire display screen or only part of it.
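
The hand-off just described, in which clicking the finish button associates the edited content with the timestamp captured when the tag was inserted, might look like the following minimal sketch; the callback name and record layout are hypothetical:

```python
def on_finish(content: str, timestamp_s: int, pending_tags: list) -> None:
    """Finish-button callback: pair the edited tag content with the
    timestamp captured when the insert-tag operation was triggered."""
    pending_tags.append({"timestamp_s": timestamp_s, "content": content})

pending_tags: list = []
on_finish("Summary of the derivation shown at this point", 570, pending_tags)
print(pending_tags)  # 570 s corresponds to the 9:30 example above
```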

The association module 704 is configured to associate and integrate the tag information, the timestamp, and the video data of the original video to generate integrated video data carrying the tag information.

The integrated video data contains not only the video data of the original video but also the tag information and the timestamp; the timestamp is associated with the tag information, and the tag information, the timestamp, and the video data of the original video are associated with one another.

In this embodiment, the tag information, the timestamp, and the video data of the original video are integrated into the integrated video data through a data model, and the integrated video data can be regarded as the original video carrying additional information; that is, the integrated video data is a single file. When the integrated video data is played, the player can parse it directly and display, according to the timestamps, all the time nodes at which tags were added, and the user can view the tag information by clicking the corresponding time node. The data model may be any model capable of associating and integrating the tag information and the timestamp with the video data of the original video, which is not limited in this embodiment.
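
Since the embodiment leaves the data model open, the following is only one possible instance: a JSON header holding the tags and the auxiliary information is length-prefixed and placed in front of the raw video bytes, so the integrated video data is a single file that a player can parse to list every tagged time node. The magic marker and container layout are assumptions made for this sketch.

```python
import json
from dataclasses import dataclass, asdict

MAGIC = b"IVD1"  # assumed marker identifying this sketch's container layout

@dataclass
class Tag:
    timestamp_s: int  # time node of the tagged image, in seconds
    kind: str         # e.g. "mark", "note", "impressions"
    content: str = ""

def integrate(video: bytes, tags: list, aux: dict) -> bytes:
    """Associate tags, auxiliary info and the original video into one file body."""
    header = json.dumps({"aux": aux, "tags": [asdict(t) for t in tags]}).encode()
    return MAGIC + len(header).to_bytes(4, "big") + header + video

def parse(blob: bytes):
    """Player side: split the integrated data back into header and video."""
    assert blob[:4] == MAGIC
    n = int.from_bytes(blob[4:8], "big")
    meta = json.loads(blob[8:8 + n])
    return meta["aux"], [Tag(**t) for t in meta["tags"]], blob[8 + n:]

# The player lists every tagged time node found in the parsed header.
blob = integrate(b"...frames...", [Tag(570, "note", "key formula")], {"user": "u001"})
aux, tags, video = parse(blob)
for t in tags:
    print(f"tag at {t.timestamp_s // 60}:{t.timestamp_s % 60:02d} -> {t.content}")
```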

In some embodiments, the player may display the tagged time nodes using preset icons. A preset icon can be a cartoon graphic, an animal graphic, a pointer graphic, or a time graphic; a time graphic shows the time in hours, minutes, and seconds. In some embodiments, the time graphic shows only minutes and seconds if the duration of the integrated video data is less than one hour, and shows hours, minutes, and seconds if it exceeds one hour.
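
The hour-dependent rule for the time graphic reduces to a small formatting helper; a sketch under the rule as stated, with hypothetical names:

```python
def time_label(node_s: int, duration_s: int) -> str:
    """Render a tagged time node: m:ss when the integrated video data is
    shorter than one hour, h:mm:ss otherwise."""
    h, rem = divmod(node_s, 3600)
    m, s = divmod(rem, 60)
    if duration_s < 3600:
        return f"{h * 60 + m}:{s:02d}"
    return f"{h}:{m:02d}:{s:02d}"

print(time_label(570, 2400))  # '9:30'    -- video under an hour
print(time_label(570, 5400))  # '0:09:30' -- video over an hour
```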

In the video data production apparatus provided by this embodiment, the triggering module responds to a tag-insertion operation triggered by the user; the first obtaining module obtains a timestamp of the currently played video image in the original video; the second obtaining module obtains tag information added by the user for the video image; and the association module associates and integrates the tag information, the timestamp, and the video data of the original video to generate integrated video data carrying the tag information. Because the integrated video data is a single file, it is convenient to store and share, and can be quickly loaded and buffered during playback. In addition, the user can add tag information directly into the data of the original video while watching it, which is convenient to operate and preserves the integrity of the original video; when the user watches the integrated video data again, the tagged positions can be located quickly and accurately, which reduces search time, improves learning efficiency, and thus improves the user experience.

In a third aspect, referring to fig. 8, an embodiment of the present disclosure provides an electronic device, including:

one or more processors 801;

a memory 802, on which one or more programs are stored which, when executed by the one or more processors, cause the one or more processors to implement any one of the above video data production methods;

one or more I/O interfaces 803, coupled between the processor and the memory, and configured to enable information interaction between the processor and the memory.

The processor 801 is a device with data processing capability, including but not limited to a central processing unit (CPU) and the like; the memory 802 is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read/write interface) 803 is connected between the processor 801 and the memory 802 and enables information interaction between them, and includes but is not limited to a data bus (Bus) and the like.

In some embodiments, the processor 801, memory 802, and I/O interface 803 are interconnected via a bus, which in turn connects with other components of the computing device.

In a fourth aspect, the present disclosure provides a computer-readable medium on which a computer program is stored, where the computer program, when executed by a processor, implements any one of the above video data production methods.

It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.

Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, features, characteristics and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics and/or elements described in connection with other embodiments, unless expressly stated otherwise, as would be apparent to one skilled in the art. Accordingly, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.
