Method for generating a video summary with a neural network based on multimodal data and aesthetic principles

Document No.: 1816115    Publication date: 2021-11-09

Note: This invention, "A method for generating a video summary with a neural network based on multimodal data and aesthetic principles", was created by 卢少平, 谢杰航 and 杨愚鲁 on 2021-08-11. Its main content is as follows: a method for generating a video summary with a neural network based on multimodal data and aesthetic principles comprises: S100: inputting the original video into a multimodal data extraction module to obtain subtitle data of the text modality, background music data of the audio modality and video frame data of the image modality, and receiving scene text data entered by the user; S200: inputting each kind of multimodal data into a multimodal feature encoding module for encoding and outputting a feature-vector representation sequence for each modality; S300: inputting the feature-vector representation sequences into an important shot selection module and extracting the highlight shots, representative shots, user-desired shots and narrative shots of the original video; S400: inputting the highlight shots, representative shots, user-desired shots and narrative shots into an aesthetic shot assembly module, which screens out high-quality shots that follow the aesthetic principles and splices them into a video summary. Compared with existing methods, the watchability and narrativity of the generated video summary are improved.

1. A method for generating a video summary with a neural network based on multimodal data and aesthetic principles, characterized in that: the neural network comprises a multimodal data extraction module, a multimodal feature encoding module, an important shot selection module and an aesthetic shot assembly module; the multimodal data in the method comprise four kinds of data from three modalities, namely subtitle data of the text modality, scene text data input by the user, background music data of the audio modality and video frame data of the image modality; the aesthetic principles in the method cover three aspects: color continuity of video frames, video duration and integrity of video shots; the method comprises the following steps: S100: inputting the original video into the multimodal data extraction module, automatically obtaining subtitle data of the text modality, background music data of the audio modality and video frame data of the image modality, and receiving the scene text data input by the user; S200: inputting each kind of multimodal data into the multimodal feature encoding module for encoding, and outputting a feature-vector representation sequence for each modality; S300: inputting the feature-vector representation sequences into the important shot selection module, and extracting the highlight shots, representative shots, user-desired shots and narrative shots of the original video; S400: inputting the highlight shots, representative shots, user-desired shots and narrative shots into the aesthetic shot assembly module, screening out high-quality shots that follow the aesthetic principles and splicing them into a video summary.

2. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 1, wherein: in step S100, the multimodal data extraction module comprises an audio data extraction component, a video frame data extraction component, a subtitle data extraction component and a scene text data receiving component, wherein the audio data extraction component extracts the background music data from the original video based on the FFmpeg library; the video frame data extraction component saves every frame of the original video as a picture, implemented by capturing each frame as a separate image and storing it; the subtitle data extraction component is based on speech recognition: it recognizes the spoken sentences contained in the original video, records the times at which they appear on the video timeline, and saves all spoken sentences together with their times as plain text; the scene text data receiving component receives and saves the plain text entered by the user.

3. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 1, wherein: in step S200, the multimodal feature encoding module comprises an audio encoder, an image encoder and a text encoder, wherein the audio encoder is built on the fast Fourier transform and the Mel spectrum and encodes the background music data into waveform features; the image encoder encodes the video frame pictures into an image feature matrix based on a residual network; the text encoder uses a Transformer encoder and a bidirectional gated recurrent unit (GRU) encoder to encode the subtitle data and the scene text data into subtitle feature vectors and scene feature vectors, respectively.

4. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 1, wherein: in step S300, the important shot selection module comprises a highlight shot extraction component, a representative shot extraction component, a narrative shot extraction component and a user-desired shot extraction component.

5. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 1, wherein step S300 comprises: S301: the highlight shot extraction component obtains the highlight shots in the original video based on changes in the waveform features; S302: the representative shot extraction component, built on an optimized DSNet, selects a group of consecutive video frames from the original video as the representative shots according to the image feature matrix; S303: the narrative shot extraction component selects narrative subtitles from the subtitle feature vectors and extracts the shots of the original video that correspond to those subtitles, thereby obtaining the narrative shots; S304: the user-desired shot extraction component picks the image features in the image feature matrix that best match the scene feature vector and then obtains the user-desired shots from the selected image features.

6. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 5, wherein: the highlight shot extraction component in step S301 obtains the highlight shots in the original video from the change in the waveform features calculated by the following formula:

where HS denotes the highlight (climax) shots to be selected, T_x%(·) returns the segments ranked in the top x% of all audio segments, η_k denotes the range of values of k, and l is the duration of the video; assuming E_k is the value of the audio signal at time k and w is the segment duration, so that each audio segment spans from time k to k + w, the audio signal values accumulated over that interval give the acoustic energy of the segment, i.e., the value of the change in the waveform features.

7. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 5, wherein step S303 comprises: S3001: a text chapter division method based on TF-IDF similarity scores and K-means text clustering, used to automatically divide the subtitle data into chapters; and S3002: a pointer-network-based decoder decodes the subtitle feature vectors of the different chapters to select the important subtitle texts in each chapter, and the narrative shots corresponding to those subtitle texts are finally obtained from the important subtitle texts.

8. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 5, wherein step S304 comprises: S3003: a text similarity calculation component based on word co-occurrence and semantic similarity computes the similarity between the scene text data and the subtitle data and then creates a sub-video; S3004: a shot localization component based on visual-semantic grounding selects the shots in the sub-video that match the description given by the scene text data; these shots are the user-desired shots.

9. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 1, wherein in step S400 the aesthetic shot assembly module comprises: S401: a shot re-selection component based on the aesthetic principles, which selects high-quality shots from the highlight shots, representative shots, user-desired shots and narrative shots; S402: a shot assembly component, which assembles the high-quality shots selected by the shot re-selection component into a video summary.

10. The method for generating a video summary with a neural network based on multimodal data and aesthetic principles as claimed in claim 9, wherein step S401 comprises: merging the duplicate shots among the selected highlight shots, representative shots, user-desired shots and narrative shots to obtain a fused shot set without duplicates, selecting from the fused shots the high-quality shots that conform to the aesthetic principles, and finally splicing the selected high-quality shots into a complete video summary in the order of the original video timeline.

Technical Field

The invention belongs to the technical field of image and video processing, and in particular relates to a video summary generation method based on multimodal data and aesthetic principles.

Background

Narrative videos, such as documentaries, movies and scientific commentary, convey immersive visual information together with narrative subtitles, voice-overs and background music. As large numbers of narrative videos are uploaded to online social platforms, there is an urgent need for narrative video summaries that help viewers quickly browse and understand the content; such summaries are useful in movie trailers, knowledge popularization platforms and many other applications.

The main purpose of video summarization is to generate a short video that contains the most representative visual information in a given video. Generally, when compressing a relatively long video into a shorter version, the most representative shots should be selected and the shots should be coherently combined according to a certain artistic style, which requires an in-depth understanding of the video. In this context, various automatic video summarization methods have been introduced in the research field.

In recent years, with the rapid development of machine learning, deep neural networks have also been used to automatically generate video summaries. Gygli et al. at ETH Zurich developed a linear model that uses spatial and temporal saliency and locality information. Deep-learning-based methods have also been proposed, among which RNN-based methods are representative. In particular, the paper "TTH-RNN: Tensor-Train Hierarchical Recurrent Neural Network for Video Summarization", published by Zhao et al. of Xi'an Jiaotong University in IEEE Transactions on Industrial Electronics in 2020, explores the underlying structure of a video using a fixed-length hierarchical RNN and a hierarchical structure-adaptive LSTM, and has advanced the application of deep learning to video summary generation. However, while these methods can capture some important visual information from the original video, they share some common drawbacks. For example, during shot selection they consider image information only by searching for shot boundaries and treat the switched shot as the important content, ignoring the multimodal information of the original video. The generated video summary therefore loses a great deal of information and looks like a truncated version of the original video, with no coherent narrative.

In addition, it is very difficult to automatically generate a short and coherent summary for a long video, let alone to display the visual content a viewer is interested in. Some summarization schemes use specific patterns to select important shots, but few consider film aesthetic criteria during shot assembly, which significantly undermines the quality of the generated summary. Moreover, when existing summarization solutions are applied directly to narrative videos, multimodal information such as audio, video frames and subtitles is not well taken into account, so the generated summaries still suffer from problems such as discontinuous audio and discontinuous shot pictures, which further degrade their quality.

Disclosure of Invention

The invention aims to solve the problems that video summaries produced by existing generation methods lack narrative coherence and that their content is visually and aurally inconsistent. The invention provides a video summary generation method based on multimodal data and aesthetic principles: given only an original video as input, the system automatically produces a high-quality video summary by using aesthetic principles together with multimodal information in the original video such as audio, video frames and subtitles. The method comprises the following steps:

S100: inputting the original video into the multimodal data extraction module, automatically obtaining subtitle data of the text modality, background music data of the audio modality and video frame data of the image modality, and receiving the scene text data input by the user;

S200: inputting each kind of multimodal data into the multimodal feature encoding module for encoding, and outputting a feature-vector representation sequence for each modality;

S300: inputting the feature-vector representation sequences into the important shot selection module, and extracting the highlight shots, representative shots, user-desired shots and narrative shots of the original video.

S400: inputting the highlight shots, representative shots, user-desired shots and narrative shots into the aesthetic shot assembly module, screening out high-quality shots that follow the aesthetic principles and splicing them into a video summary. Compared with existing methods, the method improves the watchability and narrativity of the generated video summary.

In step S100 of the invention, the multimodal data extraction module comprises an audio data extraction component, a video frame data extraction component, a subtitle data extraction component and a scene text data receiving component. The audio data extraction component extracts the background music data from the original video based on the FFmpeg library; the video frame data extraction component saves every frame of the original video as a picture, implemented by capturing each frame as a separate image and storing it; the subtitle data extraction component is based on speech recognition: it recognizes the spoken sentences contained in the original video, records the times at which they appear on the video timeline, and saves all spoken sentences together with their times as plain text; the scene text data receiving component receives and saves the plain text entered by the user.
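As a concrete illustration of the audio and frame extraction components, the following is a minimal sketch that invokes the FFmpeg command-line tool from Python. The file names, output directory and the `extract_audio`/`extract_frames` helper names are illustrative assumptions, not part of the patented method.

```python
import subprocess
from pathlib import Path

def extract_audio(video_path: str, audio_path: str) -> None:
    """Extract the audio track (background music / speech) from the original video."""
    # -vn drops the video stream; the audio is decoded to WAV for later analysis.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-acodec", "pcm_s16le", audio_path],
        check=True,
    )

def extract_frames(video_path: str, frame_dir: str) -> None:
    """Save every frame of the original video as an individual picture."""
    Path(frame_dir).mkdir(parents=True, exist_ok=True)
    # %06d produces zero-padded frame indices: 000001.png, 000002.png, ...
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, f"{frame_dir}/%06d.png"],
        check=True,
    )

if __name__ == "__main__":
    extract_audio("original_video.mp4", "background_audio.wav")
    extract_frames("original_video.mp4", "frames")
```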

In step S200 of the invention, the multimodal feature encoding module comprises an audio encoder, an image encoder and a text encoder. The audio encoder is built on the fast Fourier transform and the Mel spectrum and encodes the background music data into waveform features; the image encoder encodes the video frame pictures into an image feature matrix based on a residual network; the text encoder uses a Transformer encoder and a bidirectional gated recurrent unit (GRU) encoder to encode the subtitle data and the scene text data into subtitle feature vectors and scene feature vectors, respectively.
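The following is a minimal sketch of the three encoders described above, using torchaudio for the Mel spectrogram, a torchvision ResNet as the image encoder, a Transformer encoder for subtitles and a bidirectional GRU for the scene text. The choice of ResNet-50, the layer sizes and the vocabulary size are illustrative assumptions, not the exact configuration of the invention.

```python
import torch
import torch.nn as nn
import torchaudio
import torchvision

class MultiModalEncoders(nn.Module):
    """Audio, image and text encoders mirroring the multimodal feature encoding module."""

    def __init__(self, vocab_size: int = 30000, embed_dim: int = 256, hidden_dim: int = 256):
        super().__init__()
        # Audio encoder: FFT-based Mel spectrogram producing waveform features.
        self.mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
        # Image encoder: residual network with the classification head removed.
        resnet = torchvision.models.resnet50(weights=None)
        self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])
        # Text encoders: Transformer encoder for subtitles, bidirectional GRU for scene text.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.subtitle_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.scene_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def encode_audio(self, waveform: torch.Tensor) -> torch.Tensor:
        return self.mel(waveform)                      # (batch, n_mels, time): waveform features

    def encode_frames(self, frames: torch.Tensor) -> torch.Tensor:
        return self.image_encoder(frames).flatten(1)   # image feature matrix, one row per frame

    def encode_subtitles(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.subtitle_encoder(self.embedding(token_ids))   # subtitle feature vectors

    def encode_scene_text(self, token_ids: torch.Tensor) -> torch.Tensor:
        outputs, _ = self.scene_encoder(self.embedding(token_ids))
        return outputs                                 # scene feature vectors
```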

The important shot selection module of the invention comprises a highlight shot extraction component, a representative shot extraction component, a narrative shot extraction component and a user-desired shot extraction component.

Step S300 of the invention comprises: S301: the highlight shot extraction component obtains the highlight shots in the original video based on changes in the waveform features; S302: the representative shot extraction component, built on an optimized DSNet, selects a group of consecutive video frames from the original video as the representative shots according to the image feature matrix; S303: the narrative shot extraction component selects narrative subtitles from the subtitle feature vectors and extracts the shots of the original video that correspond to those subtitles, thereby obtaining the narrative shots; S304: the user-desired shot extraction component picks the image features in the image feature matrix that best match the scene feature vector and then obtains the user-desired shots from the selected image features.

Further, the highlight shot extraction component in step S301 obtains the highlight shots in the original video from the change in the waveform features calculated by the following formula:

where HS denotes the highlight (climax) shots to be selected, T_x%(·) returns the segments ranked in the top x% of all audio segments, η_k denotes the range of values of k, and l is the duration of the video; assuming E_k is the value of the audio signal at time k and w is the segment duration, so that each audio segment spans from time k to k + w, the audio signal values accumulated over that interval give the acoustic energy of the segment, i.e., the value of the change in the waveform features.
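The original formula image is not reproduced in this text. Based on the symbol definitions above, one plausible reconstruction of the highlight-selection rule is the following; it is an assumption for readability, not a verbatim copy of formula (1):

```latex
HS = T_{x\%}\!\left(\left\{\; \sum_{i=k}^{k+w} E_i \;\middle|\; k \in \eta_k,\ \eta_k = [0,\, l-w] \right\}\right)
```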

Further, step S303 comprises: S3001: a text chapter division method based on TF-IDF similarity scores and K-means text clustering, used to automatically divide the subtitle data into chapters; and S3002: a pointer-network-based decoder decodes the subtitle feature vectors of the different chapters to select the important subtitle texts in each chapter, and the narrative shots corresponding to those subtitle texts are finally obtained from the important subtitle texts.
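A minimal sketch of the chapter-division step S3001, assuming scikit-learn is available; the fixed number of chapters and the grouping-by-cluster-label strategy are simplifying assumptions rather than the exact procedure of the invention.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def divide_into_chapters(subtitles: list[str], num_chapters: int = 5) -> list[list[str]]:
    """Group subtitle sentences into chapters by clustering their TF-IDF vectors."""
    tfidf = TfidfVectorizer().fit_transform(subtitles)            # (num_subtitles, vocab_size)
    labels = KMeans(n_clusters=num_chapters, n_init=10).fit_predict(tfidf)
    chapters: list[list[str]] = [[] for _ in range(num_chapters)]
    for sentence, label in zip(subtitles, labels):
        chapters[label].append(sentence)
    return chapters
```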

Further, step S304 comprises: S3003: a text similarity calculation component based on word co-occurrence and semantic similarity computes the similarity between the scene text data and the subtitle data and then creates a sub-video; S3004: a shot localization component based on visual-semantic grounding selects the shots in the sub-video that match the description given by the scene text data; these shots are the user-desired shots.

In step S400 of the invention, the aesthetic shot assembly module comprises: S401: a shot re-selection component based on the aesthetic principles, which selects high-quality shots from the highlight shots, representative shots, user-desired shots and narrative shots; S402: a shot assembly component, which assembles the high-quality shots selected by the shot re-selection component into a video summary.

Further, step S401 comprises: merging the duplicate shots among the selected highlight shots, representative shots, user-desired shots and narrative shots to obtain a fused shot set without duplicates, selecting from the fused shots the high-quality shots that conform to the aesthetic principles, and finally splicing the selected high-quality shots into a complete video summary in the order of the original video timeline.
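To make the duplicate-removal and splicing step concrete, the following sketch merges the four candidate shot sets, each shot represented as a (start, end) interval in seconds, into a duplicate-free fused set ordered along the original timeline. The interval representation and the example intervals are illustrative assumptions.

```python
def fuse_shots(*shot_sets: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Merge highlight, representative, user-desired and narrative shots,
    removing overlaps so no content is repeated in the summary."""
    shots = sorted(s for shot_set in shot_sets for s in shot_set)
    fused: list[tuple[float, float]] = []
    for start, end in shots:
        if fused and start <= fused[-1][1]:      # overlapping or touching shots are merged
            fused[-1] = (fused[-1][0], max(fused[-1][1], end))
        else:
            fused.append((start, end))
    return fused                                  # already ordered by the original timeline

# Example: four candidate sets fused into a single timeline-ordered shot list.
summary_shots = fuse_shots(
    [(10.0, 15.0)],                    # highlight shots
    [(12.0, 20.0), (40.0, 45.0)],      # representative shots
    [(60.0, 66.0)],                    # user-desired shots
    [(42.0, 50.0)],                    # narrative shots
)
```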

The technical method integrates visual content, subtitles and audio information into the shot selection process and builds a key shot selection module, a subtitle summarization module and a highlight extraction module. The key shot selection module and the highlight extraction module use image information and audio information, respectively, as supervision signals for selecting shots. In particular, to ensure the narrative capability of the generated summary, the subtitle summarization module considers the topic consistency of the original video over a period of time and selects shots with a text summarization method. In addition, to capture content the user is interested in, a visual-semantic matching module is built that accounts for the effect of semantic relatedness between the subtitles and the user-provided text on visual-semantic grounding. Furthermore, the solution automatically guarantees the integrity of shot content through several complementary strategies. Finally, following film aesthetic criteria, the selected shots are spliced under a series of constraints such as color continuity and shot length, which improves the overall quality of the generated summary.

Drawings

FIG. 1 is a flow diagram of the method for generating a video summary with a neural network based on multimodal data and aesthetic principles provided in one embodiment of the present disclosure;

FIG. 2 is a block diagram of the method for generating a video summary with a neural network based on multimodal data and aesthetic principles provided in one embodiment of the present disclosure;

FIG. 3 is a workflow of the highlight shot extraction component provided in an embodiment of the present disclosure;

FIG. 4 is a workflow of the user-desired shot extraction component provided in one embodiment of the present disclosure;

FIG. 5 is a workflow of the narrative shot extraction component provided in one embodiment of the present disclosure;

FIG. 6 is a workflow of the aesthetic shot assembly module provided in an embodiment of the present disclosure.

Table 1 compares the quality of the video summaries generated by the present method and by other existing methods in one embodiment of the present disclosure.

Detailed Description

In the big-data era, video websites are updated with large numbers of narrative videos every minute or even every second, and watching every video carefully is time-consuming and labor-intensive. In this situation, video summaries can save viewers a great deal of time and energy, improve their viewing efficiency, and play an important role in many applications such as movie trailers and knowledge popularization platforms.

In one embodiment, the model structure of the method for generating a video summary with a neural network based on multimodal data and aesthetic principles is disclosed; the model consists, from left to right, of a multimodal data extraction module, a multimodal feature encoding module, an important shot selection module and an aesthetic shot assembly module. The multimodal data in the method comprise four kinds of data from three modalities, namely subtitle data of the text modality, scene text data input by the user, background music data of the audio modality and video frame data of the image modality; the aesthetic principles in the method cover three aspects: color continuity of video frames, video duration and integrity of video shots. As shown in fig. 1, the method comprises the following steps:

s100: and inputting the original video into the multi-mode data extraction module, automatically obtaining caption data of a text mode, background music data of an audio mode and video frame data of an image mode, and inputting the scene text data through a user.

S200: inputting each kind of multimodal data into the multimodal feature encoding module for encoding, and outputting a feature-vector representation sequence for each modality.

S300: inputting the feature-vector representation sequences into the important shot selection module, and extracting the highlight shots, representative shots, user-desired shots and narrative shots of the original video.

S400: inputting the highlight shots, representative shots, user-desired shots and narrative shots into the aesthetic shot assembly module, screening out high-quality shots that follow the aesthetic principles and splicing them into a video summary. Compared with existing methods, the method improves the watchability and narrativity of the generated video summary.

Embodiments of the present invention are described in further detail below with reference to the accompanying drawings. Referring to fig. 3, the highlight shot extraction component obtains the highlight shots by monitoring fluctuations of the audio energy, and the extraction method is given by formula (1):

where HS denotes the highlight (climax) shots to be selected, T_x%(·) returns the segments ranked in the top x% of all audio segments, η_k denotes the range of values of k, and l is the duration of the video; assuming E_k is the value of the audio signal at time k and w is the segment duration, so that each audio segment spans from time k to k + w, the audio signal values accumulated over that interval give the acoustic energy of the segment, i.e., the value of the change in the waveform features.
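The following is a minimal sketch of this kind of audio-energy-based highlight selection: segment energies are accumulated over windows of length w, and segments whose energy falls in the top x% are kept. The non-overlapping window stride and the use of accumulated absolute amplitude as "energy" are simplifying assumptions.

```python
import numpy as np

def select_highlight_segments(audio: np.ndarray, sample_rate: int,
                              window_sec: float = 2.0, top_x: float = 0.1):
    """Return (start_sec, end_sec) of audio segments whose energy ranks in the top x%."""
    w = int(window_sec * sample_rate)
    starts = range(0, len(audio) - w, w)
    # Accumulate E_k .. E_{k+w} over each window: the segment's acoustic energy.
    energies = np.array([np.sum(np.abs(audio[k:k + w])) for k in starts])
    threshold = np.quantile(energies, 1.0 - top_x)
    return [(k / sample_rate, (k + w) / sample_rate)
            for k, e in zip(starts, energies) if e >= threshold]
```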

Referring to fig. 4, in one embodiment, the user-desired shot extraction component first calculates the similarity between the scene text data and the subtitle data using word co-occurrence and semantic similarity, thereby obtaining the subtitle data most similar to the scene text data. Next, the component picks the corresponding shots from the original video according to this most similar subtitle data. Finally, the component computes the degree of match between the image feature matrices of these shots and the scene feature vector, and selects the shot that best matches the scene feature vector as the user-desired shot. Here the word co-occurrence is the number of words that occur in both the scene text data and the subtitle data, and the semantic similarity is the distance between the subtitle feature vector and the scene feature vector in the vector space: the closer the distance, the more similar they are.
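This similarity can be illustrated with a short sketch: lexical overlap between the scene text and a subtitle plus cosine similarity between their feature vectors. Normalizing the co-occurrence count by the scene-text vocabulary and weighting the two terms equally are illustrative assumptions, not the invention's exact scoring function.

```python
import numpy as np

def word_cooccurrence(scene_text: str, subtitle: str) -> float:
    """Fraction of the scene-text words that also occur in the subtitle."""
    scene_words = set(scene_text.lower().split())
    shared = scene_words & set(subtitle.lower().split())
    return len(shared) / max(1, len(scene_words))

def semantic_similarity(scene_vec: np.ndarray, subtitle_vec: np.ndarray) -> float:
    """Cosine similarity between the scene feature vector and the subtitle feature vector."""
    return float(np.dot(scene_vec, subtitle_vec) /
                 (np.linalg.norm(scene_vec) * np.linalg.norm(subtitle_vec) + 1e-8))

def combined_similarity(scene_text, subtitle, scene_vec, subtitle_vec, alpha: float = 0.5) -> float:
    # alpha balances lexical overlap against vector-space closeness (assumed weighting).
    return alpha * word_cooccurrence(scene_text, subtitle) + \
           (1.0 - alpha) * semantic_similarity(scene_vec, subtitle_vec)
```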

Referring to fig. 5, in another embodiment, the narrative shot extraction component first automatically divides the subtitle data into chapters using the TF-IDF similarity score and K-means text clustering, and then decodes the divided chapters with a pointer-network-based decoder, thereby picking out the important subtitle texts S_i (0 ≤ i ≤ L) in the different chapters, where L is the number of chapters; finally, the narrative shots corresponding to these important subtitle texts are obtained.
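A minimal sketch of a pointer-network selection step of the kind described above: additive attention scores over a chapter's encoded subtitle vectors are turned into a pointer distribution, and the highest-scoring subtitle is selected. The hidden sizes, the zero-initialized decoder state and the single decoding step are simplifying assumptions rather than the invention's exact decoder.

```python
import torch
import torch.nn as nn

class PointerSelector(nn.Module):
    """Single-step pointer attention over one chapter's subtitle feature vectors."""
    def __init__(self, enc_dim: int = 512, dec_dim: int = 256):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, dec_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, dec_dim, bias=False)
        self.v = nn.Linear(dec_dim, 1, bias=False)

    def forward(self, subtitle_feats: torch.Tensor, dec_state: torch.Tensor) -> torch.Tensor:
        # subtitle_feats: (num_subtitles, enc_dim); dec_state: (dec_dim,)
        scores = self.v(torch.tanh(self.w_enc(subtitle_feats) + self.w_dec(dec_state))).squeeze(-1)
        return torch.softmax(scores, dim=0)    # pointer distribution over the subtitles

# Usage: pick the most important subtitle of one chapter.
selector = PointerSelector()
feats = torch.randn(12, 512)                   # encoded subtitles of one chapter
probs = selector(feats, torch.zeros(256))
best_subtitle_index = int(torch.argmax(probs))
```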

In another embodiment, the image feature matrix of the original video is input to the representative shot extraction component, which picks a group of consecutive video frames from the input image feature matrix to serve as the representative shot. In this embodiment, the representative shot extraction component is preferably built on DSNet.
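Without reproducing DSNet itself, the "group of consecutive frames" idea can be illustrated by a sketch that assumes a per-frame importance score (e.g. produced by a DSNet-like model) is already available and picks the contiguous window with the highest average score; the window length is an illustrative parameter.

```python
import numpy as np

def best_consecutive_window(frame_scores: np.ndarray, window: int = 120) -> tuple[int, int]:
    """Return (start_frame, end_frame) of the contiguous window with the highest total score."""
    if len(frame_scores) <= window:
        return 0, len(frame_scores)
    # Sliding-window sums via a cumulative sum, then pick the maximum.
    cumsum = np.concatenate(([0.0], np.cumsum(frame_scores)))
    window_sums = cumsum[window:] - cumsum[:-window]
    start = int(np.argmax(window_sums))
    return start, start + window
```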

Referring to fig. 6, in another embodiment, the aesthetic shot assembly module screens, from the highlight shots, representative shots, user-desired shots and narrative shots, the shots that conform to predefined aesthetic principles, and then stitches them together to output the video summary. In this embodiment, the predefined aesthetic principles are the color continuity of shots, the shot duration and the integrity of shots, where color continuity refers to the color continuity between two adjacent shots.
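To make the aesthetic screening concrete, the following sketch checks two of the three principles: shot-duration bounds and color continuity, measured here as the correlation between the color histograms of the last frame of one shot and the first frame of the next. The histogram measure, the thresholds and the use of OpenCV are illustrative assumptions, not the invention's exact criteria.

```python
import cv2
import numpy as np

def color_continuity(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Histogram correlation between the boundary frames of two adjacent shots (1.0 = identical colors)."""
    hists = []
    for frame in (frame_a, frame_b):
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
        hists.append(cv2.normalize(hist, hist).flatten())
    return float(cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL))

def passes_aesthetics(duration_sec: float, continuity: float,
                      min_len: float = 2.0, max_len: float = 15.0,
                      min_continuity: float = 0.3) -> bool:
    """Keep a shot only if its length is reasonable and it joins smoothly with its neighbour."""
    return min_len <= duration_sec <= max_len and continuity >= min_continuity
```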

Referring to table 1, in another embodiment, the method of the invention is compared with DR, HSA, VAS and DSN. The video summary generation method based on multimodal data and aesthetic principles designed by the invention effectively captures the important content of the original video and resolves the discontinuous voice-over problem that traditional methods struggle with, thereby providing a better viewing experience.

TABLE 1

Although the embodiments of the present invention have been described above with reference to the accompanying drawings, the present invention is not limited to the above-described embodiments and application fields, and the above-described embodiments are illustrative, instructive, and not restrictive. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto without departing from the scope of the invention as defined by the appended claims.
