Video content processing method and device

Document No.: 1470520    Publication date: 2020-02-21

Reading note: This technology, a method and device for processing video content (一种视频内容的处理方法及装置), was created by 王往 (Wang Wang) on 2018-08-07. Its main content is as follows: the embodiments of this application disclose a method and device for processing video content, wherein the method includes: acquiring a target video to be processed, and parsing scene segments contained in the target video; extracting key frames from the scene segments, and recognizing voice information of the target video to determine text information matched with the key frames; determining a picture layout adapted to the scene segment based on the content shown in the key frames and the text information; and filling the content shown by the key frames and the text information matched with the key frames into the picture layout to generate the cartoon content corresponding to the scene segment. The technical solution provided by this application can improve the attention paid to a video.

1. A method for processing video content, the method comprising:

acquiring a target video to be processed, and analyzing scene segments contained in the target video;

extracting key frames in the scene segment, and identifying voice information of the target video to determine text information matched with the key frames;

determining a picture layout adapted to the scene segment based on the content shown in the key frame and the text information;

and filling the content displayed by the key frame and the text information matched with the key frame into the picture layout so as to generate the cartoon content corresponding to the scene segment.

2. The method of claim 1, wherein parsing the scene segments contained in the target video comprises:

and determining a scene switching frame in the target video, and taking a video segment between two adjacent scene switching frames as a scene segment contained in the target video.

3. The method of claim 2, wherein determining a scene cut frame in the target video comprises:

determining a reference frame in the target video, and sequentially calculating the similarity between a video frame after the reference frame and the reference frame;

if the similarity between the current video frame in the target video and the reference frame is smaller than or equal to a specified threshold value, determining the current video frame as a scene switching frame;

and taking the current video frame as a new reference frame, and sequentially calculating the similarity between the video frame after the new reference frame and the new reference frame so as to determine the next scene switching frame according to the calculation result.

4. The method of claim 1, wherein extracting key frames in the scene segment comprises:

identifying scene features contained in video frames of the scene segments, wherein the scene features comprise at least one of expression features, action features and environment features;

comparing the scene features contained in the video frame with the feature templates in the scene feature set, and if the scene features contained in the video frame exist in the scene feature set, taking the video frame as a key frame of the scene segment.

5. The method of claim 4, wherein after the video frame is used as a key frame of the scene segment, the method further comprises:

determining a special effect pattern corresponding to the scene feature contained in the video frame, and adding the special effect pattern at the scene feature of the video frame.

6. The method of claim 1, wherein determining textual information that fits the key frame comprises:

determining a target time node where the key frame is located in the target video, and acquiring voice information corresponding to the target time node;

and identifying the acquired voice information as text information, and using the text information obtained by identification as the text information matched with the key frame, or using a sentence generated by refining according to the text information as the text information matched with the key frame.

7. The method according to claim 1, wherein a set of picture layouts is provided, and the picture layouts in the set of picture layouts are provided with emotion representative words; accordingly, determining a picture layout adapted to the scene segment comprises:

and identifying character expression characteristics contained in the key frame, and taking the picture format corresponding to the emotion representative words used for representing the character expression characteristics in the picture format set as the picture format matched with the scene segment.

8. The method according to claim 1, wherein the picture layout adapted to the scene segment comprises a plurality of picture pit positions, wherein the size of the picture pit positions is determined as follows:

and determining a degree value corresponding to the scene features contained in the key frame, and adjusting the size of the area occupied by the picture pit position corresponding to the key frame in the picture format according to the degree value.

9. The method of claim 1, wherein in identifying the voice information of the target video, the method further comprises: determining the voice characteristics of the voice information, wherein the voice characteristics comprise at least one of speed, tone and volume; accordingly, the filling of the text information adapted to the key frame into the picture layout comprises:

determining the space of characters in the text information matched with the key frame according to the speech speed represented by the speech characteristics, and displaying the text information in the picture format according to the space of the characters;

determining a keyword in the text information according to the tone of the voice characteristic representation, and displaying the keyword in the picture format in a display form different from other characters in the text information;

and determining a text frame matched with the text information according to the tone and/or volume represented by the voice characteristics, filling the text information into the text frame, and displaying the text frame filled with the text information into the picture format.

10. The method of claim 1, wherein if the speech information of the target video includes background music, determining the text information adapted to the key frame comprises:

identifying lyrics contained in the background music and using the lyrics as text information matched with the key frames;

and/or

And identifying the melody of the background music, and using the text information for representing the melody as the text information matched with the key frame.

11. An apparatus for processing video content, the apparatus comprising:

the scene segment analysis unit is used for acquiring a target video to be processed and analyzing scene segments contained in the target video;

the graphic text determination unit is used for extracting key frames in the scene segments and identifying the voice information of the target video so as to determine text information matched with the key frames;

the picture layout determining unit is used for determining a picture layout matched with the scene clip based on the content displayed in the key frame and the text information;

and the cartoon content generating unit is used for filling the content displayed by the key frames and the text information matched with the key frames into the picture format so as to generate the cartoon content corresponding to the scene clip.

12. The apparatus of claim 11, wherein the graphic text determination unit comprises:

the scene feature identification module is used for identifying scene features contained in the video frames of the scene segments, wherein the scene features comprise at least one of expression features, action features and environment features;

and the key frame determining module is used for comparing the scene features contained in the video frames with the feature templates in the scene feature set, and if the scene features contained in the video frames exist in the scene feature set, taking the video frames as a key frame of the scene segment.

13. The apparatus according to claim 11, wherein the apparatus is provided with a set of picture layouts, the picture layouts in the set of picture layouts are provided with emotion representative words; accordingly, the picture layout determining unit includes:

and the expression recognition module is used for recognizing the character expression characteristics contained in the key frame and taking the picture format corresponding to the emotion representative words used for representing the character expression characteristics in the picture format set as the picture format matched with the scene segment.

14. The apparatus according to claim 11, wherein the picture layout adapted to the scene segment comprises a plurality of picture pit positions; accordingly, the picture layout determining unit includes:

and the pit position adjusting module is used for determining a degree value corresponding to the scene features contained in the key frame and adjusting the size of the area occupied by the picture pit position corresponding to the key frame in the picture format according to the degree value.

15. The apparatus of claim 11, wherein the graphic text determination unit comprises:

the voice characteristic determining module is used for determining the voice characteristics of the voice information, wherein the voice characteristics comprise at least one of speed, tone and volume; accordingly, the cartoon content generating unit includes:

the text space determining module is used for determining the space of the text information matched with the key frame according to the speech speed represented by the speech characteristic, and displaying the text information in the picture format according to the space of the text;

the keyword determining module is used for determining keywords in the text information according to the tone of the voice characteristic representation, and displaying the keywords in the picture format in a display form different from other characters in the text information;

and the text frame determining module is used for determining a text frame matched with the text information according to the tone and/or volume represented by the voice characteristics, filling the text information into the text frame, and then displaying the text frame filled with the text information into the picture format.

16. An apparatus for processing video content, the apparatus comprising a memory and a processor, the memory being configured to store a computer program which, when executed by the processor, implements the method of any of claims 1 to 10.

Technical Field

The present application relates to the field of internet technologies, and in particular, to a method and an apparatus for processing video content.

Background

At present, some of the videos on video playing websites are long, so the time cost a user must invest to watch them is relatively high. For example, a movie on a video playing website may be up to 3 hours long, and if the user does not know the movie in advance, the user may choose not to watch it. As a result, the videos on the video playing website cannot attract users well, and the promotion benefit is low. Therefore, there is a need for a more novel way to promote the content of long videos to users.

Disclosure of Invention

The embodiment of the application aims to provide a method and a device for processing video content, which can improve the attention paid to a video.

In order to achieve the above object, an embodiment of the present application provides a method for processing video content, where the method includes: acquiring a target video to be processed, and analyzing scene segments contained in the target video; extracting key frames in the scene segment, and identifying voice information of the target video to determine text information matched with the key frames; determining a picture layout adapted to the scene segment based on the content shown in the key frame and the text information; and filling the content displayed by the key frame and the text information matched with the key frame into the picture layout so as to generate the cartoon content corresponding to the scene segment.

In order to achieve the above object, an embodiment of the present application further provides an apparatus for processing video content, the apparatus including: the scene segment analysis unit is used for acquiring a target video to be processed and analyzing scene segments contained in the target video; the graphic text determination unit is used for extracting key frames in the scene segments and identifying the voice information of the target video so as to determine text information matched with the key frames; the picture layout determining unit is used for determining a picture layout matched with the scene clip based on the content displayed in the key frame and the text information; and the cartoon content generating unit is used for filling the content displayed by the key frames and the text information matched with the key frames into the picture format so as to generate the cartoon content corresponding to the scene clip.

To achieve the above object, the present application further provides an apparatus for processing video content, the apparatus includes a memory and a processor, the memory is used for storing a computer program, and the computer program is executed by the processor to implement the above method.

Therefore, according to the technical scheme provided by the application, after the target video to be processed is obtained, the scene segments contained in the target video can be analyzed. For example, the target video may be parsed into scene segments such as emotional play, action play, quest play, and the like. Then, key frames in the scene segment can be extracted, which can reflect the environment and characters of the scene segment. Then, text information matched with the key frame can be determined by recognizing the voice information of the target video. The text information may be, for example, a dialog of a person in the key frame, or lyrics of background music in the key frame, or the like. Then, based on the content displayed in the key frame and the recognized text information, a picture layout adapted to the scene segment can be determined. For example, if the content displayed in the key frame is the emotional games of two characters, the selected frame format can have bright and vivid colors. For another example, if the recognized text information represents an angry emotion, the text frame in the frame format may be a text frame with a larger size and exhibiting a special explosion effect, so as to match the emotion represented by the text information. After the frame layout is determined, the content displayed by the key frame and the corresponding text information can be filled in the frame layout. Thus, through the above processing, the content expressed by the video can be presented through the brief cartoon content. On one hand, such a presentation form can more easily arouse the interest of the user, thereby improving the click rate; on the other hand, the long video is converted into the cartoon with shorter space, so that the time cost input by the user can be reduced, and the attention of the long video is further improved.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.

FIG. 1 is a diagram illustrating steps of a method for processing video content according to an embodiment of the present disclosure;

FIG. 2 is a flow chart of a method for processing video content according to an embodiment of the present disclosure;

FIG. 3 is a diagram illustrating an effect of cartoon content according to an embodiment of the present disclosure;

FIG. 4 is a functional block diagram of a video content processing apparatus according to an embodiment of the present disclosure;

fig. 5 is a schematic structural diagram of a video content processing apparatus according to an embodiment of the present application.

Detailed Description

In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application shall fall within the scope of protection of the present application.

The video content processing method provided by the application can be applied to a server of a video playing website or can be applied to independent video processing equipment. Referring to fig. 1 and 2, the method may include the following steps.

S1: the method comprises the steps of obtaining a target video to be processed and analyzing scene segments contained in the target video.

In this embodiment, the target video may be a video stored in a video playback website server, and the manner of acquiring the target video may be downloading the target video from a video playback website. The target video may also be a video stored in a storage medium, and the manner of acquiring the target video may be reading the target video from the storage medium.

In this embodiment, a video may be composed of one or more scene segments. Within the same scene segment, the characters and environment are generally similar across different video frames, so the video frames in the same scene segment have a high similarity to one another. A scene segment may be, for example, a segment of emotional play, a segment of action play, a segment of quest play, and the like. Where the video has a scenario script, the video may be divided into a plurality of scene segments according to the arrangement and duration of the scenes in the script. For example, in the script, a scene identifier may be labeled for each scene segment, and the scene identifier may correspond to a start time point and an end time point in the video. In this way, the video content within the time period corresponding to the scene identifier can be used as a scene segment.
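Where a scenario script with labeled scene identifiers is available, the splitting step reduces to slicing the video along the marked time points. The following is a minimal Python sketch of that idea; the `SceneSegment` type and the tuple format of the scenario input are illustrative assumptions, not structures defined by this application.

```python
# Minimal sketch (not the patent's implementation): turning a scenario script
# whose scene identifiers carry start/end time points into scene segments.
from dataclasses import dataclass

@dataclass
class SceneSegment:
    scene_id: str   # scene identifier taken from the scenario script
    start_s: float  # start time point in the video, in seconds
    end_s: float    # end time point in the video, in seconds

def segments_from_scenario(scenario):
    """scenario: iterable of (scene_id, start_s, end_s) tuples from the script."""
    return [SceneSegment(sid, start, end) for sid, start, end in scenario]

if __name__ == "__main__":
    script = [("S01-emotional", 0.0, 312.5), ("S02-action", 312.5, 540.0)]
    for segment in segments_from_scenario(script):
        print(segment)
```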

In one embodiment, without the transcript of the video, the scene segments in the target video may be identified by means of image recognition. Specifically, the same scene in the target video may be determined by a scene cut frame in the target video. The scene cut frame may be a video frame between two adjacent different scenes in the target video. In this way, when a scene segment is parsed, a scene cut frame can be determined in the target video, and a video segment between two adjacent scene cut frames is taken as a scene segment contained in the target video. In order to obtain scene change frames corresponding to each scene of the target video, the scene change frames may be extracted by frame-by-frame comparison in the present embodiment. Specifically, a reference frame may be determined in the target video, and the similarity between each video frame subsequent to the reference frame and the reference frame may be calculated sequentially.

In this embodiment, the reference frame may be a frame of a picture randomly designated within a certain range. For example, the reference frame may be a frame of picture randomly selected within 2 minutes of the beginning of the target video. Of course, in order not to miss a scene in the target video, the first frame of the target video may be used as the reference frame.

In this embodiment, after the reference frame is determined, each frame picture after the reference frame may be sequentially compared with the reference frame from the reference frame to calculate the similarity between each subsequent frame picture and the reference frame. Specifically, when calculating the similarity between each video frame and the reference frame, the first feature vector and the second feature vector of the reference frame and the current frame may be extracted, respectively.

In this embodiment, the first feature vector and the second feature vector may take various forms. The feature vector of each frame can be constructed based on the pixel values of the pixel points in that frame. Each frame is usually formed by a plurality of pixel points arranged in a certain order, each with its own pixel value, which together form the picture. The pixel value may be a numerical value within a specified interval. For example, the pixel value may be a gray scale value, which may be any value from 0 to 255, with the magnitude of the value representing the shade of gray. Of course, the pixel value may also be the respective values of several color components in another color space. For example, in the RGB (Red, Green, Blue) color space, the pixel values may include an R component value, a G component value, and a B component value.

In this embodiment, the pixel values of the pixel points in each frame can be obtained, and the feature vector of that frame is formed from the obtained pixel values. For example, for a current frame having 9 × 9 = 81 pixels, the pixel values may be obtained sequentially and then arranged in order from left to right and from top to bottom, forming an 81-dimensional vector. This 81-dimensional vector can be used as the feature vector of the current frame.

In this embodiment, the feature vector may be a CNN (Convolutional neural network) feature of each frame. Specifically, the reference frame and each frame picture after the reference frame may be input into a convolutional neural network, and then the convolutional neural network may output the feature vectors corresponding to the reference frame and each other frame picture.

In this embodiment, in order to accurately represent the contents shown in the reference frame and the current frame, the first feature vector and the second feature vector may represent scale-invariant features of the reference frame and the current frame, respectively. In this way, even if the rotation angle, the image brightness or the shooting angle of view of the image is changed, the contents in the reference frame and the current frame can still be well embodied by the extracted first feature vector and the second feature vector. Specifically, the first feature vector and the second feature vector may be a Scale-Invariant Feature Transform (SIFT) feature, a Speeded-Up Robust Features (SURF) feature, a color histogram feature, or the like.

In this embodiment, after the first feature vector and the second feature vector are determined, the similarity between the first feature vector and the second feature vector may be calculated. In particular, the similarity may be expressed in vector space as a distance between two vectors. The closer the distance, the more similar the two vectors are represented, and thus the higher the similarity. The further the distance, the greater the difference between the two vectors and hence the lower the similarity. Therefore, in calculating the similarity between the reference frame and the current frame, the spatial distance between the first feature vector and the second feature vector may be calculated, and the reciprocal of the spatial distance may be taken as the similarity between the reference frame and the current frame. Thus, the smaller the spatial distance, the greater the corresponding similarity, which indicates the more similarity between the reference frame and the current frame. Conversely, the greater the spatial distance, the less similarity it corresponds, indicating that there is more dissimilarity between the reference frame and the current frame.

In this embodiment, the similarity between each video frame subsequent to the reference frame and the reference frame may be sequentially calculated in the above manner. In order to determine different scenes in the target video, in the present embodiment, when the similarity between the reference frame and the current frame is less than or equal to a specified threshold, the current frame may be determined as a scene change frame. The designated threshold may be a preset value, and the value may be flexibly adjusted according to actual conditions. For example, when the number of scene change frames screened out according to the specified threshold is too large, the size of the specified threshold may be appropriately reduced. For example, when the number of scene change frames to be filtered out based on the predetermined threshold is too small, the size of the predetermined threshold may be increased as appropriate. In this embodiment, the similarity being less than or equal to the predetermined threshold may indicate that the contents in the two frames are significantly different, and therefore, it may be considered that the scene shown in the current frame is changed from the scene shown in the reference frame. At this time, the current frame can be reserved as a frame of picture for scene switching.

In this embodiment, when the current frame is determined as one scene switching frame, the subsequent other scene switching frames may be continuously determined. Specifically, from the reference frame to the current frame, it can be considered that a scene has changed once, and thus the current scene is the content shown by the current frame. Based on this, the current frame can be used as a new reference frame, and the similarity between each video frame after the new reference frame and the new reference frame is sequentially calculated, so that the next scene switching frame is determined according to the calculated similarity. Similarly, when determining the next scene switching frame, the similarity between two frames of pictures can still be determined by extracting the feature vector and calculating the spatial distance, and the determined similarity can still be compared with the specified threshold, so as to determine the next scene switching frame in which the scene changes again after the new reference frame.

In this embodiment, in the above manner, each scene change frame may be sequentially extracted from the target video, so that a video frame between two adjacent scene change frames may be used as a same scene frame, and these same scene frames constitute a same scene segment in the target video.
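The following Python sketch illustrates the image-based segmentation described above, under simplifying assumptions: frames are supplied as equal-sized grayscale numpy arrays, the feature vector is simply the flattened pixel values, and similarity is taken as the reciprocal of the Euclidean distance between two such vectors. The threshold value is arbitrary and would be tuned as discussed above.

```python
# Sketch of scene-cut detection: a frame whose similarity to the current
# reference frame falls to or below a threshold is kept as a scene cut frame
# and becomes the new reference frame; segments between cuts are scene segments.
import numpy as np

def frame_similarity(frame_a, frame_b):
    """Reciprocal of the spatial distance between the two feature vectors."""
    va = frame_a.astype(np.float64).ravel()
    vb = frame_b.astype(np.float64).ravel()
    dist = np.linalg.norm(va - vb)
    return float("inf") if dist == 0 else 1.0 / dist

def find_scene_cut_frames(frames, threshold=0.01):
    """frames: list of grayscale arrays; returns indices of scene cut frames."""
    if not frames:
        return []
    cut_indices = []
    ref = frames[0]                        # first frame used as the reference frame
    for i in range(1, len(frames)):
        if frame_similarity(ref, frames[i]) <= threshold:
            cut_indices.append(i)          # scene changed: keep as a cut frame
            ref = frames[i]                # the cut frame becomes the new reference
    return cut_indices

def split_into_scene_segments(frames, threshold=0.01):
    """Video frames between adjacent scene cut frames form one scene segment."""
    cuts = [0] + find_scene_cut_frames(frames, threshold) + [len(frames)]
    return [frames[a:b] for a, b in zip(cuts, cuts[1:]) if b > a]
```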

S3: and extracting key frames in the scene segment, and identifying voice information of the target video to determine text information matched with the key frames.

In this embodiment, after one or more scene clips are identified from the target video, since the scene clips include a large number of video frames, a small number of key frames can be extracted from the scene clips, and the content displayed by the scene clips can be represented by a small number of key frames.

In the present embodiment, when extracting a key frame from a scene segment, a comprehensive consideration can be given to the people and the environment in the video frame. In particular, scene features contained in video frames of the scene segment may be identified. The scene features may include an expressive feature of a character, an action feature of a character, an environmental feature, and the like. The expression features of the characters can be divided into various expression features such as happiness, sadness, anger, startle and the like according to the current dividing rule of the emotion. The character's motion characteristics may also be divided into fighting, intimacy, and so on. Environmental characteristics can be classified as dangerous, comfortable, etc.

In the present embodiment, a scene feature set may be set in advance, and the scene features included in the scene feature set are all scene features with a clear style. For example, for expression features, those included in the scene feature set are often the ones with large facial fluctuations, such as laughter, pain, fright, and the like. Similarly, for action features and environment features, the scene feature set also includes features that reflect exaggerated limb actions or distinctive environments. In the scene feature set, each scene feature may have a feature template. The feature template may be a digital vector, which may be obtained by modeling the scene feature with current digital modeling techniques. For example, for expression features, the facial features may be analyzed by a facial modeling technique to obtain a digital vector representing the distribution positions of the facial features and the angles between them. In addition, the scene feature set may include feature templates for the action features and the environment features, each likewise represented by a digital vector. For example, for a person's action, it is possible to determine what action state the person is currently in by recognizing the person's head, limbs, and torso, and the current states of the head, torso, and limbs can be represented by a digital vector. For example, the digital vector may include 6 elements corresponding to the head, the torso, and the four limbs, respectively, and each element may take multiple values, each value corresponding to an action state. For example, when the value of the element representing the head is 0, the head is upright; when the value is 1, the head is tilted to the left. In this way, the action characteristics of the person can be uniquely characterized by a digital vector, so that each scene feature in the scene feature set can be represented by a corresponding digital vector, and this digital vector can serve as the feature template of that scene feature.
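As an illustration of the 6-element digital vector just described, the sketch below encodes an action state with one element each for the head, the torso, and the four limbs. The specific value meanings (0 for upright, 1 for tilted left, and so on) are hypothetical examples in the spirit of the description, not a coding scheme fixed by this application.

```python
# Illustrative 6-element digital vector for an action state: one element per
# body part, each integer value standing for one posture of that part.
HEAD, TORSO, LEFT_ARM, RIGHT_ARM, LEFT_LEG, RIGHT_LEG = range(6)

HEAD_STATES = {0: "upright", 1: "tilted left", 2: "tilted right"}  # assumed codes

def describe_action_vector(vec):
    """vec: list of 6 ints, one assignment per body part."""
    assert len(vec) == 6, "expected one element per body part"
    return {"head": HEAD_STATES.get(vec[HEAD], "unknown"), "raw": list(vec)}

if __name__ == "__main__":
    print(describe_action_vector([1, 0, 2, 2, 0, 0]))  # e.g. head tilted left
```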

In the present embodiment, the scene features included in the scene feature set are all distinctive features. Therefore, by identifying whether a video frame of the scene segment contains a scene feature from the scene feature set, it can be determined whether the current video frame clearly expresses the content of the scene segment. Specifically, after the scene features contained in the video frames of the scene segment are identified, they may be compared with the feature templates in the scene feature set, and if a scene feature contained in a video frame exists in the scene feature set, that video frame may be used as a key frame of the scene segment. When two scene features are compared, the similarity between their digital vectors may be calculated; if the calculated similarity is greater than or equal to a specified similarity threshold, the current scene feature may be considered to be included in the scene feature set.

It should be noted that the same scene feature may appear in multiple consecutive video frames at the same time. In this case, only one video frame needs to be randomly selected from the multiple consecutive video frames as a key frame; the remaining video frames do not need to be extracted as key frames.
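A minimal sketch of this key-frame selection is given below. It assumes each frame has already been reduced to a list of scene-feature vectors, uses cosine similarity as one reasonable choice of similarity measure, and keeps only one key frame per run of consecutive frames that match the same feature template.

```python
# Sketch of key-frame selection by comparing scene-feature vectors against the
# feature templates in a preset scene feature set.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b / denom)

def select_key_frames(frame_features, templates, sim_threshold=0.9):
    """frame_features: list (one entry per frame) of lists of feature vectors.
    templates: dict name -> template vector (the scene feature set).
    Returns (frame_index, template_name) pairs, one per matched feature run."""
    key_frames, last_match = [], None
    for idx, features in enumerate(frame_features):
        match = None
        for vec in features:
            for name, tpl in templates.items():
                if cosine_similarity(vec, tpl) >= sim_threshold:
                    match = name
                    break
            if match:
                break
        if match and match != last_match:   # skip repeats in consecutive frames
            key_frames.append((idx, match))
        last_match = match
    return key_frames
```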

Thus, in the manner described above, one or more key frames can be extracted from a scene segment. In one embodiment, after a key frame is determined, it may be post-processed. Specifically, a special effect pattern corresponding to a scene feature contained in the video frame may be determined, and the special effect pattern may be added at the position of that scene feature in the video frame. Different special effect patterns can be added for different scene features. For example, for expression features, cartoon special effects representing emotions such as anger, surprise, and fear can be added. As another example, for action features, a cartoon special effect may be added to represent a blow, running, a fall, and the like. In this embodiment, the special effect patterns described above may be stored in a special effect pattern library in association with pattern identifiers. A pattern identifier may be, for example, a word that characterizes the content of the special effect pattern, such as "anger", "running", or "crying". Of course, these words can also be represented in the special effect pattern library by digital codes, with an association established between the digital codes and the word semantics. For example, the digital code "01" may represent the word "crying", and the digital code "02" may represent the word "smiling". Therefore, according to the scene features identified in the key frame, the special effect pattern matching the meaning of the scene feature can be looked up in the special effect pattern library, so that the corresponding special effect pattern can be added to the scene feature in the key frame, enhancing the interest and dynamic effect of the picture.
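The special effect pattern library can be sketched as a simple lookup table from pattern identifiers (or digital codes mapped to words) to effect assets, as below. The asset paths, and any code assignments beyond the "01"/"02" examples in the text, are made up for illustration.

```python
# Sketch of the special effect pattern library: pattern identifier -> asset,
# with digital codes mapped to the words they stand for.
CODE_TO_WORD = {"01": "crying", "02": "smiling"}   # example codes from the text

PATTERN_LIBRARY = {            # pattern identifier -> asset path (hypothetical)
    "anger":   "effects/anger_burst.png",
    "crying":  "effects/tear_drops.png",
    "smiling": "effects/sparkle.png",
}

def lookup_effect(scene_feature_word):
    """Return the effect asset for a scene feature word, or None if none matches."""
    return PATTERN_LIBRARY.get(scene_feature_word)

def lookup_effect_by_code(code):
    return lookup_effect(CODE_TO_WORD.get(code, ""))

if __name__ == "__main__":
    print(lookup_effect("anger"))        # effects/anger_burst.png
    print(lookup_effect_by_code("01"))   # effects/tear_drops.png
```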

In this embodiment, after the key frames are determined, in order to make the user know the story line occurring in the key frames, the key frames need to be matched with corresponding text information. Therefore, the voice information of the target video can be recognized, and the text information matched with the key frame is determined according to the recognized voice information. The voice information may be a conversation of a person, or may be music of a background, or the like. Specifically, in one embodiment, a target time node where the key frame is located in the target video may be determined, and voice information corresponding to the target time node may be acquired. The voice information corresponding to the target time node may refer to voice information occurring within a period of time with the target time node as a center. For example, if the target time node where the key frame is located is 45 minutes 26 seconds, the speech information from 45 minutes 23 seconds to 45 minutes 30 seconds can be used as the speech information corresponding to the target time node. In this way, the acquired voice information can be recognized as text information by a voice recognition technique. The recognized text information may then be used as the text information adapted to the key frame. In some cases, the recognized text information may be too lengthy to be completely displayed in the limited area of the key frame, so that a concise sentence may be abstracted from the text information, and the abstracted sentence may be used as the text information adapted to the key frame. The text information may be a dialogue between persons, a voice guide for representing environmental characteristics, lyrics of current background music, or a special sound effect corresponding to motion characteristics. For example, if the speech information of the target video includes background music, then when determining the text information adapted to the key frame, lyrics contained in the background music may be identified and used as the text information adapted to the key frame. Furthermore, it is also possible to recognize the melody of the background music and to use the text information for characterizing the melody as the text information adapted to the key frame. For example, if the melody of the background music is relatively smooth, the text information matching the current background music can be determined to be "youthful" according to the preset association relationship between the melody and the text information.
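A minimal sketch of pairing a key frame with speech is shown below: an audio window roughly centred on the key frame's time node is passed to a speech recognizer, and over-long results are shortened. The `recognize_speech` function is a hypothetical stand-in for whatever recognition service is used, and the simple truncation is only one possible way to refine a long sentence.

```python
# Sketch of determining text information for a key frame from the speech around
# its target time node.
def speech_window(key_frame_time_s, before_s=3.0, after_s=4.0):
    """45 min 26 s with the defaults gives roughly 45:23 .. 45:30, as in the text."""
    return max(0.0, key_frame_time_s - before_s), key_frame_time_s + after_s

def recognize_speech(audio_track, start_s, end_s):
    """Hypothetical ASR hook: return the recognized text for the given span."""
    raise NotImplementedError("plug in a real speech-recognition backend here")

def text_for_key_frame(audio_track, key_frame_time_s, max_chars=40):
    start_s, end_s = speech_window(key_frame_time_s)
    text = recognize_speech(audio_track, start_s, end_s)
    # If the recognized text is too long for the panel, fall back to a shorter
    # sentence (here simply truncated; the application leaves the method open).
    return text if len(text) <= max_chars else text[:max_chars - 1] + "…"
```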

In one embodiment, the text information can be presented in various ways, and the recognized text information can be presented in different forms according to the voice characteristics of the voice information. Specifically, when the voice information is recognized, its voice characteristics may be determined at the same time; the voice characteristics may include, for example, speech speed, intonation, tone, volume, and the like. The spacing of the characters in the text information matched with the key frame may be determined according to the speech speed represented by the voice characteristics. For example, the slower the speech, the larger the character spacing may be. Keywords in the text information may be determined according to the intonation represented by the voice characteristics. Specifically, the intonation of a keyword is usually different from that of the other words: when a person speaks a line, one or more words may be stressed, or their intonation raised or lowered, so words with a change in intonation can be identified and used as keywords in the text information. When displayed, the keywords can be shown in a form different from the other characters in the text information. In addition, a text frame matched with the text information can be determined according to the tone and/or volume represented by the voice characteristics. The text frame may be a bubble, as shown in FIG. 3, for containing the text information. When the volume of the voice information is high, a text frame with a more prominent appearance can be adopted. For example, in FIG. 3, when the character shouts an exclamation, the volume reaches a specified threshold, so the speech is considered loud, and the bubble with the jagged (concave and convex) edge shown in FIG. 3 can be selected. As another example, when the character delivers a line in an unfriendly tone, a bubble with a jagged edge can likewise be selected. In this way, the characteristics of a character's speech can be conveyed in a static image through the appearance of different text frames, so the user can understand the character's emotion in the picture.
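The mapping from voice characteristics to presentation choices can be sketched as follows: slower speech widens the character spacing, words whose pitch departs from the rest of the line become highlighted keywords, and loud lines get a spiky bubble. All thresholds, field names, and the two bubble styles are illustrative assumptions.

```python
# Sketch of deriving text presentation from speech features.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TextStyle:
    char_spacing: float
    keywords: List[str] = field(default_factory=list)
    bubble: str = "round"            # "round" or "spiky"

def style_from_speech(words, speech_rate_wps, word_pitches, volume_db,
                      base_rate=3.0, loud_db=70.0, pitch_jump=0.3):
    """words: recognized words; word_pitches: one relative pitch per word."""
    spacing = max(1.0, base_rate / max(speech_rate_wps, 0.1))   # slower -> wider
    mean_pitch = sum(word_pitches) / len(word_pitches)
    keywords = [w for w, p in zip(words, word_pitches)
                if abs(p - mean_pitch) >= pitch_jump]           # intonation shift
    bubble = "spiky" if volume_db >= loud_db else "round"       # loud -> spiky
    return TextStyle(spacing, keywords, bubble)

if __name__ == "__main__":
    print(style_from_speech(["what", "are", "you", "doing"], 1.5,
                            [0.1, 0.0, 0.9, 0.1], 78.0))
```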

S5: and determining a picture format matched with the scene segment based on the content displayed in the key frame and the text information.

In this embodiment, after determining a key frame in a scene segment and identifying text information matching the key frame, a picture format adapted to the scene segment may be determined. The picture format may include a background color/background pattern of the picture, a pit shape included in the picture, and the like. In practical application, a picture layout set can be preset, and picture layouts in the picture layout set can have respective theme styles. Wherein the theme styles may be associated with the emotion representative words. The emotion representative words may include words such as "happy", "thriller", "romantic", and the like. In this way, after the character expression features included in the key frame are identified, the picture layout corresponding to the emotion representative words used for representing the character expression features in the picture layout set can be used as the picture layout matched with the scene segment. For example, if the emotion representative word corresponding to the expression feature of the character is "joy", the picture layout representing joy can be determined in the picture layout set, and then a picture layout adapted to the scene segment is randomly determined from the picture layouts. It should be noted that sometimes the emotion representative determined according to the expression features of the person may not be the same as the emotion representative in the picture layout set, but the expressed semantics are the same. In this case, after determining the emotion representative word corresponding to the expression feature of the person, an emotion representative word that is the same as or similar to the emotion representative word in semantic meaning may be determined in the picture layout set, so that a suitable picture layout may be further selected.
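Choosing a picture layout by emotion representative word can be sketched as below: each layout in a preset set carries an emotion word, the word read from the key frame is mapped to a canonical form through a small synonym table, and one matching layout is picked at random. The layout names, emotion words, and synonym table are made-up examples.

```python
# Sketch of picking a picture layout from the preset layout set by emotion word.
import random

LAYOUT_SET = [
    {"name": "bright_two_panel", "emotion": "joy"},
    {"name": "dark_jagged",      "emotion": "thriller"},
    {"name": "soft_vignette",    "emotion": "romantic"},
]

SYNONYMS = {"happy": "joy", "delight": "joy", "scary": "thriller"}  # assumed table

def pick_layout(emotion_word, layout_set=LAYOUT_SET, rng=random):
    canonical = SYNONYMS.get(emotion_word, emotion_word)
    candidates = [layout for layout in layout_set if layout["emotion"] == canonical]
    return rng.choice(candidates) if candidates else None

if __name__ == "__main__":
    print(pick_layout("happy"))   # one of the "joy" layouts
```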

In this embodiment, the picture layout may include picture pit positions matching the number of key frames, and these picture pit positions may be filled with the key frames and the corresponding text information. In practical applications, a key frame that represents a main plot point of the video should be given a relatively large picture pit position. In view of this, a degree value corresponding to the scene features contained in the key frame may be determined. Specifically, the degree value may reflect how exaggerated the character's expression is and how large the character's action amplitude is. The more exaggerated the expression and the larger the action amplitude, the stronger the current dramatic conflict, and therefore the larger the corresponding degree value. The size of the area occupied by the picture pit position corresponding to each key frame can then be adjusted in the picture layout according to the degree values of the different key frames: the larger the degree value, the larger the corresponding picture pit position can be, so as to highlight the importance of the plot.
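One simple way to turn degree values into pit sizes is to apportion the page area in proportion to each key frame's degree value, as in the sketch below; the normalization scheme is an assumption, since the application does not prescribe a specific formula.

```python
# Sketch of sizing picture pit positions from per-key-frame degree values.
def pit_areas(degree_values, page_area=1.0):
    """degree_values: one non-negative degree value per key frame.
    Returns the share of the page area given to each key frame's picture pit."""
    if not degree_values:
        return []
    total = sum(degree_values)
    if total == 0:
        return [page_area / len(degree_values)] * len(degree_values)
    return [page_area * d / total for d in degree_values]

if __name__ == "__main__":
    # An exaggerated expression (degree 0.9) gets the dominant pit.
    print(pit_areas([0.9, 0.3, 0.3]))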

S7: and filling the content displayed by the key frame and the text information matched with the key frame into the picture layout so as to generate the cartoon content corresponding to the scene segment.

In this embodiment, after the picture layout and the pit sizes are determined, the content displayed by the key frame and the text information adapted to the key frame may be filled into the picture layout, so as to generate the cartoon content corresponding to the scene segment, as shown in FIG. 3. In practical applications, the key frame is usually rectangular, while the shape of the picture pit position in the picture layout is not necessarily rectangular. Therefore, when filling in a key frame, the key frame can be cropped to a size suitable for the picture pit position, and then the key frame and its associated text information are filled into the picture pit position. As described in step S3, the text information may be displayed in the picture layout with the character spacing determined in step S3, the determined keywords may be displayed in a form different from the other characters in the text information, and after the text frame is determined, the text information may be filled into the text frame and the filled text frame displayed in the picture layout. In practice, the content of the picture layout can be flexibly adjusted as needed; for example, the special effect patterns in the key frames can be changed, or the color, transparency, size, and the like of the text information can be changed.
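The final fill step can be sketched with the Pillow imaging library: each key frame is cropped and resized to its picture pit, pasted onto the page, and the adapted text is drawn nearby. The pit geometry, text position, and page size below are hypothetical, and a fuller implementation would also render the bubble shape and special effect patterns discussed earlier.

```python
# Sketch of filling key frames and text into the picture layout with Pillow.
from PIL import Image, ImageDraw

def fill_layout(key_frames, pits, texts, page_size=(800, 1200)):
    """key_frames: list of PIL images; pits: list of (x, y, w, h) boxes;
    texts: list of strings, one per key frame."""
    page = Image.new("RGB", page_size, "white")
    draw = ImageDraw.Draw(page)
    for frame, (x, y, w, h), text in zip(key_frames, pits, texts):
        # Rough crop toward the pit's aspect ratio, then resize to the pit.
        tile = frame.crop((0, 0, min(frame.width, frame.height * w // h),
                           frame.height))
        page.paste(tile.resize((w, h)), (x, y))
        draw.text((x + 8, y + 8), text, fill="black")   # text inside the pit
    return page

if __name__ == "__main__":
    frames = [Image.new("RGB", (640, 360), "gray")]
    page = fill_layout(frames, [(20, 20, 360, 240)], ["Hello!"])
    page.save("panel.png")
```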

Therefore, through the identification of the key frames and the text information in the scene segments of the target video, the content of the video can be finally displayed in a cartoon mode, so that the time cost for a user to know the content of the video is reduced, the interestingness is improved, and more users can be attracted to watch the content of the video.

Referring to fig. 4, the present application further provides a device for processing video content, where the device includes:

the scene segment analysis unit is used for acquiring a target video to be processed and analyzing scene segments contained in the target video;

the graphic text determination unit is used for extracting key frames in the scene segments and identifying the voice information of the target video so as to determine text information matched with the key frames;

the picture layout determining unit is used for determining a picture layout matched with the scene clip based on the content displayed in the key frame and the text information;

and the cartoon content generating unit is used for filling the content displayed by the key frames and the text information matched with the key frames into the picture format so as to generate the cartoon content corresponding to the scene clip.

In one embodiment, the graphic text determination unit comprises:

the scene feature identification module is used for identifying scene features contained in the video frames of the scene segments, wherein the scene features comprise at least one of expression features, action features and environment features;

and the key frame determining module is used for comparing the scene features contained in the video frames with the feature templates in the scene feature set, and if the scene features contained in the video frames exist in the scene feature set, taking the video frames as a key frame of the scene segment.

In one embodiment, the device is provided with a picture layout set, and picture layouts in the picture layout set are provided with emotion representatives; accordingly, the picture layout determining unit includes:

and the expression recognition module is used for recognizing the character expression characteristics contained in the key frame and taking the picture format corresponding to the emotion representative words used for representing the character expression characteristics in the picture format set as the picture format matched with the scene segment.

In one embodiment, the format of the picture adapted to the scene segment comprises a plurality of picture pit positions; accordingly, the picture layout determining unit includes:

and the pit position adjusting module is used for determining a degree value corresponding to the scene features contained in the key frame and adjusting the size of the area occupied by the picture pit position corresponding to the key frame in the picture format according to the degree value.

In one embodiment, the graphic text determination unit comprises:

the voice characteristic determining module is used for determining the voice characteristics of the voice information, wherein the voice characteristics comprise at least one of speed, tone and volume; accordingly, the cartoon content generating unit includes:

the text space determining module is used for determining the space of the text information matched with the key frame according to the speech speed represented by the speech characteristic, and displaying the text information in the picture format according to the space of the text;

the keyword determining module is used for determining keywords in the text information according to the tone of the voice characteristic representation, and displaying the keywords in the picture format in a display form different from other characters in the text information;

and the text frame determining module is used for determining a text frame matched with the text information according to the tone and/or volume represented by the voice characteristics, filling the text information into the text frame, and then displaying the text frame filled with the text information into the picture format.

Referring to fig. 5, the present application further provides a video content processing apparatus, where the apparatus includes a memory and a processor, the memory is used to store a computer program, and the computer program, when executed by the processor, can implement the above-mentioned video content processing method.

In this embodiment, the memory may include a physical device for storing information, and typically, the information is digitized and then stored in a medium using an electrical, magnetic, or optical method. The memory according to this embodiment may further include: devices that store information using electrical energy, such as RAM, ROM, etc.; devices that store information using magnetic energy, such as hard disks, floppy disks, tapes, core memories, bubble memories, usb disks; devices for storing information optically, such as CDs or DVDs. Of course, there are other ways of memory, such as quantum memory, graphene memory, and so forth.

In this embodiment, the processor may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth.

The specific functions of the device, the memory thereof, and the processor thereof provided in the embodiments of this specification can be explained in comparison with the foregoing embodiments in this specification, and can achieve the technical effects of the foregoing embodiments, and thus, will not be described herein again.

Therefore, according to the technical scheme provided by the application, after the target video to be processed is obtained, the scene segments contained in the target video can be analyzed. For example, the target video may be parsed into scene segments such as emotional play, action play, quest play, and the like. Then, key frames in the scene segment can be extracted, which can reflect the environment and characters of the scene segment. Then, text information matched with the key frame can be determined by recognizing the voice information of the target video. The text information may be, for example, a dialog of a person in the key frame, or lyrics of background music in the key frame, or the like. Then, based on the content displayed in the key frame and the recognized text information, a picture layout adapted to the scene segment can be determined. For example, if the content displayed in the key frame is the emotional games of two characters, the selected frame format can have bright and vivid colors. For another example, if the recognized text information represents an angry emotion, the text frame in the frame format may be a text frame with a larger size and exhibiting a special explosion effect, so as to match the emotion represented by the text information. After the frame layout is determined, the content displayed by the key frame and the corresponding text information can be filled in the frame layout. Thus, through the above processing, the content expressed by the video can be presented through the brief cartoon content. On one hand, such a presentation form can more easily arouse the interest of the user, thereby improving the click rate; on the other hand, the long video is converted into the cartoon with shorter space, so that the time cost input by the user can be reduced, and the attention of the long video is further improved.

In the 1990s, improvements to a technology could be clearly distinguished as either improvements in hardware (for example, improvements to circuit structures such as diodes, transistors, and switches) or improvements in software (improvements to method flows). However, as technology has advanced, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, this programming is now mostly implemented with "logic compiler" software instead of manually fabricating integrated circuit chips; such software is similar to the compilers used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can readily be obtained simply by lightly programming the method flow into an integrated circuit using one of the above hardware description languages.

Those skilled in the art will also appreciate that, in addition to implementing an apparatus as pure computer readable program code, an apparatus can be implemented by logically programming method steps such that the apparatus performs functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such means may thus be regarded as a hardware component and means for performing the functions included therein may also be regarded as structures within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.

From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.

The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the device, reference may be made to the introduction of embodiments of the method described above for comparison.

The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.

Although the present application has been described in terms of embodiments, those of ordinary skill in the art will recognize that there are numerous variations and permutations of the present application without departing from the spirit of the application, and it is intended that the appended claims encompass such variations and permutations without departing from the spirit of the application.
