Clip recommendation method, device, electronic equipment, storage medium and program product

Document No.: 73227    Publication date: 2021-10-01

Reading note: This technology, "Clip recommendation method, device, electronic equipment, storage medium and program product", was designed and created by 卢家辉 (Lu Jiahui) on 2021-05-25. Its main content is as follows: The disclosure relates to a clip recommendation method, device, electronic equipment, storage medium, and program product. The method comprises the following steps: in response to a current clipping operation of a current user on a clip material, acquiring material attribute features of the clip material; according to the material attribute features and the current clipping operation, obtaining a next clipping operation associated with the clip material and the current clipping operation, so as to obtain a target clipping operation; and in response to the target clipping operation and a clipping operation record belonging to the current user, prominently displaying an operation entry corresponding to the target clipping operation. Thereby, the recommended next target clipping operation can be intelligently determined according to the material attribute features of the clip material and the current clipping operation, reducing the time the user spends selecting a clipping operation and improving clipping efficiency.

1. A clip recommendation method, comprising:

responding to the current clipping operation of a current user for clipping materials, and acquiring material attribute characteristics of the clipping materials;

according to the material attribute characteristics and the current clipping operation, acquiring the next clipping operation associated with the clipping material and the current clipping operation to obtain a target clipping operation;

and in response to the target clipping operation and the clipping operation record belonging to the current user, performing saliency display on an operation entry corresponding to the target clipping operation.

2. The clip recommendation method according to claim 1, wherein said obtaining a next clipping operation associated with said clip material and said current clipping operation according to said material attribute feature and said current clipping operation to obtain a target clipping operation comprises:

inputting the material attribute characteristics and the current clipping operation into a preset clipping recommendation model to obtain at least one recommended clipping operation and at least one corresponding clipping operation probability; wherein the clipping recommendation model is used for representing a corresponding relation among the material attribute characteristics, the current clipping operation, and the next clipping operation;

and determining the target clipping operation from the at least one recommended clipping operation according to the at least one recommended clipping operation and at least one clipping operation probability corresponding to the at least one recommended clipping operation.

3. The clipping recommendation method according to claim 2, wherein said determining the target clipping operation from the at least one recommended clipping operation according to the at least one recommended clipping operation and at least one clipping operation probability corresponding to the at least one recommended clipping operation comprises:

determining a maximum a posteriori probability from the at least one clipping operation probability;

and determining the recommended clipping operation corresponding to the maximum posterior probability as the target clipping operation.

4. The clip recommendation method according to claim 2, wherein before inputting the material property feature and the current clip operation into a preset clip recommendation model, further comprising obtaining the clip recommendation model, the obtaining the clip recommendation model comprises:

acquiring historical material attribute characteristics, a first clipping operation, and a second clipping operation corresponding to a historical clipping material; wherein the first clipping operation is an initial clipping operation performed on the historical clipping material, and the second clipping operation is a next clipping operation corresponding to the initial clipping operation;

and training a preset initial clipping recommendation model by taking the historical clipping characteristics and the first clipping operation as input and the second clipping operation as supervision information, to obtain the clipping recommendation model.

5. The clip recommendation method according to claim 1, wherein the clip material includes at least one of a picture, a video, and an audio, the attribute characteristic of the picture includes a resolution, the attribute characteristic of the video includes a duration and a resolution, and the attribute characteristic of the audio includes a duration;

the obtaining of the material attribute characteristics of the clip material includes:

when the editing material is a picture, acquiring the editing resolution of the editing material from the resolution of the picture;

when the editing material is a video, acquiring editing duration of the editing material from duration of the video, and acquiring editing resolution of the editing material from resolution of the video;

when the editing material is audio, acquiring the editing duration of the editing material from the duration of the audio;

and determining the material attribute characteristics according to the clipping resolution and the clipping duration.

6. The clip recommendation method according to claim 1, wherein said displaying, in a conspicuous manner, an operation entry corresponding to the target clip operation in response to the target clip operation and a clip operation record belonging to a current user, comprises:

acquiring a clipping operation database corresponding to the current user; wherein a clipping operation in the clipping operation database is a clipping operation used by the current user;

and when detecting that the operation corresponding to the target clipping operation does not exist in the clipping operation database, performing significance display on the operation entry corresponding to the target clipping operation.

7. A clip recommendation apparatus, comprising:

a data acquisition unit configured to acquire, in response to a current clipping operation performed by a current user on a clip material, a material attribute feature of the clip material;

a clipping prediction unit configured to obtain, according to the material attribute feature and the current clipping operation, a next clipping operation associated with the clip material and the current clipping operation, so as to obtain a target clipping operation;

and the clipping recommendation unit is configured to perform significance display on an operation entry corresponding to the target clipping operation in response to the target clipping operation and a clipping operation record belonging to the current user.

8. An electronic device, comprising:

a processor;

a memory for storing the processor-executable instructions;

wherein the processor is configured to execute the instructions to implement the clip recommendation method of any one of claims 1 to 7.

9. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the clip recommendation method of any one of claims 1 to 7.

10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the clip recommendation method of any one of claims 1 to 7.

Technical Field

The present disclosure relates to the field of computer technologies, and in particular, to a clip recommendation method, an apparatus, an electronic device, a storage medium, and a program product.

Background

With the development of computer technology, a technology for editing clip materials such as pictures, videos and audios by video clip software appears, and new pictures, videos or audios with different expressive forces can be manufactured by applying the video clip software.

However, since many users of video clipping software are not professional video editors, a large number of the clipping functions provided by the software are rarely used. Users often cannot select the appropriate clipping operation for their specific needs, or need considerable time to find it, so clipping efficiency is low.

Disclosure of Invention

The present disclosure provides a clipping recommendation method, apparatus, electronic device, storage medium, and program product to at least solve the problem of low clipping efficiency in the related art. The technical scheme of the disclosure is as follows:

according to a first aspect of the embodiments of the present disclosure, there is provided a clip recommendation method including:

responding to the current clipping operation of a current user for clipping materials, and acquiring material attribute characteristics of the clipping materials;

according to the material attribute characteristics and the current clipping operation, acquiring the next clipping operation associated with the clipping material and the current clipping operation to obtain a target clipping operation;

and in response to the target clipping operation and the clipping operation record belonging to the current user, prominently displaying an operation entry corresponding to the target clipping operation.

In an exemplary embodiment, the obtaining, according to the material attribute features and the current clipping operation, a next clipping operation associated with the clipping material and the current clipping operation to obtain a target clipping operation includes:

inputting the material attribute characteristics and the current clipping operation into a preset clipping recommendation model to obtain at least one recommended clipping operation and at least one corresponding clipping operation probability; the editing recommendation model is used for representing the corresponding relation among the material attribute characteristics, the current editing operation and the next editing operation;

and determining the target clipping operation from the at least one recommended clipping operation according to the at least one recommended clipping operation and at least one clipping operation probability corresponding to the at least one recommended clipping operation.

In an exemplary embodiment, the determining the target clipping operation from the at least one recommended clipping operation according to the at least one recommended clipping operation and at least one clipping operation probability corresponding to the at least one recommended clipping operation includes:

determining a maximum a posteriori probability from the at least one clipping operation probability;

and determining the recommended clipping operation corresponding to the maximum posterior probability as the target clipping operation.

In an exemplary embodiment, before the inputting of the material attribute feature and the current clipping operation into the preset clipping recommendation model, the method further includes obtaining the clipping recommendation model, and the obtaining of the clipping recommendation model includes:

acquiring historical material attribute characteristics, a first clipping operation, and a second clipping operation corresponding to a historical clipping material; wherein the first clipping operation is an initial clipping operation performed on the historical clipping material, and the second clipping operation is a next clipping operation corresponding to the initial clipping operation;

and training a preset initial clipping recommendation model by taking the historical clipping characteristics and the first clipping operation as input and the second clipping operation as supervision information, to obtain the clipping recommendation model.

In an exemplary embodiment, the clip material includes at least one of a picture, a video, and audio, the attribute characteristic of the picture includes a resolution, the attribute characteristic of the video includes a duration and a resolution, and the attribute characteristic of the audio includes a duration;

the obtaining of the material attribute characteristics of the clip material includes:

when the editing material is a picture, acquiring the editing resolution of the editing material from the resolution of the picture;

when the editing material is a video, acquiring editing duration of the editing material from duration of the video, and acquiring editing resolution of the editing material from resolution of the video;

when the editing material is audio, acquiring the editing duration of the editing material from the duration of the audio;

and determining the material attribute characteristics according to the clipping resolution and the clipping duration.

In an exemplary embodiment, the clipping resolution includes an initial resolution and a current resolution, the clipping duration includes an initial duration and a current duration, and the attribute characteristics of the picture, video and audio include corresponding formats;

the determining the material attribute characteristics according to the clipping resolution and the clipping duration comprises the following steps:

according to the initial duration and the current duration, carrying out normalization processing on the duration of the editing material to obtain normalized duration;

according to the initial resolution and the current resolution, carrying out normalization processing on the resolution of the editing material to obtain a normalized resolution;

and acquiring the characteristic values of the normalized time length, the normalized resolution and the format to obtain the material attribute characteristics.

In an exemplary embodiment, the prominently displaying, in response to the target clipping operation and the clipping operation record belonging to the current user, an operation entry corresponding to the target clipping operation includes:

acquiring a clipping operation database corresponding to the current user; wherein the clipping operation in the clipping operation database is a clipping operation used by the current user;

and when detecting that the operation corresponding to the target clipping operation does not exist in the clipping operation database, performing significance display on the operation entry corresponding to the target clipping operation.

In an exemplary embodiment, after the detecting that there is no operation corresponding to the target clipping operation in the clipping operation database, performing saliency display on an operation entry corresponding to the target clipping operation, the method further includes:

and editing the editing material according to the target editing operation to obtain a target editing effect corresponding to the target editing operation, and displaying the target editing effect.

According to a second aspect of the embodiments of the present disclosure, there is provided a clip recommending apparatus including:

a data acquisition unit configured to acquire, in response to a current clipping operation performed by a current user on a clip material, a material attribute feature of the clip material;

a clipping prediction unit configured to obtain, according to the material attribute feature and the current clipping operation, a next clipping operation associated with the clip material and the current clipping operation, so as to obtain a target clipping operation;

and the clipping recommendation unit is configured to perform significance display on an operation entry corresponding to the target clipping operation in response to the target clipping operation and the clipping operation record belonging to the current user.

In an exemplary embodiment, the clip prediction unit is further configured to perform:

inputting the material attribute characteristics and the current clipping operation into a preset clipping recommendation model to obtain at least one recommended clipping operation and at least one corresponding clipping operation probability; the editing recommendation model is used for representing the corresponding relation among the material attribute characteristics, the current editing operation and the next editing operation;

and determining the target clipping operation from the at least one recommended clipping operation according to the at least one recommended clipping operation and at least one clipping operation probability corresponding to the at least one recommended clipping operation.

In an exemplary embodiment, the clip prediction unit is further configured to perform:

determining a maximum a posteriori probability from the at least one clipping operation probability;

and determining the recommended clipping operation corresponding to the maximum posterior probability as the target clipping operation.

In an exemplary embodiment, the clip prediction unit is further configured to perform:

acquiring historical material attribute characteristics, a first clipping operation, and a second clipping operation corresponding to a historical clipping material; wherein the first clipping operation is an initial clipping operation performed on the historical clipping material, and the second clipping operation is a next clipping operation corresponding to the initial clipping operation;

and training a preset initial clipping recommendation model by taking the historical clipping characteristics and the first clipping operation as input and the second clipping operation as supervision information, to obtain the clipping recommendation model.

In an exemplary embodiment, the clip material includes at least one of a picture, a video, and audio, the attribute characteristic of the picture includes a resolution, the attribute characteristic of the video includes a duration and a resolution, and the attribute characteristic of the audio includes a duration;

the data acquisition unit is further configured to perform:

when the editing material is a picture, acquiring the editing resolution of the editing material from the resolution of the picture;

when the editing material is a video, acquiring editing duration of the editing material from duration of the video, and acquiring editing resolution of the editing material from resolution of the video;

when the editing material is audio, acquiring the editing duration of the editing material from the duration of the audio;

and determining the material attribute characteristics according to the clipping resolution and the clipping duration.

In an exemplary embodiment, the clipping resolution includes an initial resolution and a current resolution, the clipping duration includes an initial duration and a current duration, and the attribute characteristics of the picture, video and audio include corresponding formats;

the data acquisition unit is further configured to perform:

according to the initial duration and the current duration, carrying out normalization processing on the duration of the editing material to obtain normalized duration;

according to the initial resolution and the current resolution, carrying out normalization processing on the resolution of the editing material to obtain a normalized resolution;

and acquiring the characteristic values of the normalized time length, the normalized resolution and the format to obtain the material attribute characteristics.

In an exemplary embodiment, the clip recommending unit is further configured to perform:

acquiring a clipping operation database corresponding to the current user; wherein the clipping operation in the clipping operation database is a clipping operation used by the current user;

and when detecting that the operation corresponding to the target clipping operation does not exist in the clipping operation database, performing significance display on the operation entry corresponding to the target clipping operation.

In an exemplary embodiment, the clip recommending unit is further configured to perform:

and editing the editing material according to the target editing operation to obtain a target editing effect corresponding to the target editing operation, and displaying the target editing effect.

According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:

a processor;

a memory for storing the processor-executable instructions;

wherein the processor is configured to execute the instructions to implement the clip recommendation method of any of the first aspect.

According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the clip recommendation method of any one of the first aspect.

According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, from which the computer program is read and executed by at least one processor of an apparatus, such that the apparatus performs the clip recommendation method described in any one of the first aspect.

The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:

responding to the current clipping operation of the current user aiming at the clipping material, and acquiring the material attribute characteristics of the clipping material; according to the material attribute characteristics and the current clipping operation, the next clipping operation related to the clipping material and the current clipping operation is obtained, and a target clipping operation is obtained; and in response to the target clipping operation and the clipping operation record belonging to the current user, performing saliency display on an operation entry corresponding to the target clipping operation. Therefore, the recommended next target clipping operation can be intelligently determined according to the material attribute characteristics corresponding to the clipping material and the current clipping operation, the time for the user to select the clipping operation is reduced, and the clipping efficiency is improved.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.

FIG. 1 is a flow diagram illustrating a clip recommendation method according to an example embodiment.

Fig. 2 is a flowchart illustrating one possible implementation of step S200 according to an exemplary embodiment.

FIG. 3 is a flow diagram illustrating one possible implementation of a get clip recommendation model in accordance with an illustrative embodiment.

Fig. 4 is a flowchart illustrating one possible implementation of step S100 according to an exemplary embodiment.

Fig. 5 is a flowchart illustrating one possible implementation of step S140 according to an exemplary embodiment.

Fig. 6 is a flowchart illustrating one possible implementation of step S300 according to an example embodiment.

Fig. 7 is a diagram illustrating a clip recommendation method according to an exemplary embodiment.

FIG. 8 is a flowchart illustrating a clip recommendation method according to a particular illustrative embodiment.

Fig. 9 is a block diagram illustrating a clip recommending apparatus according to an exemplary embodiment.

FIG. 10 is a block diagram illustrating an apparatus for clip recommendation, according to an example embodiment.

Detailed Description

In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.

It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.

Fig. 1 is a flowchart illustrating a clip recommendation method according to an exemplary embodiment. In this exemplary embodiment, the method is described as applied to an electronic device and includes the following steps:

in step S100, in response to a current clipping operation by a current user for a clip material, a material attribute feature of the clip material is acquired.

In step S200, according to the material attribute features and the current clipping operation, a next clipping operation associated with the clipping material and the current clipping operation is obtained, so as to obtain a target clipping operation.

In step S300, in response to the target clipping operation and the clip operation record belonging to the current user, the operation entry corresponding to the target clipping operation is prominently displayed.

The current user refers to a user who performs a clipping operation on the clip material, and the current user may be identified by any identifier capable of identifying the user, such as a user ID of the video clipping software, or the IP address and MAC address of the device running the video clipping software. The clip material refers to a photographed picture, a recorded video, an audio, and the like. The material attribute features refer to current feature values of the clip material, such as feature values of the resolution and format of a picture, feature values of the duration, resolution, and format of a video, and feature values of the duration and format of an audio.

Specifically, a buried point (tracking hook) is set in the clipping software, and the material attribute features of the clip material and the current clipping operation are reported to the electronic device through the buried point. While the current user is editing, when the current user performs a clipping operation on the current clip material, the object of the current clipping operation, that is, the material attribute features of the clip material, can be obtained. Generally, during a user's clipping workflow, after the user selects a clipping operation for a specific clip material, the next clipping operation selected is often one associated with the previous clipping operation. Therefore, the next clipping operation associated with the clip material and the current clipping operation can be obtained according to the material attribute features and the current clipping operation, and this next clipping operation is determined as the target clipping operation recommended to the current user. After the target clipping operation is obtained, the operation entry corresponding to the target clipping operation can be prominently displayed according to the target clipping operation and the clipping operation record belonging to the current user, so that the current user can easily see the recommended target clipping operation and conveniently select it. At the same time, prominently displaying the operation entry corresponding to the target clipping operation helps promote the target clipping operation, so that more clipping functions are actually used, the utilization rate of each clipping operation is improved, and clipping efficiency is improved.
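
To make the reporting step concrete, the following is a minimal sketch of the kind of payload a buried point might send when the current user applies a clipping operation. It assumes a simple JSON-style report; every field name (user_id, material_features, current_operation, and so on) is an illustrative assumption rather than the actual schema of any particular clipping software.

```python
# Minimal sketch of a buried-point (tracking) payload; field names are illustrative assumptions.
import json
import time

def report_clip_event(user_id: str, material: dict, current_operation: str) -> str:
    """Build a buried-point payload carrying the material attribute features
    and the current clipping operation, ready to be sent to the server."""
    event = {
        "user_id": user_id,                          # e.g. account ID in the clipping software
        "timestamp": int(time.time()),
        "material_type": material["type"],           # "picture" | "video" | "audio"
        "material_features": material["features"],   # e.g. resolution, duration, format
        "current_operation": current_operation,      # e.g. "crop", "add_filter"
    }
    return json.dumps(event)

# Example: report that the user applied a crop to a 1080p video clip.
payload = report_clip_event(
    user_id="u_123",
    material={"type": "video",
              "features": {"duration_s": 30.0, "width": 1920, "height": 1080, "format": "mp4"}},
    current_operation="crop",
)
print(payload)
```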

According to the above clip recommendation method, in response to the current clipping operation of the current user on the clip material, the material attribute features of the clip material are acquired; according to the material attribute features and the current clipping operation, the next clipping operation associated with the clip material and the current clipping operation is obtained, yielding a target clipping operation; and in response to the target clipping operation and the clipping operation record belonging to the current user, the operation entry corresponding to the target clipping operation is prominently displayed. Therefore, the recommended next target clipping operation can be intelligently determined according to the material attribute features of the clip material and the current clipping operation, reducing the time the user spends selecting a clipping operation and improving clipping efficiency. At the same time, the method helps promote the target clipping operation, so that more clipping functions are used, the utilization rate of each clipping operation is improved, and clipping efficiency is improved.

Fig. 2 is a flowchart of an implementable manner of step S200 shown according to an exemplary embodiment, which may be specifically implemented by the following steps:

in step S210, inputting the material attribute characteristics and the current clipping operation into a preset clipping recommendation model to obtain at least one recommended clipping operation and at least one corresponding clipping operation probability; the clipping recommendation model is used for representing the corresponding relation among the material attribute characteristics, the current clipping operation, and the next clipping operation.

In step S220, a target clipping operation is determined from the at least one recommended clipping operation according to the at least one recommended clipping operation and the at least one clipping operation probability corresponding to the at least one recommended clipping operation.

Specifically, after the material attribute characteristics and the current clipping operation corresponding to the clipping material are obtained, the material attribute characteristics and the current clipping operation are input into a preset clipping recommendation model to obtain at least one recommended clipping operation. And meanwhile, obtaining the clipping operation probability corresponding to each recommended clipping operation, namely obtaining at least one clipping operation probability. And the recommended clipping operation is the predicted next clipping operation based on the material attribute characteristics corresponding to the clipping material and the current clipping operation, and the clipping operation probability is the probability that the recommended clipping operation is the recommended operation. Illustratively, based on the material property features and the current clipping operation, up to three recommended clipping operations may result: a first clipping operation, a second clipping operation, and a third clipping operation, wherein the clipping operation probability of the first clipping operation may be 0.9, the clipping operation probability of the second clipping operation may be 0.8, and the clipping operation probability of the third clipping operation may be 0.6. Of course, there may be a plurality of recommended clipping operations output by the actual clipping recommendation model, and this is merely an exemplary illustration and is not used to limit the present embodiment. After the at least one recommended clipping operation and the at least one clipping operation probability output by the clipping recommendation model are obtained, a recommended target clipping operation is determined from the at least one recommended clipping operation according to the at least one recommended clipping operation and the at least one clipping operation probability corresponding to the at least one recommended clipping operation. For example, the recommended clipping operation corresponding to the maximum clipping operation probability among the clipping operation probabilities may be determined as the target clipping operation, or the recommended clipping operation having the clipping operation probability within a range of 0.6 to 0.8 may be determined as the target clipping operation.
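
As a concrete illustration of the selection step (not the source's exact algorithm), the sketch below assumes the model has already returned candidate operations with probabilities and shows both strategies mentioned above: taking the single most probable operation, or keeping every operation whose probability falls in a given range. The operation names and numbers mirror the illustration in the text.

```python
# Minimal sketch: choose the target clipping operation from (operation -> probability) candidates.

def pick_target_operation(candidates: dict[str, float],
                          strategy: str = "max",
                          low: float = 0.6, high: float = 0.8) -> list[str]:
    """Return the recommended operation(s) chosen as the target clipping operation."""
    if strategy == "max":
        # take the single operation with the highest probability
        best = max(candidates, key=candidates.get)
        return [best]
    # otherwise keep every operation whose probability falls in [low, high]
    return [op for op, p in candidates.items() if low <= p <= high]

candidates = {"clipping operation A": 0.9, "clipping operation B": 0.8, "clipping operation C": 0.6}
print(pick_target_operation(candidates))                    # ['clipping operation A']
print(pick_target_operation(candidates, strategy="range"))  # ['clipping operation B', 'clipping operation C']
```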

In the above exemplary embodiment, the material attribute characteristics and the current clipping operation are input into a preset clipping recommendation model, and at least one recommended clipping operation and at least one corresponding clipping operation probability are obtained; the clipping recommendation model is used for representing the corresponding relation among the material attribute characteristics, the current clipping operation, and the next clipping operation; and the target clipping operation is determined from the at least one recommended clipping operation according to the at least one recommended clipping operation and the at least one clipping operation probability corresponding to the at least one recommended clipping operation. Therefore, the recommended next target clipping operation can be intelligently determined according to the material attribute features of the clip material and the current clipping operation, reducing the time the user spends selecting a clipping operation and improving clipping efficiency.

In an exemplary embodiment, an implementation manner of step S220 specifically includes:

determining a maximum a posteriori probability from the at least one clipping operation probability; and determining the recommended clipping operation corresponding to the maximum posterior probability as the target clipping operation.

Specifically, after the at least one recommended clipping operation and the at least one corresponding clipping operation probability are obtained in step S210, the maximum a posteriori probability is determined from the at least one clipping operation probability, the recommended clipping operation corresponding to the maximum a posteriori probability is identified, and that recommended clipping operation is determined as the final recommended clipping operation, that is, the target clipping operation.
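
Written as a formula, the selection rule described above is simply a maximum a posteriori choice over the candidate set (the notation below is generic and not taken from the source):

```latex
x^{*} = \arg\max_{x \in \mathcal{X}} \; P\left(x \mid \text{material attribute features},\ \text{current clipping operation}\right)
```

where the candidate set denotes the recommended clipping operations returned by the clip recommendation model.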

In the above exemplary embodiment, the recommended clipping operation corresponding to the maximum a posteriori probability is determined as the target clipping operation. The maximum a posteriori probability can be gradually corrected based on prior data, and as clip materials and user clipping operations accumulate, the recommended target clipping operation better meets the user's needs.

FIG. 3 is a flow diagram illustrating one possible implementation of a get clip recommendation model according to an example embodiment, which may be implemented by:

in step S221, acquiring historical material attribute characteristics, a first clipping operation, and a second clipping operation corresponding to a historical clip material; wherein the first clipping operation is an initial clipping operation performed on the historical clip material, and the second clipping operation is a next clipping operation corresponding to the initial clipping operation.

In step S222, a preset initial clip recommendation model is trained by using the historical material attribute characteristics and the first clipping operation as input and the second clipping operation as supervision information, so as to obtain the clip recommendation model.

The clip recommendation model is used for representing the corresponding relation among the material attribute characteristics, the current clipping operation, and the next clipping operation, and the clip recommendation model can output the recommended next clipping operation according to the material attribute characteristics and the current clipping operation. The first clipping operation is the current clipping operation performed on the historical clip material, and the second clipping operation is the next clipping operation performed on the historical clip material. Alternatively, the clip recommendation model may be a model trained based on a Bayesian classifier.

Specifically, after the electronic device acquires the historical material attribute characteristics, the first clipping operation, and the second clipping operation corresponding to the historical clip material, these are used as training data. A preset initial clip recommendation model is trained by taking the historical material attribute characteristics and the first clipping operation as input and the second clipping operation as supervision information, so as to obtain the clip recommendation model; the initial clip recommendation model may be a model designed based on a Bayesian classifier. The clip recommendation model can associate the material attribute features of a user's clip materials in the video clipping software with the clipping operations, and analyze, through the Bayesian principle, the association between the material attribute features of the clip materials and the first and second clipping operations. Finally, the obtained clip recommendation model is used to recommend the next operation to the user. Therefore, the clipping functions in the video clipping software can be fully exposed and used, clipping becomes easier and simpler, users can more easily master the clipping software, and clipping efficiency is improved.
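
The following sketch shows one possible way to train such a model as a naive Bayes classifier, consistent with the description above but not a reproduction of the actual implementation. It assumes scikit-learn is available and that the historical features and operations have already been encoded as small integer categories; the encoding and the sample data are purely illustrative.

```python
# Minimal sketch: train a categorical naive Bayes model on (features, first operation) -> second operation.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Each row: [material_type, format, duration_bucket, resolution_bucket, first_operation]
X_train = np.array([
    [1, 0, 2, 3, 4],   # video, mp4, medium length, 1080p, first op = "crop"
    [1, 0, 2, 3, 4],
    [0, 1, 0, 2, 5],   # picture, jpg, no duration, 720p, first op = "add_filter"
    [2, 2, 3, 0, 6],   # audio, mp3, long, no resolution, first op = "trim"
])
# Supervision signal: the second (next) clipping operation actually chosen by users.
y_train = np.array([7, 8, 7, 9])   # e.g. 7 = "add_subtitle", 8 = "add_music", 9 = "fade_out"

model = CategoricalNB()
model.fit(X_train, y_train)

# Inference: given the current material features and current operation,
# obtain each candidate next operation and its probability.
x_query = np.array([[1, 0, 2, 3, 4]])
probs = model.predict_proba(x_query)[0]
for op_id, p in zip(model.classes_, probs):
    print(f"recommended operation {op_id}: probability {p:.2f}")
```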

In the above exemplary embodiment, the clip recommendation model is obtained by obtaining the historical material attribute characteristics, the first clip operation, and the second clip operation corresponding to the historical clip material, and training a preset initial clip recommendation model with the historical clip characteristics and the first clip operation as input, and the second clip operation as supervision information. And then the obtained clipping recommendation model can be adopted to recommend the next operation to the user. Therefore, the clipping operation function in the video clipping software can be fully exposed and used, the clipping can be easier and simpler, a user can more easily master the using skill of the clipping software, and the clipping efficiency is improved.

Fig. 4 is a flowchart of an implementable manner of step S100 shown according to an exemplary embodiment, which may be specifically implemented by the following steps:

in step S110, when the clip material is a picture, the clip resolution of the clip material is acquired from the resolution of the picture.

In step S120, when the clip material is a video, the clip duration of the clip material is acquired from the duration of the video, and the clip resolution of the clip material is acquired from the resolution of the video.

In step S130, when the clip material is audio, the clip time length of the clip material is acquired from the time length of the audio.

In step S140, material attribute characteristics are determined according to the clip resolution and the clip duration.

The clip material comprises at least one of pictures, videos and audios, the attribute features of the pictures comprise resolution, the attribute features of the videos comprise duration and resolution, and the attribute features of the audios comprise duration.

Specifically, when the clip material is a picture, the clip resolution of the clip material is acquired from the resolution of the picture. When the editing material is a video, the editing time length of the editing material is obtained from the time length of the video, and the editing resolution of the editing material is obtained from the resolution of the video. When the clip material is audio, the clip duration of the clip material is acquired from the duration of the audio. And after the material attribute characteristics (the clipping resolution and the clipping duration) of the clipping material are obtained, determining the material attribute characteristics according to the clipping resolution and the clipping duration.
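
The dispatch on material type can be summarized in a short sketch such as the one below; the dictionary-based material representation and field names are assumptions made for illustration only.

```python
# Minimal sketch of the feature-extraction dispatch by material type.

def extract_material_features(material: dict) -> dict:
    """Return the clip resolution / clip duration used to build the material attribute features."""
    kind = material["type"]
    features = {"format": material.get("format")}
    if kind == "picture":
        features["clip_resolution"] = (material["width"], material["height"])
    elif kind == "video":
        features["clip_resolution"] = (material["width"], material["height"])
        features["clip_duration"] = material["duration_s"]
    elif kind == "audio":
        features["clip_duration"] = material["duration_s"]
    else:
        raise ValueError(f"unsupported material type: {kind}")
    return features

print(extract_material_features({"type": "video", "format": "mp4",
                                 "width": 1920, "height": 1080, "duration_s": 42.5}))
```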

In the above exemplary embodiment, when the clip material is a picture, the clip resolution of the clip material is acquired from the resolution of the picture; when the editing material is a video, acquiring the editing time length of the editing material from the time length of the video, and acquiring the editing resolution of the editing material from the resolution of the video; when the editing material is audio, acquiring the editing duration of the editing material from the duration of the audio; and determining the material attribute characteristics according to the clipping resolution and the clipping duration. Therefore, different clipping materials can be subjected to unified material attribute feature extraction, the efficiency of material attribute feature extraction is improved, follow-up clipping operation is recommended according to the uniformly extracted material attribute features, and the clipping efficiency is improved.

Fig. 5 is a flowchart of an implementable manner of step S140 shown according to an exemplary embodiment, which may be specifically implemented by the following steps:

in step S141, the time length of the clip material is normalized according to the initial time length and the current time length, so as to obtain a normalized time length.

In step S142, the resolution of the clip material is normalized according to the initial resolution and the current resolution, so as to obtain a normalized resolution.

In step S143, the feature values of the normalized duration, the normalized resolution, and the format are obtained, so as to obtain the material attribute features.

The clipping resolution comprises an initial resolution and a current resolution, the clipping duration comprises an initial duration and a current duration, and the attribute features of pictures, videos, and audio include their corresponding formats. The initial resolution refers to the resolution of the clip material when it is imported into the video clipping software. The current resolution refers to the resolution of the clip material at the current processing time, after a series of clipping operations; if the clipping of the material has been completed, the current resolution can be regarded as the resolution at the final processing time. The initial duration refers to the duration of the clip material when it is imported into the video clipping software. The current duration refers to the duration of the clip material at the current processing time, after a series of clipping operations; if the clipping has been completed, the current duration can be regarded as the duration at the final processing time.

Specifically, after the initial duration and the current duration are obtained, the duration of the clip material is normalized from the initial duration t_initial and the current duration t_current by the formula t_norm = t_initial / t_current, yielding the normalized duration t_norm. After the initial resolution and the current resolution are obtained, the resolution of the clip material is normalized from the initial resolution (w_initial, h_initial) and the current resolution (w_current, h_current) by the formulas w_norm = w_initial / w_current and h_norm = h_initial / h_current, yielding the normalized resolution (w_norm, h_norm). After the normalized duration and the normalized resolution are obtained, the feature values of the normalized duration, the normalized resolution, and the formats corresponding to the attribute features of the picture, video, or audio material are obtained, yielding the material attribute features.
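
A minimal sketch of these normalization formulas follows (t_norm = t_initial / t_current, w_norm = w_initial / w_current, h_norm = h_initial / h_current); the rounding to two decimal places mirrors the optional reporting convention described later in this document and is an added assumption here.

```python
# Minimal sketch of duration and resolution normalization for a clip material.

def normalize_duration(t_initial: float, t_current: float) -> float:
    return round(t_initial / t_current, 2)

def normalize_resolution(initial: tuple[int, int], current: tuple[int, int]) -> tuple[float, float]:
    w0, h0 = initial
    w1, h1 = current
    return round(w0 / w1, 2), round(h0 / h1, 2)

# Example: a 60 s source trimmed to 45 s, downscaled from 4K to 1080p.
print(normalize_duration(60.0, 45.0))                       # 1.33
print(normalize_resolution((3840, 2160), (1920, 1080)))     # (2.0, 2.0)
```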

In the above exemplary embodiment, by normalizing the duration and resolution of the clipping material, a relatively uniform measure value can be obtained, and the normalized feature value is determined as the basic data recommended by the clipping, so that the accuracy of the recommended target clipping operation can be improved, and the clipping operation more conforming to the clipping material is recommended.

Fig. 6 is a flowchart of an implementable manner of step S300 shown according to an exemplary embodiment, which may be specifically implemented by the following steps:

in step S310, a clipping operation database corresponding to the current user is obtained; wherein the clipping operation in the clipping operation database is a clipping operation used by the current user.

In step S320, when it is detected that the operation corresponding to the target clipping operation does not exist in the clipping operation database, the operation entry corresponding to the target clipping operation is prominently displayed.

The clipping operation database is a database formed from the clipping operations that the current user has used.

Specifically, after the recommended target clipping operation is obtained in step S200, the clipping operation database corresponding to the current user is obtained. Then, it is detected whether an operation corresponding to the target clipping operation exists in the clipping operation database. If the operation corresponding to the target clipping operation does not exist in the clipping operation database, the result indicates that the current user has not used the target clipping operation before, or the current user does not know the operation and function of the target clipping operation before. At this time, the operation entry corresponding to the target clipping operation is prominently displayed, and the target clipping operation is recommended to the current user through the prominently displayed entry. Meanwhile, the method can achieve the purpose of promoting the target editing operation, so that more editing operation functions are applied, the utilization rate of each editing operation is improved, and the editing efficiency is improved.
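
A minimal sketch of this check is shown below; the in-memory set standing in for the clipping operation database and the placeholder highlight call are assumptions used only to illustrate the control flow.

```python
# Minimal sketch: highlight the entry for the target operation only if the user has never used it.

def highlight_entry(operation: str) -> None:
    # placeholder for the UI call that prominently displays the operation entry
    print(f"highlighting entry for: {operation}")

def maybe_highlight(target_operation: str, used_operations: set[str]) -> bool:
    """Return True (and trigger a prominent display) if the user has never used the operation."""
    if target_operation in used_operations:
        return False          # user already knows this operation; no prompt needed
    highlight_entry(target_operation)
    return True

used = {"crop", "trim", "add_filter"}
maybe_highlight("add_texture", used)   # highlights, because "add_texture" was never used
maybe_highlight("crop", used)          # no prompt
```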

Optionally, after the step S320 performs the saliency display on the operation entry corresponding to the target clipping operation, the method further includes: and editing the editing material according to the target editing operation to obtain a target editing effect corresponding to the target editing operation and display the target editing effect.

Specifically, when the target clipping operation is recommended to the current user, the clip material is further clipped according to the target clipping operation to obtain a target clipping effect corresponding to the target clipping operation, and the target clipping effect is displayed. In this way, the current user can see the effect of the target clipping operation more intuitively and then decide whether to use the target clipping operation for clipping, which improves clipping efficiency.

For example, fig. 7 is a schematic diagram of a clip recommendation method according to an exemplary embodiment. As shown in fig. 7(a), the current clipping operation is "clipping operation 3", and the target clipping operation determined from the current clipping operation "clipping operation 3" and the material attribute features of the clip material is "clipping operation 6". The operation entry corresponding to the target clipping operation is the area where "clipping operation 6" is located, and that area is prominently displayed, resulting in the operation entry shown in fig. 7(b). Further, the clip material is clipped according to the target clipping operation to obtain the target clipping effect corresponding to the target clipping operation. For example, when "clipping operation 6" adds a texture effect to the clip material, the clip material with the texture effect is presented to the user, yielding the target clipping effect shown in fig. 7(c), so that the user can see the effect of the target clipping operation more intuitively and decide whether to clip using the target clipping operation, thereby improving clipping efficiency.

In the above exemplary embodiment, a clipping operation database corresponding to the current user is obtained, where the clipping operations in the clipping operation database are the clipping operations the current user has used; and when it is detected that no operation corresponding to the target clipping operation exists in the clipping operation database, the operation entry corresponding to the target clipping operation is prominently displayed. This serves the purpose of promoting the target clipping operation, so that more clipping functions are used, the utilization rate of each clipping operation is improved, and clipping efficiency is improved.

Fig. 8 is a flowchart illustrating a clip recommendation method according to a specific exemplary embodiment, which specifically includes the following steps:

A buried point is added in the video clipping software to record the user's clipping operation, the feature values (material attribute features) of the clip material, and the user's next clipping operation after that clipping operation. When enough buried-point data has accumulated, it can be used as training data for a naive Bayes classifier, so that clipping operations can be recommended to the user based on the training data and the Bayes classifier.

The acquisition of the training data comprises: in the video clipping software, there are 3 types of clip material imported by a user, namely pictures, videos, and audio. For picture material, the features are picture resolution and picture format; for video material, the features are video duration, video resolution, and video format; for audio material, the features are audio duration and audio format. When feature values such as video duration, audio duration, video resolution, and picture resolution are reported through the buried-point operation, the actual raw data of the material is not uploaded; instead, normalized data is uploaded. For duration-type data, the normalized value is the duration of the video (or audio) material divided by the total duration of the clip work, that is: t_norm = t_initial / t_current. For resolution-type data, the normalized value is the width (height) of the video or picture material divided by the width (height) of the clip work, that is: w_norm = w_material / w_work and h_norm = h_material / h_work. Optionally, the normalized data may retain only two decimal places, with digits beyond the second decimal place discarded. The user's operation records and the feature values of the materials are uploaded to a background server database through the buried-point operation.

Applications of the Bayesian classifier include: when a user performs an operation on the material in the video clipping software, the operation is sent to the background server. The background server trains a Bayesian classifier from the collected training data using the Bayesian classification principle, and then uses the trained Bayesian classifier to predict, from the material features and the current clipping operation, the probability of each possible next clipping operation of the user. Specifically, as shown in formula (1):

P(user's next clipping operation X | user's current material feature values, user's current clipping operation) = P(user's current material feature values, user's current clipping operation | user's next operation X) × P(user's next operation X) / P(user's current material feature values, user's current clipping operation)    (1)

According to the probability of each clipping operation calculated by formula (1), the background server takes the operation corresponding to the maximum posterior probability as the recommended operation, that is, the target clipping operation, and returns it to the video clipping software client. The client then checks its local database to detect whether the user has already used this clipping operation. If the user has not used it, the client highlights the entry of the clipping operation to prompt the user that it can be used. If the user has used the operation, the user has already tried it, and no prompt is needed.
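
The sketch below illustrates formula (1) end to end on toy data, additionally assuming (as a naive Bayes classifier does) that the material feature value and the current clipping operation are conditionally independent given the next operation; the records, feature names, and operation names are all invented for illustration.

```python
# Minimal sketch: compute posteriors per formula (1) from counts and pick the MAP operation.
from collections import Counter, defaultdict

records = [
    # (material feature value, current operation, next operation) from historical buried-point data
    ("video_1080p", "crop", "add_subtitle"),
    ("video_1080p", "crop", "add_subtitle"),
    ("video_1080p", "crop", "add_music"),
    ("picture_hd",  "add_filter", "add_sticker"),
]

prior = Counter(next_op for _, _, next_op in records)
feat_given_next = defaultdict(Counter)
op_given_next = defaultdict(Counter)
for feat, cur_op, next_op in records:
    feat_given_next[next_op][feat] += 1
    op_given_next[next_op][cur_op] += 1

def posterior(feat: str, cur_op: str) -> dict[str, float]:
    """P(next op | feat, cur_op) assuming feat and cur_op are independent given the next op."""
    scores = {}
    total = sum(prior.values())
    for next_op, n in prior.items():
        p_feat = feat_given_next[next_op][feat] / n
        p_op = op_given_next[next_op][cur_op] / n
        scores[next_op] = p_feat * p_op * (n / total)
    # normalize so the scores sum to 1 (the denominator of formula (1))
    z = sum(scores.values()) or 1.0
    return {k: v / z for k, v in scores.items()}

post = posterior("video_1080p", "crop")
target = max(post, key=post.get)          # maximum a posteriori recommendation
print(post, "->", target)
```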

In the above exemplary embodiment, the user's clip materials and clipping operations in the video clipping software can be associated, and the association between the clip material features and the clipping operations can be analyzed through the Bayesian principle. Finally, the obtained Bayesian classifier is used to recommend the next operation to the user. Therefore, the clipping functions in the video clipping software can be fully exposed and used, clipping becomes easier and simpler, users can more easily master the clipping software, and clipping efficiency is improved.

It should be understood that although the steps in the flowcharts of FIGS. 1-6 and 8 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-6 and 8 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.

Fig. 9 is a block diagram illustrating a clip recommending apparatus according to an exemplary embodiment. Referring to fig. 9, the apparatus includes a data acquisition unit 901, a clip prediction unit 902, and a clip recommendation unit 903:

a data acquisition unit 901 configured to acquire, in response to a current clipping operation performed by a current user on a clip material, a material attribute feature of the clip material;

a clipping prediction unit 902 configured to obtain, according to the material attribute characteristics and the current clipping operation, a next clipping operation associated with the clip material and the current clipping operation, so as to obtain a target clipping operation;

a clip recommending unit 903 configured to perform a saliency display of an operation entry corresponding to a target clip operation in response to the target clip operation and a clip operation record belonging to a current user.

In an exemplary embodiment, the clip prediction unit 902 is further configured to perform: inputting the material attribute characteristics and the current editing operation into a preset editing recommendation model to obtain at least one recommended editing operation and at least one corresponding editing operation probability; the editing recommendation model is used for representing material attribute characteristics and the corresponding relation between the current editing operation and the next editing operation; and determining the target clipping operation from the at least one recommended clipping operation according to the at least one recommended clipping operation and the at least one clipping operation probability corresponding to the at least one recommended clipping operation.

In an exemplary embodiment, the clip prediction unit 902 is further configured to perform: determining a maximum a posteriori probability from the at least one clipping operation probability; and determining the recommended clipping operation corresponding to the maximum posterior probability as the target clipping operation.

In an exemplary embodiment, the clip prediction unit 902 is further configured to perform: acquiring historical material attribute characteristics, a first clipping operation and a second clipping operation corresponding to historical clipping material; wherein the first clipping operation is an initial clipping operation performed on the historical clipping material, and the second clipping operation is the next clipping operation following the initial clipping operation; and training a preset initial clipping recommendation model by taking the historical material attribute characteristics and the first clipping operation as input and taking the second clipping operation as supervision information, to obtain the clipping recommendation model.
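Continuing the sketch above, the historical training data described here could be laid out as triples of (historical material attribute features, first clipping operation, second clipping operation) and fed to the hypothetical NextOpRecommender; every field name and value below is illustrative rather than taken from the disclosure.

# Hypothetical training records: features + first operation are the input,
# the second (next) operation is the supervision signal.
history = [
    ({"duration": "short", "resolution": "1080p", "format": "mp4"}, "trim", "add_music"),
    ({"duration": "long",  "resolution": "720p",  "format": "mp4"}, "trim", "speed_up"),
    ({"duration": "none",  "resolution": "4k",    "format": "jpg"}, "crop", "add_filter"),
]

recommender = NextOpRecommender()   # the sketch class defined earlier
recommender.fit(history)

# Inference: recommend the next operation for a new material and its current operation.
target_op = recommender.recommend(
    {"duration": "short", "resolution": "1080p", "format": "mp4"}, "trim")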

In an exemplary embodiment, the clip material includes at least one of a picture, a video and an audio; the attribute characteristics of the picture include a resolution, the attribute characteristics of the video include a duration and a resolution, and the attribute characteristics of the audio include a duration. The data acquisition unit 901 is further configured to perform: when the clip material is a picture, acquiring a clipping resolution of the clip material from the resolution of the picture; when the clip material is a video, acquiring a clipping duration of the clip material from the duration of the video, and acquiring a clipping resolution of the clip material from the resolution of the video; when the clip material is an audio, acquiring a clipping duration of the clip material from the duration of the audio; and determining the material attribute characteristics according to the clipping resolution and the clipping duration.
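As an illustration of the per-type extraction just described, the sketch below selects the clipping resolution and clipping duration according to the material type; the MaterialInfo data class and the returned dictionary keys are assumptions of this example, not the disclosed data model.

from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class MaterialInfo:
    kind: str                          # "picture", "video" or "audio"
    resolution: Optional[str] = None   # e.g. "1920x1080"
    duration: Optional[float] = None   # seconds


def extract_attributes(material: MaterialInfo) -> Dict[str, Optional[object]]:
    """Pick the clipping resolution / duration according to the material type."""
    if material.kind == "picture":
        return {"resolution": material.resolution, "duration": None}
    if material.kind == "video":
        return {"resolution": material.resolution, "duration": material.duration}
    if material.kind == "audio":
        return {"resolution": None, "duration": material.duration}
    raise ValueError(f"unsupported material type: {material.kind}")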

In an exemplary embodiment, the clipping resolution includes an initial resolution and a current resolution, the clipping duration includes an initial duration and a current duration, and the attribute characteristics of the picture, the video and the audio each include a corresponding format. The data acquisition unit 901 is further configured to perform: normalizing the duration of the clip material according to the initial duration and the current duration to obtain a normalized duration; normalizing the resolution of the clip material according to the initial resolution and the current resolution to obtain a normalized resolution; and acquiring characteristic values of the normalized duration, the normalized resolution and the format to obtain the material attribute characteristics.
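The disclosure does not spell out the normalization formula, so the sketch below adopts one plausible reading purely as an assumption: each current value is divided by the corresponding initial value of the same material, and the format is carried along as-is. If such continuous values were fed to the categorical classifier sketched earlier, they would first need to be discretized, for example binned into ranges such as short/medium/long.

def normalize(current: float, initial: float) -> float:
    """Return current / initial, guarding against a zero or missing initial value."""
    if not initial:
        return 0.0
    return current / initial


def build_feature_values(initial_duration: float, current_duration: float,
                         initial_resolution: float, current_resolution: float,
                         fmt: str) -> dict:
    # Resolution is assumed to be reduced to a single number (e.g. total pixel count)
    # before normalization; this reduction and the key names are illustrative choices.
    return {
        "norm_duration": normalize(current_duration, initial_duration),
        "norm_resolution": normalize(current_resolution, initial_resolution),
        "format": fmt,
    }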

In an exemplary embodiment, the clip recommending unit 903 is further configured to perform: acquiring a clipping operation database corresponding to the current user, wherein the clipping operations in the clipping operation database are the clipping operations that have been used by the current user; and when it is detected that no operation corresponding to the target clipping operation exists in the clipping operation database, performing saliency display of the operation entry corresponding to the target clipping operation.
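On the client side, the lookup in the per-user clipping operation database might look like the following sketch, which assumes the record of already-used operations lives in a local SQLite table named used_operations; the table schema and both function names are hypothetical.

import sqlite3


def should_highlight(db_path: str, target_op: str) -> bool:
    """Highlight the operation entry only if the current user has never used target_op."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS used_operations (name TEXT PRIMARY KEY)")
        row = conn.execute(
            "SELECT 1 FROM used_operations WHERE name = ?", (target_op,)).fetchone()
    return row is None


def record_use(db_path: str, op: str) -> None:
    """Record an operation once the user performs it, so it is no longer highlighted."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("INSERT OR IGNORE INTO used_operations (name) VALUES (?)", (op,))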

In an exemplary embodiment, the clip recommending unit 903 is further configured to perform: clipping the clip material according to the target clipping operation to obtain a target clipping effect corresponding to the target clipping operation, and displaying the target clipping effect.

With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

FIG. 10 is a block diagram illustrating an apparatus Z00 for clip recommendation, according to an example embodiment. For example, device Z00 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.

Referring to fig. 10, device Z00 may include one or more of the following components: a processing component Z02, a memory Z04, a power component Z06, a multimedia component Z08, an audio component Z10, an input/output (I/O) interface Z12, a sensor component Z14 and a communication component Z16.

The processing component Z02 generally controls the overall operation of the device Z00, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component Z02 may include one or more processors Z20 to execute instructions to perform all or part of the steps of the method described above. Further, the processing component Z02 may include one or more modules that facilitate interaction between the processing component Z02 and other components. For example, the processing component Z02 may include a multimedia module to facilitate interaction between the multimedia component Z08 and the processing component Z02.

The memory Z04 is configured to store various types of data to support operations at device Z00. Examples of such data include instructions for any application or method operating on device Z00, contact data, phonebook data, messages, pictures, videos, etc. The memory Z04 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.

The power supply component Z06 provides power to the various components of the device Z00. The power component Z06 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device Z00.

The multimedia component Z08 includes a screen that provides an output interface between the device Z00 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component Z08 includes a front-facing camera and/or a rear-facing camera. When the device Z00 is in an operating mode, such as a capture mode or a video mode, the front-facing camera and/or the rear-facing camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.

The audio component Z10 is configured to output and/or input an audio signal. For example, the audio component Z10 includes a Microphone (MIC) configured to receive external audio signals when the device Z00 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory Z04 or transmitted via the communication component Z16. In some embodiments, the audio component Z10 further includes a speaker for outputting audio signals.

The I/O interface Z12 provides an interface between the processing component Z02 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.

The sensor assembly Z14 includes one or more sensors for providing status assessments of various aspects of the device Z00. For example, the sensor assembly Z14 may detect the open/closed state of the device Z00 and the relative positioning of components, such as the display and keypad of the device Z00. The sensor assembly Z14 may also detect a change in the position of the device Z00 or of one of its components, the presence or absence of user contact with the device Z00, the orientation or acceleration/deceleration of the device Z00, and a change in the temperature of the device Z00. The sensor assembly Z14 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly Z14 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly Z14 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component Z16 is configured to facilitate wired or wireless communication between device Z00 and other devices. Device Z00 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component Z16 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component Z16 further includes a Near Field Communication (NFC) module to facilitate short-range communications.

In an exemplary embodiment, device Z00 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.

In an exemplary embodiment, there is also provided a computer readable storage medium, such as the memory Z04, comprising instructions executable by the processor Z20 of the device Z00 to perform the above method. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

In an exemplary embodiment, a computer program product is also provided, the program product comprising a computer program stored in a readable storage medium, from which at least one processor of a device reads and executes the computer program, causing the device to perform the method in the above-described embodiments.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
