Video special effect detection method and device, computer equipment and storage medium

Document No.: 196219  Publication date: 2021-11-02  Views: 18

Reading note: This technology, "Video special effect inspection method and device, computer equipment and storage medium", was designed and created by Chen Yufa, Long Zuyuan, and Xie Zongxing on 2021-01-25. Its main content is as follows: The application relates to a method and device for inspecting video special effects, computer equipment and a storage medium. The method comprises: acquiring a reference video frame extracted from a standard video that conforms to the effect corresponding to a target special effect, the standard video being obtained by adding the target special effect to an initial video, and the reference video frame matching the effective range of the target special effect in the standard video; taking the effect corresponding to the target special effect as the expected effect, performing special effect addition processing on the initial video to obtain a special effect video to be inspected; extracting an inspection video frame matching the effective range from the special effect video; and comparing the inspection video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect inspection result of the special effect video according to the first comparison result. The method of the application can improve the efficiency and accuracy of video special effect inspection.

1. A method for verifying a video special effect, the method comprising:

acquiring a reference video frame extracted from a standard video conforming to a corresponding effect of a target special effect;

the standard video is obtained by adding the target special effect to the initial video; the reference video frame is matched with the effective range of the target special effect in the standard video;

taking the corresponding effect of the target special effect as an expected effect, and carrying out special effect adding processing on the initial video to obtain a special effect video to be checked;

extracting an inspection video frame matching the effective range from the special effect video;

and comparing the inspection video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect inspection result of the special effect video according to the first comparison result.

2. The method according to claim 1, wherein the performing special effect addition processing on the initial video with the effect corresponding to the target special effect as the expected effect to obtain a special effect video to be inspected comprises:

detecting the video content of the initial video with the effect corresponding to the target special effect as the expected effect, and, when trigger content corresponding to the target special effect is detected, performing special effect addition processing on the initial video to obtain the special effect video to be inspected.

3. The method according to claim 2, wherein the trigger content corresponding to the target special effect comprises a target object; the determination step of the effective range of the target special effect comprises the following steps:

acquiring the appearance time period of the target object;

and determining the appearance time period of the target object as the effective range of the target special effect.

4. The method according to claim 2 or 3, wherein the trigger content corresponding to the target special effect comprises a target action corresponding to a target object; the determination step of the effective range of the target special effect comprises the following steps:

acquiring the occurrence time period of the target action;

and determining a preset time after the occurrence time period of the target action as an effective range of the target special effect.

5. The method of claim 4, wherein the target action comprises a target facial-feature action of the target object; the acquiring the occurrence time period of the target action comprises:

acquiring the occurrence time period of the target facial-feature action;

the determining a preset time after the occurrence time period of the target action as the effective range of the target special effect comprises:

determining the last unit time corresponding to the target facial-feature action according to the occurrence time period of the target facial-feature action;

acquiring the preset duration of the target special effect;

and determining, as the effective range of the target special effect, a preset time period whose start time is the unit time immediately following the last unit time corresponding to the target facial-feature action and whose duration matches the preset duration.

6. The method according to claim 2, wherein the trigger content corresponding to the target special effect comprises a target limb action corresponding to a target object; the determination step of the effective range of the target special effect comprises the following steps:

acquiring the occurrence time period of the target limb action, and acquiring the video frame rate of the initial video;

determining the frame number of the last video frame of the target limb action according to the occurrence time period of the target limb action and the video frame rate of the initial video;

determining the frame number of the next video frame according to the frame number of the last video frame of the target limb action, to obtain the initial frame number of the target special effect;

acquiring the preset duration of the target special effect, and determining the number of video frames of the target special effect according to the preset duration and the video frame rate of the initial video;

and determining the video frame sequence-number range of the target special effect according to the initial frame number of the target special effect and the number of video frames of the target special effect, and taking the video frame sequence-number range of the target special effect as the effective range of the target special effect.
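The frame bookkeeping described in claim 6 can be sketched as simple arithmetic on the action's end time, the video frame rate, and the effect's preset duration. The function and parameter names below are illustrative, not from the application, and 0-based frame timing is assumed since the claim fixes no indexing convention:

```python
def effect_frame_range(action_end_s: float, fps: float, effect_duration_s: float):
    """Map a limb action's end time and the effect's preset duration to a
    video-frame sequence-number range (a sketch of the steps in claim 6)."""
    # Frame number of the last video frame of the target limb action.
    last_action_frame = int(action_end_s * fps)
    # The target special effect starts on the next frame.
    start_frame = last_action_frame + 1
    # Number of video frames the effect spans, from its preset duration.
    effect_frames = int(effect_duration_s * fps)
    return start_frame, start_frame + effect_frames - 1

# Example: action ends at 3.0 s in a 30 fps video, effect lasts 0.5 s.
print(effect_frame_range(3.0, 30, 0.5))  # → (91, 105)
```

The same range is then used both to extract reference video frames from the standard video and inspection video frames from the special effect video.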

7. The method according to claim 2 or 3, wherein the trigger content corresponding to the target special effect comprises a target action corresponding to a target object; the generating of the initial video comprises:

acquiring a basic video frame containing the target object;

acquiring an action video frame corresponding to the target action; in the action video frame, the target object performs the target action;

and generating the initial video according to the basic video frame and the action video frame.

8. The method of claim 7, wherein the generating the initial video from the basic video frame and the action video frame comprises:

acquiring the video frame rate and preset duration of the initial video, and determining the total frame number of the initial video according to the video frame rate and the preset duration of the initial video;

acquiring the occurrence time period of the target action, and determining a first target frame number of the action video frames according to the occurrence time period of the target action and the video frame rate;

acquiring the difference between the total frame number of the initial video and the first target frame number of the action video frames, to obtain a second target frame number of the basic video frames;

and generating, from the action video frame, video frames of the first target frame number as the video frames within the occurrence time period of the target action, and generating, from the basic video frame, video frames of the second target frame number as the video frames outside the occurrence time period of the target action, to generate the initial video.
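The frame counts in claim 8 reduce to arithmetic on the frame rate, the total duration, and the action's occurrence period. A minimal sketch, with names that are assumptions rather than terms from the application:

```python
def plan_initial_video(fps: float, total_duration_s: float,
                       action_start_s: float, action_end_s: float):
    """Frame bookkeeping for synthesising the initial video (claim 8 sketch):
    action video frames fill the action's occurrence period, basic video
    frames fill the remainder."""
    total_frames = int(fps * total_duration_s)
    # First target frame number: frames covered by the target action.
    first_target = int(fps * (action_end_s - action_start_s))
    # Second target frame number: the remaining frames use the basic frame.
    second_target = total_frames - first_target
    return total_frames, first_target, second_target

# Example: 10 s video at 30 fps, action occurring from 3.0 s to 3.5 s.
print(plan_initial_video(30, 10, 3.0, 3.5))  # → (300, 15, 285)
```

Repeating a still basic frame and a short action clip in this way yields a deterministic test input, which is what makes the later frame-by-frame comparison reliable.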

9. The method of claim 1, wherein prior to the obtaining a special effect inspection result of the special effect video from the first comparison result, the method further comprises:

acquiring a contrast video frame extracted from the standard video;

the contrast video frame matches the contrast range of the target special effect in the standard video; the contrast range of the target special effect is before the occurrence range of the trigger content in the initial video;

extracting an inspection video frame matching the contrast range from the special effect video;

comparing the inspection video frame matching the contrast range with the contrast video frame to obtain a second comparison result;

the obtaining a special effect inspection result of the special effect video according to the first comparison result comprises:

and obtaining the special effect inspection result of the special effect video according to the first comparison result and the second comparison result.

10. The method according to claim 9, wherein the initial video includes trigger content corresponding to a target special effect; the trigger content corresponding to the target special effect comprises a target action corresponding to a target object;

prior to the acquiring a contrast video frame extracted from the standard video, the method further comprises:

acquiring the occurrence time period of the target action;

and determining a preset time period before the occurrence time period of the target action as a comparison range of the target special effect.
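The contrast range of claim 10 is a window immediately before the action's occurrence period, during which the effect should not yet be active. A sketch, where the window length is an assumed parameter:

```python
def contrast_range(action_start_s: float, window_s: float):
    """A preset time period immediately before the target action's
    occurrence period, used as the contrast range (claim 10 sketch)."""
    # Clamp at 0 so the window never extends before the start of the video.
    return max(0.0, action_start_s - window_s), action_start_s

# Example: action starts at 3.0 s, contrast window of 0.5 s before it.
print(contrast_range(3.0, 0.5))  # → (2.5, 3.0)
```

Frames from this range should match between the special effect video and the standard video regardless of whether the effect fires, which is why they serve as a second, negative check.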

11. The method of claim 1, wherein the comparing the inspection video frame with the reference video frame to obtain a first comparison result comprises:

calculating a similarity between the inspection video frame and the reference video frame;

taking the calculated similarity as the comparison result of the inspection video frame and the reference video frame, to obtain the first comparison result;

the obtaining a special effect inspection result of the special effect video according to the first comparison result comprises:

and determining the special effect inspection result of the special effect video according to the magnitude relationship between the similarity and a preset similarity threshold.

12. The method of claim 11, wherein the calculating a similarity between the inspection video frame and the reference video frame comprises:

acquiring a target initial frame corresponding to the target special effect;

updating the pixel values of the inspection video frame with the pixel differences between the inspection video frame and the target initial frame;

eliminating, according to the magnitude relationship between the updated pixel values of the inspection video frame and a preset pixel-difference threshold, the content of the inspection video frame that is similar to the target initial frame, to obtain a difference image corresponding to the inspection video frame;

updating the pixel values of the reference video frame with the pixel differences between the reference video frame and the target initial frame;

eliminating, according to the magnitude relationship between the updated pixel values of the reference video frame and the preset pixel-difference threshold, the content of the reference video frame that is similar to the target initial frame, to obtain a difference image corresponding to the reference video frame;

and determining the similarity between the inspection video frame and the reference video frame based on the similarity between the difference image corresponding to the inspection video frame and the difference image corresponding to the reference video frame.
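The difference-image step of claim 12 can be sketched as follows. The threshold value and function name are illustrative assumptions; the application does not fix them:

```python
import numpy as np

def difference_image(frame: np.ndarray, base_frame: np.ndarray,
                     diff_threshold: int = 30) -> np.ndarray:
    """Replace each pixel with its absolute difference from the target
    initial frame, then zero out differences below a preset threshold so
    that only the special-effect region remains (claim 12 sketch)."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - base_frame.astype(np.int16))
    # Content similar to the target initial frame is eliminated.
    diff[diff < diff_threshold] = 0
    return diff.astype(np.uint8)

# Example: a frame identical to the initial frame except one "effect" pixel.
base = np.full((2, 2, 3), 100, dtype=np.uint8)
frame = base.copy()
frame[0, 0] = 200
print(difference_image(frame, base)[0, 0, 0])  # → 100
print(difference_image(frame, base)[1, 1, 0])  # → 0
```

Applying the same operation to both the inspection video frame and the reference video frame isolates the effect itself, so the subsequent similarity measure is not dominated by the shared background.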

13. The method of claim 12, wherein the determining the similarity between the inspection video frame and the reference video frame based on the similarity between the difference image corresponding to the inspection video frame and the difference image corresponding to the reference video frame comprises:

performing histogram calculation on the difference image corresponding to the inspection video frame to obtain a first gray-level histogram;

performing histogram calculation on the difference image corresponding to the reference video frame to obtain a second gray-level histogram;

calculating, for each gray level of each color channel, the pixel-count difference between the first gray-level histogram and the second gray-level histogram, and calculating the similarity between the first gray-level histogram and the second gray-level histogram based on the calculated pixel-count differences;

and taking the calculated similarity as the similarity between the inspection video frame and the reference video frame.
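One way to realise the per-channel histogram comparison of claim 13 is sketched below. The normalisation of the pixel-count differences into a [0, 1] similarity is an assumption, since the claim does not fix the exact formula:

```python
import numpy as np

def histogram_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Compare two difference images channel by channel via 256-bin
    gray-level histograms; similarity is 1 minus the normalized sum of
    per-bin pixel-count differences (claim 13 sketch)."""
    n_pixels = img_a.shape[0] * img_a.shape[1]
    channels = img_a.shape[2]
    total_diff = 0
    for c in range(channels):
        hist_a, _ = np.histogram(img_a[..., c], bins=256, range=(0, 256))
        hist_b, _ = np.histogram(img_b[..., c], bins=256, range=(0, 256))
        total_diff += np.abs(hist_a - hist_b).sum()
    # 2 * n_pixels * channels is the largest possible total difference.
    return 1.0 - total_diff / (2.0 * n_pixels * channels)

a = np.zeros((4, 4, 3), dtype=np.uint8)   # all-black difference image
b = np.full((4, 4, 3), 255, dtype=np.uint8)  # all-white difference image
print(histogram_similarity(a, a))  # → 1.0
print(histogram_similarity(a, b))  # → 0.0
```

A histogram comparison of this kind is insensitive to small positional shifts of the effect, which suits a pass/fail check against a preset similarity threshold.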

14. An apparatus for verifying a video effect, the apparatus comprising:

the reference video frame acquisition module is used for acquiring a reference video frame extracted from a standard video conforming to the corresponding effect of the target special effect; the standard video is obtained by adding the target special effect to the initial video; the reference video frame is matched with the effective range of the target special effect in the standard video;

the special effect addition processing module is used for performing special effect addition processing on the initial video with the effect corresponding to the target special effect as the expected effect, to obtain a special effect video to be inspected;

the inspection video frame extraction module is used for extracting, from the special effect video, the inspection video frame matching the effective range;

and the comparison module is used for comparing the inspection video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect inspection result of the special effect video according to the first comparison result.

15. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 13 when executing the computer program.

16. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 13.

Technical Field

The present application relates to the field of image processing technologies, and in particular, to a video special effect inspection method and apparatus, a computer device, and a storage medium.

Background

With the development of image processing technology, users' requirements for the diversity of multimedia content have gradually increased. Users can apply various special effects when editing images or videos to make multimedia content richer and more colorful, so whether the special effect function can be realized normally in practical applications is a key factor in guaranteeing user experience.

In the related art, whether the special effect function is normal is usually checked at the implementation-code level of the special effect function. However, this method is not only inefficient but also constrained by the technical level of the code analyst, so its accuracy is difficult to guarantee.

Disclosure of Invention

In view of the above, it is necessary to provide a method, an apparatus, a computer device and a storage medium for inspecting a video special effect, which can improve the efficiency and accuracy of inspecting the video special effect.

A method of verifying a video effect, the method comprising:

acquiring a reference video frame extracted from a standard video conforming to a corresponding effect of a target special effect;

the standard video is obtained by adding the target special effect to the initial video; the reference video frame is matched with the effective range of the target special effect in the standard video;

taking the corresponding effect of the target special effect as an expected effect, and carrying out special effect adding processing on the initial video to obtain a special effect video to be checked;

extracting an inspection video frame matching the effective range from the special effect video;

and comparing the inspection video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect inspection result of the special effect video according to the first comparison result.

An apparatus for verifying video effects, the apparatus comprising:

the reference video frame acquisition module is used for acquiring a reference video frame extracted from a standard video conforming to the corresponding effect of the target special effect; the standard video is obtained by adding the target special effect to the initial video; the reference video frame is matched with the effective range of the target special effect in the standard video;

the special effect addition processing module is used for performing special effect addition processing on the initial video with the effect corresponding to the target special effect as the expected effect, to obtain a special effect video to be inspected;

the inspection video frame extraction module is used for extracting, from the special effect video, the inspection video frame matching the effective range;

and the comparison module is used for comparing the inspection video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect inspection result of the special effect video according to the first comparison result.

A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:

acquiring a reference video frame extracted from a standard video conforming to a corresponding effect of a target special effect;

the standard video is obtained by adding the target special effect to the initial video; the reference video frame is matched with the effective range of the target special effect in the standard video;

taking the corresponding effect of the target special effect as an expected effect, and carrying out special effect adding processing on the initial video to obtain a special effect video to be checked;

extracting an inspection video frame matching the effective range from the special effect video;

and comparing the inspection video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect inspection result of the special effect video according to the first comparison result.

A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:

acquiring a reference video frame extracted from a standard video conforming to a corresponding effect of a target special effect;

the standard video is obtained by adding the target special effect to the initial video; the reference video frame is matched with the effective range of the target special effect in the standard video;

taking the corresponding effect of the target special effect as an expected effect, and carrying out special effect adding processing on the initial video to obtain a special effect video to be checked;

extracting an inspection video frame matching the effective range from the special effect video;

and comparing the inspection video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect inspection result of the special effect video according to the first comparison result.

According to the above method, apparatus, computer device, and storage medium for inspecting video special effects, a reference video frame is extracted from a standard video that conforms to the effect corresponding to the target special effect. Because the reference video frame matches the effective range of the target special effect, it accurately presents the effect corresponding to the target special effect. The terminal then performs special effect addition processing on the initial video, with the effect corresponding to the target special effect as the expected effect, to obtain a special effect video to be inspected, and extracts from the special effect video an inspection video frame matching the effective range. When the special effect function of the terminal is normal, the inspection video frame likewise accurately presents the effect corresponding to the target special effect, so the inspection video frame can be compared with the reference video frame to obtain a first comparison result, and a special effect inspection result of the special effect video is obtained from the first comparison result. This realizes automatic inspection of the special effect addition function. Compared with the related-art approach of checking whether the special effect function is normal at the code implementation level, this automatic inspection is not only more efficient but also free from the constraints of a code analyst's technical level, and therefore more accurate.

Drawings

FIG. 1 is a diagram of an exemplary environment in which a method for video effect detection is implemented;

FIG. 2 is a flowchart illustrating a method for checking video effects according to an embodiment;

FIG. 3 is a schematic flow chart of the step of generating the initial video frame in one embodiment;

FIG. 4 is a flow chart illustrating the verification of video effects in another embodiment;

FIG. 5 is a histogram of a difference map corresponding to a test video frame in one embodiment;

FIG. 6A is a diagram of a base video frame, in one embodiment;

FIG. 6B is a diagram of action video frames in one embodiment;

FIG. 6C is a diagram illustrating effects corresponding to a target effect in one embodiment;

FIG. 7A is a diagram illustrating special effects when a face is recognized according to an embodiment;

FIG. 7B is a diagram illustrating a special effects effect triggered upon detection of the occurrence of a blinking motion, in one embodiment;

FIG. 7C is a diagram of a difference map in one embodiment;

FIG. 8A is a diagram of an action video frame in accordance with another embodiment;

FIG. 8B is a block diagram showing the structure of a video special effect inspection apparatus in one embodiment;

FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines can perceive, reason, and make decisions.

Artificial intelligence technology is a comprehensive discipline that covers a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.

Computer Vision (CV) is a science that studies how to make a machine "see": using cameras and computers in place of human eyes to recognize, track, and measure targets, and performing further image processing so that the result is an image better suited to human observation or to transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, as well as common biometric technologies such as face recognition and fingerprint recognition.

Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specially studies how a computer can simulate or realize human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.

With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.

The scheme provided by the embodiment of the application relates to the computer vision technology of artificial intelligence and the like, and is specifically explained by the following embodiment:

the method for inspecting a video special effect provided by the present application can be applied in the application environment shown in fig. 1, where the terminal 102 communicates with the server 104 through a network, and the terminal 102 includes a plurality of terminals 102a, 102b, and so on. The terminal 102 may be provided with a video client, and the server 104 is a background server corresponding to the video client and stores an initial video and a standard video, where the standard video is a video conforming to the effect corresponding to a target special effect and is obtained by adding the target special effect to the initial video. The terminal can acquire the initial video and the standard video from the server, extract from the standard video a video frame matching the effective range of the target special effect as a reference video frame, perform special effect addition processing on the initial video with the effect corresponding to the target special effect as the expected effect to obtain a special effect video to be inspected, further extract from the special effect video a video frame matching the effective range of the target special effect as an inspection video frame, compare the inspection video frame with the reference video frame to obtain a first comparison result, and obtain a special effect inspection result of the special effect video according to the first comparison result.

The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.

In one embodiment, as shown in fig. 2, a method for checking a video special effect is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:

step 202, acquiring a reference video frame extracted from a standard video conforming to a corresponding effect of a target special effect; the standard video is obtained by adding a target special effect to the initial video; the reference video frame is matched with the effective range of the target special effect in the standard video.

A special effect here refers to a video special effect: a specific pattern or animation added to video frames. For example, glasses may be added to a face in a video frame containing the face; as another example, a snowing animation may be added to a video frame containing a face. When the video special effect addition function is normal, adding the special effect is usually triggered by specific content in the video. For example, in a video frame containing a human face, a video special effect may be triggered by the face. The target special effect refers to the special effect whose addition needs to be checked for normality. The effect corresponding to the target special effect refers to the graphical effect presented after the target special effect is normally added. It can be understood that when the video special effect is a pattern, the effect corresponding to the target special effect is the image effect presented by the pattern; when the video special effect is an animation, the effect corresponding to the target special effect is the animation effect presented by the animation.

The standard video is a video whose special effect addition effect conforms to the effect corresponding to the target special effect. Since its special effect addition effect conforms to the effect corresponding to the target special effect, the standard video can serve as a reference for checking the special effect addition function of the terminal. The standard video is obtained by adding the target special effect to the initial video. The target special effect may be added to the initial video manually by a technician, or added automatically after the initial video is read by a video application program; in the latter case, to guarantee the accuracy of the check, the automatically added special effect can be confirmed manually to ensure that the special effect addition effect of the standard video conforms to the effect corresponding to the target special effect. The initial video refers to an original video to which no special effect has been added; generally, the initial video contains specific content capable of triggering the target special effect.

In one embodiment, the initial video may be a video gathered from the internet that contains specific content that can trigger the target special effect. For example, if the specific content capable of triggering the target special effect is a human face, the initial video is a video containing the human face.

It can be understood that when there are multiple target special effects, multiple special effects can be added to the initial video to obtain a standard video; or each target special effect can be added in the initial video respectively to obtain a standard video of each target special effect, wherein the standard video only comprises one target special effect.

The reference video frame matches the effective range of the target special effect in the standard video. The effective range may be an effective time range, or a sequence number range of the video frames in which the target special effect is in effect. For example, the effective range of a target special effect may be 3.0 seconds to 3.2 seconds, or frames 90 to 105 of the video. Correspondingly, a reference video frame matches the effective range when its timestamp lies within the effective time range, or its frame sequence number lies within the effective frame number range. In the examples above, when the effective range is 3.0 seconds to 3.2 seconds, the time of the reference video frame must fall within that period; when the effective range is frames 90 to 105, its frame number must fall between 90 and 105.
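The two kinds of effective-range matching described above can be sketched in a few lines (a minimal illustration; the function and variable names are assumptions, not part of the method):

```python
def in_time_range(frame_time, start_s, end_s):
    """Check whether a frame's timestamp (seconds) falls inside the
    special effect's effective time range (inclusive)."""
    return start_s <= frame_time <= end_s

def in_frame_range(frame_no, start_frame, end_frame):
    """Check whether a frame's sequence number falls inside the
    effective frame-number range (inclusive)."""
    return start_frame <= frame_no <= end_frame

# Matching the examples above: a 3.1-second frame matches the
# 3.0 s - 3.2 s range, and frame 100 matches the 90-105 range.
print(in_time_range(3.1, 3.0, 3.2))   # True
print(in_frame_range(100, 90, 105))   # True
print(in_frame_range(110, 90, 105))   # False
```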

Specifically, a video client is provided on the terminal; it may be a web page client or an APP (Application) client and has a special effect adding function. The terminal checks the client's special effect adding function at a preset period. When a check is carried out, the terminal extracts the reference video frame from the standard video as the basis of the special effect check.

In one embodiment, the terminal acquires the initial video and the standard video from the server before the first verification, stores the initial video and the standard video to the local, and can acquire the initial video and the standard video from the local and extract the video frame matched with the effective range of the target special effect from the standard video as the reference video frame during each verification. In other embodiments, the terminal may further store the reference video frame to the local when the reference video frame is extracted for the first time, so that the terminal may directly obtain the reference video frame from the local in the subsequent inspection, thereby improving the efficiency of the special effect inspection.

It is understood that, in other embodiments, in order to save local storage space, the terminal may not locally store the initial video and the standard video, and send a request to the server during each special effect check, obtain the initial video and the standard video, and extract a video frame matching the effective range of the target special effect from the standard video as a reference video frame.

Step 204: taking the effect corresponding to the target special effect as the expected effect, performing special effect adding processing on the initial video to obtain the special effect video to be checked.

Specifically, the terminal performs special effect adding processing on the initial video, taking the effect corresponding to the target special effect as the expected effect, and obtains the special effect video to be checked. At this point the special effect video has merely been through the adding processing; whether the target special effect was actually added normally is unknown, and the video must be checked against the reference video frame extracted from the standard video.

In one embodiment, the terminal may detect video content of an initial video with a target special effect corresponding effect as an expected effect, and perform special effect adding processing on the initial video when trigger content corresponding to the target special effect is detected, so as to obtain a special effect video to be checked.

Step 206: extracting the check video frames matched with the effective range in the special effect video.

Specifically, the terminal performs the special effect check on the basis of the reference video frame. Because the reference video frame is extracted from a standard video conforming to the effect corresponding to the target special effect, and matches the special effect's effective range, it accurately presents the effect corresponding to the target special effect. If the terminal's special effect adding function is normal, the special effect video to be checked, obtained by performing special effect processing on the initial video, will likewise present the target special effect within its effective range. The terminal therefore extracts from the special effect video a video frame matching the effective range of the target special effect as the check video frame, and judges whether its special effect adding function is normal by judging whether the target special effect was added normally in the check video frame.

In one embodiment, the terminal may first determine the effective time range of the target special effect and extract, from the special effect video to be checked, a video frame whose time lies within that range as the frame matching the effective range, obtaining the check video frame. For example, if the effective time range of the target special effect is 3.0 s to 3.2 s, any frame of the special effect video within that period can be used as the check video frame. In a specific implementation, to ensure the accuracy of the subsequent comparison, the terminal may obtain the exact time of the reference video frame and take, from the special effect video to be checked, the video frame with the same exact time as the check video frame.
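The time-matched extraction above can be sketched as follows, assuming decoded frames are available as (timestamp, frame) pairs; the names and the tolerance value are illustrative, not part of the method:

```python
def extract_check_frame(frames, ref_time, tol=1e-3):
    """Pick the frame from the effect video whose timestamp matches the
    reference frame's exact time (within a small tolerance).
    `frames` is a list of (timestamp_seconds, frame_data) pairs."""
    for t, frame in frames:
        if abs(t - ref_time) <= tol:
            return frame
    return None  # no frame matches the reference time

# Toy effect video with three frames inside the 3.0 s - 3.2 s range.
video = [(3.0, "frame90"), (3.1, "frame93"), (3.2, "frame96")]
print(extract_check_frame(video, 3.1))  # frame93
```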

In another embodiment, the terminal may first determine the sequence number range of the video frames in which the target special effect is in effect, and extract from the special effect video to be checked a video frame whose sequence number lies within that range as the frame matching the effective range, obtaining the check video frame. For example, if the effective range of the target special effect is frames 90 to 105, any one of frames 90 to 105 of the special effect video may be used as the check video frame. In a specific implementation, to ensure the accuracy of the subsequent comparison, the terminal may obtain the exact sequence number of the reference video frame and take, from the special effect video to be checked, the video frame with the same sequence number as the check video frame.

Step 208: comparing the check video frame with the reference video frame to obtain a first comparison result, and obtaining the special effect check result of the special effect video according to the first comparison result.

Specifically, the standard video and the special effect video to be checked are obtained by adding the same special effect to the same initial video, and the special effect addition of the standard video conforms to the effect corresponding to the target special effect. Therefore, when the terminal's special effect adding function is normal, the check video frame extracted from the effective range of the target special effect in the special effect video presents a special effect similar to that of the reference video frame extracted from the effective range in the standard video. The terminal can thus compare the check video frame with the reference video frame and obtain the special effect check result of the special effect video from the comparison result. The check result is one of two kinds: special effect addition on the terminal is abnormal, or it is normal.

In one embodiment, the terminal may compare the similarity between the verification video frame and the reference video frame when the verification video frame and the reference video frame are compared. In other embodiments, when the terminal compares the verification video frame with the reference video frame, the difference between the verification video frame and the reference video frame may also be compared.
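As a sketch of the similarity comparison, one simple metric is the mean absolute pixel difference mapped to a 0-1 score, with a threshold deciding the first comparison result. Both the metric and the 0.95 threshold are illustrative assumptions, not prescribed by the method:

```python
def frame_similarity(frame_a, frame_b):
    """Mean absolute pixel difference mapped to a 0-1 similarity score.
    Frames are flat lists of 8-bit pixel values here; a real
    implementation would operate on decoded image arrays."""
    assert len(frame_a) == len(frame_b)
    mad = sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)
    return 1.0 - mad / 255.0

def effect_added_normally(check_frame, ref_frame, threshold=0.95):
    """First comparison result: treat the special effect as added
    normally when similarity exceeds a preset threshold (assumed)."""
    return frame_similarity(check_frame, ref_frame) >= threshold

ref  = [10, 20, 30, 40]
same = [10, 20, 30, 40]
diff = [200, 20, 30, 40]
print(effect_added_normally(same, ref))  # True
print(effect_added_normally(diff, ref))  # False
```

In practice a perceptual metric such as SSIM would likely be more robust than raw pixel differences; the choice is an implementation detail the patent leaves open.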

In one embodiment, the terminal may generate a result identifier characterizing the special effect check result according to the first comparison result. For example, "1" may mark a check result indicating that special effect addition on the terminal is normal, and "0" may mark a result indicating that it is abnormal.

In one embodiment, when the special effect check result indicates that special effect addition is abnormal, the terminal may generate alarm information and send it to the server, and the server may notify the relevant technical staff to repair the special effect adding function or to carry out a further manual check.

In an embodiment, when the special effect check result indicates that special effect addition is normal, the special effect video to be checked also conforms to the effect corresponding to the target special effect, so the terminal may store the special effect video from this check as the standard video for the next check.

In the above method for checking video special effects, the terminal obtains a reference video frame extracted from a standard video conforming to the effect corresponding to the target special effect; the standard video is obtained by adding the target special effect to the initial video, and the reference video frame matches the effective range of the target special effect, so the reference video frame accurately presents the effect corresponding to the target special effect. The terminal then takes the effect corresponding to the target special effect as the expected effect, performs special effect adding processing on the initial video to obtain the special effect video to be checked, and extracts the check video frame matching the effective range from the special effect video. When the terminal's special effect function is normal, the extracted check video frame also accurately presents the effect corresponding to the target special effect, so the check video frame can be compared with the reference video frame to obtain a first comparison result, and the special effect check result of the special effect video is obtained from that result. Automatic checking of the special effect adding function is thereby achieved. Compared with the related art, which checks whether the special effect function is normal at the level of the code implementing it, this automatic checking is more efficient, is not constrained by the technical level of code analysts, and is more accurate.

In one embodiment, the obtaining the special effect video to be checked by performing special effect adding processing on the initial video with the effect of the target special effect as an expected effect comprises: and detecting the video content of the initial video by taking the effect corresponding to the target special effect as an expected effect, and when the trigger content corresponding to the target special effect is detected, performing special effect adding processing on the initial video to obtain the special effect video to be detected.

The trigger content corresponding to the target special effect refers to video content for triggering addition of the target special effect. The trigger content corresponding to the target special effect comprises at least one of a target object and a target action corresponding to the target object. The target object may specifically be an independent living body or object, such as a natural person, an animal, a vehicle, a virtual character, or the like, or may be a specific part, such as a face, a hand, or the like. For example, the target special effect is realized by adding glasses to a human face in a video frame containing the human face, and then the human face is a target object. The target action corresponding to the target object refers to an action formed by the movement of a body corresponding to the target object. The target action may be an action performed by an independent living body, such as a five-sense organ action or a limb action of a human body. The target action may also be a movement of an object, for example, the target action may be a travel of a vehicle. It is understood that the target objects of different target special effects may be the same or different.

Specifically, the terminal detects the video frames of the initial video frame by taking the effect corresponding to the target special effect as an expected effect to detect the video content of the initial video, and performs special effect adding processing on the initial video when the trigger content corresponding to the target special effect is detected to obtain the special effect video to be detected. In the detection process, the terminal can input the video frames in the initial video frame by frame into the detection model corresponding to the trigger content corresponding to the target special effect to obtain the detection result. The detection model refers to a machine learning model capable of detecting video frames. It will be appreciated that the detection model is not the same for different trigger contents. For example, when the trigger content is a target object, the detection model may be an object detection model; and when the trigger content is the target action, the detection model is the target action detection model.

In a specific embodiment, taking a human face as the target object, a general machine learning model usable for face recognition can be obtained as the detection model and applied frame by frame to the video frames of the initial video; alternatively, face images with annotated face regions can be obtained as training samples for supervised machine learning, yielding a detection model for face recognition. During training, the network parameters of the detection model can be adjusted with stochastic gradient descent, the Adagrad (Adaptive Gradient) algorithm, Adadelta (an improvement on Adagrad), RMSprop (another improvement on Adagrad), the Adam (Adaptive Moment Estimation) algorithm, and the like.

In the above embodiment, the terminal detects the video content of the initial video with the effect corresponding to the target special effect as the expected effect, and performs special effect adding processing on the initial video when the trigger content corresponding to the target special effect is detected, obtaining the special effect video to be checked. Since the target special effect should be triggered by the trigger content when the terminal's video special effect adding function is normal, the effective range of the target special effect in the standard video can be determined from the occurrence time of the trigger content.

In one embodiment, the trigger content corresponding to the target special effect includes a target object, and the step of determining the effective range of the target special effect includes: acquiring the appearance period of the target object; and determining the appearance period of the target object as the effective range of the target special effect.

Specifically, when the trigger content corresponding to the target special effect includes the target object, the target special effect is triggered when the target object appears, and the target special effect disappears after the target object disappears.

In one embodiment, after the terminal acquires the initial video, the terminal may detect video content of the initial video to determine an appearance period of a target object in the initial video, and determine the appearance period of the target object as an effective range of a target special effect. For example, the terminal detects the video content of the initial video, determines that the occurrence period of the target object in the initial video is 3.0 seconds to 3.2 seconds, and may determine that 3.0 seconds to 3.2 seconds are the effective range of the target special effect.

In other embodiments, after the terminal acquires the initial video, the terminal may detect the video content of the initial video to determine the video frame number of the video frame in which the target object appears in the initial video, and determine a video frame number range formed by the video frame number of the video frame in which the target object appears as an effective range of the target special effect. For example, after the terminal detects the video content of the initial video, it is determined that the video frames in which the target object appears in the initial video are the 90 th frame to the 120 th frame, and then it may be determined that the effective range of the target special effect is the 90 th frame to the 120 th frame.

In one embodiment, the trigger content corresponding to the target special effect comprises a target action corresponding to the target object; the determination step of the effective range of the target special effect comprises the following steps: acquiring the occurrence time period of the target action; and determining a preset time after the occurrence time period of the target action as the effective range of the target special effect.

Specifically, when the trigger content corresponding to the target special effect includes a target action of the target object, the target special effect is triggered after the target action is completed and lasts for a preset duration. The terminal can therefore acquire the occurrence period of the target action and determine a preset period after it as the effective range of the target special effect; the length of that period is determined by the preset duration of the target special effect. It can be understood that, to ensure the accuracy of the effective range, the interval between the effective range and the occurrence period of the target action should be as short as possible.

In one embodiment, after the terminal acquires the initial video, the terminal may detect video content of the initial video to determine an occurrence period of a target action in the initial video, and determine a period of time after the occurrence period of the target action as an effective range of the target special effect. For example, the terminal detects the video content of the initial video, and determines that the occurrence period of the target motion in the initial video is 3.0 seconds to 3.2 seconds, and then may determine that 3.3 seconds to 3.4 seconds are the effective range of the target special effect.

In one embodiment, the target action includes a target five sense organ action of the target object, and acquiring the occurrence period of the target action includes: acquiring the occurrence time period of the target facial features action; determining a preset time after the occurrence period of the target action as an effective range of the target special effect comprises the following steps: determining the tail unit time corresponding to the target facial features action according to the appearance time period of the target facial features action; acquiring preset time of a target special effect; and determining a preset time period which takes the next unit time of the last unit time corresponding to the target facial features as the starting time and the time length matched with the preset time length as the effective range of the target special effect.

Here the target object is a human face, and the facial actions of the target object include blinking, opening the mouth, shaking the head, nodding, raising the head, and the like. The target facial action of the target object is the facial action that can trigger the target special effect. For example, if the target special effect is a love-heart animation shown when a blinking motion is detected, the target facial action is blinking. The preset duration of the target special effect is its preset effective duration. The unit time may be set in advance, for example to 0.1 second. The last unit time is the last unit time within the occurrence period of the target facial action: for example, if the occurrence period of the target facial action is 3.0 seconds to 3.2 seconds and the unit time is 0.1 second, the last unit time of the target facial action is 3.2 seconds.

Specifically, after acquiring the initial video, the terminal can detect its video content to determine the occurrence period of the target facial action, take the last unit time within that period as the last unit time corresponding to the target facial action, and acquire the preset duration of the target special effect. It then takes the unit time following that last unit time as the start time of the effective range, determines the end time of the effective range as the sum of the start time and the preset duration, and determines the period from the start time to the end time as the effective range of the target special effect.

For example, assume the target facial action is blinking. After detecting the video content of the initial video, the terminal determines that the occurrence period of blinking is 3.0 seconds to 3.2 seconds, i.e. 3.2 seconds is the end time of the blinking motion. Assuming the unit time is 0.1 second and the preset duration of the target special effect is 0.2 seconds, the effective range of the target special effect is determined to be 3.3 seconds to 3.5 seconds.
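The calculation in the example above can be sketched in one small function (the function name and rounding are illustrative assumptions):

```python
def effect_range_after_action(action_end_s, unit_s, effect_duration_s):
    """Effective time range of an action-triggered effect: it starts one
    unit time after the action's end time and lasts the preset duration.
    Rounding guards against floating-point drift in the sums."""
    start = round(action_end_s + unit_s, 6)
    end = round(start + effect_duration_s, 6)
    return start, end

# Blink ends at 3.2 s, unit time 0.1 s, preset duration 0.2 s.
print(effect_range_after_action(3.2, 0.1, 0.2))  # (3.3, 3.5)
```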

In the above embodiment, the terminal acquires the occurrence period of the target facial action, determines from it the last unit time corresponding to the action, acquires the preset duration of the target special effect, and determines as the effective range a preset period whose start time is the unit time following that last unit time and whose length matches the preset duration. Because the effective range starts at the unit time immediately following the target facial action, the resulting effective range is more accurate.

In one embodiment, the trigger content corresponding to the target special effect includes a target limb action of the target object, and the step of determining the effective range of the target special effect includes: acquiring the occurrence period of the target limb action and the video frame rate of the initial video; determining the frame number of the last video frame of the target limb action according to the occurrence period of the target limb action and the video frame rate of the initial video; determining the frame number of the next video frame from the frame number of that last video frame to obtain the start frame number of the target special effect; acquiring the preset duration of the target special effect and determining the number of video frames of the target special effect according to the preset duration and the video frame rate of the initial video; and determining the video frame number range of the target special effect from its start frame number and its number of video frames, taking that range as the effective range of the target special effect.

A limb action of the target object is a limb movement of a person, such as raising a hand, making a fist, or kicking a leg. The target limb action is the limb action that can trigger the target special effect; for example, if a target special effect shows twinkling stars when a hand-raising action is detected, raising the hand is the target limb action of that special effect. The video frame rate of the initial video is known and can be obtained directly by the terminal; for example, it may be 30 frames per second. Given the video frame rate of the initial video, the terminal can calculate the video frame number at any time in the initial video with the following formula (1):

f_i = t × f_0        Formula (1)

where f_i is the video frame number, t is the time in the initial video (in seconds), and f_0 is the video frame rate of the initial video.

For example, assuming the video frame rate is 30 frames per second, the video frame number at 2.2 seconds in the initial video is 2.2 × 30 = 66.

It is understood that, referring to formula (1), the number of video frames in any time period of the initial video can also be calculated; in that case, t in formula (1) is the length of the time period. For example, assuming the video frame rate is 30 frames per second, the number of video frames in a 10-second period of the initial video is 10 × 30 = 300.
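Formula (1) and both worked examples can be expressed directly in code (a trivial sketch; the function name is an assumption):

```python
def frame_number(t_seconds, fps):
    """Formula (1): f_i = t * f_0 -- video frame number at time t for a
    video with frame rate f_0 (frames per second).  Rounding guards
    against floating-point error in the product."""
    return int(round(t_seconds * fps))

print(frame_number(2.2, 30))  # 66  (the 2.2-second mark at 30 fps)
print(frame_number(10, 30))   # 300 (frames in a 10-second span)
```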

Specifically, after acquiring the initial video, the terminal can detect its video content to determine the occurrence period of the target limb action and acquire the video frame rate of the initial video. From the occurrence period it determines the end time of the target limb action; the video frame at that time is the last video frame of the action. The terminal then calculates the frame number of that video frame with formula (1) to obtain the frame number of the last video frame of the target limb action, adds 1 to it to obtain the frame number of the next video frame, and uses that as the start frame number of the target special effect. The start frame number here is the frame number of the first video frame corresponding to the target special effect.

After the starting frame number of the target special effect is obtained, the terminal can obtain the preset time length of the target special effect, and calculate the number of video frames corresponding to the preset time length by referring to the formula (1) to obtain the number of video frames of the target special effect.

After the starting frame number of the target special effect and the number of the video frames of the target special effect are obtained, the terminal can determine the end frame number of the target special effect, determine the video frame number range of the target special effect according to the starting frame number and the end frame number, and take the video frame number range as the effective range of the target special effect. The last frame number of the target special effect refers to the frame number of the last video frame corresponding to the target special effect.

For example, assume the occurrence period of the target limb action is 2.0 seconds to 2.2 seconds and the video frame rate of the initial video is 30 frames per second. The frame number of the last video frame of the target limb action is then determined to be 2.2 × 30 = 66, and the frame number of the next video frame, 67, is the start frame number of the target special effect. Assuming the preset duration of the target special effect is 0.2 seconds, the number of video frames of the target special effect is determined from the preset duration and the video frame rate to be 6 frames, and the video frame number range of the target special effect is finally determined from its start frame number and number of video frames to be frames 67 to 72.
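The frame-range computation above can be sketched as follows (illustrative names; the 2.2 s end time is chosen so that 2.2 × 30 = 66 matches the worked example of formula (1)):

```python
def effect_frame_range(action_end_s, fps, effect_duration_s):
    """Video frame-number range of the effect: it starts on the frame
    after the action's last frame and spans the preset duration."""
    end_frame = int(round(action_end_s * fps))      # last frame of the action
    start_frame = end_frame + 1                     # effect's start frame
    n_frames = int(round(effect_duration_s * fps))  # frames the effect lasts
    return start_frame, start_frame + n_frames - 1

# Action ends at 2.2 s, 30 fps, effect lasts 0.2 s -> frames 67 to 72.
print(effect_frame_range(2.2, 30, 0.2))  # (67, 72)
```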

In the above embodiment, the terminal determines the frame number of the last video frame of the target limb action from the action's occurrence period and the video frame rate of the initial video, takes the next video frame after the action is completed as the first video frame of the target special effect to obtain its start frame number, and determines the number of video frames of the target special effect from its preset duration and the video frame rate of the initial video, so the video frame number range of the target special effect can be determined and used as its effective range. Because the effective range starts at the video frame immediately following the completion of the target limb action, and is expressed in video frame numbers, the resulting effective range is more accurate.

In one embodiment, the trigger content corresponding to the target special effect comprises a target action corresponding to the target object; the generating step of the initial video comprises the following steps: acquiring a basic video frame containing a target object; acquiring a motion video frame corresponding to a target motion; in the action video frame, the target object executes a target action; and generating an initial video according to the basic video frame and the action video frame.

The base video frame is a video frame containing the target object. In the action video frame, the target object executes the target action; that is, the action video frame is a video frame whose content is the target object performing the target action.

In this embodiment, the trigger content corresponding to the target special effect includes a target action of the target object, so the video content of the generated initial video needs to include the target action. The terminal may therefore obtain a base video frame and generate multiple copies of it, obtain an action video frame corresponding to the target action and generate multiple copies of it, combine several consecutive action video frames so that they form the target action, and combine those with the remaining base frames to generate the initial video.
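The frame-list assembly described above can be sketched as follows; frame contents are placeholder strings, and all names and parameters are illustrative assumptions:

```python
def build_initial_frames(base_frame, action_frame, total_frames,
                         action_start_idx, action_len):
    """Assemble the initial video's frame list: copies of the base frame
    everywhere, with one consecutive run of action frames forming the
    target action."""
    frames = [base_frame] * total_frames
    for i in range(action_start_idx, action_start_idx + action_len):
        frames[i] = action_frame
    return frames

# A 10-frame video whose target action occupies frames 4-6.
frames = build_initial_frames("base", "action", total_frames=10,
                              action_start_idx=4, action_len=3)
print(frames)
```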

In the above embodiment, the terminal acquires a basic video frame containing the target object and an action video frame corresponding to the target action, and generates the initial video from them, so that the initial video is generated automatically and its generation efficiency is improved.

In one embodiment, as shown in FIG. 3, generating the initial video from the base video frame and the action video frame comprises:

Step 302, obtaining the video frame rate and the preset duration of the initial video, and determining the total frame number of the initial video according to the video frame rate and the preset duration.

Specifically, after the terminal acquires the video frame rate and the preset duration of the initial video, the number of video frames of the initial video, that is, the total number of frames of the initial video, may be calculated by referring to the above formula (1).

Step 304, obtaining the appearance time period of the target action, and determining the first target frame number of the action video frame according to the appearance time period of the target action and the video frame rate.

Specifically, after acquiring the occurrence period of the target action, the terminal may determine the times of the starting and ending video frames of the target action, and then determine their frame numbers by referring to the above formula (1), so as to obtain the number of action video frames required to compose the target action, i.e., the first target frame number.
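Steps 302 and 304 can be sketched together as follows. This is an illustrative Python sketch under the assumption that formula (1) is frame number = time × frame rate; the function names are introduced here and do not appear in the patent:

```python
def total_frames(frame_rate, duration):
    # Step 302 / formula (1): total frame number = frame rate * duration.
    return int(frame_rate * duration)

def action_frame_range(start_time, end_time, frame_rate):
    # Step 304: frame numbers of the starting and ending video frames of the
    # target action, plus the first target frame number (the number of frames
    # needed to compose the action).
    start_frame = int(start_time * frame_rate)
    end_frame = int(end_time * frame_rate)
    return start_frame, end_frame, end_frame - start_frame + 1

print(total_frames(30, 10))              # 10 s at 30 fps
print(action_frame_range(3.0, 3.2, 30))  # action occupies frames 90-96
```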

Step 306, obtaining the difference between the total frame number of the initial video and the first target frame number of the motion video frame, and obtaining the second target frame number of the basic video frame.

Specifically, the video frames in the initial video other than the motion video frames are basic video frames, so the terminal may take the difference between the total frame number of the initial video and the first target frame number of the motion video frames to obtain the second target frame number of the basic video frames.

Step 308, generating video frames of the first target frame number from the motion video frame as the video frames within the occurrence period of the target motion, and generating video frames of the second target frame number from the basic video frame as the video frames outside that period, so as to generate the initial video.

Specifically, the terminal may generate, from the motion video frame, video frames equal in number to the first target frame number and identical in content to the motion video frame, and use them as the video frames within the occurrence period of the target motion. It may likewise generate, from the basic video frame, video frames equal in number to the second target frame number and identical in content to the basic video frame, and use them as the video frames outside the occurrence period. Combining the motion video frames of the first target frame number with the basic video frames of the second target frame number yields the initial video, in which the target motion occurs during its occurrence period.

In the above embodiment, the initial video is generated by using the motion video frame as the video frame within the appearance period of the target motion and using the base video frame as the video frame outside the appearance period of the target motion, so that any target motion can be generated at any time, and the degree of freedom is high.
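Steps 302 to 308 above can be combined into one sketch. Frames are represented by plain labels here, and the helper function is an assumption introduced purely for illustration:

```python
def build_initial_video(base_frame, action_frame, frame_rate, duration,
                        action_start, action_end):
    # Repeat the action frame inside the occurrence period of the target
    # action and the base frame outside it (steps 302-308).
    total = int(frame_rate * duration)
    start = int(action_start * frame_rate)
    end = int(action_end * frame_rate)
    return [action_frame if start <= n <= end else base_frame
            for n in range(1, total + 1)]

video = build_initial_video("base", "action", 30, 10, 3.0, 3.2)
print(len(video), video.count("action"))  # 300 frames, 7 of them action frames
```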

In one embodiment, as shown in FIG. 4, there is provided a method for checking a video special effect, including the following steps:

step 402, acquiring a reference video frame extracted from a standard video conforming to a corresponding effect of a target special effect; the standard video is obtained by adding a target special effect to the initial video; the reference video frame is matched with the effective range of the target special effect in the standard video.

And step 404, taking the effect corresponding to the target special effect as an expected effect, and performing special effect adding processing on the initial video to obtain a special effect video to be checked.

And 406, extracting the checking video frames matched with the effective range in the special effect video.

Step 408, comparing the inspection video frame with the reference video frame to obtain a first comparison result.

The first comparison result obtained in steps 402 to 408 can verify whether the effect of the target special effect in the special effect video to be checked is normal, but it cannot guarantee that the adding time of the target special effect is normal: the terminal may have added the target special effect before the effective range of the target special effect, in which case the special effect adding function of the terminal is also abnormal. To cover this situation, the terminal further performs the following steps 410 to 414 to obtain a second comparison result, and the adding time of the special effect in the special effect video is verified according to the second comparison result.

Step 410, acquiring a comparison video frame extracted from the standard video; the comparison video frame matches the comparison range of the target special effect in the standard video; the comparison range of the target special effect is before the occurrence range of the trigger content in the initial video.

The comparison range of the target special effect may be a comparison time range or a range of comparison video frame sequence numbers. For example, the comparison range of a certain target special effect may be 2.7 seconds to 2.9 seconds; as another example, it may be from the 85th frame to the 90th frame of the video. Correspondingly, the comparison video frame matching the comparison range of the target special effect means that the time of the comparison video frame is within the comparison time range of the target special effect, or that its video frame number is within the comparison frame number range of the target special effect. In the above example, when the comparison range of the target special effect is 2.7 seconds to 2.9 seconds, the time of the comparison video frame needs to be within that period; when the comparison range is from the 85th frame to the 90th frame, the frame number of the comparison video frame needs to be between the 85th and 90th frames.

The trigger content comprises at least one of a target object and a target action corresponding to the target object. In one embodiment, the occurrence range of the trigger content may be the time range or video frame number range in which the target object exists; in other embodiments, it may be the time range or video frame number range in which the target action occurs.

The comparison range of the target special effect is before the occurrence range of the trigger content in the initial video. In one embodiment, the comparison range may be a period of time before the occurrence time range of the trigger content; for example, if the occurrence time range of the trigger content is 3.0 seconds to 3.2 seconds, the comparison range may be 2.8 seconds to 2.9 seconds. In another embodiment, the comparison range may be a sequence number range before the video frame sequence number range of the trigger content, meaning that the maximum value of the comparison range is smaller than the minimum value of the trigger content's frame number range; for example, if the video frame sequence number range of the trigger content is the 100th to 120th frames, the comparison range may be the 90th to 95th frames.

Since the comparison video frame matches the comparison range of the target special effect in the standard video, and the comparison range is before the occurrence range of the trigger content in the initial video, the comparison video frame is a video frame preceding the appearance of the target special effect in the standard video and has no target special effect added to it. Using the comparison video frame as a basis, it can be checked whether the terminal added the target special effect at a time when it should not yet be added, thereby ensuring the accuracy of the adding time of the target special effect.

Step 412, extracting the inspection video frames matching the comparison range in the special effect video.

Specifically, when the special effect adding function is normal, the special effect video to be checked, obtained by the terminal performing special effect processing on the initial video, exhibits the target special effect within its effective range and does not exhibit it within the comparison range. The terminal can therefore extract video frames matching the comparison range of the target special effect from the special effect video to be checked as inspection video frames, and judge whether the special effect adding function of the terminal is normal by judging whether these inspection video frames have the target special effect added.

In one embodiment, the terminal may first determine the comparison time range of the target special effect and extract, from the special effect video to be checked, a video frame whose time falls within that range as the inspection video frame matching the comparison range. For example, if the comparison time range of the target special effect is 2.6 seconds to 2.7 seconds, any frame within that period of the special effect video to be checked can be selected as the inspection video frame. It can be understood that, in a specific implementation, to ensure the accuracy of the subsequent comparison, the terminal may obtain the exact time of the comparison video frame and obtain, from the special effect video to be checked, the video frame with the same exact time as the inspection video frame.

In another embodiment, the terminal may first determine the comparison frame number range of the target special effect and extract, from the special effect video to be checked, a video frame whose sequence number falls within that range as the inspection video frame matching the comparison range. For example, if the comparison range of the target special effect is the 80th to 85th frames, any frame among the 80th to 85th frames of the special effect video to be checked may be used as the inspection video frame. It can be understood that, in a specific implementation, to ensure the accuracy of the subsequent comparison, the terminal may obtain the exact sequence number of the comparison video frame and obtain, from the special effect video to be checked, the video frame with the same sequence number as the inspection video frame.

And step 414, comparing the inspection video frame matched with the comparison range with the comparison video frame to obtain a second comparison result.

And step 416, obtaining a special effect test result of the special effect video according to the first comparison result and the second comparison result.

Specifically, since the standard video and the special effect video to be checked are obtained by adding the same special effect to the same initial video, and the special effect addition in the standard video conforms to the corresponding effect of the target special effect, then when the special effect adding function of the terminal is normal, the inspection video frame extracted from the comparison range of the special effect video to be checked, like the comparison video frame extracted from the comparison range of the standard video, has no special effect added. The terminal can therefore compare the inspection video frame with the comparison video frame, and obtain the special effect check result of the special effect video to be checked according to the obtained comparison result.

In one embodiment, when comparing the inspection video frame with the reference video frame, the terminal may compare the similarity between them; in other embodiments, the terminal may instead compare the difference between them.

Finally, the terminal determines the special effect check result of the special effect video by combining the first comparison result and the second comparison result, which ensures both that the special effect added by the terminal's special effect adding function conforms to the corresponding effect of the target special effect and that the adding time conforms to the adding time corresponding to the target special effect; that is, both the adding effect and the adding time are accurate.

In the above embodiment, the terminal acquires the comparison video frame extracted from the standard video, where the comparison video frame matches the comparison range of the target special effect in the standard video and the comparison range is before the occurrence range of the trigger content in the initial video. It further extracts the inspection video frame matching the comparison range from the special effect video, compares it with the comparison video frame to obtain the second comparison result, and finally obtains the special effect check result of the special effect video by combining the first and second comparison results. In this way, the effect accuracy and the time accuracy of special effect addition can be checked at the same time, further improving the accuracy of the special effect check.

In one embodiment, the initial video comprises trigger content corresponding to the target special effect; the trigger content corresponding to the target special effect comprises a target action corresponding to the target object; before acquiring the reference video frame extracted from the standard video, the method further comprises: acquiring the occurrence time period of the target action; and determining a preset time period before the occurrence time period of the target action as a comparison range of the target special effect.

In this embodiment, the initial video includes trigger content corresponding to the target special effect, and the trigger content includes a target action corresponding to the target object, so the target special effect in the standard video takes effect after the occurrence period of the target action, and video frames before that period do not exhibit the target special effect. Accordingly, when the special effect adding function is normal, the terminal should not exhibit the target special effect in the time period in which the standard video does not exhibit it. The terminal may therefore obtain the occurrence period of the target action and determine a preset time period before it as the comparison range of the target special effect.

It is to be understood that, among the video frames before the occurrence period of the target action, the greater the time interval between a video frame and the occurrence period, the less likely it is that the target special effect was erroneously added to that frame; therefore, in determining the comparison range, a preset time period as close as possible to the occurrence period of the target action may be selected. In a specific embodiment, the start time of the occurrence period of the target action may be obtained, the time immediately preceding the start time determined as the end time of the comparison range, and a preset time period ending at that end time determined as the comparison range. For example, if the occurrence period of the target action is 3.0 seconds to 3.2 seconds and the end time of the comparison range is 2.9 seconds, then 2.8 to 2.9 seconds may be determined as the comparison range.

In other embodiments, after acquiring the occurrence period of the target action, the terminal may further calculate the video frame number of the starting video frame of the target action by referring to formula (1) in the above embodiments, determine the number immediately preceding it as the ending frame number of the comparison range of the target special effect, and determine a number range ending with that frame number as the comparison range. For example, if the occurrence period of the target action is 3.0 seconds to 3.2 seconds and the video frame rate is 30 frames/second, the frame number of the starting video frame of the target action is 3 × 30 = 90, so the ending frame number of the comparison range is 89, and the 86th to 89th frames may be determined as the comparison range of the target special effect.

In the above embodiment, the terminal determines a preset time period before the occurrence time period of the target action as the comparison range of the target special effect by acquiring the occurrence time period of the target action, so that the comparison range of the target special effect can be accurately and quickly determined.
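The frame-number variant of the comparison range determination can be sketched as follows. This is illustrative, not from the patent; the fixed range length of four frames follows the 86th-to-89th-frame example above:

```python
def comparison_range(action_start_time, frame_rate, range_length=4):
    # Formula (1): frame number of the starting video frame of the action.
    start_action_frame = int(action_start_time * frame_rate)
    end_frame = start_action_frame - 1       # frame just before the action begins
    return end_frame - range_length + 1, end_frame

print(comparison_range(3.0, 30))  # (86, 89), as in the example above
```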

In one embodiment, comparing the inspection video frame and the reference video frame to obtain the first comparison result comprises: calculating the similarity between the test video frame and the reference video frame; taking the calculated similarity as a comparison result of the inspection video frame and the reference video frame to obtain a first comparison result; obtaining a special effect test result of the special effect video according to the first comparison result comprises the following steps: and determining a special effect test result of the special effect video according to the size relation between the similarity and a preset similarity threshold.

The similarity is used to characterize the degree of similarity between the inspection video frame and the reference video frame. The greater the similarity, the more alike the two frames are, and the more likely it is that the special effect exhibited by the inspection video frame is the same as that exhibited by the reference video frame.

Specifically, when the terminal compares the inspection video frame with the reference video frame, the similarity between the inspection video frame and the reference video frame can be calculated, the calculated similarity is used as a comparison result of the inspection video frame and the reference video frame, then the size relationship between the obtained similarity and a preset similarity threshold is compared, and the special effect inspection result of the special effect video is determined according to the size relationship between the similarity and the preset similarity threshold. The preset similarity threshold value can be preset according to experience.

In one embodiment, when the similarity between the test video frame and the reference video frame exceeds a preset similarity threshold, it may be determined that the special effect test result of the special effect video is that the special effect addition is normal; on the contrary, when the similarity between the test video frame and the reference video frame does not exceed the preset similarity threshold, the special effect test result of the special effect video can be determined to be abnormal for special effect addition.

In another embodiment, considering that normal special effect addition includes both a normal adding effect and a normal adding time, the terminal may further obtain a comparison video frame extracted from the standard video, where the comparison video frame matches the comparison range of the target special effect in the standard video and the comparison range is before the occurrence range of the trigger content in the initial video. The terminal then extracts the inspection video frame matching the comparison range from the special effect video, calculates the similarity between that inspection video frame and the comparison video frame, and determines the check result of the special effect video according to the relationship between this similarity and the preset similarity threshold, together with the relationship between the similarity of the inspection video frame matching the effective range to the reference video frame and the preset similarity threshold.

In a specific embodiment, when both the similarity between the inspection video frame matching the comparison range and the comparison video frame and the similarity between the inspection video frame matching the effective range and the reference video frame exceed the preset similarity threshold, it is determined that the special effect addition of the special effect video is normal; conversely, when either of the two similarities does not exceed the preset similarity threshold, it is determined that the special effect addition of the special effect video is abnormal.

In one embodiment, calculating the similarity between the verification video frame and the reference video frame comprises: acquiring a target initial frame corresponding to a target special effect; updating the pixel value of the inspection video frame through the pixel difference value of the inspection video frame and the target initial frame; according to the size relation between the updated pixel value of the inspection video frame and a preset pixel difference threshold value, eliminating the content similar to the target initial frame in the inspection video frame to obtain a difference value image corresponding to the inspection video frame; updating the pixel value of the reference video frame through the pixel difference value of the reference video frame and the target initial frame; according to the size relation between the updated pixel value of the reference video frame and a preset pixel difference threshold value, eliminating the content similar to the target initial frame in the reference video frame to obtain a difference value image corresponding to the reference video frame; and determining the similarity between the checking video frame and the reference video frame based on the similarity between the difference image corresponding to the checking video frame and the difference image corresponding to the reference video frame.

The target initial frame corresponding to the target special effect refers to a video frame whose content, apart from the target special effect, has a similarity with the frame to which the target special effect is added that exceeds a preset threshold. In a specific embodiment, the target initial frame refers to a video frame whose content is exactly the same as that of the video frame to which the target special effect is added, except for the target special effect itself.

Specifically, the terminal may obtain the target initial frame corresponding to the target special effect from the video frames of the initial video, calculate the pixel differences between the inspection video frame matching the effective range and the target initial frame, and update the pixel values of the inspection video frame with the calculated differences. The updated pixel values, being derived from the pixel differences, reflect how much the inspection video frame differs from the target initial frame, so they can be compared with a preset pixel difference threshold: when an updated pixel value exceeds the threshold, the pixel differs greatly from the target initial frame and may be retained; conversely, when an updated pixel value does not exceed the threshold, the pixel is relatively similar to the corresponding pixel of the target initial frame and may be eliminated. In this way, the content of the inspection video frame that is similar to the target initial frame is eliminated, and the difference map corresponding to the inspection video frame is obtained.

In a specific embodiment, the elimination process may be performed on the content similar to the target initial frame in the inspection video frame by the following formula (2), so as to obtain a difference map of the inspection video frame:

(R, G, B)(x, y) = (|Rv(x, y) − Rt(x, y)|, |Gv(x, y) − Gt(x, y)|, |Bv(x, y) − Bt(x, y)|), when the channel differences exceed the pixel difference threshold L; otherwise (R, G, B)(x, y) = (0, 0, 0)

wherein (R, G, B)(x, y) represents the pixel value at the (x, y)-th coordinate in the difference map; Rv(x, y), Gv(x, y) and Bv(x, y) are the R (red), G (green) and B (blue) pixel values at the (x, y)-th coordinate of the inspection video frame; Rt(x, y), Gt(x, y) and Bt(x, y) are the R, G and B pixel values at the (x, y)-th coordinate of the target initial frame; and L is the pixel difference threshold, which can be set empirically, for example L may be set to 20.

Further, the terminal may calculate the pixel differences between the reference video frame and the target initial frame and update the pixel values of the reference video frame with the calculated differences. The updated pixel values, being derived from the pixel differences between the reference video frame and the target initial frame, reflect how much the two frames differ, so they can be compared with the preset pixel difference threshold: when an updated pixel value exceeds the threshold, the pixel differs greatly from the target initial frame and may be retained; conversely, when an updated pixel value does not exceed the threshold, the pixel is relatively similar to the corresponding pixel of the target initial frame and may be eliminated. In this way, the content of the reference video frame that is similar to the target initial frame is eliminated, and the difference map corresponding to the reference video frame is obtained. It is understood that only the image content with the special effect added is retained in the resulting difference map.

It is understood that, in a specific embodiment, the elimination process may be performed on the content of the reference video frame similar to the target initial frame with reference to the above formula (2), so as to obtain a difference map of the reference video frame.
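A minimal sketch of the difference-map computation follows. Because formula (2) is not fully legible in the text, the retention rule assumed here is that a pixel keeps its per-channel absolute differences when any channel difference exceeds the threshold L, and is zeroed otherwise; frames are nested lists of (R, G, B) tuples, whereas a practical implementation would use NumPy or OpenCV:

```python
L_THRESHOLD = 20  # pixel difference threshold L, set empirically

def difference_map(frame, initial_frame, threshold=L_THRESHOLD):
    """Eliminate content similar to the target initial frame (formula (2) sketch)."""
    diff = []
    for row, row0 in zip(frame, initial_frame):
        out_row = []
        for (r, g, b), (r0, g0, b0) in zip(row, row0):
            dr, dg, db = abs(r - r0), abs(g - g0), abs(b - b0)
            if max(dr, dg, db) > threshold:
                out_row.append((dr, dg, db))  # differs from initial frame: retain
            else:
                out_row.append((0, 0, 0))     # similar content: eliminate
        diff.append(out_row)
    return diff

check_frame = [[(200, 10, 10), (50, 50, 50)]]
initial     = [[( 10, 10, 10), (55, 52, 48)]]
print(difference_map(check_frame, initial))  # [[(190, 0, 0), (0, 0, 0)]]
```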

Further, the terminal may determine the similarity between the verification video frame and the reference video frame based on the similarity between the difference map corresponding to the verification video frame and the difference map corresponding to the reference video frame.

In the above embodiment, since the difference map eliminates the content similar to the target initial frame and only the image after the effect is added is retained, the similarity calculation is performed based on the difference map, and since redundant information is reduced, the accuracy and efficiency of the similarity calculation can be greatly improved.

In one embodiment, determining the similarity between the verification video frame and the reference video frame based on the similarity between the difference map corresponding to the verification video frame and the difference map corresponding to the reference video frame comprises: performing histogram calculation on a difference value image corresponding to the detected video frame to obtain a first gray level histogram; performing histogram calculation on a difference value image corresponding to the reference video frame to obtain a second gray level histogram; calculating a pixel number difference between the first gray level histogram and the second gray level histogram for each gray level value of each color channel, and calculating a similarity between the first gray level histogram and the second gray level histogram based on the calculated pixel number difference; and taking the calculated similarity as the similarity between the test video frame and the reference video frame.

A histogram is a statistical chart in which a series of vertical bars or line segments of varying heights represents a data distribution; the horizontal axis generally represents the data type and the vertical axis the distribution. The gray histogram in the embodiments of the present application represents the gray-level distribution of a digital image by plotting the number of pixels at each gray value in the image, where gray values range from 0 to 255.

Specifically, the terminal performs histogram calculation on the difference map corresponding to the inspection video frame and counts the number of pixels at each gray value in each color channel, obtaining a gray histogram for each color channel. The first gray histogram thus comprises three sub-graphs: a histogram for R representing the gray distribution of the red channel, a histogram for G representing the gray distribution of the green channel, and a histogram for B representing the gray distribution of the blue channel. FIG. 5 shows a histogram of a difference map corresponding to an inspection video frame in one embodiment.

The terminal likewise performs histogram calculation on the difference map corresponding to the reference video frame, counting the number of pixels at each gray value in each color channel of the reference video frame to obtain a gray level histogram of each color channel, so the second gray level histogram also includes three subgraphs, namely a histogram corresponding to R, a histogram corresponding to G, and a histogram corresponding to B.

Further, for each gray value of each color channel, the terminal obtains the corresponding number of pixels in the first gray level histogram and the corresponding number of pixels in the second gray level histogram, and calculates the difference between the two to obtain a pixel number difference. Since the pixel number differences reflect the difference between the first gray level histogram and the second gray level histogram, the terminal can calculate the similarity between the two histograms based on the calculated pixel number differences.

In a specific embodiment, the terminal may calculate the similarity between the first gray level histogram and the second gray level histogram with reference to the following formula (3):

S = (1/N) × Σ (n = 1 to N) [1 − |g_n − s_n| / max(g_n, s_n)]    (3)

where S represents the similarity, g_n represents the nth data of the first gray level histogram, and s_n represents the nth data of the second gray level histogram. Each gray level histogram includes three subgraphs, and each subgraph includes 256 data, so each histogram includes 256 × 3 = 768 data, where each data represents the number of pixels at the current gray value of the current color channel in the current histogram; N therefore takes the value 768.

Further, after the terminal calculates the similarity between the first gray level histogram and the second gray level histogram, this similarity is taken as the similarity between the inspection video frame and the reference video frame.

In the above embodiment, the gray level histograms corresponding to the inspection video frame and the reference video frame are obtained through histogram calculation, and the similarity between the two histograms is then calculated, so whether the two difference maps are similar can be determined quickly and efficiently.
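As a rough sketch of this step, the similarity of formula (3) above can be computed over the flattened 768-bin histograms as follows. This is a non-authoritative sketch; in particular, treating a bin that is empty in both histograms as identical (contributing 1) is an assumption the patent does not spell out.

```python
def histogram_similarity(hist_a, hist_b):
    """Similarity between two flattened gray level histograms, per formula (3).

    hist_a, hist_b: sequences of N pixel counts (here N = 256 * 3 = 768,
    i.e. 256 gray values for each of the R, G, B channels).
    Returns a value in [0, 1]; identical histograms give 1.0.
    """
    assert len(hist_a) == len(hist_b)
    n = len(hist_a)
    total = 0.0
    for g, s in zip(hist_a, hist_b):
        if g == 0 and s == 0:
            total += 1.0          # assumption: bins empty in both count as identical
        else:
            total += 1.0 - abs(g - s) / max(g, s)
    return total / n
```

For identical histograms the result is 1.0, and for histograms with no overlapping occupied bins it is 0.0.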

In a specific embodiment, a method for checking a video special effect is provided, which includes the following steps:

1. The terminal obtains an initial video and a standard video, both of which contain the trigger content of the target special effects. The target special effects include a first target special effect and a second target special effect; the trigger content of the first target special effect is a target object, and the trigger content of the second target special effect is a target action performed by the target object. The initial video and the standard video are generated as follows:

1-1, acquiring a basic video frame containing a target object, and acquiring a motion video frame corresponding to a target motion, wherein the target object executes the target motion in the motion video frame.

For example, taking the limb motion of "open five fingers" as the target motion: fig. 6A shows a schematic diagram of a basic video frame, and fig. 6B shows a schematic diagram of a motion video frame. It can be seen that both frames contain a human body, and in the motion video frame the human body performs the target motion.

1-2. Acquiring the video frame rate and the preset duration of the initial video, and determining the total frame number of the initial video according to the video frame rate and the preset duration of the initial video.

1-3, acquiring the occurrence time period of the target action, and determining the first target frame number of the action video frame according to the occurrence time period of the target action and the video frame rate.

1-4, obtaining the difference value of the total frame number of the initial video and the first target frame number of the action video frame to obtain the second target frame number of the basic video frame.

1-5. Generating, according to the action video frame, the first target frame number of video frames as the video frames within the appearance period of the target action, and generating, according to the basic video frame, the second target frame number of video frames as the video frames outside the appearance period of the target action, thereby generating the initial video.

1-6. Performing special effect adding processing on the initial video to obtain video frames with the target special effects added, and obtaining the standard video after manually confirming that the added target special effects conform to their corresponding effects.
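Steps 1-2 to 1-5 reduce to simple frame arithmetic. The sketch below plans which source frame each frame of the initial video copies; the inclusive frame indexing and the truncation of times to whole frame numbers are assumptions, not taken from the patent.

```python
def plan_initial_video(fps, duration_s, action_start_s, action_end_s):
    """Sketch of steps 1-2 to 1-5: decide, for each frame index of the
    initial video, whether it copies the basic frame or the action frame."""
    total_frames = int(fps * duration_s)                  # step 1-2: total frame number
    action_first = int(action_start_s * fps)              # step 1-3: first action frame
    action_last = int(action_end_s * fps)                 # last action frame (inclusive)
    first_target = action_last - action_first + 1         # step 1-3: action frame count
    second_target = total_frames - first_target           # step 1-4: basic frame count
    plan = ["action" if action_first <= i <= action_last else "base"
            for i in range(total_frames)]                 # step 1-5: frame layout
    return total_frames, first_target, second_target, plan
```

For instance, a 2-second video at 10 fps with an action from 0.5 s to 0.9 s yields 20 total frames, 5 action frames, and 15 basic frames.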

2. The terminal obtains the appearance time period of the target object, and determines the appearance time period of the target object as the effective range of the first target special effect in the standard video.

It can be understood that, in this embodiment, since the initial video is generated from the base video frame and the motion video frame, and both the base video frame and the motion video frame include the target object, the effective range of the first target special effect is the entire video.

3. The terminal obtains the occurrence time period of the target action, and determines a preset period after the occurrence time period of the target action as the effective range of the second target special effect in the standard video.

Wherein the occurrence time period of the target action is known in advance, so the terminal can acquire it directly.

Specifically, after the occurrence time period of the target action is obtained, the last unit time of the target action is determined according to that occurrence time period; the preset duration of the second target special effect is obtained; and a period that starts at the unit time immediately following the last unit time of the target action and whose length matches the preset duration is determined as the effective range of the second target special effect.
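A minimal sketch of this computation, assuming a unit time of 0.1 second (consistent with the application scenario later in this document, where an action ending at 3.2 seconds gives an effective range starting at 3.3 seconds):

```python
def effect_range(action_end_s, effect_duration_s, unit_s=0.1):
    """Effective range of the second target special effect: it starts at the
    unit time immediately after the action ends and lasts for the effect's
    preset duration. The 0.1 s unit time is an assumption."""
    start = round(action_end_s + unit_s, 10)   # next unit time after the action
    end = round(start + effect_duration_s, 10)
    return start, end
```

So a blink ending at 3.2 s with a 2-second effect gives an effective range of 3.3 s to 5.3 s.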

4. And the terminal extracts a video frame matched with the effective range of the second target special effect from the standard video as a reference video frame.

It can be understood that, in this embodiment, since the effective range of the first target special effect is the entire video, the first target special effect is also in effect within the effective range of the second target special effect. A video frame matching the effective range of the second target special effect therefore contains the effects of both the first and the second target special effects, and can serve as the reference video frame for both.

For example, fig. 6C shows a video frame that matches the effective range of the first target special effect but not that of the second: it contains only the first target special effect, namely a "two-row hat" pattern added when a human face is recognized. Fig. 6D shows a video frame matching the effective range of the second target special effect, which is triggered by recognizing the "open five fingers" action and makes a pumpkin appear in the palm of the hand. It can be seen that fig. 6D includes both the effect of the first target special effect (the two-row hat) and the effect of the second target special effect (the pumpkin in the palm).

5. Taking the effects corresponding to the target special effects as the expected effects, the terminal detects the video content of the initial video, and performs special effect adding processing on the initial video when the trigger content corresponding to a target special effect is detected, so as to obtain the special effect video to be inspected.

6. And the terminal extracts the video frame matched with the effective range of the second target special effect from the special effect video to be used as a first inspection video frame.

7. The terminal compares the first inspection video frame with the reference video frame to obtain a first comparison result.

Specifically, the terminal calculates the similarity between the first inspection video frame and the reference video frame as a first comparison result, and the steps are as follows:

and 7-1, acquiring a target initial frame corresponding to the target special effect.

And 7-2, updating the pixel value of the first inspection video frame through the pixel difference value of the first inspection video frame and the target initial frame.

7-3, according to the size relation between the updated pixel value of the first inspection video frame and a preset pixel difference threshold value, eliminating the content similar to the target initial frame in the first inspection video frame to obtain a difference image corresponding to the first inspection video frame.

And 7-4, updating the pixel value of the reference video frame through the pixel difference value of the reference video frame and the target initial frame.

7-5, eliminating the content similar to the target initial frame in the reference video frame according to the size relation between the updated pixel value of the reference video frame and the preset pixel difference threshold value to obtain a difference value image corresponding to the reference video frame;

7-6, performing histogram calculation on the difference value image corresponding to the first inspection video frame to obtain a first gray level histogram.

And 7-7, performing histogram calculation on the difference value image corresponding to the reference video frame to obtain a second gray level histogram.

And 7-8, calculating a pixel number difference value between the first gray level histogram and the second gray level histogram for each gray level value of each color channel, and calculating the similarity between the first gray level histogram and the second gray level histogram based on the calculated pixel number difference value.

7-9. Taking the calculated similarity as the similarity between the first inspection video frame and the reference video frame.
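Steps 7-2 and 7-3 (and, applied to the reference video frame, steps 7-4 and 7-5) can be sketched as below. Representing frames as 2-D lists of RGB tuples and the threshold value of 30 are illustrative assumptions; the patent only speaks of a preset pixel difference threshold.

```python
def difference_map(frame, initial_frame, threshold=30):
    """Sketch of steps 7-2/7-3: subtract the target initial frame from a
    frame pixel-wise, then zero out pixels whose difference does not exceed
    the threshold, leaving only the added special-effect content.

    frame, initial_frame: equally sized 2-D lists of (R, G, B) tuples.
    """
    diff = []
    for row_f, row_i in zip(frame, initial_frame):
        out_row = []
        for (r, g, b), (r0, g0, b0) in zip(row_f, row_i):
            d = (abs(r - r0), abs(g - g0), abs(b - b0))
            # keep a pixel only where it clearly differs from the initial frame
            out_row.append(d if max(d) > threshold else (0, 0, 0))
        diff.append(out_row)
    return diff
```

Pixels identical to the target initial frame are eliminated, so the resulting difference map contains only the added effect.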

8. The terminal acquires the start time of the occurrence time period of the target action, determines the unit time immediately before that start time as an end time, and determines a preset period ending at that end time as the comparison range of the second target special effect in the standard video.

9. And the terminal extracts the video frame matched with the comparison range of the second target special effect from the standard video to obtain a comparison video frame.

10. And the terminal extracts the video frame matched with the comparison range of the second target special effect from the special effect video to obtain a second inspection video frame.

11. And the terminal compares the comparison video frame with the second inspection video frame to obtain a second comparison result.

Specifically, the terminal may calculate the similarity between the comparison video frame and the second inspection video frame as the second comparison result with reference to steps 7-1 to 7-9 described above.

12. When both the first comparison result and the second comparison result exceed the similarity threshold, it is determined that the special effect of the special effect video is added normally; conversely, when at least one of the first comparison result and the second comparison result does not exceed the similarity threshold, it is determined that the special effect addition of the special effect video is abnormal.
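Step 12 is a simple conjunction of the two threshold checks; the threshold value 0.9 below is a hypothetical placeholder for the preset similarity threshold:

```python
def effect_verdict(first_similarity, second_similarity, threshold=0.9):
    """Sketch of step 12: the special effect is judged to be added normally
    only when both comparison results exceed the similarity threshold.
    The value 0.9 is an assumption; the patent only says "preset"."""
    return first_similarity > threshold and second_similarity > threshold
```

If either comparison falls at or below the threshold, the special effect addition is judged abnormal.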

The present application further provides an application scenario to which the above method for verifying a video special effect is applied. Specifically, the method is applied in this application scenario as follows:

in this application scenario, a video editing APP is installed on the terminal, and the APP can add a plurality of target special effects to a video: one target special effect adds rabbit ears when a human face is recognized, as shown in fig. 7A; another triggers a "love heart" animation effect when a blinking action is detected, as shown in fig. 7B. The terminal performs the following steps at a preset time (for example, nine o'clock) every day to check that the APP can add the target special effects normally:

1. and obtaining an initial video and a standard video through the APP. For the generation of the initial video and the acquisition of the standard video, reference may be made to the description of the above embodiments, which is not repeated herein.

It should be noted that, in this application scenario, the duration of the initial video is known to be 10 seconds and the video frame rate 30 frames per second. The basic video frame used when generating the initial video is shown in fig. 6A, and the motion video frame is shown in fig. 8A. The blink occurs from the 3.0th second to the 3.2nd second of the initial video, that is, the blinking motion is completed at 3.2 seconds, so in the standard video the "love heart" animation effect is triggered from the 3.3rd second.

2. Through the APP, taking the effects corresponding to the target special effects as the expected effects, special effect adding processing is performed on the initial video to obtain the special effect video to be inspected.

3. The frame at the 3.3rd second, i.e., the 99th frame, is extracted from the standard video and from the special effect video respectively, to perform the "comparison after the effect is added".

The specific comparison method is the same for both time points; only the "comparison after the effect is added" is described below as an example.

4. In the standard video and the special effect video respectively, the basic video frame is subtracted from the 99th frame pixel by pixel; a pixel is retained when the difference exceeds a preset threshold, and its value is set to 0 when the difference is below the threshold, so as to obtain a difference map containing only the added effects. As shown in fig. 7C, the difference map contains only the rabbit ears and the love heart.

5. And respectively carrying out histogram calculation on difference value graphs corresponding to 99 th frames in the standard video and the special effect video to obtain respective corresponding histograms. For a specific calculation method, reference is made to the description in the above embodiments, which is not repeated herein.

6. The similarity between the two histograms is calculated; when the similarity is greater than a preset threshold, the special effect in the special effect video is consistent with that in the standard video, that is, the special effect is normal.

7. The frame at the 2.9th second, namely the 87th frame (2.9 × 30), is extracted from the standard video and from the special effect video respectively to perform the "comparison before the effect is added". When the similarity obtained from this comparison is also greater than the preset threshold, the timing of the effect addition is accurate: the APP correctly recognizes the blinking motion and adds the love-heart animation triggered by the blink at the correct moment. This indicates that the special effect adding function of the APP for the target special effects is normal.

It should be understood that although the steps in the flowcharts of figs. 2-4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, these steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in figs. 2-4 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or sub-steps.

In one embodiment, as shown in fig. 8, an apparatus 800 for checking a video special effect is provided, which may be implemented as part of a computer device by software modules, hardware modules, or a combination of the two, and specifically includes:

a reference video frame obtaining module 802, configured to obtain a reference video frame extracted from a standard video that conforms to the effect corresponding to a target special effect; the standard video is obtained by adding the target special effect to an initial video; and the reference video frame matches the effective range of the target special effect in the standard video;

a special effect adding processing module 804, configured to perform special effect adding processing on the initial video with the target special effect corresponding effect as an expected effect, and obtain a special effect video to be inspected;

an inspection video frame extraction module 806, configured to extract an inspection video frame that matches the effective range in the special-effect video;

the comparison module 808 is configured to compare the inspection video frame with the reference video frame to obtain a first comparison result, and obtain a special effect inspection result of the special effect video according to the first comparison result.

The above video special effect inspection apparatus acquires a reference video frame extracted from a standard video conforming to the effect corresponding to the target special effect. Since the standard video is obtained by adding the target special effect to the initial video and the reference video frame matches the effective range of the target special effect, the reference video frame accurately presents the effect corresponding to the target special effect. The terminal further takes the effect corresponding to the target special effect as the expected effect, performs special effect adding processing on the initial video to obtain the special effect video to be inspected, and extracts the inspection video frame matching the effective range from the special effect video. When the special effect function of the terminal is normal, the extracted inspection video frame also accurately presents the effect corresponding to the target special effect, so the inspection video frame can be compared with the reference video frame to obtain a first comparison result, and the special effect inspection result of the special effect video is obtained according to the first comparison result. Automatic inspection of the special effect adding function is thus realized. Compared with inspecting whether the special effect function is normal at the code implementation level of the special effect function as in the related art, this automatic inspection is more efficient, is not constrained by the skill level of code analysts, and is more accurate.

In an embodiment, the special effect adding processing module is further configured to detect video content of the initial video with the target special effect corresponding effect as an expected effect, and perform special effect adding processing on the initial video when the trigger content corresponding to the target special effect is detected, so as to obtain a special effect video to be detected.

In one embodiment, the trigger content corresponding to the target special effect comprises a target object; the above-mentioned device still includes: the effective range determining module is used for acquiring the appearance time period of the target object; and determining the appearance time period of the target object as the effective range of the target special effect.

In one embodiment, the trigger content corresponding to the target special effect comprises a target action corresponding to the target object; the effective range determining module is also used for acquiring the occurrence time period of the target action; and determining a preset time after the occurrence time period of the target action as the effective range of the target special effect.

In one embodiment, the target action includes a target facial action of the target object; the effective range determining module is further configured to acquire the occurrence time period of the target facial action; determine the last unit time of the target facial action according to the occurrence time period of the target facial action; acquire the preset duration of the target special effect; and determine, as the effective range of the target special effect, a period that starts at the unit time immediately following the last unit time of the target facial action and whose length matches the preset duration.

In one embodiment, the trigger content corresponding to the target special effect comprises a target limb action of the target object; the effective range determining module is further configured to acquire the occurrence time period of the target limb action and the video frame rate of the initial video; determine the frame number of the last video frame of the target limb action according to the occurrence time period of the target limb action and the video frame rate of the initial video; determine the frame number of the next video frame according to the frame number of the last video frame of the target limb action to obtain the initial frame number of the target special effect; acquire the preset duration of the target special effect, and determine the number of video frames of the target special effect according to the preset duration and the video frame rate of the initial video; and determine the video frame number range of the target special effect according to the initial frame number of the target special effect and the number of video frames of the target special effect, and take that video frame number range as the effective range of the target special effect.
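A minimal sketch of this frame-number computation for the limb-action case; the rounding of times to frame numbers is an assumption:

```python
def limb_effect_frame_range(action_end_s, fps, effect_duration_s):
    """Derive the effective range of the target special effect as a range
    of frame numbers, following the limb-action embodiment."""
    last_action_frame = int(action_end_s * fps)   # last frame of the limb action
    start_frame = last_action_frame + 1           # initial frame of the effect
    effect_frames = int(effect_duration_s * fps)  # number of effect frames
    return range(start_frame, start_frame + effect_frames)
```

For an action ending at 3.2 s in a 30 fps video with a 2-second effect, this yields frames 97 through 156.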

In one embodiment, the trigger content corresponding to the target special effect comprises a target action corresponding to the target object; the above-mentioned device still includes: the initial video generation module is used for acquiring a basic video frame containing a target object; acquiring a motion video frame corresponding to a target motion; in the action video frame, the target object executes a target action; and generating an initial video according to the basic video frame and the action video frame.

In one embodiment, the initial video generation module is further configured to acquire the video frame rate and the preset duration of the initial video, and determine the total frame number of the initial video according to the video frame rate and the preset duration of the initial video; acquire the occurrence time period of the target action, and determine the first target frame number of the action video frames according to the occurrence time period of the target action and the video frame rate; obtain the difference between the total frame number of the initial video and the first target frame number of the action video frames to obtain the second target frame number of the basic video frames; and generate, according to the action video frame, the first target frame number of video frames as the video frames within the appearance time period of the target action, and generate, according to the basic video frame, the second target frame number of video frames as the video frames outside the appearance time period of the target action, thereby generating the initial video.

In one embodiment, the above apparatus further comprises: a contrast module, configured to acquire a contrast video frame extracted from the standard video, wherein the contrast video frame matches the contrast range of the target special effect in the standard video, and the contrast range of the target special effect is before the appearance range of the trigger content in the initial video; extract an inspection video frame matching the contrast range from the special effect video; and compare the inspection video frame matching the contrast range with the contrast video frame to obtain a second comparison result. The comparison module is further configured to obtain the special effect inspection result of the special effect video according to the first comparison result and the second comparison result.

In one embodiment, the initial video comprises trigger content corresponding to the target special effect; the trigger content corresponding to the target special effect comprises a target action corresponding to the target object; the above-mentioned device still includes: the comparison time determining module is used for acquiring the occurrence time period of the target action; and determining a preset time period before the occurrence time period of the target action as a comparison range of the target special effect.

In one embodiment, the comparison module is further configured to calculate a similarity between the test video frame and the reference video frame; taking the calculated similarity as a comparison result of the inspection video frame and the reference video frame to obtain a first comparison result; obtaining a special effect test result of the special effect video according to the first comparison result comprises the following steps: and determining a special effect test result of the special effect video according to the size relation between the similarity and a preset similarity threshold.

In one embodiment, the comparison module is further configured to obtain a target initial frame corresponding to the target special effect; updating the pixel value of the inspection video frame through the pixel difference value of the inspection video frame and the target initial frame; according to the size relation between the updated pixel value of the inspection video frame and a preset pixel difference threshold value, eliminating the content similar to the target initial frame in the inspection video frame to obtain a difference value image corresponding to the inspection video frame; updating the pixel value of the reference video frame through the pixel difference value of the reference video frame and the target initial frame; according to the size relation between the updated pixel value of the reference video frame and a preset pixel difference threshold value, eliminating the content similar to the target initial frame in the reference video frame to obtain a difference value image corresponding to the reference video frame; and determining the similarity between the checking video frame and the reference video frame based on the similarity between the difference image corresponding to the checking video frame and the difference image corresponding to the reference video frame.

In one embodiment, the comparison module is further configured to perform histogram calculation on the difference map corresponding to the inspection video frame to obtain a first gray level histogram; perform histogram calculation on the difference map corresponding to the reference video frame to obtain a second gray level histogram; for each gray value of each color channel, calculate a pixel number difference between the first gray level histogram and the second gray level histogram, and calculate the similarity between the first gray level histogram and the second gray level histogram based on the calculated pixel number differences; and take the calculated similarity as the similarity between the inspection video frame and the reference video frame.

For specific limitations of the video special effect inspection apparatus, reference may be made to the above limitations of the video special effect inspection method, and details thereof are not repeated here. The modules in the video special effect checking device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.

In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of verifying a video effect. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.

Those skilled in the art will appreciate that the structure shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.

In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.

In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.

In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.

It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.

The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
