Video detection method and device and electronic equipment

Document No.: 142708  Publication date: 2021-10-22

Note: this technology, "Video detection method and device and electronic equipment" (视频检测方法、装置及电子设备), was designed and created by 王磊, 薛子育, 刘庆同, 郭沛宇 and 张乃光 on 2020-04-22. Abstract: The disclosure relates to a video detection method and device and electronic equipment. The method comprises: acquiring identification information of a video and comparing it with information in a preset identification library to obtain a first detection result, wherein the identification information comprises at least one of a video fingerprint and a video watermark; inputting the video into a preset deep forgery identification model to obtain a second detection result; and obtaining a third detection result according to the first detection result and the second detection result, wherein the third detection result indicates whether the video passes review. The method facilitates the identification and judgment of forged audio and video, and helps the industry guard against deepfake content.

1. A video detection method, comprising:

acquiring identification information of the video and comparing the identification information with information in a preset identification library to obtain a first detection result, wherein the identification information comprises at least one of a video fingerprint and a video watermark;

inputting the video into a preset deep forgery identification model to obtain a second detection result;

and obtaining a third detection result according to the first detection result and the second detection result, wherein the third detection result indicates whether the video passes review.

2. The method of claim 1, further comprising, after the obtaining of the third detection result:

in a case where the third detection result indicates that the video has not passed review, sending the video to a preset terminal device so that manual review processing is performed on the video.

3. The method of claim 1, wherein the identification information comprises a video fingerprint;

the acquiring the identification information of the video and comparing the identification information with information in a preset identification library comprises at least one of the following items:

comparing the video fingerprint with a video white list in a preset identification library;

and comparing the video fingerprint with a video blacklist in a preset identification library.

4. The method of claim 1, wherein the identification information comprises a video watermark;

the acquiring the identification information of the video and comparing the identification information with the information in a preset identification library comprises the following steps:

and comparing the video watermark with a publisher white list in the preset identification library.

5. The method of claim 1, wherein the inputting the video into the preset deep forgery identification model comprises at least one of:

acquiring key figures in the video and inputting the key figures into the deep forgery identification model;

and inputting complete scenes of the video into the deep forgery identification model.

6. The method of claim 5, wherein the acquiring key figures in the video comprises:

extracting a face image from the video through face recognition;

and comparing the face image with preset key figure images to obtain the key figures.

7. The method of claim 1, wherein the obtaining a third detection result according to the first detection result and the second detection result comprises:

determining that the third detection result indicates that the video has not passed review, in a case where at least one of the first detection result and the second detection result does not meet its corresponding requirement.

8. The method of claim 2, wherein the manual review processing includes at least one of: marking the video as a deepfake video, taking the video offline, and replacing the video with the original video.

9. A video detection apparatus, comprising:

a first detection module, configured to acquire identification information of the video and compare it with information in a preset identification library to obtain a first detection result, wherein the identification information comprises at least one of a video fingerprint and a video watermark;

a second detection module, configured to input the video into a preset deep forgery identification model to obtain a second detection result;

and a third detection module, configured to obtain a third detection result according to the first detection result and the second detection result, wherein the third detection result indicates whether the video passes review.

10. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the video detection method of any one of claims 1 to 8.

Technical Field

The present disclosure relates to the field of video technologies, and in particular, to a video detection method, a video detection apparatus, and an electronic device.

Background

With the continuous development of deep learning, deepfake technology has gradually matured. Deepfake technology is an artificial intelligence technique that uses deep models to combine and superimpose pictures or videos onto source pictures or videos, learns from large numbers of samples by means of neural networks, and splices together a person's voice, facial expressions and body movements to synthesize false content. The technology can digitally process sound, images or video to imitate a specific person, and as the number of training samples fed to the neural network grows, the generated data and images become increasingly lifelike, until an observer can no longer distinguish real from fake with the naked eye. Deepfake technology has many application scenarios in the broadcast television and network audio-visual industries, including face replacement for stunt doubles and virtual hosts. However, the abuse of deepfake technology also poses great risks to national security: a synthesized false video may disturb the international security order, increasing the risk of war breaking out and the probability of misjudging the international situation. In summary, the broadcast television and network audio-visual industry needs to prevent and respond to the potential risks of deepfake technology and ensure safe broadcasting.

Therefore, it is necessary to provide a new technical solution for performing detection processing on a video.

Disclosure of Invention

An object of the present disclosure is to provide a new technical solution for performing detection processing on a video.

According to a first aspect of the present disclosure, there is provided a video detection method, including:

acquiring identification information of the video and comparing the identification information with information in a preset identification library to obtain a first detection result, wherein the identification information comprises at least one of a video fingerprint and a video watermark;

inputting the video into a preset deep forgery identification model to obtain a second detection result;

and obtaining a third detection result according to the first detection result and the second detection result, wherein the third detection result indicates whether the video passes review.

Optionally, after the obtaining of the third detection result, the method further includes:

in a case where the third detection result indicates that the video has not passed review, sending the video to a preset terminal device so that manual review processing is performed on the video.

Optionally, the identification information comprises a video fingerprint;

the acquiring the identification information of the video and comparing the identification information with information in a preset identification library comprises at least one of the following items:

comparing the video fingerprint with a video white list in a preset identification library;

and comparing the video fingerprint with a video blacklist in a preset identification library.

Optionally, the identification information comprises a video watermark;

the acquiring the identification information of the video and comparing the identification information with the information in a preset identification library comprises the following steps:

and comparing the video watermark with a publisher white list in the preset identification library.

Optionally, the inputting the video into the preset deep forgery identification model includes at least one of:

acquiring key figures in the video and inputting the key figures into the deep forgery identification model;

and inputting complete scenes of the video into the deep forgery identification model.

Optionally, the acquiring key figures in the video includes:

extracting a face image from the video through face recognition;

and comparing the face image with preset key figure images to obtain the key figures.

Optionally, the obtaining a third detection result according to the first detection result and the second detection result includes:

determining that the third detection result indicates that the video has not passed review, in a case where at least one of the first detection result and the second detection result does not meet its corresponding requirement.

Optionally, the manual review processing includes at least one of: marking the video as a deepfake video, taking the video offline, and replacing the video with the original video.

According to a second aspect of the present disclosure, there is provided a video detection apparatus comprising:

a first detection module, configured to acquire identification information of the video and compare it with information in a preset identification library to obtain a first detection result, wherein the identification information comprises at least one of a video fingerprint and a video watermark;

a second detection module, configured to input the video into a preset deep forgery identification model to obtain a second detection result;

and a third detection module, configured to obtain a third detection result according to the first detection result and the second detection result, wherein the third detection result indicates whether the video passes review.

According to a third aspect of the present disclosure, there is provided an electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the video detection method of the first aspect of the present disclosure.

The video detection method in these embodiments combines video identification, deepfake identification and other steps to automatically assess the authenticity and credibility of a video with high detection accuracy; it facilitates the identification and judgment of forged audio and video and helps the industry guard against deepfake content.

Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.

FIG. 1 is a schematic diagram of an electronic device that may be used to implement embodiments of the present disclosure.

Fig. 2 is a flow chart of a video detection method according to an embodiment of the present disclosure.

FIG. 3 is a schematic diagram of one example in accordance with an embodiment of the present disclosure.

Detailed Description

Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.

The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.

Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.

In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.

It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.

< hardware configuration >

Fig. 1 illustrates a hardware configuration of an electronic device that can be used to implement embodiments of the present disclosure.

Referring to fig. 1, an electronic device 1000 includes a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, and an input device 1600. The processor 1100 may be, for example, a central processing unit CPU, a micro control unit MCU, or the like. The memory 1200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 1300 includes, for example, a USB interface, a serial interface, and the like. The communication device 1400 is, for example, a wired network card or a wireless network card. The display device 1500 is, for example, a liquid crystal display panel. The input device 1600 includes, for example, a touch screen, a keyboard, a mouse, a microphone, and the like.

In embodiments of the present description, the memory 1200 of the electronic device 1000 is used to store instructions that control the processor 1100 so as to implement a method according to any embodiment of the present description. Those skilled in the art can design such instructions in accordance with the teachings disclosed herein; how instructions control the operation of a processor is well known in the art and is not described in detail here.

It should be understood by those skilled in the art that although a plurality of devices of the electronic apparatus 1000 are shown in fig. 1, the electronic apparatus 1000 of the embodiments of the present specification may refer to only some of the devices, for example, only the processor 1100, the memory 1200 and the communication device 1400.

The electronic device 1000 shown in fig. 1 is, for example, a server for providing a video detection service.

The hardware configuration shown in fig. 1 is merely illustrative and is in no way intended to limit the present disclosure, its application, or uses.

< method examples >

The present embodiment provides a video detection method, for example, implemented by the electronic device 1000 shown in fig. 1.

As shown in fig. 2, the method includes the following steps S1100-S1300.

In step S1100, identification information of the video is obtained and compared with information in a preset identification library to obtain a first detection result, where the identification information includes at least one of a video fingerprint and a video watermark.

In this embodiment, the identification information of the video includes at least one of a video fingerprint and a video watermark.

A video fingerprint is a unique identifier of a video. In one example, a video fingerprint service generates, from the video content, a string of fingerprint characters that uniquely identifies the current video. The fingerprint is highly stable and is largely unaffected by operations on the video file such as format conversion, editing, cutting and splicing, compression, and rotation. Video fingerprints can be used in a variety of scenarios such as video similarity checking, video copyright protection, and advertisement identification.
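The disclosure does not specify a particular fingerprint algorithm. Purely as an illustrative sketch, a difference-hash style frame fingerprint, which tolerates the kinds of operations mentioned above, can look like the following; the frame is assumed to be a 2-D list of grayscale intensities, and the function name is ours:

```python
def frame_fingerprint(gray, size=8):
    """Difference-hash style fingerprint of one grayscale frame.

    The frame is downsampled to size rows x (size + 1) columns by
    nearest-neighbour sampling; each output bit records whether a
    pixel is brighter than its right-hand neighbour, giving a
    size*size-bit integer (64 bits for size=8).
    """
    h, w = len(gray), len(gray[0])
    small = [[gray[r * h // size][c * w // (size + 1)]
              for c in range(size + 1)]
             for r in range(size)]
    bits = 0
    for row in small:
        for a, b in zip(row, row[1:]):
            bits = (bits << 1) | (1 if a > b else 0)
    return bits
```

In practice a video fingerprint would combine hashes from many frames and be compared by Hamming distance rather than exact equality, so that re-encoded or cropped copies still match.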

A video watermark is a digital watermark embedded in digital video. Exploiting the redundancy and randomness that are ubiquitous in video data, information representing the copyright is embedded into the original video data, so as to protect the copyright or integrity of the digital product and safeguard the legal rights and interests of the copyright owner.

In one example, the identification information is a video fingerprint. In this example, the obtaining of the identification information of the video and comparing the identification information with the information in the preset identification library includes at least one of the following: comparing the video fingerprints with a video white list in a preset identification library; and comparing the video fingerprints with a video blacklist in a preset identification library.

In the above example, the preset video identifier library includes a video white list and/or a video black list. The video white list is a list of videos which are allowed to be played, and the video black list is a list of videos which are forbidden to be played. The video whitelists and video blacklists are derived, for example, from playable segments or no-play segments obtained from various media.

In the above example, if the video fingerprint of the video to be detected matches a fingerprint in the video white list, the video to be detected is determined to be a playable video. If it matches a fingerprint in the video blacklist, the video is determined to be a no-play video. If it matches neither a fingerprint in the white list nor one in the blacklist, the video is determined to be a video of unknown content. That is, the first detection result may be one of: playable video, no-play video, and unknown-content video.
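The white-list/blacklist comparison just described is, in effect, a lookup against two sets of known fingerprints. A minimal sketch (the result labels follow the text above; the function name is ours, and exact set membership stands in for the near-duplicate matching a real fingerprint system would use):

```python
def first_detection_by_fingerprint(fingerprint, white_list, black_list):
    """First detection result from a video-fingerprint comparison.

    `white_list` and `black_list` are sets of fingerprints from the
    preset identification library.
    """
    if fingerprint in white_list:
        return "playable"
    if fingerprint in black_list:
        return "no-play"
    return "unknown-content"
```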

In one example, the identification information is a video watermark. In this example, the obtaining and comparing the identification information of the video with the information in the preset identification library includes: and comparing the video watermark with a publisher white list in a preset identification library.

In the above example, the preset video identifier library includes a publisher white list. The publisher white list is a list of publishers allowed to play, such as authenticated "XX net", "XX newspaper", and the like.

In the above example, if the video watermark of the video to be detected matches watermark content in the publisher white list, the video to be detected is determined to be an authenticated-channel video. If it does not match, the video is determined to be an unknown-channel video. That is, the first detection result may be one of: authenticated-channel video and unknown-channel video.
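The watermark branch of the first detection reduces to checking the publisher decoded from the watermark against the publisher white list. A sketch, with the watermark extraction itself assumed to have already happened and the labels taken from the text above:

```python
def first_detection_by_watermark(publisher, publisher_white_list):
    """First detection result from a watermark comparison: the
    publisher identity carried by the watermark is checked against
    the publisher white list of the preset identification library."""
    if publisher in publisher_white_list:
        return "authenticated-channel"
    return "unknown-channel"
```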

In step S1200, the video is input into a preset deep forgery identification model to obtain a second detection result.

The deep forgery identification model is used to identify whether a video has undergone deepfake processing. In one example, the model is obtained by machine learning as follows. First, training samples are obtained, each comprising a video and a label indicating whether that video is a deepfake. Second, an initial model, such as a neural network model, is established. Finally, the initial model is trained on a large number of training samples until its parameters converge, yielding the deep forgery identification model.
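The disclosure trains a neural network; purely to illustrate the sample/label/train-until-convergence shape of the procedure, a toy perceptron stands in for it below. The feature extraction from video is assumed to have already produced numeric vectors, and the label convention (1 = deepfake) is ours:

```python
def train_identification_model(samples, lr=0.1, epochs=200):
    """Toy stand-in for the deep forgery identification model.

    `samples` is a list of (feature_vector, label) pairs, label 1
    meaning 'deepfake'. A perceptron is trained over a fixed epoch
    budget, mimicking 'train until the parameters converge'.
    Returns a predictor: feature_vector -> 0 or 1.
    """
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0

    def predict(x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    for _ in range(epochs):
        for x, y in samples:
            err = y - predict(x)          # perceptron update on mistakes
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return predict
```

On linearly separable toy data this converges to zero training error; the real model would of course be a deep network trained on video frames, not a linear classifier on hand-made features.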

In one example, the inputting the video into the preset deep forgery identification model comprises at least one of the following: acquiring key figures in the video and inputting the key figures into the deep forgery identification model; and inputting complete scenes of the video into the deep forgery identification model.

In one scenario of the above example, the focus is on whether a key figure (e.g., a famous person) in the video has been deepfaked, so the key figure in the video is acquired and input into the deep forgery identification model. Acquiring the key figure in the video means, for example, obtaining a video clip that includes the key figure and marking the video region corresponding to the key figure.

The key figures in the video are acquired, for example, as follows: a face image is extracted from the video by face recognition, and the face image is then compared with preset key figure images to obtain the key figures. The face image extraction can be performed with an existing face recognition model and is not described further here.
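The comparison step can be sketched as follows, assuming the existing face recognition model has already turned both the detected faces and the preset key figure images into embedding vectors; the cosine-similarity measure and the threshold value are our assumptions, not specified by the disclosure:

```python
import math

def find_key_figures(face_embeddings, key_figure_gallery, threshold=0.8):
    """Match detected face embeddings against a preset key-figure
    gallery; returns the names of the key figures found.

    `key_figure_gallery` maps figure name -> reference embedding.
    """
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    found = []
    for emb in face_embeddings:
        for name, ref in key_figure_gallery.items():
            if name not in found and cosine(emb, ref) >= threshold:
                found.append(name)
    return found
```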

In another scenario of the above example, the concern is whether the entire content of the video is a deepfake, so the complete scenes of the video are input into the deep forgery identification model. A complete scene refers to the full picture of each frame in the video.

The deep forgery identification model may comprise a plurality of submodels, and submodels may be replaced, modified, deleted or added. In this case, the second detection result is obtained by performing a logical AND over the detection results of the submodels: for example, the video to be detected is determined not to be a deepfake video only when every submodel determines that it is not a deepfake video.
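The AND-combination over submodels can be sketched in a few lines; each submodel is assumed to be any callable that returns True when it detects deepfake traces (the interface is ours for illustration):

```python
def second_detection(video, submodels):
    """Second detection result combined by logical AND: the video
    counts as genuine only when no submodel flags it as a deepfake.

    `submodels` is an iterable of callables video -> bool, where
    True means 'deepfake detected'."""
    return all(not detects_forgery(video) for detects_forgery in submodels)
```

Because the combination is a plain AND over independent callables, submodels can be added, removed or swapped without touching the combination logic, matching the replace/modify/delete/add flexibility described above.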

In step S1300, a third detection result is obtained according to the first detection result and the second detection result, where the third detection result indicates whether the video passes review.

In one example, obtaining the third detection result according to the first detection result and the second detection result includes: determining that the third detection result indicates that the video has not passed review, in a case where at least one of the first detection result and the second detection result does not meet its corresponding requirement. The requirement on the first detection result is, for example, that the identification information of the video to be detected matches a white list; the requirement on the second detection result is, for example, that the video to be detected is not a deepfake video.

In one example, the first detection result and the second detection result may be judged jointly; the judgment process is as follows.

First, it is judged whether the video fingerprint of the video to be detected matches the video fingerprint blacklist or the video fingerprint white list.

If the video fingerprint matches the blacklist, the third detection result is judged to be that the video has not passed review.

If the video fingerprint matches the white list, it is further judged whether a key figure has been deepfaked. If the video contains no key figure, or the key figure is not a deepfake, the video is judged to have passed review; if the key figure is a deepfake, the third detection result is judged to be that the video has not passed review.

If the video to be detected is a video of unknown content, it is judged whether the video watermark matches the publisher white list. If the watermark matches the publisher white list, it is further judged whether a key figure has been deepfaked: if the video contains no key figure, or the key figure is not a deepfake, the video is judged to have passed review; if the key figure is a deepfake, the third detection result is judged to be that the video has not passed review. If the watermark does not match the publisher white list, the third detection result is judged to be that the video has not passed review.
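The joint judgment can be condensed into a small decision function. The result labels and parameter names are ours; per the comprehensive-judgment rules later in this description, a deepfaked key figure fails automatic review (and is then routed to manual review):

```python
def third_detection(fingerprint_result, watermark_matches_publisher,
                    key_figure_is_deepfake):
    """Third detection result from the joint judgment.

    fingerprint_result: 'whitelist', 'blacklist' or 'unknown'
    watermark_matches_publisher: True if the watermark matches the
        publisher white list (only consulted for unknown content)
    key_figure_is_deepfake: False also covers 'no key figure present'
    """
    if fingerprint_result == "blacklist":
        return "not-passed"
    if fingerprint_result == "unknown" and not watermark_matches_publisher:
        return "not-passed"   # unknown content from an unknown channel
    # white-listed content, or unknown content from a white-listed publisher:
    return "not-passed" if key_figure_is_deepfake else "passed"
```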

The video detection method in these embodiments combines video identification, deepfake identification and other steps to automatically assess the authenticity and credibility of a video with high detection accuracy; it facilitates the identification and judgment of forged audio and video and helps the industry guard against deepfake content.

In one example, after the third detection result is obtained, the method further includes: in a case where the third detection result indicates that the video has not passed review, sending the video to a preset terminal device so that manual review processing is performed on the video.

The outcomes of the manual review processing include, for example, marking, taking offline, and replacement. Marking means adding a mark to the video indicating that it is a deepfake video. Taking offline means removing harmful deepfake content from distribution. Replacement means that, for harmful deepfake content containing rumors, the taken-down content is replaced with content that has not undergone deepfake processing so that playout can continue. In addition, a processing log of the manual review is recorded for audit by the relevant departments.

In the above example, a video that the system automatically judges to have failed review may still be played if the manual review deems it suitable for playout; that is, whether the video can be played is ultimately determined by the manual review result.

Fig. 3 is a schematic diagram of a specific example according to an embodiment of the present method. As shown in fig. 3, the video detection method in this example proceeds as follows. First, a fingerprint is extracted from the video content and compared against the black/white-list fingerprint database; the comparison result is one of: blacklist content, white-list content, unknown content. Next, a watermark is extracted from the video content and analyzed against the watermark information base; the analysis result is one of: official-channel content, unknown content. Then, key figure recognition is performed on the video content using face recognition together with a key figure library, and key figure deepfake identification is performed on the recognized key figures; the judgment result is one of: a key figure is present and tampering is detected, a key figure is present and no tampering is detected, and no key figure is present. After that, general deepfake identification is invoked on the video content; the result is either tampering detected or no tampering detected. The results of fingerprint comparison, watermark extraction, key figure deepfake identification and general deepfake identification are then judged comprehensively. Finally, manual review is performed according to the comprehensive judgment, and a disposal plan is given after the manual review.

In the above example, the rule of the comprehensive decision is as follows:

1. A video judged to be on the blacklist is handled by an offline processing mechanism.

2. For a video judged to be on the white list:

(1) if it contains no key figure, it is directly marked as having passed review;

(2) if it contains a key figure and no tampering is identified, it is marked as having passed review;

(3) if it contains a key figure identified as tampered, it is verified by manual review.

3. For a video of unknown origin, the following cases apply:

(1) the watermark analysis result is an official-channel video:

a) if it contains no key figure, it is directly marked as having passed review;

b) if it contains a key figure and no tampering is identified, it is marked as having passed review;

c) if it contains a key figure identified as tampered, it is verified by manual review.

(2) the watermark analysis result is an unknown-channel video:

a) containing no key figure:

- if identified as a deepfake video, a mark should be applied to the video;

- if identified as not a deepfake video, a judgment is made after manual review.

b) containing key figures:

- if identified as a deepfake video, it is taken offline and the rumor content is replaced;

- if identified as not a deepfake video, a judgment is made after manual review.
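The comprehensive-judgment rules can be condensed into a disposal table. A sketch under our assumptions: the names are illustrative, and `tampered` stands for the relevant deepfake identification result (key figure identification when a key figure is present, general identification otherwise):

```python
def disposal(list_result, official_channel, has_key_figure, tampered):
    """Map the comprehensive judgment to a disposal action.

    list_result: 'blacklist', 'whitelist' or 'unknown'
    official_channel: watermark analysis found an official channel
    """
    if list_result == "blacklist":
        return "take-offline"                      # rule 1
    if list_result == "whitelist" or official_channel:
        if has_key_figure and tampered:
            return "manual-review"                 # rules 2(3) and 3(1)c
        return "passed"                            # rules 2(1)/(2), 3(1)a/b
    # unknown source AND unknown channel: rule 3(2)
    if not tampered:
        return "manual-review"
    return ("mark-as-deepfake" if not has_key_figure
            else "take-offline-and-replace")
```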

< apparatus embodiment >

This embodiment provides a video detection device comprising a first detection module, a second detection module and a third detection module.

The first detection module is configured to acquire identification information of the video and compare it with information in a preset identification library to obtain a first detection result, wherein the identification information comprises at least one of a video fingerprint and a video watermark.

The second detection module is configured to input the video into a preset deep forgery identification model to obtain a second detection result.

The third detection module is configured to obtain a third detection result according to the first detection result and the second detection result, wherein the third detection result indicates whether the video passes review.

In one example, the video detection device further includes a sending module, configured to: in a case where the third detection result indicates that the video has not passed review, send the video to a preset terminal device so that manual review processing is performed on the video.

In one example, the identification information includes a video fingerprint, and the first detection module is to perform at least one of: comparing the video fingerprints with a video white list in a preset identification library; and comparing the video fingerprints with a video blacklist in a preset identification library.

In one example, the identification information includes a video watermark, and the first detection module is configured to: and comparing the video watermark with a publisher white list in a preset identification library.

In one example, the second detection module is configured to perform at least one of the following: obtaining key characters in the video and inputting the key characters into the depth forgery identification model; and inputting the complete scene of the video into the depth forgery identification model.

In one example, the second detection module is configured to: extract face images from the video by a face recognition technique; and compare the face images with preset key character images to obtain the key characters.
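The comparison of extracted faces against preset key characters can be sketched as a similarity match between face embeddings. The sketch below assumes the embeddings come from some face-recognition model (not specified in the source); the threshold, names, and data are illustrative only.

```python
import numpy as np

def match_key_characters(face_embeddings, key_character_embeddings, threshold=0.8):
    """Return names of preset key characters whose reference embedding is
    close (cosine similarity >= threshold) to any face found in the video.

    `face_embeddings` stands in for the output of a face-recognition model
    applied to faces extracted from the video frames.
    """
    matches = []
    for name, ref in key_character_embeddings.items():
        for emb in face_embeddings:
            sim = float(np.dot(emb, ref) /
                        (np.linalg.norm(emb) * np.linalg.norm(ref)))
            if sim >= threshold:
                matches.append(name)
                break  # one matching face is enough for this key character
    return matches

# Illustrative embeddings.
faces = [np.array([0.9, 0.1, 0.4])]
refs = {"person_a": np.array([0.88, 0.12, 0.41]),
        "person_b": np.array([-0.5, 0.9, 0.0])}
print(match_key_characters(faces, refs))  # ['person_a']
```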

In one example, the third detection module is configured to determine that the third detection result indicates the video has not passed the audit when at least one of the first detection result and the second detection result does not meet the relevant requirements.

In one example, the manual review processing includes at least one of the following: marking the video as a deep fake video, taking the video offline, and replacing the video with the original video.

< electronic device embodiment >

This embodiment provides an electronic device, which comprises a processor and a memory, wherein the memory stores machine-executable instructions that can be executed by the processor, and the processor executes the machine-executable instructions to implement the video detection method described in the method embodiments of the present disclosure.

The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.

The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.

The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.

The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.

Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.

Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present disclosure is defined by the appended claims.
