Virus detection method, model training method, device, equipment and storage medium

Document No.: 153386 · Publication date: 2021-10-26

Note: this technology, "Virus detection method, model training method, device, equipment and storage medium", was created by 闫华, 位凯志 and 古亮 on 2021-06-16. Its main content is as follows: the embodiment of the application discloses a virus detection method, a model training method, a device, equipment and a storage medium. The method includes: acquiring a program to be detected; converting the program to be detected to generate at least one group of pictures to be detected; performing virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result; and determining whether the program to be detected is a virus program based on the detection result. In this way, the program to be detected is converted into pictures and detected by the preset model, which strengthens the visual difference between virus programs and normal programs in an editor, so that virus programs can be effectively identified, the accuracy of virus detection is improved, and computer security is effectively guaranteed.

1. A method for detecting a virus, the method comprising:

acquiring a program to be detected;

converting the program to be detected to generate at least one group of pictures to be detected;

performing virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result;

and determining, based on the detection result, whether the program to be detected is a virus program.

2. The method of claim 1, wherein after said acquiring a program to be detected, the method further comprises:

highlighting the key information of the program to be detected to obtain the highlighted program to be detected; wherein the type of the key information at least comprises one of the following types: numbers, keywords, sensitive strings, and common strings.

3. The method according to claim 2, wherein the converting the program to be detected to generate at least one group of pictures to be detected comprises:

cutting the highlighted program to be detected according to at least one preset size to obtain at least one group of program fragments;

converting the at least one group of program fragments to generate at least one group of pictures to be detected; each group of pictures to be detected corresponds to a preset size, and each group of pictures to be detected comprises at least one picture to be detected.

4. The method according to claim 2 or 3, wherein the highlighting of the key information of the program to be detected comprises:

and highlighting the different types of key information according to different colors.

5. The method of claim 1, wherein the preset model comprises at least one sub-preset model;

correspondingly, the performing virus detection on the at least one group of pictures to be detected according to the preset model to obtain a detection result includes:

performing virus detection on a first group of pictures to be detected according to a first sub-preset model to obtain a detection result of each picture to be detected in the first group of pictures to be detected; the first sub-preset model is any one of the at least one sub-preset model, and the first group of pictures to be detected is a group of pictures to be detected corresponding to the first sub-preset model in the at least one group of pictures to be detected.

6. The method according to claim 1, wherein the at least one group of pictures to be detected comprises: small-size pictures, medium-size pictures and large-size pictures; and the preset model comprises a small-picture model, a medium-picture model and a large-picture model;

correspondingly, the performing virus detection on the at least one group of pictures to be detected according to the preset model to obtain a detection result includes:

performing virus detection on the small-size pictures by using the small-picture model to obtain a detection result of each small-size picture;

performing virus detection on the medium-size pictures by using the medium-picture model to obtain a detection result of each medium-size picture;

and performing virus detection on the large-size pictures by using the large-picture model to obtain a detection result of each large-size picture.

7. The method according to claim 5 or 6, wherein the determining whether the program to be detected is a virus program based on the detection result comprises:

if the detection result indicates that one picture to be detected in the at least one group of pictures to be detected is a virus picture, determining that the program to be detected is a virus program;

and if the detection result indicates that all the pictures to be detected in the at least one group of pictures to be detected are normal program pictures, determining that the program to be detected is a normal program.

8. The method of claim 7, further comprising:

determining the clipping position of the program segment corresponding to the virus picture in the program to be detected;

and determining the obfuscation position of the virus in the program to be detected according to the virus picture and the clipping position.

9. A method of model training, the method comprising:

acquiring a sample program set; wherein the sample program set comprises at least one normal program sample and at least one obfuscated program sample;

converting the sample program in the sample program set to generate a sample picture set;

and training an initial model by using the sample picture set to obtain a preset model.

10. The method of claim 9, wherein the at least one obfuscated program sample comprises a first-type obfuscated program sample and a second-type obfuscated program sample; the first-type obfuscated program sample is an obfuscated program extracted from a known virus program, and the second-type obfuscated program sample is an obfuscated program generated by processing a normal program with an obfuscation tool.

11. The method according to claim 9, wherein the converting the sample program in the sample program set to generate a sample picture set comprises:

performing highlight processing on key information of the sample programs in the sample program set to obtain a target sample program set; wherein the type of the key information at least comprises one of the following types: numbers, keywords, sensitive strings and common strings;

and converting the sample programs in the target sample program set to generate the sample picture set.

12. The method of claim 11, wherein highlighting key information of sample programs in the sample program set comprises:

and highlighting the different types of key information according to different colors.

13. The method of claim 11, wherein the training of the initial model using the sample picture set to obtain the preset model comprises:

respectively training the initial model by using at least one group of sample pictures to obtain at least one sub-preset model, and determining the at least one sub-preset model as the preset model; the at least one group of sample pictures are obtained by classifying the sample pictures in the sample picture set according to at least one preset size.

14. The method of claim 13, wherein the at least one preset size comprises a small-picture size, a medium-picture size, and a large-picture size; the method further comprises the following steps:

classifying the sample pictures in the sample picture set according to the small-picture size, the medium-picture size and the large-picture size to obtain a small-size sample picture group, a medium-size sample picture group and a large-size sample picture group;

correspondingly, the training of the initial model by using at least one group of sample pictures to obtain at least one sub-preset model comprises:

training the initial model by using the small-size sample picture group to obtain a small-picture model;

training the initial model by using the medium-size sample picture group to obtain a medium-picture model;

and training the initial model by using the large-size sample picture group to obtain a large-picture model.

15. A virus detection apparatus, comprising: a first acquisition unit, a first conversion unit, a detection unit and a determination unit; wherein:

the first acquisition unit is used for acquiring a program to be detected;

the first conversion unit is used for converting the program to be detected to generate at least one group of pictures to be detected;

the detection unit is used for carrying out virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result;

and the determining unit is used for determining whether the program to be detected is a virus program or not based on the detection result.

16. A model training apparatus, characterized in that the model training apparatus comprises: a second acquisition unit, a second conversion unit and a training unit; wherein:

the second acquiring unit is used for acquiring a sample program set; wherein the sample program set comprises at least one normal program sample and at least one obfuscated program sample;

the second conversion unit is used for converting the sample program in the sample program set to generate a sample picture set;

and the training unit is used for training the initial model by utilizing the sample picture set to obtain a preset model.

17. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the program, performs the steps in the virus detection method of any one of claims 1 to 8 or performs the steps in the model training method of any one of claims 9 to 14.

18. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the virus detection method according to any one of claims 1 to 8 or the steps of the model training method according to any one of claims 9 to 14.

Technical Field

The present application relates to the field of computer security technologies, and in particular, to a virus detection method, a model training method, an apparatus, a device, and a storage medium.

Background

A computer virus is a set of computer instructions or program code that its author inserts into a computer program to destroy computer functions or data, interfere with the use of the computer, and replicate itself. Computer viruses are characterized by, among other things, transmissibility, concealment and destructiveness. A computer virus does not exist on its own; it hides inside other executable programs. Once a virus is present on a computer, it may slow down the machine in mild cases and damage the system in severe cases, causing great losses to users.

Existing virus detection methods in the industry all have various defects; for viruses and their unknown variants, the prior art often produces a large number of false positives and false negatives, making the detection results inaccurate.

Disclosure of Invention

In view of this, the present application provides a virus detection method, a model training method, an apparatus, a device and a storage medium, which can accurately detect viruses and provide effective guarantee for computer security.

The technical solution of the present application is implemented as follows:

in a first aspect, an embodiment of the present application provides a virus detection method, including:

acquiring a program to be detected;

converting the program to be detected to generate at least one group of pictures to be detected;

performing virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result;

and determining whether the program to be detected is a virus program or not based on the detection result.

In this way, converting the program to be detected into pictures to be detected simulates the process by which a human reader distinguishes virus programs from normal programs when reading code, so that mature deep learning algorithms from the field of computer vision can quickly distinguish normal programs from virus programs.

In some embodiments, after the acquiring a procedure to be detected, the method further comprises:

highlighting the key information of the program to be detected to obtain the highlighted program to be detected; wherein the type of the key information at least comprises one of the following types: numbers, keywords, sensitive strings, and common strings.

In this way, highlighting the program to be detected strengthens the visual difference between normal programs and virus programs as they appear in an editor.

In some embodiments, the converting the program to be detected to generate at least one group of pictures to be detected includes:

cutting the highlighted program to be detected according to at least one preset size to obtain at least one group of program fragments;

converting the at least one group of program fragments to generate at least one group of pictures to be detected; each group of pictures to be detected corresponds to a preset size, and each group of pictures to be detected comprises at least one picture to be detected.

In this way, the highlighted program to be detected is clipped according to the preset sizes to generate at least one group of pictures to be detected of the preset sizes; clipping the program to be detected by preset sizes allows detection to remain effective in the special application scenario of detecting low-density obfuscated viruses.

In some embodiments, the highlighting the key information of the program to be detected includes:

and highlighting the different types of key information according to different colors.

In this way, different types of key information are highlighted in different colors, so that the model can rapidly distinguish virus program pictures from normal program pictures, and the model can also converge quickly to produce a detection result.

In some embodiments, the preset model comprises at least one sub-preset model;

correspondingly, the performing virus detection on the at least one group of pictures to be detected according to the preset model to obtain the detection result includes:

performing virus detection on a first group of pictures to be detected according to a first sub-preset model to obtain a detection result of each picture to be detected in the first group of pictures to be detected; the first sub-preset model is any one of the at least one sub-preset model, and the first group of pictures to be detected is a group of pictures to be detected corresponding to the first sub-preset model in the at least one group of pictures to be detected.

In this way, groups of pictures to be detected that are clipped according to different preset sizes are each detected by the corresponding sub-preset model; since each sub-preset model corresponds to one group of pictures to be detected, detection is more targeted and more effective.

In some embodiments, the at least one group of pictures to be detected includes small-size pictures, medium-size pictures and large-size pictures; and the preset model comprises a small-picture model, a medium-picture model and a large-picture model;

correspondingly, the performing virus detection on the at least one group of pictures to be detected according to the preset model to obtain a detection result includes:

performing virus detection on the small-size pictures by using the small-picture model to obtain a detection result of each small-size picture;

performing virus detection on the medium-size pictures by using the medium-picture model to obtain a detection result of each medium-size picture;

and performing virus detection on the large-size pictures by using the large-picture model to obtain a detection result of each large-size picture.

In this way, the pictures to be detected are divided into small-size pictures, medium-size pictures and large-size pictures, which are detected by the corresponding small-picture model, medium-picture model and large-picture model respectively to obtain a detection result for each picture. Detection is thus performed by different, targeted models, and the final result is obtained jointly by multiple models, which avoids the omissions that can occur when a single model performs all detection.

In some embodiments, the determining whether the program to be detected is a virus program based on the detection result includes:

if the detection result indicates that one picture to be detected in the at least one group of pictures to be detected is a virus picture, determining that the program to be detected is a virus program;

and if the detection result indicates that all the pictures to be detected in the at least one group of pictures to be detected are normal program pictures, determining that the program to be detected is a normal program.

In this way, whether the program to be detected is a virus program is determined from the detection results of the pictures to be detected: as long as the detection result indicates that any one picture to be detected is a virus picture, the program to be detected is determined to be a virus program; the program to be detected is determined to be a normal program only when the detection result indicates that all the pictures to be detected are normal program pictures. This reduces the miss rate in virus detection, improves the detection accuracy for obfuscated viruses in low-density obfuscation scenarios, and provides an effective guarantee for computer security.

In some embodiments, the method further comprises:

determining the clipping position of the program segment corresponding to the virus picture in the program to be detected;

and determining the obfuscation position of the virus in the program to be detected according to the virus picture and the clipping position.

In this way, the specific obfuscation position of the virus within the program to be detected can be determined from the virus picture and the clipping position, within the program to be detected, of the program segment corresponding to that virus picture. After a virus program is detected, this provides a reliable basis for subsequent processing of the virus program.

In a second aspect, an embodiment of the present application provides a model training method, where the method includes:

acquiring a sample program set; wherein the sample program set comprises at least one normal program sample and at least one obfuscated program sample;

converting the sample program in the sample program set to generate a sample picture set;

and training an initial model by using the sample picture set to obtain a preset model.

In this way, the sample programs are converted into sample pictures, and the generated sample pictures are used to train the model. Because the programs are converted into actual image entities, the obtained preset model can effectively detect whether a program that has been converted into pictures is a virus program.

In some embodiments, the at least one obfuscated program sample comprises a first-type obfuscated program sample and a second-type obfuscated program sample; the first-type obfuscated program sample is an obfuscated program extracted from a known virus program, and the second-type obfuscated program sample is an obfuscated program generated by processing a normal program with an obfuscation tool.

In this way, the obfuscated program samples include not only obfuscated programs extracted from known virus programs but also obfuscated programs generated by processing normal programs with an obfuscation tool, so the sample coverage is wider and the generalization ability of the model is stronger.

In some embodiments, the transforming a sample program in the sample program set to generate a sample picture set includes:

performing highlight processing on key information of the sample programs in the sample program set to obtain a target sample program set; wherein the type of the key information at least comprises one of the following types: numbers, keywords, sensitive strings and common strings;

and converting the sample program in the target sample program set to generate a sample picture set.

In this way, the visual difference in the editor between the normal program and the virus program is enhanced due to the highlighting of the sample program.

In some embodiments, the highlighting key information for a sample procedure of the sample procedure set comprises:

and highlighting the different types of key information according to different colors.

In this way, different types of key information are highlighted in different colors, so that the preset model obtained by training can quickly distinguish virus program pictures from normal program pictures, and rapid convergence can be achieved during model training.

In some embodiments, the training an initial model by using the sample picture set to obtain a preset model includes:

respectively training an initial model by using the at least one group of sample pictures to obtain at least one sub-preset model, and determining the at least one sub-preset model as the preset model; the at least one group of sample pictures are obtained by classifying the sample pictures in the sample picture set according to at least one preset size.

In this way, the sample pictures used to train the model are obtained by classification according to preset sizes, so that the trained preset model remains effective in the special application scenario of detecting low-density obfuscated viruses.

In some embodiments, the at least one preset size includes a small-picture size, a medium-picture size, and a large-picture size; the method further comprises the following steps:

classifying the sample pictures in the sample picture set according to the small-picture size, the medium-picture size and the large-picture size to obtain a small-size sample picture group, a medium-size sample picture group and a large-size sample picture group;

correspondingly, the training of the initial model by using at least one group of sample pictures to obtain at least one sub-preset model comprises:

training the initial model by using the small-size sample picture group to obtain a small-picture model;

training the initial model by using the medium-size sample picture group to obtain a medium-picture model;

and training the initial model by using the large-size sample picture group to obtain a large-picture model.

In this way, the small-picture size, the medium-picture size and the large-picture size are preferably used to classify the sample pictures in the sample picture set, and a small-picture model, a medium-picture model and a large-picture model are finally obtained through training, so that the three sub-preset models can detect pictures of the three sizes, improving the detection effect of the scheme.

In a third aspect, an embodiment of the present application provides a virus detection apparatus, including: a first acquisition unit, a first conversion unit, a detection unit and a determination unit; wherein:

the first acquisition unit is used for acquiring a program to be detected;

the first conversion unit is used for converting the program to be detected to generate at least one group of pictures to be detected;

the detection unit is used for carrying out virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result;

and the determining unit is used for determining whether the program to be detected is a virus program or not based on the detection result.

In this way, the virus detection apparatus converts the program to be detected into pictures to be detected, simulating the process by which a human reader identifies obfuscated and transformed virus programs and normal programs when reading code, so that the preset model can quickly distinguish normal programs from virus programs.

In a fourth aspect, an embodiment of the present application provides a model training apparatus, including: a second acquisition unit, a second conversion unit and a training unit; wherein:

the second acquiring unit is used for acquiring a sample program set; wherein the sample program set comprises at least one normal program sample and at least one obfuscated program sample;

the second conversion unit is used for converting the sample program in the sample program set to generate a sample picture set;

and the training unit is used for training the initial model by utilizing the sample picture set to obtain a preset model.

In this way, the model training apparatus converts the sample programs into sample pictures and uses the generated sample pictures to train the model. Because the programs are converted into actual image entities, the obtained preset model can effectively detect whether a program that has been converted into pictures is a virus program.

In a fifth aspect, an embodiment of the present application further provides an electronic device, including: a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor when executing the program implements the steps in the virus detection method of any one of the first aspect or implements the steps in the model training method of any one of the second aspect.

In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the virus detection method according to any one of the first aspect, or implements the steps in the model training method according to any one of the second aspect.

The embodiment of the application provides a virus detection method, a model training method, a device, equipment and a storage medium, wherein a program to be detected is obtained; converting the program to be detected to generate at least one group of pictures to be detected; performing virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result; and determining whether the program to be detected is a virus program or not based on the detection result. Therefore, the program to be detected is converted into the picture and is detected through the preset model, the visual difference between the virus program and the normal program in the editor is enhanced, so that whether the program is the virus program can be effectively identified, the virus detection accuracy is improved, and the computer safety is effectively guaranteed.

Drawings

FIG. 1A is a schematic diagram of fully obfuscated code provided in the related art;

FIG. 1B is a schematic diagram of low-density obfuscated code provided in the related art;

FIG. 2 is a schematic view of a flow chart of a virus detection method according to an embodiment of the present application;

FIG. 3 is a schematic view of a flow chart of a virus detection method according to an embodiment of the present application;

FIG. 4 is a schematic diagram of a flow chart of an implementation of a model training method according to an embodiment of the present application;

FIG. 5 is a schematic diagram of a flow chart of an implementation of a model training method according to an embodiment of the present application;

FIG. 6 is a schematic diagram of a flow chart of an implementation of a model training method according to an embodiment of the present application;

FIG. 7 is a schematic view of a flowchart of an implementation of a virus detection method according to an embodiment of the present application;

FIG. 8 is a schematic structural diagram of a virus detection apparatus according to an embodiment of the present application;

FIG. 9 is a schematic diagram of a structure of a model training apparatus according to an embodiment of the present application;

FIG. 10 is a schematic diagram illustrating a structure of a model training apparatus according to an embodiment of the present application;

fig. 11 is a hardware entity diagram of an electronic device according to an embodiment of the present application.

Detailed Description

The technical solution of the present application is further elaborated below with reference to the drawings and the embodiments. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.

In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.

In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description of the present application and have no specific meaning by themselves. Thus, "module", "component" and "unit" may be used interchangeably.

It should be noted that the terms "first/second/third" in the embodiments of the present application are only used to distinguish similar objects and do not imply a specific ordering of the objects. It should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.

Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application are explained, and the terms and expressions referred to in the embodiments of the present application are applicable to the following explanations:

visual Basic for Applications (VBA), which is a scripting language extension based on Visual Basic programming language (VB), is mainly used for extending the functions of Windows Applications, and is commonly used in Microsoft Office software.

Macro virus (Macro Malware) refers to a malicious macro program, or a file embedded with a malicious macro program, that can attack a computer or a network; here, a macro program generally refers to a VBA program used in Microsoft Office software.

Obfuscation (Obfuscation), a technical means of transforming a computer program into a functionally equivalent but difficult to read and understand form, is often used by hackers to bypass antivirus systems.

Low-Density Obfuscation means that the proportion of obfuscated parts in a program is low. Typically, non-obfuscated junk code is added to an obfuscated program to reduce the obfuscation density, resulting in a low-density obfuscated program.

Advanced Persistent Threat (APT) refers to a persistent and effective attack campaign launched by an organized group against a specific target.

Machine Learning refers to computer algorithms that improve automatically through data and experience.

A Neural Network is an algorithmic model that imitates the behavioral characteristics of biological neural networks and performs distributed, parallel information processing.

Deep Learning, a branch of machine learning research, refers to algorithms that model complex data using the nonlinear transformations of multi-layer neural networks.

Convolutional Neural Network (CNN), a deep learning algorithm that involves convolution computations. Representative algorithm implementations include AlexNet, ZFNet, VGGNet, GoogLeNet, and ResNet, among others.

Computer viruses come in many types. Taking macro viruses as an example, macro virus detection is a focus of current anti-virus research and a key technology of anti-virus systems. Since Office files are widely used, the attack surface exposed by the macro extension function is large. Considering the attacker's cost and the probability of a successful attack, macro viruses are an extremely cost-effective attack means and are frequently used.

The difficulty of virus detection lies in obfuscation and transformation. For example, a macro virus is a script program, so the cost of obfuscating and transforming it is low; obfuscated macro viruses are therefore very common, which greatly increases the difficulty of virus detection. Some security vendors can accurately identify macro viruses with obvious obfuscation characteristics by using methods such as machine learning. One important challenge, however, is that if the obfuscation features of a macro virus are weakened or diluted, i.e., the obfuscation density is reduced by adding normal code or non-obfuscated junk code, existing machine learning approaches in the industry produce a large number of false positives and false negatives. As shown in FIG. 1A, the full text is obfuscated code, so it is easy to detect; as shown in FIG. 1B, the obfuscated code is embedded in a large amount of normal code and is not easy to detect.

It can be understood that accurate detection of viruses is of great significance to guarantee computer security, however, existing virus detection schemes in the industry all have shortcomings. For example, there are three common solutions in the industry today:

the first scheme is a scheme based on virus rules. In particular, virus rules are extracted manually or automatically into a virus rule base by a virus analyst or automated tool. The disadvantages of this method are mainly two-fold: firstly, the cost of extracting rules manually is high, and the quality of rules extracted by an automatic tool is low; second, the constant expansion of the virus rule base will severely degrade the performance of the anti-virus system; third, the method is difficult to cope with virus variants and variants that are treated by aliasing techniques and prone to false negatives.

The second scheme is based on a traditional machine learning algorithm. Specifically, feature engineering is performed first, i.e., feature vectors are extracted from a large number of virus files and normal files to abstractly represent the file samples; then, with the feature vectors as training data, a model is trained using a traditional machine learning algorithm; finally, the model predicts whether the file under test is a macro virus file. Thanks to the inherent generalization ability of models produced by machine learning algorithms, this approach has some ability to detect obfuscated macro viruses. The disadvantages of this solution are twofold: first, feature engineering must be implemented manually, i.e., a person must decide which contents and data of a file are processed to generate the feature vectors; second, this solution cannot cope with low-density obfuscation, for which it produces false negatives.

The third scheme is based on deep neural networks from the field of computer vision. Specifically, both virus files and normal files are converted into picture pixel matrices; the pixel matrices of the files in the training set are then used as input to train a model with a mature algorithm from the field of computer vision, and the model is then used to distinguish virus files from normal files. This scheme exploits the properties of neural networks and does not require manual feature engineering, overcoming the drawback of traditional machine learning algorithms that require manual participation in feature engineering. Its disadvantage is as follows: in the process of converting a virus file into a picture pixel matrix, for convenience only the abstract concept of a picture is used and the pixel matrix is generated directly, without producing an actual picture. A pixel matrix generated in this way merely satisfies the format requirements of the neural network algorithm's input; it cannot reflect the visual difference, as seen by a human during code review, between an obfuscated macro virus and normal code. For example, regarding line breaks in code, what a human sees in an actual text editor or integrated development environment is a visual line-break effect, whereas the existing method generally treats the newline character the same as any other character, so the line-break effect is not embodied in the generated pixel matrix. This limitation makes solutions based on deep neural networks in the field of computer vision less effective in practice.

Based on this, the embodiments of the present application provide a virus detection method, and the basic idea of the method is: acquiring a program to be detected; converting the program to be detected to generate at least one group of pictures to be detected; performing virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result; and determining whether the program to be detected is a virus program or not based on the detection result. Therefore, the program to be detected is converted into the picture and is detected through the preset model, the visual difference of the virus program and the normal program in the editor is enhanced, whether the program is the virus program or not can be effectively identified, the virus detection accuracy is improved, and effective guarantee is provided for computer safety.

The embodiment of the present application provides a virus detection method, which is applied to an electronic device; the functions implemented by the method can be implemented by a processor in the electronic device calling program code, and the program code may be stored in a storage medium of the electronic device. Fig. 2 is a schematic flow chart illustrating an implementation of the virus detection method according to the embodiment of the present application. As shown in fig. 2, the method includes:

s101, acquiring a program to be detected;

it should be noted that, the embodiments of the present application provide a method for statically detecting whether a program is a virus program, so that whether a program is a virus program can be accurately detected without executing the program. Here, the program to be detected may be any program that may be infected with a computer virus.

By way of example, the embodiment of the application can detect whether a file with macro function is infected with macro virus or not. A typical example of a file with macro functions is a Microsoft Office file, which generally uses macro language to implement macro functions such as dynamic calculation of a form and design of an interactive window. Of course, Microsoft Office is an exemplary application scenario of the embodiments of the present application, and does not constitute a limitation of the present application. In this embodiment, the program to be detected may be obtained from these files with macro functions, or may be obtained in other manners, and in addition, in a specific example, the program to be detected may be a macro program, but the virus detection method provided in this embodiment of the present application is not limited to detecting a macro program. In practical application, the obtaining mode of the program to be detected and the type of the program to be detected need to be determined in combination with a specific application scenario, which is not specifically limited in the embodiment of the present application.

Step S102, converting the program to be detected to generate at least one group of pictures to be detected;

the embodiment of the application converts the program into the image entity to detect the virus of the program. Therefore, before virus detection, conversion processing is firstly carried out on the program to be detected, the program to be detected is converted into pictures, and at least one group of pictures to be detected is obtained. Illustratively, the transcoding of the program into pictures may be performed by a conversion tool such as a carbon, polacode, codezen, etc.

Step S103, performing virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result;

the preset model is a pre-trained model which can be used for carrying out virus detection on the picture to be detected according to the picture to be detected so as to determine whether the picture to be detected is a virus picture. The preset model may be a model in various forms, such as a Long Short-Term Memory artificial Neural network (LSTM) model, a Bidirectional Long Short-Term Memory Neural network (Bi-LSTM) model, a Convolutional Neural Network (CNN) model, and even a non-Neural network model.

Before virus detection, the preset model needs to be trained on a sample program set. That is, the preset model is obtained by training an initial model with a sample program set; the sample program set comprises at least one normal program sample and at least one obfuscated program sample, where an obfuscated program sample is also a virus program sample. Specifically, the sample programs in the sample program set are converted to generate a sample picture set, and an initial model is trained with the sample picture set to obtain the preset model.

In this way, the preset model is used to detect the at least one group of pictures to be detected generated in step S102, and it is determined whether the pictures in the at least one group of pictures to be detected are virus program pictures or normal program pictures. Because the preset model is obtained by converting the sample program into the picture for training, the visual difference between the virus program and the normal program in the editor is enhanced, and the detection result of the preset model when detecting the virus is more accurate.
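
For concreteness, the following is a minimal sketch of a binary image classifier of the kind that could serve as such a preset model, assuming PyTorch; the layer sizes, the 224x224 input and the two-class output are illustrative assumptions, not values specified by this application.

```python
# Minimal sketch (assumptions: PyTorch, 224x224 RGB inputs, two classes:
# virus picture vs. normal-program picture). Any mature CNN such as ResNet
# could be used instead of this small hand-rolled network.
import torch
import torch.nn as nn


class PresetModel(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                # (N, 64, 1, 1)
        return self.classifier(x.flatten(1))


# A picture to be detected is then classified as "virus" or "normal", e.g.:
# logits = PresetModel()(torch.randn(1, 3, 224, 224)); pred = logits.argmax(dim=1)
```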

And step S104, determining whether the program to be detected is a virus program or not based on the detection result.

Here, virus detection is performed on the at least one group of pictures to be detected by the preset model; after it is determined whether the pictures are virus pictures, whether the program to be detected is a virus program can be determined from the detection result. Specifically, if the detection result indicates that at least one picture in the at least one group of pictures to be detected is a virus program picture, the program to be detected is a virus program; if the detection result indicates that all the pictures to be detected are normal program pictures, the program to be detected is a normal program.

In the embodiment of the application, a program to be detected is obtained; then, converting the program to be detected to generate at least one group of pictures to be detected; performing virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result; and finally, determining whether the program to be detected is a virus program or not based on the detection result. Therefore, the program to be detected is converted into the picture and is detected through the preset model, the visual difference of the virus program and the normal program in the editor is enhanced, and whether the program to be detected is the virus program or not can be effectively identified.

Based on the foregoing embodiments, an embodiment of the present application further provides a virus detection method, and fig. 3 is a schematic flow chart illustrating an implementation of the virus detection method according to the embodiment of the present application. As shown in fig. 3, the method includes:

step S201, acquiring a program to be detected;

it should be noted that the specific implementation process of this step is the same as step S101 in the foregoing embodiment, and specific details are given above, and are not repeated here.

Step S202, performing highlight processing on the key information of the program to be detected to obtain the highlighted program to be detected;

wherein the type of the key information at least comprises one of the following types: numbers, keywords, sensitive strings, and common strings.

In some embodiments, the highlighting the key information of the program to be detected includes:

and highlighting the different types of key information according to different colors.

The highlighting of the program to be detected may be performed as follows: key information of the program is highlighted in different colors according to its type by a preset tool; for example, numbers, keywords, character strings and sensitive character strings determined empirically by virus analysts can each be highlighted in a different color. The program is highlighted in the same way as the sample programs were highlighted when the preset model was trained; for example, if the numbers in the sample programs were highlighted in red during training, the numbers in the program to be detected are also highlighted in red during virus detection.

A sensitive character string may be identical to an entire character string or may be only a part of one. Since a sensitive character string is itself a character string, two colors could in principle apply to it; the color it finally presents is the one assigned to sensitive character strings. Character strings can therefore be divided into sensitive character strings and ordinary character strings, each highlighted in its own color.
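
A minimal sketch of this type-based coloring is given below; the keyword list, sensitive-string patterns, colors and HTML span output are all assumptions for illustration, since the application only requires that each type of key information receive its own color and that sensitive strings keep their own color.

```python
# Sketch of type-based highlighting. The patterns and colors below are assumed
# examples; a virus analyst would supply the real keyword and sensitive lists.
import re

PATTERNS = [
    ("sensitive", r"powershell|cmd\.exe|urlmon|http://\S*"),   # assumed examples
    ("string",    r'"[^"]*"'),
    ("keyword",   r"\b(?:Sub|End|Function|Call|Shell|CreateObject)\b"),
    ("number",    r"\b\d+\b"),
]
COLORS = {"sensitive": "#d19a66", "string": "#98c379",
          "keyword": "#61afef", "number": "#e06c75"}
COMBINED = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in PATTERNS), re.I)


def highlight_line(line: str) -> str:
    """Wrap each matched token in a colored <span>; when two patterns could match
    at the same position, earlier patterns (e.g. sensitive strings) take priority."""
    def repl(m: re.Match) -> str:
        kind = m.lastgroup
        return f'<span style="color:{COLORS[kind]}">{m.group()}</span>'
    return COMBINED.sub(repl, line)
```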

Step S203, cutting the highlighted program to be detected according to at least one preset size to obtain at least one group of program fragments;

in the embodiment of the application, the program to be detected after the highlighting processing can be cut according to different preset sizes, and program segments of different sizes of the program to be detected are generated by classifying according to the preset sizes used during cutting. When the program to be detected is cut, the at least one preset size may be one, two, three, four, five or more preset sizes. Illustratively, the preset size may be a specific size value, for example, the program to be detected may be cut according to the following three sizes, which are 20k, 100k, 1M, and the like; the preset size can also be a preset size range, and can be set by a person skilled in the art according to actual needs, and the above exemplary sizes do not constitute a limitation to the present application.

For example, by clipping the program to be detected according to a preset size, a set of program fragments with preset sizes can be obtained.

Illustratively, by clipping the program to be detected according to three preset sizes (referred to as a small-picture size, a medium-picture size and a large-picture size), three groups of program fragments of preset sizes can be obtained, namely small-size program fragments, medium-size program fragments and large-size program fragments.

When the program to be detected is cut according to the preset sizes, if the size of the program to be detected is smaller than a certain preset size, the program to be detected does not need to be cut according to the size, and the program to be detected is respectively cut according to the preset size smaller than or equal to the size of the program. It is understood that when the size of the program itself is equal to the preset size, the step S204 is directly performed without clipping the program. It can also be understood that the clipped program fragment is the code fragment of the program.

In some embodiments, the clipping the highlighted program to be detected according to at least one preset size to obtain at least one group of program fragments may include:

and performing sliding cutting on the highlighted program to be detected according to at least one preset size and preset granularity to iteratively obtain at least one group of program fragments.

The preset granularity may be one line of the program, two lines of the program, even multiple lines of the program, and the like, which is not specifically limited in this embodiment of the application.

For example, with one line of the program as the preset granularity, the program to be detected may be clipped as follows: the content displayed by the program to be detected in the editor is clipped from the beginning to the end of the program, using a certain preset size as the clipping window and one line of the program as the granularity. Preferably, the program to be detected is clipped by sliding at a certain granularity. Illustratively, if the program to be detected is six lines of code A, B, C, D, E and F, and clipping is performed at a granularity of one line with a preset size corresponding to two lines, the program to be detected is cut into five program fragments AB, BC, CD, DE and EF, which form one group of program fragments. The above clipping manner is the preferred clipping manner in the embodiment of the present application; those skilled in the art may also clip the program to be detected in other manners, for example dividing the program into the fragments AB, CD and EF.
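
The sliding clipping described above can be sketched as follows; it assumes the preset size is expressed as a number of code lines and the granularity (step) is one line, matching the A-F example, which is an illustrative assumption rather than a prescription of the application.

```python
# Minimal sketch of sliding clipping (assumption: the "preset size" is a number
# of code lines and the granularity is one line, as in the A..F example above).
from typing import List


def sliding_clip(code_lines: List[str], window: int, step: int = 1) -> List[List[str]]:
    """Return overlapping fragments of `window` lines, sliding by `step` lines."""
    if len(code_lines) <= window:       # program no larger than the preset size:
        return [code_lines]             # keep it whole, no clipping needed
    return [code_lines[i:i + window]
            for i in range(0, len(code_lines) - window + 1, step)]


# For six lines A..F and a two-line window this yields AB, BC, CD, DE, EF:
# sliding_clip(["A", "B", "C", "D", "E", "F"], window=2)
```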

Here, for steps S202 and S203, the program to be detected may be cut according to at least one preset size to obtain at least one group of program segments (without highlighting); and then carrying out highlighting processing on the at least one group of program fragments to obtain at least one group of highlighted program fragments. The clipping manner and the highlighting manner are the same as in steps S202 and S203, and both processing sequences can obtain program fragments subjected to the highlighting processing, which is not specifically limited in this embodiment of the present application.

Step S204, converting the at least one group of program fragments to generate at least one group of pictures to be detected;

the method and the device for detecting the virus program detect whether the program to be detected is the virus program or not by detecting the image generated after the program to be detected is subjected to conversion processing after the highlight processing. Therefore, after obtaining at least one group of program segments, a plurality of pictures with preset sizes can be obtained iteratively through the conversion tool of code conversion pictures.

Each group of pictures to be detected corresponds to a preset size, and each group of pictures to be detected comprises at least one picture to be detected.

Illustratively, a group of pictures to be detected is generated corresponding to the program fragments of one preset size obtained in step S203.

Illustratively, corresponding to the three groups of program fragments of preset sizes in the example of step S203, the small-size program fragments, medium-size program fragments and large-size program fragments correspondingly generate small-size pictures, medium-size pictures and large-size pictures.

Step S205, performing virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result;

after the program to be detected is converted into at least one group of pictures to be detected, virus detection can be carried out on the at least one group of pictures to be detected according to a preset model.

In some embodiments, the preset model comprises at least one sub-preset model;

correspondingly, the performing virus detection on the at least one group of pictures to be detected according to the preset model to obtain a detection result includes:

performing virus detection on a first group of pictures to be detected according to a first sub-preset model to obtain a detection result of each picture to be detected in the first group of pictures to be detected; the first sub-preset model is any one of the at least one sub-preset model, and the first group of pictures to be detected is a group of pictures to be detected corresponding to the first sub-preset model in the at least one group of pictures to be detected.

Each sub-preset model can be obtained by training on a group of highlighted sample pictures of the corresponding preset size.

That is to say, in the embodiment of the present application, each sub-preset model corresponds to one group of pictures to be detected, and a sub-preset model detects the group of pictures to be detected corresponding to it, obtaining a detection result for each picture in that group.

Illustratively, the group of pictures to be detected generated from a group of program fragments of one preset size in step S203 is detected using the sub-preset model corresponding to that size, to determine whether a virus picture exists in that group of pictures to be detected.

The at least one group of pictures to be detected comprises small-size pictures, medium-size pictures and large-size pictures; the preset model comprises a small-picture model, a medium-picture model and a large-picture model;

correspondingly, the performing virus detection on the at least one group of pictures to be detected according to the preset model to obtain a detection result includes:

performing virus detection on the small-size pictures by using the small-picture model to obtain a detection result of each small-size picture;

performing virus detection on the medium-size pictures by using the medium-picture model to obtain a detection result of each medium-size picture;

and performing virus detection on the large-size pictures by using the large-picture model to obtain a detection result of each large-size picture.

Here, corresponding to the three groups of program fragments of preset sizes (small, medium and large) exemplified in step S203, the small-size program fragments, medium-size program fragments and large-size program fragments are used to generate small-size pictures, medium-size pictures and large-size pictures. The small-picture model corresponding to the small size performs virus detection on the small-size pictures, the medium-picture model corresponding to the medium size performs virus detection on the medium-size pictures, and the large-picture model corresponding to the large size performs virus detection on the large-size pictures. A detection result is obtained for each picture in the three groups of pictures of different sizes, so as to determine whether a virus picture exists in each of the three groups of pictures to be detected.
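
The following sketch illustrates how the three groups of pictures could each be routed to the model trained for their size and how the per-picture results could then be aggregated; the model interface and the meaning of class 1 as "virus picture" are assumptions carried over from the earlier classifier sketch.

```python
# Sketch of detection with size-specific models (assumptions: PyTorch-style
# models as in the earlier sketch; pictures already grouped by preset size).
from typing import Dict, List
import torch


def detect_groups(models: Dict[str, torch.nn.Module],
                  picture_groups: Dict[str, List[torch.Tensor]]) -> Dict[str, List[bool]]:
    """Run each group of pictures through the model trained for its size.

    Returns, per size group, a flag for every picture: True = virus picture.
    """
    results: Dict[str, List[bool]] = {}
    for size, pictures in picture_groups.items():      # e.g. "small", "medium", "large"
        model = models[size]
        model.eval()
        with torch.no_grad():
            flags = [bool(model(p.unsqueeze(0)).argmax(dim=1).item() == 1)
                     for p in pictures]                 # class 1 assumed to mean "virus"
        results[size] = flags
    return results


def is_virus_program(results: Dict[str, List[bool]]) -> bool:
    """The program is a virus program if any picture in any group is a virus picture."""
    return any(flag for flags in results.values() for flag in flags)
```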

Step S206, if the detection result indicates that one picture to be detected in the at least one group of pictures to be detected is a virus picture, determining that the program to be detected is a virus program;

and if the detection result shows that any picture in the at least one group of pictures to be detected is a virus picture, the program to be detected is a virus program.

Step S207, if the detection result indicates that all the pictures to be detected in the at least one group of pictures to be detected are normal program pictures, determining that the program to be detected is a normal program.

After the virus detection is performed on at least one group of pictures to be detected according to the preset model and the detection result is obtained, if the detection result indicates that all the pictures in the at least one group of pictures to be detected are normal program pictures, the program to be detected is a normal program.

And step S208, determining the position of the virus.

In the embodiment of the application, when the program to be detected is determined to be a virus program, the position of the virus can be further determined.

In some embodiments, the method may further comprise:

determining the clipping position of the program segment corresponding to the virus picture in the program to be detected;

and determining the confusion position of the virus in the program to be detected according to the virus picture and the clipping position.

Illustratively, corresponding to the example of step S203, the program to be detected is cut into five program segments, i.e., AB, BC, CD, DE and EF; if the picture generated from the DE segment is determined to be a virus picture, it indicates that the virus is located in lines D to E of the program.
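For illustration only, recovering the line range of the obfuscation from the clipping position of the flagged picture can be sketched as follows, assuming a fixed clipping window and a one-line stride; the function name and parameters are hypothetical.

```python
def segment_line_range(segment_index: int, window: int, stride: int = 1):
    """Return the 1-based (start_line, end_line) covered by the flagged segment."""
    start = segment_index * stride + 1
    return start, start + window - 1

# The D-E segment in the example above is the 4th of AB, BC, CD, DE, EF (index 3):
print(segment_line_range(3, window=2))   # -> (4, 5), i.e. lines D and E
```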

In the embodiment of the present application, the specific implementations of the foregoing embodiments are described in detail. It can be seen that, in the virus detection method provided in this embodiment, the program to be detected is highlighted, the highlighted program to be detected is then cut according to at least one preset size to obtain program fragments to be detected, the program fragments are converted into pictures to generate at least one group of pictures to be detected of preset sizes, and virus detection is then performed on the generated pictures according to at least one sub-preset model corresponding to the at least one preset size. In this way, not only can it be accurately identified whether the program is a virus program, but the position of the virus can also be determined. Thus, this static virus detection method based on image recognition can detect obfuscated viruses and unknown deformations/variants thereof, has a good detection effect on obfuscated virus programs, and remains effective in low-density scenes (for example, a small amount of obfuscated code embedded in a large amount of normal code).

In an embodiment of the present application, a model training method is provided, where the method is applied to an electronic device, and functions implemented by the method may be implemented by a processor in the electronic device calling a program code, where the program code may be stored in a storage medium of the electronic device. Fig. 4 is a schematic flow chart illustrating an implementation of the model training method according to the embodiment of the present application. As shown in fig. 4, the method includes:

Step S301, obtaining a sample program set;

here, when performing model training, it is necessary to first acquire sample data for training. In the embodiment of the application, the model is trained to enable the model to perform virus detection, that is, whether the program is a virus program or a normal program is accurately identified through the model. Therefore, the sample data of the embodiment of the present application is a sample program, and the sample program set includes at least one normal program sample and at least one obfuscated program sample, where the obfuscated program sample is a virus program sample.

Step S302, converting the sample program in the sample program set to generate a sample picture set;

Here, in the embodiment of the present application, the preset model is obtained by training on sample pictures generated by converting programs. Therefore, after the sample programs are acquired, they are processed and converted into sample pictures.

Step S303, training an initial model by using the sample picture set to obtain a preset model.

Here, after the sample picture is obtained, the initial model may be trained by using the sample picture to obtain the preset model.

Illustratively, the model is trained on the sample pictures using a neural network as the machine learning algorithm, such as AlexNet, ZFNet, VGGNet, GoogLeNet, ResNet, and the like, so as to obtain the preset model. Other machine learning algorithms may also be used, and the embodiment of the present application is not particularly limited in this respect.
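For illustration only, the following is a minimal training sketch using the PyTorch/torchvision libraries, assuming the sample pictures are stored on disk in class-named folders (e.g. normal vs. obfuscated); the folder names, the network choice (an untrained ResNet-18) and the hyper-parameters are illustrative assumptions, not requirements of the method.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),        # one input resolution per sub-preset model
    transforms.ToTensor(),
])
# Assumed layout: data/small_size_samples/normal/*.png, data/small_size_samples/obfuscated/*.png
dataset = datasets.ImageFolder("data/small_size_samples", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18()                 # any of the networks named above could be used
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: normal vs. obfuscated (virus)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                    # illustrative number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```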

In the embodiment of the application, a sample program set is obtained; then, converting the sample program in the sample program set to generate a sample picture set; and finally, training an initial model by using the sample picture set to obtain a preset model. Therefore, the sample program is converted into the sample picture, and the model training is carried out on the sample picture, so that the visual difference between the virus program and the normal program in an editor is enhanced, and the preset model obtained by training can effectively identify whether the program is the virus program.

Based on the foregoing embodiments, the embodiments of the present application further provide a model training method. Fig. 5 is a schematic flow chart of an implementation of the model training method according to the embodiment of the present application. As shown in fig. 5, the method includes:

Step S401, obtaining a sample program set;

Here, the sample programs represent the sample data for training the model, and the sample program set includes at least one normal program sample and at least one obfuscated program sample.

Wherein the at least one obfuscated program sample comprises a first type obfuscated program sample and a second type obfuscated program sample; the first type of obfuscated program sample is an obfuscated program extracted from a known virus program, and the second type of obfuscated program sample is an obfuscated program generated by processing a normal program by using an obfuscation tool.

The known virus program is an existing program that has been determined to be a virus program, for example: a virus program in which a small amount of obfuscated code is embedded in a large amount of normal code. In this step, the obfuscated code may be extracted as a sample program by manual extraction.

In addition, converting a normal program into an obfuscated program may be performed using an obfuscation tool, such as Macro_pack, Macroshop, vba-obfuscator, VBad, Veil Framework, Generator-Macro, or the like. In addition, since the size of the code generated by the obfuscation tool is generally proportional to its input, the size of the code can be adjusted by the selection of known programs, i.e., obfuscated programs of different sizes can be generated by adjusting the size of the normal program used for conversion.

It will be appreciated that the sample set of programs also includes normal program samples, so that the trained model can distinguish between normal programs and virus programs.

Step S402, performing highlight processing on key information of the sample program in the sample program set to obtain a target sample program set;

wherein the type of the key information at least comprises one of the following types: numbers, keywords, sensitive strings, and common strings.

In some embodiments, the highlighting key information for a sample procedure of the sample procedure set comprises:

and highlighting the different types of key information according to different colors.

The highlighting of the sample programs in the sample program set can be performed with a tool, highlighting the numbers, keywords, character strings in the program, and the sensitive character strings determined by a virus analyst according to experience, in different colors. A sensitive character string may be identical to a character string or be a part of one. It can be understood that when a sensitive character string is highlighted, since it is also a character string, it is actually highlighted in two colors, and the highlight color finally presented is the color used for sensitive character strings. Therefore, character strings may be divided into sensitive character strings and common character strings and highlighted in different colors.
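For illustration only, a minimal highlighting sketch is given below; the keyword list, the sensitive-string list, the colors and the HTML-like markup are illustrative assumptions (in practice the keyword set comes from the programming language and the sensitive strings from virus analysts), and substring matching of sensitive strings is omitted for brevity.

```python
import re

COLOURS = {"number": "red", "keyword": "green", "string": "blue", "sensitive": "yellow"}
KEYWORDS = {"Sub", "End", "Dim", "Set", "Call", "Function"}      # illustrative VBA keywords
SENSITIVE = {"Shell", "CreateObject", "AutoOpen"}                # analyst-supplied examples

def highlight(line: str) -> str:
    """Wrap each token of one program line in a colour tag according to its type."""
    def colour(token: str) -> str:
        if re.fullmatch(r"\d+", token):
            kind = "number"
        elif token in SENSITIVE:
            kind = "sensitive"        # sensitive strings take precedence, as described above
        elif token in KEYWORDS:
            kind = "keyword"
        elif token.startswith('"'):
            kind = "string"
        else:
            return token
        return f'<span style="color:{COLOURS[kind]}">{token}</span>'
    return " ".join(colour(tok) for tok in line.split())

print(highlight('Set obj = CreateObject ( "WScript.Shell" )'))
```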

Step S403, converting the sample program in the target sample program set to generate a sample picture set;

In this step, the highlighted programs may be converted into pictures using a code-to-picture conversion tool such as Carbon, Polacode, CodeZen, etc., so as to generate the sample pictures. The sample picture set comprises at least one sample picture, and each sample picture is obtained by converting a sample program in the target sample program set.

Step S404, classifying the sample pictures in the sample picture set according to at least one preset size to obtain at least one group of sample pictures;

Here, the sample pictures in the sample picture set may be classified according to different preset sizes. For classification, there may be one, two, three, four, five, or more preset sizes. The preset size may be a specific size value; for example, the sample pictures may be classified into the following three sizes: 20k, 100k, and 1M. The preset size may also be a preset size range and can be set by a person skilled in the art according to actual needs; the above exemplary sizes do not limit the present application.

For example, when there is only one preset size, a set of samples of the preset size can be obtained. It can be understood that the sample program is derived from the obfuscated fragments in the known virus program extracted manually, and the obfuscated program generated by the obfuscation tool, and the size of the obfuscated program can be adjusted by the size of the normal program input, so that the sample picture can be made to be the same size or the same size range by manual adjustment.

In some embodiments, the at least one preset dimension includes a small drawing dimension, a medium drawing dimension, and a large drawing dimension; the method may further comprise:

and classifying the sample pictures in the sample picture set according to the small picture size, the medium picture size and the large picture size to obtain a small picture size sample picture group, a medium picture size sample picture group and a large picture size sample picture group.

In the embodiment of the present application, the sample pictures are classified according to three preset sizes (the small, medium and large picture sizes), so that three groups of sample pictures, that is, a small-picture-size sample picture group, a medium-picture-size sample picture group, and a large-picture-size sample picture group, can be obtained. It will be appreciated that, as previously described, three sets of sample pictures of different sizes or different size ranges may be obtained by manually adjusting the size of the extracted obfuscated fragments and the size of the normal programs input to the obfuscation tool. It will also be appreciated that the classification may be adjusted to yield more groups.
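For illustration only, bucketing sample pictures into the three groups can be sketched as follows, assuming that the exemplary sizes mentioned above (20k, 100k, 1M) refer to file sizes in bytes; both the grouping key and the threshold values are assumptions made for this sketch.

```python
import os
from collections import defaultdict

def group_by_size(paths, small=20_000, medium=100_000):
    """Split picture file paths into small / medium / large sample picture groups."""
    groups = defaultdict(list)
    for path in paths:
        size = os.path.getsize(path)
        if size <= small:
            groups["small"].append(path)
        elif size <= medium:
            groups["medium"].append(path)
        else:
            groups["large"].append(path)
    return groups
```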

Step S405, training the initial model by using at least one group of sample pictures respectively to obtain at least one sub-preset model, and determining the at least one sub-preset model as the preset model.

The at least one group of sample pictures are obtained by classifying the sample pictures in the sample picture set according to at least one preset size.

Here, after the sample pictures are classified by size, the models are trained for the pictures classified by different sizes, respectively, so as to obtain the preset models.

In some specific embodiments, the initial model is a neural network model. In addition, the initial model may also be other types of models, which is not specifically limited in this embodiment of the present application.

Illustratively, corresponding to step S404, when the preset size is only one size, a preset model can be obtained.

In some embodiments, in step S404, sample pictures in the sample picture set may be classified according to the small picture size, the medium picture size, and the large picture size, so as to obtain a small picture size sample picture group, a medium picture size sample picture group, and a large picture size sample picture group;

correspondingly, the training the initial model by using at least one group of sample pictures to obtain at least one sub-preset model may include:

training an initial model by using the small-image-size sample image group to obtain a small-image model;

training an initial model by using the middle graph size sample picture group to obtain a middle graph model;

and training an initial model by using the large-graph-size sample picture group to obtain a large-graph model.

That is, three sub-preset models are generated: a small graph model, a middle graph model and a large graph model, and they are determined as the preset models. Illustratively, mature deep neural networks in the computer vision field, such as AlexNet, ZFNet, VGGNet, GoogLeNet, and ResNet, are employed as the machine learning algorithm to train the models.

In the embodiment of the present application, the preset model can be continuously updated, so that when a new virus type is detected, the preset model can be updated according to the new virus type, and the generalization capability of the preset model is improved by the continuous updating. For example, new data samples are manually added for updating, or the model training apparatus detects in real time whether new program samples exist so as to update the preset model.

In the embodiment of the present application, the specific implementation of the foregoing embodiment is explained in detail. It can be seen that the model training method provided in this embodiment generates sample pictures after highlighting the sample programs, classifies the sample pictures according to at least one preset size, and trains on each group of sample pictures respectively, thereby obtaining a plurality of sub-preset models corresponding to the different preset sizes. In this way, the preset model obtained in the embodiment of the present application can detect obfuscated viruses and unknown deformations/variants thereof, and remains effective in low-density scenes (e.g., a small amount of obfuscated code embedded in a large amount of normal code).

Based on the foregoing embodiments, the present application provides an image-recognition-based static virus detection method, which aims to detect obfuscated viruses and unknown deformations/variants thereof and remains effective in low-density scenes (e.g., a small amount of obfuscated code embedded in a large amount of normal code).

In the embodiment of the present application, taking the detection of macro viruses as an example, a machine learning algorithm is first provided, which may include two stages: training and prediction. Second, normal macro programs and macro viruses can be classified using a mature deep learning algorithm in the computer vision field. Third, the process by which a human distinguishes obfuscated and deformed macro viruses from normal macro programs when reading code can be simulated, strengthening their visual difference in an editor. Therefore, the numbers, keywords, character strings and sensitive character strings of the macro program are highlighted in different colors in an editor, and the display effect is segmented into screenshots. That is to say, the embodiment of the present application mainly solves the problem of how to represent a macro program so that a deep learning algorithm in the computer vision field can effectively identify whether the macro program is a virus.

The training phase and the prediction phase in the embodiments of the present application will be described in detail below with reference to the accompanying drawings.

For the training phase, refer to fig. 6, which shows a schematic flow chart of an implementation of the model training method according to the embodiment of the present application. As shown in fig. 6, the model training method mainly includes:

step S501, obtaining a known macro virus program;

step S502, obtaining a known normal macro program;

here, before obtaining the preset model, sample data for training the preset model needs to be acquired first. In an embodiment of the present application, the sample data for training the preset model includes: known macro virus programs and known normal macro programs, wherein the known macro virus programs may include both known macro virus programs that are obfuscated code in their entirety, and partially obfuscated macro virus programs that embed obfuscated code in large amounts of normal code, as well as other types of known macro virus programs.

Step S501 and step S502 may be performed simultaneously, or either step may be performed first; the step numbers do not limit the order of execution.

Step S503, extracting confusion fragments manually;

step S504, the confusion tool generates confusion macro programs with different sizes;

step S503 corresponds to step S501, namely, the step of manually extracting the obfuscated fragment is to perform obfuscated fragment extraction on the known macro virus program. Illustratively, with a known macro virus program as input, the virus analyst manually extracts the obfuscated program fragments therein, discarding other non-obfuscated program portions.

Step S504 corresponds to step S502; that is, obfuscation processing is performed on the known normal macro programs, and the obfuscation tool generates obfuscated macro programs of different sizes. Illustratively, the obfuscated macro programs are generated using obfuscation tools (e.g., Macro_pack, Macroshop, vba-obfuscator, VBad, Veil Framework, Generator-Macro, etc.). Since the size of the code generated by the obfuscation tool is generally proportional to its input, its size can be adjusted by the selection of the known macro programs.

It should be noted that the use of obfuscation tools to generate obfuscated programs can improve the effect of training the model, and any obfuscation tool that can obfuscate programs is within the scope of the embodiments of the present application, and the obfuscation tools in the above examples are not limited to the present application.

It should be noted that step S503 and step S504 may be performed simultaneously, or either may be performed first; the step numbers do not limit the execution order. It is also possible to perform S501 and S503 first and then S502 and S504, to perform them in the reverse order, or to perform them simultaneously. As long as the model training result is not affected, the execution order is not specifically limited in the embodiment of the present application.

Step S505, highlighting the macro program by using a tool to generate a picture;

Here, highlighting the macro programs includes highlighting the obfuscated fragments extracted in step S503, the obfuscated macro programs generated in step S504, and the known normal macro programs; for example, the normal macro programs may be those acquired in step S502. The highlighting may use different colors for the numbers, keywords, character strings, and sensitive character strings empirically determined by the virus analyst. The pictures are then generated using code-to-picture conversion tools (e.g., Carbon, Polacode, CodeZen, etc.). It will be appreciated that the generated pictures include pictures of the aforementioned obfuscated fragments, of the obfuscated macro programs, and of the normal macro programs.

Highlighting the numbers, keywords, character strings and the sensitive character strings empirically determined by the virus analyst in different colors refers to highlighting the obfuscated macro programs and normal macro programs used as samples; for example, all numbers are highlighted in red, all keywords in green, all character strings in blue, and all sensitive character strings in yellow. It will be appreciated that a character string may be wholly or partially a sensitive character string; the sensitive string or sensitive portion is then actually highlighted in both blue and yellow, but the yellow highlighting effect is finally presented. It should be noted that, when the model trained in the embodiment of the present application is used to detect a macro program to be detected, the macro program is highlighted in the same manner as in this step.

It is to be understood that the above-described highlighted colors are merely exemplary and do not constitute a limitation of the present application.

It should be noted that transcoding is a mature technology, there are many open source implementations, and any transcoding tool or technology that can be used to convert code into pictures is within the scope of the embodiments of the present application, and the above exemplary transcoding tool is not intended to limit the present application.
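For illustration only, rendering the highlighted tokens into a picture can be sketched as follows using the Pillow library as one possible open-source substitute for the tools named above; the layout values and the (token, colour) input format are illustrative assumptions of this sketch.

```python
from PIL import Image, ImageDraw, ImageFont

def render_picture(lines, path, width=800, line_height=18):
    """Draw one coloured token list per program line and save the result as a picture."""
    img = Image.new("RGB", (width, line_height * len(lines) + 10), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    for row, token_colours in enumerate(lines):        # one list of (token, colour) per line
        x = 5
        for token, colour in token_colours:
            draw.text((x, 5 + row * line_height), token, fill=colour, font=font)
            x += draw.textlength(token + " ", font=font)
    img.save(path)

render_picture([[("Set", "green"), ("x", "black"), ("=", "black"), ("1", "red")]],
               "sample_0001.png")
```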

Step S506, classifying the pictures according to the sizes;

here, the pictures converted by the macro program are classified by size. Illustratively, three dimensions are defined, a small drawing dimension, a medium drawing dimension, and a large drawing dimension, respectively. It should be noted that, defining three dimensions is only an exemplary way for classifying pictures in the embodiment of the present application, and a person skilled in the art may define one, two, three, four, five, and more dimensions according to actual requirements to classify pictures, which is not specifically limited in the embodiment of the present application.

Step S507, training a model according to different pictures;

Here, model training is performed on the classified pictures. It is understood that in the aforementioned step S506 the pictures are divided into several classes according to size, and a corresponding number of models are trained in this step. Illustratively, corresponding to the division of the pictures into the small, medium and large picture sizes, models are trained on the small-, medium- and large-picture-size pictures respectively. Illustratively, this step may employ mature deep neural networks in the computer vision field as the machine learning algorithm, such as AlexNet, ZFNet, VGGNet, GoogLeNet, and ResNet.

It should be noted that, many mature deep neural networks in the field of computer vision are provided, and new deep neural networks are continuously developed, and any kind of deep neural networks are within the selection range of the embodiments of the present application as long as the training purpose of the embodiments of the present application can be achieved, and the above-mentioned exemplary machine learning algorithm is exemplary and does not constitute a limitation to the present application.

Step S508, obtaining a small graph model;

step S509, obtaining a middle graph model;

Step S510, obtaining a large graph model.

Here, after training the models in the different kinds of pictures, preset models corresponding to the different kinds of sizes can be obtained. Illustratively, after training the models for pictures with different size classifications, three models, namely a small graph model, a middle graph model and a large graph model, are generated corresponding to the division of the pictures into the small graph size, the middle graph size and the large graph size. It will be appreciated that if the pictures are classified by other sizes, a corresponding number of preset models will result.

After the preset model is obtained, the embodiment of the present application further provides an image-recognition-based static detection method for obfuscated macro viruses, which aims to detect obfuscated macro viruses and unknown deformations/variants thereof; that is, macro virus detection/prediction processing needs to be performed on the macro program to be detected (which may also be referred to as the "macro program under test").

Specifically, for the prediction stage, fig. 7 is a schematic flow chart of an implementation of the macro virus detection method according to the embodiment of the present application. As shown in fig. 7, the macro virus prediction method includes:

step S601, obtaining a macro program to be tested;

it should be noted that the specific implementation process of this step is the same as step S101 in the foregoing embodiment, and therefore, the detailed description is omitted here.

Step S602, sliding and cutting according to the size of a small picture;

the sliding clipping according to the small graph size may be to clip the content of the macro program to be tested, where the small graph size defined in the above embodiment is a clipping window, and the macro program to be tested is clipped from the beginning to the end of the program with one line of the program as granularity; a plurality of pictures with small picture size are obtained iteratively by the code-to-picture conversion tool used in step S505 in the previous embodiment.

Step S603, sliding and cutting according to the size of the middle graph;

the sliding clipping according to the size of the middle graph can be that the content of the macro program to be tested is displayed in an editor, the size of the middle graph defined in the previous embodiment is a clipping window, and the macro program to be tested is clipped from the beginning to the end of the program by taking one line of the program as granularity; through the code-to-picture conversion tool used in step S505 in the previous embodiment, a plurality of pictures of the middle picture size are obtained iteratively. It should be noted that if the macro under test is shorter and less than the middle size, this step is skipped.

Step S604, sliding cutting according to the size of the large graph;

the sliding clipping according to the size of the large graph can be that the content of the macro program to be tested displayed in the editor is clipped, the size of the large graph defined in the previous embodiment is a clipping window, and the macro program to be tested is clipped from the beginning to the end of the program by taking one line of the program as granularity; a plurality of pictures with large picture size are obtained iteratively by the code-to-picture conversion tool used in step S505 in the previous embodiment. It should be noted that if the macro procedure under test is short and not larger than the size of the graph, this step is skipped.

Here, in the above steps S602, S603, and S604, when performing slide clipping on the macro program under test, the clipping may be performed at a granularity of two or more lines of the program. It will be appreciated that when clipping is done at a finer granularity, the accuracy of the training and prediction macro procedure will be higher, with a corresponding increase in time consumption; when clipping at a coarser granularity, the macro procedure can be trained and predicted faster, and accordingly less accurate than fine granularity clipping.

For example, take a macro program under test of ten lines of program code, respectively A, B, C, D, E, F, G, H, I and J. When sliding clipping is performed according to the small picture size, the macro program under test is clipped from beginning to end at a granularity of one line of the program; if the defined small picture size corresponds to four lines of code, seven program segments, namely ABCD, BCDE, CDEF, DEFG, EFGH, FGHI and GHIJ, can be obtained, and seven pictures of the small picture size are then obtained iteratively using the code-to-picture conversion tool. When sliding clipping is performed according to the medium picture size at the same one-line granularity, if the defined medium picture size corresponds to eight lines of code, three program segments, namely ABCDEFGH, BCDEFGHI and CDEFGHIJ, can be obtained, and three pictures of the medium picture size are then obtained iteratively. When sliding clipping is performed according to the large picture size, if the defined large picture size corresponds to twelve lines of code, the macro program under test cannot reach the large picture size, so this step is skipped and no large-picture clipping is performed on the macro program under test.
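For illustration only, the sliding clipping of the example above can be sketched as follows; the window sizes (4, 8 and 12 lines) and the one-line stride reuse the exemplary values above and are not fixed by the method.

```python
def slide_clip(lines, window, stride=1):
    """Yield consecutive windows of `window` lines; an empty result means the
    program is shorter than the window and this size is skipped."""
    return [lines[i:i + window]
            for i in range(0, len(lines) - window + 1, stride)]

lines = list("ABCDEFGHIJ")                 # ten lines of the macro program under test
print(len(slide_clip(lines, 4)))           # 7 small-size segments: ABCD ... GHIJ
print(len(slide_clip(lines, 8)))           # 3 medium-size segments
print(slide_clip(lines, 12))               # [] -> the large size is skipped
```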

It should be noted that the above description of the measured macro program clipping manner is only exemplary, and those skilled in the art can set the granularity, the program size, and the like according to actual requirements, and this is not specifically limited in the embodiments of the present application.

It should be further noted that, defining three dimensions is only an exemplary way for classifying pictures in the embodiment of the present application, and a person skilled in the art may define one, two, three, four, five, and more dimensions according to actual requirements to classify pictures, which is not specifically limited in the embodiment of the present application.

Here, by cutting out the macro program under test in steps S602, S603, and S604, a picture of a small picture size, a picture of a medium picture size, and a picture of a large picture size can be obtained, respectively. It will be appreciated that if the measured program size is less than the medium or large picture size, only a small picture size picture, or a small and medium picture size picture will be obtained.

Step S605, predicting by using a small graph model;

the small graph model used for prediction may be a small graph model trained in the previous embodiment, which is used for predicting the small-size picture generated in step S605. This step uses a mature deep neural network in the computer vision field as a machine learning algorithm, such as AlexNet, ZFNet, VGGNet, google lenet, and ResNet, in accordance with step S307 of the previous embodiment.

Step S606, predicting by using a middle graph model;

the prediction using the middle graph model may be that the middle graph model trained in the previous embodiment is used to predict the large-graph-size picture generated in step S605. This step uses a mature deep neural network in the computer vision domain as a machine learning algorithm, such as AlexNet, ZFNet, VGGNet, google lenet, and ResNet, in accordance with step S507 of the previous embodiment.

Step S607, predicting by using a large graph model;

the prediction using the large graph model may be that the large graph model trained in the previous embodiment is used to predict the large-graph-size picture generated in step S405. This step uses a mature deep neural network in the computer vision field as a machine learning algorithm, such as AlexNet, ZFNet, VGGNet, google lenet, and ResNet, in accordance with step S307 of the previous embodiment.

And step S608, judging according to the prediction result.

Here, after prediction is performed using the different models in steps S605, S606 and S607, a prediction result can be obtained, and it is determined according to the prediction result whether the macro program under test is a normal macro program or an obfuscated macro virus. When performing the prediction, steps S605, S606 and S607 may be performed simultaneously or in any order, which is not specifically limited in the embodiment of the present application.

For example, the prediction results are collected after prediction with the three models (or one, two, or more models); if any one of the tested pictures is a picture of obfuscated macro virus code, it is determined that the whole macro program under test is a macro virus processed by an obfuscation technique, and the specific obfuscation position of the macro virus is determined according to the clipping positions of steps S602, S603 and S604; otherwise (all predicted pictures are pictures of a normal macro program), the macro program under test is judged to be a normal macro program.

For example, if the prediction is performed in an order in which step S605 is executed first and the small graph model determines that the macro program is a macro virus program, the prediction may be stopped without using any other model. Thus, energy consumption can be effectively saved.

Here, before obtaining the prediction result, the above steps S602 to S607 may be performed by converting the macro program under test into pictures of the different sizes simultaneously or in a certain order, and then performing prediction with the different models simultaneously or in a certain order. Alternatively, the macro program under test may first be converted into pictures of one size and predicted with the corresponding model; if a macro virus is predicted, the prediction may be stopped, or the macro program may still be converted into pictures of another size and predicted with the corresponding model, so as to improve accuracy and coverage. That is, on the premise that the prediction result can be obtained, the execution order is not particularly limited in the embodiment of the present application.
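For illustration only, the sequential prediction strategy with early stopping described above can be sketched as follows; the model callables and the group names are hypothetical placeholders, not part of the original disclosure.

```python
def judge(picture_groups, models, order=("small", "medium", "large")):
    """Return (is_virus, size_name, picture_index), or (False, None, None) if all pictures
    are judged to be normal macro-program pictures."""
    for size_name in order:
        model = models.get(size_name)
        pictures = picture_groups.get(size_name, [])
        if model is None or not pictures:
            continue                        # e.g. the program was too short for this size
        for index, picture in enumerate(pictures):
            if model(picture):              # True = picture of obfuscated macro virus code
                return True, size_name, index   # stop early: the whole macro is a macro virus
    return False, None, None
```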

In the model training stage of the embodiment of the present application, pictures are generated after highlighting the obfuscated fragments of known macro virus programs, the obfuscated macro programs generated by the obfuscation tool, and normal macro programs; the generated pictures are classified by size (for example, into small-, medium- and large-picture-size pictures), models are trained respectively on the classified pictures of different sizes, and models corresponding to the different sizes (for example, a small graph model, a middle graph model and a large graph model) are generated. In the prediction stage, the macro program under test is clipped according to the different sizes to generate pictures, and the generated pictures are predicted by the models to judge whether they are pictures of a macro virus; whenever any picture is predicted to be a picture of obfuscated macro virus code, the whole macro program under test is judged to be a macro virus processed by an obfuscation technique, and the specific obfuscation position of the macro virus is determined from the clipping position. Thus, this embodiment simulates the process by which a human identifies obfuscated and deformed macro viruses and normal macro programs when reading code, and strengthens their visual difference in an editor: the numbers, keywords, character strings and sensitive character strings of the macro program are highlighted in different colors in the editor, and the display effect is segmented into screenshots (i.e., pictures are generated). The present application trains on these screenshots. In this way, the macro program is represented by the highlighted macro program pictures, so that a deep learning algorithm in the computer vision field can effectively identify whether the macro program is a virus; and because the macro program to be detected is clipped by size, the prediction method remains effective in low-density scenes.

In summary, it should be noted that, first, there are many mature deep neural networks in the computer vision field and new ones are continuously being developed, such as AlexNet, ZFNet, VGGNet, GoogLeNet and ResNet. The innovation of the embodiment of the present application does not lie in a particular deep neural network, but in the way it is applied in the special application scenario of low-density obfuscated macro virus detection. Therefore, the deep neural network model of the embodiment of the present application is not uniquely determined and may be replaced.

Second, converting code into pictures is a mature technology with many open-source implementations, such as Carbon, Polacode and CodeZen. The implementation of this part is likewise not the innovation of the embodiment of the present application; therefore, any code-to-picture technology can be used instead.

Third, in order to improve the effect, obfuscation tools are used to generate obfuscated macro programs; the particular obfuscation tools used in this embodiment are not the innovation of the embodiment, and these obfuscation means are also replaceable.

Fourth, the embodiment of the present application defines three picture sizes, namely large, medium and small; in practice, this part can be divided into other numbers of categories as needed, for example, four or five categories by size.

The embodiment of the present application provides a macro virus detection method, which includes two stages: model training and prediction. A mature deep learning algorithm in the computer vision field is used to classify normal macro programs and macro viruses. The process by which a human identifies obfuscated and deformed macro viruses and normal macro programs when reading code is simulated, and their visual difference in an editor is strengthened: the numbers, keywords, character strings and sensitive character strings of the macro program are highlighted in different colors in the editor, and the display effect is segmented into screenshots. Therefore, the macro program is converted into pictures for representation, so that a deep learning algorithm in the computer vision field can effectively identify whether the macro program is a virus. That is to say, not only in a general obfuscated-macro-virus detection scenario but also in the special application scenario of low-density obfuscated macro virus detection, the embodiment of the present application converts the macro program into an image representation and performs training and prediction through a mature deep neural network in the computer vision field. In the special application scenario of low-density obfuscated macro virus detection, the embodiment of the present application can further classify by size, generate image segments of macro programs of different sizes, and train and predict on them respectively to improve the effect of the scheme; in addition, the embodiment of the present application can highlight keywords to distinguish obfuscated and non-obfuscated samples, further improving the effect of the scheme.

Based on the foregoing virus detection method embodiment, an embodiment of the present application provides a virus detection apparatus, where each unit included in the apparatus may be a partial circuit, a partial processor, a partial program, software, or the like, and all of the units may be implemented by a processor in an electronic device; of course, it can also be implemented by specific logic circuits; in the implementation process, the processor may be a CPU (Central Processing Unit), an MPU (Microprocessor Unit), a DSP (Digital Signal processor), an FPGA (Field Programmable Gate Array), or the like.

Fig. 8 is a schematic structural diagram of a virus detection apparatus according to an embodiment of the present application. As shown in fig. 8, the virus detection apparatus 700 includes a first obtaining unit 701, a first converting unit 702, a detecting unit 703, and a determining unit 704; wherein,

a first acquisition unit 701 for acquiring a program to be detected;

a first conversion unit 702, configured to perform conversion processing on the program to be detected, so as to generate at least one group of pictures to be detected;

the detecting unit 703 is configured to perform virus detection on the at least one group of pictures to be detected according to a preset model to obtain a detection result;

a determining unit 704, configured to determine whether the program to be detected is a virus program based on the detection result.

In some embodiments, the first converting unit 702 is further configured to highlight the key information of the program to be detected, so as to obtain the highlighted program to be detected; wherein the type of the key information at least comprises one of the following types: numbers, keywords, sensitive character strings and common character strings.

In some embodiments, the first converting unit 702 is further configured to cut the highlighted program to be detected according to at least one preset size, so as to obtain at least one group of program fragments; converting the at least one group of program fragments to generate at least one group of pictures to be detected; each group of pictures to be detected corresponds to a preset size, and each group of pictures to be detected comprises at least one picture to be detected.

In some embodiments, the first conversion unit 702 is further configured to highlight different types of key information according to different colors, respectively.

In some embodiments, the preset model comprises at least one sub-preset model; correspondingly, the detecting unit 703 is further configured to perform virus detection on a first group of pictures to be detected according to a first sub-preset model, so as to obtain a detection result of each picture to be detected in the first group of pictures to be detected; the first sub-preset model is any one of the at least one sub-preset model, and the first group of pictures to be detected is a group of pictures to be detected corresponding to the first sub-preset model in the at least one group of pictures to be detected.

In some embodiments, the at least one group of pictures to be detected includes: the preset model comprises a small graph model, a middle graph model and a large graph model; correspondingly, the detecting unit 703 is further configured to perform virus detection on the small-image-size picture by using the small-image model to obtain a detection result of each small-image-size picture; performing virus detection on the pictures with the medium picture size by using the medium picture model to obtain the detection result of each picture with the medium picture size; and carrying out virus detection on the large-image-size images by using the large-image model to obtain a detection result of each large-image-size image.

In some embodiments, the determining unit 704 is further configured to determine that the program to be detected is a virus program if the detection result indicates that one of the pictures to be detected in the at least one group of pictures to be detected is a virus picture; and if the detection result indicates that all the pictures to be detected in the at least one group of pictures to be detected are normal program pictures, determining that the program to be detected is a normal program.

In some embodiments, the determining unit 704 is further configured to determine a clipping position of a program segment corresponding to the virus picture in the program to be detected; and determining the confusion position of the virus in the program to be detected according to the virus picture and the clipping position.

The above description of the embodiments of the virus detection apparatus is similar to that of the above embodiments of the virus detection method, and has similar advantageous effects to those of the embodiments of the method. For technical details not disclosed in the embodiments of the virus detection apparatus of the present application, refer to the description of the embodiments of the method of the present application.

Based on the foregoing model training method embodiment, an embodiment of the present application provides a model training apparatus, where each unit included in the apparatus may be a partial circuit, a partial processor, a partial program, software, or the like, and all of the units may be implemented by a processor in an electronic device; of course, it can also be implemented by specific logic circuits; in implementation, the processor may be a CPU, MPU, DSP, FPGA, or the like.

Fig. 9 is a schematic structural diagram of a model training apparatus 800 according to an embodiment of the present application, and as shown in fig. 9, the model training apparatus 800 includes a second obtaining unit 801, a second transforming unit 802, and a training unit 803; wherein,

a second obtaining unit 801, configured to obtain a sample program set; wherein the sample program set includes at least one normal program sample and at least one obfuscated program sample.

A second conversion unit 802, configured to perform conversion processing on the sample programs in the sample program set, so as to generate a sample picture set;

and the training unit 803 is configured to train the initial model by using the sample picture set to obtain a preset model.

In some embodiments, the at least one obfuscated program sample comprises a first type obfuscated program sample and a second type obfuscated program sample; the first type of obfuscated program sample is an obfuscated program extracted from a known virus program, and the second type of obfuscated program sample is an obfuscated program generated by processing a normal program by using an obfuscation tool.

In some embodiments, the second conversion unit 802 is further configured to highlight key information of the sample programs in the sample program set to obtain a target sample program set, wherein the type of the key information at least comprises one of the following types: numbers, keywords, sensitive strings and common strings; and to convert the sample programs in the target sample program set to generate the sample picture set.

In some embodiments, the second conversion unit 802 is further configured to highlight different types of the key information according to different colors, respectively.

In some embodiments, the training unit 803 is further configured to train the initial model by using at least one group of sample pictures, respectively, to obtain at least one sub-preset model, and determine the at least one sub-preset model as the preset model; the at least one group of sample pictures are obtained by classifying the sample pictures in the sample picture set according to at least one preset size.

In some embodiments, the at least one preset size includes a small picture size, a medium picture size, and a large picture size, and the second conversion unit 802 is further configured to classify the sample pictures in the sample picture set according to the small picture size, the medium picture size, and the large picture size, so as to obtain a small picture size sample picture group, a medium picture size sample picture group, and a large picture size sample picture group.

The training unit 803 is further configured to train the initial model by using the small-size sample picture group to obtain a small-size model; training an initial model by using the middle graph size sample picture group to obtain a middle graph model; and training the initial model by utilizing the large-image-size sample picture group to obtain a large-image model.

In some embodiments, as shown in fig. 10, the model training apparatus 800 further comprises: an updating unit 804, configured to update the at least one preset model according to a new virus type when the new virus type is detected.

The above description of the embodiment of the model training apparatus is similar to that of the above embodiment of the model training method, and has similar beneficial effects to the embodiment of the method. For technical details not disclosed in the embodiments of the present application model training apparatus, please refer to the description of the embodiments of the present application model training method for understanding.

It should be noted that, in the embodiment of the present application, if the method described above is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing an electronic device (which may be a personal computer, a server, or the like) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a ROM (Read Only Memory), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.

Correspondingly, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor executes the program to implement the steps in the virus detection method or the model training method provided in the foregoing embodiments.

Correspondingly, the present application provides a readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the virus detection method or the model training method described above.

Here, it should be noted that: the above description of the embodiments of the storage medium and the electronic device is similar to the description of the embodiments of the macro virus detection method or the model training method, and has similar beneficial effects to the embodiments of the method. For technical details not disclosed in the embodiments of the storage medium and the electronic device of the present application, please refer to the description of the embodiments of the virus detection method or the model training method of the present application.

It should be noted that fig. 11 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application, and as shown in fig. 11, the hardware entity of the electronic device 900 includes: a processor 901, a communication interface 902 and a memory 903, wherein

The processor 901 generally controls the overall operation of the electronic device 900.

The communication interface 902 may enable the electronic device 900 to communicate with other terminals or servers via a network.

The Memory 903 is configured to store instructions and applications executable by the processor 901, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 901 and modules in the electronic device 900, and may be implemented by a FLASH Memory or a RAM (Random Access Memory).

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or certain features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.

The units described above may or may not be physically separated, and what is displayed as a unit may or may not be a physical unit, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, all functional units in the embodiments of the present application may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.

The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.

Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.

The features disclosed in the several method or apparatus embodiments provided in this application may be combined in any combination to arrive at a new method or apparatus embodiment without conflict.

The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
