Method and device for classifying large number of images based on small number of image samples

Document No.: 1904693 · Publication date: 2021-11-30

Note: This technology, "Method and device for classifying a large number of images based on a small number of image samples," was designed and created by Guo Dayong, Zhang Hailong, and Lan Yong on 2021-07-16. Abstract: The application discloses a method and a device for classifying a large number of images based on a small number of image samples. A small number of images with obvious features are selected and labeled, and the Inception-v3 pre-trained model is modified; a classification model is then trained on the small sample set. After the classification model is obtained, it is converted into a feature extraction model, which is used to obtain the feature vectors of the labeled images and to calculate the center point vector of each image category. Finally, the feature extraction model obtains the feature vectors of the images in the database, the similarity between each of these and each category's center point vector is calculated, and an image is retained when the similarity exceeds a threshold. The method and the device facilitate the identification of images of different categories and improve its efficiency, so that large numbers of images can be classified accurately and quickly, avoiding excessive time and labor costs.

1. A method for classifying a plurality of images based on a small number of image samples, comprising:

step S1: setting at least one image category and setting target characteristics corresponding to the image categories;

step S2: screening partial images according to preset image categories and corresponding target characteristics in an image database, and classifying and labeling the screened partial images;

step S3: modifying the last layer of the Inception-v3 pre-trained model so that the length of the one-dimensional vector output by the model equals the number of preset image categories;

step S4: dividing the classified and labeled partial images into a training set and a verification set according to a preset proportion, wherein the verification set does not participate in training;

step S5: inputting the training set into the modified Inception-v3 pre-trained model for model training to obtain a classification model;

step S6: modifying the obtained classification model into a feature extraction model;

step S7: respectively inputting the classified and labeled partial images into the feature extraction model to obtain feature vectors corresponding to all the images, and adding all the feature vectors of the same image category to obtain a central point vector of the image category;

step S8: using the feature extraction model, obtaining the feature vector corresponding to each image in the image database, calculating its similarity with each center point vector, and classifying the image into the image category corresponding to a center point vector if the similarity exceeds a preset threshold.

2. The method according to claim 1, wherein the step S2 of selecting partial images according to the preset image categories and their corresponding target features comprises:

obtaining N image categories and the target features corresponding to each image category, wherein N is a natural number with 1 ≤ N ≤ 1000;

acquiring a plurality of images to be annotated, respectively matching each image to be annotated with each target feature, classifying all the images to be annotated matched with the same target feature into image categories corresponding to the target feature, and obtaining the images to be annotated respectively included in each image category.

3. The method of claim 1, wherein in step S2, the numbers of labeled images in different image categories differ by no more than a factor of two.

4. The method of claim 1, wherein the predetermined ratio is 7:3.

5. The method of claim 1, wherein in step S5, the Inception-v3 pre-trained model has 47 layers; only the last 5-10 layers are opened for training when model training starts, and all layers are opened for training after the loss stabilizes.

6. The method of claim 1, wherein the step S6 of modifying the obtained classification model into the feature extraction model comprises:

deleting the last layer of the obtained classification model, so that when an image is input into the classification model, the output is no longer a one-dimensional vector whose length equals the number of preset image categories but a one-dimensional vector of length 2048; at this point, the output of the classification model is a vector characterizing the target features.

7. The method according to claim 1, wherein in step S8, the similarity is cosine similarity, and the greater the cosine value, the higher the similarity.

8. The method for classifying a plurality of images based on a small number of image samples of claim 1, further comprising:

step S9: adding the images screened out of the image database by similarity calculation as conforming to a preset image category into the training set, and returning to step S5.

9. An apparatus for classifying a plurality of images based on a small number of image samples, comprising:

a category acquisition module configured to acquire a plurality of image categories and the target features corresponding to each image category;

an image labeling module configured to screen out some images from an image database according to the image categories acquired by the category acquisition module and their corresponding target features, and to classify and label the screened images;

a pre-trained model modification module configured to modify the last layer of the Inception-v3 pre-trained model so that the length of the one-dimensional vector output by the model equals the number of preset image categories;

a classification model training module configured to divide the classified and labeled images into a training set and a verification set at a preset ratio, and to input the training set into the modified Inception-v3 pre-trained model for model training to obtain a classification model;

an adjusting module configured to modify the obtained classification model into a feature extraction model;

a center point vector calculation module configured to input the classified and labeled images into the feature extraction model to obtain the feature vector corresponding to each image, and to add the feature vectors of the same image category to obtain that category's center point vector;

a similarity scoring module configured to use the feature extraction model obtained by the adjusting module to obtain the feature vector corresponding to each image in the image database, to calculate its similarity with each center point vector, and to classify images whose similarity exceeds a preset threshold into the image category corresponding to that center point vector.

10. The apparatus of claim 9, wherein the adjustment module comprises: a deletion module configured to delete the last layer of the obtained classification model such that the output of the classification model is a 2048-length one-dimensional vector characterizing a target feature.

Technical Field

The invention relates to the technical field of image processing, in particular to a method and a device for classifying a large number of images based on a small number of image samples.

Background

In production and daily life, people increasingly record important information by capturing images or photographs. Searching a large collection for a particular image is cumbersome, so the classified management of image material is very important.

Image classification is an important pattern recognition application: different images must be recognized according to their target features, and the classification model trained in advance determines the classification accuracy in application.

At present, the volume of image data keeps growing, and there is no good intelligent method for quickly classifying large numbers of images.

Disclosure of Invention

The present invention is directed to a method and an apparatus for classifying a large number of images based on a small number of image samples, so as to solve the problems mentioned in the background.

In order to achieve the purpose, the invention adopts the following technical scheme:

a first aspect of the present application provides a method for classifying a large number of images based on a small number of image samples, comprising:

step S1: setting at least one image category and setting target characteristics corresponding to the image categories;

step S2: screening partial images according to preset image categories and corresponding target characteristics in an image database, and classifying and labeling the screened partial images;

step S3: modifying the last layer of the Inception-v3 pre-trained model so that the length of the one-dimensional vector output by the model equals the number of preset image categories;

step S4: dividing the classified and labeled partial images into a training set and a verification set according to a preset proportion, wherein the verification set does not participate in training;

step S5: inputting the training set into the modified Inception-v3 pre-trained model for model training to obtain a classification model;

step S6: modifying the obtained classification model into a feature extraction model;

step S7: respectively inputting the classified and labeled partial images into the feature extraction model to obtain feature vectors corresponding to all the images, and adding all the feature vectors of the same image category to obtain a central point vector of the image category;

step S8: using the feature extraction model, obtaining the feature vector corresponding to each image in the image database, calculating its similarity with each center point vector, and classifying the image into the image category corresponding to a center point vector if the similarity exceeds a preset threshold.

Preferably, in step S2, the screening out partial images according to the preset image categories and the corresponding target features includes:

obtaining N image categories and the target features corresponding to each image category, wherein N is a natural number with 1 ≤ N ≤ 1000;

acquiring a plurality of images to be annotated, respectively matching each image to be annotated with each target feature, classifying all the images to be annotated matched with the same target feature into image categories corresponding to the target feature, and obtaining the images to be annotated respectively included in each image category.

Preferably, in step S2, the classification and labeling of the screened images may be performed by placing images of different categories in different folders, that is, all images of one category share a folder.

Preferably, in step S2, the numbers of labeled images in different image categories differ by no more than a factor of two.

In the above, the Inception-v3 pre-trained model is an image classification model that has been trained on the large image database ImageNet. The pre-trained model can classify 1000 types of images; its last layer originally outputs a one-dimensional vector of length 1000, and each value in the vector can be regarded as the confidence of one image category.

Preferably, the preset ratio is 7:3.

Preferably, in step S5, how many layers of the Inception-v3 pre-trained model are opened for training and how many iterations are needed are adjusted in real time according to the categories and the number of the labeled images.

In a preferred embodiment, in step S5, the Inception-v3 pre-trained model has 47 layers; when model training starts, only the last 5-10 layers are opened for training, and after the loss stabilizes, all layers are opened for training.

Preferably, in step S5, after the Inception-v3 pre-trained model is trained, the method further includes: evaluating the classification model obtained after training on the verification set.

Preferably, in step S6, the modifying the obtained classification model into a feature extraction model includes:

deleting the last layer of the obtained classification model, so that when an image is input into the classification model, the output is no longer a one-dimensional vector whose length equals the number of preset image categories but a one-dimensional vector of length 2048; at this point, the output of the classification model is a vector characterizing the target features.

Preferably, in step S8, the similarity is cosine similarity, and the greater the cosine value, the higher the similarity.

More preferably, the cosine similarity is calculated as:

\[
\text{similarity} = \cos\theta = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^{2}}\;\sqrt{\sum_{i=1}^{n} B_i^{2}}}
\]

where vector A is the feature vector corresponding to an image to be compared in the image database and A_i denotes its components; vector B is the center point vector and B_i denotes its components; and similarity is the calculated cosine value, in the range [-1, 1].

Preferably, the method further comprises:

step S9: and adding the images which are screened out from the image database and accord with the preset image category into the training set through similarity calculation, and returning to the step S5.

A second aspect of the present application provides an apparatus for classifying a large number of images based on a small number of image samples, comprising:

a category acquisition module configured to acquire a plurality of image categories and the target features corresponding to each image category;

an image labeling module configured to screen out some images from an image database according to the image categories acquired by the category acquisition module and their corresponding target features, and to classify and label the screened images;

a pre-trained model modification module configured to modify the last layer of the Inception-v3 pre-trained model so that the length of the one-dimensional vector output by the model equals the number of preset image categories;

a classification model training module configured to divide the classified and labeled images into a training set and a verification set at a preset ratio, and to input the training set into the modified Inception-v3 pre-trained model for model training to obtain a classification model;

an adjusting module configured to modify the obtained classification model into a feature extraction model;

a center point vector calculation module configured to input the classified and labeled images into the feature extraction model to obtain the feature vector corresponding to each image, and to add the feature vectors of the same image category to obtain that category's center point vector;

a similarity scoring module configured to use the feature extraction model obtained by the adjusting module to obtain the feature vector corresponding to each image in the image database, to calculate its similarity with each center point vector, and to classify images whose similarity exceeds a preset threshold into the image category corresponding to that center point vector.

Preferably, the preset ratio is 7:3.

Preferably, the adjusting module comprises: a deletion module configured to delete the last layer of the obtained classification model, so that the output of the classification model is a one-dimensional vector of length 2048 characterizing the target features.

Compared with the prior art, the technical scheme of the invention has the following beneficial effects:

the method comprises the steps of selecting a small number of images with obvious features for labeling, modifying an increment _ v3 pre-training model, training a classification model based on the small number of samples, changing the classification model into a feature extraction model after obtaining the classification model, obtaining feature vectors of the labeled images by using the feature extraction model, calculating a central point vector of each image category, obtaining the feature vectors of the images in a database by using the feature extraction model, calculating similarity with the central point vectors of the image categories respectively, and keeping when the similarity exceeds a threshold value. The method and the device facilitate the identification of the images of different types and improve the identification efficiency of the images of different types, so that the images can be accurately and quickly classified, and the problem of excessive time cost and labor cost waste is avoided.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:

FIG. 1 is a flow chart of a method for classifying a large number of images based on a small number of image samples according to a preferred embodiment of the present invention;

FIG. 2 is a schematic diagram illustrating the operation of a method for classifying a large number of images based on a small number of image samples according to an embodiment of the present application;

FIG. 3 is a schematic diagram of a processing procedure for obtaining a few images with distinct features for annotation in an embodiment of the present application;

FIG. 4 is another schematic processing diagram for obtaining a few distinct images for annotation in the embodiment of the present application;

FIG. 5 is a schematic diagram of a processing procedure of obtaining feature vectors of labeled images by using a feature extraction model and calculating a center point vector of each image category in the embodiment of the present application;

FIG. 6 is a schematic diagram of a processing procedure for calculating similarity between feature vectors and center point vectors of images in an image database according to an embodiment of the present disclosure;

fig. 7 is a block diagram of an apparatus for classifying a large number of images based on a small number of image samples according to a preferred embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and effects of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order, it being understood that the data so used may be interchanged under appropriate circumstances. Furthermore, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

Referring to fig. 1, fig. 1 is a flow chart illustrating a method for classifying a large number of images based on a small number of image samples according to the present application.

The method for classifying a large number of images based on a small number of image samples mainly comprises the following steps:

step S1: setting at least one image category and setting target characteristics corresponding to the image categories;

step S2: screening partial images according to preset image categories and corresponding target characteristics in an image database, and classifying and labeling the screened partial images;

step S3: modifying the last layer of the Inception-v3 pre-trained model so that the length of the one-dimensional vector output by the model equals the number of preset image categories;

step S4: dividing the classified and labeled partial images into a training set and a verification set according to a preset proportion, wherein the verification set does not participate in training;

step S5: inputting the training set into the modified Inception-v3 pre-trained model for model training to obtain a classification model, and evaluating the trained classification model on the verification set;

step S6: modifying the obtained classification model into a feature extraction model;

step S7: respectively inputting the classified and labeled partial images into the feature extraction model to obtain feature vectors corresponding to all the images, and adding all the feature vectors of the same image category to obtain a central point vector of the image category;

step S8: using the feature extraction model, obtaining the feature vector corresponding to each image in the image database, calculating its similarity with each center point vector, and classifying the image into the image category corresponding to a center point vector if the similarity exceeds a preset threshold.

Step S9: adding the images screened out of the image database by similarity calculation as conforming to a preset image category into the training set, and returning to step S5.

Example:

Specifically, refer to fig. 2 to 6; fig. 2 is a schematic diagram illustrating the operating principle of a method for classifying a large number of images based on a small number of image samples according to an embodiment of the present application.

The first step: select a small number of images with obvious features for labeling.

Step 101: first, define the image categories to be summarized, set N image categories (N is a natural number, 1 ≤ N ≤ 1000), and set the target feature corresponding to each image category.

Step 102: screen out some images from an image database according to the preset image categories and their corresponding target features, and classify and label the screened images. For example, the classified labeling can be realized by placing images of different categories in different folders, that is, all images of one category in one folder.

As shown in fig. 3, the left side is the image database. Suppose two image categories are preset, an identity card category and a certificate photo category; images meeting each category's requirements are then found in the image database. Different categories are set for different image databases and different tasks; for example, when a card category is set, the screened images should contain the desired card as far as possible, see fig. 4.

The numbers of labeled images in different image categories should differ by no more than a factor of two. For example, if 50 images are labeled for image category a, image category b should have between 25 and 100.
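This balance constraint is easy to check programmatically. The helper below is an illustrative sketch, not part of the original disclosure:

```python
def balanced(counts):
    """Check the labeling constraint described above: the image counts of
    any two categories may differ by at most a factor of two."""
    return max(counts) <= 2 * min(counts)

# Category a has 50 labeled images; a category with 80 is allowed,
# but one with 120 would violate the constraint (120 > 2 * 50).
print(balanced([50, 80]))   # True
print(balanced([50, 120]))  # False
```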

The second step: modify the Inception-v3 pre-trained model.

The Inception-v3 pre-trained model is an image classification model that has been trained on the large image database ImageNet and can classify 1000 types of pictures. The last layer of the original model outputs a one-dimensional vector of length 1000, and each value in the vector can be regarded as the confidence of one image category.

Assuming that we label two image categories, the last layer of the Inception-v3 pre-trained model is rewritten so that the model finally outputs a one-dimensional vector of length 2.
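This head replacement might be sketched with `tf.keras` (an assumption; the text does not name a framework). `weights=None` keeps the example offline; in practice the ImageNet weights (`weights='imagenet'`) would be loaded:

```python
import tensorflow as tf

NUM_CLASSES = 2  # number of preset image categories

# Build Inception-v3 without its original 1000-way classification head.
base = tf.keras.applications.InceptionV3(
    weights=None,        # use 'imagenet' in practice; None keeps this offline
    include_top=False,
    pooling="avg",       # global average pooling -> 2048-d penultimate vector
    input_shape=(299, 299, 3),
)

# New last layer: a softmax over the preset image categories.
head = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(inputs=base.input, outputs=head)

print(model.output_shape)  # (None, 2)
```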

The third step: a classification model is trained based on a small number of samples.

Step 301: divide the classified and labeled images into a training set and a verification set at the preset 7:3 ratio; the verification set does not participate in training.
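The 7:3 split can be sketched as follows; the shuffling, fixed seed, and file names are illustrative assumptions, not details from the original text:

```python
import random

def split_dataset(items, train_ratio=0.7, seed=42):
    """Shuffle the labeled images and split them 7:3 into a training set
    and a verification set (the latter does not participate in training)."""
    rng = random.Random(seed)
    items = items[:]          # copy so the caller's list is untouched
    rng.shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

train_set, val_set = split_dataset([f"img_{i}.jpg" for i in range(100)])
print(len(train_set), len(val_set))  # 70 30
```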

Step 302: input the training set into the modified Inception-v3 pre-trained model for model training to obtain a classification model. The Inception-v3 pre-trained model has 47 layers in total; during training, only the last 5-10 layers are opened at first, and all layers are opened after the loss stabilizes. How many layers to open first and how many iterations are needed must be adjusted in real time according to the categories and number of images labeled by the user.
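The freeze-then-unfreeze schedule might look like this in `tf.keras` (an assumed framework; note that `tf.keras` counts individual layers, so the "47 layers" figure and the exact slice below are approximations of the text's coarser layer count):

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights=None, include_top=False,
                                         pooling="avg",
                                         input_shape=(299, 299, 3))
out = tf.keras.layers.Dense(2, activation="softmax")(base.output)
model = tf.keras.Model(base.input, out)

# Phase 1: freeze everything except the last few layers (the text opens
# only the last 5-10 layers; the exact slice here is an assumption).
for layer in model.layers[:-10]:
    layer.trainable = False
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit(train_ds, epochs=...)  # train until the loss stabilizes

# Phase 2: open all layers and continue training at a lower learning rate.
for layer in model.layers:
    layer.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy")
# model.fit(train_ds, epochs=...)
```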

The fourth step: after the classification model is obtained, change it into a feature extraction model.

The structure of the first 46 layers of the original model is fixed; the penultimate layer outputs a one-dimensional vector of length 2048, and the last layer was modified in the second step according to the image categories. The last layer of the trained classification model is removed, so that when an image is input into the classification model, the output is no longer a one-dimensional vector whose length equals the number of preset image categories (for example, 2) but a one-dimensional vector of length 2048. At this point, the output of the classification model is the vector characterizing the target features. The image feature vectors obtained here are unrelated to the number of image categories.
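Dropping the last layer to obtain the feature extractor can be sketched as follows (again assuming `tf.keras`; `weights=None` keeps the example offline, whereas in practice the trained classifier's weights would be in place):

```python
import tensorflow as tf

# Rebuild the classifier's architecture from the earlier steps.
base = tf.keras.applications.InceptionV3(weights=None, include_top=False,
                                         pooling="avg",
                                         input_shape=(299, 299, 3))
head = tf.keras.layers.Dense(2, activation="softmax")(base.output)
classifier = tf.keras.Model(base.input, head)

# Remove the last layer: route the model's output to the penultimate
# layer (the global average pooling), which emits the 2048-d vector.
feature_extractor = tf.keras.Model(classifier.input,
                                   classifier.layers[-2].output)
print(feature_extractor.output_shape)  # (None, 2048)
```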

The fifth step: obtain the feature vectors of the labeled images with the feature extraction model, and calculate the center point vector of each image category.

Step 501: input the classified and labeled images into the feature extraction model to obtain the feature vector corresponding to each image.

Step 502: add the feature vectors of the same image category to obtain that category's center point vector.

For example, referring to fig. 5, the identity card images are input into the feature extraction model to obtain vectors A1, A2, A3, ..., respectively, and the center point vector of the identity card category is A = A1 + A2 + A3 + ... + An, where n is a natural number with 1 ≤ n ≤ 1000.
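The center point computation is a plain vector sum. A minimal NumPy sketch, with toy 4-dimensional features standing in for the 2048-dimensional Inception-v3 vectors:

```python
import numpy as np

def center_point_vector(feature_vectors):
    """Sum the feature vectors of all labeled images in one category to
    get that category's center point vector (A = A1 + A2 + ... + An)."""
    return np.sum(np.asarray(feature_vectors, dtype=np.float64), axis=0)

# Toy features for three labeled identity-card images (values invented).
id_card_features = [[1.0, 0.0, 2.0, 1.0],
                    [0.5, 0.5, 1.5, 1.0],
                    [1.5, 0.5, 2.5, 1.0]]
print(center_point_vector(id_card_features))  # [3. 1. 6. 3.]
```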

The sixth step: obtain the feature vectors of the images in the image database with the feature extraction model, calculate the similarity with the center point vector of each image category, and keep an image when the similarity exceeds a threshold.

Using the feature extraction model, obtain the feature vector corresponding to each image in the image database, calculate its cosine similarity with each center point vector, and classify the image into the image category corresponding to a center point vector if the similarity exceeds a preset threshold.

Cosine similarity evaluates the similarity of two feature vectors by the cosine of the angle between them; the specific calculation formula is:

\[
\text{similarity} = \cos\theta = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^{2}}\;\sqrt{\sum_{i=1}^{n} B_i^{2}}}
\]

where vector A is the feature vector corresponding to an image to be compared in the image database and A_i denotes its components; vector B is the center point vector and B_i denotes its components; and similarity is the calculated cosine value.

The value range of the cosine similarity is [-1, 1]; the larger the value, the higher the similarity of the two images. Identical images have identical feature vectors, and the cosine similarity of two identical feature vectors is 1.
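The cosine similarity above is straightforward to implement with NumPy; this sketch mirrors the formula and the two properties just mentioned:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

v = [1.0, 2.0, 3.0]
print(cosine_similarity(v, v))              # 1.0 (identical vectors)
print(cosine_similarity([1, 0], [0, 1]))    # 0.0 (orthogonal vectors)
print(cosine_similarity([1, 0], [-1, 0]))   # -1.0 (opposite vectors)
```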

Set a threshold for similarity screening. For example, if the screening threshold of the identity card category is set to 0.7, an image in the image database is judged to belong to the identity card category when its similarity with the category's center point vector exceeds 0.7. As shown in fig. 6, the similarity between the first image and the center point vector of the identity card category is 0.872, greater than 0.7, so the image is classified into the identity card category; similarly, images whose similarity does not reach 0.7 are filtered out.

The seventh step: use the images retained in the sixth step as newly labeled images, then retrain the model and screen again.

Sometimes, because few images were labeled initially, the trained classification model does not perform well enough during screening; in that case, the threshold in the sixth step can be set higher, so that the screened images are, as far as possible, all images meeting the requirements.

Adding the screened images that meet the requirements to the training set and repeating the third step yields a classification model with a better effect.

On the other hand, the application also provides a device for classifying a large number of images based on a small number of image samples. Since the working principle of the apparatus for classifying a large number of images based on a small number of image samples disclosed in the present application is the same as or similar to the principle of the method for classifying a large number of images based on a small number of image samples disclosed in the present application, repeated descriptions are omitted.

Referring to fig. 7, the present application discloses an apparatus 100 for classifying a large number of images based on a small number of image samples, comprising: a category acquisition module 110, an image labeling module 120, a pre-trained model modification module 130, a classification model training module 140, an adjusting module 150, a center point vector calculation module 160 and a similarity scoring module 170. Specifically:

a category obtaining module 110 configured to obtain a plurality of image categories and target features corresponding to each image category;

an image labeling module 120 configured to screen out partial images from an image database according to the image categories acquired by the category acquiring module 110 and the corresponding target features thereof, and classify and label the screened partial images;

a pre-trained model modification module 130 configured to modify the last layer of the Inception-v3 pre-trained model so that the length of the one-dimensional vector output by the model equals the number of preset image categories;

the classification model training module 140 is configured to divide the classified and labeled images into a training set and a verification set at a preset ratio, and to input the training set into the modified Inception-v3 pre-trained model for model training to obtain a classification model;

an adjustment module 150 configured to delete the last layer of the obtained classification model, so that the output of the classification model is a one-dimensional vector of length 2048 that characterizes the target feature;

a central point vector calculation module 160, configured to input the classified and labeled partial images into the feature extraction model respectively, obtain feature vectors corresponding to the respective images, and add the feature vectors of the same image category to obtain a central point vector of the image category;

the similarity scoring module 170 is configured to use the feature extraction model obtained by the adjusting module 150 to obtain the feature vector corresponding to each image in the image database, to calculate its similarity with each center point vector, and to classify images whose similarity exceeds a preset threshold into the image category corresponding to that center point vector.

In summary, the present application provides a method and an apparatus for classifying a large number of images based on a small number of image samples. A small number of images with obvious features are selected and labeled, and the Inception-v3 pre-trained model is modified; a classification model is then trained on the small sample set. After the classification model is obtained, it is converted into a feature extraction model, which is used to obtain the feature vectors of the labeled images and to calculate the center point vector of each image category. Finally, the feature extraction model obtains the feature vectors of the images in the database, the similarity between each of these and each category's center point vector is calculated, and an image is retained when the similarity exceeds a threshold. The method and the device facilitate the identification of images of different categories and improve its efficiency, so that large numbers of images can be classified accurately and quickly, avoiding excessive time and labor costs.

The embodiments of the present invention have been described in detail, but the embodiments are merely examples, and the present invention is not limited to the embodiments described above. Any equivalent modifications and substitutions to those skilled in the art are also within the scope of the present invention. Accordingly, equivalent changes and modifications made without departing from the spirit and scope of the present invention should be covered by the present invention.
