Image direction identification method based on multi-layer feature fusion

Document No.: 749677    Publication date: 2021-04-23

Note: this technology, "Image direction identification method based on multi-layer feature fusion" (一种基于多层特征融合的图像方向识别方法), was created by 白茹意 on 2020-12-29. Abstract: The invention relates to an image direction identification method based on multi-layer feature fusion. It addresses the practical need to identify image orientation and the technical problem that existing identification methods must crop images, destroying the original image size. The technical scheme is as follows: first rotate the original image to obtain copies at different angles; then build an LBP-SPP-AlexNet model to train on and predict the images; then classify and identify them; and finally verify the results. The identification method accurately identifies the orientation of an image through multi-layer feature fusion without changing the image's original size.

1. An image direction identification method based on multilayer feature fusion is characterized in that: the method comprises the following steps:

1) rotate and get images in different directions: sequentially rotating all the images in four directions to respectively obtain images in four different directions, and expressing all the rotated images in an RGB color mode;

2) establishing an LBP-SPP-AlexNet model: establishing an LBP-SPP-AlexNet model based on the Local Binary Pattern (LBP), Spatial Pyramid Pooling (SPP) and AlexNet;

3) image training and prediction: putting the image obtained in the step 1) into the LBP-SPP-AlexNet model established in the step 2) for training and prediction;

4) image classification and recognition: classifying the images processed in step 3) into four categories: four different directions, and then automatically identifying the direction of the image;

5) verification of the identification result: verifying the prediction results by comparing experimental models using different performance evaluation indexes.

2. The image direction identification method based on the multi-layer feature fusion as claimed in claim 1, wherein: the four rotation directions of the image in step 1) are counterclockwise rotations of 0°, 90°, 180° and 270°; in step 4), the obtained images are divided into four categories: 0°, 90°, 180° and 270°.

3. The image direction identification method based on the multi-layer feature fusion as claimed in claim 1, wherein: the specific steps of establishing the LBP-SPP-AlexNet model in the step 2) comprise:

2.1) in RGB mode, dividing the color image into its three components R, G and B, calculating the non-rotation-invariant LBP features of each component, and then synthesizing them into an LBP-RGB map; the calculation uses 3 different scales ($\mathrm{LBP}_{1,8}$, $\mathrm{LBP}_{2,16}$ and $\mathrm{LBP}_{3,24}$), generating 3 LBP-RGB maps;

2.2) inputting the 3 maps of different scales obtained in step 2.1) into the LBP-SPP-AlexNet model; the model takes AlexNet as its basic network framework, its 5 convolutional layers convolve the input samples with filters, and ReLU is used as the activation function, yielding 5 groups of feature maps;

2.3) carrying out 3 Spatial Pyramid Pooling (SPP) operations with different scales on the 5 groups of feature maps obtained in the step 2.2), taking the maximum value of each block as a pooling feature, and sampling the feature maps with different sizes by a pooling layer to obtain 5 SPP features;

2.4) fusing the 5 SPP characteristics obtained in the step 2.3) by using an LBP-SPP-AlexNet model, and inputting the fused SPP characteristics into 3 full-connection layers for connecting all neurons;

2.5) after the last fully connected layer in step 2.4), adopting a softmax activation function and realizing four classifications;

2.6) establishing an LBP-SPP-AlexNet model based on LBP, SPP and AlexNet, wherein the loss function adopts a cross entropy loss function.

4. The image direction identification method based on the multi-layer feature fusion as claimed in claim 3, characterized in that: the specific steps of calculating the non-rotation-invariant LBP features of the image in step 2.1) are as follows: taking a pixel in the image as the center point and R as the radius, interpolating at the positions $(R\cos(2\pi n/P), R\sin(2\pi n/P))$, $n=0,\dots,P-1$, to obtain a set of circular sampling points as the neighborhood of the center point, where P is the number of sampling points; then comparing the value of the center pixel with the value of each neighborhood pixel: if the neighborhood value is greater than the center value, that neighborhood position is set to 1, otherwise to 0; the circular sampling points are then read clockwise and combined into a binary sequence, which is converted to decimal to give the $\mathrm{LBP}_{R,P}$ code, calculated as follows:

$$\mathrm{LBP}_{R,P}=\sum_{n=0}^{P-1} s(g_n-g_c)\,2^{n},\qquad s(x)=\begin{cases}1, & x\ge 0\\ 0, & x<0\end{cases}$$

where $g_c$ is the gray level of the current (center) pixel, $g_n$ is the gray level of its n-th neighborhood sampling point, and $s(x)$ is the sign (threshold) function.

5. The image direction identification method based on the multi-layer feature fusion as claimed in claim 3, characterized in that: the 5 convolutional layers in step 2.2) are as follows: the 1st convolutional layer consists of 96 11×11 convolution kernels with stride 4 and padding 0, followed by 3×3 max pooling; the 2nd convolutional layer consists of 256 5×5 convolution kernels with stride 1 and padding 1, followed by 3×3 max pooling; the 3rd convolutional layer consists of 384 3×3 convolution kernels with stride 1 and padding 1; the 4th convolutional layer consists of 384 3×3 convolution kernels with stride 1 and padding 1; the 5th convolutional layer consists of 256 3×3 convolution kernels with stride 1 and padding 1, followed by 3×3 max pooling;

the activation function ReLU, also called a linear rectification function, has the formula:

in the formula: x is the function input and f (x) is the function output.

6. The image direction identification method based on the multi-layer feature fusion as claimed in claim 3, characterized in that: the pooling at 3 different scales in step 2.3) specifically means 1×1 = 1 block, 2×2 = 4 blocks and 4×4 = 16 blocks, 21 blocks in total, with the maximum value of each block taken as the pooled feature.

7. The image direction identification method based on the multi-layer feature fusion as claimed in claim 3, characterized in that: the dimensions of the 3 fully connected layers in step 2.4) are 2048, 2048 and 1000, respectively.

8. The image direction identification method based on the multi-layer feature fusion as claimed in claim 3, characterized in that: the formula of the softmax activation function in the step 2.5) is as follows:

in the formula: x is the number ofiIs the output of the preceding output unit of the classifier, i represents the class index, K represents the number of classes, t (x)i) Is the ratio of the current index of the sample to the sum of all indices, expressed as the probability that the sample belongs to a class.

9. The image direction identification method based on the multi-layer feature fusion as claimed in claim 3, characterized in that: in step 2.6), cross entropy is used as the loss function, and the multi-class cross-entropy loss formula is as follows:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} y_{i,k}\,\log p_{i,k}$$

where N is the number of samples, K is the number of classes, $y_{i,k}$ indicates the label of the i-th sample (1 if the i-th sample belongs to class k, 0 otherwise), and $p_{i,k}$ is the probability that the i-th sample is predicted as the k-th class.

10. The image direction identification method based on the multi-layer feature fusion as claimed in claim 1, wherein: in step 5), the different performance evaluation indexes refer to Accuracy (AC), Sensitivity (SE), and Specificity (SP).

Technical Field

The invention belongs to the technical field of image processing and computer vision processing, and particularly relates to an image direction identification method based on multilayer feature fusion.

Background

Almost all imaging applications and picture-management systems require that an image be correctly oriented prior to processing and visualization. For example, most image recognition and scene classification applications rely heavily on the assumption that a given image is upright.

Information about the orientation of a photograph can be obtained from the camera's sensor and recorded in a data tag. However, this information is often missing on low-end digital cameras, or it may have been deleted by picture-editing software. In these cases, determining the orientation of an image requires user intervention. Humans can use their image-understanding capabilities to recognize the orientation of a photograph, but manually correcting image orientation is a tedious, time-consuming and error-prone task, particularly when a large number of pictures must be processed. For such situations it is necessary to design an automatic image orientation recognition algorithm that simulates this high-level human understanding, which is a challenging task.

In recent years, researchers have identified image orientation with computer-aided methods based on the relationship between computed visual features and human visual perception. The current state of research on image orientation is as follows:

1) Humans generally recognize orientation by understanding image content, so most studies adopt low-level features (color, texture, layout and the like) to recognize image orientation without considering high-level semantic features; the accuracy of such methods therefore depends on whether the selected low-level features can accurately express the orientation characteristics of the image.

2) Some current studies adopt deep learning methods that require all input images to be the same size, so images must be cropped before being fed into the network, destroying much of the image's information. Moreover, the size of some images is set in advance by the author, and an image's length and width are among the important factors for orientation identification, so it is desirable that the original image size not be changed during computation.

Disclosure of Invention

The invention aims to provide an image direction identification method based on multi-layer feature fusion that does not change the original size of the image, addressing the practical need for image orientation identification and the technical problem that existing identification methods crop the image and thereby destroy its original size.

In order to solve the technical problems, the invention adopts the technical scheme that:

an image direction identification method based on multi-layer feature fusion comprises the following steps:

1) rotate and get images in different directions: sequentially rotating all the images in four directions to respectively obtain images in four different directions, and expressing all the rotated images in an RGB color mode;

2) establishing an LBP-SPP-AlexNet model: establishing an LBP-SPP-AlexNet model based on the Local Binary Pattern (LBP), Spatial Pyramid Pooling (SPP) and AlexNet;

3) image training and prediction: putting the image obtained in the step 1) into the LBP-SPP-AlexNet model established in the step 2) for training and prediction;

4) image classification and recognition: classifying the images processed in step 3) into four categories: four different directions, and then automatically identifying the direction of the image;

5) verification of the identification result: verifying the prediction results by comparing experimental models using different performance evaluation indexes.

Further, the four rotation directions of the image in step 1) are counterclockwise rotations of 0°, 90°, 180° and 270°; in step 4), the obtained images are divided into four categories: 0°, 90°, 180° and 270°.

Further, the specific step of establishing the LBP-SPP-AlexNet model in the step 2) includes:

2.1) in RGB mode, dividing the color image into its three components R, G and B, calculating the non-rotation-invariant LBP features of each component, and then synthesizing them into an LBP-RGB map. The calculation uses 3 different scales ($\mathrm{LBP}_{1,8}$, $\mathrm{LBP}_{2,16}$ and $\mathrm{LBP}_{3,24}$), generating 3 LBP-RGB maps.

2.2) inputting the 3 maps of different scales obtained in step 2.1) into the LBP-SPP-AlexNet model; the model takes AlexNet as its basic network framework, its 5 convolutional layers convolve the input samples with filters, and ReLU is used as the activation function, yielding 5 groups of feature maps;

2.3) carrying out 3 Spatial Pyramid Pooling (SPP) operations with different scales on the 5 groups of feature maps obtained in the step 2.2), taking the maximum value of each block as a pooling feature, and sampling the feature maps with different sizes by a pooling layer to obtain 5 SPP features;

2.4) fusing the 5 SPP characteristics obtained in the step 2.3) by using an LBP-SPP-AlexNet model, and inputting the fused SPP characteristics into 3 full-connection layers for connecting all neurons;

2.5) after the last fully connected layer in step 2.4), adopting a softmax activation function and realizing four classifications;

2.6) establishing an LBP-SPP-AlexNet model based on LBP, SPP and AlexNet, wherein the loss function adopts a cross entropy loss function.

Further, the specific steps of calculating the non-rotation-invariant LBP features of the image in step 2.1) are as follows: taking a pixel in the image as the center point and R as the radius, interpolating at the positions $(R\cos(2\pi n/P), R\sin(2\pi n/P))$, $n=0,\dots,P-1$, to obtain a set of circular sampling points as the neighborhood of the center point, where P is the number of sampling points. Then the value of the center pixel is compared with the value of each neighborhood pixel: if the neighborhood value is greater than the center value, that neighborhood position is set to 1, otherwise to 0; the circular sampling points are then read clockwise and combined into a binary sequence, which is converted to decimal to give the $\mathrm{LBP}_{R,P}$ code, calculated as follows:

$$\mathrm{LBP}_{R,P}=\sum_{n=0}^{P-1} s(g_n-g_c)\,2^{n},\qquad s(x)=\begin{cases}1, & x\ge 0\\ 0, & x<0\end{cases}$$

where $g_c$ is the gray level of the current (center) pixel, $g_n$ is the gray level of its n-th neighborhood sampling point, and $s(x)$ is the sign (threshold) function.

Further, the 5 convolutional layers in step 2.2) are as follows: the 1st convolutional layer consists of 96 11×11 convolution kernels with stride 4 and padding 0, followed by 3×3 max pooling; the 2nd convolutional layer consists of 256 5×5 convolution kernels with stride 1 and padding 1, followed by 3×3 max pooling; the 3rd convolutional layer consists of 384 3×3 convolution kernels with stride 1 and padding 1; the 4th convolutional layer consists of 384 3×3 convolution kernels with stride 1 and padding 1; the 5th convolutional layer consists of 256 3×3 convolution kernels with stride 1 and padding 1, followed by 3×3 max pooling;

the activation function ReLU, also called a linear rectification function, has the formula:

in the formula: x is the function input and f (x) is the function output.

Further, the pooling at 3 different scales in step 2.3) specifically means 1×1 = 1 block, 2×2 = 4 blocks and 4×4 = 16 blocks, 21 blocks in total, with the maximum value of each block taken as the pooled feature.

Further, the dimensions of the 3 fully-connected layers in the step 2.4) are 2048, 2048 and 1000 respectively.

Further, the softmax activation function formula in step 2.5) is:

in the formula: x is the number ofiIs the output of the preceding output unit of the classifier, i represents the class index, K represents the number of classes, t (x)i) Is the ratio of the current index of the sample to the sum of all indices, expressed as the probability that the sample belongs to a class.

Further, in step 2.6), cross entropy is used as the loss function, and the multi-class cross-entropy loss formula is as follows:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} y_{i,k}\,\log p_{i,k}$$

where N is the number of samples, K is the number of classes, $y_{i,k}$ indicates the label of the i-th sample (1 if the i-th sample belongs to class k, 0 otherwise), and $p_{i,k}$ is the probability that the i-th sample is predicted as the k-th class.

Further, in step 5), the different performance evaluation indexes refer to Accuracy (AC), Sensitivity (SE), and Specificity (SP).

Compared with the prior art, the invention has the beneficial effects that:

1. the invention adopts a deep learning framework to realize the automatic identification of the image direction;

2. in RGB mode, the invention uses non-rotation-invariant LBP features at 3 different scales to generate 3 LBP-RGB feature maps, which better express the orientation attributes of the image;

3. the method uses a spatial pyramid pooling (SPP) layer as the pooling layer of the deep learning framework, so that inputs of different sizes yield pooled features of the same length; this keeps the scale of the image unchanged and reduces overfitting;

4. the method applies SPP to the feature maps produced by each of the 5 convolutional layers and fuses the resulting 5 SPP features, so the fused features describe both the low-level and the high-level characteristics of the image, improving classification accuracy.

To fully demonstrate the effectiveness and applicability of the method, different inputs (the original image and LBP-RGB maps at different scales) were first tested. Four input conditions were designed: the original image, single scale ($\mathrm{LBP}_{1,8}$), dual scale ($\mathrm{LBP}_{1,8}$ and $\mathrm{LBP}_{2,16}$) and triple scale ($\mathrm{LBP}_{1,8}$, $\mathrm{LBP}_{2,16}$ and $\mathrm{LBP}_{3,24}$). The experimental results are shown in Table 1: with the triple-scale LBP-RGB maps as input, the accuracy is 94.36%, the sensitivity 95.12% and the specificity 92.89%, all superior to the other three settings. This shows that LBP features reflect the rotation characteristics of an image well and express its orientation, and that selecting three different scales improves accuracy more effectively.

TABLE 1 comparison of experimental results obtained by inputting LBP-RGB maps of different scales

Network input      AC (%)    SE (%)    SP (%)
Original image     82.96     80.82     84.87
Single scale       88.72     89.57     86.35
Dual scale         91.23     92.45     90.14
Triple scale       94.36     95.12     92.89

In order to fully illustrate the influence of multi-feature fusion on the performance of the orientation recognition model, feature fusion was performed with pyramid pooling layers in different configurations: model 1 contains only SPP1; model 2 contains SPP1 and SPP2; model 3 contains SPP1, SPP2 and SPP3; model 4 contains SPP1, SPP2, SPP3 and SPP4; model 5, the model proposed by the invention, contains all 5 SPPs. The experimental results are shown in Table 2: all three evaluation indexes (AC, SE, SP) of model 5 are superior to those of the other 4 models, showing that fusing features from different layers significantly improves the accuracy of orientation identification.

TABLE 2 results of feature fusion experiments under different models

In conclusion, the image-direction recognition rate of the model provided by the invention is significantly improved.

The model provided by the invention can effectively identify the direction of the image, namely, the relation between the visual content and the direction of the image can be established under the framework of machine learning.

Drawings

FIG. 1 is a flow chart of an identification method of the present invention;

FIG. 2 is a schematic view of four rotational directions of an image according to the present invention;

FIG. 3 is a flowchart of a process for building an LBP-SPP-AlexNet model;

FIG. 4 is a schematic structural diagram of the LBP-SPP-AlexNet model;

FIG. 5 is a schematic diagram of the multi-scale LBP structure;

FIG. 6 is a schematic diagram of the SPP structure.

Detailed Description

The invention is further illustrated by the following figures and examples.

As shown in fig. 1 to 6, an image orientation recognition method based on multi-layer feature fusion includes the following steps:

1) rotating to obtain images in different directions: rotating all images counterclockwise through four directions (0°, 90°, 180° and 270°) in sequence to obtain images in four different orientations, and expressing all rotated images in the RGB color mode;
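As a minimal sketch of this rotation step, assuming Pillow is available and a flat folder of JPEG images (the paths and dataset layout are hypothetical, not fixed by the text):

```python
# Minimal sketch of step 1): generate four rotated, labeled copies of each image.
# Assumes Pillow; the source/destination paths and *.jpg layout are hypothetical.
from pathlib import Path

from PIL import Image

ANGLES = [0, 90, 180, 270]  # counterclockwise rotation angles, also the class labels

def make_rotated_dataset(src_dir: str, dst_dir: str) -> None:
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path).convert("RGB")  # express the image in RGB color mode
        for angle in ANGLES:
            out = Path(dst_dir) / str(angle)  # one sub-folder per direction class
            out.mkdir(parents=True, exist_ok=True)
            # PIL rotates counterclockwise; expand=True keeps the full rotated image
            img.rotate(angle, expand=True).save(out / path.name)
```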

2) establishing an LBP-SPP-AlexNet model: establishing an LBP-SPP-AlexNet model based on the Local Binary Pattern (LBP), the Spatial Pyramid Pooling (SPP) layer and AlexNet;

the specific steps for establishing the LBP-SPP-AlexNet model comprise:

2.1) In RGB mode, calculate the 3 LBP-RGB maps of the image: divide the color image into its three components R, G and B, calculate the non-rotation-invariant LBP features of each component, and synthesize them into an LBP-RGB map. The calculation uses 3 different scales ($\mathrm{LBP}_{1,8}$, $\mathrm{LBP}_{2,16}$ and $\mathrm{LBP}_{3,24}$), generating 3 LBP-RGB maps.
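A sketch of this computation, under the assumption that scikit-image's local_binary_pattern with method="default" (its basic, non-rotation-invariant variant) matches the LBP described here; the function name and array layout are otherwise illustrative:

```python
# Sketch of step 2.1): non-rotation-invariant LBP per RGB channel at 3 scales,
# re-stacked into 3 LBP-RGB maps. Assumes numpy and scikit-image.
import numpy as np
from skimage.feature import local_binary_pattern

SCALES = [(1, 8), (2, 16), (3, 24)]  # (R, P): LBP_{1,8}, LBP_{2,16}, LBP_{3,24}

def lbp_rgb_maps(rgb):
    """rgb: H x W x 3 array -> list of 3 H x W x 3 LBP-RGB maps."""
    maps = []
    for r, p in SCALES:
        channels = [
            # method="default" is the basic, non-rotation-invariant LBP
            local_binary_pattern(rgb[:, :, c], P=p, R=r, method="default")
            for c in range(3)  # the R, G and B components
        ]
        maps.append(np.stack(channels, axis=-1))
    return maps
```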

2.2) Input the 3 maps of different scales obtained in step 2.1) into the LBP-SPP-AlexNet model. The model takes AlexNet as its basic network framework: 5 convolutional layers convolve the input samples with filters, ReLU is used as the activation function, and 5 groups of feature maps are obtained. The 5 convolutional layers are as follows: the 1st convolutional layer consists of 96 11×11 convolution kernels with stride 4 and padding 0, followed by 3×3 max pooling; the 2nd convolutional layer consists of 256 5×5 convolution kernels with stride 1 and padding 1, followed by 3×3 max pooling; the 3rd convolutional layer consists of 384 3×3 convolution kernels with stride 1 and padding 1; the 4th convolutional layer consists of 384 3×3 convolution kernels with stride 1 and padding 1; the 5th convolutional layer consists of 256 3×3 convolution kernels with stride 1 and padding 1, followed by 3×3 max pooling.
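A PyTorch sketch of these five layers with the kernel counts, strides and paddings stated above; the max-pooling stride of 2 and the 3-channel input are assumptions not fixed by the text:

```python
# Sketch of the five convolutional layers of step 2.2), per the hyperparameters
# above. The pooling stride of 2 and the 3-channel input are assumptions.
import torch.nn as nn

conv_layers = nn.ModuleList([
    nn.Sequential(  # conv1: 96 kernels 11x11, stride 4, padding 0, + 3x3 max pool
        nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2)),
    nn.Sequential(  # conv2: 256 kernels 5x5, stride 1, padding 1, + 3x3 max pool
        nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2)),
    nn.Sequential(  # conv3: 384 kernels 3x3, stride 1, padding 1
        nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1), nn.ReLU()),
    nn.Sequential(  # conv4: 384 kernels 3x3, stride 1, padding 1
        nn.Conv2d(384, 384, kernel_size=3, stride=1, padding=1), nn.ReLU()),
    nn.Sequential(  # conv5: 256 kernels 3x3, stride 1, padding 1, + 3x3 max pool
        nn.Conv2d(384, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2)),
])
# Running the input through the list while keeping each stage's output yields
# the 5 groups of feature maps that the SPP layer of step 2.3) consumes.
```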

the activation function ReLU, also called a linear rectification function, has the formula:

in the formula: x is the function input and f (x) is the function output.

2.3) Perform Spatial Pyramid Pooling (SPP) at 3 different scales (1×1 = 1 block, 2×2 = 4 blocks, 4×4 = 16 blocks; 21 blocks in total) on the feature maps obtained in step 2.2), taking the maximum value of each block as the pooled feature. The 5 groups of feature maps pass through the SPP layer to yield 5 SPP features.
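A sketch of the three-scale SPP using adaptive max pooling, which takes the maximum over each of the 1×1, 2×2 and 4×4 blocks and returns a fixed-length vector regardless of the input feature map's size:

```python
# Sketch of step 2.3): SPP over 1x1, 2x2 and 4x4 grids (1 + 4 + 16 = 21 blocks),
# taking the maximum of each block. Assumes PyTorch.
import torch
import torch.nn.functional as F

def spp(feat, levels=(1, 2, 4)):
    """feat: N x C x H x W feature map -> N x (21 * C) fixed-length vector."""
    pooled = [
        F.adaptive_max_pool2d(feat, output_size=(n, n)).flatten(start_dim=1)
        for n in levels  # an n x n grid gives n*n blocks, one max per block
    ]
    return torch.cat(pooled, dim=1)  # same length for any input H x W
```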

2.4) combining the 5 SPP features obtained in step 2.3) and fusing the combined SPP features into a pooled feature.

2.5) After the fused pooled feature of step 2.4), connect 3 fully connected layers, whose dimensions are 2048, 2048 and 1000 respectively, to connect all neurons;

2.6) after the last full connection layer in step 2.5), implementing four classifications using a softmax activation function; the softmax activation function formula is as follows:

in the formula: x is the number ofiIs the output of the pre-stage output unit of the classifier. i denotes a category index, and K denotes the number of categories. t (x)i) Is the ratio of the current index of the sample to the sum of all indices, expressed as the probability that the sample belongs to a class.

2.7) establishing the LBP-SPP-AlexNet model based on LBP, SPP and AlexNet, taking cross-entropy as the loss function;

the non-rotation invariant LBP is characterized in that a certain pixel point in an image is used as a central point, the radius is R, interpolation is carried out according to a (Rcos (2 pi/P), Rsin (2 pi/P)) method, an obtained circular sampling point set is used as a field point of the central point, and P is the number of sampling points. Then comparing the value of the central pixel point with the value of the neighborhood pixel point, if the value of the neighborhood pixel point is larger than the central pixel point, setting the position of the field to be 1, otherwise setting the position to be 0, then reading the circular sampling point clockwise, finally combining the circular sampling point into a binary number sequence, converting the sequence into a decimal system, namely the LBPR,PCode, calculated as follows:

$$\mathrm{LBP}_{R,P}=\sum_{n=0}^{P-1} s(g_n-g_c)\,2^{n},\qquad s(x)=\begin{cases}1, & x\ge 0\\ 0, & x<0\end{cases}$$

where $g_c$ is the gray level of the current (center) pixel, $g_n$ is the gray level of its n-th neighborhood sampling point, and $s(x)$ is the sign (threshold) function.
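A from-scratch numpy transcription of the $\mathrm{LBP}_{R,P}$ code for a single pixel, with bilinear interpolation at the circular sampling positions; variable names are illustrative, and the pixel is assumed to lie at least R+1 pixels from the image border:

```python
# Sketch of the LBP_{R,P} code for one pixel: P points on a circle of radius R,
# bilinear interpolation, threshold against the center, bits -> decimal code.
# Assumes numpy; gray is a 2-D array and (row, col) is >= R+1 from the border.
import numpy as np

def lbp_code(gray, row, col, R, P):
    g_c = gray[row, col]  # gray level of the current (center) pixel
    code = 0
    for n in range(P):
        # n-th sampling position (R cos(2*pi*n/P), R sin(2*pi*n/P)) about the center
        y = row + R * np.sin(2 * np.pi * n / P)
        x = col + R * np.cos(2 * np.pi * n / P)
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        dy, dx = y - y0, x - x0
        g_n = ((1 - dy) * (1 - dx) * gray[y0, x0] + (1 - dy) * dx * gray[y0, x0 + 1]
               + dy * (1 - dx) * gray[y0 + 1, x0] + dy * dx * gray[y0 + 1, x0 + 1])
        code += int(g_n >= g_c) << n  # s(g_n - g_c) * 2^n
    return code
```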

Cross entropy is adopted as the loss function; the multi-class cross-entropy loss formula is as follows:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} y_{i,k}\,\log p_{i,k}$$

where N is the number of samples, K is the number of label categories, $y_{i,k}$ indicates the label of the i-th sample (1 if the i-th sample belongs to class k, 0 otherwise), and $p_{i,k}$ is the probability that the i-th sample is predicted as the k-th class;
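A direct numpy transcription of this loss (in practice a framework call such as PyTorch's nn.CrossEntropyLoss, which folds the softmax and the logarithm into one numerically stable step, would be used):

```python
# Sketch of the multi-class cross-entropy above: y_onehot is N x K one-hot
# labels, probs is N x K predicted probabilities. Assumes numpy.
import numpy as np

def cross_entropy(y_onehot, probs, eps=1e-12):
    # eps guards against log(0); the mean implements the 1/N factor
    return float(-np.mean(np.sum(y_onehot * np.log(probs + eps), axis=1)))
```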

3) image training and prediction: putting the image obtained in the step 1) into the LBP-SPP-AlexNet model established in the step 2) for training and prediction;
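A minimal training-loop sketch under stated assumptions: the model returns logits (the softmax of step 2.6) is applied at prediction time), batch size 1 is used because SPP lets inputs keep their native, varying sizes, and the dataset object and hyperparameters are hypothetical:

```python
# Minimal sketch of step 3). Batch size 1 because SPP admits variable input
# sizes; model is assumed to return logits, and train_set, epochs and lr are
# hypothetical, not values fixed by the text. Assumes PyTorch.
import torch
from torch.utils.data import DataLoader

def train(model, train_set, epochs=10, lr=1e-4):
    loader = DataLoader(train_set, batch_size=1, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # the cross-entropy loss of step 2.7)
    for _ in range(epochs):
        for maps, label in loader:  # maps: LBP-RGB input; label: 0..3 direction class
            opt.zero_grad()
            loss = loss_fn(model(maps), label)
            loss.backward()
            opt.step()
```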

4) image classification and recognition: classifying the images processed in step 3) into four different directions (0°, 90°, 180° and 270°), i.e., four classes, and then automatically identifying the direction of the image;

5) verification of the identification result: verifying the prediction results by comparing experimental models using three different performance evaluation indexes, namely Accuracy (AC), Sensitivity (SE) and Specificity (SP).
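Accuracy, sensitivity and specificity are binary-classification metrics, so for the four-way task a one-vs-rest, macro-averaged computation is assumed in the sketch below:

```python
# Sketch of step 5): accuracy (AC), sensitivity (SE) and specificity (SP).
# SE/SP are computed one-vs-rest per class and macro-averaged -- an assumption
# for the 4-class setting. y_true/y_pred are numpy arrays of class indices.
import numpy as np

def evaluate(y_true, y_pred, n_classes=4):
    ac = float(np.mean(y_true == y_pred))
    se, sp = [], []
    for k in range(n_classes):
        tp = np.sum((y_true == k) & (y_pred == k))
        fn = np.sum((y_true == k) & (y_pred != k))
        tn = np.sum((y_true != k) & (y_pred != k))
        fp = np.sum((y_true != k) & (y_pred == k))
        se.append(tp / (tp + fn))  # sensitivity = true-positive rate
        sp.append(tn / (tn + fp))  # specificity = true-negative rate
    return ac, float(np.mean(se)), float(np.mean(sp))
```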
