Deep learning SAR image oil spill region identification method based on feature fusion

Document No. 170378 · Published 2021-10-29

Reading note: This technology, "A deep learning SAR image oil spill region identification method based on feature fusion", was designed and created by 范永磊, 芮小平, 张光远 and 徐锡杰 on 2021-08-03. Its main content is as follows: The invention discloses a deep learning SAR image oil spill region identification method based on feature fusion, comprising: extracting global features with a ToZero threshold segmentation method; fusing the global features with the source data; extracting high-dimensional features from the fused data with a convolutional neural network, while recording the positions of the maxima found during pooling; restoring the small-size, high-dimensional features to the original image size by deconvolution, using the recorded maximum-position features; and obtaining the image segmentation result. The method improves the segmentation accuracy of the original model, reduces overfitting, and provides a new way to improve model recognition accuracy.

1. A deep learning SAR image oil spill region identification method based on feature fusion, characterized by comprising the following steps:

step 1, global feature extraction is carried out on the source data set by using a ToZero threshold segmentation method;

step 2, fusing the global features and the downloaded source data set along the channel dimension;

step 3, performing high-dimensional feature extraction on the fused data by using a convolutional neural network, and recording the position of the maximum value in the pooling process;

step 4, deconvolution is carried out to restore the small-size, high-dimensional features to the original image size by using the recorded maximum-position features;

and 5, obtaining the result of image segmentation.

2. The deep learning SAR image oil spill region identification method based on feature fusion according to claim 1, characterized in that: the threshold of the ToZero threshold segmentation method is 75.

3. The deep learning SAR image oil spill region identification method based on feature fusion according to claim 1, characterized in that: the fusion of the global features and the source data is a stacking (overlay) fusion along the channel dimension.

4. The deep learning SAR image oil spill region identification method based on feature fusion according to claim 1, characterized in that: the high-dimensional feature extraction process and the subsequent deconvolution process of the SAR image adopt the UNet image segmentation method.

Technical Field

The invention relates to the technical field of image segmentation, in particular to an automatic marine oil spill monitoring method based on the fusion of ToZero threshold segmentation and the UNet deep learning algorithm.

Background

The problems of the ocean have become more serious in recent years: as offshore oil extraction and marine oil transport have grown, damage to the ocean's ecological environment has worsened. Monitoring marine oil spills is an effective way to control the spread of spilled oil in time and to reduce the economic loss and environmental pollution it causes.

There are two main approaches to identifying oil spill regions: manual extraction and automatic extraction. Using deep learning models to identify marine oil spills automatically and accurately has become a research hotspot in recent years. Semantic segmentation models based on deep learning, such as UNet and SegNet, are relatively widely applied in this field, but their recognition accuracy is insufficient and the models suffer from overfitting.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a deep learning SAR image oil spill region identification method based on feature fusion.

In order to realize the above purpose, the technical scheme adopted by the invention is as follows:

a method for identifying an SAR image oil spill area based on feature fusion comprises the following steps:

step 1, extracting global features by using a ToZero threshold segmentation method;

step 2, fusing the global features and the downloaded source data set (SAR oil spill data set) along the channel dimension;

step 3, performing high-dimensional feature extraction on the fused data by using a convolutional neural network, and recording the position of the maximum value in the pooling process;

step 4, deconvolution is carried out to restore the small-size, high-dimensional features to the original image size by using the recorded maximum-position features;

and 5, obtaining the result of image segmentation.

Further, the ToZero threshold segmentation method has a threshold size of 75.

Further, the fusion of the global features and the source data features is a stacking (overlay) fusion along the channel dimension.

Further, a UNet image segmentation method is adopted in the high-dimensional feature extraction process and the subsequent deconvolution process of the SAR image.

Compared with the prior art, the invention has the advantages that:

the recognition accuracy of the UNet and SegNet models is improved and their overfitting problem is relieved: the fused models obtain more features and can make more accurate decisions. The accuracy of the FMNet model based on the fusion of UNet and ToZero improves by 0.26 percentage points, reaching 98.40%, and overfitting is relieved by 0.53 percentage points, the train-validation gap falling from 4.89% to 4.36%.

Drawings

FIG. 1 is a diagram of an FMNet model framework according to an embodiment of the present invention;

FIG. 2 is a diagram illustrating the variation of accuracy and error in training according to an embodiment of the present invention;

FIG. 3 shows the variation of accuracy and error in the experiment according to the embodiment of the present invention;

FIG. 4 is result I of an embodiment of the present invention in a test sample, in which: (a) is Dataset, (b) is Label, (c) is BaselineUNet, (d) is BinaryFMNet, (e) is TruncFMNet, (f) is ToZeroFMNet, (g) is OSTUFMNet, and (h) is TriangleFMNet.

FIG. 5 is result II of the example of the present invention in a test sample, in which: (a) is Dataset, (b) is Label, (c) is BaselineUNet, (d) is BinaryFMNet, (e) is TruncFMNet, (f) is ToZeroFMNet, (g) is OSTUFMNet, and (h) is TriangleFMNet.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings by way of examples.

As shown in fig. 1, the design idea of the model is as follows: the data set is first subjected to threshold segmentation to obtain the global features of the image, which are then fused with the source data. The UNet network model then extracts high-dimensional features, which are upsampled and gradually converted into features of the same size as the original image; a softmax algorithm makes the classification decision on the result, finally yielding the oil spill region segmentation result.

1. SAR image global feature extraction

The purpose of image thresholding is to divide the set of pixels by gray level, each resulting subset forming a region corresponding to the real scene; each region has consistent properties within it, while adjacent regions do not. Such a division is achieved by choosing one or more threshold values on the gray scale. In this process, high-frequency information is represented and low-frequency information is attenuated: the thresholded image masks detail features in a simple form and highlights global features. This embodiment compares the effects of five common threshold segmentation techniques on SAR image segmentation.

1.1 Binary threshold segmentation

The segmentation principle is as follows: a threshold T is selected; pixels with gray value greater than or equal to T are set to the maximum value 255, and pixels below T are set to 0:

dst(x, y) = 255 if src(x, y) ≥ T, otherwise 0

1.2 Truncate threshold segmentation

The segmentation principle is as follows: a threshold T is selected; pixels with gray value greater than or equal to T are set to T, and pixels below T are kept unchanged:

dst(x, y) = T if src(x, y) ≥ T, otherwise src(x, y)

1.3 ToZero threshold segmentation

The segmentation principle is as follows: a threshold T is selected; pixels with gray value greater than or equal to T keep their value, and pixels below T are set to 0:

dst(x, y) = src(x, y) if src(x, y) ≥ T, otherwise 0
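The three pixel-wise rules above (Binary, Truncate, ToZero) can be sketched in NumPy as follows. This is an illustrative sketch, not the patent's code; the function names are invented here, and the threshold 75 is the value the claims specify for the ToZero variant:

```python
import numpy as np

def binary_threshold(src, t=75, max_val=255):
    # pixels >= t become max_val, all others become 0
    return np.where(src >= t, max_val, 0).astype(src.dtype)

def truncate_threshold(src, t=75):
    # pixels >= t are clipped to t, all others are unchanged
    return np.where(src >= t, t, src).astype(src.dtype)

def tozero_threshold(src, t=75):
    # pixels >= t keep their value, all others become 0
    return np.where(src >= t, src, 0).astype(src.dtype)

tile = np.array([[10, 75], [80, 255]], dtype=np.uint8)
print(binary_threshold(tile))    # maxima map: 0 and 255 only
print(truncate_threshold(tile))  # values capped at 75
print(tozero_threshold(tile))    # values below 75 zeroed
```

These rules mirror the semantics of OpenCV's THRESH_BINARY, THRESH_TRUNC and THRESH_TOZERO flags for `cv2.threshold`, which is presumably what an implementation of the method would call in practice.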

1.4 Triangle threshold segmentation

The segmentation principle is as follows: using the histogram data, an optimal threshold is found by a purely geometric method. The premise is that the maximum peak of the histogram is assumed to lie near the brightest side; a line is then drawn from the peak to the far end of the histogram, and the gray level at the maximum perpendicular distance from this line is taken as the segmentation threshold.
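A minimal NumPy sketch of this geometric search, as one possible reading of the description (real implementations differ in details such as which end of the histogram anchors the baseline and how ties are broken):

```python
import numpy as np

def triangle_threshold(image):
    # histogram of an 8-bit grayscale image
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    peak = int(np.argmax(hist))
    nonzero = np.nonzero(hist)[0]
    # baseline runs from the peak to the farther non-empty end of the histogram
    end = nonzero[0] if peak - nonzero[0] > nonzero[-1] - peak else nonzero[-1]
    xs = np.arange(min(peak, end), max(peak, end) + 1)
    # perpendicular distance from (x, hist[x]) to the line from
    # (peak, hist[peak]) to (end, 0), up to a constant factor that
    # does not change the argmax
    d = np.abs(-hist[peak] * xs - (end - peak) * hist[xs] + end * hist[peak])
    return int(xs[np.argmax(d)])
```

On a histogram with a single bright peak followed by a convex decay, the maximum-distance point falls strictly between the peak and the tail, which is the intended segmentation threshold.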

1.5 Otsu threshold segmentation

The maximum between-class variance method was proposed by the Japanese scholar Otsu in 1979 and is an adaptive threshold determination method. The algorithm assumes that the image pixels can be divided into two parts, background and target, by a threshold. Let the proportion of target pixels be ω0 with average gray value μ0, the proportion of background pixels be ω1 with average gray value μ1, the average gray value of all pixels be μ, and the between-class variance be g. The optimal threshold T, computed by the formula below, is the one that maximizes g and thus the separation between the two classes of pixels. This global binarization algorithm is simple, fast, and unaffected by image brightness and contrast. Its drawbacks are sensitivity to image noise and that it can only segment a single target; when the size ratio of target to background differs greatly, the between-class variance function may exhibit two or more peaks, leading to poor segmentation.

g = ω0ω1(μ0 − μ1)²
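An exhaustive-search sketch of this criterion in NumPy (illustrative only, not the patent's code): for each candidate T, ω0, ω1, μ0 and μ1 are computed from the normalized histogram, and the T maximizing g = ω0ω1(μ0 − μ1)² is kept:

```python
import numpy as np

def otsu_threshold(image):
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()              # gray-level probabilities
    levels = np.arange(256)
    best_t, best_g = 0, -1.0
    for t in range(1, 256):            # pixels < t: background, >= t: target
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        g = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```

On a clearly bimodal image, any threshold separating the two modes maximizes g, so the returned value falls between the modes.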

2. U-Net local feature extraction

U-Net is one of the earlier algorithms to perform semantic segmentation with a fully convolutional network; its symmetric U-shaped structure, comprising a compression path and an expansion path, was very innovative at the time and influenced the design of many subsequent segmentation networks to some extent. The network is fully convolutional and has two parts. The left side is a compression path built from convolutions and Max Pooling, whose main purpose is to extract features. The compression path consists of 4 blocks; each block uses 3 valid (unpadded) convolutions and 1 Max Pooling, and the number of feature maps doubles after each down-sampling. The right part, called the expansion path, also consists of 4 blocks. Before each block, deconvolution doubles the size of the Feature Map and halves the number of feature maps (the last layer differs slightly); the result is then merged with the Feature Map from the left compression path. U-Net normalizes the sizes by cropping the compression-path Feature Map to the same size as the expansion-path Feature Map.

3. Feature fusion network model

According to the design idea, threshold segmentation is first performed on the source data. Threshold segmentation classifies pixels by gray value and uses a simple clustering principle to separate categories numerically; its function is to extract the global features of the image and enhance the local features. After threshold segmentation, the texture features of the source image are highlighted, the boundaries between categories become clearer, and the global features of the source data are strengthened. In addition, local features within a category are weakened: since pixel values within the same category are similar, the threshold maps similar values to the same value, reducing the influence of intra-class noise. The five threshold segmentation methods above are used to extract the global features of the image, and the model is built in combination with a deep convolutional network.
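The fusion step of claim 3 can be sketched as a channel-wise stack of the source tile and its thresholded global-feature map (NumPy sketch; the tile size and variable names are illustrative, the threshold 75 and the ToZero rule follow the description above):

```python
import numpy as np

src = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)  # one SAR tile
feat = np.where(src >= 75, src, 0).astype(np.uint8)               # ToZero global feature
fused = np.stack([src, feat], axis=-1)                            # (H, W, 2) network input
print(fused.shape)  # (256, 256, 2)
```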

The source data and the feature data are input into the Encoder Network, and high-dimensional features are extracted with convolution operations. The convolution kernels are mainly 3 × 3, and the convolution results are normalized and activated. Max pooling is then performed with 2 × 2 pooling kernels and a stride of 2, in order to enlarge the receptive field of the model. The index of the maximum's position is recorded during max pooling; these indices enable non-linear upsampling in the decoder. The Encoder network finally outputs high-dimensional Feature maps that contain both the high-dimensional features of the source data and the global features.
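Recording the argmax inside each 2 × 2 window and using it later for the decoder's non-linear upsampling can be illustrated in NumPy (a sketch of the idea only; the actual model does this per channel inside the network, as in SegNet-style max-unpooling):

```python
import numpy as np

def max_pool_with_indices(x):
    # 2 x 2 max pooling with stride 2 on a (H, W) map; H and W must be even
    h, w = x.shape[0] // 2, x.shape[1] // 2
    windows = x.reshape(h, 2, w, 2).transpose(0, 2, 1, 3).reshape(h, w, 4)
    idx = windows.argmax(axis=2)      # position of the max inside each window
    return windows.max(axis=2), idx

def max_unpool(pooled, idx):
    # place each max back at its recorded position; all other entries stay 0
    h, w = pooled.shape
    out = np.zeros((h, w, 4), dtype=pooled.dtype)
    np.put_along_axis(out, idx[..., None], pooled[..., None], axis=2)
    return out.reshape(h, w, 2, 2).transpose(0, 2, 1, 3).reshape(h * 2, w * 2)

x = np.arange(16, dtype=float).reshape(4, 4)
pooled, idx = max_pool_with_indices(x)
print(pooled)                  # the four window maxima: 5, 7, 13, 15
print(max_unpool(pooled, idx)) # maxima restored at their original positions
```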

Then, the Feature maps are upsampled, i.e. the decoder operation. The expansion path consists of several blocks; different decoder networks have different numbers of blocks. Within each block, the size of the input feature maps is doubled while their number is halved; the deconvolution kernel used for upsampling is 2 × 2. The feature maps of the symmetric compression path on the left are cropped to the same size as those of the expansion path and normalized. Finally, k prediction maps (k being the number of classes, k = 5 in this embodiment) with the same size as the original image are input to the softmax layer, which makes the final class decision.
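The final decision can be sketched as a per-pixel softmax over the k = 5 prediction maps followed by an argmax (NumPy sketch with random logits standing in for the network output; illustrative only):

```python
import numpy as np

k = 5                                    # number of classes in this embodiment
logits = np.random.randn(256, 256, k)    # k prediction maps at original image size
e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # numerically stable softmax
probs = e / e.sum(axis=-1, keepdims=True)
pred = probs.argmax(axis=-1)             # per-pixel class label in 0 .. k-1
print(pred.shape)  # (256, 256)
```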

4. Results and conclusions

4.1 training results

Fig. 2 and Table 1 show how the accuracy and error of the models change during training. As Fig. 2 shows, late in training the original model reaches an accuracy of 98.16% on the training set with an error of 0.051, and 93.2% on the validation set with an error of 0.235; these figures serve as the baseline for demonstrating the effect of the feature fusion models. The curves show that the model fusing the ToZero threshold segmentation method with the U-Net network performs best: its training accuracy reaches 98.4% with an error of 0.047, and its validation accuracy 94.04% with an error of 0.230.

TABLE 1

Figs. 4 and 5 show the recognition results of the models in practical application; the feature fusion network model proposed by the invention achieves better recognition accuracy. The FMNet model proposed by the invention performs better in detail, and its output contours are closest to the data labels.

Table 2 compares the models statistically; comparing two metrics, Intersection over Union (IoU) and Mean IoU (MIoU), shows that FMNet performs best.

TABLE 2

As shown in fig. 3, the accuracy and error of the five feature fusion models and the original model vary on the validation set.

It will be appreciated by those of ordinary skill in the art that the examples described herein are intended to assist the reader in understanding the manner in which the invention is practiced, and it is to be understood that the scope of the invention is not limited to such specifically recited statements and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.
