Double-view fundus image fusion method based on deep learning

Document No.: 519485    Publication date: 2021-06-01

Note: This technique, "Double-view fundus image fusion method based on deep learning," was designed and created by Jiang Lulu, Hou Junlin, Shao Jinjie, and Feng Rui on 2021-02-24. Main content: The invention provides a double-view fundus image fusion method based on deep learning, comprising the following steps: step S1, preprocessing two images to be detected to obtain two preprocessed images; step S2, building a convolutional neural network model and training it to obtain a trained convolutional neural network model, called M-net; step S3, dividing M-net into two parts, called M-net Part I and M-net Part II; step S4, feeding the two preprocessed images into M-net Part I for feature extraction to obtain two image feature maps; step S5, splicing the two image feature maps to obtain a spliced image; and step S6, feeding the spliced image into M-net Part II for feature fusion.

1. A double-view fundus image fusion method based on deep learning is characterized by comprising the following steps:

step S1, preprocessing the two images to be detected to obtain two preprocessed images;

step S2, building a convolutional neural network model, and training the convolutional neural network model to obtain a trained convolutional neural network model which is called M-net;

step S3, dividing the M-net into two parts, namely M-net Part I and M-net Part II;

step S4, respectively putting the two preprocessed images into the M-net Part I for feature extraction to obtain two image feature maps;

step S5, splicing the two image feature maps to obtain a spliced image;

and step S6, putting the spliced image into the M-net Part II for feature fusion.

2. The dual-field fundus image fusion method based on deep learning according to claim 1, characterized in that:

in step S1, the two images to be detected are dual-view fundus images, that is, photographs of an eyeball to be detected at two viewing angles.

3. The dual-field fundus image fusion method based on deep learning according to claim 1, characterized in that:

in step S1, the preprocessing includes horizontal flipping, brightness, contrast, and saturation adjustment, and size normalization.

4. The dual-field fundus image fusion method based on deep learning according to claim 1, characterized in that:

wherein, step S2 includes the following substeps:

step S2-1, constructing the convolutional neural network model, wherein model parameters contained in the convolutional neural network model are randomly set;

s2-2, after each training image in the training set is preprocessed in the S1, the training images are sequentially input into the convolutional neural network model and are iterated for one time;

step S2-3, after iteration, calculating a loss error, and then reversely propagating the loss error, thereby updating model parameters;

and S2-4, repeating the step S2-2 to the step S2-3 until a training completion condition is reached, and obtaining the M-net.

5. The dual-field fundus image fusion method based on deep learning according to claim 1, characterized in that:

wherein, step S6 includes the following substeps:

s6-1, putting the spliced image into the M-net Part II for feature fusion;

and step S6-2, appending a fully connected layer after the M-net Part II and mapping to a probability vector, wherein the probability vector represents the probability that the image belongs to each severity grade of diabetic retinopathy, and the grade with the highest probability is selected as the diabetic retinopathy severity grade of the pair of double-view images.

6. The dual-field fundus image fusion method based on deep learning according to claim 1, characterized in that:

in step S2, the model structure of the convolutional neural network model is one of VGG, ResNet, and Inception-ResNet.

Technical Field

The invention relates to the technical field of computer vision, in particular to a double-view fundus image fusion method based on deep learning.

Background

Medical imaging technology has developed rapidly and become an indispensable technology in medical diagnosis. Since the advent of the digital imaging age, the generation of massive data has provided more possibilities for the future development of medical imaging. Therefore, how to further analyze and mine medical image big data, how to extract valuable information from high-dimensional medical image data, and how to closely combine the development of modern medical imaging with precision medicine have become important topics for the future development of the field.

In recent years, with the enhancement of computing power and the explosive increase of data, artificial intelligence techniques represented by deep learning have advanced sufficiently, and have begun to be applied to various fields in production and life. The deep learning algorithm can automatically extract features, and complex processing of high-dimensional medical image data is avoided. Under the common promotion of more and more public medical image data resources, open-source artificial intelligence algorithm resources and open high-performance computing resources, the deep learning algorithm is further developed rapidly in the field of medical images.

Diabetic retinopathy (DR) is a common blinding eye disease. China has the largest number of type 2 diabetes patients worldwide, and as the number of diabetes patients grows, the prevalence and blindness rates of diabetic retinopathy are also rising year by year; it is currently the leading cause of blindness among people of working age.

By 2015, about 110 million people in China had diabetes, of whom an estimated 27 million had diabetic retinopathy. At present, 87% of diabetic patients are treated at county-level or lower medical institutions, and more than 50% of diabetic patients are not informed of the need for regular fundus examination.

Fundus color photography can serve as a rapid screening tool for diabetic retinopathy, providing a simple and convenient means of observation and detection for its prevention and treatment at the primary-care level. Fundus photography examines morphological changes of the entire retina; the principle is to record the scene seen under the ophthalmoscope with a specially made camera. Fundus photography can observe the forms of the retina, optic disc, macular area, and retinal blood vessels, as well as changes such as retinal hemorrhage, exudation, hemangioma, retinal degeneration areas, retinal holes, neovascularization, atrophic spots, and pigment disorders.

Fundus photography has two shooting protocols: a single-view method and a double-view method. The single-view method takes the midpoint of the line connecting the macula lutea and the optic disc as the center of the field of view, with imaging covering at least 60 percent of the retinal area. In the double-view method, view 1 takes the fovea of the macula as the center of the field, with imaging covering at least a 45-degree retinal area, and view 2 takes the optic disc as the center of the field, also covering at least a 45-degree retinal area.

The international clinical classification standard for diabetic retinopathy divides the disease into 5 severity grades: no apparent diabetic retinopathy, mild NPDR, moderate NPDR, severe NPDR, and proliferative diabetic retinopathy (PDR). Currently published datasets, such as IDRiD, Messidor, and the Kaggle dataset, all use single-view pictures, whereas double-view photography is commonly used in clinical and primary-care settings. Most existing diabetic retinopathy grading algorithms also use a single image. When a convolutional neural network is used for classification, the images of the two views need to be fused.

Disclosure of Invention

The present invention has been made to solve the above problems, and an object of the present invention is to provide a method for fusing dual-field fundus images based on deep learning.

The invention provides a double-view fundus image fusion method based on deep learning, which is characterized by comprising the following steps of: step S1, preprocessing the two images to be detected to obtain two preprocessed images; step S2, building a convolutional neural network model, and training the convolutional neural network model to obtain a trained convolutional neural network model which is called M-net; step S3, dividing M-net into two parts, namely M-net Part I and M-net Part II; step S4, respectively putting the two preprocessed images into an M-net Part I for feature extraction to obtain two image feature maps; step S5, splicing the two image feature maps to obtain a spliced image; and step S6, putting the spliced image into an M-net Part II for feature fusion.

The method for fusing the dual-view fundus images based on deep learning provided by the invention can also have the following characteristics: in step S1, the two images to be detected are dual-view fundus images, that is, photographs of an eyeball to be detected at two viewing angles.

The method for fusing the dual-view fundus images based on deep learning provided by the invention can also have the following characteristics: in step S1, the preprocessing includes horizontal flipping, brightness, contrast, and saturation adjustment, and size normalization.

The method for fusing the dual-view fundus images based on deep learning provided by the invention can also have the following characteristics: the step S2 comprises the following substeps of constructing a convolutional neural network model in step S2-1, wherein model parameters contained in the convolutional neural network model are randomly set; step S2-2, each training image in the training set is sequentially input into the convolutional neural network model and is iterated once; step S2-3, after iteration, calculating loss errors, and then reversely propagating the loss errors, thereby updating model parameters; and S2-4, repeating the step S2-2 to the step S2-3 until the training completion condition is reached, and obtaining the M-net.

The method for fusing the dual-view fundus images based on deep learning provided by the invention can also have the following characteristics: the step S6 comprises the following substeps: step S6-1, putting the spliced image into M-net Part II for feature fusion; and step S6-2, appending a fully connected layer after M-net Part II and mapping to a probability vector, wherein the probability vector represents the probability that the image belongs to each diabetic retinopathy severity grade, and the grade with the highest probability is selected as the diabetic retinopathy severity grade of the pair of double-view images.

The method for fusing the dual-view fundus images based on deep learning provided by the invention can also have the following characteristics: in step S2, the model structure of the convolutional neural network model is one of VGG, ResNet, and Inception-ResNet.

Action and Effect of the invention

According to the double-view fundus image fusion method based on deep learning, because feature fusion is performed on fundus images from two views, the method can obtain more features and produce more accurate eye-disease grading results. The method is also simple to implement, can be applied to various conventional convolutional neural network models, and is convenient and fast.

Drawings

FIG. 1 is a flowchart of the deep-learning-based dual-view fundus image fusion method according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of the deep-learning-based dual-view fundus image fusion method according to an embodiment of the present invention; and

FIG. 3 shows the model structure of the convolutional neural network employed in an embodiment of the present invention.

Detailed Description

In order to make the technical means, creation features, achievement purposes and effects of the present invention easy to understand, the following embodiments specifically describe the dual-view fundus image fusion method based on deep learning according to the present invention with reference to the accompanying drawings.

<Example>

The dataset of this example is an unpublished community-based dataset collected by the eye disease prevention and control center in Shanghai, which includes double-view images of 2000 patients with 5-level labels for diabetic retinopathy severity according to the international clinical classification standard for diabetic retinopathy. This embodiment divides the dataset into a training set and a test set.

In addition, the hardware platform of this embodiment requires an NVIDIA TITAN X graphics card for GPU acceleration.

Fig. 1 is a flowchart of the dual-view fundus image fusion method based on deep learning according to the present embodiment, and fig. 2 is a schematic diagram of the method. As shown in figs. 1 and 2, the deep-learning-based dual-view fundus image fusion method of the present embodiment includes the following steps:

and step S1, preprocessing the two images to be detected to obtain two preprocessed images.

The two images to be detected are double-view fundus images, i.e., two photographs obtained by applying the double-view shooting method to one eye of a patient.

The preprocessing comprises horizontal flipping, which increases the number of images and realizes data augmentation; brightness, contrast, and saturation adjustment, which realizes image enhancement; and size normalization to 224x224 pixels.

And step S2, building a convolutional neural network model, and training the convolutional neural network model to obtain a trained convolutional neural network model, which is called M-net.

The convolutional neural network model adopts the Inception-ResNet-v1 structure and is built with the deep learning framework PyTorch.

Fig. 3 is a schematic structural diagram of the convolutional neural network model of the present embodiment. As shown in fig. 3, Inception-ResNet-v1 includes three types of Inception-ResNet blocks, namely Inception-ResNet-A, Inception-ResNet-B, and Inception-ResNet-C. Each of the three block types finally uses a 1x1 convolution to raise the dimension, and ResNet shortcut (residual) connections are added so that the network can be deeper and converge faster.

The Inception-ResNet-A module uses 32-channel 3x3 convolution kernels, the Inception-ResNet-B module uses 128-channel 1x7 and 7x1 convolution kernels, and the Inception-ResNet-C module uses 192-channel 1x3 and 3x1 convolution kernels.
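The block pattern described above (parallel branches, a 1x1 convolution to restore the channel dimension, and a ResNet shortcut) can be sketched as follows; branch widths here are illustrative, not the embodiment's exact values.

```python
import torch
import torch.nn as nn

# A hedged sketch of the Inception-ResNet block pattern: parallel branches,
# a 1x1 convolution raising the concatenated output back to the input width,
# and a residual shortcut. Channel counts are assumptions for illustration.
class InceptionResNetBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, 32, 1)          # 1x1 branch
        self.branch2 = nn.Sequential(                       # 1x1 -> 3x3 branch
            nn.Conv2d(channels, 32, 1),
            nn.Conv2d(32, 32, 3, padding=1),
        )
        # final 1x1 convolution raises the concatenated branches to `channels`
        self.expand = nn.Conv2d(64, channels, 1)

    def forward(self, x):
        branches = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        return torch.relu(x + self.expand(branches))        # ResNet shortcut
```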

Step S2 includes the following substeps:

Step S2-1, constructing the convolutional neural network model, with the model parameters randomly initialized.

Step S2-2, each training image in the training set is preprocessed as in step S1, then sequentially input into the convolutional neural network model for one iteration.

Step S2-3, after the iteration, the loss error is computed at the last layer of the convolutional neural network model and then back-propagated to update the model parameters.

Step S2-4, steps S2-2 to S2-3 are repeated until the training completion condition is reached, yielding the trained convolutional neural network model, i.e., M-net.
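Substeps S2-1 to S2-4 can be sketched as a standard PyTorch training loop; the network here is a stand-in placeholder rather than Inception-ResNet-v1, and the optimizer and learning rate are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of substeps S2-1 to S2-4. A trivial stand-in network is used in
# place of Inception-ResNet-v1; optimizer and learning rate are assumed.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 5))  # S2-1: random init
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_one_epoch(loader):
    for images, labels in loader:          # S2-2: feed each preprocessed image
        optimizer.zero_grad()
        outputs = model(images)            # forward pass (one iteration)
        loss = criterion(outputs, labels)  # S2-3: compute the loss error
        loss.backward()                    # back-propagate the loss error
        optimizer.step()                   # update model parameters
    return loss.item()

# S2-4: repeat train_one_epoch until the training completion condition is met.
```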

In step S3, M-net is divided into two parts, called M-net Part I and M-net Part II. M-net Part I consists of the modules from Input to Reduction-B in fig. 3, and M-net Part II consists of the modules from Inception-ResNet-C to Softmax in fig. 3.
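Splitting a trained backbone in two can be sketched as slicing a sequential model; the layers and split index below are stand-ins for the real Inception-ResNet-v1 and its Reduction-B boundary.

```python
import torch
import torch.nn as nn

# Sketch of step S3: splitting a trained model into Part I and Part II.
# The stand-in layers and the split index are illustrative assumptions.
m_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # stem ... Reduction-B
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # Inception-ResNet-C ... head
)
split_at = 2
m_net_part1 = m_net[:split_at]   # Input -> Reduction-B
m_net_part2 = m_net[split_at:]   # Inception-ResNet-C -> Softmax
```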

Step S4, the two preprocessed images are put into M-net Part I separately for feature extraction, obtaining two image feature maps, namely feature map 1 and feature map 2.

Step S5, splicing the two image feature maps to obtain a spliced image;

Step S6, the spliced image is put into M-net Part II for feature fusion, after which the severity of diabetic retinopathy is graded.

Wherein, step S6 includes the following substeps:

and step S6-1, putting the spliced image into an M-net Part II for feature fusion.

Step S6-2, a fully connected layer follows M-net Part II and maps to a probability vector, wherein the probability vector represents the probability that the image belongs to each diabetic retinopathy severity grade, and the grade with the highest probability is selected as the diabetic retinopathy severity grade of the pair of double-view images.
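Steps S4 to S6 can be sketched end to end as below, under the assumption (common for this kind of fusion) that the two feature maps are spliced along the channel dimension; the stand-in modules and their dimensions are illustrative.

```python
import torch
import torch.nn as nn

# Sketch of steps S4-S6, assuming channel-wise concatenation for "splicing".
# part1/part2 are stand-ins for M-net Part I / Part II; sizes are assumed.
part1 = nn.Conv2d(3, 16, 3, padding=1)   # stand-in for M-net Part I
part2 = nn.Sequential(                    # stand-in for M-net Part II
    nn.Conv2d(32, 64, 3, padding=1),      # accepts the two spliced feature maps
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 5),                     # fully connected layer -> 5 grades
)

def fuse(img1, img2):
    f1 = part1(img1)                          # S4: feature map 1
    f2 = part1(img2)                          # S4: feature map 2
    spliced = torch.cat([f1, f2], dim=1)      # S5: splice the feature maps
    logits = part2(spliced)                   # S6-1: feature fusion in Part II
    probs = torch.softmax(logits, dim=1)      # S6-2: probability vector
    return probs.argmax(dim=1)                # grade with the highest probability
```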

In this embodiment, after each test image in the test set is preprocessed as in step S1, it is input into M-net for testing; the accuracy of M-net on diabetic retinopathy grading over the test set is 80.1%.

Effects and effects of the embodiments

According to the double-view fundus image fusion method based on deep learning of this embodiment, because feature fusion is performed on fundus images from two views, the method can obtain more features, making the diabetic retinopathy grading result more accurate. The method is also simple to implement, can be applied to various conventional convolutional neural network models, and is convenient and fast.

The above embodiments are preferred examples of the present invention, and are not intended to limit the scope of the present invention.

This embodiment is applied to the task of grading and diagnosing diabetic retinopathy, but the present invention can also be applied to other eye diseases such as glaucoma.
