Automatic segmentation method for diabetic retinopathy regions based on a cyclic adaptive multi-target weighting network

Document No.: 519483    Published: 2021-06-01

Note: This technique, "Automatic segmentation method for diabetic retinopathy regions based on a cyclic adaptive multi-target weighting network", was created by 陈新建, 汪恋雨, 朱伟芳 and 陈中悦 on 2021-02-02. Abstract: The application discloses an automatic segmentation method for diabetic retinopathy regions based on a cyclic adaptive multi-target weighting network, relating to the technical field of medical image segmentation. The method comprises: acquiring a sample fundus color photograph image; training a diabetic retinopathy region segmentation model from the sample fundus color photograph image, wherein the model comprises a cyclic adaptive multi-target weighting network that adaptively assigns weights to different targets and enhances the stability of the network, the different targets comprising at least one of the background, hemorrhage (HE), hard exudate (EX), microaneurysm (MA), optic disc (OD) and cotton wool spot (SE) in the sample fundus color photograph image; the trained retinopathy segmentation model is used for segmenting a target fundus color photograph image. By adaptively assigning weights to different targets, the problem of inter-subclass imbalance is alleviated, and the segmentation accuracy of the network is improved.

1. A method for training a diabetic retinopathy region segmentation model, characterized by comprising the following steps:

acquiring a sample fundus color photograph image;

training a diabetic retinopathy region segmentation model from the sample fundus color photograph image, wherein the diabetic retinopathy region segmentation model comprises a cyclic adaptive multi-target weighting network, the cyclic adaptive multi-target weighting network is used for adaptively assigning weights to different targets and enhancing the stability of the network, the different targets comprise at least one of the background, hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE in the sample fundus color photograph image, and the trained retinopathy segmentation model is used for segmenting a target fundus color photograph image.

2. The method according to claim 1, wherein the cyclic adaptive multi-target weighting network comprises a forward encoding and decoding module and an adaptive multi-target weighting module, wherein the input of the forward encoding and decoding module is the three-channel sample fundus color photograph image and the output of the forward encoding and decoding module is a six-channel prediction probability map, each output channel respectively corresponding to the background, hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE in the sample fundus color photograph image.

3. The method of claim 2, wherein the input of the adaptive multi-target weighting module is the high-level semantic features extracted by the encoder in the forward encoding and decoding module, and the output is the weights assigned to different targets.

4. The method of claim 3,

the output of the adaptive multi-target weighting module is:

w = g_AMW(X_H);

where X_H denotes the high-level semantic features extracted by the encoder, w ∈ R^5 denotes the weight vector, and 5 is the number of targets;

the weight is multiplied channel by channel by the output of the decoder in the forward encoding and decoding module:

F̂_c = w_c × F_c;

where F_c and w_c respectively denote the prediction probability map and the weight corresponding to the c-th channel, × denotes scalar multiplication, and F̂_c denotes the weighted prediction probability map of the c-th channel;

the final prediction result of the cyclic adaptive multi-target weighting network is:

Y(x, y) = SoftMax(F̂(x, y)), where (x, y) denotes the spatial coordinates of the feature map.

5. The method of claim 3, wherein the cyclic adaptive multi-target weighting network further comprises a reverse data recovery network, the input of which is the output of the decoder in the forward encoding and decoding module and the output of which is a recovered fundus color photograph image.

6. The method of claim 5, wherein the fundus color photograph image recovered by the reverse data recovery network is:

X_R = RRN(F);

where X_R denotes the restored image and F is the prediction probability map.

7. The method according to any one of claims 1 to 6, wherein the training of the diabetic retinopathy region segmentation model from the sample fundus color photograph image comprises:

down-sampling the sample fundus color photograph image by a bilinear interpolation method;

normalizing the down-sampled sample fundus color photograph image;

and training the diabetic retinopathy region segmentation model according to the normalized sample fundus color photograph image.

8. The method of any of claims 1 to 6, further comprising:

and optimizing the trained diabetic retinopathy region segmentation model according to binary cross entropy BCE loss, Dice loss and mean square error MSE loss.

9. A method of segmenting diabetic retinopathy regions, the method comprising:

acquiring a target fundus color photograph image;

segmenting the target fundus color photograph image according to a trained forward encoding and decoding module, the trained forward encoding and decoding module being obtained by training with the method of any one of claims 1 to 8.

10. The method of claim 9, further comprising:

down-sampling the target fundus color photograph image by bilinear interpolation;

normalizing the down-sampled target fundus color photograph image;

and segmenting the normalized target fundus color photograph image according to the trained forward encoding and decoding module.

Technical Field

The invention relates to a method for training a diabetic retinopathy region segmentation model and a method for segmenting diabetic retinopathy regions, and belongs to the technical field of medical image segmentation.

Background

Diabetic Retinopathy (DR) is one of the most common microvascular complications of diabetes, an irreversible blinding disease, and one of the four major blinding diseases. Early and accurate DR screening, especially segmentation of lesion areas such as Hemorrhages (HE), Hard Exudates (EX), Microaneurysms (MA) and cotton wool Spots (SE), is crucial for ophthalmologists to develop treatment plans. However, owing to the diverse shapes, fuzzy boundaries and ambiguous pathological features of lesion regions, the joint segmentation of multiple lesions remains highly challenging.

In recent years, with the rapid development of deep learning, many methods based on Convolutional Neural Networks (CNNs) have been applied to DR image analysis. However, most CNN-based DR segmentation methods lack sufficient accuracy, and no study has been reported on the joint segmentation of hemorrhage HE, hard exudate EX, microaneurysm MA and cotton wool spot SE in DR.

Disclosure of Invention

The invention aims to provide a method for training a diabetic retinopathy region segmentation model and a method for segmenting diabetic retinopathy regions, so as to solve the problems in the prior art.

In order to achieve the purpose, the invention provides the following technical scheme:

according to a first aspect, an embodiment of the present invention provides a method for training a diabetic retinopathy region segmentation model, where the method includes:

acquiring a sample fundus color photograph image;

training a diabetic retinopathy region segmentation model from the sample fundus color photograph image, wherein the diabetic retinopathy region segmentation model comprises a cyclic adaptive multi-target weighting network, the cyclic adaptive multi-target weighting network is used for adaptively assigning weights to different targets and enhancing the stability of the network, the different targets comprise at least one of the background, hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE in the sample fundus color photograph image, and the trained diabetic retinopathy region segmentation model is used for segmenting a target fundus color photograph image.

Optionally, the cyclic adaptive multi-target weighting network includes a forward encoding and decoding module and an adaptive multi-target weighting module, where the input of the forward encoding and decoding module is a three-channel fundus color photograph image and the output is a six-channel prediction probability map, each output channel respectively corresponding to the background, hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE in the sample fundus color photograph image.

Optionally, the input of the adaptive multi-target weighting module is the high-level semantic features extracted by the encoder in the forward encoding and decoding module, and the output is the weights assigned to different targets.

Optionally, the output of the adaptive multi-target weighting module is:

w = g_AMW(X_H);

where X_H denotes the high-level semantic features extracted by the encoder, w ∈ R^5 denotes the weight vector, and 5 is the number of targets;

the weight is multiplied channel by channel by the output of the decoder in the forward encoding and decoding module:

F̂_c = w_c × F_c;

where F_c and w_c respectively denote the prediction probability map and the weight corresponding to the c-th channel, × denotes scalar multiplication, and F̂_c denotes the weighted prediction probability map of the c-th channel;

the final prediction result of the cyclic adaptive multi-target weighting network is:

Y(x, y) = SoftMax(F̂(x, y)), where (x, y) denotes the spatial coordinates of the feature map.

optionally, the circular adaptive multi-target weighting network further includes a reverse data recovery network, where an input of the reverse data recovery network is an output of a decoder in the forward encoding and decoding module, and an output of the reverse data recovery network is a recovered fundus color photograph image.

Optionally, the fundus color photograph image recovered by the reverse data recovery network is:

X_R = RRN(F);

where X_R denotes the restored image and F is the prediction probability map.

Optionally, the training of the diabetic retinopathy region segmentation model according to the sample fundus color photograph image includes:

down-sampling the sample fundus color photograph image by a bilinear interpolation method;

normalizing the downsampled fundus color photograph image;

and training the diabetic retinopathy region segmentation model according to the normalized sample fundus color photograph image.

Optionally, the method further includes:

and optimizing the trained diabetic retinopathy region segmentation model according to binary cross entropy BCE loss, Dice loss and mean square error MSE loss.

In a second aspect, there is provided a method for segmenting a diabetic retinopathy region, the method comprising:

acquiring a target fundus color photograph image;

and segmenting the target fundus color photograph image according to the trained forward encoding and decoding module, which is obtained by training with the method of the first aspect.

Optionally, the method further includes:

down-sampling the target fundus color photograph image by bilinear interpolation;

normalizing the down-sampled target fundus color photograph image;

and segmenting the normalized target fundus color photograph image according to the trained forward encoding and decoding module.

A sample fundus color photograph image is obtained; a diabetic retinopathy region segmentation model is trained from the sample fundus color photograph image, wherein the diabetic retinopathy region segmentation model comprises a cyclic adaptive multi-target weighting network used for adaptively assigning weights to different targets and enhancing the stability of the network, the different targets comprising at least one of the background, hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE in the sample fundus color photograph image, and the trained diabetic retinopathy region segmentation model is used for segmenting a target fundus color photograph image. By adaptively assigning weights to different targets, the problem of inter-subclass imbalance is alleviated, and the segmentation accuracy of the network is improved.

The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical solutions of the present invention more clearly understood and to implement them in accordance with the contents of the description, the following detailed description is given with reference to the preferred embodiments of the present invention and the accompanying drawings.

Drawings

FIG. 1 is a flowchart of a method for training a segmentation model of diabetic retinopathy according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of several possible configurations of a sample fundus color photograph image provided in accordance with one embodiment of the present invention;

FIG. 3 is a block diagram of a diabetic retinopathy segmentation model CAMWNet according to an embodiment of the present invention;

FIG. 4 is a diagram illustrating a forward codec module and an adaptive multi-target weighting module according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of a reverse data recovery network according to an embodiment of the present invention;

FIG. 6 is a graph comparing segmentation results of the method of the present application with prior art methods, provided in accordance with an embodiment of the present invention;

fig. 7 is a flowchart of a method for segmenting diabetic retinopathy according to an embodiment of the present invention.

Detailed Description

The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.

In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.

In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.

Referring to fig. 1, a flowchart of a method for training a diabetic retinopathy region segmentation model according to an embodiment of the present application is shown, where the method includes:

step 101, obtaining a sample fundus color photograph image;

referring to fig. 2, several possible configurations of a sample fundus color photograph image are shown.

The sample fundus color photograph images used in the present application come from the Indian Diabetic Retinopathy Image Dataset (IDRiD), which contains 81 images, each with a resolution of 4288 × 2848.

The training of the model is carried out in an integrated environment based on PyTorch, using three NVIDIA Tesla K40 GPUs with 12 GB of memory each.

Step 102, training a diabetic retinopathy region segmentation model from the sample fundus color photograph image, wherein the diabetic retinopathy region segmentation model comprises a cyclic adaptive multi-target weighting network used for adaptively assigning weights to different targets and enhancing the stability of the network, the different targets comprising at least one of the background, hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE in the sample fundus color photograph image, and the trained diabetic retinopathy region segmentation model is used for segmenting a target fundus color photograph image.

Referring to fig. 3, which shows a possible block diagram of the diabetic retinopathy region segmentation model CAMWNet provided by the present application. CAMWNet is a neural network based on an encoding-decoding structure and mainly comprises three parts: a forward encoding-decoding module, an Adaptive Multi-target Weighting module (AMW) and a reverse data recovery network (RRN). The AMW module is located at the top of the encoder to obtain rich and important high-level global semantic information, while the RRN designed by the invention is located at the top of the decoder to simulate the feedback from the high-level visual layer to the low-level visual layer in the biological visual system. The specific structure of each module is described in detail below.

Forward encoding and decoding module. Referring to fig. 4, the input of the forward encoding and decoding module is a three-channel fundus color photograph image and the output is a six-channel prediction probability map, where each output channel respectively corresponds to the background, hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE in the sample fundus color photograph image. Specifically:

The encoding module is used to extract rich semantic information and global features from the input image and to down-sample the feature maps at different stages. To save computation, the invention uses five encoders in the encoder path, with depths of 32, 64, 128, 256 and 512, respectively. Each encoder consists of two 3 × 3 convolutional layers (each followed by a ReLU nonlinearity) and one max-pooling layer. The purpose of the decoding module is to up-sample feature maps that have strong semantic information but lower resolution. Corresponding to the encoding path, four decoders are used in the decoding path, with depths of 256, 128, 64 and 32, respectively. Each decoder applies a 2 × 2 deconvolution to the feature map from the upper decoder and concatenates the result with the feature map from the peer encoder, then passes it through two 3 × 3 convolutional layers (each followed by a ReLU nonlinearity), and finally transmits the convolution result to the next decoder. The stride of all 3 × 3 convolutional layers in the U-shaped structure is 1, and the stride of all pooling and deconvolution layers is 2. In addition, the invention does not use an existing pre-trained model, but instead adopts random initialization of the model.
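As a rough PyTorch sketch (not the patent's actual code; the class names, tensor sizes and single-stage pairing are illustrative), one encoder stage and one decoder stage of the forward encoding-decoding module described above might look like this:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Two 3x3 conv + ReLU layers (stride 1) followed by 2x2 max pooling (stride 2)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2, stride=2)

    def forward(self, x):
        skip = self.convs(x)          # feature map passed to the peer decoder
        return self.pool(skip), skip

class DecoderBlock(nn.Module):
    """2x2 deconvolution, concatenation with the peer encoder feature, two 3x3 convs."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.convs = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.convs(torch.cat([x, skip], dim=1))

# One stage each, with the first encoder depth (32) described above.
enc = EncoderBlock(3, 32)
dec = DecoderBlock(64, 32)
x = torch.randn(1, 3, 64, 64)        # small toy input, not a real fundus image
down, skip = enc(x)                  # down: (1, 32, 32, 32), skip: (1, 32, 64, 64)
```

Stacking five such encoder stages (32 to 512) and four decoder stages (256 down to 32) reproduces the U-shaped path described in the text.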

The input of the adaptive multi-target weighting module is the high-level semantic features extracted by the encoder in the forward encoding and decoding module, and the output is the weights assigned to different targets.

The adaptive multi-target weighting module consists of three 3 × 3 convolutional layers (each followed by a ReLU nonlinearity) and two fully connected layers. The convolutional layers all have stride 2, and the fully connected layers have output dimensions of 128 and 5, respectively. Because the top feature of the highest CNN stage contains the strongest semantic information beneficial to classification, in order to alleviate the class-imbalance problem, the high-level semantic feature X_H at the top of the encoding module is used as the input to the AMW. The high-level feature X_H is then encoded by the AMW into a weight vector w, specifically:

w = g_AMW(X_H) (1)

where g_AMW denotes the AMW mapping with its parameters, and 5 is the number of segmentation targets (hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE). Then, the weight w is multiplied channel by channel by the prediction probability map at the top of the decoder, as shown in equation (2):

F̂_c = w_c × F_c (2)

where F_c and w_c respectively denote the prediction probability map and the weight corresponding to the c-th channel, × denotes scalar multiplication, and F̂_c denotes the weighted prediction probability map of the c-th channel. Finally, pixel-level prediction is performed over the spatial positions by a SoftMax function, as shown in equation (3):

Y(x, y) = SoftMax(F̂(x, y)) (3)

where (x, y) denotes the spatial coordinates of the feature map and Y is the final prediction result, whose resolution is consistent with the original input image.
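Equations (1)–(3) reduce to simple array operations. A NumPy sketch follows (it assumes the weight vector w has already been produced by the AMW; the shapes and random values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 5, 8, 8                      # 5 weighted target channels, toy 8x8 map
F = rng.random((C, H, W))              # prediction probability map from the decoder top
w = rng.random(C)                      # weight vector w = g_AMW(X_H) from the AMW module

# Equation (2): channel-wise scalar multiplication, F̂_c = w_c * F_c
F_hat = w[:, None, None] * F

# Equation (3): pixel-level SoftMax across channels to obtain the final prediction Y
exp = np.exp(F_hat - F_hat.max(axis=0, keepdims=True))   # numerically stabilised
Y = exp / exp.sum(axis=0, keepdims=True)

label_map = Y.argmax(axis=0)           # per-pixel class decision
```

Each spatial position of Y sums to 1 across channels, so argmax over the channel axis yields the per-pixel segmentation label.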

The effect of the AMW proposed by the invention is as follows:

The AMW adopts a channel attention mechanism and learns class-related weight coefficients from the high-level semantic features; the obtained weight coefficients strengthen the feature channels of the important classes in the decoding module and suppress the channels that are unimportant for the current task, thereby alleviating the inter-class imbalance problem.

Reverse data recovery network. The input of the reverse data recovery network is the output of the decoder in the forward encoding and decoding module, and the output is the recovered fundus color photograph image.

Biology has demonstrated that the cognition of the human brain is a cyclic perceptual process that maps visual objects from a source domain to a target domain, and vice versa. This two-way learning process increases the stability of the visual system while improving the mapping capability from the source domain to the target domain, i.e. the brain's semantic comprehension capability. The forward encoding-decoding structure simulates the mapping from the source domain to the target domain in the biological visual system: it takes the original image as input and outputs a prediction probability map. The reverse encoding-decoding network simulates the mapping from the target domain to the source domain: it takes the prediction probability map as input and recovers the original image. The invention proposes a reverse data recovery network (RRN) to recover the original image, thereby improving the feature-extraction capability of the forward encoding-decoding network. As shown in fig. 5, the RRN adopts a U-shaped network comprising five encoders and four decoders whose depths are the same as those of the forward encoding-decoding network. The network takes the prediction probability map F as input and outputs the restored original image; the fundus color photograph image recovered by the reverse data recovery network is:

XR=RRN(F);

where X_R denotes the restored image and F is the prediction probability map.

When the diabetic retinopathy region segmentation model is trained, it is optimized according to BCE (Binary Cross Entropy) loss, Dice loss and MSE (Mean Squared Error) loss.
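The patent does not specify how the three losses are combined; the sketch below assumes a plain unweighted sum, with BCE and Dice applied to the segmentation output against the gold standard and MSE applied to the RRN reconstruction against the original image (the helper names and toy arrays are illustrative):

```python
import numpy as np

eps = 1e-7

def bce_loss(p, g):
    """Binary cross entropy between prediction p and gold standard g (values in [0, 1])."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(g * np.log(p) + (1 - g) * np.log(1 - p)))

def dice_loss(p, g):
    """1 - Dice similarity; penalises region-overlap errors."""
    inter = (p * g).sum()
    return float(1 - (2 * inter + eps) / (p.sum() + g.sum() + eps))

def mse_loss(x_rec, x):
    """Mean squared error between the RRN-restored image and the original input."""
    return float(np.mean((x_rec - x) ** 2))

rng = np.random.default_rng(0)
pred = rng.random((4, 4))                          # toy prediction probabilities
gold = (rng.random((4, 4)) > 0.5).astype(float)    # toy binary gold-standard mask
img, rec = rng.random((4, 4)), rng.random((4, 4))  # toy original and restored images

total = bce_loss(pred, gold) + dice_loss(pred, gold) + mse_loss(rec, img)
```

In practice each loss would typically be computed per lesion channel and the sum back-propagated through both the forward network and the RRN.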

It should be noted that, before training, the sample fundus color-photograph image may be preprocessed, and then training may be performed according to the preprocessed sample fundus color-photograph image.

The preprocessing steps include:

(1) down-sampling the sample fundus color photograph image by bilinear interpolation, for example to 512 × 512 × 3;

(2) normalizing the down-sampled fundus color photograph image.

In addition, in order to prevent over-fitting and enhance the generalization ability of the model, the data are augmented online with random horizontal and vertical flips. Because the boundaries of the lesion areas are fuzzy and the contrast is low, random noise is not used for augmentation in the invention.
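The two preprocessing steps plus the flip augmentation can be sketched in NumPy as follows. Note the assumptions: the patent only says "normalize", so zero-mean unit-variance per channel is assumed here, and a small random array stands in for a real 4288 × 2848 IDRiD image:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize an H x W x C image with bilinear interpolation (align-corners style)."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]    # 4 neighbouring pixels
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy

rng = np.random.default_rng(0)
sample = rng.random((200, 300, 3))        # small stand-in; real IDRiD images are 4288 x 2848
small = bilinear_resize(sample, 64, 64)   # step (1): bilinear down-sampling

# step (2): normalise to zero mean, unit variance per channel (assumed scheme)
norm = (small - small.mean(axis=(0, 1))) / (small.std(axis=(0, 1)) + 1e-7)

# online augmentation: random horizontal / vertical flips
if rng.random() < 0.5:
    norm = norm[:, ::-1]
if rng.random() < 0.5:
    norm = norm[::-1, :]
```

For the real pipeline the target size would be 512 × 512 × 3 as stated above; the flips leave the per-channel statistics unchanged.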

After the diabetic retinopathy region segmentation model is obtained through training, its performance can be verified. The present application adopts 7 common evaluation indices: Dice Similarity Coefficient (DSC), Accuracy (ACC), Sensitivity (SEN), Specificity (SPE), Jaccard Similarity Coefficient (JSC), Precision (PC) and Pearson's Correlation Coefficient (PCC). These indices are defined as follows:

DSC = 2TP / (2TP + FP + FN)
ACC = (TP + TN) / (TP + TN + FP + FN)
SEN = TP / (TP + FN)
SPE = TN / (TN + FP)
JSC = TP / (TP + FP + FN)
PC = TP / (TP + FP)
PCC = cov(X, Y) / (σ_X σ_Y)

where TP, FP, TN and FN denote true positives, false positives, true negatives and false negatives, respectively, and X and Y denote the prediction set and the gold-standard set, respectively.
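The indices above can be sketched directly in NumPy (the toy masks and the `metrics` helper are illustrative; PCC is computed here on the flattened binary masks):

```python
import numpy as np

def confusion_counts(pred, gold):
    """TP, FP, TN, FN for a pair of binary masks."""
    p, g = pred.astype(bool), gold.astype(bool)
    tp = np.sum(p & g); fp = np.sum(p & ~g)
    tn = np.sum(~p & ~g); fn = np.sum(~p & g)
    return tp, fp, tn, fn

def metrics(pred, gold, eps=1e-7):
    tp, fp, tn, fn = confusion_counts(pred, gold)
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn + eps),
        "ACC": (tp + tn) / (tp + tn + fp + fn + eps),
        "SEN": tp / (tp + fn + eps),
        "SPE": tn / (tn + fp + eps),
        "JSC": tp / (tp + fp + fn + eps),
        "PC":  tp / (tp + fp + eps),
        "PCC": float(np.corrcoef(pred.ravel().astype(float),
                                 gold.ravel().astype(float))[0, 1]),
    }

pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0]])
gold = np.array([[1, 0, 0, 0],
                 [1, 1, 0, 0]])
m = metrics(pred, gold)   # TP=2, FP=1, TN=4, FN=1 -> DSC = 4/6, ACC = 6/8
```

In the multi-lesion setting these would be computed per lesion channel (HE, EX, MA, SE) and averaged across test images.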

The invention evaluates and compares the U-shaped network and the proposed cyclic adaptive multi-target weighting network CAMWNet on the test dataset. To demonstrate the effectiveness of AMW and RRN, a series of ablation experiments were performed; the results are shown in Tables 1 and 2. The original U-shaped network is denoted "Baseline"; "Baseline+AMW" denotes the original U-shaped network with the AMW module added; "Baseline+RRN" denotes the original U-shaped network with the RRN added; and "CAMWNet" denotes the original U-shaped network with both AMW and RRN added, i.e. the method proposed by the invention. It can be seen that, compared with the original U-shaped network, the DSC, ACC, SEN, SPE, JSC, PC and PCC of the invention are improved by 3.38%, 0.14%, 0.87%, 0.03%, 2.60%, 4.74% and 2.64%, respectively. The ablation experiments are shown in Table 3, from which it can be seen that both the AMW and the RRN designed in the invention achieve better segmentation performance than the original U-shaped network.

TABLE 1

TABLE 2

Method          DSC           HE DSC        EX DSC        MA DSC        SE DSC
Baseline        50.46±1.80    51.54±3.16    71.21±3.26    39.62±2.41    39.50±1.63
Baseline+AMW    53.43±1.42    52.97±2.97    71.44±2.43    39.17±1.61    50.14±2.16
Baseline+RRN    52.29±1.97    51.74±3.02    71.25±2.31    39.02±2.12    47.15±2.88
CAMWNet         53.84±2.36    52.73±5.44    71.41±2.48    38.13±3.29    53.12±3.09

TABLE 3

To further demonstrate the effectiveness of the method of the invention, fig. 6 shows qualitative segmentation results. It can be seen that CAMWNet achieves higher accuracy and better robustness in the DR lesion segmentation task.

So far, an automatic segmentation method for fundus color photograph images of diabetic retinopathy, CAMWNet, has been implemented and verified. Its experimental performance is superior to that of the original U-shaped network, and it makes better judgments on two-dimensional retinal fundus color photograph images. Moreover, the adaptive multi-target weighting module AMW and the reverse data recovery network RRN designed in the method are not complex and can be embedded into any other convolutional neural network, giving the network stronger feature-extraction capability and improving its overall performance. This benefits the segmentation and detection of two-dimensional retinal fundus color photograph images and greatly improves screening efficiency. The method combines image preprocessing with the construction, training and testing of the CAMWNet network model, and is of great help to follow-up research on diabetic retinopathy, such as lesion-area registration and automatic grading.

In conclusion, a sample fundus color photograph image is acquired, and a diabetic retinopathy region segmentation model is trained from it, wherein the model comprises a cyclic adaptive multi-target weighting network used for adaptively assigning weights to different targets and enhancing the stability of the network, the different targets comprising at least one of the background, hemorrhage HE, hard exudate EX, microaneurysm MA, optic disc OD and cotton wool spot SE in the sample fundus color photograph image, and the trained diabetic retinopathy region segmentation model is used for segmenting a target fundus color photograph image. By adaptively assigning weights to different targets, the problem of inter-subclass imbalance is alleviated, and the segmentation accuracy of the network is improved.

Referring to fig. 7, a flowchart of a method for segmenting a diabetic retinopathy region according to an embodiment of the present application is shown, where as shown in fig. 7, the method includes:

Step 701, acquiring a target fundus color photograph image;

and step 702, segmenting the target fundus color photograph image according to the trained forward encoding and decoding module.

The forward codec module is obtained by training through the training method of the embodiment shown in fig. 1.

Wherein the method further comprises:

down-sampling the target fundus color photograph image by bilinear interpolation;

normalizing the down-sampled target fundus color photograph image;

and segmenting the normalized target fundus color photograph image according to the trained forward encoding and decoding module.

In conclusion, the target fundus color photograph image is acquired and segmented according to the trained retinopathy segmentation model. By adaptively assigning weights to different targets, the problem of inter-subclass imbalance is alleviated, and the segmentation accuracy of the network is improved.

The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.

The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.
