MVCT image texture enhancement method based on double regular constraints

Document No. 1739003 · Publication date: 2019-12-20

Reading note: This technique, "MVCT image texture enhancement method based on double regular constraints", was designed and created by 缑水平, 刘豪锋, 卢云飞, 顾裕, 毛莎莎, 焦昶哲, 刘芳, 李阳阳 on 2019-09-03. Abstract: The invention discloses an MVCT image texture enhancement method based on double regular constraints, which mainly solves the problem that MVCT image enhancement cannot be carried out with the prior art. The scheme is as follows: 1) acquire a plurality of KVCT and MVCT images of the same part of a human body; 2) normalize the obtained CT image data set, then take blocks from each pair of CT images to obtain a CT image block data set; 3) establish a 13-layer MVCT image texture enhancement network, use the CT image block data set as training data, and optimize the network with a gradient descent algorithm to obtain a trained network; 4) input a complete MVCT image into the trained network to output the enhanced MVCT image. The invention preserves image edges and details well while enhancing image texture, improves image quality, facilitates the reading and diagnosis of MVCT images by doctors, corrects lesion position errors, and ensures the accuracy of radiotherapy.

1. An MVCT image texture enhancement method based on double regular constraints, characterized by comprising the following steps:

(1) imaging the same part of a human body using a megavolt computed tomography MVCT device and a kilovolt computed tomography KVCT device to obtain a plurality of pairs of an MVCT image X and a KVCT image Y, each pair denoted {X, Y}, wherein the energy during MVCT imaging is 6 MV and the energy during KVCT imaging is 120 kV, and the plurality of {X, Y} pairs is denoted as an image data set D_A;

(2) performing a normalization operation on each MVCT and KVCT image in the image data set D_A, namely mapping the Hounsfield unit Hu value of each CT image to the interval [0,1];

(3) taking blocks from each pair of MVCT and KVCT images in the image data set D_A to establish an image block data set D_P;

(4) Constructing an MVCT image texture enhancement network N based on double regular constraints, and initializing:

(4a) from the resulting image block data set D_P, setting, through cross validation, a 13-layer network comprising an input layer, an edge operator layer, residual layers, deconvolution layers and an output layer to form the MVCT image texture enhancement network N based on double regular constraints, wherein the input of the network is an MVCT image and the output is the enhanced MVCT image;

(4b) initializing a weight W of the network by using an MSRA initialization method, and initializing all biases b of the network to 0;

(5) training the MVCT image texture enhancement network N based on double regular constraints using the image block data set D_P:

(5a) shuffling the order of the image block pairs in the image block data set D_P, then sequentially selecting from D_P one MVCT training block X_P and one KVCT training block Y_P, recorded as a training sample pair {X_P, Y_P};

(5b) inputting X_P and Y_P into the network N and carrying out forward propagation, obtaining the feature maps d_1(X_P) and d_1(Y_P) after the first convolution of the deconvolution layers and the feature maps d_2(X_P) and d_2(Y_P) after the second convolution; the output of the network is the enhanced MVCT image block A_T;

(5c) according to the feature maps output by the two deconvolutions, calculating the difference between the KVCT and MVCT image feature maps to obtain the double regular constraint loss L_a of the image texture enhancement network N;

(5d) from the inputs X_P, Y_P of the image texture enhancement network N and the enhanced MVCT image block A_T, separately calculating the loss L_b of the generative network GAN and the loss L_c of the cycle network Cycle GAN;

(5e) linearly weighting the double regular constraint loss L_a, the GAN loss L_b and the Cycle GAN loss L_c of the image texture enhancement network N to obtain the overall loss of the network:

L_N = α×L_a + β×L_b + γ×L_c

where α is the weight coefficient of the double regular constraint loss L_a, β is the weight coefficient of the GAN loss L_b, and γ is the weight coefficient of the Cycle GAN loss L_c;

(5f) from the inputs X_P, Y_P of the image texture enhancement network N and the enhanced MVCT image block A_T, calculating the loss L_d of the discriminant function D_Y and the loss L_e of the discriminant function D_X;

(5g) at the edge operator layer, convolving the MVCT training block X_P input to the image texture enhancement network N with the Laplace convolution kernel g to obtain the gradient map G_x of the training block;

(5h) Updating the weight W and all the biases b of the image texture enhancement network N:

(5h1) according to the overall loss L_N of the image texture enhancement network N and the loss L_d of the discriminant function D_Y, updating the weights W and all the biases b of the network using an adaptive moment estimation optimizer;

(5h2) according to the overall loss L_N of the image texture enhancement network N and the loss L_e of the discriminant function D_X, updating the weights W and all the biases b of the network using an adaptive moment estimation optimizer;

(5i) repeating steps (5a) to (5h) until the maximum number of training iterations T = 5000 is reached, obtaining the trained MVCT image texture enhancement network N_T;

(6) inputting a complete MVCT image X_T into the trained image texture enhancement network N_T, and obtaining the enhanced MVCT image A_T through N_T.

2. The method of claim 1, wherein in (2) each MVCT and KVCT image in the image data set D_A is normalized as follows:

(2a) linearly translating the Hounsfield unit Hu value range [-1024, 3071] of the input CT image to [0, 4095], obtaining the image after linear translation of the Hu value range:

x̃ = X + 1024

wherein X is the input CT image;

(2b) normalizing the translated Hu value range [0, 4095] to [0, 1], yielding a normalized image y:

y = (x̃ - x̃_min) / (x̃_max - x̃_min)

wherein x̃ is the translated CT image, x̃_min represents the minimum of the translated CT image Hu values, and x̃_max represents the maximum of the translated CT image Hu values.

3. The method of claim 1, wherein in (3) blocks are taken from each pair of CT images in the image data set D_A to establish the image block data set D_P as follows:

(3a) randomly selecting a position in the central region of an MVCT image, recording the position, and from it cutting image blocks X_P of size 64×64 from the upper left to the lower right, 32 in total;

(3b) on the KVCT image, cutting image blocks Y_P of size 64×64 from the upper left to the lower right based on the block positions recorded on the MVCT image, 32 in total;

(3c) recording the cut image blocks X_P and Y_P as the CT image block pair {X_P, Y_P};

(3d) repeating operations (3a) to (3c) to process each pair of CT images in D_A in sequence, removing partial hole images, to obtain the CT image block data set D_P.

4. The method of claim 1, wherein the 13-layer MVCT image texture enhancement network N constructed in (4) is structured as follows:

layers 1 to 3 are input layers, each comprising a convolution layer Conv and a rectified linear unit activation layer ReLU, wherein the convolution kernel size of the first Conv is 7×7 with stride 1, and the kernel sizes of the second and third Conv are both 3×3 with stride 2;

layers 4 to 9 are residual layers Res block, each built from the same module consisting of a convolution layer Conv, a rectified linear unit activation layer ReLU and a convolution layer Conv connected in sequence, wherein each Conv contains 64 convolution kernels of size 3×3;

layers 10 to 11 are deconvolution layers, each comprising a deconvolution layer Deconv and a rectified linear unit activation layer ReLU, wherein each Deconv contains a single convolution kernel of size 3×3;

layer 12 is the edge operator layer, which contains a Laplace convolution kernel of size 3×3;

layer 13 is the output layer, which contains a convolution layer Conv with kernel size 3×3 and a tanh activation layer.

5. The method of claim 1, wherein the double regular constraint loss L_a of the image texture enhancement network N calculated in (5c) is obtained according to the following formula:

L_a = E_{X_P~p(X), Y_P~p(Y)} [ ||d_1(X_P) - d_1(Y_P)||_1 + ||d_2(X_P) - d_2(Y_P)||_1 ]

wherein X_P is an image conforming to the MVCT distribution law p(X), Y_P is an image conforming to the KVCT distribution law p(Y), d_1(·) and d_2(·) respectively denote the first and second deconvolution layers in the image texture enhancement network N, and d_1(X_P), d_1(Y_P), d_2(X_P), d_2(Y_P) are the corresponding output feature maps.

6. The method of claim 1, wherein the loss L_b of the generative network GAN calculated in (5d) is obtained according to the following formula:

L_b = E_{Y_P~p(Y)} [log D_Y(Y_P)] + E_{X_P~p(X)} [log(1 - D_Y(G(X_P)))]

wherein G(·) is the function by which the image texture enhancement network N generates an image from an image conforming to the MVCT distribution law p(X), and D_Y(·) is the discriminant function.

7. The method of claim 1, wherein the loss L_c of the cycle network Cycle GAN calculated in (5d) is obtained according to the following formula:

L_c = E_{X_P~p(X)} [||F(G(X_P)) - X_P||_1] + E_{Y_P~p(Y)} [||G(F(Y_P)) - Y_P||_1]

wherein X_P is an image conforming to the MVCT distribution law p(X), Y_P is an image conforming to the KVCT distribution law p(Y), G(·) generates an image conforming to the KVCT distribution law p(Y) from an image conforming to the MVCT distribution law p(X) through the image texture enhancement network N, and F(·) generates an image conforming to the MVCT distribution law p(X) from an image conforming to the KVCT distribution law p(Y).

8. The method of claim 1, wherein the overall loss L_N of the image texture enhancement network N calculated in (5e) is obtained according to the following formula:

L_N = α×L_a + β×L_b + γ×L_c

wherein L_a is the double regular constraint loss of the image texture enhancement network N, L_b is the loss of the generative network GAN, and L_c is the loss of the cycle network Cycle GAN; α, the weight coefficient of the double regular constraint loss L_a, is set to 1; β, the weight coefficient of the loss L_b, is set to 10; and γ, the weight coefficient of the loss L_c, is set to 0.1.

9. The method of claim 1, wherein the loss L_d of the discriminant function D_Y and the loss L_e of the discriminant function D_X calculated in (5f) are obtained according to the following formulas:

L_d = -E_{Y_P~p(Y)} [log D_Y(Y_P)] - E_{X_P~p(X)} [log(1 - D_Y(G(X_P)))]

L_e = -E_{X_P~p(X)} [log D_X(X_P)] - E_{Y_P~p(Y)} [log(1 - D_X(F(Y_P)))]

wherein X_P is an image conforming to the MVCT distribution law p(X), Y_P is an image conforming to the KVCT distribution law p(Y), G(·) generates an image conforming to the KVCT distribution law p(Y) from an image conforming to the MVCT distribution law p(X) through the image texture enhancement network N, and F(·) generates an image conforming to the MVCT distribution law p(X) from an image conforming to the KVCT distribution law p(Y).

10. The method of claim 1, wherein in (5h), according to the overall loss L_N of the image texture enhancement network N, the loss L_d of the discriminant function D_Y and the loss L_e of the discriminant function D_X, the weights W and all the biases b of the network are updated with a gradient descent algorithm according to the following formulas:

W(k+1) = W(k) - μ × ∂L/∂W(k)

b(k+1) = b(k) - μ × ∂L/∂b(k)

wherein W(k) denotes the network weights obtained after the k-th training and W(k+1) the weights after the (k+1)-th training; b(k) denotes the network biases obtained after the k-th training and b(k+1) the biases after the (k+1)-th training; μ denotes the learning rate of the network N, whose initial value is set to 0.0002 and which is adjusted to 0.0001 with a polynomial decay function after 1000 training iterations.

Technical Field

The invention belongs to the technical field of medical image processing, and particularly relates to an MVCT image texture enhancement method which can be used for improving CT image quality and the visual effect of imaged organs and tissues.

Background

Megavolt computed tomography (MVCT) and kilovolt computed tomography (KVCT) are two common forms of X-ray CT. Compared with KVCT, MVCT equipment has a higher imaging tube voltage, can reveal cancerous regions in tissues and organs, and is widely used in preoperative radiotherapy for tumors and cancers. MVCT images, however, are noisy and poorly suited to reading during follow-up treatment. At present, MVCT imaging is performed before treatment and registered with the KVCT images of the radiotherapy plan, so that lesion position errors are corrected and the accuracy of radiotherapy is guaranteed. With growing concern about CT radiation, the use of MVCT, which is less harmful to the human body, is gradually increasing. This creates a demand for improving the quality of MVCT images to the standard required for physicians' reading and diagnosis. In general, MVCT images acquired during treatment lack corresponding high-contrast, noise-free images for reference and evaluation, which makes it difficult to enhance MVCT images by learning-based means. An effective method for improving MVCT image quality is therefore urgently needed.

At present, MVCT image enhancement mainly focuses on image denoising, using two types of methods. The first is projection-domain methods, including bilateral filtering, stationary wavelet transform and maximum a posteriori probability estimation; these methods depend strongly on the original signal, and the resolution of the denoised image is reduced to some extent. The second is neural-network-based methods, including RED-CNN, denoising autoencoders and DnCNN; images denoised by these methods show no significant improvement in visual effect or contrast, and soft-tissue edge blurring can occur.

Furthermore, the biggest shortcoming shared by the two types of methods is that they reduce the MVCT image enhancement task to a pure denoising task: denoising alone cannot greatly improve image quality, and the contrast and detail information of the MVCT image are lost. The patient therefore still needs KVCT imaging during treatment to provide detailed CT image information and assist doctors in devising a better treatment plan.

Disclosure of Invention

Aiming at the shortcomings of the above methods in the image enhancement process, the invention provides an MVCT image texture enhancement method based on double regular constraints, so as to preserve the gray-level and gradient information of the MVCT image, remove its noise, enhance its texture, reduce edge blurring and loss of detail, and improve image quality.

To achieve the above object, the implementation scheme comprises the following steps:

1. a MVCT image texture enhancement method based on double regular constraints is characterized by comprising the following steps:

(1) imaging the same part of a human body using a megavolt computed tomography MVCT device and a kilovolt computed tomography KVCT device to obtain a plurality of pairs of MVCT images X and KVCT images Y, each pair recorded as {X, Y}, wherein the energy during MVCT imaging is 6 MV and the energy during KVCT imaging is 120 kV, and the plurality of {X, Y} pairs is recorded as an image data set D_A;

(2) performing a normalization operation on each MVCT and KVCT image in the image data set D_A, namely mapping the Hounsfield unit Hu value of each CT image to the interval [0,1];

(3) taking blocks from each pair of MVCT and KVCT images in the image data set D_A to establish an image block data set D_P;

(4) Constructing an MVCT image texture enhancement network N based on double regular constraints, and initializing:

(4a) from the resulting image block data set D_P, setting, through cross validation, a 13-layer network comprising an input layer, an edge operator layer, residual layers, deconvolution layers and an output layer to form the MVCT image texture enhancement network N based on double regular constraints, wherein the input of the network is an MVCT image and the output is the enhanced MVCT image;

(4b) initializing a weight W of the network by using an MSRA initialization method, and initializing all biases b of the network to 0;

(5) training the MVCT image texture enhancement network N based on double regular constraints using the image block data set D_P:

(5a) shuffling the order of the image block pairs in the image block data set D_P, then sequentially selecting from D_P one MVCT training block X_P and one KVCT training block Y_P, recorded as a training sample pair {X_P, Y_P};

(5b) inputting X_P and Y_P into the network N and carrying out forward propagation, obtaining the feature maps d_1(X_P) and d_1(Y_P) after the first convolution of the deconvolution layers and the feature maps d_2(X_P) and d_2(Y_P) after the second convolution; the output of the network is the enhanced MVCT image block A_T;

(5c) according to the feature maps output by the two deconvolutions, calculating the difference between the KVCT and MVCT image feature maps to obtain the double regular constraint loss L_a of the image texture enhancement network N;

(5d) from the inputs X_P, Y_P of the image texture enhancement network N and the enhanced MVCT image block A_T, separately calculating the loss L_b of the generative network GAN and the loss L_c of the cycle network Cycle GAN;

(5e) linearly weighting the double regular constraint loss L_a, the GAN loss L_b and the Cycle GAN loss L_c of the image texture enhancement network N to obtain the overall loss of the network:

L_N = α×L_a + β×L_b + γ×L_c

where α is the weight coefficient of the double regular constraint loss L_a, β is the weight coefficient of the GAN loss L_b, and γ is the weight coefficient of the Cycle GAN loss L_c;

(5f) from the inputs X_P, Y_P of the image texture enhancement network N and the enhanced MVCT image block A_T, calculating the loss L_d of the discriminant function D_Y and the loss L_e of the discriminant function D_X;

(5g) at the edge operator layer, convolving the MVCT training block X_P input to the image texture enhancement network N with the Laplace convolution kernel g to obtain the gradient map G_x of the training block;

(5h) Updating the weight W and all the biases b of the image texture enhancement network N:

(5h1) according to the overall loss L_N of the image texture enhancement network N and the loss L_d of the discriminant function D_Y, updating the weights W and all the biases b of the network using an adaptive moment estimation optimizer;

(5h2) according to the overall loss L_N of the image texture enhancement network N and the loss L_e of the discriminant function D_X, updating the weights W and all the biases b of the network using an adaptive moment estimation optimizer;

(5i) repeating steps (5a) to (5h) until the maximum number of training iterations T = 5000 is reached, obtaining the trained MVCT image texture enhancement network N_T;

(6) inputting a complete MVCT image X_T into the trained image texture enhancement network N_T, and obtaining the enhanced MVCT image A_T through N_T.

Compared with the prior art, the invention has the following advantages:

1. In the overall loss L_N of the MVCT image texture enhancement network, the invention adds the double regular constraint loss L_a, and to the generative network GAN it adds the loss L_d of the discriminant function D_Y and the loss L_e of the discriminant function D_X. Optimizing L_N, L_d and L_e during training, through the game between the generative network and the discriminant functions, continually updates the weights of the image texture enhancement network and of the discriminant functions, so that the enhanced MVCT image approaches the KVCT image in visual effect and also conforms as closely as possible to the gray-level statistical distribution of the KVCT image.

2. In the invention, in the training process of the MVCT image texture enhancement network, the deconvolution layer of the network is constrained by using double regular constraint loss, so that a high-order characteristic diagram obtained after a series of convolution operations of an input MVCT image is reconstructed by the image texture enhancement network, the output enhanced MVCT image is closer to a KVCT image, and clearer texture information can be recovered.

3. The invention uses the edge operator to add the edge information extracted from the MVCT image into the output layer of the image texture enhancement network, so that the high-frequency edge information of the output MVCT image is consistent with the input MVCT image, and the tissue edge information in the MVCT image is enhanced.

4. The invention can directly carry out denoising and enhancement in the image domain, and has wider application range.

Drawings

FIG. 1 is a flow chart of an implementation of the present invention;

FIG. 2 is an MVCT image used in the present invention;

FIG. 3 is a KVCT image used in the present invention;

FIG. 4 is an exemplary diagram of image fetching in the present invention;

FIG. 5 is an MVCT image texture enhancement network based on dual canonical constraints constructed in the present invention;

FIG. 6 is a graph of the results of texture enhancement of an MVCT image using the present invention.

Detailed Description

The invention is further described and explained below with reference to the drawings, in which:

referring to fig. 1, the MVCT image texture enhancement method based on double regular constraints of the present invention includes the following implementation steps:

step 1: and (4) preparing data.

1a) Imaging the same part of a human body with megavolt computed tomography (MVCT) equipment and kilovolt computed tomography (KVCT) equipment to obtain a plurality of pairs of MVCT images X and KVCT images Y, marking each pair as {X, Y}. The energy during MVCT imaging is 6 MV, as shown in FIG. 2, with image size 512×512; the energy for KVCT imaging is 120 kV, as shown in FIG. 3, with image size 512×512. The plurality of {X, Y} pairs are combined into an image data set D_A.

1b) For image data set DANormalizing each MVCT and KVCT image:

(1b1) Linearly translate the Hounsfield unit Hu value range [-1024, 3071] of the input CT image to [0, 4095], obtaining the image after linear translation of the Hu value range:

x̃ = X + 1024

wherein X is the input CT image;

(1b2) Normalize the translated Hu value range [0, 4095] to [0, 1], yielding a normalized image y:

y = (x̃ - x̃_min) / (x̃_max - x̃_min)

wherein x̃ is the translated CT image, x̃_min represents the minimum of the translated CT image Hu values, and x̃_max represents the maximum of the translated CT image Hu values;
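The two normalization steps (1b1)-(1b2) can be sketched in a few lines of numpy; here the fixed translated range [0, 4095] is used as x̃_min/x̃_max, which assumes the full Hu range is present in the image:

```python
import numpy as np

def normalize_ct(img_hu):
    """Map CT Hounsfield values from [-1024, 3071] to [0, 1]:
    first translate to [0, 4095] (step 1b1), then rescale (step 1b2)."""
    x_t = img_hu.astype(np.float64) + 1024.0   # [-1024, 3071] -> [0, 4095]
    return x_t / 4095.0                        # [0, 4095] -> [0, 1]

img = np.array([[-1024.0, 3071.0], [1023.5, 0.0]])
y = normalize_ct(img)
```

With this fixed range, -1024 Hu maps exactly to 0 and 3071 Hu to 1.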

1c) Take blocks from each pair of normalized CT images in the image data set D_A to establish an image block data set D_P.

Referring to fig. 4, the specific implementation of this step is as follows:

(1c1) Randomly select a position in the central region of an MVCT image, record the position, and from it cut image blocks X_P of size 64×64 from the upper left to the lower right, 32 in total;

(1c2) On the KVCT image, cut image blocks Y_P of size 64×64 from the upper left to the lower right based on the block positions recorded on the MVCT image, 32 in total;

(1c3) Record the cut image blocks X_P and Y_P as the CT image block pair {X_P, Y_P};

(1c4) Repeat operations (1c1) to (1c3) to process each pair of CT images in D_A in sequence, removing partial hole images, to obtain the CT image block data set D_P.
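A minimal numpy sketch of the paired block extraction in (1c1)-(1c3); the `margin` that delimits the central region is an assumed parameter, and the hole-image filtering of (1c4) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_paired_blocks(mvct, kvct, n_blocks=32, size=64, margin=128):
    """Cut n_blocks aligned size x size patches from an MVCT/KVCT pair.
    Positions are drawn in the central region of the MVCT image, and the
    same recorded position is reused on the KVCT image so each pair
    {X_P, Y_P} stays spatially aligned."""
    h, w = mvct.shape
    pairs = []
    for _ in range(n_blocks):
        r = int(rng.integers(margin, h - margin - size))
        c = int(rng.integers(margin, w - margin - size))
        pairs.append((mvct[r:r + size, c:c + size],
                      kvct[r:r + size, c:c + size]))
    return pairs

mvct = rng.standard_normal((512, 512))
kvct = mvct + 1.0   # stand-in KVCT, offset so alignment can be checked
pairs = extract_paired_blocks(mvct, kvct)
```

Reusing the recorded position on the KVCT image is what keeps each training pair pixel-aligned.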

Step 2: Construct the MVCT image texture enhancement network N based on double regular constraints.

From the resulting image block data set D_P, set, through cross validation, a 13-layer network comprising an input layer, an edge operator layer, residual layers, deconvolution layers and an output layer; the input of the network is the MVCT image and the output is the enhanced MVCT image.

Referring to fig. 5, the specific implementation of this step is as follows:

2a) Layers 1 to 3 are input layers, each comprising a convolution layer Conv and a rectified linear unit activation layer ReLU, wherein the convolution kernel size of the first Conv is 7×7 with stride 1, and the kernel sizes of the second and third Conv are both 3×3 with stride 2;

2b) Layers 4 to 9 are residual layers Res block, each built from the same module consisting of a convolution layer Conv, a rectified linear unit activation layer ReLU and a convolution layer Conv connected in sequence, wherein each Conv contains 64 convolution kernels of size 3×3;

2c) Layers 10 to 11 are deconvolution layers, each comprising a deconvolution layer Deconv and a rectified linear unit activation layer ReLU, wherein each Deconv contains a single convolution kernel of size 3×3;

2d) Layer 12 is the edge operator layer, which contains a Laplace convolution kernel of size 3×3;

2e) Layer 13 is the output layer, which contains a convolution layer Conv with kernel size 3×3 and a tanh activation layer.
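The 13-layer layout of 2a)-2e) can be summarized as a declarative specification; filter counts other than the 64 kernels stated for the residual layers are not given in the text and are omitted here:

```python
# Declarative sketch of the 13-layer texture enhancement network N.
LAYERS = (
    [{"type": "conv", "kernel": 7, "stride": 1, "act": "relu"}]        # layer 1
    + [{"type": "conv", "kernel": 3, "stride": 2, "act": "relu"}] * 2  # layers 2-3
    + [{"type": "resblock", "kernel": 3, "filters": 64}] * 6           # layers 4-9
    + [{"type": "deconv", "kernel": 3, "act": "relu"}] * 2             # layers 10-11
    + [{"type": "laplace_edge", "kernel": 3}]                          # layer 12
    + [{"type": "conv", "kernel": 3, "act": "tanh"}]                   # layer 13
)
```

Listing the layers this way makes the correspondence to 2a)-2e) easy to check layer by layer.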

The convolution layers in the network N have the following mathematical form:

F_i^l = ReLU( Σ_j W_{i,j}^l * F_j^{l-1} + b_i^l ),  i = 1, …, n_l

wherein F_i^l denotes the i-th feature map of layer l of the network (when l = 0, F^0 denotes the MVCT image block input to the network), W_{i,j}^l denotes the weights of the i-th convolution kernel of layer l acting on the j-th input feature map, b_i^l denotes the bias of the i-th convolution kernel of layer l, n_l denotes the number of convolution kernels of layer l, and * denotes the image convolution operation, carried out in 'same' mode so that the image size is unchanged before and after convolution;

the modified linear unit active layer ReLU in the image enhancement network N is mathematically formed as follows:

where x represents the input data.
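The 'same'-mode convolution and ReLU above can be sketched directly in numpy; a plain loop implementation is used for clarity rather than speed:

```python
import numpy as np

def conv2d_same(image, kernel):
    """2-D convolution in 'same' mode: zero-pad so the output keeps the
    input's size; the kernel is flipped, as in true convolution."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def relu(x):
    """ReLU(x) = max(0, x)."""
    return np.maximum(x, 0.0)

img = np.arange(9.0).reshape(3, 3)
identity = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
same = conv2d_same(img, identity)   # identity kernel leaves the image unchanged
```

The identity-kernel check confirms that 'same' padding preserves both the values and the 3×3 size.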

Step 3: Initialize the weights W and biases b of the image texture enhancement network N.

3a) Initialize the weights W of the network N with the MSRA method, whose formula is:

W ~ N(0, sqrt(2 / n_l))

wherein W represents the weights of the network and N(·,·) represents a Gaussian distribution; that is, the weights W obey a Gaussian distribution with mean 0 and standard deviation sqrt(2 / n_l), where n_l is the number of input connections of the layer;
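A numpy sketch of the MSRA (He) initialization in 3a), assuming n_l is the fan-in of the layer (kernel height × kernel width × input channels):

```python
import numpy as np

def msra_init(shape, fan_in, rng=None):
    """Draw weights from N(0, sqrt(2 / fan_in)); biases are set to 0
    separately, as in step 3b)."""
    rng = rng or np.random.default_rng(0)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)

# Example: 64 kernels of size 3x3 over 64 input channels -> fan_in = 3 * 3 * 64
w = msra_init((64, 64, 3, 3), fan_in=3 * 3 * 64)
b = np.zeros(64)   # all biases initialized to 0
```

The empirical standard deviation of the drawn weights should sit close to sqrt(2/576) ≈ 0.059.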

3b) all offsets b of the network N are initialized to a value of 0.

Step 4: Train the image texture enhancement network N using the image block data set D_P.

4a) Shuffle the order of the image block pairs in the image block data set D_P, then sequentially select from D_P one MVCT training block X_P and one KVCT training block Y_P, recorded as a training sample pair {X_P, Y_P};

4b) Select the edge-detection Laplace convolution kernel g and obtain the edge feature map G_x of the input MVCT image block from it:

G_x = g * X_P

wherein X_P represents an MVCT image block and * represents the image convolution operation, carried out in 'same' mode so that the image size is unchanged before and after convolution;
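The edge feature map G_x = g * X_P can be sketched with the standard 4-neighbour Laplace kernel; the text does not specify which 3×3 Laplace variant is used, so the 4-neighbour form is an assumption:

```python
import numpy as np

# Assumed 4-neighbour 3x3 Laplace kernel g (symmetric, so correlation
# and convolution coincide).
LAPLACE = np.array([[0.0,  1.0, 0.0],
                    [1.0, -4.0, 1.0],
                    [0.0,  1.0, 0.0]])

def gradient_map(x_p):
    """G_x = g * X_P with 'same' zero padding."""
    padded = np.pad(x_p, 1)
    out = np.zeros(x_p.shape, dtype=np.float64)
    for i in range(x_p.shape[0]):
        for j in range(x_p.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * LAPLACE)
    return out

flat = gradient_map(np.ones((5, 5)))      # zero response in the flat interior
step = np.zeros((5, 5)); step[:, 2:] = 1.0
edges = gradient_map(step)                # nonzero response along the step edge
```

A flat region gives zero response while an intensity step does not, which is exactly the edge-highlighting behaviour the edge operator layer relies on.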

4c) Input X_P and Y_P into the network N and carry out forward propagation, obtaining the feature maps d_1(X_P) and d_1(Y_P) after the first convolution of the deconvolution layers and the feature maps d_2(X_P) and d_2(Y_P) after the second convolution; the output of the network is the enhanced MVCT image block A_T. From the two deconvolution outputs, calculate the difference between the KVCT and MVCT image feature maps to obtain the double regular constraint loss L_a of the image texture enhancement network N:

L_a = E_{X_P~p(X), Y_P~p(Y)} [ ||d_1(X_P) - d_1(Y_P)||_1 + ||d_2(X_P) - d_2(Y_P)||_1 ]

wherein X_P is an image conforming to the MVCT distribution law p(X), Y_P is an image conforming to the KVCT distribution law p(Y), d_1(·) and d_2(·) respectively denote the first and second deconvolution layers in the image texture enhancement network N, and d_1(X_P), d_1(Y_P), d_2(X_P), d_2(Y_P) are the corresponding output feature maps;

4d) Calculate the loss L_b of the generative network GAN:

L_b = E_{Y_P~p(Y)} [log D_Y(Y_P)] + E_{X_P~p(X)} [log(1 - D_Y(G(X_P)))]

wherein G(·) is the function by which the image texture enhancement network N generates an image from an image conforming to the MVCT distribution law p(X), and D_Y(·) is the discriminant function.

4e) Calculate the loss L_c of the cycle network Cycle GAN:

L_c = E_{X_P~p(X)} [||F(G(X_P)) - X_P||_1] + E_{Y_P~p(Y)} [||G(F(Y_P)) - Y_P||_1]

wherein X_P is an image conforming to the MVCT distribution law p(X), Y_P is an image conforming to the KVCT distribution law p(Y), G(·) generates an image conforming to the KVCT distribution law p(Y) from an image conforming to the MVCT distribution law p(X) through the image texture enhancement network N, and F(·) generates an image conforming to the MVCT distribution law p(X) from an image conforming to the KVCT distribution law p(Y).
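A numpy sketch of the loss terms in 4d)-4f); the exact adversarial and cycle-consistency forms are assumptions in the standard CycleGAN style, while the loss weights 1, 10 and 0.1 are the values stated in 4f):

```python
import numpy as np

def gan_loss(d_real, d_fake):
    """Adversarial loss L_b: E[log D_Y(Y_P)] + E[log(1 - D_Y(G(X_P)))].
    Inputs are discriminator scores in (0, 1)."""
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def cycle_loss(x_p, x_cycled, y_p, y_cycled):
    """Cycle-consistency loss L_c: E[|F(G(X_P)) - X_P|] + E[|G(F(Y_P)) - Y_P|]."""
    return float(np.mean(np.abs(x_cycled - x_p)) + np.mean(np.abs(y_cycled - y_p)))

def overall_loss(l_a, l_b, l_c, alpha=1.0, beta=10.0, gamma=0.1):
    """L_N = alpha*L_a + beta*L_b + gamma*L_c with the weights from 4f)."""
    return alpha * l_a + beta * l_b + gamma * l_c
```

When both cycle reconstructions match their originals exactly, L_c is zero, so the cycle term only penalizes information the generator pair fails to preserve.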

4f) Calculate the overall loss L_N of the image texture enhancement network N from the results of 4c) to 4e), according to the following formula:

L_N = α×L_a + β×L_b + γ×L_c

wherein α, the weight coefficient of the double regular constraint loss L_a of the image texture enhancement network N, is set to 1; β, the weight coefficient of the loss L_b of the generative network GAN, is set to 10; and γ, the weight coefficient of the Cycle GAN loss L_c of the cycle network, is set to 0.1;

4g) Calculate the loss L_d of the discriminant function D_Y and the loss L_e of the discriminant function D_X of the image texture enhancement network N:

L_d = -E_{Y_P~p(Y)} [log D_Y(Y_P)] - E_{X_P~p(X)} [log(1 - D_Y(G(X_P)))]

L_e = -E_{X_P~p(X)} [log D_X(X_P)] - E_{Y_P~p(Y)} [log(1 - D_X(F(Y_P)))]

wherein X_P is an image conforming to the MVCT distribution law p(X), Y_P is an image conforming to the KVCT distribution law p(Y), G(·) generates an image conforming to the KVCT distribution law p(Y) from an image conforming to the MVCT distribution law p(X) through the image texture enhancement network N, and F(·) generates an image conforming to the MVCT distribution law p(X) from an image conforming to the KVCT distribution law p(Y).

4h) Updating the weight W and all the biases b of the image texture enhancement network N:

(4h1) According to the overall loss L_N of the image texture enhancement network N and the loss L_d of the discriminant function D_Y, update the weights W and all the biases b of the network using an adaptive moment estimation optimizer;

(4h2) According to the overall loss L_N of the image texture enhancement network N and the loss L_e of the discriminant function D_X, update the weights W and all the biases b of the network using an adaptive moment estimation optimizer;

wherein the current network weights are updated according to the following formulas:

W(k+1) = W(k) - μ × ∂L/∂W(k)

b(k+1) = b(k) - μ × ∂L/∂b(k)

wherein W(k) denotes the network weights obtained after the k-th training and W(k+1) the weights after the (k+1)-th training; b(k) denotes the network biases obtained after the k-th training and b(k+1) the biases after the (k+1)-th training; μ denotes the learning rate of the image texture enhancement network N, whose initial value is set to 0.0002 and which is adjusted to 0.0001 with a polynomial decay function after 1000 training iterations;
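One update of step 4h) can be sketched as plain gradient descent with the stated learning-rate schedule; the adaptive moment estimation (Adam) machinery is omitted, and the polynomial decay is simplified to the two published values:

```python
def learning_rate(k):
    """0.0002 for the first 1000 iterations, then 0.0001 (simplified
    stand-in for the polynomial decay schedule)."""
    return 0.0002 if k < 1000 else 0.0001

def sgd_step(w, b, grad_w, grad_b, k):
    """W(k+1) = W(k) - mu * dL/dW,  b(k+1) = b(k) - mu * dL/db."""
    mu = learning_rate(k)
    return w - mu * grad_w, b - mu * grad_b

w, b = sgd_step(1.0, 0.5, 10.0, -10.0, k=0)   # one update at iteration 0
```

In the full method this step runs 5000 times, alternating the generator update with the two discriminator updates.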

4i) Repeat steps (4a) to (4h) until the maximum number of training iterations T = 5000 is reached, obtaining the trained image texture enhancement network N_T.

Step 5: Enhance the MVCT image using the trained image texture enhancement network N_T.

Input a complete MVCT image X_T, shown in FIG. 6(a), into the trained MVCT image texture enhancement network N_T, and obtain the enhanced CT image A_T, shown in FIG. 6(B), through N_T.

As can be seen from fig. 6, in the process of image restoration, the reconstruction of the image gray scale information and the preservation of the image gradient are considered at the same time, and the obtained enhanced image has richer and clearer details and better visibility.

The foregoing description is only an example of the present invention and is not intended to limit the invention, so that it will be apparent to those skilled in the art that various modifications and variations in form and detail can be made therein without departing from the spirit and scope of the invention.

