MRI system and method for detecting patient motion using neural networks
Reading note: This technology, "MRI system and method for detecting patient motion using neural networks," was created on 2020-04-21 by Isabelle Heukensfeldt Jansen, Sangtae Ahn, Christopher Judson Hardy, Itzik Malkiel, and Rafael …. Its main content is as follows: The invention provides an MRI system and method for detecting patient motion using neural networks. A Magnetic Resonance Imaging (MRI) system includes control and analysis circuitry programmed to acquire Magnetic Resonance (MR) data using coil elements of the MRI system, analyze the MR data, and reconstruct the MR data into MR sub-images. The system also includes a trained neural network associated with the control and analysis circuitry to convert the MR sub-images into predictions relating to the presence and extent of motion disruption in the MR sub-images. The programming of the control and analysis circuitry includes instructions for controlling operation of the MRI system based at least in part on the prediction of the trained neural network.
1. A Magnetic Resonance Imaging (MRI) method comprising:
generating a first sub-image from first Magnetic Resonance (MR) partial k-space data acquired by an MRI system over a first time interval;
generating a second sub-image from second MR partial k-space data acquired by the MRI system over a second time interval, wherein the first time interval and the second time interval are temporally adjacent to each other;
combining the first sub-image and the second sub-image to generate a combined sub-image;
generating, using a trained neural network, a prediction relating to the presence and extent of motion occurring between the first time interval and the second time interval using the combined sub-image as an input; and
performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network.
2. The method of claim 1, wherein the partial k-space data from the first time interval and the partial k-space data from the second time interval are from a single coil within a multi-coil receiver array of the MRI system.
3. The method of claim 1, wherein the prediction related to the presence and extent of motion occurring between the first time interval and the second time interval is a motion score, and wherein the motion score is calculated such that motion is more likely to have an effect on the combined sub-image as the magnitude of the motion score increases.
4. The method of claim 3, wherein a motion score is calculated for each coil in a multi-coil receiver array of the MRI system to generate a motion score for the multi-coil receiver array, and the motion scores for the multi-coil receiver array are combined into a net motion score by taking an average, median, maximum, or minimum of individual coil scores.
5. The method of claim 4, comprising determining whether motion has occurred by comparing the net motion score to a threshold.
6. The method of claim 1, wherein generating the first sub-image comprises using only the first MR data collected over the first time interval, and wherein generating the second sub-image comprises using only the second MR data collected over the second time interval.
7. The method of claim 1, wherein the first sub-image and the second sub-image are complex-valued and are combined by addition.
8. The method of claim 1, wherein the first sub-image and the second sub-image are combined by aggregating their respective partial k-space data in k-space and then converting the result to the image domain.
9. The method of claim 1, wherein generating the first sub-image using the first MR data collected over the first time interval and generating the second sub-image using the second MR data collected over the second time interval comprises: using data collected by all receive coils of a receive coil array of the MRI system during the first time interval and the second time interval, respectively.
10. The method of claim 3, wherein the motion score is based on a weighted sum, a logarithm of a weighted sum, or an average of a per-pixel measure of the difference between the combined sub-image and a combined sub-image obtained without motion between the first time interval and the second time interval.
11. The method of claim 10, wherein the per-pixel measure is the per-pixel entropy of the difference, the per-pixel difference, the per-pixel logarithm of the difference, or the per-pixel logarithm of the absolute value of the difference.
12. The method of claim 1, wherein the trained neural network is a trained convolutional neural network having a plurality of convolutional layers, a plurality of max-pooling layers, and a plurality of fully-connected layers.
13. The method of claim 1, wherein the trained neural network is trained according to a training process comprising:
generating training data by a training data generation process, the training data generation process comprising:
simulating rigid-body motion by applying a translation and/or rotation to a motion-free image to generate an offset image;
replacing portions of the motion-free k-space data of the motion-free image with k-space data of the offset image to generate motion-corrupted hybrid k-space data according to a scan order, wherein the scan order describes how k-space is filled by phase encoding as a function of a time step;
simulating partial data collection by applying a mask to the hybrid k-space data according to the scan order to generate partial k-space data;
generating motion-corrupted sub-images from the partial k-space data; and
calculating a motion score for the motion-corrupted sub-image based at least in part on a difference between the motion-corrupted sub-image and a corresponding motion-free sub-image generated using corresponding partial k-space data of the motion-free image;
repeating at least a portion of the training data generation process by applying different translations and/or rotations to the motion-free image or other motion-free images to produce a set of motion-corrupted sub-images and associated motion scores; and
training a neural network using at least a portion of the set of motion corrupted sub-images and associated motion scores to generate the trained neural network.
14. The method of claim 1, wherein performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network comprises: in response to determining that the prediction indicates that motion occurred between the first time interval and the second time interval:
aggregating the first MR data and MR data collected prior to the first time interval into k-space data corresponding to a first motion state as a first aggregate;
aggregating the second MR data and MR data collected after the first time interval into k-space data corresponding to a second motion state as a second aggregate; and
reconstructing the first aggregate and the second aggregate separately to produce a first motion-free sub-image and a second motion-free sub-image, respectively.
15. The method of claim 14, wherein performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network further comprises registering and combining the first and second motion-free sub-images.
16. The method of claim 1, wherein performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network comprises: in response to determining that the prediction indicates that motion occurred between the first time interval and the second time interval:
determining whether a sufficient amount of k-space has been filled to produce a motion-free image;
in response to determining that an insufficient amount of k-space has been filled, reacquiring pre-motion MR data corresponding to the portion of k-space that was filled prior to the second time interval; and
aggregating the re-acquired MR data with second MR data into a new motion state.
17. The method of claim 16, wherein performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network further comprises performing a parallel imaging/compressed sensing (PICS) reconstruction to generate a motion-free image from the first MR data, or from the combined second MR data and reacquired MR data.
18. A computer-based method of generating a trained neural network to generate predictions relating to the presence and extent of motion in Magnetic Resonance (MR) sub-images, the method comprising:
providing training data comprising motion-corrupted sub-images as input and corresponding motion scores as output;
training a neural network using the training data to convert MR sub-images to corresponding motion scores indicating whether motion occurred during an MRI scan used to acquire data for the MR sub-images;
wherein the motion-corrupted sub-image is generated from at least one motion-free sub-image, and wherein the motion score is calculated based on a weighted sum of per-pixel difference metrics between the at least one motion-free sub-image and a respective one of the motion-corrupted sub-images.
19. The method of claim 18, comprising generating the training data by a training data generation process comprising:
applying a translation and/or rotation to the motion-free image to produce an offset image;
replacing a portion of the motion-free k-space data associated with the motion-free image with motion-corrupted k-space data associated with the offset image to generate hybrid k-space data; and
generating partial k-space data by simulating partial data collection by applying a mask to the mixed k-space data according to a scan order, wherein the scan order defines how k-space is filled with the motion-free k-space data as a function of the time step of the motion-free image, and wherein the mask defines a section of k-space from which the partial k-space data is collected, the section corresponding to an adjacent time step in the scan order.
20. The method of claim 19, wherein the training data generation process further comprises:
generating motion-corrupted sub-images from the partial k-space data;
generating a motion-free sub-image using a portion of the motion-free k-space data, the portion of the motion-free k-space data corresponding to a same section of k-space defined by the mask; and
calculating a motion score for the motion-corrupted sub-image as an entropy of a difference between the motion-free sub-image and the motion-corrupted sub-image.
21. The method of claim 20, wherein a magnitude of the motion score corresponds to a degree to which motion affects the motion-corrupted sub-image.
22. The method of claim 20, comprising repeating at least part of the training data generation process by applying different translations and/or rotations to the motion-free image or to other motion-free images to produce a set of motion-corrupted sub-images and associated motion scores.
23. A Magnetic Resonance Imaging (MRI) system comprising:
control and analysis circuitry including programming to acquire Magnetic Resonance (MR) data using coil elements of the MRI system, analyze the MR data, and reconstruct the MR data into MR sub-images; and
a trained neural network associated with the control and analysis circuitry to convert the MR sub-images into predictions relating to the presence and extent of motion disruption in the MR sub-images;
wherein the programming of the control and analysis circuitry includes instructions to control operation of the MRI system based at least in part on the prediction of the trained neural network.
24. The system of claim 23, wherein the programming of the control and analysis circuitry includes instructions to:
acquiring the MR data by an acquisition process performed according to a scan order defining how k-space is filled as a function of time step;
generating MR sub-images from MR data acquired in adjacent time steps in the scan order; and
providing the MR sub-images to the trained neural network for conversion into the prediction relating to the presence and extent of motion disruption in the MR sub-images.
25. The system of claim 23, wherein the programming to reconstruct the MR data into MR sub-images includes instructions to: aggregating MR data according to a motion state identified based on a prediction of the trained neural network.
Background
Generally, a Magnetic Resonance Imaging (MRI) examination is based on the interaction of a main magnetic field, a Radio Frequency (RF) magnetic field, and time-varying gradient magnetic fields with gyromagnetic material having nuclear spins within a subject of interest, such as a patient. Certain gyromagnetic materials, such as hydrogen nuclei in water molecules, have characteristic behaviors in response to external magnetic fields. The precession of the spins of these nuclei can be influenced by manipulating the fields to produce RF signals that can be detected, processed, and used to reconstruct useful images.
Patient motion is one of the biggest causes of clinical MRI inefficiency, often requiring the patient to be re-scanned or even to make a second visit. In particular, patient motion can lead to MR image blurring, artifacts, and other inconsistencies. Some methods of correcting motion require additional hardware for monitoring motion (increasing cost and extending patient setup time) or navigator sequences (occupying time within the imaging sequence). There is therefore a need for improved methods of data acquisition and reconstruction for magnetic resonance imaging techniques that are sensitive to patient motion.
Disclosure of Invention
In one embodiment, a Magnetic Resonance Imaging (MRI) method includes: a first sub-image is generated from first Magnetic Resonance (MR) partial k-space data acquired by the MRI system over a first time interval, and a second sub-image is generated from second MR partial k-space data from a different portion of k-space acquired by the magnetic resonance imaging system over a second time interval. The first time interval and the second time interval are adjacent to each other in time. The method further includes combining the first sub-image and the second sub-image to generate a combined sub-image; generating a prediction relating to the presence and extent of motion occurring between the first time interval and the second time interval using the trained neural network using the combined sub-images as inputs; and performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network.
In another embodiment, a computer-based method of generating a trained neural network to generate predictions related to the presence and extent of motion in Magnetic Resonance (MR) sub-images includes: providing training data comprising the motion corrupted sub-images as available input and a corresponding motion score as output; the neural network is trained using the training data to convert the MR sub-images into corresponding motion scores that indicate whether motion occurred during an MRI scan used to acquire the MR sub-image data. Motion corrupted sub-images are generated from the at least one motion free sub-image and a motion score is calculated as the entropy of the difference between the at least one motion free sub-image and a respective one of the motion corrupted sub-images.
In a further embodiment, a Magnetic Resonance Imaging (MRI) system includes control and analysis circuitry having programming to acquire Magnetic Resonance (MR) data using coil elements of the MRI system, analyze the MR data, and reconstruct the MR data into MR sub-images. The system also includes a trained neural network associated with the control and analysis circuitry to convert the MR sub-image into a prediction relating to the presence and extent of motion disruption in the MR sub-image. The programming of the control and analysis circuitry includes instructions for controlling operation of the MRI system based at least in part on the prediction of the trained neural network.
Drawings
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
FIG. 1 is a schematic diagram of an embodiment of a magnetic resonance imaging system configured to perform data acquisition, motion detection and scoring, and image reconstruction as described herein;
FIG. 2 is a process flow diagram of an embodiment of a method for training a neural network using motion corrupted images to detect motion during a scan;
FIG. 3 is an exemplary Fast Spin Echo (FSE) type scanning sequence in which the phase encoding is a function of the time step;
FIG. 4 is a motion free sub-image resulting from an undersampled k-space data set;
FIG. 5 is a motion corrupted sub-image resulting from an undersampled k-space data set;
FIG. 6 is a motion free sub-image resulting from an undersampled k-space data set;
FIG. 7 is a motion corrupted sub-image resulting from an undersampled k-space data set;
FIG. 8 is an exemplary relatively motion-free image with a relatively low motion score;
FIG. 9 is another exemplary relatively motion-free image with a relatively low motion score;
FIG. 10 is an exemplary motion corrupted image with a relatively high motion score;
FIG. 11 is another exemplary motion corrupted image with a relatively high motion score;
FIG. 12 is a schematic diagram of an embodiment of a Convolutional Neural Network (CNN) configured to predict a motion score from an input sub-image;
FIG. 13 is an embodiment of a method wherein a CNN is configured to generate motion scores for single-coil sub-images and combine the multiple motion scores to generate a motion score for the entire set of sub-images from multiple coils;
FIG. 14 is a comparison between a first histogram of ground-truth scores for a set of sub-images and a second histogram of scores generated by a neural network for the same set of sub-images;
FIG. 15 is a comparison between a motion profile of a subject being imaged, a predicted motion score profile based on sub-images generated during imaging when the subject moves according to the motion profile, and a predicted motion score profile based on sub-images generated during imaging when the subject remains relatively stationary;
FIG. 16 is a process flow diagram of an embodiment of a method for predicting motion and scoring motion during a scan;
FIG. 17 is an embodiment of an algorithm for performing a scan, monitoring motion during the scan, and aggregating motion states when motion is detected;
FIG. 18 is an embodiment of an algorithm for performing a scan, monitoring motion during the scan, and aggregating final motion states when motion is detected;
FIG. 19 is an embodiment of an algorithm for performing a scan, monitoring motion during the scan, and adapting to motion during the scan when motion is detected;
FIG. 20 is an embodiment of a method of reconstructing a motion-free image from a motion-corrupted data set; and
FIG. 21 is another embodiment of a method of reconstructing a motion-free image from a motion-corrupted data set.
Detailed Description
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present invention, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Further, any numerical examples in the following discussion are intended to be non-limiting, and thus additional numbers, ranges, and percentages are within the scope of the disclosed embodiments.
As mentioned above, patient motion is one of the biggest causes of clinical MRI inefficiency, often requiring the patient to be re-scanned or even to make a second visit. Studies have shown that patient motion can lead to repeated acquisition sequences in up to 20% of MRI examinations, and the resulting loss of throughput represents a substantial annual cost per scanner.
The present disclosure includes systems and methods for detecting, timing, and adapting to patient motion during or after an MR scan without the need for external tracking hardware. Once the timing is known, various actions can be taken, including restarting the scan, reacquiring those portions of k-space that were acquired before the motion, or using existing data to correct for the motion. This correction can be done using a deep learning neural network or an iterative optimization method. The disclosed embodiments also include an adaptive system for detecting patient motion in real time during an MR scan without the need for external monitoring devices or navigators, and optionally adjusting scan parameters to compensate for inconsistent data. The system uses a neural network trained on motion-corrupted images (e.g., a convolutional neural network implemented as one or more specialized processors or simulated by software) to detect motion using as little as 1/16 of k-space. Once motion is detected, the system may track the individual sub-images to combine them into a motion-free image, or may adjust the scan to re-acquire those sections of k-space that were acquired before the motion occurred.
An exemplary system for performing the techniques described herein is discussed with reference to FIG. 1. Embodiments described herein may be performed by a Magnetic Resonance Imaging (MRI) system in which a particular imaging routine (e.g., an accelerated imaging routine for an MRI sequence) is initiated by a user (e.g., a radiologist). In addition, the MRI system may perform data acquisition, data correction, and image reconstruction. Thus, referring to figure 1, a magnetic resonance imaging system 10 is schematically shown as including a
The
The various coils of system 10 are controlled by external circuitry to generate the required fields and pulses and to read the emissions from the gyromagnetic material in a controlled manner. In the illustrated embodiment, the
It should be noted that although the
As shown, the scanner control circuitry 14 includes
The
The
In particular, some aspects of the present disclosure include methods for acquiring magnetic resonance data and processing the data to construct one or more motion corrected images. At least a portion of the methods disclosed herein may be performed by the system 10 described above with respect to fig. 1. That is, MRI system 10 may perform the acquisition techniques described herein and, in some embodiments, perform the data processing techniques described herein. It should be noted that after the acquisition described herein, system 10 may simply store the acquired data for later local and/or remote access, such as in a memory circuit (e.g., memory 62). Thus, the acquired data may be manipulated by one or more processors contained within a special purpose or general purpose computer when accessed locally and/or remotely. One or more processors may access the acquired data and execute routines stored on one or more non-transitory machine-readable media collectively storing instructions for performing methods including the motion detection, image processing, and reconstruction methods described herein.
To facilitate presentation of certain embodiments described herein, exemplary acquisition and reconstruction sequences are described below. However, unless explicitly stated otherwise, the present disclosure is not limited to such acquisitions and sequences.
In certain embodiments, 2D MR images are generated from Cartesian k-space using gradient echo (GRE) or Fast Spin Echo (FSE) pulse sequences and acquired using an RF receiver coil array of 8 or more coils. Each coil has a corresponding sensitivity to the RF signals generated during acquisition, and the sensitivity of each coil can be mapped to generate a sensitivity map for the coil array. Image reconstruction may involve generating a partial image corresponding to each coil by 2D Fourier transforming the data acquired by that coil (referred to as "coil data") and multiplying by the conjugate of the coil's sensitivity map. To generate a complete image, the partial images are added and the result is divided by the sum of the squares of the coil sensitivity maps to yield the final image.
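As a minimal illustration of the coil-combination reconstruction described above (a hypothetical sketch; the array shapes, FFT shift conventions, and function names are assumptions, not part of the disclosure):

```python
import numpy as np

def reconstruct(coil_kspace, sens_maps):
    """Sketch of sensitivity-weighted coil combination.

    coil_kspace: (ncoils, ny, nx) complex k-space data per coil
    sens_maps:   (ncoils, ny, nx) complex coil sensitivity maps
    """
    # 2D inverse Fourier transform of each coil's k-space data -> partial images
    coil_images = np.fft.ifft2(
        np.fft.ifftshift(coil_kspace, axes=(-2, -1)), axes=(-2, -1)
    )
    # Multiply each partial image by the conjugate of its sensitivity map and sum
    numerator = np.sum(coil_images * np.conj(sens_maps), axis=0)
    # Normalize by the sum of squared sensitivity magnitudes
    denominator = np.sum(np.abs(sens_maps) ** 2, axis=0)
    return numerator / np.maximum(denominator, 1e-12)
```

With uniform sensitivity maps, this reduces to an ordinary inverse 2D FFT of each coil's data followed by averaging.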
As the patient moves during the scan, the coil data may contain a mixture of Fourier components from two or more motion states. As discussed herein, a motion state may also be referred to as a "pose." In particular, the poses disclosed herein are intended to represent positions of the subject being imaged that correspond to a portion of k-space acquired at a given time (or time step, as described below). When two or more motion states or poses occur, the resulting reconstructed image will be corrupted and contain motion-related artifacts. One aspect of the present disclosure relates to detecting the presence of motion and identifying the time at which the motion occurs during scanning. According to certain disclosed embodiments, this motion detection may be performed after the scan has been completed, or may be performed during the scan.
The disclosed embodiments include methods and systems for generating neural networks trained to identify the presence and severity of motion during an MR scan. Fig. 2 depicts a process flow diagram of an embodiment of a
As shown, to generate training data, the
The resulting offset image is converted into k-space data (motion state 2 k-space data) and combined with the k-space data of the original pre-motion image (motion state 1 k-space data), e.g. according to a scan order describing how k-space is filled by phase encoding, which is a function of the time step of the pre-motion image. For example, the combining may involve replacing certain sections of k-space of the original image with k-space data of the offset image based on the order of k-space filling and the timing of the simulated motion. Further, converting the raw and offset images to k-space may involve multiplying the images by a coil sensitivity map and fourier transforming the image data to k-space.
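The replacement of k-space sections according to the scan order and motion timing can be sketched as follows (a hypothetical helper, assuming row-wise phase encoding and a single simulated motion event):

```python
import numpy as np

def make_hybrid_kspace(kspace_free, kspace_offset, scan_order, motion_step):
    """Build motion-corrupted hybrid k-space from two motion states.

    kspace_free:   (ny, nx) k-space of the motion-free image (motion state 1)
    kspace_offset: (ny, nx) k-space of the offset image (motion state 2)
    scan_order:    scan_order[t] lists the phase-encode rows filled at time step t
    motion_step:   time step at which the simulated motion occurs
    """
    hybrid = kspace_free.copy()
    for t, rows in enumerate(scan_order):
        if t >= motion_step:
            # Lines acquired after the motion come from the offset image
            hybrid[rows, :] = kspace_offset[rows, :]
    return hybrid
```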
Thus, the k-space data set resulting from the action of
In the illustrated embodiment, the
To help illustrate an example of applying a mask in the manner represented by
More specifically, as shown in FIG. 3, each point represents a phase encoding, e.g., 158 (echo), whose readout direction is orthogonal to the page. Each angled column of phase encodings represents the phase encodings produced by a single shot. Thus, in the illustrated embodiment, the echo train length is 8. Adjacent angled columns represent the time steps of adjacent shots. Thus, in the illustrated embodiment, the
It should be noted that the k-space scan order of fig. 3 is merely an example of a pattern, and other scan order patterns may be used in accordance with the presented embodiments. For example, according to the
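As one toy illustration of an interleaved scan-order pattern of the kind discussed above (the actual ordering of FIG. 3 may differ; `fse_scan_order` is a hypothetical name):

```python
def fse_scan_order(n_pe, etl):
    """Toy interleaved FSE scan order.

    n_pe: total number of phase-encode lines
    etl:  echo train length (phase encodings per shot)
    Returns a list where entry s holds the phase-encode indices acquired by
    shot s, so each shot's echoes are spread across k-space.
    """
    n_shots = n_pe // etl
    return [list(range(s, n_pe, n_shots)) for s in range(n_shots)]
```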
The
Using the motion corrupted sub-images and the non-motion sub-images, the
In some embodiments, the score may be calculated according to the following formulas:
Equation 1: D(x, y) = |I_corrupted(x, y) − I_free(x, y)|
Equation 2: S_coil = −Σ_(x,y) D(x, y)·log D(x, y)
Equation 3: Score = Σ_coil w_coil·S_coil
where I_corrupted and I_free are the motion-corrupted and motion-free sub-images for a given coil, the sum in Equation 2 runs over all pixels (x, y), and w_coil is the weight applied to each coil's entropy.
Equation 2 calculates, for a particular coil, the entropy S_coil of the difference between the two corresponding sub-images (motion-free and motion-corrupted). In particular, S_coil is the negative of the sum, over all pixels of the sub-image corresponding to that coil, of the difference times the logarithm of the difference. While the difference itself may be used to calculate the score, it has been found that using a logarithmic term may provide a distribution of scores that correlates more closely with the severity of the motion corruption.
Once the coil entropies are calculated, in some embodiments they may be combined according to Equation 3, which is a weighted sum of the entropies calculated for all coils, to obtain a final score. In this way, the entropy measure is summed over the pixels (using Equation 2) and over the coils (using Equation 3). It should be noted that in some embodiments the score calculation may differ. For example, in other embodiments the natural logarithm of the difference may be used in the score calculation instead of the entropy of the difference. Further, in some embodiments, to make the neural network and the score calculation compatible with different image resolutions (e.g., 256 × 256, 244 × 244, 312 × 321), the score may be computed using an average rather than the sum of the per-pixel entropies. Thus, because multiple outputs may be combined in some embodiments, the disclosed embodiments may use the sum over all pixels across all coils, the sum for each coil, the average across all pixels and coils, or the average for each coil. The resulting motion score may therefore take a value between 0 and 1 (e.g., where the score is an average) or span a higher range (e.g., where the score is a sum). In some embodiments, an alternative metric such as normalized mean square error may be used as the quality score. Other embodiments may use metrics such as 1) the difference, 2) the entropy of the difference, or 3) the logarithm of the difference or the logarithm of a weighted sum.
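Equations 1-3 can be sketched in code as follows (a hypothetical implementation; the epsilon guard against log(0) and the uniform default weights are assumptions):

```python
import numpy as np

def coil_motion_score(sub_corrupted, sub_free, eps=1e-12):
    """Entropy-of-difference motion score for a single coil."""
    # Equation 1: per-pixel magnitude of the difference between the sub-images
    d = np.abs(sub_corrupted - sub_free)
    # Equation 2: negative sum of d * log(d) over all pixels
    return float(-np.sum(d * np.log(d + eps)))

def net_motion_score(subs_corrupted, subs_free, weights=None):
    """Equation 3: weighted sum of the per-coil entropies."""
    scores = np.array(
        [coil_motion_score(c, f) for c, f in zip(subs_corrupted, subs_free)]
    )
    w = np.ones(len(scores)) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * scores))
```

Identical sub-images yield a score of zero, and larger motion-induced differences yield larger scores, consistent with the score behavior described for FIGS. 8-11.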
To help illustrate the efficacy of the motion score described with reference to block 164, scores for images with varying degrees of motion corruption were calculated using Equations 1-3. FIGS. 8, 9, 10, and 11 depict exemplary motion-corrupted images and the score associated with each image. For these images, the score is calculated as the sum of the per-pixel entropies and can range up to 7000. As shown, images with little or no motion have quality scores at the lower end of the score spectrum (<2000), while severely corrupted images fall at the higher end (>3000).
Returning to the
In certain embodiments, the neural network trained according to block 166 may be a Convolutional Neural Network (CNN) having one or more convolutional layers, one or more max-pooling layers, one or more flattening layers, one or more fully connected layers, or any combination thereof. One embodiment of a
In the representation of fig. 12, the data is shown in the form of a multi-dimensional input 181, for example (256 × 256 × 16) of the leftmost data set. This is because
The input amount to
The
A second
This pattern (two convolutional layers 182, then a single max-pooling layer 184) is repeated twice to produce fifth, sixth, seventh, eighth, ninth, and
As shown,
Although
The
Fig. 13 is a schematic diagram of a
The single-coil motion scores 204 are combined via a
The efficacy of the
More specifically, the
The ability of
The
Once the neural network of the present disclosure (e.g., CNN 180) has been trained and validated,
As with other methods described herein, the
As shown in fig. 16, the
The sub-images of the current shot are then combined with the sub-images of the previous shot (block 244). This can likewise be done on a multi-coil basis, as well as on a single-coil basis. This combination of two adjacent shots allows the neural network to determine whether motion has occurred because the motion time frame is much longer than the time frame of each shot.
Once the combined sub-image (multi-coil) is obtained, or once multiple combined sub-images (single coil) are obtained, the sub-images are passed into a neural network (block 246) to generate a motion score prediction. In the case of single-coil sub-images, additional steps may be performed as discussed with reference to fig. 13.
The motion score prediction is then compared to a threshold (block 248) to identify whether motion has occurred. For example, the threshold may be selected to minimize the false-positive and/or false-negative rate on the training data or on a separately generated set of validation data. As set forth with reference to fig. 15, if the predicted motion score is above the threshold, the neural network may be deemed to have predicted a motion event.
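The shot-by-shot detection flow of blocks 244-248 can be sketched as follows. The `score_fn` stands in for the trained network, and concatenating the two sub-images along the channel axis is one assumed way of "combining" adjacent shots; the patent does not specify the combination in this passage.

```python
import numpy as np

def combine_shots(prev_sub, curr_sub):
    """Block 244: stack two temporally adjacent shot sub-images along the
    channel axis so the network can compare them (assumed combination)."""
    return np.concatenate([prev_sub, curr_sub], axis=0)

def detect_motion(sub_images, score_fn, threshold=0.5):
    """Slide over consecutive shots; score_fn stands in for the trained
    network (block 246), and the comparison is block 248. Returns the
    shot indices at which motion is predicted to have occurred."""
    events = []
    for i in range(1, len(sub_images)):
        combined = combine_shots(sub_images[i - 1], sub_images[i])
        if score_fn(combined) > threshold:
            events.append(i)    # motion between shots i-1 and i
    return events
```

Note that each shot's sub-image is used twice, once as the current shot and once as the "previous shot" of the next comparison, which is what localizes the motion timing to a single shot.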
In some embodiments, the timing of motion may be resolved to a single shot, since the sub-image of each shot is used twice: once as newly acquired data and once as the "previous shot" sub-image for the next comparison. It has been demonstrated that this timing can be resolved to within 8 lines (3% of k-space) of a 256-line image.
Once motion is detected and the timing determined, various actions may be taken, including restarting the scan, reacquiring those portions of k-space acquired prior to the motion, or using existing data to correct for the motion to reconstruct an artifact-free MRI image. The manner in which the effect of motion can be mitigated depends, among other things, on the relationship between the time at which motion is detected and the time at which motion occurs. For example, where motion is not detected until after scanning is complete, the methods available to mitigate the effects of motion may be different than those available when motion is detected during scanning. Fig. 17-21 detail various methods that may be performed by the MR system 10 under different motion scenarios.
Fig. 17 depicts a process flow diagram of an embodiment of an algorithm 260 executed, for example, by the control and
Once the data has been acquired, algorithm 260 performs
If motion is detected at query 266, the previously collected data is saved as one motion state at operation 278 and a new motion state is started at operation 280. The new motion state initially comprises only the most recently collected k-space data. As the scan continues, k-space data is aggregated into this motion state as long as no further motion is detected. The operations described thus far therefore either add to the current motion state or create a new motion state, until the scan is complete.
At query 274, if multiple motion states exist, each aggregate (each set of k-space data corresponding to a single motion state) is reconstructed separately at operation 282. In this way, each reconstructed motion state yields a motion-free sub-image, and a plurality of motion-free sub-images 284 are generated.
At operation 286, the different sub-images may be combined using various known techniques or separately reconstructed into a complete image. For example, the sub-images 284 may be registered and combined to create a motion free image by methods known in the art. Alternatively, parallel imaging, compressed sensing, or sparse reconstruction neural networks may be used to reconstruct the k-space data for each motion state. The resulting images may then be registered and combined by methods known in the art. As one example, operation 286 may include iterative joint estimation of motion and images with temporal constraints. Timing constraints (i.e., motion timing) are obtained based on the neural network predictions.
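The state-aggregation and per-state reconstruction steps (operations 278/280/282) can be sketched as below. The data layout (a list of `(line_index, data)` pairs) and the zero-filled inverse FFT used as the per-state reconstruction are simplifying assumptions; the patent contemplates PICS or sparse-reconstruction networks for this step.

```python
import numpy as np

def aggregate_motion_states(kspace_lines, motion_events):
    """Split acquired k-space lines into motion states at each detected
    motion event (operations 278/280). kspace_lines is an ordered list of
    (line_index, data) pairs; motion_events holds the acquisition indices
    at which motion was detected."""
    states, current = [], []
    for acq_idx, line in enumerate(kspace_lines):
        if acq_idx in motion_events and current:
            states.append(current)      # freeze the previous motion state
            current = []
        current.append(line)
    if current:
        states.append(current)
    return states

def reconstruct_state(state, n_lines, n_cols):
    """Operation 282 (simplified): zero-fill the lines belonging to one
    motion state and inverse-FFT; a stand-in for PICS or a sparse
    reconstruction network."""
    k = np.zeros((n_lines, n_cols), dtype=complex)
    for idx, data in state:
        k[idx] = data
    return np.fft.ifft2(k)
```

The resulting per-state sub-images would then be registered and combined (operation 286) to form the final motion-free image.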
Using a similar sequence, instead of aggregating individual motion states, the k-space data can be adaptively re-acquired as shown in fig. 18. In particular, the algorithm 290 of FIG. 18 includes many of the same operations as the algorithm 260 of FIG. 17, including operations 262, 264, 268, 272, and 276 and queries 266, 270, and 274.
For algorithm 290, if motion is detected at query 266, the system (e.g., control and analysis circuitry 52, 60) determines at query 292 whether sufficient k-space has been filled to enable parallel imaging/compressed sensing (PICS) or reconstruction of a neural network using sparse images.
If sufficient k-space has not been filled, the algorithm 290 continues to acquire data by adding data to the new motion state at operation 294. If necessary, the k-space lines filled during the previous motion state can be re-acquired. The previous motion state data may be discarded or used for other purposes.
Once sufficient k-space has been filled to enable parallel imaging or sparse image reconstruction, the scan is ended at operation 296. At operation 298, a final image is reconstructed with only that portion of k-space acquired at the final motion state, using one of the aforementioned reconstruction algorithms (e.g., PICS reconstruction or sparse image reconstruction neural network).
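Query 292 amounts to a sufficiency test on the sampled fraction of k-space. A minimal sketch follows; the 40% minimum fraction is an illustrative placeholder, not a value from the patent, since the acceptable undersampling depends on the coil geometry and the chosen PICS or sparse-reconstruction method.

```python
def enough_for_sparse_recon(acquired_lines, total_lines, min_fraction=0.4):
    """Query 292 (simplified): decide whether the k-space filled during the
    current motion state suffices for PICS / sparse-image reconstruction.
    min_fraction is an assumed undersampling limit."""
    return len(set(acquired_lines)) / total_lines >= min_fraction
```

In practice this check could also account for which regions of k-space are covered (e.g., whether the fully sampled center needed for coil-sensitivity estimation has been acquired), not just the raw line count.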
In certain embodiments, the detected motion may be so severe that the data is essentially unusable. Fig. 19 depicts an embodiment of an algorithm 300 that involves ending the scan as early as possible if such motion is detected. For example, the algorithm 300 may include many of the operations and queries previously described with reference to figs. 17 and 18, except that once severe motion is detected at query 266, the scan ends at operation 302. For example, the motion score predicted by
By ending the scan in this manner, the operator may be allowed to take adaptive action at operation 304. For example, the operator may indicate that the subject is to remain stationary, provide assistance if the subject is difficult to remain stationary, or may utilize a more motion-robust imaging sequence (e.g., automatically selected by the system). Once the adaptive correction is performed, the scan may be restarted at operation 306.
The algorithm 300 may be used in conjunction with both algorithms 260, 290 described above by exploiting the fact that the motion score reflects not only the presence of motion but also its severity. For example, if severe motion is detected multiple times, the scan may end early, while one of the other algorithms may be implemented in response to a smaller motion score. The algorithm 300 also utilizes the quality score distribution from the neural network to allow selection of a particular tolerance for the motion score. For example, depending on the intended use of the completed scan, slight motion artifacts may not affect the diagnosis. The same neural network with dynamic thresholds thus allows multiple thresholds to be tailored to particular applications.
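The severity-dependent dispatch described above can be sketched as a small decision function. Both threshold values and the action names are illustrative assumptions; in practice they would be tuned per application and per the acceptable artifact level.

```python
def choose_action(motion_score, abort_threshold=0.9, motion_threshold=0.5):
    """Map a predicted motion score to a remedial action, combining
    algorithm 300 (early abort on severe motion) with algorithms 260/290
    (motion-state handling for moderate motion). Thresholds are assumed."""
    if motion_score >= abort_threshold:
        return "end_scan_and_coach_patient"   # severe: operations 302/304
    if motion_score >= motion_threshold:
        return "start_new_motion_state"       # moderate: operation 280
    return "continue_current_state"           # no significant motion
```

Because the thresholds are parameters rather than baked into the network, the same trained network can serve applications with different motion tolerances, as the text notes.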
The disclosed embodiments also include methods for image reconstruction when motion has occurred. For example, fig. 20 and 21 each depict an embodiment of a method that may be used to reconstruct an image free of motion artifacts. In particular, fig. 20 is a method 310 for reconstructing an image free of motion artifacts by first dividing coil data into pre-motion and post-motion data sets (block 312). For example, using the scan order and the timing of the motion occurrences, the k-space data of the coils are divided into two groups. The first set includes those portions of k-space that are scanned before subject movement occurs and the second set includes data after movement occurs.
After the coil data is partitioned, two images are reconstructed for each coil (block 314). The first image is reconstructed using zero-padded k-space data collected before the movement occurs and the second image is reconstructed using zero-padded k-space data collected after the movement occurs. In the method 310, the two sets of images for each coil are fed (block 316) into a deep learning neural network that reconstructs a single motion corrected image.
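Blocks 312 and 314 can be sketched as follows. The scan-order representation (a list of line indices in acquisition order) is an assumption, and the zero-padded inverse FFT is the simple per-coil reconstruction described in the text; the deep-learning network of block 316 that fuses the two image sets is not modeled here.

```python
import numpy as np

def split_and_reconstruct(coil_kspace, acquired_order, motion_index):
    """Blocks 312/314: split each coil's k-space into pre- and post-motion
    sets using the scan order and the detected motion timing, zero-pad the
    missing lines, and reconstruct two images per coil.

    coil_kspace: (n_coils, n_lines, n_cols) complex array.
    acquired_order: line indices in the order they were acquired.
    motion_index: position in acquired_order at which motion occurred.
    """
    pre_lines = acquired_order[:motion_index]
    post_lines = acquired_order[motion_index:]
    pre = np.zeros_like(coil_kspace)
    post = np.zeros_like(coil_kspace)
    pre[:, pre_lines, :] = coil_kspace[:, pre_lines, :]
    post[:, post_lines, :] = coil_kspace[:, post_lines, :]
    # Two zero-padded images per coil; these would feed the deep-learning
    # network of block 316 that outputs a single motion-corrected image.
    pre_imgs = np.fft.ifft2(pre, axes=(-2, -1))
    post_imgs = np.fft.ifft2(post, axes=(-2, -1))
    return pre_imgs, post_imgs
```

The zero-padding keeps both image sets at the full matrix size so they can be fed to a fixed-input network, at the cost of the aliasing artifacts the network must learn to resolve.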
On the other hand, and as depicted in fig. 21,
Technical effects of the invention include automatic detection and timing of patient motion, and mitigation of the effect of patient motion on the overall MR scan. The remedial action may include restarting the scan, reacquiring those portions of k-space that were acquired prior to the motion, or using existing data to correct the motion. In this manner, the motion detection and correction techniques described herein may improve the throughput of the MRI machine, improve patient experience, and reduce the burden on the MR technician.
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.