MRI system and method for detecting patient motion using neural networks

Document No.: 946147 · Publication date: 2020-10-30

Note: This technology, "MRI system and method for detecting patient motion using neural networks" (利用神经网络检测患者运动的MRI系统和方法), was created on 2020-04-21 by 伊莎贝尔·休肯斯费尔特詹森, 桑塔伊·安, 克里斯托弗·贾德森·哈迪, 伊齐克·马尔基尔 and 拉斐尔·. Abstract: The invention provides an MRI system and method for detecting patient motion using neural networks. A Magnetic Resonance Imaging (MRI) system includes control and analysis circuitry programmed to acquire Magnetic Resonance (MR) data using coil elements of the MRI system, analyze the MR data, and reconstruct the MR data into MR sub-images. The system also includes a trained neural network associated with the control and analysis circuitry to convert the MR sub-images into predictions relating to the presence and extent of motion corruption in the MR sub-images. The programming of the control and analysis circuitry includes instructions for controlling operation of the MRI system based at least in part on the predictions of the trained neural network.

1. A Magnetic Resonance Imaging (MRI) method comprising:

generating a first sub-image from first Magnetic Resonance (MR) partial k-space data acquired by an MRI system over a first time interval;

generating a second sub-image from second MR partial k-space data acquired by the MRI system over a second time interval, wherein the first time interval and the second time interval are temporally adjacent to each other;

combining the first sub-image and the second sub-image to generate a combined sub-image;

generating, using a trained neural network, a prediction relating to the presence and extent of motion occurring between the first time interval and the second time interval using the combined sub-image as an input; and

performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network.

2. The method of claim 1, wherein the partial k-space data from the first time interval and the partial k-space data from the second time interval are from a single coil within a multi-coil receiver array of the MRI system.

3. The method of claim 1, wherein the prediction related to the presence and extent of motion occurring between the first time interval and the second time interval is a motion score, and wherein the motion score is calculated such that motion is more likely to have an effect on the combined sub-image as the magnitude of the motion score increases.

4. The method of claim 3, wherein a motion score is calculated for each coil in a multi-coil receiver array of the MRI system to generate a motion score for the multi-coil receiver array, and the motion scores for the multi-coil receiver array are combined into a net motion score by taking an average, median, maximum, or minimum of individual coil scores.

5. The method of claim 4, comprising determining whether motion has occurred by comparing the net motion score to a threshold.
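The score-combining and thresholding of claims 4 and 5 can be sketched as follows (a minimal illustration; the function names, scores, and threshold value are assumptions, not part of the claims):

```python
import numpy as np

def net_motion_score(coil_scores, reduction="max"):
    """Combine per-coil motion scores into a single net score.

    Claim 4 allows the average, median, maximum, or minimum
    of the individual coil scores.
    """
    reducers = {
        "mean": np.mean,
        "median": np.median,
        "max": np.max,
        "min": np.min,
    }
    return float(reducers[reduction](coil_scores))

def motion_detected(coil_scores, threshold, reduction="max"):
    """Claim 5: declare motion when the net score exceeds a threshold."""
    return net_motion_score(coil_scores, reduction) > threshold

# Example: hypothetical scores from an 8-coil receiver array,
# where one coil registers strong motion.
scores = [0.12, 0.10, 0.95, 0.14, 0.11, 0.13, 0.09, 0.10]
print(motion_detected(scores, threshold=0.5))                      # → True (max picks the moving coil)
print(motion_detected(scores, threshold=0.5, reduction="median"))  # → False
```

The choice of reduction matters: a maximum is sensitive to motion seen by any single coil, while a median suppresses coil-specific outliers.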

6. The method of claim 1, wherein generating the first sub-image comprises using only the first MR data collected over the first time interval, and wherein generating the second sub-image comprises using only the second MR data collected over the second time interval.

7. The method of claim 1, wherein the first sub-image and the second sub-image are complex-valued and are combined by addition.

8. The method according to claim 1, wherein the first sub-image and the second sub-image are combined by aggregating their respective partial k-space data in k-space and then converting the aggregate to the image domain.

9. The method of claim 1, wherein generating the first sub-image using the first MR data collected over the first time interval and generating the second sub-image using the second MR data collected over the second time interval comprises: using data collected by all receive coils of a receive coil array of the MRI system during the first time interval and the second time interval, respectively.

10. The method of claim 3, wherein the motion score is based on a weighted sum, or a logarithm of a weighted sum, or an average of a per-pixel measure of differences between the combined sub-image and a combined sub-image obtained without motion between the first time interval and the second time interval.

11. The method of claim 10, wherein the per-pixel metric is a per-pixel entropy of the difference, a per-pixel difference, a per-pixel logarithm of the difference, or a per-pixel logarithm of an absolute value of the difference.
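In symbols (notation ours; $I_m$, $I_0$, $w_{ij}$, and $\phi$ are illustrative labels, not claim language), the scores of claims 10 and 11 take forms such as:

```latex
Let $I_m$ be the combined sub-image acquired with motion, $I_0$ the corresponding
combined sub-image obtained without motion, and $d_{ij} = I_m(i,j) - I_0(i,j)$
the per-pixel difference. Claim 10 then covers scores of the form
\[
  s = \sum_{i,j} w_{ij}\,\phi(d_{ij}), \qquad
  s = \log \sum_{i,j} w_{ij}\,\phi(d_{ij}), \qquad
  s = \frac{1}{N}\sum_{i,j} \phi(d_{ij}),
\]
where the per-pixel metric $\phi$ of claim 11 may be
\[
  \phi(d) = d, \qquad
  \phi(d) = \log d, \qquad
  \phi(d) = \log |d|,
\]
or the per-pixel entropy $\phi(p_{ij}) = -\,p_{ij}\log p_{ij}$ computed on the
normalized difference $p_{ij} = |d_{ij}| \big/ \sum_{i,j} |d_{ij}|$.
```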

12. The method of claim 1, wherein the trained neural network is a trained convolutional neural network having a plurality of convolutional layers, a plurality of max-pooling layers, and a plurality of fully-connected layers.
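A plausible instance of the claim-12 architecture can be traced shape-by-shape (the claim fixes only the layer types, so every size below is an assumption):

```python
# Hypothetical layer stack for the motion-scoring CNN of claim 12:
# alternating convolutional and max-pooling layers feeding
# fully-connected layers. All kernel/stride/channel choices are
# illustrative assumptions, not taken from the patent.
def conv2d_out(size, kernel=3, stride=1, pad=1):
    """Output spatial size of a square 2D convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def maxpool_out(size, kernel=2, stride=2):
    """Output spatial size of a square 2D max-pool."""
    return (size - kernel) // stride + 1

def trace_shapes(input_hw=256, blocks=4):
    """Alternate conv + max-pool blocks, then report the flattened
    feature count that would feed the fully-connected layers."""
    size, channels = input_hw, 1
    for _ in range(blocks):
        size = conv2d_out(size)   # 3x3 conv, stride 1, pad 1: size unchanged
        size = maxpool_out(size)  # 2x2 max-pool: size halved
        channels *= 2             # double the feature maps each block
    return size, channels, size * size * channels

print(trace_shapes())  # → (16, 16, 4096)
```

With a 256x256 sub-image and four conv/pool blocks, 4096 flattened features would enter the fully-connected layers that regress the scalar motion score.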

13. The method of claim 1, wherein the trained neural network is trained according to a training process comprising:

generating training data by a training data generation process, the training data generation process comprising:

simulating rigid body motion by applying a translation and/or rotation to a motion-free image to generate an offset image;

replacing portions of the motion-free k-space data of the motion-free image with k-space data of the offset image to generate motion-corrupted hybrid k-space data according to a scan order, wherein the scan order describes how k-space is filled by phase encoding as a function of a time step;

simulating partial data collection by applying a mask to the hybrid k-space data according to the scan order to generate partial k-space data;

generating motion-corrupted sub-images from the partial k-space data; and

calculating a motion score for the motion-corrupted sub-image based at least in part on a difference between the motion-corrupted sub-image and a corresponding motion-free sub-image generated using corresponding partial k-space data of the motion-free image;

repeating at least a portion of the training data generation process by applying different translations and/or rotations to the motion-free image or other motion-free images to produce a set of motion-corrupted sub-images and associated motion scores; and

training a neural network using at least a portion of the set of motion corrupted sub-images and associated motion scores to generate the trained neural network.
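The training-data generation process of claim 13 can be sketched end-to-end as a toy simulation (image size, scan order, motion timing, and the use of a pure translation are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a motion-free magnitude image.
motion_free = rng.random((64, 64))

# 1. Simulate rigid-body motion (a pure translation via np.roll here;
#    a full pipeline would also apply rotations).
offset = np.roll(motion_free, shift=(3, -2), axis=(0, 1))

# 2. Build hybrid k-space: per the scan order, phase-encode lines
#    acquired after the simulated motion come from the offset image.
k_free = np.fft.fft2(motion_free)
k_offset = np.fft.fft2(offset)
scan_order = np.arange(64)     # sequential phase encoding (assumed)
motion_time_step = 40          # motion occurs at this time step
k_hybrid = k_free.copy()
k_hybrid[scan_order >= motion_time_step, :] = k_offset[scan_order >= motion_time_step, :]

# 3. Simulate partial data collection: mask k-space down to the
#    lines of adjacent time steps straddling the motion.
mask = (scan_order >= 32) & (scan_order < 48)
k_partial = np.where(mask[:, None], k_hybrid, 0)
k_partial_free = np.where(mask[:, None], k_free, 0)

# 4. Reconstruct sub-images from the partial k-space data.
sub_corrupted = np.abs(np.fft.ifft2(k_partial))
sub_free = np.abs(np.fft.ifft2(k_partial_free))

# 5. Score: entropy of the normalized difference image.
diff = np.abs(sub_corrupted - sub_free)
p = diff / diff.sum() + 1e-12
motion_score = float(-(p * np.log(p)).sum())
print(round(motion_score, 3))
```

Repeating steps 1-5 with different translations/rotations and different masks yields the labeled set of motion-corrupted sub-images and scores on which the network is trained.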

14. The method of claim 1, wherein performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network comprises: in response to determining that the prediction indicates that motion occurred between the first time interval and the second time interval:

aggregating the first MR data and MR data collected prior to the first time interval into k-space data corresponding to a first motion state as a first aggregate;

aggregating the second MR data and MR data collected after the second time interval into k-space data corresponding to a second motion state as a second aggregate; and

reconstructing the first aggregate and the second aggregate to produce a first motion-free sub-image and a second motion-free sub-image, respectively.

15. The method of claim 14, wherein performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network further comprises registering and combining the first and second motion-free sub-images.

16. The method of claim 1, wherein performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network comprises: in response to determining that the prediction indicates that motion occurred between the first time interval and the second time interval:

determining whether a sufficient amount of k-space has been filled to produce a motion-free image;

in response to determining that an insufficient amount of k-space has been filled, reacquiring pre-motion MR data corresponding to the portion of k-space that was filled prior to the second time interval; and

aggregating the reacquired MR data with the second MR data into a new motion state.

17. The method of claim 16, wherein performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network further comprises performing a parallel imaging/compressed sensing (PICS) reconstruction to generate a motion-free image from the first MR data, or from the combined second MR data and reacquired MR data.

18. A computer-based method of generating a trained neural network to generate predictions relating to the presence and extent of motion in Magnetic Resonance (MR) sub-images, the method comprising:

providing training data comprising motion-corrupted sub-images as input and corresponding motion scores as output;

training a neural network using the training data to convert MR sub-images to corresponding motion scores indicating whether motion occurred during an MRI scan used to acquire data for the MR sub-images;

wherein the motion-corrupted sub-images are generated from at least one motion-free sub-image, and wherein the motion score is calculated based on a weighted sum of per-pixel difference metrics between the at least one motion-free sub-image and a respective one of the motion-corrupted sub-images.

19. The method of claim 18, comprising generating the training data by a training data generation process comprising:

applying a translation and/or rotation to a motion-free image to produce an offset image;

replacing a portion of the motion-free k-space data associated with the motion-free image with motion-corrupted k-space data associated with the offset image to generate hybrid k-space data; and

generating partial k-space data by simulating partial data collection, applying a mask to the hybrid k-space data according to a scan order, wherein the scan order defines how k-space is filled with the motion-free k-space data as a function of time step, and wherein the mask defines a section of k-space from which the partial k-space data is collected, the section corresponding to adjacent time steps in the scan order.

20. The method of claim 19, wherein the training data generation process further comprises:

generating motion-corrupted sub-images from the partial k-space data;

generating a motion-free sub-image using a portion of the motion-free k-space data, the portion of the motion-free k-space data corresponding to a same section of k-space defined by the mask; and

calculating a motion score for the motion-corrupted sub-image as an entropy of a difference between the motion-free sub-image and the motion-corrupted sub-image.

21. The method of claim 20, wherein a magnitude of the motion score corresponds to a degree to which motion affects the motion-corrupted sub-image.

22. A method according to claim 20, comprising repeating at least part of the training data generation process by applying different translations and/or rotations to the motion-free image or other motion-free images to produce a set of motion-corrupted sub-images and associated motion scores.

23. A Magnetic Resonance Imaging (MRI) system comprising:

control and analysis circuitry including programming to acquire Magnetic Resonance (MR) data using coil elements of the MRI system, analyze the MR data, and reconstruct the MR data into MR sub-images; and

a trained neural network associated with the control and analysis circuitry to convert the MR sub-images into predictions relating to the presence and extent of motion corruption in the MR sub-images;

Wherein the programming of the control and analysis circuitry includes instructions to control operation of the MRI system based at least in part on the prediction of the trained neural network.

24. The system of claim 23, wherein the programming of the control and analysis circuitry includes instructions to:

acquiring the MR data by an acquisition process performed according to a scan order defining how k-space is filled as a function of time step;

generating MR sub-images from MR data acquired in adjacent time steps in the scan order; and

providing the MR sub-images to the trained neural network for conversion into the predictions relating to the presence and extent of motion corruption in the MR sub-images.

25. The system of claim 23, wherein the programming to reconstruct the MR data into MR sub-images includes instructions to: aggregating MR data according to a motion state identified based on a prediction of the trained neural network.

Background

Generally, a Magnetic Resonance Imaging (MRI) examination is based on the interactions of a main magnetic field, a Radio Frequency (RF) magnetic field, and time-varying gradient magnetic fields with a gyromagnetic material having nuclear spins within a subject of interest, such as a patient. Certain gyromagnetic materials, such as hydrogen nuclei in water molecules, have characteristic responses to external magnetic fields. The precession of the spins of these nuclei can be influenced by manipulating the fields to produce RF signals that can be detected, processed, and used to reconstruct a useful image.

Patient motion is one of the biggest causes of clinical MRI inefficiency, often requiring a re-scan or even a second patient visit. In particular, patient motion can lead to MR image blurring, artifacts, and other inconsistencies. Some methods of correcting motion require dedicated hardware for monitoring motion (increasing cost and extending patient setup time) or navigator sequences (which take time away from the imaging sequence). There is therefore a need for improved methods of data acquisition and reconstruction in magnetic resonance imaging techniques sensitive to patient motion.

Disclosure of Invention

In one embodiment, a Magnetic Resonance Imaging (MRI) method includes generating a first sub-image from first Magnetic Resonance (MR) partial k-space data acquired by the MRI system over a first time interval, and generating a second sub-image from second MR partial k-space data, from a different portion of k-space, acquired by the MRI system over a second time interval. The first time interval and the second time interval are adjacent to each other in time. The method further includes combining the first sub-image and the second sub-image to generate a combined sub-image; generating, using a trained neural network with the combined sub-image as input, a prediction relating to the presence and extent of motion occurring between the first time interval and the second time interval; and performing further operations of the MRI system based at least in part on the prediction generated by the trained neural network.

In another embodiment, a computer-based method of generating a trained neural network to generate predictions related to the presence and extent of motion in Magnetic Resonance (MR) sub-images includes providing training data comprising motion-corrupted sub-images as input and corresponding motion scores as output, and training the neural network using the training data to convert MR sub-images into corresponding motion scores that indicate whether motion occurred during the MRI scan used to acquire the sub-image data. The motion-corrupted sub-images are generated from at least one motion-free sub-image, and each motion score is calculated as the entropy of the difference between the at least one motion-free sub-image and a respective one of the motion-corrupted sub-images.

In a further embodiment, a Magnetic Resonance Imaging (MRI) system includes control and analysis circuitry having programming to acquire Magnetic Resonance (MR) data using coil elements of the MRI system, analyze the MR data, and reconstruct the MR data into MR sub-images. The system also includes a trained neural network associated with the control and analysis circuitry to convert the MR sub-images into predictions relating to the presence and extent of motion corruption in the MR sub-images. The programming of the control and analysis circuitry includes instructions for controlling operation of the MRI system based at least in part on the predictions of the trained neural network.

Drawings

These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

figure 1 is a schematic diagram of an embodiment of a magnetic resonance imaging system configured to perform data acquisition, motion detection and scoring, and image reconstruction as described herein;

FIG. 2 is a process flow diagram of an embodiment of a method for training a neural network using motion corrupted images to detect motion during a scan;

FIG. 3 is an exemplary Fast Spin Echo (FSE) type scanning sequence in which the phase encoding is a function of the time step;

FIG. 4 is a motion-free sub-image resulting from an undersampled k-space data set;

FIG. 5 is a motion-corrupted sub-image resulting from an undersampled k-space data set;

FIG. 6 is a motion-free sub-image resulting from an undersampled k-space data set;

FIG. 7 is a motion-corrupted sub-image resulting from an undersampled k-space data set;

FIG. 8 is an exemplary relatively motion-free image with a relatively low motion score;

FIG. 9 is another exemplary relatively motion-free image with a relatively low motion score;

FIG. 10 is an exemplary motion corrupted image with a relatively high motion score;

FIG. 11 is another exemplary motion corrupted image with a relatively high motion score;

fig. 12 is a schematic diagram of an embodiment of a Convolutional Neural Network (CNN) configured to predict a motion score from an input sub-image;

fig. 13 is an embodiment of a method wherein a CNN is configured to generate a motion score for each single-coil sub-image, and the multiple motion scores are combined to generate a motion score for the entire set of sub-images from multiple coils;

FIG. 14 is a comparison between a first histogram of ground-truth scores for a set of sub-images and a second histogram of scores generated by a neural network for the same set of sub-images;

FIG. 15 is a comparison between a motion profile of a subject being imaged, a predicted motion score profile based on sub-images generated during imaging when the subject moves according to the motion profile, and a predicted motion score profile based on sub-images generated during imaging when the subject remains relatively stationary;

FIG. 16 is a process flow diagram of an embodiment of a method for predicting motion and scoring motion during a scan;

FIG. 17 is an embodiment of an algorithm for performing a scan, monitoring motion during the scan, and aggregating motion states when motion is detected;

FIG. 18 is an embodiment of an algorithm for performing a scan, monitoring motion during the scan, and aggregating final motion states when motion is detected;

FIG. 19 is an embodiment of an algorithm for performing a scan, monitoring motion during the scan, and adapting to motion during the scan when motion is detected;

FIG. 20 is an embodiment of a method of reconstructing a motion-free image from a motion-corrupted data set; and

FIG. 21 is another embodiment of a method of reconstructing a motion-free image from a motion-corrupted data set.

Detailed Description

One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.

When introducing elements of various embodiments of the present invention, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Further, any numerical examples in the following discussion are intended to be non-limiting, and thus additional numbers, ranges, and percentages are within the scope of the disclosed embodiments.

As mentioned above, patient motion is one of the biggest causes of clinical MRI inefficiency, often requiring a re-scan or even a second patient visit. Studies have shown that patient motion leads to repeated acquisition sequences in up to 20% of MRI examinations. The resulting loss of throughput represents a substantial annual cost per scanner.

The present disclosure includes a system and method for detecting, timing, and adapting to patient motion during or after an MR scan without the need for external tracking hardware. Once the timing of the motion is known, various actions can be taken, including restarting the scan, reacquiring those portions of k-space that were acquired before the motion, or correcting for the motion using the existing data. The correction can be done using a deep-learning neural network or an iterative optimization method. The disclosed embodiments also include an adaptive system for detecting patient motion in real time during an MR scan, without external monitoring devices or navigators, and optionally adjusting scan parameters to compensate for inconsistent data. The system uses a neural network trained on motion-corrupted images (e.g., a convolutional neural network implemented as one or more specialized processors or emulated in software) to detect motion in sub-images reconstructed from as little as 1/16 of k-space. Once motion is detected, the system may track the individual sub-images in order to combine them into a motion-free image, or may adjust the scan to reacquire those sections of k-space that were acquired before the motion occurred.
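The adaptive loop described above might be organized as follows (a sketch only; the function names are hypothetical stand-ins for MRI-system hooks, and the grouping of intervals into motion states follows the aggregation idea of claim 14):

```python
def monitored_scan(acquire_interval, score_pair, threshold, n_intervals):
    """Group acquired k-space intervals into motion states by scoring
    adjacent sub-image pairs with the trained network.

    acquire_interval(t) -> k-space data for time interval t (stand-in)
    score_pair(a, b)    -> CNN motion score for the combined sub-image
                           built from intervals a and b (stand-in)
    """
    motion_states = []   # each state: list of consecutive interval data
    prev = None
    for t in range(n_intervals):
        data = acquire_interval(t)
        if prev is None or score_pair(prev, data) > threshold:
            # First interval, or motion detected between t-1 and t:
            # open a new motion state. A real system might instead
            # reacquire the pre-motion portion of k-space here.
            motion_states.append([])
        motion_states[-1].append(data)
        prev = data
    return motion_states

# Toy run: intervals are just their indices; the scorer reports motion
# between intervals 4 and 5 only.
states = monitored_scan(lambda t: t,
                        lambda a, b: 1.0 if a == 4 else 0.0,
                        threshold=0.5, n_intervals=8)
print(states)  # → [[0, 1, 2, 3, 4], [5, 6, 7]]
```

Each resulting motion state can then be reconstructed separately and the sub-images registered and combined, as described for claims 14 and 15.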

An exemplary system for performing the techniques described herein is discussed with reference to FIG. 1. Embodiments described herein may be performed by a Magnetic Resonance Imaging (MRI) system in which a particular imaging routine (e.g., an accelerated imaging routine for an MRI sequence) is initiated by a user (e.g., a radiologist). In addition, the MRI system may perform data acquisition, data correction, and image reconstruction. Thus, referring to figure 1, a magnetic resonance imaging system 10 is schematically shown as including a scanner 12, scanner control circuitry 14, and system control circuitry 16. According to embodiments described herein, the MRI system 10 is generally configured to perform MR imaging, such as imaging sequences with adaptive motion correction, various weighting techniques, fluid attenuation techniques, perfusion techniques, tensor imaging, and so forth. The system 10 further comprises: a remote access and storage system or device, such as a Picture Archiving and Communication System (PACS) 18; or other means, such as remote radiological equipment, to enable on-site or off-site access to the data acquired by the system 10. In this way, the acquired data may be acquired and then processed and evaluated on-site or off-site. While the MRI system 10 may include any suitable scanner or detector, in the illustrated embodiment, the system 10 includes a whole-body scanner 12 having a housing 20 with an aperture 22 formed therethrough. The diagnostic table 24 is movable into the bore 22 so that a patient 26 can be positioned therein for imaging selected anatomical structures within the patient.

The scanner 12 includes a series of associated coils for generating controlled magnetic fields for exciting gyromagnetic material within the anatomy of the subject to be imaged. In particular, a primary magnet coil 28 is provided for generating a primary magnetic field generally aligned with the bore 22. A series of gradient coils 30, 32, and 34 allow for the generation of controlled gradient magnetic fields during an examination sequence for positionally encoding certain gyromagnetic nuclei within the patient 26. A Radio Frequency (RF) coil 36 is provided and is configured to generate radio frequency pulses for exciting certain gyromagnetic nuclei within the patient. In addition to the coils that may be local to the scanner 12, the system 10 also includes a set of receive coils 38 (e.g., a phased coil array) configured for placement proximal to (e.g., against) the patient 26. The receive coils 38 may have any geometry, including enclosed and single-sided geometries. For example, the receive coils 38 may include cervical/thoracic/lumbar (CTL) coils, head coils, single-sided spine coils, and so forth. Generally, the receive coils 38 are placed on or near the patient 26 in order to receive the weak RF signals (weak relative to the transmit pulses generated by the scanner coils) generated by certain gyromagnetic nuclei within the patient as the patient 26 returns to the relaxed state. The receive coils 38 may be switched off so as not to receive or resonate with the transmit pulses generated by the scanner coils, and may be switched on so as to receive or resonate with the RF signals generated by the relaxing gyromagnetic nuclei.

The various coils of system 10 are controlled by external circuitry to generate the required fields and pulses and to read the emissions from the gyromagnetic material in a controlled manner. In the illustrated embodiment, the primary power source 40 provides power to the primary field coils 28. A driver circuit 42 is provided for applying pulses to the gradient field coils 30, 32 and 34. Such circuitry may include amplification and control circuitry for supplying current to the coils as defined by the sequence of digitized pulses output by the scanner control circuitry 14. Another control circuit 44 is provided for regulating the operation of the RF coil 36. The circuit 44 includes switching means for alternating between an active mode of operation and a passive mode of operation in which the RF coil 36 transmits signals and does not transmit signals, respectively. The circuitry 44 also includes amplification circuitry for generating the RF pulses. Similarly, the receive coil 38 is connected to a switch 46 that is capable of switching the receive coil 38 between receive and non-receive modes such that in the receive state, the receive coil 38 resonates with RF signals generated by relaxation gyromagnetic nuclei within the patient 26 and in the non-receive state, they do not resonate with RF energy from the transmit coil (i.e., coil 36) to prevent unintended operation from occurring. Additionally, a receive circuit 48 is provided for receiving data detected by the receive coil 38, and may include one or more multiplexing and/or amplification circuits.

It should be noted that although the scanner 12 and control/amplification circuitry described above are shown as being coupled by a single line, many such lines may be present in a practical example. For example, separate lines may be used for control, data communication, and the like. In addition, appropriate hardware may be provided along each type of line for proper processing of the data. Indeed, various filters, digitizers, and processors may be disposed between the scanner and either or both of the scanner control circuitry 14 and system control circuitry 16. As a non-limiting example, although shown as a single unit, certain control and analysis circuitry described in detail below includes additional hardware (such as image reconstruction hardware) configured to perform the motion correction and image reconstruction techniques described herein. Furthermore, in certain embodiments, the control and analysis circuitry described herein may be associated with a trained neural network for motion detection and/or another trained neural network for image reconstruction. Indeed, wherever neural networks are described in this disclosure, it should be noted that neural networks may be associated with (e.g., as part of or connected to) MRI system 10. The neural network may be implemented, for example, as a specific hardware component (e.g., a dedicated processor), or may be implemented as software via emulation on a computing platform.

As shown, the scanner control circuitry 14 includes interface circuitry 50 that outputs signals for driving the gradient field coils and the RF coils, and for receiving data representative of magnetic resonance signals generated during an examination sequence. The interface circuit 50 is coupled to a control and analysis circuit 52. Based on the defined scheme selected via the system control circuit 16, the control and analysis circuit 52 executes commands for the drive circuit 42 and the circuit 44. The control and analysis circuitry 52 is also operative to receive the magnetic resonance signals and perform subsequent processing prior to transmission of the data to the system control circuitry 16. The scanner control circuitry 14 also includes one or more memory circuits 54 that store configuration parameters, pulse sequence descriptions, inspection results, and the like during operation. The interface circuit 56 is coupled to the control and analysis circuit 52 for exchanging data between the scanner control circuit 14 and the system control circuit 16. Such data will typically include a selection of the particular examination sequence to be performed, configuration parameters for those sequences, and acquired data that may be transmitted from the scanner control circuit 14 in raw or processed form for subsequent processing, storage, transmission, and display. Thus, in certain embodiments, the control and analysis circuitry 52, while shown as a single unit, may comprise one or more hardware devices.

The system control circuitry 16 includes interface circuitry 58 that receives data from the scanner control circuitry 14 and transmits data and commands back to the scanner control circuitry 14. The interface circuit 58 is coupled to a control and analysis circuit 60, which may comprise a CPU in a multi-function or special purpose computer or workstation. The control and analysis circuitry 60 is coupled to memory circuitry 62 to store programming code for operating the MRI system 10, as well as to store processed image data for later reconstruction, display, and transmission. The programming code may execute one or more algorithms capable of, for example, executing non-cartesian imaging sequences and processing sampled image data (e.g., leaves of data, undersampled data, fluid attenuation data), as discussed in detail below. Additional interface circuitry 64 may be provided for exchanging image data, configuration parameters, etc. with external system components, such as the remote access and storage device 18. Finally, the system control and analysis circuitry 60 may include various peripherals for facilitating an operator interface and generating a hard copy of the reconstructed image. In the illustrated embodiment, these peripheral devices include a printer 60, a monitor 62, and a user interface 64 that includes devices such as a keyboard or mouse.

The scanner 12 and its associated control and analysis circuitry 52 generate magnetic fields and radio frequency pulses in a controlled manner to excite and encode certain gyromagnetic materials within the patient 26. The scanner 12 and control and analysis circuitry 52 also sense signals emanating from such material and create an image of the material being scanned. In certain embodiments, the scan may comprise a Fast Spin Echo (FSE) scan, a gradient echo (GRE) scan sequence, or the like. It should be noted that the described MRI system is provided as an example only, and other system types, such as so-called "open" MRI systems, may also be used. Similarly, such systems may be rated by the strength of their primary magnets, and any suitably rated system capable of performing the data acquisition and processing described below may be employed.

In particular, some aspects of the present disclosure include methods for acquiring magnetic resonance data and processing the data to construct one or more motion corrected images. At least a portion of the methods disclosed herein may be performed by the system 10 described above with respect to fig. 1. That is, MRI system 10 may perform the acquisition techniques described herein and, in some embodiments, perform the data processing techniques described herein. It should be noted that after the acquisition described herein, system 10 may simply store the acquired data for later local and/or remote access, such as in a memory circuit (e.g., memory 62). Thus, the acquired data may be manipulated by one or more processors contained within a special purpose or general purpose computer when accessed locally and/or remotely. One or more processors may access the acquired data and execute routines stored on one or more non-transitory machine-readable media collectively storing instructions for performing methods including the motion detection, image processing, and reconstruction methods described herein.

To facilitate presentation of certain embodiments described herein, exemplary acquisition and reconstruction sequences are described below. However, unless explicitly stated otherwise, the present disclosure is not limited to such acquisitions and sequences.

In certain embodiments, 2D MR images are generated from cartesian k-space using gradient echo (GRE) or Fast Spin Echo (FSE) pulse sequences and acquired using an RF receiver coil array of 8 or more coils. Each coil has a corresponding sensitivity to RF signals generated during acquisition, and the sensitivity of each coil can be mapped to generate a sensitivity map of the coil array. Image reconstruction may involve generating a partial image corresponding to each coil by 2D fourier transforming the data acquired by the particular coil (referred to as "coil data") and multiplying with the conjugate of the coil sensitivity map. To generate a complete image, the partial images are added and the result is then divided by the sum of the squares of the coil sensitivity maps to yield the final image.
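The coil-combined reconstruction just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the disclosed implementation; the function name, the centered-FFT convention, and the small epsilon guarding the division are assumptions:

```python
import numpy as np

def reconstruct_coil_combined(coil_kspace, sens_maps):
    """Sketch of the reconstruction described above: per-coil 2D inverse
    Fourier transform, multiplication by the conjugate sensitivity map,
    summation over coils, and normalization by the sum of squared
    sensitivities.

    coil_kspace: complex array (n_coils, ny, nx) of per-coil k-space data
    sens_maps:   complex array (n_coils, ny, nx) of coil sensitivities
    """
    # Per-coil partial images via a centered 2D inverse Fourier transform.
    partial = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(coil_kspace, axes=(-2, -1))),
        axes=(-2, -1))
    # Multiply by conjugate sensitivities and sum over the coil axis.
    numerator = np.sum(np.conj(sens_maps) * partial, axis=0)
    # Divide by the sum of squares of the sensitivity maps.
    denominator = np.sum(np.abs(sens_maps) ** 2, axis=0)
    return numerator / np.maximum(denominator, 1e-12)
```

With noiseless data and perfectly known sensitivity maps this combination recovers the underlying image exactly; in practice the maps are estimated, e.g., from calibration data.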

As the patient moves during the scan, the coil data may contain a mixture of fourier components from two or more motion states. As discussed herein, a motion state may also be referred to as a "pose." In particular, the poses disclosed herein are intended to represent positions of the subject being imaged that correspond to a portion of k-space acquired at a given time (or time step, as described below). When two or more motion states or poses occur, the resulting reconstructed image will be corrupted and contain motion-related artifacts. An aspect of the present disclosure relates to detecting the presence of motion and identifying the time at which the motion occurs during scanning. According to certain disclosed embodiments, this motion detection may be performed after the scan has been completed, or may also be performed during the scan.

The disclosed embodiments include methods and systems for generating neural networks trained to identify the presence and severity of motion during an MR scan. Fig. 2 depicts a process flow diagram of an embodiment of a method 150 for training a neural network to detect motion during a scan using motion corrupted images. In general, the method 150 involves using a training data generation process to produce training data, and using at least a portion of the training data to train a neural network, such as a convolutional neural network, to generate predictions related to the presence and extent of motion occurring between time steps using only sub-images (with or without motion corruption) as inputs. For example, during training, both motion corrupted sub-images and non-motion sub-images may be used as training inputs so that the network learns to distinguish between the two. A non-motion sub-image can be considered a motion corrupted image with simulation parameters "0 translation, 0 rotation". The method 150 may be performed, for example, in whole or in part, by a computing element of the MRI system 10 or another specially programmed computing system, and thus may be considered a computer-implemented method.

As shown, to generate training data, the method 150 includes simulating (block 152) motion, such as rigid body motion. In particular, the operations of block 152 begin with a no-motion image, which may be translated and/or rotated in a controlled manner so that both the timing and the nature of the motion are known. This results in an offset image. The acts of block 152 may be performed one or more times by applying different translations and/or rotations to generate multiple offset images whose timing and motion states are known.

The resulting offset image is converted into k-space data (motion state 2 k-space data) and combined with the k-space data of the original pre-motion image (motion state 1 k-space data), e.g., according to a scan order describing how k-space is filled by phase encoding as a function of time step. For example, the combining may involve replacing certain sections of k-space of the original image with k-space data of the offset image, based on the order of k-space filling and the timing of the simulated motion. Further, converting the original and offset images to k-space may involve multiplying the images by the coil sensitivity maps and Fourier transforming the image data to k-space.
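A hedged sketch of this simulation step, assuming a simple integer-pixel translation (via np.roll) as the rigid-body motion and a scan order given as per-shot lists of phase-encode rows; all names are illustrative:

```python
import numpy as np

def simulate_motion_kspace(image, sens_maps, scan_order, motion_shot, shift):
    """Sketch of the block-152 simulation: translate the no-motion image
    (here a simple integer-pixel np.roll shift), convert both poses to
    multi-coil k-space, and fill each phase-encode line from the pose
    that was "current" when that line was acquired.

    scan_order:  list of arrays; scan_order[s] holds the phase-encode row
                 indices acquired during shot s (a function of time step)
    motion_shot: index of the first shot acquired after the motion
    shift:       (dy, dx) integer translation applied to create pose 2
    """
    def to_kspace(img):
        return np.fft.fftshift(
            np.fft.fft2(np.fft.ifftshift(sens_maps * img, axes=(-2, -1))),
            axes=(-2, -1))

    k_pose1 = to_kspace(image)                                # motion state 1
    k_pose2 = to_kspace(np.roll(image, shift, axis=(0, 1)))   # motion state 2

    # Replace the k-space lines acquired after the motion with pose-2 data.
    combined = k_pose1.copy()
    for s, rows in enumerate(scan_order):
        if s >= motion_shot:
            combined[:, rows, :] = k_pose2[:, rows, :]
    return combined
```

Repeating this with further shifts and later motion shots yields data sets with three or more poses, as described below.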

Thus, the k-space data set resulting from the actions of block 152 models the entire k-space data set that would be obtained if the scan were performed while the subject being imaged moved between the two poses. These steps may be repeated to create images having more poses (e.g., three or more poses). It should be noted that in certain embodiments, this data may constitute training data for a Neural Network (NN), such as training data for an NN that takes a fully acquired data set and identifies the presence and severity of motion. However, in order to more accurately represent the data collected during the scan (not just after the scan) and to enable the trained NN to make predictions during the scan, in this embodiment the data is further processed before being used as training data.

In the illustrated embodiment, the method 150 includes applying (block 154) a mask or window to the combined k-space dataset (including the moved k-space data) to represent partial data collection. In certain embodiments, the mask or window is configured to isolate at least two adjacent time steps of a scan sequence of data acquisitions. It will be appreciated that isolating at least two adjacent time steps may allow for a determination of whether motion has occurred between time steps (e.g., between shots), because each individual time step may be considered unaffected by subject motion. In particular, the time scale for acquiring each shot is much smaller than the time scale between shots, so that motion can be treated as always occurring between shots. Further, it should be noted that the neural network of the presented embodiments can be trained to predict motion scores for only a single k-space scan order pattern, coil configuration, and/or image resolution, or trained to predict motion scores for multiple scan patterns, coil configurations, and resolutions.

To help illustrate an example of applying a mask in the manner represented by block 154, FIG. 3 depicts an exemplary Fast Spin Echo (FSE) type scan sequence 156 in which the phase encoding is a function of time step. In the illustrated embodiment, k-space is filled by interleaving multiple shots across k-space, each shot having an Echo Train Length (ETL) of 8. Thus, when creating "sub-images" from partial data, two consecutive shots will typically be combined, representing approximately 6% of k-space for the depicted scan order.

More specifically, as shown in FIG. 3, each point represents a phase encoding 158 (echo), whose readout direction is orthogonal to the page. Each angled column of phase encodings represents the phase encodings produced by a single shot. Thus, in the illustrated embodiment, the echo train length is 8. Adjacent angled columns represent the time steps of adjacent shots. Thus, in the illustrated embodiment, the mask 160 applied to the scan order 156 may window two adjacent shots, each having 8 phase encodings. Because each shot can be processed to generate an image, adjacent shots are windowed so that successive partial images can be created for training or evaluation to identify and characterize motion. In this example, mask 160 is used to create partial k-space data from shots numbered 3 and 4.
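The interleaved scan order and two-shot mask can be illustrated as follows. The 256-line, 32-shot geometry (ETL 8, so two shots cover 16/256 ≈ 6% of k-space) mirrors the figure, but the helper names and defaults are assumptions:

```python
import numpy as np

def interleaved_scan_order(n_pe=256, n_shots=32):
    """Illustrative interleaved FSE-style scan order: shot s acquires
    phase encodes s, s + n_shots, s + 2*n_shots, ..., so each shot's
    echoes are spread across k-space (ETL = n_pe // n_shots)."""
    return [np.arange(s, n_pe, n_shots) for s in range(n_shots)]

def two_shot_mask(n_pe, scan_order, shot):
    """Analogue of mask 160: a boolean window selecting the phase
    encodes of two adjacent shots (shot and shot + 1)."""
    mask = np.zeros(n_pe, dtype=bool)
    mask[scan_order[shot]] = True
    mask[scan_order[shot + 1]] = True
    return mask
```

For the default geometry, `two_shot_mask(256, order, 3)` selects the 16 lines of shots 3 and 4, i.e., 6.25% of k-space.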

It should be noted that the k-space scan order of fig. 3 is merely an example of a pattern, and other scan order patterns may be used in accordance with the presented embodiments. For example, according to the method 150, the neural network may be trained using a "linear" mode, where k-space is filled from top to bottom, rather than in shots whose lines are interleaved across k-space. However, when trained in linear mode, the results produced by the trained neural network are less accurate than those obtained using the multi-shot scan sequence shown in fig. 3.

The method 150 further includes applying (block 162) a 2D (inverse) Fourier transform to the partial data to create a motion-corrupted composite multi-coil sub-image. Figs. 4 and 5 provide a comparison between non-motion (fig. 4) and motion corrupted (fig. 5) sub-images. In particular, fig. 4 shows overlapping sub-images, because the k-space dataset is undersampled; however, there is no motion corruption. In fig. 5, the sub-images include overlap due to both undersampling and motion corruption. While the differences between the images depicted in figs. 4 and 5 are apparent to an ordinary human observer, and a trained observer will recognize the ghosting artifacts characteristic of the motion corrupted image in fig. 5, it will be appreciated that the presented embodiments can also identify and characterize motion in more complex cases, for example as shown by the non-motion sub-image of fig. 6 and the motion corrupted sub-image of fig. 7.
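The block-162 step of turning the masked, zero-padded k-space into a composite multi-coil sub-image might look like this; again a sketch, with the centered-FFT convention assumed:

```python
import numpy as np

def make_sub_image(coil_kspace, pe_mask):
    """Block-162 sketch: zero out all k-space lines outside the two-shot
    window and inverse-Fourier-transform the result, yielding a
    composite multi-coil sub-image (aliased/overlapping because the
    windowed data is heavily undersampled).

    coil_kspace: (n_coils, n_pe, n_ro) complex k-space
    pe_mask:     boolean array over the n_pe phase-encode lines
    """
    # Keep only the windowed phase-encode lines; zero-pad the rest.
    windowed = np.where(pe_mask[None, :, None], coil_kspace, 0)
    # Centered 2D inverse Fourier transform, coil by coil.
    return np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(windowed, axes=(-2, -1))),
        axes=(-2, -1))
```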

Using the motion corrupted sub-images and the non-motion sub-images, the method 150 involves generating (block 164) a quality score that represents a degree of corruption of the motion corrupted sub-images relative to the corresponding non-motion sub-images. For example, in one embodiment, the score calculated according to block 164 may be based on the mean entropy of the difference between the two sub-images. In such an embodiment, a non-motion image will have a score of 0, and the score will increase with the severity of the motion artifact. Furthermore, when the score is based on the mean entropy of the per-coil difference between the two images (non-motion and motion corrupted), the scores of different motion corrupted images produce a distribution that tracks the severity of the motion corruption. In other words, the severity of the motion corruption correlates with the magnitude of the score.

In some embodiments, the score may be calculated according to the following formulas:

Equation 1:

D = |I_Motion - I_Truth| / I_Truth,Max

Equation 2:

S_coil = -Σ_pixels [ D · log(D) ]

Equation 3:

Score = Σ_coils [ w_coil · S_coil ]

Equation 1 represents the normalized difference between two sub-images, where I_Motion is the pixel value of the motion-corrupted sub-image, I_Truth is the pixel value of the "calibrated true" (ground truth) or corresponding motion-free sub-image, and I_Truth,Max is the maximum pixel value of the motion-free sub-image.

Equation 2 calculates the coil entropy S_coil of the difference between the two sub-images (non-motion and motion corrupted) for a particular coil. In particular, S_coil is the negative of the sum, over all pixels of the sub-image corresponding to a particular coil, of the difference times the logarithm of the difference. While the difference itself could be used to calculate the score, it has been found that including the logarithmic term provides a distribution of scores that correlates more closely with the severity of the motion corruption.

Once the coil entropies are calculated, in some embodiments they may be combined according to equation 3, which is a weighted sum of the coil entropies calculated for all coils, to obtain a final score. In this way, the entropy measure is summed over the pixels (using equation 2) and over the coils (using equation 3). It should be noted that in some embodiments, the score calculation may differ. For example, in other embodiments, the natural logarithm of the difference may be used in the score calculation instead of the entropy of the difference. Further, in some embodiments, to make the neural network and score calculations compatible with different image resolutions (e.g., 256 × 256, 244 × 244, 312 × 321), the score calculation may use an average of the per-pixel entropies rather than their sum. Thus, because multiple outputs may be combined in some embodiments, the disclosed embodiments may use the sum over all pixels across all coils, the sum per coil, the average over all pixels and coils, or the average per coil. The resulting motion score will therefore fall between 0 and 1 (e.g., where the score is an average) or in a higher range (e.g., where the score is a sum). In some embodiments, an alternative metric such as normalized mean square error may be used as the quality score. Other embodiments may use metrics such as 1) the difference, 2) the entropy of the difference, or 3) the logarithm of the difference or a weighted sum of logarithms.
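Equations 1-3 as described in the surrounding text can be sketched as follows, including the sum-versus-average option for resolution independence. The function signature and the zero-handling for the log term are assumptions:

```python
import numpy as np

def motion_score(sub_motion, sub_truth, coil_weights=None, reduce="sum"):
    """Sketch of Equations 1-3: per-pixel normalized difference, per-coil
    entropy of that difference, and a weighted combination over coils.

    sub_motion, sub_truth: (n_coils, ny, nx) complex sub-images
    reduce: "sum" for per-pixel sums, "mean" for resolution-independent
            averages (as described in the text)
    """
    n_coils = sub_truth.shape[0]
    if coil_weights is None:
        coil_weights = np.ones(n_coils)
    scores = np.zeros(n_coils)
    for c in range(n_coils):
        truth_max = np.abs(sub_truth[c]).max()
        # Equation 1: normalized difference between the two sub-images.
        d = np.abs(sub_motion[c] - sub_truth[c]) / truth_max
        # Equation 2: entropy of the difference over all pixels; the
        # logarithmic term spreads the scores with corruption severity.
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(d > 0, d * np.log(d), 0.0)
        scores[c] = -(terms.mean() if reduce == "mean" else terms.sum())
    # Equation 3: weighted sum of the per-coil entropies.
    return float(np.sum(coil_weights * scores))
```

Identical sub-images yield a score of exactly 0, and the score grows with the normalized difference between the motion and truth sub-images.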

To help illustrate the efficacy of the motion score described with reference to block 164, scores for images with varying degrees of motion corruption were calculated using equations 1-3. Figs. 8, 9, 10, and 11 depict exemplary motion corrupted images and the score associated with each image. In these images, the score is calculated as the sum of the per-pixel entropies, and the score can reach values up to approximately 7000. As shown in these images, images with little or no motion have quality scores at the lower end of the score spectrum (<2000), while severely corrupted images fall at the higher end of the score spectrum (>3000).

Returning to the method 150 of fig. 2, once the quality scores are generated for the various sub-images according to block 164, the neural network is trained (block 166) to predict the degree of motion corruption of the various images, in particular by predicting the scores of the combined sub-images. As an example, the neural network may be a "deep learning" neural network, meaning that it includes multiple layers. Training the neural network according to block 166 may include providing a large number (e.g., thousands) of non-motion and motion corrupted sub-images from which to predict the degree of corruption. Indeed, the various outputs of the operations performed as part of method 150 up to block 166 may be used as training and/or validation data to train and validate the neural network. According to block 166, the neural network may be trained using images produced with random translations and rotations, combinations of adjacent shot pairs, and various ETLs.

In certain embodiments, the neural network trained according to block 166 may be a Convolutional Neural Network (CNN) having one or more convolutional layers, one or more max pooling layers, one or more flatten layers, one or more fully connected layers, or any combination thereof. One embodiment of a CNN 180 trained to predict motion scores from sub-images is shown in fig. 12.

In the representation of fig. 12, the data is shown in the form of a multi-dimensional input 181, for example the (256 × 256 × 16) leftmost data set. This is because CNN 180 treats the image as a multi-dimensional tensor, or stacked matrix. For example, moving from left to right in FIG. 12, the data begins with the sub-image 181a itself, which is 256 × 256, or 256 pixels by 256 pixels. The 16 in 256 × 256 × 16 represents the number of channels of the image. In particular, since the data is complex (including real and imaginary parts), 16 represents the number of coils used to generate the image (8 in the present embodiment) multiplied by two channels per coil. Thus, the 256 × 256 × 16 matrix is 256 × 256 (corresponding to pixels), with each pixel having a nested array of 16 values. For systems employing different numbers of coils, the data may instead have a different number of channels. Such embodiments are described in further detail later.

The input volume to CNN 180 is the raw pixel values of the image across all 16 channels, in this case 256 pixels wide by 256 pixels high by 16 channels. The image is a combined sub-image generated from Fourier transforms of two neighboring shots (e.g., as outlined by window 160 in fig. 3). The resulting sub-image (e.g., as shown in figs. 6, 7) is generated by Fourier transformation of the windowed data, with the remainder of k-space zero-padded, and is fed to the CNN as input 181a, as shown in fig. 12.

The first convolutional layer 182a operates on the input 181a using a window (i.e., a trainable filter or kernel) followed by a non-linear activation function (e.g., a leaky rectified linear unit (ReLU) activation function). In one embodiment, the window may have a size of 3 × 3, although other window sizes may be used. The stride of the window may be 1, meaning that the window slides by one pixel at a time in a given direction, or greater than one, meaning that the window slides by more than one pixel. The number of features (filters) may be 32, although other numbers may be used. The output of the first convolutional layer 182a is second data 181b, which is a 256 × 256 × 32 data set.

A second convolutional layer 182b, which also uses a window (trainable filter or kernel) and a non-linear activation function, is applied to the second data 181b to generate third data 181c, which is also a 256 × 256 × 32 data set. The third data 181c is input to a first max-pooling layer 184a, which is configured to down-sample the data. In particular, the first max-pooling layer 184a downsamples the third data 181c along the spatial dimensions (width and height) by applying a small window (e.g., 2 × 2) with a corresponding stride (e.g., stride 2) to each 256 × 256 matrix and retaining only the maximum value that falls within the window. Thus, the fourth data 181d output by the first max pooling layer 184a comprises a smaller, 128 × 128 × 32 data set.

This pattern (two convolutional layers 182, then a single max-pooling layer 184) is repeated twice more to produce fifth, sixth, seventh, eighth, ninth, and tenth data 181e, 181f, 181g, 181h, 181i, and 181j. However, the convolutional layers 182 and max-pooling layers 184 are not necessarily identical, nor do they necessarily apply the same type or number of filters. In the illustrated embodiment, the tenth data 181j is a 32 × 32 × 32 data set, which is passed to a flatten layer 186.

As shown, the flatten layer 186 flattens the multi-dimensional data set into a one-dimensional set of values, in this case 32,768 values. Each of the first, second, and third fully connected layers 188a, 188b, and 188c includes nodes that are fully connected to all activations in the previous layer. The fully connected layers 188a, 188b, and 188c may serve as a classifier and are layered as shown to ultimately provide a single-value output, which is the motion score 190 of the sub-image 181a.
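The layer-by-layer shapes described above can be traced with a small helper, assuming 'same'-padded 3 × 3 convolutions with 32 filters throughout (an assumption consistent with the stated data sets); the trace reproduces the ten data sets 181a-181j and the 32,768-value flatten:

```python
def trace_cnn_shapes(h=256, w=256, channels=16):
    """Shape walk-through of the CNN 180 pattern described above: three
    repetitions of (two 'same'-padded convolutions, then a 2x2 max pool
    with stride 2), followed by a flatten. Returns the list of ten data
    shapes (181a-181j) and the flattened length."""
    shapes = [(h, w, channels)]          # 181a: the input sub-image
    features = 32                        # assumed filter count per conv
    for _ in range(3):                   # three conv-conv-pool blocks
        h_, w_, _ = shapes[-1]
        shapes.append((h_, w_, features))            # conv ('same' padding)
        shapes.append((h_, w_, features))            # second conv
        shapes.append((h_ // 2, w_ // 2, features))  # 2x2 max pool, stride 2
    flat = shapes[-1][0] * shapes[-1][1] * shapes[-1][2]
    return shapes, flat
```

For the 256 × 256 × 16 input, the final pooled data set is 32 × 32 × 32, so the flatten layer sees 32 × 32 × 32 = 32,768 values, matching the text.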

Although the CNN 180 of fig. 12 is shown with a particular pattern of convolutional layers 182, pooling layers 184, and fully-connected layers 188, it should be noted that the techniques described herein are not limited to the particular pattern shown. While the particular pattern shown in fig. 12 has been found to produce more accurate results than certain other patterns, other embodiments of CNN 180 may use different numbers of layers in different orders and sequences, depending on the input received. Indeed, any combination of convolutional layers 182, pooling layers 184, fully-connected layers 188, and other operations may be combined and trained in a particular manner to produce a motion score as described herein, although some patterns may produce more accurate motion scores than others.

The CNN 180 of fig. 12 is configured to take a sub-image 181a generated from data obtained from a plurality of coils (all coils of the receive array) and to predict an image score 190 accordingly. However, it should be noted that different MR imaging systems may comprise different numbers of coils. To provide a system that is compatible with a different number of coils, in some embodiments, the CNN 180 may be trained to take images from a single coil and generate a corresponding motion score for each coil sub-image. Using the configuration shown in fig. 12 as an example, where the data set produced by sub-images from all 8 coils is 256 × 256 × 16, a single-coil sub-image for this configuration would produce a data set of 256 × 256 × 2, with two channels instead of 16. Again, each coil produces two channels, a real data channel and an imaginary data channel, because the images are complex.

Fig. 13 is a schematic diagram of a method 200 for generating a motion score 190 from single-coil sub-images fed into CNN 180. In the illustrated embodiment, the method 200 begins with a sub-image 202a from coil 1 (the first coil), a sub-image 202b from coil 2 (the second coil), and other sub-images from the other coils of the particular system, up to a sub-image 202n from coil N (the Nth coil), where N is the total number of coils of the MR imaging system. Each coil sub-image 202 is fed separately into an embodiment of CNN 180 (e.g., as a 256 × 256 × 2 data set). As before, the coil sub-images 202 correspond to images resulting from data obtained from only a portion of k-space (e.g., data obtained from two consecutive shots).

The CNN 180 of fig. 13 is trained to produce a corresponding motion score for each coil sub-image 202. Thus, in the method 200 of fig. 13, CNN 180 outputs a motion score 204a for coil 1, a motion score 204b for coil 2, and so on, up to a motion score 204n for coil N.

The single-coil motion scores 204 are combined via a score combination operation 206 to produce the motion scores 190 for the entire data set. The combining operation 206 may involve a weighted sum of the single-coil motion scores 204, an average of the single-coil motion scores 204, or some other combining operation. Thus, the method 200 shown in fig. 13 may be performed where flexibility in the prediction is needed from the perspective of the number of coils used by the MR system (e.g., system 10).
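The score-combination operation 206 reduces to a few lines per mode; a sketch with assumed names, covering the weighted-sum and average options named above:

```python
import numpy as np

def combine_coil_scores(coil_scores, weights=None, mode="mean"):
    """Sketch of combining operation 206: reduce the N single-coil
    motion scores 204 to one overall motion score 190, either as an
    average or as a weighted sum."""
    coil_scores = np.asarray(coil_scores, dtype=float)
    if mode == "mean":
        return float(coil_scores.mean())
    # Weighted sum; uniform weights if none are supplied.
    if weights is None:
        weights = np.ones_like(coil_scores)
    return float(np.sum(np.asarray(weights) * coil_scores))
```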

The efficacy of the CNN 180 of fig. 12 was tested on a set of sample data, as shown in fig. 14, which is a comparison 210 between a first histogram 212 of calibrated-true (ground truth) scores for a set of sub-images and a second histogram 214 of network-generated scores for the same set of sub-images.

More specifically, the first histogram 212 is a set of calculated motion-corruption scores for sub-images generated from 6% of filled k-space, indicating the presence and extent of motion artifacts. The second histogram 214 is a set of neural network predictions of the score using only the sub-image as input. By selecting an appropriate threshold, the network prediction can be made into a classifier that determines whether motion has occurred in the sub-image, as shown by line 216 in both histograms. Once it is determined that motion has occurred, the network prediction can be further used to determine whether the motion is significant (i.e., whether the motion results in a significant amount of motion artifacts).

The ability of the CNN 180 of fig. 12 to detect motion timing was also tested. In particular, fig. 15 shows an example scenario 220 in which, during a T1-weighted Fast Spin Echo (FSE) scan, the subject being imaged was instructed to rotate their head back and forth at regular intervals. Contour 222 illustrates an approximate motion profile of the head motion. Contour 224 is a plot of the predicted motion score obtained from the CNN 180 of fig. 12 as a function of shot number using the raw data. As shown, the prediction score jumps above the motion score threshold 226 at the onset of motion, indicating that CNN 180 is able to identify not only that motion has occurred, but also the timing of each motion event that caused the motion corruption.

The profile 228 is a plot of the predicted score for a no-motion scan as a function of FSE shot number. As shown, the predicted motion score remains below the motion score threshold 226, meaning that the CNN 180 correctly predicts that no motion has occurred.

Once the neural network of the present disclosure (e.g., CNN 180) has been trained and validated, CNN 180 may be used during scanning to predict whether motion is occurring and the effect of that motion on the acquired data (e.g., the effect of motion on the images to be produced). Fig. 16 is a process flow diagram of an embodiment of a method 240 for predicting motion and scoring motion during a scan.

As with other methods described herein, the method 240 may be performed using the control and analysis circuitry 52, 60 of the MR system 10, for example. In other embodiments, the method 240 may be performed by a suitably programmed computing device having processing circuitry and memory storing instructions that, when executed by the processing circuitry, perform the operations set forth in the method 240. Indeed, because the method 240 is performed during an MR scan, the method 240 may be performed by the same system that acquired the MR data to reduce latency in the event that the method 240 results in some remedial action (e.g., data re-acquisition).

As shown in fig. 16, the method 240 includes, for each shot in k-space, performing an (inverse) Fourier transform (block 242) to create a composite multi-coil sub-image representing the most recently acquired data. In some embodiments, this may be done on a coil-by-coil basis to produce multiple sub-images as composite single-coil sub-images.

The sub-images of the current shot are then combined with the sub-images of the previous shot (block 244). This can likewise be done on a multi-coil basis, as well as on a single-coil basis. This combination of two adjacent shots allows the neural network to determine whether motion has occurred because the motion time frame is much longer than the time frame of each shot.

Once the combined sub-image (multi-coil) is obtained, or once multiple combined sub-images (single coil) are obtained, the sub-images are passed into a neural network (block 246) to generate motion score predictors. In the case of a single coil sub-image, additional steps may be performed as discussed with reference to fig. 13.

The motion score prediction is then compared to a threshold (block 248) to identify whether motion has occurred. For example, the threshold may be selected based on minimizing the false positive and/or false negative rate in the training data or in a separately generated set of validation data. As set forth with reference to fig. 15, if the predicted motion score is above the threshold, the neural network may be deemed to have predicted a motion event.
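One illustrative way to choose the block-248 threshold from validation scores is to sweep candidate thresholds and minimize the combined false-positive and false-negative rate. The exhaustive sweep is an assumption; the disclosure only states the criterion:

```python
import numpy as np

def pick_motion_threshold(scores_no_motion, scores_motion):
    """Illustrative threshold selection for block 248: try each observed
    score as a candidate threshold and keep the one minimizing the sum
    of the false-positive and false-negative rates on validation data."""
    candidates = np.unique(np.concatenate([scores_no_motion, scores_motion]))
    best_t, best_err = None, np.inf
    for t in candidates:
        fp = np.mean(np.asarray(scores_no_motion) > t)   # false alarms
        fn = np.mean(np.asarray(scores_motion) <= t)     # missed motion
        if fp + fn < best_err:
            best_err, best_t = fp + fn, t
    return best_t
```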

In some embodiments, the timing of motion may be resolved to within a single shot, since the sub-image of each shot is used twice, once as the newly acquired data, and once as the "previous shot" sub-image for the next combined sub-image. It has been demonstrated that this timing can be resolved to within 8 lines (3% of k-space) of a 256-line image.

Once motion is detected and the timing determined, various actions may be taken, including restarting the scan, reacquiring those portions of k-space acquired prior to the motion, or using existing data to correct for the motion to reconstruct an artifact-free MRI image. The manner in which the effect of motion can be mitigated depends, among other things, on the relationship between the time at which motion is detected and the time at which motion occurs. For example, where motion is not detected until after scanning is complete, the methods available to mitigate the effects of motion may be different than those available when motion is detected during scanning. Fig. 17-21 detail various methods that may be performed by the MR system 10 under different motion scenarios.

Fig. 17 depicts a process flow diagram of an embodiment of an algorithm 260 executed, for example, by the control and analysis circuitry 52, 60 of the MR system 10 in the event motion is detected during a scan. The algorithm 260 includes various operations including starting a scan at operation 262. This begins the process of acquiring new data at operation 264.

Once the data has been acquired, algorithm 260 performs the method 240 of FIG. 16, using, for example, the CNN 180 of FIG. 12, and performs query 266 to determine whether motion has been detected. If no motion is detected in the most recent shot, the k-space data is aggregated with the previous k-space data in operation 268, and if the scan has not been completed (query 270), the scan continues as usual in operation 272. If the scan is complete, another query 274 is performed to determine whether motion has been detected, and if the scan is motion-free, the image is reconstructed according to conventional techniques at operation 276.

If motion is detected at query 266, the previously collected data is saved as one motion state at operation 278 and a new motion state is started at operation 280. The new motion state initially comprises only the most recently collected k-space data. As the scan continues, k-space data will be aggregated into this motion state as long as no further motion is detected. The operations described thus far thus result in either continuing to add to the current motion state or creating a new motion state, until the scan is complete.
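The aggregation logic of operations 268, 278, and 280 can be sketched as a simple loop over shots; this is a schematic, with per-shot motion flags standing in for the thresholded CNN predictions:

```python
def aggregate_motion_states(shot_data, motion_flags):
    """Sketch of operations 268/278/280: walk the shots in acquisition
    order, appending each shot's k-space to the current motion state and
    starting a new state whenever motion is detected for that shot.

    shot_data:    list of per-shot k-space chunks (any objects)
    motion_flags: list of booleans; True means motion was detected
                  between the previous shot and this one
    """
    states = [[]]
    for data, moved in zip(shot_data, motion_flags):
        if moved and states[-1]:
            # Save the old motion state and start a new one (278/280).
            states.append([])
        # Aggregate this shot into the current motion state (268).
        states[-1].append(data)
    # Each state's k-space can later be reconstructed separately (282).
    return states
```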

At query 274, if multiple motion states exist, each aggregate (each set of k-space data corresponding to a single motion state) is reconstructed separately at operation 282. In this way, each reconstructed motion state results in a non-motion sub-image, and a plurality of non-motion sub-images 284 are generated.

At operation 286, the different sub-images may be combined using various known techniques or separately reconstructed into a complete image. For example, the sub-images 284 may be registered and combined to create a motion free image by methods known in the art. Alternatively, parallel imaging, compressed sensing, or sparse reconstruction neural networks may be used to reconstruct the k-space data for each motion state. The resulting images may then be registered and combined by methods known in the art. As one example, operation 286 may include iterative joint estimation of motion and images with temporal constraints. Timing constraints (i.e., motion timing) are obtained based on the neural network predictions.

Using a similar sequence, instead of aggregating individual motion states, the k-space data can be adaptively re-acquired as shown in fig. 18. In particular, the algorithm 290 of FIG. 18 includes many of the same operations as the algorithm 260 of FIG. 17, including operations 262, 264, 268, 272, and 276 and queries 266, 270, and 274.

For algorithm 290, if motion is detected at query 266, the system (e.g., control and analysis circuitry 52, 60) determines at query 292 whether sufficient k-space has been filled to enable parallel imaging/compressed sensing (PICS) or reconstruction of a neural network using sparse images.

If sufficient k-space has not been filled, the algorithm 290 continues to acquire data by adding data to the new motion state at operation 294. If necessary, the k-space lines filled during the previous motion state can be re-acquired. The previous motion state data may be discarded or used for other purposes.

Once sufficient k-space has been filled to enable parallel imaging or sparse image reconstruction, the scan is ended at operation 296. At operation 298, a final image is reconstructed using only the portion of k-space acquired during the final motion state, using one of the aforementioned reconstruction algorithms (e.g., PICS reconstruction or a sparse image reconstruction neural network).
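The sufficiency check at query 292 amounts to asking whether the current motion state covers enough of k-space for an undersampled reconstruction. A hypothetical sketch follows; the `min_fraction` threshold is illustrative and would in practice depend on the coil geometry and the reconstruction method:

```python
def kspace_coverage(acquired_lines, total_lines):
    """Fraction of distinct phase-encode lines filled in the current motion state."""
    return len(set(acquired_lines)) / total_lines

def sufficient_for_pics(acquired_lines, total_lines, min_fraction=0.4):
    """Decide whether a PICS or sparse-image-network reconstruction is feasible
    from the current motion state alone (query 292 of Fig. 18)."""
    return kspace_coverage(acquired_lines, total_lines) >= min_fraction
```

Re-acquired lines are counted once, so repeating lines from a previous motion state does not inflate the coverage estimate.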

In certain embodiments, the detected motion may be so severe that the data is essentially unusable. Fig. 19 depicts an embodiment of an algorithm 300 that involves ending a scan as early as possible if motion is detected. For example, the algorithm 300 may include many of the operations and queries previously described with reference to figs. 17 and 18, except that once severe motion is detected at query 266, the scan ends at operation 302. For example, the motion score predicted by CNN 180 may be so high that the motion is considered severe, and the scan ends.

By ending the scan in this manner, the operator may be allowed to take adaptive action at operation 304. For example, the operator may instruct the subject to remain stationary, provide assistance if the subject has difficulty remaining stationary, or utilize a more motion-robust imaging sequence (e.g., one automatically selected by the system). Once the adaptive correction is performed, the scan may be restarted at operation 306.

The algorithm 300 may be used in conjunction with both algorithms 260, 290 described above by exploiting the fact that the motion score reflects not only the presence of motion, but also its severity. For example, if severe motion is detected multiple times, the scan may end early, while one of the other algorithms may be implemented in response to a smaller motion score. The algorithm 300 also utilizes the motion-score distribution from the neural network to allow selection of a particular tolerance for the motion score. For example, depending on the intended use of the completed scan, slight motion artifacts may not affect the diagnosis. Applying dynamic thresholds to the same neural network output allows multiple thresholds to be tailored to particular applications.
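The tiered behavior described above can be sketched as a simple mapping from the network's predicted motion score to a remedial action. The threshold values here are purely illustrative assumptions and would be tuned per application:

```python
def motion_action(motion_score, severe_threshold=0.8, mild_threshold=0.3):
    """Map a predicted motion score to a remedial action.
    Thresholds are hypothetical and application-dependent."""
    if motion_score >= severe_threshold:
        return "end_scan"          # severe motion: stop early, operator intervenes
    if motion_score >= mild_threshold:
        return "new_motion_state"  # moderate motion: split or re-acquire k-space
    return "continue"              # negligible motion: keep aggregating data
```

Because the thresholds are parameters rather than baked into the network, the same trained network can serve applications with different artifact tolerances.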

The disclosed embodiments also include methods for image reconstruction when motion has occurred. For example, figs. 20 and 21 each depict an embodiment of a method that may be used to reconstruct an image free of motion artifacts. In particular, fig. 20 depicts a method 310 for reconstructing an image free of motion artifacts by first dividing the coil data into pre-motion and post-motion data sets (block 312). Using the scan order and the timing of the detected motion, the k-space data of each coil is divided into two groups: the first includes those portions of k-space scanned before subject movement occurred, and the second includes the data acquired after the movement.

After the coil data is partitioned, two images are reconstructed for each coil (block 314). The first image is reconstructed using zero-padded k-space data collected before the movement occurred, and the second image is reconstructed using zero-padded k-space data collected after the movement occurred. In the method 310, the two sets of images for each coil are fed (block 316) into a deep learning neural network that reconstructs a single motion-corrected image.
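Blocks 312 and 314 can be sketched as follows, assuming Cartesian sampling. The names `line_order` (acquisition order of the phase-encode lines) and `motion_line_index` (position in that order at which motion was detected) are hypothetical:

```python
import numpy as np

def split_and_reconstruct(coil_kspace, line_order, motion_line_index):
    """Split each coil's k-space into pre- and post-motion sets (block 312)
    and reconstruct a zero-filled image from each set (block 314).

    coil_kspace: complex array of shape (n_coils, ny, nx)
    line_order: acquisition order of the ny phase-encode lines
    motion_line_index: index into line_order where motion occurred
    """
    pre_lines = line_order[:motion_line_index]
    post_lines = line_order[motion_line_index:]
    images = []
    for lines in (pre_lines, post_lines):
        k = np.zeros_like(coil_kspace)
        k[:, lines, :] = coil_kspace[:, lines, :]  # zero-pad the missing lines
        images.append(np.fft.ifft2(k, axes=(-2, -1)))
    return images  # [pre-motion images, post-motion images], one per coil
```

By linearity of the Fourier transform, the two zero-filled images sum to the full reconstruction when no lines are re-acquired, which is one useful sanity check on the split.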

Alternatively, as depicted in fig. 21, method 320 includes the acts represented by blocks 312 and 314, but the two partial k-space images are instead processed using a sparse reconstruction algorithm (block 322). The images from the sparse reconstruction algorithm may then be further processed and combined, or fed to a neural network, to generate the final motion-free image (block 324).

Technical effects of the invention include automatic detection and timing of patient motion, and mitigation of the effect of patient motion on the overall MR scan. The remedial action may include restarting the scan, reacquiring those portions of k-space that were acquired prior to the motion, or using existing data to correct the motion. In this manner, the motion detection and correction techniques described herein may improve the throughput of the MRI machine, improve patient experience, and reduce the burden on the MR technician.

This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
