Artificial intelligence ejection fraction measurement

Document No. 1144939 · Published 2020-09-11

Reading note: This technology, Artificial Intelligence Ejection Fraction Measurement (人工智能射血分数测量), was designed and created by C·卡迪厄, M·坎农, H·洪, K·克普泽尔, J·马特 and N·普瓦韦尔 on 2018-10-11. Abstract: Embodiments of the present invention provide methods, systems, and computer program products for artificial intelligence ejection fraction measurement. In such a method, a neural network, trained with different sets of cardiac imaging data acquired for the ventricles of different hearts during ventricular imaging and a known ejection fraction for each set, is loaded into the memory of a computer. A contemporaneous imaging data set of a ventricle of a heart is then acquired and provided to the neural network. Finally, the ejection fraction measurement output by the neural network is displayed on a computer display without tracking the ventricular boundaries of the heart.

1. An artificial intelligence ejection fraction measurement method, the method comprising:

training a neural network with different sets of cardiac imaging data acquired for ventricles of different hearts, and a known ejection fraction for each set;

loading the trained neural network into computer storage;

acquiring a contemporaneous imaging dataset of a ventricle of a heart;

providing the contemporaneous imaging data set to the neural network; and

displaying ejection fraction measurements output by the neural network on a computer display without tracking the ventricular boundaries of the heart.

2. The method of claim 1, further comprising filtering the acquired contemporaneous imaging data set into imaging data relating only to video clips acquired using a particular imaging modality and displaying a particular view of the ventricle.

3. The method of claim 1, wherein:

only a portion of the contemporaneous imaging data set is provided to the neural network; and

upon receiving an indication of indeterminate output from the neural network, other portions of the contemporaneous imaging data set are provided to the neural network in order to receive a determinate ejection fraction output from the neural network.

4. The method of claim 1, further comprising:

training a plurality of different neural networks, each with a different view and modality, with different sets of cardiac imaging data acquired for ventricles of different hearts and a known ejection fraction for each set;

loading a specific one of the trained neural networks into the memory of the computer, the specific trained neural network corresponding to an identified modality and at least one specified view of the contemporaneous imaging data set.

5. The method of claim 1, further comprising pre-processing the contemporaneous imaging data set, when in the form of video clip images, by resizing and cropping each video clip of the video clip images.

6. The method of claim 1, wherein providing the contemporaneous imaging data set to the neural network comprises, when the contemporaneous imaging data set is in the form of video clip images, decomposing each video clip image of the contemporaneous imaging data set into a plurality of frames and submitting the frames in parallel to the neural network.

7. A cardiographic data processing system configured for artificial intelligence ejection fraction measurement, the system comprising:

a host computing platform comprising one or more computers, each having memory and at least one processor; and

an ejection fraction measurement module comprising computer program instructions that, when executed in the memory of the host computing platform, are capable of performing:

training a neural network with different sets of cardiac imaging data acquired for ventricles of different hearts, and a known ejection fraction for each set;

loading the trained neural network into computer storage;

acquiring a contemporaneous imaging dataset of a ventricle of a heart;

providing the contemporaneous imaging data set to the neural network; and

displaying the ejection fraction measurements output by the neural network on at least one computer display without tracking the ventricular boundaries of the heart.

8. The system of claim 7, wherein the computer program instructions are further capable of filtering the acquired contemporaneous imaging data set into imaging data relating only to video clips acquired using a particular imaging modality and displaying a particular view of the ventricle.

9. The system of claim 7, wherein:

only a portion of the contemporaneous imaging data set is provided to the neural network; and

upon receiving an indication of indeterminate output from the neural network, other portions of the contemporaneous imaging data set are provided to the neural network in order to receive a determinate ejection fraction output from the neural network.

10. The system of claim 7, wherein the computer program instructions, when executed in the storage of the host computing platform, are further capable of implementing:

training a plurality of different neural networks, each with a different view and modality, with different sets of cardiac imaging data acquired for ventricles of different hearts and a known ejection fraction for each set;

loading a specific one of the trained neural networks into the memory of the computer, the specific trained neural network corresponding to an identified modality and at least one specified view of the contemporaneous imaging data set.

11. The system of claim 7, wherein the computer program instructions, when executed in the memory of the host computing platform, are further capable of pre-processing the contemporaneous imaging data set, when in the form of video clip images, by resizing and cropping each video clip of the video clip images.

12. The system of claim 7, wherein the computer program instructions that provide the contemporaneous imaging data set to the neural network are further capable, when the contemporaneous imaging data set is in the form of video clip images, of decomposing each video clip image of the contemporaneous imaging data set into a plurality of frames and submitting the frames in parallel to the neural network.

13. A computer program product for artificial intelligence ejection fraction measurement, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a device to cause the device to perform a method comprising:

training a neural network with different sets of cardiac imaging data acquired for ventricles of different hearts, and a known ejection fraction for each set;

loading the trained neural network into computer memory;

acquiring a contemporaneous imaging dataset of a ventricle of a heart;

providing the contemporaneous imaging data set to the neural network; and

displaying ejection fraction measurements output by the neural network on a computer display without tracking the ventricular boundaries of the heart.

14. The computer program product of claim 13, wherein the method further comprises filtering the acquired contemporaneous imaging data set into imaging data relating only to video clips acquired using a particular imaging modality and displaying a particular view of the ventricle.

15. The computer program product of claim 13, wherein:

only a portion of the contemporaneous imaging data set is provided to the neural network; and

upon receiving an indication of indeterminate output from the neural network, other portions of the contemporaneous imaging data set are provided to the neural network in order to receive a determinate ejection fraction output from the neural network.

16. The computer program product of claim 13, further comprising:

training a plurality of different neural networks, each with a different view and modality, with different sets of cardiac imaging data acquired for ventricles of different hearts and a known ejection fraction for each set;

loading a specific one of the trained neural networks into the memory of the computer, the specific trained neural network corresponding to an identified modality and at least one specified view of the contemporaneous imaging data set.

17. The computer program product of claim 13, wherein the method further comprises pre-processing the contemporaneous imaging data set, when in the form of video clip images, by resizing and cropping each video clip of the video clip images.

18. The computer program product of claim 13, wherein providing the contemporaneous imaging data set to the neural network comprises, when the contemporaneous imaging data set is in the form of video clip images, decomposing each video clip image of the contemporaneous imaging data set into a plurality of frames and submitting the frames in parallel to the neural network.

Technical Field

The present invention relates to systolic function of the heart and, more particularly, to ejection fraction measurement.

Background

Ejection fraction is a measure of cardiac contractile function that refers to the percentage of blood leaving the heart with each contraction. Specifically, in each pumping cycle the heart both contracts and relaxes. When the heart contracts, it ejects blood from its two pumping chambers, called the left and right ventricles. Conversely, when the heart relaxes, both ventricles refill with blood. Notably, no matter how forcefully the heart contracts, it cannot pump all of the blood out of each ventricle; some blood always remains. Thus, the term "ejection fraction" refers to the percentage of blood in a full ventricle that is pumped out with each heartbeat.
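The percentage described above follows directly from the end-diastolic (full) and end-systolic (emptied) ventricular volumes. The sketch below is a plain arithmetic illustration; the example volumes are typical textbook values, not data from this disclosure.

```python
def ejection_fraction(end_diastolic_volume_ml, end_systolic_volume_ml):
    """Percentage of the filled ventricle's volume ejected per heartbeat."""
    stroke_volume = end_diastolic_volume_ml - end_systolic_volume_ml
    return 100.0 * stroke_volume / end_diastolic_volume_ml

# A left ventricle filling to 120 ml and emptying to 50 ml
# ejects (120 - 50) / 120, i.e. about 58.3% per beat.
print(round(ejection_fraction(120.0, 50.0), 1))  # prints 58.3
```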

Of the two ventricles, the left ventricle is the heart's main pumping chamber, pumping oxygenated blood through the ascending aorta to the rest of the body, while the right ventricle pumps blood to the lungs for oxygenation. The ejection fraction of the left or right ventricle can be measured using several different imaging techniques. The most commonly used is echocardiography, in which the ejection fraction is measured from images produced by sound waves passing through the heart and the blood it pumps. Alternatives to echocardiography include Magnetic Resonance Imaging (MRI), Computed Tomography (CT), nuclear medicine scanning, and catheter-based imaging.

Current methods of ejection fraction measurement often do not accurately assess cardiac disease conditions. Such errors can delay treatment, and a patient's condition can deteriorate significantly during the delay. In this regard, echocardiography relies on the Simpson biplane method to produce measurements. In the Simpson biplane method, the end-systolic and end-diastolic volumes of the ventricle are measured in order to calculate the fractional difference. Doing so, however, requires the ventricular boundaries to be traced manually by a human reader, which introduces human subjectivity. The ventricular volume is then assumed to consist of a limited number, typically twenty, of elliptical disks, which, while convenient, is not accurate. Furthermore, this method relies on finding the exact end-systole and end-diastole image frames, a step that can introduce error if performed inaccurately. Thus, despite advances in modern diagnostic techniques, current methods still cannot measure ejection fraction in an optimal or reproducible manner.
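The disk-stacking geometry criticized above can be sketched as follows. This is an illustrative model only: the twenty-disk count comes from the paragraph above, while the diameter lists and the equal-diameter check are assumptions of this sketch, not a clinical implementation.

```python
import math

def simpson_biplane_volume(diams_view_a, diams_view_b, length):
    """Ventricular volume modeled as a stack of elliptical disks whose two
    diameters come from two orthogonal apical views (the Simpson biplane
    assumption the Background describes)."""
    n = len(diams_view_a)
    thickness = length / n  # each disk is an equal slice of the long axis
    return sum(math.pi * (a / 2.0) * (b / 2.0) * thickness
               for a, b in zip(diams_view_a, diams_view_b))

# With equal diameters in both views the stack degenerates to a circular
# cylinder of volume pi * r^2 * L, which makes the model easy to check.
v = simpson_biplane_volume([4.0] * 20, [4.0] * 20, 8.0)
```

The point of the sketch is that the result is only as good as the traced diameters and the elliptical-disk assumption, which is precisely the limitation the disclosure seeks to avoid.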

Disclosure of Invention

Embodiments of the present invention address deficiencies of the art in respect to ejection fraction measurement methods and provide a novel and non-obvious method, system and computer program product for artificial intelligence ejection fraction measurement. In an embodiment of the invention, a method for artificial intelligence ejection fraction determination includes training a neural network with different sets of cardiac imaging data acquired for ventricles of different hearts and a known ejection fraction for each set. The trained neural network is then loaded into computer memory. A contemporaneous imaging data set of a ventricle of a heart is acquired, whether in the form of video clip images or of data generated during imaging of the ventricle that varies over time, is converted into spatial data of the heart, and can be visualized on a display. Thereafter, the contemporaneous imaging data set is provided to the neural network. Finally, ejection fraction measurements output by the neural network are displayed on a computer display without tracking the ventricular boundaries of the heart.

In one aspect of the embodiment, the acquired contemporaneous imaging data set is filtered to include only imaging data acquired with a particular imaging modality and displaying a particular view of the ventricle. In yet another aspect of the embodiment, only a portion of the contemporaneous imaging data set is provided to the neural network and, upon receiving an indication of indeterminate output from the neural network, other portions of the contemporaneous imaging data set are provided to the neural network in order to receive a determinate ejection fraction output from the neural network. In another aspect of the embodiment, a plurality of different neural networks are each trained with a different view and modality, using different sets of cardiac imaging data acquired for ventricles of different hearts and known ejection fractions for each set. As such, only a particular one or more of the trained neural networks is loaded into the memory of the computer, the particular trained neural network corresponding to an identified modality and at least one specified view of the contemporaneous imaging data set. Finally, in even yet another aspect of the embodiments, a contemporaneous imaging data set in the form of video clip images is pre-processed by resizing and cropping each video clip of the video clip images.

In another embodiment of the invention, a cardiographic data processing system is configured for artificial intelligence ejection fraction measurement. The system includes a host computing platform including one or more computers, each having memory and at least one processor. The system also includes an ejection fraction measurement module. The module includes computer program instructions that, when executed in the memory of the host computing platform, are capable of loading into the memory a neural network trained with different sets of cardiac imaging data acquired for ventricles of different hearts and a known ejection fraction for each set, acquiring a contemporaneous imaging data set of a ventricle of a heart, providing the contemporaneous imaging data set to the neural network, and displaying the ejection fraction output by the neural network on a display of the computer without tracking the ventricular boundaries of the heart.

Additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The aspects of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention. It should be understood that the embodiments illustrated herein are presently preferred, however, the invention is not limited to the precise arrangements and instrumentalities shown, wherein:

FIG. 1 is a diagrammatic illustration of a process for artificial intelligence ejection fraction measurement;

FIG. 2 is a schematic diagram of a cardiographic data processing system configured for artificial intelligence ejection fraction measurement; and

FIG. 3 is a flow chart illustrating a process for artificial intelligence ejection fraction measurement.

Detailed Description

Embodiments of the present invention provide artificial intelligence ejection fraction measurement. According to an embodiment of the invention, different sets of cardiac imaging data of a ventricle (left, right, or both), in the form of different video clips, are acquired for different hearts, and a known ejection fraction is associated with each set. These sets are then provided, together with the associated ejection fractions, as training inputs to a neural network, thereby training the neural network. In this regard, the sets are provided as training inputs in conjunction with the specific modality used to acquire the video clips (e.g., echocardiography, CT or MRI) and an indication of the specific view type displayed in each video clip (e.g., parasternal long- and short-axis views; apical two-, three-, four- and five-chamber views; and subcostal views). The trained neural network may then be stored in conjunction with the particular modality and view.
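The per-modality, per-view organization of training described above might be arranged as in the following sketch. The simple least-squares fit is a deliberately minimal stand-in for neural-network training, and the dictionary keying and feature format are assumptions of this sketch, not the disclosure's actual data structures.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y ~ slope*x + intercept, standing in for
    the training of one neural network on one (modality, view) data set."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / var
    return slope, mean_y - slope * mean_x

def train_per_modality_view(training_sets):
    """training_sets maps (modality, view) -> list of (clip_feature, known_ef)
    pairs; one "model" is trained and stored per key, mirroring the idea of
    storing each trained network with its particular modality and view."""
    return {key: fit_linear([f for f, _ in examples],
                            [ef for _, ef in examples])
            for key, examples in training_sets.items()}
```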

Thereafter, a contemporaneous cardiac imaging data set of a ventricle of a heart is acquired, and the modality and view are identified for the contemporaneous image set. Optionally, video clips determined to be of substandard quality are removed from the set. A neural network corresponding to the identified modality and view of the contemporaneous image set is then selected, and the set is provided to the selected neural network so as to receive, as output from the selected neural network, an ejection fraction value without tracking a ventricular boundary of the heart. By this method, the ejection fraction of the ventricle can be measured without relying on traditional manual tracing of the ventricle by a human reader, manual identification of end-systole or end-diastole, or measurement that incorrectly assumes the ventricular volume can be modeled by ellipsoids.

In further illustration, FIG. 1 is a diagrammatic illustration of a process for artificial intelligence ejection fraction measurement. As shown in FIG. 1, the cardiographic data processing system 120 acquires imaging data 130 of the heart 110 during imaging of a ventricle of the heart 110. The imaging data 130 may be acquired according to one or more different modalities, such as ultrasound, CT or MRI. The imaging data may also include one-, two- or three-dimensional time-varying spatial data from many different views of the heart 110, including parasternal long- and short-axis, apical two-, three-, four- and five-chamber, and subcostal views. The imaging data 130 is then pre-processed to filter the imaging data 130 into a selection of imaging data 150, the imaging data 150 corresponding to one or more selected views of the heart 110, such as four-chamber or two-chamber views of suitable quality (partial or oblique views being omitted), acquired by one or more selected modalities such as B-mode echocardiography, Doppler echocardiography, M-mode echocardiography, CT or MRI.

A selected subset 160A of the imaging data 150 is then provided as input to the neural network 170, which neural network 170 was previously trained on a selection of different views and modalities of ventricles of different hearts and the known ejection fractions corresponding to those selections. To the extent that the neural network outputs an indeterminate result 190, a new subset 160B of the imaging data 150 is selected and provided to the neural network 170. This process may continue until no imaging data remains in the selection of imaging data 150, or until the neural network 170 outputs an ejection fraction 180. In the latter case, the ejection fraction 180 is displayed in a display for viewing by the end user. Alternatively, the output of the neural network 170 may instead be stored in conjunction with a Picture Archiving and Communication System (PACS), an Electronic Medical Record (EMR) system, or an echocardiography/radiology information management system.
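The subset-selection loop of FIG. 1 can be condensed into the sketch below. The `predict` callable and the `INDETERMINATE` sentinel are assumptions of this sketch, standing in for the neural network 170 and its indeterminate result 190; they are not names used by the disclosure.

```python
INDETERMINATE = None  # stand-in for the network's indeterminate result 190

def measure_ejection_fraction(imaging_data, predict, subset_size=4):
    """Submit successive subsets (160A, 160B, ...) of the filtered imaging
    data 150 to `predict` until a determinate ejection fraction is returned
    or the imaging data is exhausted."""
    for start in range(0, len(imaging_data), subset_size):
        result = predict(imaging_data[start:start + subset_size])
        if result is not INDETERMINATE:
            return result
    raise RuntimeError("imaging data exhausted without a determinate result")
```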

The process described in connection with FIG. 1 may be implemented in a cardiographic data processing system. In yet further illustration, FIG. 2 schematically illustrates a cardiographic data processing system configured for artificial intelligence ejection fraction measurement. The system includes a host computing platform 210, which may include one or more computers, each having memory and at least one processor. A neural network 240 is loaded into the memory of the host computing platform 210, the neural network 240 having been trained, using training module 220, on input sets of video clip images of different hearts, each with a known ejection fraction. Alternatively, the neural network 240 may include a selection of different neural networks, each trained on different video clip images acquired according to a different modality and displaying a different view of the heart.

Notably, an ejection fraction measurement module 300 is provided. The ejection fraction measurement module 300 includes computer program instructions that, when executed in the memory of the host computing platform 210, are capable of processing a selection of contemporaneously acquired imaging data of a heart stored in image store 230. The program instructions are also capable of pre-processing the selection of imaging data by filtering it to imaging data originating only from a selected view of a particular modality. When the imaging data being pre-processed is in the form of video clip images, the program instructions are further capable of cropping each video clip image so as to remove extraneous matter unrelated to the heart, rotating the video clip image to a correct angular orientation, and resizing the video clip image.
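The filtering and geometric normalization just described might look like the following sketch. The clip record format, the target size, and the nearest-neighbour resize are all illustrative assumptions; the module's actual pre-processing is not specified at this level of detail.

```python
def crop_and_resize(frame, size):
    """Center-crop a frame (a list of rows) to a square, then resize it by
    nearest-neighbour sampling to `size` = (height, width)."""
    h, w = len(frame), len(frame[0])
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = [row[left:left + side] for row in frame[top:top + side]]
    out_h, out_w = size
    return [[square[r * side // out_h][c * side // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

def preprocess(clips, modality, view, size=(128, 128)):
    """Keep only clips from the selected modality and view, then normalize
    the geometry of every frame in each surviving clip."""
    kept = [c for c in clips if c["modality"] == modality and c["view"] == view]
    for clip in kept:
        clip["frames"] = [crop_and_resize(f, size) for f in clip["frames"]]
    return kept
```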

In one aspect of an embodiment, the filtered and pre-processed imaging data, once complete, may be submitted to the neural network 240 for ejection fraction measurement. In this regard, at submission, each video clip in the set of contemporaneous video clip images is decomposed into a plurality of frames, and the frames are submitted in parallel to the neural network 240. Alternatively, however, the program instructions may select only a portion of the filtered and pre-processed imaging data for submission to the neural network 240, with the imaging data of each video clip in the portion decomposed into a plurality of frames and the frames submitted to the neural network 240 in parallel. In either case, the program instructions are capable of displaying the ejection fraction produced by the neural network 240 when the neural network 240 provides it. To address the situation in which the neural network 240 produces an indeterminate result, however, the program instructions can select additional portions of the filtered and pre-processed imaging data for further submission to the neural network 240 in a subsequent attempt to determine the ejection fraction of the ventricle of the heart.
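Frame decomposition with parallel submission might be sketched with a thread pool as below. The `infer_frame` callable and the mean pooling of per-frame scores are assumptions of this sketch standing in for the neural network 240; the disclosure does not specify how per-frame outputs are combined.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def submit_clip(frames, infer_frame, workers=4):
    """Decompose a clip into frames, score each frame in parallel, and pool
    the per-frame results into a single clip-level estimate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        per_frame = list(pool.map(infer_frame, frames))
    return mean(per_frame)
```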

In even further illustration of the operation of the ejection fraction measurement module 300, FIG. 3 is a flow chart illustrating a process for artificial intelligence ejection fraction measurement. Beginning in block 310, imaging data in the form of a set of cardiac ventricular video clip images is loaded into memory and, in block 320, the video clip images in the set are filtered to include only images of one or more selected views acquired according to a selected modality. Then, in block 330, the filtered video clip images are each corrected by a cropping, padding or rotation function. Thereafter, in block 340, a subset of the filtered video clip images is selected and, in block 350, provided as input to the neural network. Optionally, a particular trained neural network associated with the selected view and selected modality receives the input in block 350. In decision block 360, it is determined whether the neural network is able to output an ejection fraction. If so, the ejection fraction is displayed in block 390. Otherwise, in decision block 370, it is determined whether more images remain to be processed by the neural network. If so, an additional subset of the filtered images is selected in block 340 as input to the neural network in block 350. If no more images remain to be processed, an error condition results in block 380.
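The control flow of FIG. 3 can be mirrored compactly in code, with block numbers noted in the comments. The `keep`, `correct` and `infer` callables are hypothetical stand-ins for the module's filtering, correction and neural-network steps; a `None` return stands in for an indeterminate network output.

```python
def run_measurement(clips, keep, correct, infer, subset_size=2):
    """Condensed sketch of the FIG. 3 flow chart (block numbers in comments)."""
    clips = [correct(c) for c in clips if keep(c)]  # blocks 320 (filter), 330 (correct)
    for i in range(0, len(clips), subset_size):     # block 340: select subset
        ef = infer(clips[i:i + subset_size])        # block 350: neural network
        if ef is not None:                          # block 360: determinate?
            return ef                               # block 390: display
    raise RuntimeError("no more images")            # blocks 370/380: error
```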

The present invention may be embodied in a system, method, computer program product or any combination thereof. The computer program product may include a computer-readable storage medium, or media, having computer-readable program instructions thereon for causing a processor to perform aspects of the present invention. The computer-readable storage medium may be a tangible device capable of retaining and storing instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.

The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or over a network to an external computer or external storage device. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.

The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention.

In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.

Finally, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Having described the present invention in detail and by reference to embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims.
