System and method for identifying, labeling and navigating to a target using real-time two-dimensional fluoroscopy data
Reading note: This technology, "System and method for identifying, labeling and navigating to a target using real-time two-dimensional fluoroscopy data", was created by O·P·维加藤, R·巴拉克, E·科佩尔, B·格林伯格, E·凯德密-沙哈尔 and D·玛尔迪克斯 on 2018-06-29. Its main content includes: a system for facilitating identification and labeling of a target in a fluoroscopic image of a body part of a patient, the system comprising one or more storage devices, at least one hardware processor, and a display, the storage devices having stored thereon instructions for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; and generating at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the virtual fluoroscopic image includes the target and the marker of the target; the hardware processor being configured to execute the instructions, and the display being configured to display the virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user.
1. A system for facilitating identification and labeling of a target in fluoroscopic images of a body part of a patient, the system comprising:
(i) one or more storage devices having instructions stored thereon for:
receiving a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; and
generating at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the virtual fluoroscopic image includes the target and the marker of the target;
(ii) at least one hardware processor configured to execute the instructions; and
(iii) a display configured to display the virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user.
2. The system of claim 1, wherein said one or more storage devices have stored thereon further instructions for guiding said user in identifying and labeling said target in said fluoroscopic 3D reconstruction while using said virtual fluoroscopic image as a reference.
3. The system of claim 2, wherein the user is guided to identify and label the target in two slice images of the fluoroscopic 3D reconstruction captured at two different angles.
4. The system of claim 1, wherein said generating said at least one virtual fluoroscopic image comprises:
generating a virtual fluoroscope pose around the target by simulating a fluoroscope trajectory while scanning the target;
generating a virtual fluoroscopy image by projecting the CT scan volume according to the virtual fluoroscopy pose;
generating a virtual fluoroscopy 3D reconstruction based on the virtual fluoroscopy image; and
selecting, from the virtual fluoroscopic 3D reconstruction, a slice image containing the marker of the target.
5. The system of claim 1, wherein the target is a soft tissue target.
Technical Field
The present disclosure relates generally to the field of identifying and labeling targets in fluoroscopic images, and in particular to such target identification and labeling in medical procedures involving in vivo navigation. Further, the present disclosure relates to systems, devices and methods of navigation in medical procedures.
Background
Generally, a clinician employs one or more imaging modalities (e.g., magnetic resonance imaging (MRI), ultrasound imaging, computed tomography (CT), or fluoroscopy) to identify and navigate to a region of interest in a patient and ultimately to a target for treatment.
For example, endoscopy methods have proven useful for navigating to regions of interest within a patient's body, and in particular to regions within a network of cavities of the body (such as the lungs). To enable endoscopy in the lungs, and more particularly bronchoscopy methods, endobronchial navigation systems have been developed that use previously acquired MRI data or CT image data to generate a three-dimensional (3D) rendering, model or volume of a particular body part, such as the lung.
The resulting volume generated by the MRI scan or the CT scan is then used to create a navigation plan to facilitate advancement of a navigation catheter (or other suitable medical device) through the bronchoscope and branches of the patient's bronchi to the region of interest.
However, a three-dimensional volume of the patient's lungs generated from a previously acquired scan (e.g., a CT scan) may not provide a sufficient basis for accurately guiding the medical instrument to the target during the navigation procedure. In some cases, the inaccuracy is due to deformation of the patient's lungs during the procedure relative to the lungs at the time the CT data were acquired. This deformation (CT-to-body divergence) can be caused by many different factors, such as: sedation versus no sedation, the bronchoscope changing the patient's posture and pushing on tissue, different lung volumes (since the CT is acquired during inspiration while navigation occurs during breathing), different beds, different days, and so on.
Furthermore, in order to accurately and safely navigate a medical device to a remote target, e.g., for biopsy or therapy, both the medical device and the target should be visible in some three-dimensional guidance system.
Fluoroscopic imaging devices are typically located in the operating room during the navigation procedure. Clinicians may use a standard fluoroscopic imaging device, for example, to visualize and confirm the position of a medical device after it has been navigated to a desired location. However, while standard fluoroscopic images show high-density objects (such as metal tools and bone) as well as large soft tissue objects (such as the heart), it can be difficult to resolve small soft tissue objects of interest, such as lesions, in fluoroscopic images. Furthermore, fluoroscopic images are only two-dimensional projections. Thus, an X-ray volumetric reconstruction is needed to enable identification of such soft tissue objects and navigation to the target.
There are several solutions that provide three-dimensional volumetric reconstruction, such as CT and cone-beam CT, which are widely used in the medical field. These machines algorithmically combine multiple X-ray projections from known, calibrated X-ray source positions into a three-dimensional volume in which soft tissue is more visible. For example, a CT machine can be used with iterative scans during a procedure to provide guidance through the body until the tool reaches the target. This is a cumbersome process, as it requires several complete CT scans, a dedicated CT room, and blind navigation between scans. In addition, because of the high level of ionizing radiation, each scan requires staff to leave the room and exposes the patient to such radiation. Another option is a cone-beam CT machine, which is available in some operating rooms and is somewhat easier to operate, but it is expensive and, like CT, only provides blind navigation between scans, requires multiple navigation iterations, and requires staff to leave the room. CT-based systems are also very expensive and in many cases are not available in the same location as the procedure.
Thus, imaging techniques have been introduced that use a standard fluoroscope to reconstruct a local three-dimensional volume for visualization and to ease navigation to an in-vivo target, in particular to a small soft tissue object. U.S. Patent Application No. 2017/035379, entitled "Systems and Methods for Local Three Dimensional Volume Reconstruction Using a Standard Fluoroscope", and a U.S. patent application by Barak et al., entitled "System and Method for Navigating to Target and Performing Procedure on Target Utilizing Fluoroscopic-Based Local Three Dimensional Volume Reconstruction", each of which is incorporated herein by reference, are directed to such techniques.
Generally, according to the systems and methods disclosed in the above-mentioned patent applications, during a medical procedure a standard fluoroscopic C-arm can be rotated, for example, about 30 degrees around the patient, and a fluoroscopic 3D reconstruction of the region of interest is generated by a dedicated software algorithm.
Such a fast generation of a 3D reconstruction of the region of interest may provide real-time three-dimensional imaging of the target region. Real-time imaging of the target and the medical device located within its region may be beneficial for many interventional procedures, such as biopsy and ablation procedures of various organs, vascular interventions, and orthopedic surgery. For example, when referring to a navigation bronchoscopy, the goal may be to receive accurate information about the location of the biopsy catheter relative to the target lesion.
As another example, minimally invasive procedures (such as laparoscopic procedures, including robotic-assisted surgery) may employ intraoperative fluoroscopy to increase visualization, e.g., for guidance and lesion localization, and to prevent unnecessary injury and complications. The above-described systems and methods that employ real-time reconstruction for fluoroscopic three-dimensional imaging of a target region and navigation based on the reconstruction may also benefit such procedures.
Accordingly, there is a need for systems and methods for facilitating identification and labeling of targets in fluoroscopic image data, and in particular in a fluoroscopic 3D reconstruction, to facilitate navigation to the targets and the performance of related medical procedures.
Disclosure of Invention
In one aspect, in accordance with the present disclosure, there is provided a system for facilitating identification and labeling of a target in a fluoroscopic image of a body part of a patient, the system comprising: (i) one or more storage devices having instructions stored thereon for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; and generating at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the virtual fluoroscopic image includes the target and the marker of the target; (ii) at least one hardware processor configured to execute the instructions; and (iii) a display configured to display the virtual fluoroscopic image to a user concurrently with the fluoroscopic 3D reconstruction.
There is further provided, in accordance with the present disclosure, a system for facilitating identification and labeling of a target in a fluoroscopic image of a body part of a patient, the system comprising: (i) one or more storage devices having instructions stored thereon for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; and generating at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the virtual fluoroscopic image includes the target and the marker of the target; (ii) at least one hardware processor configured to execute the instructions; and (iii) a display configured to display the virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user.
There is further provided, in accordance with the present disclosure, a method for identifying and labeling a target in an image of a body part of a patient, the method comprising using at least one hardware processor to: receive a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; and display the at least one virtual fluoroscopic image to a user on a display simultaneously with the fluoroscopic 3D reconstruction, thereby facilitating identification of the target in the fluoroscopic 3D reconstruction by the user.
There is further provided, in accordance with the present disclosure, a method for identifying and labeling a target in an image of a body part of a patient, the method comprising using at least one hardware processor to: receive a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; and display the at least one virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user on a display, thereby facilitating identification of the target in the fluoroscopic 3D reconstruction by the user.
There is further provided, in accordance with the present disclosure, a system for navigating to a target region within a patient's body using real-time two-dimensional fluoroscopic images during a medical procedure, the system comprising: a medical device configured to be navigated to the target region; a fluoroscopic imaging device configured to acquire a sequence of 2D fluoroscopic images of the target region at a plurality of angles relative to the target region while the medical device is positioned in the target region; and a computing device configured to: receive a pre-operative CT scan of the target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; generate a fluoroscopic three-dimensional reconstruction of the target region based on the acquired sequence of 2D fluoroscopic images; display the at least one virtual fluoroscopic image to a user simultaneously with the fluoroscopic 3D reconstruction; receive a selection of the target from the fluoroscopic 3D reconstruction by the user; receive a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and the medical device.
There is further provided, in accordance with the present disclosure, a system for navigating to a target region within a patient's body using real-time two-dimensional fluoroscopic images during a medical procedure, the system comprising: a medical device configured to be navigated to the target region; a fluoroscopic imaging device configured to acquire a sequence of 2D fluoroscopic images of the target region at a plurality of angles relative to the target region while the medical device is positioned in the target region; and a computing device configured to: receive a pre-operative CT scan of the target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; generate a fluoroscopic three-dimensional reconstruction of the target region based on the acquired sequence of 2D fluoroscopic images; display the at least one virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user; receive a selection of the target from the fluoroscopic 3D reconstruction by the user; receive a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and the medical device.
There is further provided, in accordance with the present disclosure, a method for navigating to a target region within a patient's body using real-time two-dimensional fluoroscopic images during a medical procedure, the method comprising using at least one hardware processor for: receiving a pre-operative CT scan of the target region, wherein the pre-operative CT scan includes a marker of the target; generating at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; receiving a sequence of 2D fluoroscopic images of the target region acquired in real time at a plurality of angles relative to the target region while a medical device is positioned in the target region; generating a fluoroscopic three-dimensional reconstruction of the target region based on the sequence of 2D fluoroscopic images; displaying the at least one virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user; receiving a selection of the target from the fluoroscopic 3D reconstruction by the user; receiving a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determining an offset of the medical device relative to the target based on the selections of the target and the medical device.
There is further provided, in accordance with the present disclosure, a method for navigating to a target region within a patient's body using real-time two-dimensional fluoroscopic images during a medical procedure, the method comprising using at least one hardware processor for: receiving a pre-operative CT scan of the target region, wherein the pre-operative CT scan includes a marker of the target; generating at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; receiving a sequence of 2D fluoroscopic images of the target region acquired in real time at a plurality of angles relative to the target region while a medical device is positioned in the target region; generating a fluoroscopic three-dimensional reconstruction of the target region based on the sequence of 2D fluoroscopic images; displaying the at least one virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user; receiving a selection of the target from the fluoroscopic 3D reconstruction by the user; receiving a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determining an offset of the medical device relative to the target based on the selections of the target and the medical device.
There is further provided, in accordance with the present disclosure, a computer program product comprising a non-transitory computer readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to: receive a pre-operative CT scan of a target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; receive a sequence of 2D fluoroscopic images of the target region acquired in real time at a plurality of angles relative to the target region while a medical device is positioned in the target region; generate a fluoroscopic three-dimensional reconstruction of the target region based on the sequence of 2D fluoroscopic images; simultaneously display to a user the at least one virtual fluoroscopic image and the fluoroscopic three-dimensional reconstruction; receive a selection of the target from the fluoroscopic three-dimensional reconstruction by the user; receive a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and the medical device.
There is further provided, in accordance with the present disclosure, a computer program product comprising a non-transitory computer readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to: receive a pre-operative CT scan of a target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; receive a sequence of 2D fluoroscopic images of the target region acquired in real time at a plurality of angles relative to the target region while a medical device is positioned in the target region; generate a fluoroscopic three-dimensional reconstruction of the target region based on the sequence of 2D fluoroscopic images; display to a user the at least one virtual fluoroscopic image and the fluoroscopic three-dimensional reconstruction; receive a selection of the target from the fluoroscopic three-dimensional reconstruction by the user; receive a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and the medical device.
In another aspect of the present disclosure, the one or more storage devices have stored thereon further instructions for guiding the user in identifying and labeling the target in the fluoroscopic 3D reconstruction.
In another aspect of the disclosure, the one or more storage devices have stored thereon further instructions for guiding the user in identifying and labeling the target in the fluoroscopic 3D reconstruction while using the virtual fluoroscopic image as a reference.
In another aspect of the present disclosure, the one or more storage devices have stored thereon further instructions for guiding the user in identifying and labeling the target in two slice images of the fluoroscopic 3D reconstruction captured at two different angles.
In another aspect of the disclosure, the generating of the at least one virtual fluoroscopic image is based on digitally reconstructed radiograph (DRR) techniques.
In another aspect of the disclosure, generating the at least one virtual fluoroscopic image includes generating virtual fluoroscope poses around the target by simulating a fluoroscope trajectory while scanning the target, generating virtual 2D fluoroscopic images by projecting the CT scan volume according to the virtual fluoroscope poses, generating a virtual fluoroscopic 3D reconstruction based on the virtual 2D fluoroscopic images, and selecting, from the virtual fluoroscopic 3D reconstruction, a slice image that includes the marker of the target.
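As an illustration of the projection step described above (not the disclosed implementation, which is not specified in detail here), a digitally reconstructed radiograph can be approximated by summing attenuation along rays through the CT volume. The sketch below assumes a parallel-beam model and axis-aligned virtual poses for brevity; a real DRR would cast diverging rays from an estimated X-ray source position.

```python
import numpy as np

def drr_axis_aligned(ct_volume, axis):
    """Toy digitally reconstructed radiograph (DRR): sum attenuation
    along one axis of the CT volume, i.e. a parallel-beam projection
    for an axis-aligned virtual fluoroscope pose."""
    return ct_volume.sum(axis=axis)

def virtual_sweep(ct_volume, axes=(0, 1)):
    """Simulate a (crudely discretized) fluoroscope trajectory by
    projecting the volume at one virtual pose per entry in `axes`."""
    return [drr_axis_aligned(ct_volume, a) for a in axes]

vol = np.zeros((4, 4, 4))
vol[2, 2, 2] = 1.0  # a single bright "target" voxel stands in for the marker
images = virtual_sweep(vol)
# the target projects to one bright pixel in each virtual image
```

A marker placed in the CT volume survives the projection, which is why the virtual images can serve as a labeled reference next to the real fluoroscopic data.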
In another aspect of the disclosure, the target is a soft tissue target.
In another aspect of the disclosure, receiving a fluoroscopic 3D reconstruction of the body part includes receiving a sequence of 2D fluoroscopic images of the body part acquired at a plurality of angles relative to the body part and generating the fluoroscopic 3D reconstruction of the body part based on the sequence of 2D fluoroscopic images.
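One way to picture the reconstruction step above is an unfiltered backprojection of the projections into a slice. The parallel-beam geometry, 1D projections, and tiny grid below are simplifying assumptions of the sketch; the disclosure does not specify the reconstruction algorithm.

```python
import numpy as np

def backproject(views, size):
    """Toy unfiltered backprojection: smear each 1D parallel-beam
    projection (given as (angle, profile) pairs) back across a 2D
    slice of shape (size, size) along its acquisition angle."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for angle, proj in views:
        # detector coordinate of every slice pixel under this view angle
        t = (xs - c) * np.cos(angle) + (ys - c) * np.sin(angle)
        idx = np.clip(np.round(t + c).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon / len(views)

# a single bright point at the slice center projects to the same
# 1-pixel profile at every angle
profile = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
slice_est = backproject([(0.0, profile), (np.pi / 2, profile)], 5)
# the backprojected views reinforce each other only at the point
```

With only two views the estimate is heavily streaked; acquiring the sequence over many angles (as the disclosed sweep does) is what makes soft tissue resolvable.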
In another aspect of the disclosure, the method further includes using the at least one hardware processor to guide the user in identifying and labeling the target in the fluoroscopic 3D reconstruction.
In another aspect of the disclosure, the method further includes using the at least one hardware processor to guide the user in identifying and labeling the target in the fluoroscopic 3D reconstruction while using the at least one virtual fluoroscopic image as a reference.
In another aspect of the disclosure, the method further includes using the at least one hardware processor to guide the user in identifying and labeling the target in two slice images of the fluoroscopic 3D reconstruction captured at two different angles.
In another aspect of the disclosure, the system further includes a tracking system configured to provide data indicative of a position of the medical device within the patient's body, and a display, wherein the computing device is further configured to determine the position of the medical device based on the data provided by the tracking system, display the target region and the position of the medical device relative to the target on the display, and correct the display of the position of the medical device relative to the target based on the determined offset between the medical device and the target.
In another aspect of the present disclosure, the computing device is further configured to generate a 3D rendering of the target region based on the pre-operative CT scan, wherein the displaying of the target region includes displaying the 3D rendering, and register the tracking system to the 3D rendering, wherein the positional correction of the medical device relative to the target includes updating the registration between the tracking system and the 3D rendering.
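The registration update described above can be sketched under a simplifying assumption that the tracking-to-rendering registration is a pure translation (the disclosure does not restrict it to one); the fluoroscopy-derived device-to-target offset is folded into the transform so the displayed device position is corrected.

```python
import numpy as np

def update_registration(translation, device_pos, offset):
    """Fold a measured device-to-target offset into a translation-only
    tracking-to-rendering registration, and return the corrected
    registration together with the corrected displayed position."""
    corrected = translation + offset          # updated registration
    return corrected, device_pos + corrected  # corrected display position

reg = np.array([1.0, 0.0, 0.0])      # initial tracking->rendering translation
tip = np.array([10.0, 5.0, 2.0])     # tracked device tip (tracker frame)
offset = np.array([0.0, -1.5, 0.5])  # fluoroscopy-derived correction (assumed values)
reg2, shown = update_registration(reg, tip, offset)
```

In a full system the registration would be a rigid (or deformable) transform, but the idea is the same: the offset measured in the fluoroscopic 3D reconstruction locally corrects the CT-to-body divergence.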
In another aspect of the disclosure, a tracking system includes a sensor, and an electromagnetic field generator configured to generate an electromagnetic field for determining a position of the sensor, wherein the medical device includes a catheter guidance assembly having the sensor disposed thereon, and the position determination of the medical device includes determining a position of the sensor based on the generated electromagnetic field.
In another aspect of the disclosure, the target region includes at least a portion of a lung, and the medical device is configured to be navigated to the target region through the airway lumen network.
In another aspect of the disclosure, the computing device is configured to receive the selection of the medical device by automatically detecting a portion of the medical device in the sequence of acquired 2D fluoroscopic images or in the three-dimensional reconstruction, and receiving a user command to accept or reject the detection.
In another aspect of the disclosure, the computing device is further configured to estimate a pose of the fluoroscopic imaging device while the fluoroscopic imaging device acquires each of the at least a plurality of images of the 2D fluoroscopic image sequence, and wherein generating the three-dimensional reconstruction of the target region is based on the pose estimate of the fluoroscopic imaging device.
In another aspect of the disclosure, the system further includes a structure of markers, wherein the fluoroscopic imaging device is configured to acquire the sequence of 2D fluoroscopic images of the target region and of the structure of markers, and wherein estimating the pose of the fluoroscopic imaging device for each image of the at least plurality of images is based on detection of a most probable projection of the structure of markers as a whole on each image.
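The pose estimation from the projected marker structure can be illustrated with a brute-force search: candidate poses are scored by the mismatch between the projected marker structure and the detected marker locations, and the most probable candidate is kept. The 2D parallel-projection model, the single-angle pose parameter, and the candidate grid are all assumptions of this sketch, not the disclosed estimator.

```python
import numpy as np

def project_markers(markers, angle):
    """Parallel-project 2D marker points onto a 1D detector at `angle`."""
    d = np.array([np.cos(angle), np.sin(angle)])
    return markers @ d

def estimate_pose(markers, detected, candidates):
    """Score each candidate angle by the squared mismatch between the
    projected marker structure (as a whole) and the detections; return
    the most probable (lowest-mismatch) candidate."""
    def score(a):
        proj = np.sort(project_markers(markers, a))
        return float(np.sum((proj - np.sort(detected)) ** 2))
    return min(candidates, key=score)

markers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # known structure
true_angle = 0.5                                           # simulated pose
detected = project_markers(markers, true_angle)            # "detections"
best = estimate_pose(markers, detected, np.linspace(0, np.pi / 2, 91))
```

Scoring the structure as a whole, rather than matching markers one by one, is what makes the estimate robust to individual markers being occluded or misdetected.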
In another aspect of the disclosure, the computing device is further configured to guide the user in identifying and labeling the target in the fluoroscopic 3D reconstruction while using the at least one virtual fluoroscopic image as a reference.
In another aspect of the disclosure, the method further includes using the at least one hardware processor for determining a position of the medical device within the patient's body based on data provided by the tracking system, displaying on the display the target region and the position of the medical device relative to the target, and correcting the display of the position of the medical device relative to the target based on the determined offset between the medical device and the target.
In another aspect of the disclosure, the method further includes using the at least one hardware processor for generating a 3D rendering of the target region based on the pre-operative CT scan, wherein the displaying of the target region includes displaying the 3D rendering, and registering the tracking system to the 3D rendering, wherein the correcting of the position of the medical device relative to the target includes updating the registration between the tracking system and the 3D rendering.
In another aspect of the disclosure, receiving the selection of the medical device includes automatically detecting a portion of the medical device in the sequence of 2D fluoroscopic images or in the three-dimensional reconstruction, and receiving a user command to accept or reject the detection.
In another aspect of the disclosure, the method further includes estimating, using the at least one hardware processor, a pose of the fluoroscopic imaging device for each of at least a plurality of images of the 2D fluoroscopic image sequence while the images are acquired, wherein generating the three-dimensional reconstruction of the target region is based on the pose estimates of the fluoroscopic imaging device.
In another aspect of the disclosure, the structure of markers is positioned relative to the patient and the fluoroscopic imaging device such that each image of the at least a plurality of images contains a projection of at least a portion of the structure of markers, and the estimating of the pose of the fluoroscopic imaging device for each image of the at least a plurality of images is based on detection of a most probable projection of the structure of markers as a whole on each image.
In another aspect of the disclosure, the non-transitory computer readable storage medium has further program code executable by the at least one hardware processor for determining a position of the medical device within the patient's body based on data provided by the tracking system, displaying on the display the target region and the position of the medical device relative to the target, and correcting the display of the position of the medical device relative to the target based on the determined offset between the medical device and the target.
Any of the above aspects and embodiments of the disclosure may be combined without departing from the scope of the disclosure.
Drawings
Various aspects and embodiments of the disclosure are described below with reference to the drawings, in which:
FIG. 1 is a flow chart of a method for identifying and labeling a target in a fluoroscopic 3D reconstruction in accordance with the present disclosure;
FIG. 2 is a schematic diagram of a system configured for use with the method of FIG. 1;
FIG. 3A is an exemplary screenshot illustrating a display of a fluoroscopic 3D reconstructed slice image according to the present disclosure;
FIG. 3B is an exemplary screenshot illustrating a virtual fluoroscopic image presented concurrently with a fluoroscopic 3D reconstructed slice image according to the present disclosure;
FIG. 3C is an example screenshot illustrating a display of a fluoroscopic 3D reconstruction according to the present disclosure;
FIG. 4 is a flow chart of a method for navigating to a target using real-time two-dimensional fluoroscopic images in accordance with the present disclosure; and
FIG. 5 is a perspective view of an illustrative embodiment of an exemplary system for navigating to a soft tissue target via an airway network in accordance with the method of FIG. 4.
Detailed Description
As referred to herein, the term "target" may relate to any element, biological or artificial, or to a location of interest within a patient's body, such as tissue (including soft tissue and skeletal tissue), an organ, an implant, or a fiducial marker.
As referred to herein, the term "target region" may refer to at least portions of the target and its surrounding region when the term "body part" refers to a body part in which the target is located, the term "target region" and the term "body part" are used interchangeably.
The terms "and", "or" and/or "are used interchangeably, and each term may be combined with other terms, all in accordance with the context of the term.
As referred to herein, the term "medical device" may include, but is not limited to, optical systems, ultrasound probes, marker placement tools, biopsy tools, ablation tools (i.e., microwave ablation devices), laser probes, cryoprobes, sensor probes, and aspiration needles.
The term "fluoroscopic image" may refer to a 2D fluoroscopic image and/or any fluoroscopic 3D reconstructed slice image, all according to the context of the term.
The term "virtual fluoroscopic image" may refer to a virtual 2D fluoroscopic image and/or a virtual slice image of a fluoroscopic 3D reconstruction or of any other 3D reconstruction, all according to the context of the term.
The present disclosure relates to systems, methods, and computer program products for facilitating user identification and labeling of targets in real-time fluoroscopic images of a body part of interest produced via standard fluoroscopy. Such real-time fluoroscopic images may include two-dimensional images and/or three-dimensional reconstructed slice images. In particular, identification and labeling of targets in real-time fluoroscopy data may be facilitated by using synthetic or virtual fluoroscopy data (which includes a label or indication of the target) as a reference. The virtual fluoroscopy data may be generated from previously acquired volume data and preferably such that it will mimic fluoroscopy type data as much as possible. In general, the target may be better shown in an imaging mode of previously acquired volumetric data than real-time fluoroscopic data.
The present disclosure further relates to systems and methods for facilitating navigation of a medical device to a target and/or its area using real-time two-dimensional fluoroscopic images of the target region. Navigation may be facilitated by using local three-dimensional volume data, in which small soft-tissue objects are visible, constructed from a sequence of fluoroscopic images captured by a standard fluoroscopic imaging device available in most operating rooms.
Referring now to FIG. 1, FIG. 1 is a flow chart of a method for identifying and labeling a target in a fluoroscopic 3D reconstruction in accordance with the present disclosure.
In some embodiments, the receiving of the fluoroscopic 3D reconstruction of the body part may include receiving a sequence of fluoroscopic images of the subject location and generating the fluoroscopic 3D reconstruction of the body part based on at least a portion of the fluoroscopic images. In some embodiments, the method may further include directing the user to acquire the sequence of fluoroscopic images by manually sweeping the fluoroscope. In some embodiments, the method may further include automatically acquiring the sequence of fluoroscopic images.
In some embodiments, the fluoroscopic 3D reconstruction may be generated based on tomosynthesis methods and/or according to the systems and methods disclosed in U.S. patent publication No. 2017/0035379 and U.S. patent application No. 15/892,053, both of which are incorporated herein by reference.
In some embodiments, a virtual 2D fluoroscopic image may be generated based on digitally reconstructed radiograph (DRR) techniques.
In some embodiments, a virtual fluoroscopic slice image may be generated according to the following steps. In a first step, the received CT volume is aligned with the fluoroscopic 3D reconstruction. In a second step, an estimate of the pose the fluoroscopic device had while capturing the set of fluoroscopic images used to generate the fluoroscopic 3D reconstruction, at a selected position relative to the target or patient (e.g., an AP (anterior-posterior) position), is received or calculated. In a third step, one or more slices of the CT scan volume perpendicular to the selected position and including the target are generated. In a fourth step, the one or more CT slices are projected according to the estimated fluoroscope pose to obtain the virtual fluoroscopic slice image.
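The third and fourth steps can be sketched as a parallel-beam (orthographic) projection of a slab of CT slices around the target. This is an illustrative approximation only, not the disclosed implementation; the function name, the axis convention, and the slab parameterization are assumptions:

```python
import numpy as np

def virtual_fluoro_slice(ct_volume, target_idx, slab_half_width, axis=1):
    """Project a slab of CT slices around the target along the viewing
    axis, approximating a virtual fluoroscopic slice image.

    ct_volume       -- 3D array of attenuation values, e.g. ordered (z, y, x)
    target_idx      -- index of the slice containing the target along `axis`
    slab_half_width -- number of slices to include on each side of the target
    axis            -- viewing direction (1 = anterior-posterior here)
    """
    lo = max(target_idx - slab_half_width, 0)
    hi = min(target_idx + slab_half_width + 1, ct_volume.shape[axis])
    slab = np.take(ct_volume, range(lo, hi), axis=axis)
    # Summing attenuation along the ray direction approximates the
    # log-domain fluoroscopic intensity for a parallel-beam geometry.
    return slab.sum(axis=axis)
```

A real fluoroscope has a cone-beam geometry; projecting the slices "according to the estimated fluoroscope pose" would replace this axis-aligned sum with rays cast from the estimated focal point.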
In some embodiments, the generation of a virtual fluoroscopic slice image of the target region may include the following steps. In a first step, a virtual fluoroscope pose around the target may be obtained. In some embodiments, the virtual fluoroscope pose may be generated by simulating a fluoroscope trajectory while the fluoroscope scans the target. In some embodiments, the method may include a further step of generating a fluoroscopic 3D reconstruction, as described with respect to step 430 of FIG. 4; the pose of the fluoroscopic device estimated while capturing the sequence of fluoroscopic images used to generate the fluoroscopic 3D reconstruction may then be utilized. In a second step, a virtual fluoroscopic image may be generated by projecting the CT scan volume according to the virtual fluoroscope pose.
In some embodiments, it may be more advantageous to generate and use a virtual slice image as a reference when it is desired to mark a target in a fluoroscopic 3D reconstructed slice image. In some embodiments, it may be more advantageous to generate and use a virtual fluoroscopic 2D image when it is desired to mark a target in a fluoroscopic 2D image.
In accordance with the present disclosure, various workflows and displays may be used to identify and mark targets while using virtual fluoroscopic data as a reference. Such displays are illustrated in FIGS. 3B and 3C. Referring now to FIG. 3B, which is an exemplary screenshot illustrating a virtual fluoroscopic image presented concurrently with a fluoroscopic 3D reconstructed slice image according to the present disclosure.
Referring now to FIG. 3C, which is an example screenshot illustrating a display of a fluoroscopic 3D reconstruction according to the present disclosure.
In some embodiments, the virtual fluoroscopic image and the fluoroscopic 3D reconstruction (e.g., a selected slice of the fluoroscopic 3D reconstruction) may be displayed to the user simultaneously. In other embodiments, the virtual fluoroscopic image and the fluoroscopic 3D reconstruction may be displayed in a non-simultaneous manner.
In some embodiments, identification and tagging of the target may be performed in one or more two-dimensional fluoroscopic images (i.e., the originally captured fluoroscopic images). These fluoroscopic images may then be received and displayed to the user in place of the fluoroscopic 3D reconstruction.
In some embodiments, the set of two-dimensional fluoroscopic images (e.g., as originally captured) used to construct the fluoroscopic 3D reconstruction may additionally be received (i.e., in addition to the fluoroscopic 3D reconstruction). The fluoroscopic 3D reconstruction, the corresponding set of two-dimensional fluoroscopic images, and the virtual fluoroscopic image may all be displayed to the user.
Referring now to FIG. 2, FIG. 2 is a schematic diagram of a system configured for use with the method of FIG. 1 in accordance with the present disclosure.
The memory 202 may include any non-transitory computer-readable storage medium for storing data and/or software, including instructions executable by the hardware processor. The application 81, when executed by the hardware processor, may cause the steps of the method of FIG. 1 to be performed.
Referring now to FIG. 4, FIG. 4 is a flow chart of a method for navigating to a target using real-time two-dimensional fluoroscopic images according to the present disclosure. The method facilitates navigation to a target region within a patient's body during a medical procedure. The method utilizes real-time, fluoroscopy-based three-dimensional volumetric data. The fluoroscopic three-dimensional volume data may be generated from the two-dimensional fluoroscopic images.
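As an illustrative sketch of how three-dimensional volume data can be derived from a sweep of two-dimensional projections, the following performs a crude unfiltered backprojection of parallel-beam projections into a single 2D slice. This is a didactic stand-in, not the disclosed reconstruction method (which per the disclosure may be tomosynthesis-based); the names and the parallel-beam geometry are assumptions:

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Reconstruct a coarse 2D slice from parallel-beam projections by
    unfiltered backprojection (a simplified stand-in for reconstructing
    volume data from a sweep of fluoroscopic images)."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, ang in zip(sinogram, angles_deg):
        t = np.deg2rad(ang)
        # Detector coordinate of each reconstruction pixel for this view.
        s = (xs - c) * np.cos(t) + (ys - c) * np.sin(t) + c
        idx = np.clip(np.round(s).astype(int), 0, size - 1)
        # Smear this view's projection values back along its rays.
        recon += proj[idx]
    return recon / len(angles_deg)
```

A fluoroscopic sweep actually provides cone-beam projections over a limited angular range, so a practical reconstruction would add filtering, geometry correction, and per-image pose estimates.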
In some embodiments, the method further includes one or more steps for estimating the pose of the fluoroscopic imaging device at the acquisition of each of the fluoroscopic images, or of at least a plurality of the fluoroscopic images.
In some embodiments, a structure of markers may be positioned with respect to the patient and the fluoroscopic imaging device such that each fluoroscopic image includes a projection of at least a portion of the structure of markers.
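One standard way to recover a pose from such marker projections is the direct linear transform (DLT): given six or more known 3D marker positions and their detected 2D projections, the 3x4 projection matrix of the imaging device can be solved linearly. The sketch below is illustrative and not taken from the referenced applications; the function names are assumptions:

```python
import numpy as np

def estimate_projection_dlt(points_3d, points_2d):
    """Estimate a 3x4 projection matrix from known 3D marker positions
    and their detected 2D projections (DLT method).
    Requires at least 6 non-degenerate correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Apply the projection matrix to a 3D point, returning 2D coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```

The estimated matrix can then be decomposed into intrinsics and a rotation/translation (the pose); with noisy detections, one would typically follow this linear solution with a nonlinear refinement.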
Exemplary systems and methods for constructing such fluoroscopy-based three-dimensional volumetric data are disclosed in commonly owned U.S. patent publication No. 2017/0035379, U.S. patent application No. 15/892,053, and U.S. provisional application No. 62/628,017, which are incorporated herein by reference.
In some embodiments, once the pose estimation process is complete, the projection of the structure of markers on the images can be removed by using well-known methods. Such methods are disclosed in commonly owned U.S. provisional patent application No. 62/628,028, entitled "IMAGE RECONSTRUCTION SYSTEM AND METHOD," filed in 2018 by Alexandroni et al., the entire contents of which are hereby incorporated by reference.
In some embodiments, the method may further comprise a step of determining the position of the medical device within the patient's body based on data provided by a tracking system, such as an electromagnetic tracking system.
In some embodiments, the method may further comprise a step of generating a 3D rendering of the target region based on the pre-operative CT scan; the display of the target region may then comprise a display of the 3D rendering. In another step, the tracking system may be registered with the 3D rendering. The correction of the position of the medical device relative to the target based on the determined offset may then comprise a local update of the registration between the tracking system and the 3D rendering in the target region. In some embodiments, the method may further comprise a step of registering the fluoroscopic 3D reconstruction to the tracking system. In a further step, and based on the above, a local registration between the fluoroscopic 3D reconstruction and the 3D rendering may be performed in the target region.
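The local update based on a determined device-to-target offset can be illustrated with a translation-only correction: the offset observed in the fluoroscopic 3D reconstruction (where device and target are both visible) is re-applied at the CT-space target. The function name and the purely translational model are assumptions for illustration, not the disclosed registration method:

```python
import numpy as np

def local_offset_correction(device_fluoro, target_fluoro, target_ct):
    """Return a locally corrected device position in CT/tracking space.

    device_fluoro -- device position observed in the fluoroscopic 3D reconstruction
    target_fluoro -- target position marked in the fluoroscopic 3D reconstruction
    target_ct     -- target position marked in the pre-operative CT

    The device-to-target offset measured in the fluoroscopic reconstruction
    is re-applied at the CT-space target, giving a translation-only local
    update of the registration near the target.
    """
    offset = np.asarray(device_fluoro, float) - np.asarray(target_fluoro, float)
    return np.asarray(target_ct, float) + offset
```

A full local registration would also account for rotation and for the transform between the fluoroscopic reconstruction and the tracking system, but the translational offset captures the essential correction.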
In some embodiments, the target region may include at least a portion of a lung, and the medical device may be configured to be navigated to the target region through a network of airway lumens.
In some embodiments, the method may include receiving a selection of the target from one or more images of the sequence of fluoroscopic images, in addition to or instead of receiving a selection of the target from the fluoroscopic 3D reconstruction.
Computer program products for navigating to a target using real-time two-dimensional fluoroscopic images are also disclosed herein. The computer program products may include a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to perform the steps of the method of FIG. 1 and/or FIG. 4.
FIG. 5 is a perspective view of an illustrative embodiment of an exemplary system for facilitating navigation to a soft-tissue target via an airway network according to the method of FIG. 4.
As shown in FIG. 5, the
With respect to the planning phase,
With respect to the navigation phase, a six degree-of-freedom electromagnetic positioning or tracking system 550 (e.g., similar to those disclosed in U.S. Pat. Nos. 8,467,589 and 6,188,355 and published PCT application Nos. WO 00/10456 and WO 01/67035, each of which is incorporated herein by reference in its entirety), or another suitable system for determining position, is used to perform registration of the images and the navigation path, although other configurations are also contemplated.
Registration of the patient "P" position on the
After registration of the patient "P" to the image data and the path plan, a user interface is displayed in the navigation software indicating the pathway that the clinician should follow to reach the target. Such navigation software is currently sold by Medtronic PLC as part of a navigation kit.
The fluoroscopic 3D reconstruction is generated based on the sequence of fluoroscopic images, and the projection of the structure of markers on the images may be removed as described above.
As referred to herein, the terms "tracking" or "positioning" are used interchangeably. Although the present disclosure specifically describes using an EM tracking system to navigate or determine the position of a medical device, various tracking systems or positioning systems may be used or applied with respect to the methods and systems disclosed herein. Such tracking, positioning or navigation systems may use various methods including electromagnetic, infrared, echolocation, optical or imaging-based methods. Such systems may be based on pre-operative imaging and/or real-time imaging.
In some embodiments, standard fluoroscopy may be used to facilitate navigation and tracking of the medical device, as disclosed, for example, in U.S. Pat. No. 9,743,896 to Averbuch. Such fluoroscopy-based positioning or navigation methods may be used in addition to, or in lieu of, the EM tracking methods described above (e.g., as described with respect to FIG. 5) to facilitate or enhance navigation of the medical device.
From the foregoing and with reference to the various figures, those skilled in the art will appreciate that modifications may also be made to the present disclosure without departing from the scope thereof.
Detailed embodiments of the present disclosure are disclosed herein. However, the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms and aspects. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
While several embodiments of the disclosure have been illustrated in the accompanying drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise.