System and method for identifying, labeling and navigating to a target using real-time two-dimensional fluoroscopy data

Document No.: 1581034 | Published: 2020-01-31

Note: This technology, "System and method for identifying, labeling and navigating to a target using real-time two-dimensional fluoroscopy data", was created by O. P. Weingarten, R. Barak, E. Kopel, B. Greenberg, E. Kedmi-Shahar and D. Mardix on 2018-06-29. Abstract: A system for facilitating identification and labeling of a target in a fluoroscopic image of a body part of a patient, the system comprising one or more storage devices having stored thereon instructions for receiving a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target, and for generating at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the virtual fluoroscopic image includes the target and the marker of the target; at least one hardware processor configured to execute the instructions; and a display configured to display the virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user.

1. A system for facilitating identification and labeling of a target in a fluoroscopic image of a body part of a patient, the system comprising:

(i) one or more storage devices having stored thereon instructions for:

receiving a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; and

generating at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the virtual fluoroscopic image includes the target and the marker of the target;

(ii) at least one hardware processor configured to execute the instructions; and

(iii) a display configured to display the virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user.

2. The system of claim 1, wherein said one or more storage devices have stored thereon further instructions for guiding said user in identifying and labeling said target in said fluoroscopic 3D reconstruction while using said virtual fluoroscopic image as a reference.

3. The system of claim 2, wherein the user is guided to identify and label the target in two slice images of the fluoroscopic 3D reconstruction captured at two different angles.

4. The system of claim 1, wherein said generating said at least one virtual fluoroscopic image comprises:

generating a virtual fluoroscope pose around the target by simulating a fluoroscope trajectory while scanning the target;

generating a virtual fluoroscopy image by projecting the CT scan volume according to the virtual fluoroscopy pose;

generating a virtual fluoroscopy 3D reconstruction based on the virtual fluoroscopy image; and

selecting a slice image of the virtual fluoroscopic 3D reconstruction that contains the marker of the target.

5. The system of claim 1, wherein the target is a soft tissue target.

Technical Field

The present disclosure relates generally to the field of identifying and labeling targets in fluoroscopic images, and in particular to such target identification and labeling in medical procedures involving in vivo navigation. Further, the present disclosure relates to systems, devices and methods of navigation in medical procedures.

Background

Generally, a clinician employs one or more imaging modalities, such as magnetic resonance imaging (MRI), ultrasound imaging, computed tomography (CT), or fluoroscopy, to identify and navigate to a region of interest in a patient and ultimately to a target for treatment.

For example, endoscopic approaches have proven useful for navigating to regions of interest within a patient's body, and in particular to regions within luminal networks of the body, such as the lungs. To enable endoscopy, and more particularly bronchoscopy, in the lungs, endobronchial navigation systems have been developed that use previously acquired MRI data or CT image data to generate a three-dimensional (3D) rendering, model, or volume of a particular body part, such as the lungs.

The resulting volume generated by the MRI scan or the CT scan is then used to create a navigation plan to facilitate advancement of a navigation catheter (or other suitable medical device) through the bronchoscope and branches of the patient's bronchi to the region of interest.

However, a three-dimensional volume of the patient's lungs generated from a previously acquired scan (e.g., a CT scan) may not provide a sufficient basis for accurately guiding a medical instrument to a target during the navigation procedure. In some cases, the inaccuracy is due to deformation of the patient's lungs during the procedure relative to the lungs at the time the CT data were acquired. This deformation (CT-to-body divergence) can be caused by many different factors, such as: sedation versus no sedation, the bronchoscope changing the patient's posture and pushing on tissue, different lung volumes (the CT scan is acquired at full inspiration, while navigation takes place as the patient breathes), a different bed or day, and so on.

Furthermore, in order to accurately and safely navigate a medical device to a remote target, e.g., for biopsy or therapy, both the medical device and the target should be visible in some three-dimensional guidance system.

Fluoroscopic imaging devices are typically located in the operating room during a navigation procedure. Clinicians may use a standard fluoroscopic imaging device, for example, to visualize and confirm the position of a medical device after it has been navigated to a desired location. However, while standard fluoroscopic images show high-density objects (such as metal tools and bones) as well as large soft tissue objects (such as the heart), it is difficult to resolve small soft tissue objects of interest, such as lesions, in these images. Furthermore, fluoroscopic images are only two-dimensional projections. An X-ray volumetric reconstruction is therefore needed to enable identification of such soft tissue objects and navigation to the target.

There are several solutions that provide three-dimensional volumetric reconstruction, such as CT and cone-beam CT, which are widely used in the medical field. These machines algorithmically combine multiple X-ray projections from known, calibrated X-ray source positions into a three-dimensional volume in which soft tissue is more visible. For example, a CT machine can be used with iterative scans during a procedure to provide guidance through the body until a tool reaches the target. This is a cumbersome procedure, as it requires several complete CT scans, a dedicated CT room, and blind navigation between scans. In addition, due to the high level of ionizing radiation, each scan requires the staff to leave the room and exposes the patient to such radiation. Another option is a cone-beam CT machine, which is available in some operating rooms and is somewhat easier to operate, but it is expensive and, like CT, provides only blind navigation between scans, requires multiple navigation iterations, and requires the staff to leave the room. Moreover, CT-based imaging systems are very expensive and, in many cases, are not available in the same location where the procedure is carried out.

Thus, an imaging technology has been introduced that uses standard fluoroscopic devices to reconstruct a local three-dimensional volume in order to visualize, and ease navigation to, in-vivo targets, and in particular small soft tissue objects: see commonly owned U.S. Patent Application Publication No. 2017/0035379, entitled "Systems and Methods for Local Three Dimensional Volume Reconstruction Using a Standard Fluoroscope", and U.S. Patent Application No. 15/892,053 to Barak et al., entitled "System and Method for Navigating to Target and Performing Procedure on Target Utilizing Fluoroscopic-Based Local Three Dimensional Volume Reconstruction", each of which is incorporated herein by reference.

Generally, according to the systems and methods disclosed in the above-mentioned patent applications, during a medical procedure a standard fluoroscopic C-arm can be rotated, e.g., about 30 degrees, around the patient, and a fluoroscopic 3D reconstruction of the region of interest is generated by a dedicated software algorithm.

Such rapid generation of a 3D reconstruction of the region of interest can provide real-time three-dimensional imaging of the target region. Real-time imaging of the target and of a medical device positioned in its vicinity can benefit many interventional procedures, such as biopsy and ablation procedures in various organs, vascular interventions, and orthopedic surgery. For example, in navigational bronchoscopy, the goal is to receive accurate information about the position of the biopsy catheter relative to the target lesion.

As another example, minimally invasive procedures (such as laparoscopic procedures, including robotic-assisted surgery) may employ intraoperative fluoroscopy to increase visualization, e.g., for guidance and lesion localization, and to prevent unnecessary injury and complications. The above-described systems and methods that employ real-time reconstruction for fluoroscopic three-dimensional imaging of a target region and navigation based on the reconstruction may also benefit such procedures.

Accordingly, there is a need for systems and methods that facilitate the identification and labeling of targets in fluoroscopic image data, and in particular in a fluoroscopic 3D reconstruction, in order to ease navigation to those targets and improve the yield of the related medical procedures.

Disclosure of Invention

In one aspect, in accordance with the present disclosure, a system for facilitating identification and labeling of a target in a fluoroscopic image of a body part of a patient is provided. The system comprises: (i) one or more storage devices having stored thereon instructions for receiving a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target, and for generating at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the virtual fluoroscopic image includes the target and the marker of the target; (ii) at least one hardware processor configured to execute the instructions; and (iii) a display configured to display the virtual fluoroscopic image to a user concurrently with the fluoroscopic 3D reconstruction.

In accordance with the present disclosure, there is further provided a system for facilitating identification and labeling of a target in a fluoroscopic image of a body part of a patient. The system comprises: (i) one or more storage devices having stored thereon instructions for receiving a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target, and for generating at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the virtual fluoroscopic image includes the target and the marker of the target; (ii) at least one hardware processor configured to execute the instructions; and (iii) a display configured to display the virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user.

In accordance with the present disclosure, there is further provided a method for identifying and labeling a target in an image of a body part of a patient, the method comprising using at least one hardware processor to: receive a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; and display the at least one virtual fluoroscopic image to a user on a display simultaneously with the fluoroscopic 3D reconstruction, thereby facilitating identification of the target in the fluoroscopic 3D reconstruction by the user.

In accordance with the present disclosure, there is further provided a method for identifying and labeling a target in an image of a body part of a patient, the method comprising using at least one hardware processor to: receive a CT scan and a fluoroscopic 3D reconstruction of the body part of the patient, wherein the CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the CT scan of the patient, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; and display to a user on a display the at least one virtual fluoroscopic image and the fluoroscopic 3D reconstruction, thereby facilitating identification of the target in the fluoroscopic 3D reconstruction by the user.

In accordance with the present disclosure, there is further provided a system for navigating to a target region within a patient's body using real-time two-dimensional fluoroscopic images during a medical procedure. The system comprises: a medical device configured to be navigated to the target region; a fluoroscopic imaging device configured to acquire a sequence of 2D fluoroscopic images of the target region at a plurality of angles relative to the target region while the medical device is positioned in the target region; and a computing device configured to: receive a pre-operative CT scan of the target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; generate a fluoroscopic 3D reconstruction of the target region based on the acquired sequence of 2D fluoroscopic images; simultaneously display to a user the at least one virtual fluoroscopic image and the fluoroscopic 3D reconstruction; receive from the user a selection of the target from the fluoroscopic 3D reconstruction; receive a selection of the medical device from the fluoroscopic 3D reconstruction or from the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and of the medical device.

In accordance with the present disclosure, there is further provided a system for navigating to a target region within a patient's body using real-time two-dimensional fluoroscopic images during a medical procedure. The system comprises: a medical device configured to be navigated to the target region; a fluoroscopic imaging device configured to acquire a sequence of 2D fluoroscopic images of the target region at a plurality of angles relative to the target region while the medical device is positioned in the target region; and a computing device configured to: receive a pre-operative CT scan of the target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; generate a fluoroscopic 3D reconstruction of the target region based on the acquired sequence of 2D fluoroscopic images; display to a user the at least one virtual fluoroscopic image and the fluoroscopic 3D reconstruction; receive from the user a selection of the target from the fluoroscopic 3D reconstruction; receive a selection of the medical device from the fluoroscopic 3D reconstruction or from the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and of the medical device.

In accordance with the present disclosure, there is further provided a method for navigating to a target region within a patient's body using real-time two-dimensional fluoroscopic images during a medical procedure, the method comprising using at least one hardware processor to: receive a pre-operative CT scan of the target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; receive a sequence of 2D fluoroscopic images of the target region acquired in real time at a plurality of angles relative to the target region while a medical device is positioned in the target region; generate a fluoroscopic 3D reconstruction of the target region based on the sequence of 2D fluoroscopic images; display the at least one virtual fluoroscopic image to a user simultaneously with the fluoroscopic 3D reconstruction; receive from the user a selection of the target from the fluoroscopic 3D reconstruction; receive a selection of the medical device from the fluoroscopic 3D reconstruction or from the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and of the medical device.

In accordance with the present disclosure, there is further provided a method for navigating to a target region within a patient's body using real-time two-dimensional fluoroscopic images during a medical procedure, the method comprising using at least one hardware processor to: receive a pre-operative CT scan of the target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; receive a sequence of 2D fluoroscopic images of the target region acquired in real time at a plurality of angles relative to the target region while a medical device is positioned in the target region; generate a fluoroscopic 3D reconstruction of the target region based on the sequence of 2D fluoroscopic images; display the at least one virtual fluoroscopic image and the fluoroscopic 3D reconstruction to a user; receive from the user a selection of the target from the fluoroscopic 3D reconstruction; receive a selection of the medical device from the fluoroscopic 3D reconstruction or from the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and of the medical device.

In accordance with the present disclosure, there is further provided a computer program product comprising a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to: receive a pre-operative CT scan of a target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; receive a sequence of 2D fluoroscopic images of the target region acquired in real time at a plurality of angles relative to the target region while a medical device is positioned in the target region; generate a fluoroscopic three-dimensional reconstruction of the target region based on the sequence of 2D fluoroscopic images; simultaneously display to a user the at least one virtual fluoroscopic image and the fluoroscopic three-dimensional reconstruction; receive from the user a selection of the target from the fluoroscopic three-dimensional reconstruction; receive a selection of the medical device from the fluoroscopic three-dimensional reconstruction or from the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and of the medical device.

In accordance with the present disclosure, there is further provided a computer program product comprising a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to: receive a pre-operative CT scan of a target region, wherein the pre-operative CT scan includes a marker of the target; generate at least one virtual fluoroscopic image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopic image includes the target and the marker of the target; receive a sequence of 2D fluoroscopic images of the target region acquired in real time at a plurality of angles relative to the target region while a medical device is positioned in the target region; generate a fluoroscopic three-dimensional reconstruction of the target region based on the sequence of 2D fluoroscopic images; display to a user the at least one virtual fluoroscopic image and the fluoroscopic three-dimensional reconstruction; receive from the user a selection of the target from the fluoroscopic three-dimensional reconstruction or from the sequence of 2D fluoroscopic images; receive a selection of the medical device from the fluoroscopic three-dimensional reconstruction or from the sequence of 2D fluoroscopic images; and determine an offset of the medical device relative to the target based on the selections of the target and of the medical device.

In another aspect of the present disclosure, the one or more storage devices have stored thereon further instructions for guiding the user in identifying and labeling the target in the fluoroscopic 3D reconstruction.

In another aspect of the disclosure, the one or more storage devices have stored thereon further instructions for guiding the user in identifying and labeling the target in the fluoroscopic 3D reconstruction while using the virtual fluoroscopic image as a reference.

In another aspect of the present disclosure, the one or more storage devices have stored thereon further instructions for guiding the user in identifying and labeling the target in two slice images of the fluoroscopic 3D reconstruction captured at two different angles.

In another aspect of the disclosure, the generation of the at least one virtual fluoroscopic image is based on digitally reconstructed radiograph (DRR) techniques.

In another aspect of the disclosure, the generation of the at least one virtual fluoroscopic image includes: generating virtual fluoroscope poses around the target by simulating a fluoroscope trajectory while scanning the target; generating virtual 2D fluoroscopic images by projecting the CT scan volume according to the virtual fluoroscope poses; generating a virtual fluoroscopic 3D reconstruction based on the virtual 2D fluoroscopic images; and selecting a slice image of the virtual fluoroscopic 3D reconstruction that includes the marker of the target.

In another aspect of the disclosure, the target is a soft tissue target.

In another aspect of the disclosure, receiving a fluoroscopic 3D reconstruction of the body part includes receiving a sequence of 2D fluoroscopic images of the body part acquired at a plurality of angles relative to the body part and generating the fluoroscopic 3D reconstruction of the body part based on the sequence of 2D fluoroscopic images.

In another aspect of the disclosure, the method further includes using the at least one hardware processor to guide the user in identifying and labeling the target in the fluoroscopic 3D reconstruction.

In another aspect of the disclosure, the method further includes using the at least one hardware processor to guide the user in identifying and labeling the target in the fluoroscopic 3D reconstruction while using the at least one virtual fluoroscopic image as a reference.

In another aspect of the disclosure, the method further includes using the at least one hardware processor to instruct the user to identify and label the target in two slice images of the fluoroscopic 3D reconstruction captured at two different angles.

In another aspect of the disclosure, the system further includes a tracking system configured to provide data indicative of the position of the medical device within the patient's body, and a display, wherein the computing device is further configured to: determine the position of the medical device based on the data provided by the tracking system; display on the display the target region and the position of the medical device relative to the target; and correct the display of the position of the medical device relative to the target based on the determined offset between the medical device and the target.

In another aspect of the present disclosure, the computing device is further configured to generate a 3D rendering of the target region based on the pre-operative CT scan, wherein the display of the target region includes a display of the 3D rendering, and to register the tracking system to the 3D rendering, wherein the correction of the position of the medical device relative to the target includes updating the registration between the tracking system and the 3D rendering.

In another aspect of the disclosure, a tracking system includes a sensor, and an electromagnetic field generator configured to generate an electromagnetic field for determining a position of the sensor, wherein the medical device includes a catheter guidance assembly having the sensor disposed thereon, and the position determination of the medical device includes determining a position of the sensor based on the generated electromagnetic field.

In another aspect of the disclosure, the target region includes at least a portion of the lungs, and the medical device is configured to be navigated to the target region through the airway luminal network.

In another aspect of the disclosure, the computing device is configured to receive the selection of the medical device by automatically detecting a portion of the medical device in the acquired sequence of 2D fluoroscopic images or in the three-dimensional reconstruction, and by receiving a user command to accept or reject the detection.

In another aspect of the disclosure, the computing device is further configured to estimate the pose of the fluoroscopic imaging device while it acquires each of at least a plurality of images of the 2D fluoroscopic image sequence, wherein the generation of the three-dimensional reconstruction of the target region is based on the pose estimates of the fluoroscopic imaging device.

In another aspect of the disclosure, the system further includes a structure of markers, wherein the fluoroscopic imaging device is configured to acquire the sequence of 2D fluoroscopic images of the target region and of the structure of markers, and wherein the estimation of the pose of the fluoroscopic imaging device for each image of the at least plurality of images is based on detection of a possible and most probable projection of the structure of markers as a whole on each image.

In another aspect of the disclosure, the computing device is further configured to guide the user in identifying and labeling the target in the fluoroscopic 3D reconstruction while using the at least one virtual fluoroscopic image as a reference.

In another aspect of the disclosure, the method further includes using the at least one hardware processor to: determine the position of the medical device within the patient's body based on data provided by the tracking system; display on the display the target region and the position of the medical device relative to the target; and correct the display of the position of the medical device relative to the target based on the determined offset between the medical device and the target.

In another aspect of the disclosure, the method further includes using the at least one hardware processor to generate a 3D rendering of the target region based on the pre-operative CT scan, wherein the display of the target region includes a display of the 3D rendering, and to register the tracking system to the 3D rendering, wherein the correction of the position of the medical device relative to the target includes updating the registration between the tracking system and the 3D rendering.

In another aspect of the disclosure, the receiving of the selection of the medical device includes automatically detecting a portion of the medical device in the sequence of 2D fluoroscopic images or in the three-dimensional reconstruction, and receiving a user command to accept or reject the detection.

In another aspect of the disclosure, the method further includes using the at least one hardware processor to estimate the pose of the fluoroscopic imaging device while each of at least a plurality of images of the 2D fluoroscopic image sequence is acquired, wherein the generation of the three-dimensional reconstruction of the target region is based on the pose estimates of the fluoroscopic imaging device.

In another aspect of the disclosure, the structure of markers is positioned relative to the patient and the fluoroscopic imaging device such that each image of the at least plurality of images contains a projection of at least a portion of the structure of markers, and the estimation of the pose of the fluoroscopic imaging device for each image is based on detection of a possible and most probable projection of the structure of markers as a whole on each image.

In another aspect of the disclosure, the non-transitory computer-readable storage medium has further program code executable by the at least one hardware processor to: determine the position of the medical device within the patient's body based on data provided by a tracking system; display on a display the target region and the position of the medical device relative to the target; and correct the display of the position of the medical device relative to the target based on the determined offset between the medical device and the target.

Any of the above aspects and embodiments of the disclosure may be combined without departing from the scope of the disclosure.

Drawings

Various aspects and embodiments of the disclosure are described below with reference to the drawings, in which:

FIG. 1 is a flow chart of a method for identifying and labeling targets in fluoroscopic 3D reconstruction according to the present disclosure;

FIG. 2 is a schematic diagram of a system configured for use with the method of FIG. 1;

FIG. 3A is an exemplary screenshot illustrating a display of a fluoroscopic 3D reconstructed slice image according to the present disclosure;

FIG. 3B is an exemplary screenshot illustrating a virtual fluoroscopic image presented concurrently with a fluoroscopic 3D reconstructed slice image according to the present disclosure;

FIG. 3C is an example screenshot illustrating a display of a fluoroscopic 3D reconstruction according to the present disclosure;

FIG. 4 is a flow chart of a method for navigating to a target using real-time two-dimensional fluoroscopic images according to the present disclosure; and

FIG. 5 is a perspective view of an illustrative embodiment of an exemplary system for navigating to a soft tissue target via an airway network, in accordance with the method of FIG. 4.

Detailed Description

As referred to herein, the term "target" may relate to any element, biological or artificial, or to a location of interest within a patient's body, such as tissue (including soft tissue and skeletal tissue), an organ, an implant, or a fiducial marker.

As referred to herein, the term "target region" may refer to at least a portion of the target and its surrounding area. When the term "body part" refers to the body part in which the target is located, the terms "target region" and "body part" may be used interchangeably.

The terms "and", "or", and "and/or" may be used interchangeably, each to be understood according to the term's context.

As referred to herein, the term "medical device" may include, but is not limited to, optical systems, ultrasound probes, marker placement tools, biopsy tools, ablation tools (i.e., microwave ablation devices), laser probes, cryoprobes, sensor probes, and aspiration needles.

The term "fluoroscopic image" may refer to a 2D fluoroscopic image and/or any fluoroscopic 3D reconstructed slice image, all according to the context of the term.

The term "virtual fluoroscopic image" may refer to a virtual 2D fluoroscopic image and/or a virtual fluoroscopic slice image of a virtual fluoroscopic 3D reconstruction, or of any other 3D reconstruction, all according to the term's context.

The present disclosure relates to systems, methods, and computer program products for facilitating user identification and labeling of targets in real-time fluoroscopic images of a body part of interest produced via standard fluoroscopy. Such real-time fluoroscopic images may include two-dimensional images and/or three-dimensional reconstructed slice images. In particular, identification and labeling of targets in real-time fluoroscopy data may be facilitated by using synthetic or virtual fluoroscopy data (which includes a label or indication of the target) as a reference. The virtual fluoroscopy data may be generated from previously acquired volume data and preferably such that it will mimic fluoroscopy type data as much as possible. In general, the target may be better shown in an imaging mode of previously acquired volumetric data than real-time fluoroscopic data.

The present disclosure further relates to systems and methods for facilitating navigation of a medical device to a target and/or its region using real-time two-dimensional fluoroscopic images of the target region. Navigation may be facilitated by using local three-dimensional volume data, in which small soft tissue objects are visible, constructed from a sequence of fluoroscopic images captured by a standard fluoroscopic imaging device available in most operating rooms.

Referring now to FIG. 1, which is a flow chart of a method for identifying and labeling a target in a fluoroscopic 3D reconstruction in accordance with the present disclosure. In step 100, a CT scan and a fluoroscopic 3D reconstruction of a body part of a patient may be received. The CT scan may include a marker or indication of the target located in the body part of the patient. Alternatively, a qualified medical professional may be directed to identify and mark the target in the CT scan. In some embodiments, the target may be a soft tissue target, such as a lesion. In some embodiments, the imaged body part may include at least a portion of the lungs. In some embodiments, the fluoroscopic 3D reconstruction may be displayed to a user, such that the user may scroll through its different slice images. Reference is now made to FIG. 3A, which is an exemplary screenshot 300 showing a display of a slice image of a fluoroscopic 3D reconstruction in accordance with the present disclosure. The screenshot 300 includes a slice image 310, a scroll bar 320, and an indicator 330. The scroll bar 320 allows the user to scroll through the slice images of the fluoroscopic 3D reconstruction, while the indicator 330 indicates the location of the currently displayed slice image within the reconstruction.

In some embodiments, the receiving of the fluoroscopic 3D reconstruction of the body part may include receiving a sequence of fluoroscopic images of the body part and generating the fluoroscopic 3D reconstruction based on at least a portion of those images. In some embodiments, the method may further include directing the user to acquire the sequence of fluoroscopic images by manually sweeping the fluoroscope. In some embodiments, the method may further include automatically acquiring the sequence of fluoroscopic images.

In some embodiments, the fluoroscopic 3D reconstruction may be generated based on tomosynthesis methods and/or according to the systems and methods disclosed in U.S. Patent Application Publication No. 2017/0035379 and U.S. Patent Application No. 15/892,053, described above and incorporated herein by reference.

In step 110, at least one virtual fluoroscopic image may be generated based on the CT scan of the patient. Since the target is marked in the CT scan, the virtual fluoroscopic image then includes the target and the marker of the target.

In some embodiments, a virtual 2D fluoroscopic image may be generated based on digitally reconstructed radiograph (DRR) techniques.
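By way of illustration only, the following Python sketch shows the DRR idea in its simplest parallel-beam form: a CT volume in Hounsfield units is rotated to emulate a C-arm angle, and attenuation is integrated along rays. The function name, the HU-to-attenuation mapping, and the axis conventions are assumptions of this sketch; a clinical DRR would use the cone-beam geometry and calibrated intrinsics of the actual fluoroscope.

```python
import numpy as np
from scipy.ndimage import rotate

def virtual_fluoro_image(ct_volume: np.ndarray, angle_deg: float) -> np.ndarray:
    """Toy DRR: rotate a CT volume (Hounsfield units, axes z/y/x)
    about the patient's long axis to emulate a C-arm angle, then
    integrate attenuation along parallel anterior-posterior rays."""
    # Map Hounsfield units to a non-negative attenuation surrogate
    # (water-relative; a crude stand-in for calibrated mu values).
    mu = np.clip((ct_volume + 1000.0) / 1000.0, 0.0, None)
    # Rotate in the axial (y, x) plane to emulate the C-arm angle.
    rotated = rotate(mu, angle_deg, axes=(1, 2), reshape=False, order=1)
    # Parallel-beam line integral along the anterior-posterior axis.
    projection = rotated.sum(axis=1)
    # Beer-Lambert style intensity: denser paths appear darker,
    # mimicking the look of a fluoroscopic frame.
    return np.exp(-projection / (projection.max() + 1e-6))
```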

In some embodiments, a virtual fluoroscopic slice image may be generated according to the following steps. In a first step, the received CT volume is aligned with the fluoroscopic 3D reconstruction. In a second step, an estimate of the pose of the fluoroscopic device, while capturing the set of fluoroscopic images used to generate the fluoroscopic 3D reconstruction, at a selected position relative to the target or patient (e.g., the AP (anterior-posterior) position), is received or calculated. In a third step, one or more slices of the CT scan volume perpendicular to the selected position and including the target are generated. In a fourth step, the one or more CT slices are projected according to the estimated fluoroscope pose to obtain the virtual fluoroscopic slice image.

In some embodiments, the generation of a virtual fluoroscopic slice image of the target region may include the following steps. In a first step, virtual fluoroscope poses around the target may be obtained. In some embodiments, the virtual fluoroscope poses may be generated by simulating a fluoroscope trajectory while the fluoroscope scans the target. In some embodiments, the method may further include the generation of the fluoroscopic 3D reconstruction, as described with respect to step 430 of FIG. 4; the poses of the fluoroscopic device estimated while capturing the sequence of fluoroscopic images used to generate the fluoroscopic 3D reconstruction may then be utilized. In a second step, virtual fluoroscopic images may be generated by projecting the CT scan volume according to the virtual fluoroscope poses, for example as sketched below.
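Continuing the hedged sketch above, simulating the fluoroscope trajectory can be approximated by sampling virtual poses along a sweep of roughly the 30 degrees mentioned in the Background and rendering one virtual frame per pose; the sweep span and frame count below are illustrative assumptions, not prescribed values.

```python
import numpy as np

def simulate_sweep(ct_volume: np.ndarray, sweep_deg: float = 30.0,
                   n_frames: int = 15):
    """Sample virtual C-arm angles over a sweep centred on the AP
    position and render one virtual frame per angle, reusing the
    hypothetical virtual_fluoro_image() sketched above."""
    angles = np.linspace(-sweep_deg / 2.0, sweep_deg / 2.0, n_frames)
    return angles, [virtual_fluoro_image(ct_volume, a) for a in angles]
```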

In some embodiments, when the target is to be marked in a slice image of the fluoroscopic 3D reconstruction, it may be advantageous to generate and use a virtual slice image as the reference. In other embodiments, when the target is to be marked in a 2D fluoroscopic image, it may be advantageous to generate and use a virtual 2D fluoroscopic image.

In step 120, the virtual fluoroscopic image and the fluoroscopic 3D reconstruction may be displayed to the user. The indication of the target in the virtual fluoroscopic image may then be used as a reference for identifying and marking the target in the slice images of the fluoroscopic 3D reconstruction. Identification and marking of the target in the fluoroscopic 3D reconstruction is thereby facilitated, and the user's identification and marking of the target may be more accurate. The user may consult the virtual fluoroscopic image as a reference before identifying and marking the target in the real-time fluoroscopic images, and/or after such identification and marking.

In accordance with the present disclosure, various workflows and displays may be used to identify and mark the target while using the virtual fluoroscopic data as a reference; such displays are illustrated in FIGS. 3B and 3C. Reference is now made to FIG. 3B, which is an exemplary screenshot 350 showing a virtual fluoroscopic image 360 displayed concurrently with slice images 310a and 310b of a fluoroscopic 3D reconstruction in accordance with the present disclosure. The screenshot 350 includes the virtual fluoroscopic image 360, the fluoroscopic slice images 310a and 310b, a scroll bar 320, and an indicator 330. The virtual fluoroscopic image 360 includes a circular marker 370 of the target, while the fluoroscopic slice images 310a and 310b include circular markers 380a and 380b of the target made by the user, correspondingly. In some embodiments, the user may visually align the fluoroscopic slice images 310a and 310b and their markers 380a and 380b with the virtual fluoroscopic image 360 and its marker 370 in order to verify the markers 380a and 380b. In other embodiments, the virtual fluoroscopic image 360 may be displayed alongside only a single fluoroscopic slice image, e.g., only the central slice image.

Referring now to FIG. 3C, which is an exemplary screenshot 355 showing a display of at least a portion of a fluoroscopic 3D reconstruction 365 in accordance with the present disclosure. The screenshot 355 includes a 3D reconstruction image 365 (which shows at least a portion, e.g., slices, of the fluoroscopic 3D reconstruction), demarcated areas 315a and 315b, a scroll bar 325, an indicator 335, and a button 375. The demarcated areas 315a and 315b are designated for presenting the slice images of the 3D reconstruction shown in the 3D reconstruction image 365 that were selected by the user (e.g., selected by marking the target in those slice images). The button 375 is entitled "Plan Target". In some embodiments, upon the user pressing or clicking the button 375, at least one virtual fluoroscopic image showing the target and its marker is presented to the user for use as a reference.

In some embodiments, the virtual fluoroscopic image and the fluoroscopic 3D reconstruction (e.g., a selected slice of the fluoroscopic 3D reconstruction) may be displayed to the user simultaneously. In other embodiments, they may be displayed in a non-simultaneous manner.

In optional step 130, the user may be guided to identify and mark the target in the fluoroscopic 3D reconstruction. In some embodiments, the user may be specifically guided to use the virtual fluoroscopic image as a reference. In some embodiments, the user may be instructed to identify and mark the target in two slice images of the fluoroscopic 3D reconstruction captured at two different angles.

In some embodiments, identification and marking of the target may be performed in one or more two-dimensional fluoroscopic images (i.e., the originally captured fluoroscopic images). These fluoroscopic images may then be received and displayed to the user in place of the fluoroscopic 3D reconstruction.

In some embodiments, the set of two-dimensional fluoroscopic images (e.g., as originally captured) used to construct the fluoroscopic 3D reconstruction may additionally be received (i.e., in addition to the fluoroscopic 3D reconstruction). The fluoroscopic 3D reconstruction, the corresponding set of two-dimensional fluoroscopic images, and the virtual fluoroscopic image may then all be displayed to the user.

Referring now to FIG. 2, which is a schematic diagram of a system 200 configured for use with the method of FIG. 1. The system 200 may include a workstation 80 and, optionally, a fluoroscopic imaging device or fluoroscope 215. In some embodiments, the workstation 80 may be coupled to the fluoroscope 215, directly or indirectly, e.g., by wireless communication. The workstation 80 may include a memory or storage device 202, a processor 204, a display 206, and an input device 210. The processor or hardware processor 204 may include one or more hardware processors. The workstation 80 may optionally include an output module 212 and a network interface 208. The memory 202 may store an application 81 and image data 214. The application 81 may include instructions executable by the processor 204, in particular instructions for performing the method steps of FIG. 1, and may further include a user interface 216. The image data 214 may include the CT scan, the fluoroscopic 3D reconstruction of the target region and/or any other fluoroscopic image data, and/or the generated one or more virtual fluoroscopic images. The workstation 80 may be a stationary computing device, such as a desktop computer, or a portable computing device, such as a laptop or tablet.

The memory 202 may include any non-transitory computer-readable storage medium for storing data and/or software, including instructions that are executable by the processor 204 and that control the operation of the workstation 80 and, in some embodiments, the operation of the fluoroscope 215. The fluoroscope 215 may be used to capture the sequence of fluoroscopic images based on which the fluoroscopic 3D reconstruction is generated. In some embodiments, the memory or storage device 202 may include one or more storage devices, such as solid-state storage devices, e.g., flash memory chips. Alternatively, or in addition to the one or more solid-state storage devices, the memory 202 may include one or more mass storage devices connected to the processor 204 through a mass storage controller (not shown) and a communications bus (not shown). Although the description of computer-readable media contained herein refers to solid-state storage, it should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by the processor 204. That is, computer-readable storage media may include non-transitory, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Examples of computer-readable storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, Blu-Ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the workstation 80.

The application 81, when executed by the processor 204, may cause the display 206 to present a user interface 216. The user interface 216 may be configured to present the fluoroscopic 3D reconstruction and the generated virtual fluoroscopic image to the user, e.g., as shown in FIGS. 3A and 3B. The user interface 216 may be further configured to guide the user in identifying and marking the target in the displayed fluoroscopic 3D reconstruction, or in any other fluoroscopic image data, in accordance with the present disclosure.

The network interface 208 may be configured to connect to a network, such as a local area network (LAN), a wide area network (WAN), a wireless mobile network, a Bluetooth network, and/or the Internet, composed of a wired network and/or a wireless network. The network interface 208 may be used to connect the workstation 80 and the fluoroscope 215, and may also be used to receive the image data 214. The input device 210 may be any device by means of which a user may interact with the workstation 80, such as a mouse, keyboard, foot pedal, touch screen, and/or voice interface. The output module 212 may include any connectivity port or bus, such as a parallel port, a serial port, a universal serial bus (USB), or any other similar connectivity port known to those skilled in the art.

Referring now to fig. 4, fig. 4 is a flow chart of a method for navigating to a target using real-time two-dimensional fluoroscopic images according to the present disclosure. The method facilitates navigation to a target region within a patient during a medical procedure. The method utilizes real-time three-dimensional volumetric data based on fluoroscopy. Fluoroscopic three-dimensional volume data may be generated from the two-dimensional fluoroscopic images.

In step 400, a pre-operative CT scan of a target region may be received. The pre-operative CT scan may include a label or indication of the target. Step 400 may be similar to step 100 of the method of fig. 1.

In step 410, one or more virtual fluoroscopic images may be generated based on the pre-operative CT scan.

In step 420, a sequence of fluoroscopic images of the target region, acquired in real time at a plurality of angles relative to the target region, may be received. The sequence of images may be captured while the medical device is positioned in the target region. In some embodiments, the method may include one or more additional steps for guiding a user in acquiring the sequence of fluoroscopic images. In some embodiments, the method may include one or more additional steps for automatically acquiring the sequence of fluoroscopic images.

In step 430, a three-dimensional reconstruction of the target region may be generated based on the sequence of fluoroscopic images.
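The incorporated applications do not spell out the reconstruction algorithm in this text, so the following is only a toy shift-and-add tomosynthesis sketch under parallel-beam, small-angle assumptions. It conveys how a sweep of 2D frames yields a stack of slice images in which structures at a chosen depth come into focus; a real implementation would use the estimated device poses and a calibrated cone-beam geometry.

```python
import numpy as np
from scipy.ndimage import shift

def shift_and_add(frames, angles_deg, depths_mm, pixel_mm=0.5):
    """Toy shift-and-add tomosynthesis: for each reconstruction
    depth, translate every projection by the lateral parallax a
    point at that depth would show over the sweep, then average.
    Structures at the chosen depth reinforce; others blur out."""
    height, width = frames[0].shape
    stack = np.zeros((len(depths_mm), height, width))
    for d, depth in enumerate(depths_mm):
        for frame, ang in zip(frames, angles_deg):
            # Small-angle parallax of a point at this depth:
            # dx ~ depth * tan(angle), converted to pixels.
            dx = depth * np.tan(np.radians(ang)) / pixel_mm
            stack[d] += shift(frame, (0.0, dx), order=1, mode="nearest")
        stack[d] /= len(frames)
    return stack  # (depth, H, W) stack of slice-like images
```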

In some embodiments, the method further includes one or more steps for estimating the pose of the fluoroscopic imaging device while each of the fluoroscopic images, or at least a plurality of them, is acquired.

In some embodiments, a structure of markers may be positioned relative to the patient and the fluoroscopic imaging device such that each fluoroscopic image includes a projection of at least a portion of the structure of markers.
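As a hedged illustration of how such a marker structure can drive pose estimation, the snippet below scores one candidate pose by projecting the known 3D marker positions with a pinhole camera model and comparing them with marker blobs detected in a frame; the pose with the lowest score is taken as the most probable. The (R, t, f) parameterization and the nearest-blob matching are simplifications assumed for this sketch, not the method of the incorporated applications.

```python
import numpy as np

def score_pose(R, t, f, marker_points_3d, detected_2d):
    """Score one candidate C-arm pose: project the known 3-D marker
    structure with a pinhole model (rotation R, translation t, focal
    length f) and sum distances to the nearest detected 2-D marker
    blobs. The most probable pose minimises this score."""
    cam = (R @ marker_points_3d.T).T + t        # world -> camera frame
    proj = f * cam[:, :2] / cam[:, 2:3]         # pinhole projection
    dists = np.linalg.norm(proj[:, None, :] - detected_2d[None, :, :], axis=2)
    return dists.min(axis=1).sum()
```

A pose search would then evaluate score_pose over a set of candidate poses for each frame and keep the minimizer.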

Exemplary systems and methods for constructing such fluoroscopy-based three-dimensional volumetric data are disclosed in the above commonly owned U.S. Patent Application Publication No. 2017/0035379, U.S. Patent Application No. 15/892,053, and U.S. Provisional Application No. 62/628,017, which are incorporated herein by reference.

In some embodiments, once the pose estimation process is complete, the projection of the structure of markers on the images may be removed by using well-known methods. Such methods are disclosed in commonly owned U.S. Provisional Application No. 62/628,028, entitled "Image Reconstruction System and Method", filed on February 8, 2018 by Alexandroni et al., the entire contents of which are hereby incorporated by reference.

In step 440, the one or more virtual fluoroscopic images and the fluoroscopic 3D reconstruction may be displayed to the user. The display may be in accordance with step 120 of the method of FIG. 1 and as illustrated in FIG. 3B. In some embodiments, the one or more virtual fluoroscopic images and the fluoroscopic 3D reconstruction may be displayed to the user simultaneously. In other embodiments, they may be displayed in a non-simultaneous manner; for example, a virtual fluoroscopic image may be displayed on a separate screen, or displayed upon the user's request rather than together with the fluoroscopic 3D reconstruction.

In step 450, a selection of the target from the fluoroscopic 3D reconstruction may be received via the user. In some embodiments, the user may be directed to identify and mark the target in the fluoroscopic 3D reconstruction while using the one or more virtual fluoroscopic images as a reference.

In step 460, a selection of the medical device from the three-dimensional reconstruction or from the sequence of fluoroscopic images may be received. In some embodiments, the receiving of the selection may include automatically detecting at least a portion of the medical device in the sequence of fluoroscopic images or in the three-dimensional reconstruction; a user command to accept or reject the detection may also be received. In other embodiments, the selection may be made by the user. Systems and methods for automatic detection of a catheter in fluoroscopic data are disclosed in commonly owned U.S. Provisional Application No. 62/627,911 to Weingarten et al., entitled "System and Method for Catheter Detection in Fluoroscopic Images and Updating Displayed Position of Catheter", which is incorporated herein by reference.

In step 470, the offset of the medical device relative to the target may be determined. The determination of the offset may be based on the received selections of the target and of the medical device, for example as sketched below.
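As a minimal sketch of what this step computes, assume both selections have been mapped into the millimetre coordinate frame of the fluoroscopic 3D reconstruction; the helper function, slice spacing, and pixel size below are hypothetical illustration values.

```python
import numpy as np

def selection_to_xyz(slice_index, pixel_rc, slice_spacing_mm=1.0, pixel_mm=0.5):
    """Map a user selection (slice number plus in-slice pixel) in the
    fluoroscopic 3D reconstruction to millimetre coordinates."""
    row, col = pixel_rc
    return np.array([col * pixel_mm, row * pixel_mm, slice_index * slice_spacing_mm])

# Hypothetical selections of the target and of the catheter tip:
target_xyz = selection_to_xyz(42, (118, 201))
device_xyz = selection_to_xyz(40, (121, 196))
offset = target_xyz - device_xyz  # the offset determined in step 470
```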

In some embodiments, the method may further comprise a step of determining the position of the medical device within the patient's body based on data provided by a tracking system, such as an electromagnetic tracking system.

In some embodiments, the method may further comprise a step of generating a 3D rendering of the target region based on the pre-operative CT scan; the display of the target region may then include a display of the 3D rendering. In another step, the tracking system may be registered with the 3D rendering. The correction of the position of the medical device relative to the target based on the determined offset may then comprise a local update of the registration between the tracking system and the 3D rendering in the target region, as sketched below. In some embodiments, the method may further comprise a step of registering the fluoroscopic 3D reconstruction to the tracking system; in another step, and based on the above, a local registration between the fluoroscopic 3D reconstruction and the 3D rendering may be performed in the target region.
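One plausible, purely illustrative way to apply the determined offset as a local registration update is to compose the rigid tracking-to-CT transform with a translational correction near the target; the 4x4 homogeneous-matrix representation is an assumption of this sketch.

```python
import numpy as np

def update_local_registration(tracking_to_ct, offset_mm):
    """Compose a rigid tracking-to-CT registration (4x4 homogeneous
    matrix) with a translation by the determined device-to-target
    offset, so the displayed catheter position is corrected in the
    vicinity of the target region."""
    correction = np.eye(4)
    correction[:3, 3] = np.asarray(offset_mm)
    return correction @ tracking_to_ct
```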

In some embodiments, the target region may include at least a portion of the lungs, and the medical device may be configured to be navigated to the target region through the airway luminal network.

In some embodiments, the method may include receiving a selection of the target from one or more images of the sequence of fluoroscopic images, in addition to, or instead of, receiving the selection of the target from the fluoroscopic 3D reconstruction.

A computer program product for navigating to a target using real-time two-dimensional fluoroscopic images is disclosed herein. The computer program product may include a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to perform the steps of the method of FIG. 1 and/or FIG. 4.

FIG. 5 is a perspective view of an illustrative embodiment of an exemplary system for facilitating navigation to a soft tissue target via an airway network in accordance with the method of FIG. 4. The system 500 may be configured to construct fluoroscopy-based three-dimensional volumetric data of the target region from 2D fluoroscopic images. The system 500 may be further configured to facilitate the approach of the medical device to the target region, and to determine the position of the medical device relative to the target, by using electromagnetic navigation bronchoscopy (ENB).

System 500 may be configured for reviewing CT image data to identify one or more targets, planning a pathway to an identified target (planning phase), navigating an extended working channel (EWC) 512 of a catheter assembly to the target via a user interface (navigation phase), and confirming the position of the EWC 512 relative to the target. One such electromagnetic navigation (EMN) system is the ELECTROMAGNETIC NAVIGATION BRONCHOSCOPY® system currently sold by Medtronic PLC. The target may be tissue of interest identified by reviewing the CT image data during the planning phase. After navigation, a medical device, such as a biopsy tool or other tool, may be inserted into the EWC 512 to obtain a tissue sample from the tissue located at, or proximate to, the target.

As shown in FIG. 5, the EWC 512 is part of a catheter guide assembly 540. In practice, the EWC 512 is inserted into a bronchoscope 530 for access to the luminal network of the patient "P". Specifically, the EWC 512 of the catheter guide assembly 540 may be inserted into a working channel of the bronchoscope 530 for navigation through the patient's luminal network. A locatable guide (LG) 532, including a sensor 544, is inserted into the EWC 512 and locked into position such that the sensor 544 extends a desired distance beyond the distal tip of the EWC 512. Catheter guide assemblies are currently marketed and sold by Medtronic PLC under the brand names SUPERDIMENSION® Procedure Kits and EDGE™ Procedure Kits, and are contemplated as useable with the present disclosure. For a more detailed description of the catheter guide assembly 540, reference is made to commonly owned U.S. Patent Publication No. 2014/0046315 to Ladtkow et al., filed March 15, 2013, and U.S. Patent Nos. 7,233,820 and 9,044,254, each of which is incorporated herein by reference in its entirety.

System 500 generally includes: an operating table 520 configured to support a patient "P"; a bronchoscope 530 configured for insertion through the mouth of patient "P" into the airways of patient "P"; monitoring equipment 535 coupled to the bronchoscope 530 (e.g., a video display for displaying the video images received from the video imaging system of the bronchoscope 530); a locating or tracking system 550 including a locating module 552, a plurality of reference sensors 554, and a transmitter pad 556 coupled to a structure of markers; and a computing device 525 including software and/or hardware used to facilitate identification of the target, pathway planning to the target, navigation of the medical device to the target, and/or confirmation and/or determination of the placement of the EWC 512, or a suitable device therethrough, relative to the target. The computing device 525 may be similar to the workstation 80 of FIG. 2 and may be configured, among other things, to execute the methods of FIGS. 1 and 4.

The system 500, in this particular aspect, also includes a fluoroscopic imaging device 510 capable of acquiring fluoroscopic or X-ray images or video of the patient "P". The images, sequences of images, or video captured by the fluoroscopic imaging device 510 may be stored within the fluoroscopic imaging device 510 or transmitted to the computing device 525 for storage, processing, and display, e.g., as described with respect to FIG. 2. Additionally, the fluoroscopic imaging device 510 may move relative to the patient "P" so that images may be acquired from different angles or perspectives relative to the patient "P" to create a sequence of fluoroscopic images, such as a fluoroscopic video. The pose of the fluoroscopic imaging device 510 relative to the patient "P" at the time the images are captured may be estimated via the structure of markers and according to the method of FIG. 4. The structure of markers is positioned under patient "P", between patient "P" and the operating table 520, and between patient "P" and the radiation source or sensing unit of the fluoroscopic imaging device 510. The structure of markers may be coupled to the transmitter pad 556 (both indicated as 556 in FIG. 5) and positioned under the patient "P" on the operating table 520; the structure of markers and the transmitter pad 556 may be two separate elements or may be coupled in a fixed manner. The fluoroscopic imaging device 510 may include a single imaging device or more than one imaging device.

The computing device 525 may be any suitable computing device including a processor and a storage medium, wherein the processor is capable of executing instructions stored on the storage medium. The computing device 525 may further include a database configured to store patient data, CT data sets including CT images, fluoroscopy data sets including fluoroscopic images and video, fluoroscopic 3D reconstructions, navigation plans, and any other such data.
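As a loose illustration of the kinds of records such a database might hold, the following sketch groups the data sets named above into simple Python data classes. All field and type names are hypothetical and chosen only for readability.

```python
from dataclasses import dataclass, field

@dataclass
class NavigationPlan:
    target_label: str    # clinician-assigned label for the target
    path_waypoints: list # ordered airway waypoints leading to the target

@dataclass
class PatientRecord:
    patient_id: str
    ct_volume_path: str  # pre-operative CT data set
    fluoro_sequence_paths: list = field(default_factory=list)  # fluoroscopic images/video
    fluoro_3d_reconstruction_path: str = ""                    # fluoroscopic 3D reconstruction
    plans: list = field(default_factory=list)                  # NavigationPlan entries

# Hypothetical usage: one record per patient, with plans appended during planning.
record = PatientRecord(patient_id="P-0001", ct_volume_path="/data/ct/P-0001.nii")
record.plans.append(NavigationPlan(target_label="RUL nodule", path_waypoints=[]))
```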

With respect to the planning phase, the computing device 525 uses previously acquired CT image data to generate and display a three-dimensional model or rendering of the airway of the patient "P", enables identification of a target on the three-dimensional model (automatically, semi-automatically, or manually), and allows determination of a path through the airway of the patient "P" to tissue located at and around the target. More specifically, CT images acquired from previous CT scans are processed and assembled into a three-dimensional CT volume, which is then used to generate a three-dimensional model of the airway of the patient "P". The three-dimensional model may be displayed on a display associated with the computing device 525 or in any other suitable manner. Using the computing device 525, the three-dimensional model, or two-dimensional images generated from it, may be manipulated to facilitate identification of a target on the three-dimensional model or two-dimensional images, and an appropriate path through the airway of the patient "P" to tissue located at the target may be selected. Once selected, the path plan, the three-dimensional model, and images derived therefrom may be saved and exported to a navigation system for use during the navigation phase. Such planning software is currently marketed and sold by Medtronic PLC as part of a branded planning suite.
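As an illustration of the path determination step, the following is a minimal sketch that selects a path through a segmented airway modeled as a weighted graph. The branch names, lengths, and the use of the networkx library are illustrative assumptions, not the planning software's actual implementation.

```python
import networkx as nx

# Assumed airway graph: nodes are branch points of the segmented
# bronchial tree; edges carry the centerline length (mm) between them.
airways = nx.Graph()
airways.add_edge("trachea", "right_main", length=55.0)
airways.add_edge("right_main", "rul_bronchus", length=22.0)
airways.add_edge("rul_bronchus", "rb1", length=15.0)
airways.add_edge("right_main", "bronchus_intermedius", length=30.0)

# The node nearest the marked target would be found by projecting the
# target coordinates onto the segmented centerlines (not shown here).
path = nx.shortest_path(airways, source="trachea", target="rb1",
                        weight="length")
print(" -> ".join(path))  # trachea -> right_main -> rul_bronchus -> rb1
```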

With respect to the navigation phase, a six degree-of-freedom electromagnetic localization or tracking system 550 (e.g., similar to those disclosed in U.S. Patent Nos. 8,467,589 and 6,188,355 and published PCT Application Nos. WO 00/10456 and WO 01/67035, each of which is incorporated herein by reference in its entirety), or another suitable system for determining position, is used to perform registration of the images and the path for navigation, although other configurations are also contemplated.

The emitter pad 556 is positioned under the patient "P". The emitter pad 556 generates an electromagnetic field around at least a portion of the patient "P", within which the positions of the plurality of reference sensors 554 and the sensor element 544 can be determined using the tracking module 552. One or more of the reference sensors 554 are attached to the chest of the patient "P". The six degree-of-freedom coordinates of the reference sensors 554 are sent to the computing device 525 (which includes appropriate software), where they are used to calculate a patient reference coordinate system. Registration is generally performed to coordinate the positions of the three-dimensional model and the two-dimensional images from the planning phase with the airway of the patient "P" as observed through the bronchoscope 530, and to allow the position of the sensor 544 to be known accurately as the navigation phase progresses, even in portions of the airway that the bronchoscope 530 cannot reach.

Registration of the position of the patient "P" on the emitter pad 556 is performed by moving the locatable guide (LG) 532 through the airway of the patient "P". More specifically, as the LG 532 is moved through the airway, data relating to the position of the sensor 544 is recorded using the emitter pad 556, the reference sensors 554, and the tracking module 552. The shape resulting from this position data is compared to the internal geometry of the passages of the three-dimensional model generated during the planning phase, and a positional correlation between the shape and the three-dimensional model is determined based on the comparison, e.g., using software on the computing device 525. In addition, the software identifies non-tissue spaces (e.g., air-filled cavities) in the three-dimensional model. The software aligns, or registers, an image representing the position of the sensor 544 with the three-dimensional model and/or the two-dimensional images generated from the three-dimensional model, based on the recorded position data and on the assumption that the LG 532 remains located in non-tissue space in the airway of the patient "P". Alternatively, a manual registration technique may be employed by navigating the bronchoscope 530 with the sensor 544 to pre-specified locations in the lungs of the patient "P" and manually correlating the images from the bronchoscope with the model data of the three-dimensional model.
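Determining the positional correlation between the recorded sensor shape and the three-dimensional model amounts to a rigid point-set registration. The following is a minimal sketch of the least-squares rotation and translation step (the Kabsch algorithm), under the simplifying assumption that point correspondences are already known; practical systems iterate correspondence estimation and fitting (e.g., ICP). All names and synthetic values are illustrative.

```python
import numpy as np

def rigid_fit(sensor_pts, model_pts):
    """Least-squares rotation R and translation t mapping sensor_pts onto
    model_pts (Kabsch algorithm), assuming row-wise correspondences."""
    mu_s, mu_m = sensor_pts.mean(axis=0), model_pts.mean(axis=0)
    H = (sensor_pts - mu_s).T @ (model_pts - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_s
    return R, t

# Illustrative: positions recorded from the sensor 544 and the corresponding
# closest points on the three-dimensional model's airway centerlines.
sensor_path = np.random.rand(200, 3) * 100.0
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
model_path = sensor_path @ true_R.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_fit(sensor_path, model_path)
residual = np.linalg.norm(sensor_path @ R.T + t - model_path, axis=1).max()
print("max registration residual (mm):", residual)  # ~0 for this synthetic case
```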

After registration of the patient "P" to the image data and the path plan, a user interface is displayed in the navigation software that sets forth the path the clinician is to follow to reach the target. Such navigation software is currently marketed and sold by Medtronic PLC as part of a branded navigation suite.

Once the EWC 512 has been successfully navigated to the vicinity of the target, as depicted on the user interface, the LG 532 may be unlocked and removed from the EWC 512, leaving the EWC 512 in place as a guide channel for guiding a medical device to the target, including but not limited to an optical system, an ultrasound probe, a marker placement tool, a biopsy tool, an ablation tool (e.g., a microwave ablation device), a laser probe, a cryoprobe, a sensor probe, or an aspiration needle.

The fluoroscopic 3D reconstruction is generated based on the sequence of fluoroscopic images and on the projections of the structure of markers 556 onto that image sequence. One or more virtual fluoroscopic images may then be generated, via the computing device 525, based on the pre-operative CT scan.
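Generating a virtual fluoroscopic image from a CT scan is essentially a projection of the CT volume, often called a digitally reconstructed radiograph (DRR). The following is a minimal sketch using a simplified parallel-beam approximation with an assumed Hounsfield-to-attenuation conversion; a full implementation would instead cast cone-beam rays matching the estimated fluoroscope pose. All constants and names are illustrative.

```python
import numpy as np

def parallel_drr(ct_volume, axis=1):
    """Simplified digitally reconstructed radiograph: integrate CT
    attenuation along one volume axis (parallel-beam approximation)."""
    hu = ct_volume.astype(np.float32)
    mu = np.clip((hu + 1000.0) / 1000.0, 0.0, None)  # crude HU -> attenuation
    line_integrals = mu.sum(axis=axis)               # integrate along rays
    transmitted = np.exp(-0.02 * line_integrals)     # Beer-Lambert intensity
    return 1.0 - transmitted                         # invert for fluoroscopy-like display

ct = np.random.randint(-1000, 400, size=(64, 64, 64))  # placeholder CT volume (HU)
image = parallel_drr(ct, axis=1)
print(image.shape, float(image.min()), float(image.max()))
```

Projecting the CT volume along several such directions, chosen to match the simulated fluoroscope trajectory, would yield the set of virtual fluoroscopic images that include the target and its marker.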

The system 500, or a similar version thereof, in conjunction with the method of FIG. 4, may be used for a variety of procedures in addition to ENB procedures, with obvious modifications as needed, including procedures such as laparoscopy or robotic-assisted surgery.

As referred to herein, the terms "tracking" or "positioning" are used interchangeably. Although the present disclosure specifically describes using an EM tracking system to navigate or determine the position of a medical device, various tracking systems or positioning systems may be used or applied with respect to the methods and systems disclosed herein. Such tracking, positioning or navigation systems may use various methods including electromagnetic, infrared, echolocation, optical or imaging-based methods. Such systems may be based on pre-operative imaging and/or real-time imaging.

In embodiments, standard fluoroscopy may be used to facilitate navigation and tracking of the medical device, as disclosed, for example, in U.S. Patent No. 9,743,896 to Averbuch. Such fluoroscopy-based positioning or navigation methods may be used in addition to, or in lieu of, the EM tracking methods described above (e.g., as described with respect to FIG. 5) to facilitate or enhance navigation of the medical device.

From the foregoing and with reference to the various figures, those skilled in the art will appreciate that modifications may also be made to the present disclosure without departing from the scope thereof.

Detailed embodiments of the present disclosure are disclosed herein. However, the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms and aspects. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.

While several embodiments of the disclosure have been illustrated in the accompanying drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise.
