Graphical user interface for planning a procedure

Document No. 1602704, published 2020-01-07

Abstract (created 2018-04-18 by 王柏, F·巴巴格力, E·克鲁斯二世, S·S·埃切卡瓦, P·C·P·罗, and O·G·萨拉察): A system and method of planning a procedure includes a planning workstation comprising a display system and a user input device. The planning workstation is configured to: display image data via the display system; receive a first user input via the user input device; display, via the display system, a target of a medical procedure within the displayed image data identified based at least on the first user input; display, via the display system, an interactive image comprising the image data, a plurality of connected anatomical passageways, and the identified target; receive a second user input via the user input device; display, via the display system, a trajectory between the target and an exit point along a nearest passageway of the plurality of connected anatomical passageways identified based at least on the second user input; receive a third user input via the user input device; and adjust the interactive image based at least on the identified trajectory and the third user input.

1. A planning workstation, comprising:

a display system; and

a user input device;

wherein the planning workstation is configured to:

display image data via the display system;

receive a first user input via the user input device;

display, via the display system, a target of a medical procedure within the displayed image data identified based at least on the first user input;

display, via the display system, an interactive image comprising the image data, a plurality of connected anatomical passageways, and the identified target;

receive a second user input via the user input device;

display, via the display system, a trajectory between the target and an exit point along a nearest passageway of the plurality of connected anatomical passageways identified based at least on the second user input;

receive a third user input via the user input device; and

adjust the interactive image based at least on the identified trajectory and the third user input.

2. The planning workstation according to claim 1, wherein the user input device comprises a touch screen of the display system.

3. The planning workstation of claim 1, wherein adjusting the interactive image comprises:

determining a distance represented by the trajectory;

determining whether the distance is greater than a predetermined threshold;

receiving a fourth user input via the user input device;

identifying an unconnected passageway that is closer to the target than the nearest connected passageway based at least on the fourth user input; and

connecting the unconnected passageway to the plurality of connected passageways.

4. The planning workstation according to claim 3, wherein the planning workstation is further configured to receive a fifth user input via the user input device and rotate the interactive image based at least on the fifth user input to identify the unconnected passageway in the interactive image.

5. The planning workstation of claim 4, wherein the interactive image rotates about one or more user-defined rotation points.

6. The planning workstation of claim 1, wherein adjusting the interactive image comprises:

determining an exit angle based on the trajectory; and

adjusting the exit angle by changing the location of the exit point along the nearest connected passageway.

7. The planning workstation according to claim 1, wherein the planning workstation is further configured to receive a fourth user input via the user input device and display, via the display system, a hazard of the medical procedure within the displayed image data based on the fourth user input.

8. The planning workstation of claim 7, wherein the hazard corresponds to at least one of a vulnerable portion of a patient anatomy and excessive bending in one or more of the plurality of connected anatomical passageways.

9. The planning workstation according to claim 7, wherein the hazard is displayed using a hazard barrier comprising at least one of a circular disk, a conical hazard barrier, and a hemispherical hazard barrier.

10. A method of planning a medical procedure, the method comprising:

receiving a representation of an anatomical passageway comprising a plurality of branches;

displaying an image of the representation via a graphical user interface;

receiving a first user input representing a selection of a first label;

receiving a second user input representing a selection of a first branch of the plurality of branches; and

in response to the first user input and the second user input:

marking the first branch with the first label; and

displaying, via the graphical user interface, a representation of the first label applied to the first branch.

11. The method of claim 10, further comprising: based on labeling the first branch with the first label:

selecting a second label; and

displaying, via the graphical user interface, an indication that the second label has been selected.

12. The method of claim 11, wherein the second label is selected based on an arrangement of the plurality of branches within the anatomical passageway.

13. The method of claim 11, further comprising:

receiving a third user input representing a selection of a second branch of the plurality of branches;

marking the second branch with the second label; and

displaying, via the graphical user interface, a representation of the second label applied to the second branch.

14. The method of claim 10, further comprising:

identifying a branch group from the plurality of branches, the branch group including the first branch; and

in response to the first user input and the second user input, labeling the branch group with the first label.

15. The method of claim 14, wherein identifying the branch group comprises identifying descendant branches of the first branch and including the descendant branches in the branch group.

16. The method of claim 10, further comprising: displaying, via the graphical user interface, an indication of a second branch of the plurality of branches that does not have an assigned label.

17. The method of claim 10, further comprising: in response to the first user input and the second user input:

assigning a color to the first branch; and

displaying an image of the representation of the anatomical passageway, wherein the first branch is colored with the assigned color.

18. The method of claim 10, further comprising:

providing a cursor via the graphical user interface;

detecting that the cursor is aligned with the first branch of the plurality of branches; and

modifying a representation of the cursor based on detecting that the cursor is aligned with the first branch.

19. The method of claim 10, further comprising:

receiving a third user input representing a rotation instruction;

rotating the representation of the anatomical passageway in response to the third user input; and

displaying, via the graphical user interface, an image of the rotated representation of the anatomical passageway.

20. A non-transitory machine-readable medium comprising a plurality of machine-readable instructions which, when executed by one or more processors associated with a planning workstation, are adapted to cause the one or more processors to perform a method comprising:

providing a plurality of interactive windows for a user to view the plan of a medical procedure, wherein each window of the plurality of interactive windows displays a different rendering of an anatomical passageway model;

displaying a path through the anatomical passageway to a target of the medical procedure;

displaying a virtual image of an instrument within the anatomical passageway;

displaying a control point corresponding to a distal end of the instrument in at least one of the plurality of interactive windows;

receiving a user input;

identifying a position of the control point based at least on the user input; and

in response to receiving the user input, dynamically updating a position of the instrument in at least one of the plurality of interactive windows to match the position of the control point.

Technical Field

The present disclosure relates to systems and methods for performing image-guided procedures, and more particularly to systems and methods for analyzing, identifying, and/or labeling anatomy using a graphical user interface.

Background

Minimally invasive medical techniques aim to reduce the amount of tissue damaged during a medical procedure, thereby reducing patient recovery time, discomfort and harmful side effects. Such minimally invasive techniques may be performed through a natural orifice in the patient anatomy or through one or more surgical incisions. Through these natural orifices or incisions, a clinician may insert minimally invasive medical instruments (including surgical, diagnostic, therapeutic, or biopsy instruments) to reach a target tissue location. One such minimally invasive technique is the use of a steerable, flexible, elongate device (such as a catheter) that can be inserted into an anatomical passageway and navigated toward a region of interest within the patient's anatomy. Control of such elongated devices by medical personnel during image-guided procedures involves management of several degrees of freedom, including at least management of insertion and retraction of the elongated device and steering or bending radius of the device. In addition, different modes of operation may be supported.

Accordingly, it would be advantageous to provide a graphical user interface that supports intuitive planning of medical procedures, including minimally invasive medical techniques.

Disclosure of Invention

Embodiments of the invention are best summarized by the claims appended to the specification.

According to some embodiments, a method for planning a medical procedure using a graphical user interface may include displaying image data via the graphical user interface and receiving a first user input defining a target of the medical procedure within the displayed image data. The method may further include displaying, via the graphical user interface, an interactive image including the image data, a plurality of connected anatomical passageways detected by segmentation of the image data, and the defined target. The method may further include receiving a second user input defining a trajectory between the target and an exit point along a nearest passageway of the plurality of connected anatomical passageways, and receiving a third user input adjusting the interactive image based on the defined trajectory.
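
To make the trajectory step concrete, the sketch below computes the exit point and straight-line trajectory from a target to the nearest connected passageway. This is a minimal illustration only, assuming the passageways are represented as sampled centerline points; the function and variable names are hypothetical and not taken from the disclosure.

```python
import numpy as np

def nearest_exit_point(target, centerline_points):
    """Find the centerline point closest to the target and the trajectory to it.

    target: (3,) array, the user-defined target position.
    centerline_points: (N, 3) array of points sampled along the connected
        passageways (an assumed representation of the segmented tree).
    """
    target = np.asarray(target, dtype=float)
    pts = np.asarray(centerline_points, dtype=float)
    dists = np.linalg.norm(pts - target, axis=1)
    exit_point = pts[np.argmin(dists)]
    trajectory = target - exit_point  # vector from exit point to target
    return exit_point, trajectory, float(dists.min())
```

The returned distance is the quantity a threshold test, such as the one described later for adjusting the interactive image, would compare against.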

According to some embodiments, a method for planning a medical procedure using a graphical user interface may include displaying image data via the graphical user interface, receiving a first user input defining a hazard within the displayed image data, and displaying an interactive image. The interactive image includes the image data, a plurality of connected anatomical passageways detected by segmentation of the image data, and the defined hazard.

According to some embodiments, a method for previewing a plan of a medical procedure using a graphical user interface may include providing a plurality of interactive windows for a user to view the plan of the medical procedure. Each of the plurality of interactive windows may display a different rendering of the model of the anatomical passageway. The method may further include displaying a path through the anatomical passageway to a target of the medical procedure, displaying a virtual image of the instrument within the anatomical passageway, displaying a control point corresponding to a distal end of the instrument in at least one of the plurality of interactive windows, receiving user input defining a position of the control point, and dynamically updating the position of the instrument in each of the plurality of interactive windows to match the position of the control point in response to receiving the user input.
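
The dynamic update across windows can be pictured as a simple observer pattern: one controller pushes the new control-point position to every rendering. The sketch below is an assumption about structure, not the disclosed implementation; the class and method names are hypothetical.

```python
class InteractiveWindow:
    """One rendering of the passageway model (assumed minimal interface)."""
    def __init__(self, name):
        self.name = name

    def render_instrument(self, tip_position):
        # A real window would redraw its own rendering here.
        print(f"{self.name}: instrument tip drawn at {tip_position}")

class PreviewController:
    """Keeps every open window consistent with the control point."""
    def __init__(self, windows):
        self.windows = windows

    def on_control_point_moved(self, new_position):
        # Dynamically update each rendering to match the control point.
        for window in self.windows:
            window.render_instrument(new_position)

controller = PreviewController([InteractiveWindow("axial slice"),
                                InteractiveWindow("3D model")])
controller.on_control_point_moved((12.0, -4.5, 30.2))
```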

According to some embodiments, a planning workstation may include a display system and a user input device. The planning workstation may be configured to: display image data via the display system; receive a first user input via the user input device, the first user input defining a target of a medical procedure within the displayed image data; display, via the display system, an interactive image comprising the image data, a plurality of connected anatomical passageways detected by segmenting the image data, and the defined target; receive a second user input via the user input device, the second user input defining a trajectory between the target and an exit point along a nearest passageway of the plurality of connected anatomical passageways; and receive a third user input via the user input device, the third user input adjusting the interactive image based on the defined trajectory.

According to some embodiments, a non-transitory machine-readable medium may comprise a plurality of machine-readable instructions adapted to cause one or more processors associated with a planning workstation to perform a method when executed by the one or more processors. The method may include displaying image data via a graphical user interface, receiving a first user input defining a hazard within the displayed image data, and displaying an interactive image. The interactive image may include the image data, a plurality of connected anatomical passageways detected by segmentation of the image data, and the defined hazard.

According to some embodiments, a non-transitory machine-readable medium may comprise a plurality of machine-readable instructions adapted to cause one or more processors associated with a planning workstation to perform a method when executed by the one or more processors. The method may include providing a plurality of interactive windows for a user to view a plan for a medical procedure, displaying a path through an anatomical passageway to a target of the medical procedure, displaying a virtual image of an instrument within the anatomical passageway, displaying a control point corresponding to a distal end of the instrument in at least one of the plurality of interactive windows, receiving user input defining a position of the control point, and dynamically updating the position of the instrument in each of the plurality of interactive windows to match the position of the control point in response to receiving the user input. Each window of the plurality of interactive windows displays a different rendering of the model of the anatomical passageway.

According to some embodiments, a method of planning a medical procedure may include receiving imaging data and providing a model of an anatomical passageway based on the imaging data, the anatomical passageway including a plurality of branches. An image of the model may be displayed via a graphical user interface. A first user input may be received representing a selection of a first label, and a second user input may be received representing a selection of a first branch of the plurality of branches. In response to the first user input and the second user input, the first branch may be labeled with the first label and a representation of the first label applied to the first branch may be displayed via the graphical user interface.
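
The label-then-select interaction lends itself to a small data structure that records branch-to-label assignments and auto-selects the next unused label, as the summary of claim 11 above suggests. The sketch below is illustrative only; the label names and the ordering rule are assumptions.

```python
# Hypothetical airway labels, in the order a workflow might offer them.
LABEL_LIST = ["RightMainBronchus", "LeftMainBronchus", "RB1", "RB2"]

class BranchModel:
    def __init__(self):
        self.labels = {}  # branch id -> applied label

    def apply_label(self, branch_id, label):
        self.labels[branch_id] = label

    def next_unused_label(self):
        """Auto-select the next label to offer after one is applied."""
        used = set(self.labels.values())
        for label in LABEL_LIST:
            if label not in used:
                return label
        return None  # every label has been assigned

model = BranchModel()
model.apply_label("branch_07", model.next_unused_label())
print(model.labels)  # {'branch_07': 'RightMainBronchus'}
```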

According to some embodiments, a non-transitory machine-readable medium may include a plurality of machine-readable instructions. The instructions may cause the one or more processors to: display, via a graphical user interface, a model of an anatomical passageway comprising a plurality of branches; display, via the graphical user interface, a list of anatomical labels; receive a first user input selecting a first label from the list of anatomical labels; receive a second user input selecting a first branch of the plurality of branches; and/or apply the first label to the first branch.

According to some embodiments, a planning workstation may include a display system and a user input device. The planning workstation may be configured to: display an anatomical passageway comprising a plurality of branches; display a list of labels; receive, via the user input device, a first user input selecting a first branch of the plurality of branches; receive, via the user input device, a second user input selecting a first label from the list of labels; and in response to the first user input and the second user input, display, via the display system, a representation of the first label applied to the first branch.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory in nature and are intended to provide an understanding of the present disclosure without limiting the scope of the disclosure. In this regard, other aspects, features and advantages of the present disclosure will be apparent to those skilled in the art from the following detailed description.

Drawings

This patent or application document contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the office upon request and payment of the necessary fee.

Fig. 1 is a simplified diagram of a teleoperational medical system according to some embodiments.

Fig. 2A is a simplified diagram of a medical instrument system according to some embodiments.

Fig. 2B is a simplified diagram of a medical instrument with an extended medical tool according to some embodiments.

Fig. 3A and 3B are simplified diagrams of side views of a patient coordinate space including a medical instrument mounted on an insertion assembly, according to some embodiments.

FIG. 4 is a simplified diagram of a graphical user interface in a data selection mode according to some embodiments.

Fig. 5A-5G are simplified diagrams of a graphical user interface in a hybrid segmentation and planning mode according to some embodiments.

FIG. 6 is a simplified diagram of a graphical user interface in preview mode according to some embodiments.

FIG. 7 is a simplified diagram of a graphical user interface in a save mode according to some embodiments.

FIG. 8 is a simplified diagram of a graphical user interface in a manage mode according to some embodiments.

Fig. 9 is a simplified diagram of a graphical user interface in a review mode according to some embodiments.

Fig. 10 is a simplified diagram of a method for planning a medical procedure according to some embodiments.

Fig. 11 is a simplified diagram of a method for modifying an anatomical representation (e.g., a model) to provide access to a target of a medical procedure, according to some embodiments.

Fig. 12 is a simplified diagram of a method for augmenting an anatomical representation (e.g., model) to provide access to a target of a medical procedure, in accordance with some embodiments.

Fig. 13 is a simplified diagram of a method for planning a medical procedure using a graphical user interface, according to some embodiments.

Fig. 14A-14F are simplified diagrams of a graphical user interface in a branch marking mode according to some embodiments.

Fig. 15A and 15B are simplified diagrams of methods of applying labels to a branch model of an anatomic passageway according to some embodiments.

Fig. 16 is a simplified diagram of a method for planning a medical procedure, according to some embodiments.

Fig. 17A-17N are schematic illustrations of a graphical user interface corresponding to performance of a method for planning a medical procedure, according to some embodiments.

Embodiments of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be understood that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein the depictions in the figures are for the purpose of illustrating the embodiments of the disclosure and are not for the purpose of limiting the embodiments of the disclosure.

Detailed Description

In the following description, specific details are set forth describing some embodiments according to the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art, that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are illustrative and not restrictive. Those skilled in the art may implement other elements that, although not specifically described herein, are within the scope and spirit of the present disclosure. In addition, to avoid unnecessary repetition, one or more features shown and described in connection with one embodiment may be incorporated into other embodiments unless specifically described otherwise or if one or more features render an embodiment inoperative.

In some instances, well known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the embodiments.

The present disclosure describes various instruments and portions of instruments in terms of their state in three-dimensional space. As used herein, the term "position" refers to the location of an object or a portion of an object in three-dimensional space (e.g., three translational degrees of freedom along Cartesian x, y, and z coordinates). As used herein, the term "orientation" refers to the rotational placement (three rotational degrees of freedom, e.g., roll, pitch, and yaw) of an object or a portion of an object. As used herein, the term "pose" refers to the position of an object or a portion of an object in at least one translational degree of freedom and the orientation of that object or portion in at least one rotational degree of freedom (up to six total degrees of freedom). As used herein, the term "shape" refers to a set of poses, positions, or orientations measured along an object.
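
Under these definitions, a minimal encoding of the terms might look as follows; the field names and units (radians) are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    # Position: three translational degrees of freedom (Cartesian x, y, z).
    x: float
    y: float
    z: float
    # Orientation: three rotational degrees of freedom (roll, pitch, yaw).
    roll: float
    pitch: float
    yaw: float

# A "shape", in the sense defined above: poses sampled along the device.
Shape = List[Pose]
shape: Shape = [Pose(0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
                Pose(0.0, 0.0, 10.0, 0.0, 0.05, 0.0)]
```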

One general aspect of the present disclosure includes a method for planning a medical procedure, the method comprising: displaying image data via a graphical user interface; receiving a first user input through the graphical user interface; identifying at least a portion of a target within the displayed image data using the first user input; displaying, via the graphical user interface, an interactive image, the interactive image comprising the image data, a plurality of connected anatomical passageways associated with the image data, and the identified target; receiving a second user input; identifying at least a portion of a trajectory between the target and an exit point along a nearest connected passageway of the plurality of connected anatomical passageways using the second user input; receiving a third user input; and adjusting the interactive image based at least on the identified trajectory and using the third user input. Implementations may include one or more of the following features. The method may include providing a line tool via the graphical user interface to receive the second user input. In the method, adjusting the interactive image may include: determining a distance represented by the trajectory; determining whether the distance is greater than a predetermined threshold; receiving a fourth user input; identifying at least a portion of an unconnected passageway that is closer to the target than the nearest connected passageway using the fourth user input; and connecting the unconnected passageway to the plurality of connected passageways. Identifying the unconnected passageway may include receiving a fifth user input and iteratively rotating the interactive image using the fifth user input to identify the unconnected passageway in the interactive image. The interactive image may be iteratively rotated about one or more user-defined rotation points, and the method may also include identifying a rotation axis based at least on the one or more user-defined rotation points, with the interactive image iteratively rotated about the rotation axis. In the method, adjusting the interactive image may include: determining an exit angle based on the trajectory; and adjusting the exit angle by changing the location of the exit point along the nearest connected passageway. The method may include providing a slider via the graphical user interface and receiving, via the slider, user input that changes the exit point. The method may also include receiving a sixth user input and identifying a hazard of the medical procedure within the displayed image data using the sixth user input. The hazard may correspond to a vulnerable portion of the patient's anatomy, or to excessive bending in one or more of the plurality of connected anatomical passageways. The method may include displaying a hazard barrier to represent the hazard, where the hazard barrier comprises at least one of a disk, a conical hazard barrier, and a hemispherical hazard barrier. The method may also include receiving a seventh user input and identifying at least a portion of a path within the plurality of connected passageways to the target using the seventh user input. The first user input may be received before segmentation of the image data is complete. Displaying the interactive image may include overlaying the plurality of connected anatomical passageways on the displayed image data, with the plurality of connected anatomical passageways dynamically updated to reflect the progress of the segmentation of the image data. The method may also include receiving a further user input and identifying at least a portion of one or more anatomical passageways that are disconnected from the plurality of connected anatomical passageways. The plurality of connected anatomical passageways may comprise lung airways.
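
The distance-threshold adjustment described above can be sketched under the same sampled-centerline assumption as before: if the connected passageways come no closer to the target than a threshold, the closest unconnected segment is attached to the tree. The threshold value and all names are hypothetical.

```python
import numpy as np

DISTANCE_THRESHOLD_MM = 20.0  # assumed value; the disclosure leaves it unspecified

def maybe_connect(target, connected, unconnected_segments):
    """If the target is too far from the connected tree, attach a closer segment.

    connected: (N, 3) centerline points of the connected passageway tree.
    unconnected_segments: list of (M, 3) arrays found by segmentation but
        not yet attached to the tree.
    """
    target = np.asarray(target, dtype=float)
    connected = np.asarray(connected, dtype=float)
    d_connected = np.linalg.norm(connected - target, axis=1).min()
    if d_connected <= DISTANCE_THRESHOLD_MM:
        return connected  # close enough; nothing to do
    # Pick the unconnected segment that comes closest to the target.
    best = min(unconnected_segments,
               key=lambda seg: np.linalg.norm(np.asarray(seg) - target, axis=1).min())
    best = np.asarray(best, dtype=float)
    if np.linalg.norm(best - target, axis=1).min() < d_connected:
        connected = np.vstack([connected, best])  # "connect" it to the tree
    return connected
```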

One general aspect of the present disclosure includes a method for planning a medical procedure, the method comprising: providing a graphical user interface; displaying image data via the graphical user interface; receiving a first user input; identifying at least a portion of a hazard within the displayed image data using the first user input; and displaying an interactive image comprising the image data, a plurality of connected anatomical passageways, and a representation of the identified hazard. Implementations may include one or more of the following features. The hazard may include a vulnerable portion of the patient's anatomy, or excessive bending within the plurality of connected anatomical passageways. The method may include displaying a hazard barrier to represent the hazard, where the hazard barrier comprises at least one of a disk, a conical hazard barrier, and a hemispherical hazard barrier. The plurality of connected anatomical passageways may comprise lung airways. The method may further include receiving a second user input and identifying at least a portion of a target of the medical procedure within the displayed image data, where the hazard corresponds to a vulnerable portion of the patient anatomy that is proximate to the target. The vulnerable portion of the patient's anatomy may include at least one of the lung pleura, blood vessels, large bullae, and the heart. The first user input may be received before segmentation of the image data is complete.
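
One plausible encoding of a hazard barrier is a simple geometric primitive tested against planned points; the hemispherical case might look like the sketch below. The parameterization (center, radius, axis) is an assumption, since the disclosure does not specify one.

```python
import numpy as np

def hemisphere_barrier_violated(point, center, radius, axis):
    """Test whether a planned point enters a hemispherical hazard barrier.

    center: barrier center; radius: barrier radius;
    axis: unit vector pointing into the protected half-space.
    """
    v = np.asarray(point, dtype=float) - np.asarray(center, dtype=float)
    inside_sphere = np.linalg.norm(v) <= radius
    on_protected_side = np.dot(v, np.asarray(axis, dtype=float)) >= 0.0
    return inside_sphere and on_protected_side

# Example: a barrier guarding tissue directly above the target.
print(hemisphere_barrier_violated((0, 0, 4), (0, 0, 0), 5.0, (0, 0, 1)))  # True
```

A disk or cone would be tested analogously, each with its own parameterization.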

One general aspect of the present disclosure includes a method for previewing a plan of a medical procedure, the method comprising: providing a graphical user interface comprising a plurality of interactive windows displaying the plan of the medical procedure, wherein at least two different renderings of a model of an anatomical passageway are displayed using the plurality of interactive windows; displaying a path through the anatomical passageway to a target of the medical procedure; displaying a virtual image of an instrument within the anatomical passageway; displaying a control point corresponding to a distal end of the instrument in at least one of the plurality of interactive windows; receiving a user input; identifying a position of the control point using the user input; and dynamically updating the position of the instrument in at least two of the plurality of interactive windows to match the position of the control point in response to receiving the user input.

One general aspect of the present disclosure includes a planning workstation comprising: a display system; and a user input device; wherein the planning workstation is configured to: display image data via the display system; receive a first user input via the user input device; display, via the display system, a target of a medical procedure within the displayed image data identified based at least on the first user input; display, via the display system, an interactive image, the interactive image comprising the image data, a plurality of connected anatomical passageways, and the identified target; receive a second user input via the user input device; display, via the display system, a trajectory between the target and an exit point along a nearest passageway of the plurality of connected anatomical passageways identified based at least on the second user input; receive a third user input via the user input device; and adjust the interactive image based at least on the identified trajectory and the third user input. Implementations may include one or more of the following features. The user input device may comprise a touch screen of the display system. Adjusting the interactive image may include: determining a distance represented by the trajectory; determining whether the distance is greater than a predetermined threshold; receiving a fourth user input via the user input device; identifying an unconnected passageway that is closer to the target than the nearest connected passageway based at least on the fourth user input; and connecting the unconnected passageway to the plurality of connected passageways. The planning workstation may be further configured to receive a fifth user input via the user input device and rotate the interactive image based at least on the fifth user input to identify the unconnected passageway in the interactive image. The interactive image may rotate about one or more user-defined rotation points, and the planning workstation may be further configured to identify a rotation axis based on the one or more user-defined rotation points. Adjusting the interactive image may include: determining an exit angle based on the trajectory; and adjusting the exit angle by changing the location of the exit point along the nearest connected passageway. The planning workstation may be further configured to receive a fourth user input via the user input device and display, via the display system, a hazard of the medical procedure within the displayed image data based on the fourth user input. The hazard may correspond to at least one of a vulnerable portion of a patient anatomy and excessive bending in one or more of the plurality of connected anatomical passageways. The hazard may be displayed using a hazard barrier, where the hazard barrier comprises at least one of a disk, a conical hazard barrier, and a hemispherical hazard barrier. The user input device may be configured to receive the first user input before segmentation of the image data is complete.
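
The exit-angle computation referenced above can be illustrated by measuring the angle between the planned trajectory and the local direction of the passageway at the exit point. This definition of "exit angle" is an assumption; the disclosure does not define the geometry precisely.

```python
import numpy as np

def exit_angle_deg(trajectory, passageway_tangent):
    """Angle between the planned trajectory and the passageway direction
    at the exit point (one plausible definition of the exit angle)."""
    t = np.asarray(trajectory, dtype=float)
    d = np.asarray(passageway_tangent, dtype=float)
    t = t / np.linalg.norm(t)
    d = d / np.linalg.norm(d)
    return float(np.degrees(np.arccos(np.clip(np.dot(t, d), -1.0, 1.0))))

# Moving the exit point along the passageway changes the local tangent,
# and therefore the exit angle the tool must achieve.
print(exit_angle_deg([1.0, 0.0, 1.0], [0.0, 0.0, 1.0]))  # ~45.0
```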

One general aspect of the present disclosure includes a non-transitory machine-readable medium comprising a plurality of machine-readable instructions which, when executed by one or more processors associated with a planning workstation, are adapted to cause the one or more processors to perform a method comprising: displaying image data via a graphical user interface; receiving a first user input; identifying a hazard within the displayed image data based at least on the first user input; and displaying an interactive image comprising the image data, a plurality of connected anatomical passageways detected by segmentation of the image data, and the identified hazard. Implementations may include one or more of the following features. The hazard may comprise a vulnerable portion of a patient's anatomy, or excessive bending within the plurality of connected anatomical passageways. The hazard may be represented using a hazard barrier, where the hazard barrier comprises at least one of a disk, a conical hazard barrier, and a hemispherical hazard barrier. The plurality of connected anatomical passageways may comprise lung airways. The machine-readable instructions may be adapted to cause the one or more processors to perform the method further comprising: receiving a second user input and identifying a target of the medical procedure within the displayed image data using at least the second user input, where the hazard corresponds to a vulnerable portion of a patient anatomy that is proximate to the target. The vulnerable portion of the patient anatomy may comprise at least one of a lung pleura, a blood vessel, a bulla, and a heart. The machine-readable instructions may be adapted to cause the one or more processors to receive the first user input before segmentation of the image data is complete.

One general aspect of the present disclosure includes a non-transitory machine-readable medium comprising a plurality of machine-readable instructions which, when executed by one or more processors associated with a planning workstation, are adapted to cause the one or more processors to perform a method comprising: providing a plurality of interactive windows for a user to view a plan of a medical procedure, wherein each window of the plurality of interactive windows displays a different rendering of an anatomical passageway model; displaying a path through the anatomical passageway to a target of the medical procedure; displaying a virtual image of an instrument within the anatomical passageway; displaying a control point corresponding to a distal end of the instrument in at least one of the plurality of interactive windows; receiving a user input; identifying a position of the control point based at least on the user input; and in response to receiving the user input, dynamically updating a position of the instrument in at least one of the plurality of interactive windows to match the position of the control point.

One general aspect of the present disclosure includes a method of planning a medical procedure, the method comprising: receiving a representation of an anatomical passageway comprising a plurality of branches; displaying, via a graphical user interface, an image of the representation; receiving a first user input representing a selection of a first label; receiving a second user input representing a selection of a first branch of the plurality of branches; and in response to the first user input and the second user input: labeling the first branch with the first label; and displaying, via the graphical user interface, a representation of the first label applied to the first branch. Implementations may include one or more of the following features. The method may also include, based on labeling the first branch with the first label: selecting a second label; and displaying, via the graphical user interface, an indication that the second label has been selected. The second label may be selected based on an arrangement of the plurality of branches within the anatomical passageway. The method may further include: receiving a third user input representing a selection of a second branch of the plurality of branches; labeling the second branch with the second label; and displaying, via the graphical user interface, a representation of the second label applied to the second branch. The method may further include: identifying a branch group from the plurality of branches, the branch group including the first branch; and in response to the first user input and the second user input, labeling the branch group with the first label. Identifying the branch group may include identifying descendant branches of the first branch and including the descendant branches in the branch group, or identifying antecedent branches of the first branch and including the antecedent branches in the branch group. The method may further include displaying, via the graphical user interface, an indication of a second branch of the plurality of branches that does not have an assigned label. The method may further include: receiving a third user input representing a selection of a second label; labeling a second branch with the second label; and displaying, via the graphical user interface, a representation of the second label applied to the second branch. The representation of the anatomical passageway may be based on imaging data of the patient; the imaging data may include an anatomical structure, in which case the image of the representation displays both the anatomical passageway and the anatomical structure. The method may further include, in response to the first user input and the second user input: assigning a color to the first branch; and displaying an image of the representation of the anatomical passageway with the first branch colored with the assigned color. The method may further include: providing a cursor via the graphical user interface; detecting that the cursor is aligned with the first branch of the plurality of branches; and modifying a representation of the cursor based on detecting that the cursor is aligned with the first branch.
The method may further include: receiving a third user input representing a rotation instruction; rotating the representation of the anatomical passageway in response to the third user input; and displaying, via the graphical user interface, an image of the rotated representation of the anatomical passageway.

One general aspect of the present disclosure includes a non-transitory machine-readable medium comprising a plurality of machine-readable instructions which, when executed by one or more processors, cause the one or more processors to perform operations comprising: displaying, via a graphical user interface, a representation of an anatomical passageway, wherein the anatomical passageway comprises a plurality of branches; displaying, via the graphical user interface, a list of anatomical labels; receiving a first user input selecting a first label from the list of anatomical labels; receiving a second user input selecting a first branch of the plurality of branches; and applying the first label to the first branch. Implementations may include one or more of the following features. Where the first branch is included in a branch group, further instructions may cause the one or more processors to apply the first label to the branch group based on the second user input selecting the first branch of the plurality of branches. Further instructions may cause the one or more processors to identify the branch group by identifying descendant branches of the first branch and adding the descendant branches to the branch group. Further instructions may cause the one or more processors to perform operations comprising: upon applying the first label to the first branch, selecting a second label based on the arrangement of branches within the anatomical passageway. Further instructions may cause the one or more processors to perform operations comprising: receiving a third user input selecting a second branch of the plurality of branches; and applying the second label to the second branch. Further instructions may cause the one or more processors to perform operations comprising, upon applying each label in the list of anatomical labels: identifying an unlabeled second branch of the plurality of branches; and displaying, via the graphical user interface, an indication that the second branch is unlabeled. Further instructions may cause the one or more processors to perform operations comprising: receiving a third user input selecting a second label from the list of anatomical labels; and applying the second label to a second branch. Further instructions may cause the one or more processors to perform operations comprising: displaying, via the graphical user interface, a status indicator of the second label indicating that the second label is assigned to more than one branch. Further instructions may cause the one or more processors to perform operations comprising: displaying, via the graphical user interface, a representation of the first label applied to the first branch. Further instructions may cause the one or more processors to perform operations comprising, based on applying the first label to the first branch: assigning a color to the first branch; and displaying, via the graphical user interface, a representation of the anatomical passageway with the first branch colored with the assigned color.
Where the first branch is included in a branch group, further instructions may cause the one or more processors to perform operations comprising, based on applying the first label to the first branch: assigning a color to the branch group; and displaying, via the graphical user interface, a representation of the anatomical passageway with the branch group colored with the assigned color. Further instructions may cause the one or more processors to: provide a cursor via the graphical user interface; detect that the cursor is aligned with one of the plurality of branches; and modify a representation of the cursor based on detecting the alignment. Further instructions may cause the one or more processors to, based on the first user input selecting the first label and the second user input selecting the first branch of the plurality of branches: compare the first label and the first branch with a second label applied to a second branch to determine whether the first label and the second label conflict; and apply the first label to the first branch when it is determined that the first label and the second label do not conflict.
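
The conflict check described above can be reduced to a small guard around label assignment. The one-label-per-branch rule used here is only one plausible conflict criterion; the disclosure leaves the rule unspecified, and all names are hypothetical.

```python
def labels_conflict(label_a, label_b):
    """A minimal conflict rule: a label may be assigned to only one branch."""
    return label_a == label_b

def try_apply(assignments, branch_id, label):
    """assignments: dict mapping branch id -> label already applied."""
    for other_branch, other_label in assignments.items():
        if other_branch != branch_id and labels_conflict(label, other_label):
            return False  # conflict detected: refuse to apply the label
    assignments[branch_id] = label
    return True

assignments = {"branch_01": "RB1"}
print(try_apply(assignments, "branch_02", "RB1"))  # False, RB1 already in use
print(try_apply(assignments, "branch_02", "RB2"))  # True
```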

One general aspect of the present disclosure includes a planning workstation comprising: a display system; and a user input device; wherein the planning workstation may be configured to: display, via the display system, an anatomical passageway comprising a plurality of branches; display a list of labels via the display system; receive, via the user input device, a first user input selecting a first branch of the plurality of branches; receive, via the user input device, a second user input selecting a first label from the list of labels; and in response to the first user input and the second user input, display, via the display system, a representation of the first label applied to the first branch. Implementations may include one or more of the following features. The planning workstation may be further configured to select a second label from the list of labels, based on an arrangement of branches within the anatomical passageway, in response to the first label being applied to the first branch. The planning workstation may be further configured to perform operations comprising: identifying an unlabeled second branch of the plurality of branches; and displaying, via the display system, an indication that the second branch is unlabeled. The planning workstation may be further configured to perform operations comprising, in response to the first user input selecting the first branch: identifying a branch group from the plurality of branches, the branch group including the first branch; applying the first label to the branch group; and displaying, via the display system, a representation of the first label applied to the branch group. The planning workstation may be further configured to perform operations comprising, in response to the first user input selecting the first branch: determining whether the first label conflicts with a second label; and applying the first label to the first branch when it is determined that the first label does not conflict with the second label.

Fig. 1 is a simplified diagram of a teleoperational medical system 100 according to some embodiments. In some embodiments, teleoperational medical system 100 may be suitable for use in, for example, surgical procedures, diagnostic procedures, therapeutic procedures, or biopsy procedures. As shown in fig. 1, the medical system 100 generally includes a teleoperated manipulator assembly 102 for operating a medical instrument 104 to perform various procedures on a patient P. The teleoperated manipulator assembly 102 is mounted on or near an operating table T. Master assembly 106 allows an operator (e.g., a surgeon, clinician, or physician O as shown in fig. 1) to view the intervention site and control teleoperated manipulator assembly 102.

The master assembly 106 may be located at a surgeon's console, which is typically located in the same room as the operating table T, such as on the side of the operating table where the patient P is located. However, it should be understood that physician O may be located in a different room or a completely different building from patient P. The master assembly 106 generally includes one or more control devices for controlling the teleoperated manipulator assembly 102. The control devices may include any number of various input devices, such as joysticks, trackballs, data gloves, trigger-guns, hand-operated controllers, voice recognition devices, body motion or presence sensors, and/or the like. To provide physician O with a strong sense of directly controlling the instrument 104, the control devices may be provided with the same degrees of freedom as the associated medical instrument 104. In this manner, the control devices provide physician O with telepresence, or the perception that the control devices are integral with the medical instrument 104.

In some embodiments, the control device may have more or fewer degrees of freedom than the associated medical instrument 104 and still provide telepresence for the physician O. In some embodiments, the control device may optionally be a manual input device that moves in six degrees of freedom, and may also include an actuatable handle for actuating an instrument (e.g., for closing a grasping clamp, applying an electrical potential to an electrode, delivering a medication, and/or the like).

Teleoperated manipulator assembly 102 supports medical instrument 104 and may include kinematic structures of one or more non-servo controlled links (e.g., one or more links that may be manually positioned and locked in place, commonly referred to as set-up structures) and teleoperated manipulators. The teleoperated manipulator assembly 102 may optionally include a plurality of actuators or motors that drive input devices on the medical instrument 104 in response to commands from a control system (e.g., control system 112). The actuator may optionally include a drive system that, when coupled to the medical instrument 104, may advance the medical instrument 104 into a natural or surgically created anatomical orifice. Other drive systems may move the distal end of the medical instrument 104 in multiple degrees of freedom, which may include three degrees of linear motion (e.g., linear motion along X, Y, Z cartesian axes) and three degrees of rotational motion (e.g., rotation about X, Y, Z cartesian coordinate axes). Additionally, an actuator may be used to actuate an articulatable end effector of the medical instrument 104 for grasping tissue in a clamp of a biopsy device and/or the like. Actuator position sensors, such as decoders, encoders, potentiometers, and other mechanisms, may provide sensor data describing the rotation and orientation of the motor shaft to the medical system 100. The position sensor data may be used to determine motion of an object manipulated by the actuator.
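
As an illustration of how actuator position sensor data might be turned into instrument motion, the sketch below converts encoder counts on an insertion drive into linear travel. The resolution and lead values are invented for the example and carry no relation to any real system.

```python
COUNTS_PER_REV = 4096  # assumed encoder resolution (counts per shaft revolution)
MM_PER_REV = 2.0       # assumed linear travel per revolution of the drive

def insertion_distance_mm(encoder_counts):
    """Convert actuator shaft rotation, as reported by an encoder,
    into linear insertion travel of the instrument (illustrative only)."""
    revolutions = encoder_counts / COUNTS_PER_REV
    return revolutions * MM_PER_REV

print(insertion_distance_mm(10240))  # 5.0 mm of insertion
```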

The teleoperational medical system 100 may include a sensor system 108 having one or more subsystems for receiving information about the instruments of the teleoperational manipulator assembly 102. Such subsystems may include position/location sensor systems (e.g., Electromagnetic (EM) sensor systems); a shape sensor system for determining a position, orientation, velocity, pose, and/or shape along a distal end and/or one or more segments of a flexible body that may comprise the medical instrument 104; and/or a visualization system for capturing images from the distal end of the medical instrument 104.

The teleoperational medical system 100 also includes a display system 110 for displaying images or representations of the surgical site and the medical instrument 104 generated by the subsystems of the sensor system 108. The display system 110 and the master assembly 106 may be oriented so that physician O can control the medical instrument 104 and the master assembly 106 with the perception of telepresence.

In some embodiments, the medical instrument 104 may have a visualization system (discussed in more detail below) that may include a viewing scope assembly that records a concurrent or real-time image of the surgical site and provides the image to the operator or physician O via one or more displays of the medical system 100, such as one or more displays of the display system 110. The concurrent image may be, for example, a two-dimensional or three-dimensional image captured by an endoscope positioned within the surgical site. In some embodiments, the visualization system includes endoscopic components that may be integrally or removably coupled to the medical instrument 104. However, in some embodiments, a separate endoscope attached to a separate manipulator assembly may be used with the medical instrument 104 to image the surgical site. The visualization system may be implemented as hardware, firmware, software, or a combination thereof that interacts with or is otherwise executed by one or more computer processors, which may include the processors of the control system 112.

The display system 110 may also display images of the surgical site and medical instruments captured by the visualization system. In some examples, teleoperational medical system 100 may configure the controls of the medical instrument 104 and the master assembly 106 such that the relative positions of the medical instrument are similar to the relative positions of the eyes and hands of physician O. In this manner, physician O may manipulate the medical instrument 104 and the manual controls as if viewing the workspace in substantially true presence. By true presence, it is meant that the presentation of the image is a true perspective image simulating the viewpoint of a physician who is physically manipulating the medical instrument 104.

In some examples, the display system 110 may present images of the surgical site recorded preoperatively or intraoperatively using image data from imaging techniques such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), fluoroscopy, thermography, ultrasound, Optical Coherence Tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and/or the like. The pre-operative or intra-operative image data may be presented as two-dimensional, three-dimensional, or four-dimensional (including, for example, time-based or velocity-based information) images and/or as images from a representation (such as a model) created from the pre-operative or intra-operative image dataset.

In some embodiments, the display system 110 may display a virtual navigation image in which the actual position of the medical instrument 104 is registered (i.e., dynamically referenced) with a pre-operative or concurrent image or representation (e.g., a model), typically for the purpose of image-guided surgical procedures. Doing so may present to physician O a virtual image of the internal surgical site from the viewpoint of the medical instrument 104. In some examples, the viewpoint may be from the tip of the medical instrument 104. An image of the tip of the medical instrument 104 and/or other graphical or alphanumeric indicators may be superimposed on the virtual image to assist physician O in controlling the medical instrument 104. In some examples, the medical instrument 104 may not be visible in the virtual image.
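
Rendering the virtual view "from the tip" amounts to building a camera at the registered tip pose. The look-at construction below is a standard graphics technique, shown here as one way such a view could be formed; it is not taken from the disclosure, and the names are hypothetical.

```python
import numpy as np

def tip_view_matrix(tip_position, tip_direction, up=(0.0, 1.0, 0.0)):
    """Build a look-at view matrix placing the virtual camera at the
    registered instrument tip, looking along the tip direction.
    `up` must not be parallel to `tip_direction`."""
    eye = np.asarray(tip_position, dtype=float)
    f = np.asarray(tip_direction, dtype=float)
    f = f / np.linalg.norm(f)                 # forward axis
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)                 # right axis
    u = np.cross(s, f)                        # recomputed up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye         # translate world into camera frame
    return view
```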

In some embodiments, the display system 110 may display a virtual navigation image in which the actual position of the medical instrument 104 is registered with the pre-operative or concurrent image to present to physician O, from an external viewpoint, a virtual image of the medical instrument 104 within the surgical site. An image or other graphical or alphanumeric indicator of a portion of the medical instrument 104 may be superimposed on the virtual image to assist physician O in controlling the medical instrument 104. As described herein, a visual representation of a data point may be rendered to the display system 110. For example, the measurement data points, movement data points, registration data points, and other data points described herein may be displayed on the display system 110 in a visual representation. The data points may be visually represented in the user interface by a plurality of points or dots on the display system 110, or as a rendered representation (e.g., a rendered model), such as a mesh or wire model created based on a set of data points. In some examples, the data points may be color-coded according to the data they represent. In some embodiments, the visual representation may be refreshed in the display system 110 after each processing operation has been implemented to change the data points.

The teleoperational medical system 100 may also include a control system 112. The control system 112 includes at least one memory and at least one computer processor (not shown) for effecting control between the medical instrument 104, the master assembly 106, the sensor system 108, and the display system 110. The control system 112 also includes programming instructions (e.g., a non-transitory machine-readable medium storing instructions) that implement some or all of the methods described in accordance with aspects disclosed herein, including instructions for providing information to the display system 110. While the control system 112 is shown as a single block in the simplified schematic of fig. 1, the system may include two or more data processing circuits, with one portion of the processing optionally being performed on or near the teleoperated manipulator assembly 102 and another portion of the processing being performed at the master assembly 106 and/or the like. The processors of the control system 112 may execute instructions, including instructions corresponding to the processes disclosed herein and described in more detail below. Any of a variety of centralized or distributed data processing architectures may be employed. Similarly, the programming instructions may be implemented as separate programs or subroutines, or they may be integrated into many other aspects of the teleoperational systems described herein. In one embodiment, the control system 112 supports wireless communication protocols such as Bluetooth, IrDA (Infrared Data Association), HomeRF (Home Radio Frequency), IEEE 802.11, DECT (Digital Enhanced Cordless Telecommunications), and wireless telemetry.

In some embodiments, the control system 112 may receive force and/or torque feedback from the medical instrument 104. In response to the feedback, the control system 112 may send a signal to the master control assembly 106. In some examples, the control system 112 may send a signal instructing one or more actuators of the teleoperated manipulator assembly 102 to move the medical instrument 104. The medical instrument 104 may extend to an internal surgical site within the body of the patient P via an opening in the body of the patient P. Any suitable conventional and/or dedicated actuator may be used. In some examples, the one or more actuators may be separate from or integral with the remotely operated manipulator assembly 102. In some embodiments, the one or more actuators and the teleoperated manipulator assembly 102 are provided as part of a teleoperated cart positioned adjacent to the patient P and the surgical table T.

The control system 112 may optionally further include a virtual visualization system to provide navigational assistance to the physician O in controlling the medical instrument 104 during the image-guided surgical procedure. The virtual navigation using the virtual visualization system may be based on a reference to a pre-operative or intra-operative data set of the acquired anatomical passageways. The virtual visualization system processes images of a surgical site imaged using imaging techniques such as, for example, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), fluoroscopy, thermography, ultrasound, Optical Coherence Tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and/or the like. Software, which may be used in conjunction with a manual input device, is used to convert the recorded images into a segmented two-dimensional or three-dimensional composite representation of part or the entire anatomical organ or anatomical region. The image dataset is associated with a composite representation. The composite representation and the image dataset describe various positions and shapes of the channels and their connectivity. The images used to generate the composite representation may be recorded preoperatively or intraoperatively during a clinical procedure. In some embodiments, the virtual visualization system may use a standard representation (i.e., not patient-specific) or a mix of standard and patient-specific data. The composite representation and any virtual images generated by the composite representation may represent a static pose of the deformable anatomical region during one or more motion phases (e.g., during an inhalation/exhalation cycle of the lung).
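
By way of illustration only, the following sketch shows one simple form such a segmentation step could take for CT data, assuming the volume is supplied in Hounsfield units and that a seed voxel inside the trachea is known; the threshold value, the seed, and the function names are assumptions for this example and are not taken from this disclosure.

```python
import numpy as np
from scipy import ndimage

def segment_airways(ct_hu: np.ndarray, seed: tuple) -> np.ndarray:
    """Very simplified airway extraction: air-like voxels that are
    connected to a seed point placed in the trachea.

    ct_hu : 3D volume in Hounsfield units.
    seed  : (z, y, x) voxel index inside the trachea (assumed known).
    """
    air = ct_hu < -950                   # air-like voxels (illustrative threshold)
    labels, _ = ndimage.label(air)       # 3D connected-component labeling
    airway_label = labels[seed]
    if airway_label == 0:
        raise ValueError("seed voxel is not air-like")
    return labels == airway_label        # boolean mask of the connected airway tree
```

Production segmentation pipelines are considerably more elaborate (e.g., region growing with leak detection and surface meshing), but the connectivity idea underlying the composite representation is the same.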

During the virtual navigation procedure, the sensor system 108 may be used to calculate an approximate position of the medical instrument 104 relative to the anatomy of the patient P. This position can be used to generate both a macro-level (external) tracking image of the anatomy of the patient P and a virtual internal image of the anatomy of the patient P. The system may implement one or more electromagnetic (EM) sensors, fiber optic sensors, and/or other sensors to register and display the medical instrument together with preoperatively recorded surgical images. For example, U.S. patent application No. 13/107,562 (filed May 13, 2011) (disclosing "Medical System Providing Dynamic Registration of a Model of an Anatomical Structure for Image-Guided Surgery") discloses one such system, and is incorporated herein by reference in its entirety. The teleoperational medical system 100 may also include optional operational and support systems (not shown), such as an illumination system, a steering control system, an irrigation system, and/or a suction system. In some embodiments, the teleoperational medical system 100 may include more than one teleoperational manipulator assembly and/or more than one master control assembly. The exact number of teleoperational manipulator assemblies will depend on the surgical procedure and the space constraints within the operating room, among other factors. Multiple master control assemblies may be collocated, or they may be located in different locations. Multiple master control assemblies allow more than one operator to control one or more teleoperational manipulator assemblies in various combinations.

Fig. 2A is a simplified diagram of a medical instrument system 200 according to some embodiments. In some embodiments, the medical instrument system 200 may be used as a medical instrument 104 in an image-guided medical procedure performed with the teleoperational medical system 100. In some examples, the medical instrument system 200 may be used for non-teleoperational exploration procedures or procedures involving traditional manually operated medical instruments (such as endoscopy). Optionally, the medical instrument system 200 may be used to collect (i.e., measure) a set of data points corresponding to locations within an anatomical passageway of a patient, such as patient P.

The medical instrument system 200 includes an elongated device 202 coupled to a drive unit 204. The elongate device 202 includes a flexible body 216 having a proximal end 217 and a distal or tip portion 218. In some embodiments, the flexible body 216 has an outer diameter of about 3 mm. Other flexible body outer diameters may be larger or smaller.

The medical instrument system 200 also includes a tracking system 230 for determining the position, orientation, velocity, pose, and/or shape of the distal end 218 and/or of one or more segments 224 along the flexible body 216 using one or more sensors and/or imaging devices, as described in further detail below. The entire length of the flexible body 216 between the distal end 218 and the proximal end 217 may be effectively divided into segments 224. If the medical instrument system 200 is consistent with the medical instrument 104 of the teleoperational medical system 100, the tracking system 230 may optionally be implemented as hardware, firmware, software, or a combination thereof that interacts with or is otherwise executed by one or more computer processors, which may include the processors of the control system 112 in fig. 1.

The tracking system 230 may optionally track the distal end 218 and/or one or more segments 224 using the shape sensor 222. The shape sensor 222 may optionally include an optical fiber aligned with the flexible body 216 (e.g., provided within an internal channel (not shown) or mounted externally). In one embodiment, the diameter of the optical fiber is about 200 μm. In other embodiments, the dimensions may be larger or smaller. The optical fiber of the shape sensor 222 forms a fiber optic bend sensor for determining the shape of the flexible body 216. In one alternative, optical fibers including Fiber Bragg Gratings (FBGs) are used to provide strain measurements in the structure in one or more dimensions. Various systems and methods for monitoring the shape and relative position of an optical fiber in three dimensions are described in U.S. patent application No. 11/180,389 (filed July 13, 2005) (disclosing "Fiber optic position and shape sensing device and method relating thereto"); U.S. patent application No. 12/047,056 (filed July 16, 2004) (disclosing "Fiber-optic shape and relative position sensing"); and U.S. Pat. No. 6,389,187 (filed June 17, 1998) (disclosing "Optical Fibre Bend Sensor"), which are incorporated herein by reference in their entirety. In some embodiments, the sensor may employ other suitable strain sensing techniques, such as Rayleigh scattering, Raman scattering, Brillouin scattering, and fluorescence scattering. In some embodiments, the shape of the flexible body 216 may be determined using other techniques. For example, the history of the distal end pose of the flexible body 216 may be used to reconstruct the shape of the flexible body 216 over a time interval. In some embodiments, the tracking system 230 may alternatively and/or additionally track the distal end 218 using a position sensor system 220. The position sensor system 220 may be a component of an EM sensor system, where the position sensor system 220 includes one or more conductive coils that may be subjected to an externally generated electromagnetic field. Each coil of the EM sensor system 220 then generates an induced electrical signal having characteristics that depend on the position and orientation of the coil relative to the externally generated electromagnetic field. In some embodiments, the position sensor system 220 may be configured and positioned to measure six degrees of freedom (e.g., three position coordinates X, Y, Z and three orientation angles indicating pitch, yaw, and roll of a base point) or five degrees of freedom (e.g., three position coordinates X, Y, Z and two orientation angles indicating pitch and yaw of a base point). A further description of a position sensor system is provided in U.S. Pat. No. 6,380,732 (filed August 11, 1999) (disclosing "Six-Degree of Freedom Tracking System Having a Passive Transponder on the Object Being Tracked"), which is incorporated herein by reference in its entirety.
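
As a hedged illustration of the integration step behind fiber-based shape sensing, the following planar sketch dead-reckons a fiber shape from per-segment curvature values; the conversion of FBG wavelength shifts into curvature, and the full three-dimensional treatment with multiple fiber cores, are omitted, and all names and units here are assumptions for the example.

```python
import numpy as np

def planar_shape_from_curvature(curvatures, seg_len):
    """Reconstruct a 2D fiber shape by integrating per-segment curvature.

    curvatures : curvature of each segment in 1/mm (sign sets bend direction).
    seg_len    : segment length in mm.
    Returns an (N+1, 2) array of (x, y) points from base to tip.
    """
    pts = [np.zeros(2)]
    heading = 0.0                          # tangent angle at the base
    for k in curvatures:
        heading += k * seg_len             # angle accrued over one segment
        step = seg_len * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pts[-1] + step)
    return np.array(pts)
```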

In some embodiments, the tracking system 230 may alternatively and/or additionally rely on historical pose, position, or orientation data stored for known points of the instrument system along a cycle of alternating motion (such as respiration). This stored data may be used to develop shape information about the flexible body 216. In some examples, a series of position sensors (not shown), such as Electromagnetic (EM) sensors similar to the sensors in position sensor 220, may be positioned along flexible body 216 and then used for shape sensing. In some examples, a history of data acquired from one or more of these sensors during a procedure may be used to represent the shape of the elongated device 202, particularly where the anatomical passageways are generally static.

The flexible body 216 includes a passageway 221 sized and shaped to receive the medical instrument 226. Fig. 2B is a simplified diagram of the flexible body 216 with the medical instrument 226 extended, according to some embodiments. In some embodiments, the medical instrument 226 may be used for procedures such as surgery, biopsy, ablation, illumination, irrigation, or suction. The medical instrument 226 may be deployed through the passageway 221 of the flexible body 216 and used at a target location within the anatomy. The medical instrument 226 may include, for example, an image capture probe, a biopsy instrument, a laser ablation fiber, and/or other surgical, diagnostic, or therapeutic tools. The medical tool may include an end effector having a single working member, such as a scalpel, a blunt blade, an optical fiber, an electrode, and/or the like. Other end effectors may include, for example, forceps, graspers, scissors, clip appliers, and/or the like. Other end effectors may further include electrically activated end effectors such as electrosurgical electrodes, transducers, sensors, and/or the like. In various embodiments, the medical instrument 226 is a biopsy instrument that may be used to remove sample tissue or a sampling of cells from a target anatomical location. The medical instrument 226 may also be used with an image capture probe within the flexible body 216. In various embodiments, the medical instrument 226 may be an image capture probe that includes a distal portion having a stereoscopic or monoscopic camera at or near the distal end 218 of the flexible body 216 for capturing images (including video images) that are processed by a visualization system 231 for display and/or provided to the tracking system 230 to support tracking of the distal end 218 and/or one or more of the segments 224. The image capture probe may include a cable coupled to the camera for transmitting the captured image data. In some examples, the image capture instrument may be a fiber optic bundle, such as a fiberscope, that couples to the visualization system 231. The image capture instrument may be single-spectral or multi-spectral, for example capturing image data in one or more of the visible, infrared, and/or ultraviolet spectrums. Alternatively, the medical instrument 226 itself may be the image capture probe. The medical instrument 226 may be advanced from the opening of the passageway 221 to perform the procedure and then retracted back into the passageway when the procedure is complete. The medical instrument 226 may be removed from the proximal end 217 of the flexible body 216 or from another optional instrument port (not shown) along the flexible body 216.

The medical instrument 226 may additionally house cables, linkages, or other actuation controls (not shown) that extend between its proximal and distal ends to controllably bend the distal end of the medical instrument 226. Steerable instruments are described in detail in U.S. patent No. 7,316,681 (filed October 4, 2005) (disclosing "Articulated Surgical Instrument for Performing Minimally Invasive Surgery with Enhanced Dexterity and Sensitivity") and U.S. patent application No. 12/286,644 (filed September 30, 2008) (disclosing "Passive Preload and Capstan Drive for Surgical Instruments"), which are incorporated herein by reference in their entirety.

The flexible body 216 may also house cables, linkages, or other steering controls (not shown) that extend between the drive unit 204 and the distal end 218 to controllably bend the distal end 218, for example, as shown by the dashed depiction 219 of the distal end 218. In some examples, at least four cables are used to provide independent "up-and-down" steering for controlling the pitch of the distal end 218 and "side-to-side" steering for controlling the yaw of the distal end 218. Steerable catheters are described in detail in U.S. patent application No. 13/274,208 (filed October 14, 2011) (disclosing "Catheter with Removable Vision Probe"), which is incorporated by reference herein in its entirety. In embodiments where the medical instrument system 200 is actuated by a teleoperational assembly, the drive unit 204 may include a drive input that is removably coupled to, and receives power from, a drive element (such as an actuator) of the teleoperational assembly. In some embodiments, the medical instrument system 200 may include gripping features, manual actuators, or other components for manually controlling the motion of the medical instrument system 200. The elongate device 202 may be steerable, or alternatively, the system may be non-steerable, with no integrated mechanism for the operator to control the bending of the distal end 218. In some examples, one or more lumens are defined in the wall of the flexible body 216 through which medical instruments may be deployed and used at a target surgical site.
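
By way of illustration, a minimal sketch of the antagonistic four-cable arrangement described above, under a small-angle assumption; the moment-arm value and the sign conventions are assumptions for this example and are not taken from this disclosure.

```python
def cable_displacements(pitch_rad: float, yaw_rad: float,
                        moment_arm_mm: float = 1.5) -> dict:
    """Map commanded tip angles to displacements of four steering cables.

    Opposing cables move antagonistically: one is pulled while its partner
    is released by the same amount (small-angle approximation). Positive
    values pull the cable; negative values release it.
    """
    dp = moment_arm_mm * pitch_rad   # "up-and-down" pair controls pitch
    dy = moment_arm_mm * yaw_rad     # "side-to-side" pair controls yaw
    return {"up": dp, "down": -dp, "left": dy, "right": -dy}
```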

In some embodiments, the medical instrument system 200 may include a flexible bronchial instrument, such as a bronchoscope or bronchial catheter, for use in examination, diagnosis, biopsy, or treatment of a lung. The medical instrument system 200 is also suited for navigation and treatment of other tissues, via natural or surgically created connected passageways, in any of a variety of anatomical systems, including the colon, the intestines, the kidneys and kidney calices, the brain, the heart, the circulatory system (including vasculature), and/or the like.

Information from the tracking system 230 may be sent to a navigation system 232, where it is combined with information from the visualization system 231 and/or pre-operatively obtained representations (e.g., models) to provide real-time positional information to a physician, clinician, surgeon, or other operator. In some examples, the real-time positional information may be displayed on the display system 110 of fig. 1 for use in controlling the medical instrument system 200. In some examples, the control system 112 of fig. 1 may use the positional information as feedback for positioning the medical instrument system 200. Various systems for registering and displaying surgical instruments together with surgical images using fiber optic sensors are provided in U.S. patent application No. 13/107,562, filed May 13, 2011, disclosing "Medical System Providing Dynamic Registration of a Model of an Anatomical Structure for Image-Guided Surgery," which is incorporated herein by reference in its entirety.

In some examples, the medical instrument system 200 may be remotely operated within the medical system 100 of fig. 1. In some embodiments, the teleoperated manipulator assembly 102 of fig. 1 may be replaced by direct operator controls. In some examples, the direct operator controls may include various handles and operator interfaces for handheld operation of the instrument.

Fig. 3A and 3B are simplified diagrams of side views of a patient coordinate space including a medical instrument mounted on an insertion assembly, according to some embodiments. As shown in fig. 3A and 3B, the surgical environment 300 includes a patient P positioned on a platform 302. The patient P may be stationary in the surgical environment in the sense that the overall motion of the patient is restricted by sedation, restraint, and/or other means. Cyclic anatomical motion, including the breathing and cardiac motion of the patient P, may continue unless the patient is asked to hold his or her breath to temporarily suspend the respiratory motion. Thus, in some embodiments, data may be collected at a particular phase of respiration and labeled and identified with that phase. In some embodiments, the phase during which data is collected may be inferred from physiological information acquired from the patient P. Within the surgical environment 300, a point collection instrument 304 is coupled to an instrument carriage 306. In some embodiments, the point collection instrument 304 may use EM sensors, shape sensors, and/or other sensor modalities. The instrument carriage 306 is mounted to an insertion stage 308 fixed within the surgical environment 300. Alternatively, the insertion stage 308 may be movable but have a known position within the surgical environment 300 (e.g., via tracking sensors or other tracking devices). The instrument carriage 306 may be a component of a teleoperational manipulator assembly (e.g., the teleoperational manipulator assembly 102) that couples to the point collection instrument 304 to control insertion motion (i.e., motion along the A axis) and, optionally, motion of a distal end 318 of an elongate device 310 in multiple directions (including yaw, pitch, and roll). The instrument carriage 306 or the insertion stage 308 may include an actuator, such as a servo motor (not shown), that controls movement of the instrument carriage 306 along the insertion stage 308.

The elongate device 310 is coupled to an instrument body 312. The instrument body 312 is coupled to, and fixed relative to, the instrument carriage 306. In some embodiments, a fiber optic shape sensor 314 is fixed at a proximal point 316 on the instrument body 312. In some embodiments, the proximal point 316 of the fiber optic shape sensor 314 may move with the instrument body 312, but the location of the proximal point 316 may be known (e.g., via a tracking sensor or other tracking device). The shape sensor 314 measures the shape from the proximal point 316 to another point, such as the distal end 318 of the elongate device 310. The point collection instrument 304 may be substantially similar to the medical instrument system 200.

A position measurement device 320 provides information about the position of the instrument body 312 as it moves along the insertion axis A on the insertion stage 308. The position measurement device 320 may include a resolver, encoder, potentiometer, and/or other sensor that determines the rotation and/or orientation of the actuator that controls the movement of the instrument carriage 306, and thereby of the instrument body 312. In some embodiments, the insertion stage 308 is linear. In some embodiments, the insertion stage 308 may be curved or have a combination of curved and linear sections.

Fig. 3A shows the instrument body 312 and the instrument carriage 306 in a retracted position along the insertion stage 308. In this retracted position, the proximal point 316 is at a position L0 on the axis A. At this position along the insertion stage 308, the A-axis component of the position of the proximal point 316 may be set to zero and/or another reference value to provide a baseline for describing the position of the instrument carriage 306, and thus the proximal point 316, on the insertion stage 308. With this retracted position of the instrument body 312 and the instrument carriage 306, the distal end 318 of the elongate device 310 may be positioned just inside the entry orifice of the patient P. Also in this position, the position measurement device 320 may be set to zero and/or another reference value (e.g., I=0). In fig. 3B, the instrument body 312 and the instrument carriage 306 have been advanced along the linear track of the insertion stage 308, and the distal end 318 of the elongate device 310 has been advanced into the patient P. In this advanced position, the proximal point 316 is at a position L1 on the axis A. In some examples, encoder and/or other position data from one or more actuators controlling the movement of the instrument carriage 306 along the insertion stage 308, and/or from one or more position sensors associated with the instrument carriage 306 and/or the insertion stage 308, is used to determine the position LX of the proximal point 316 relative to the position L0. In some examples, the position LX may further serve as an indicator of the distance or insertion depth of the distal end 318 of the elongate device 310 into the passageways of the anatomy of the patient P.
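
For illustration, the relationship between the position measurement device reading and the insertion position LX described above might be computed as in the following sketch, assuming an encoder-based actuator; the resolution value and function name are assumptions for the example.

```python
def insertion_depth_mm(encoder_counts: int, zero_counts: int,
                       mm_per_count: float) -> float:
    """Insertion position LX of the proximal point relative to L0.

    zero_counts is latched when the carriage is in the retracted
    position of fig. 3A (I=0); mm_per_count follows from the actuator's
    gear ratio and lead screw pitch (an assumption here, not a value
    from this disclosure).
    """
    return (encoder_counts - zero_counts) * mm_per_count

# e.g., with an assumed resolution of 0.01 mm per count:
# LX = insertion_depth_mm(counts_now, counts_at_L0, 0.01)
```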

In an illustrative application, a medical instrument system, such as the medical instrument system 200, may include a robotic catheter system for lung biopsy procedures. The catheter of the robotic catheter system provides a conduit for tools, such as an endoscope, an endobronchial ultrasound (EBUS) probe, and/or a biopsy tool, to be delivered to locations within the airways where one or more targets of the lung biopsy, such as lesions, nodules, tumors, and/or the like, are present. The endoscope is typically mounted so that a clinician (such as the physician O) can monitor a live camera feed from the distal end of the catheter as it is driven through the anatomy. The live camera feed and/or other real-time navigation information may be displayed to the clinician via a graphical user interface. An example of a graphical user interface for monitoring a biopsy procedure is described in U.S. provisional patent application No. 62/486,879, entitled "Graphical User Interface for Monitoring an Image-Guided Procedure," filed April 18, 2017, which is incorporated by reference herein in its entirety.

Prior to performing a biopsy procedure using the robotic catheter system, a pre-operative planning step may be performed to plan the biopsy procedure. The pre-operative planning step may include segmenting image data (such as a patient CT scan) to create a 3D model of the anatomy, selecting a target within the 3D model, determining airways in the model, growing the airways to form a connected airway tree, and planning a trajectory between the target and the connected tree. One or more of these steps may be performed on the same robotic catheter system used to perform the biopsy. Alternatively or additionally, planning may be performed on a different system (e.g., a workstation dedicated to pre-operative planning). The plan for the biopsy procedure may be saved (e.g., as one or more digital files) and transferred to the robotic catheter system for performing the biopsy procedure. The saved plan may include the 3D model, the identification of the airways, the target locations, the trajectories to the target locations, the routes through the 3D model, and/or the like.
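
By way of illustration only, a saved plan of the kind described above might be serialized as in the following sketch; the file layout and field names are assumptions for this example, not a documented format of this disclosure.

```python
import json

# hypothetical plan structure covering the items listed above:
# model, airways, targets, trajectories, and routes
plan = {
    "patient_id": "ANON-0001",
    "model_file": "airway_model.stl",        # segmented 3D model
    "airways_file": "centerlines.json",      # identified passageways
    "targets": [
        {"name": "target 1",
         "center_mm": [112.4, 87.9, 203.1],
         "radius_mm": 6.0,
         "trajectories": [
             {"exit_point_mm": [108.1, 90.2, 198.6],
              "exit_angle_deg": 45.0}]},
    ],
    "routes": [{"target": "target 1", "branch_ids": [0, 2, 5, 9]}],
}

with open("plan.json", "w") as f:
    json.dump(plan, f, indent=2)
```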

Illustrative embodiments of a graphical user interface for planning medical procedures are provided below, including, but not limited to, the lung biopsy procedures described above. The graphical user interface may include a plurality of modes, including a data selection mode, a hybrid segmentation and planning mode, a preview mode, a save mode, a manage mode, and a review mode. Some aspects of the graphical user interface are similar to the features described in U.S. provisional patent application No. 62/357,217, entitled "Graphical User Interface for Displaying Guidance Information During an Image-Guided Procedure," filed June 30, 2016, and U.S. provisional patent application No. 62/357,258, entitled "Graphical User Interface for Displaying Guidance Information in a Plurality of Modes During an Image-Guided Procedure," filed June 30, 2016, the entire contents of which are incorporated herein by reference.

Fig. 4-9 are simplified diagrams of a graphical user interface 400 that may be displayed on a display system, such as the display system 110 and/or a display system of an independent planning workstation, according to some embodiments. The graphical user interface 400 displays information associated with planning a medical procedure in one or more views viewable by a user, such as the physician O. Although particular views arranged in particular ways are depicted in figs. 4-9, it should be understood that the graphical user interface 400 may display any suitable number of views in any suitable arrangement and/or on any suitable number of screens. In some examples, the number of views displayed simultaneously may be changed by opening and closing views, minimizing and maximizing views, moving views between the foreground and background of the graphical user interface 400, switching between screens, and/or otherwise obscuring views in whole or in part. Similarly, the arrangement of the views, including their size, shape, orientation, ordering (in the case of overlapping views), and/or the like, may vary and/or may be user configurable.

In some examples, the graphical user interface 400 may include one or more headers, footers, sidebars, menus, message boxes, pop-up windows, and/or the like. As depicted in figs. 4-9, the graphical user interface 400 includes a dynamic header 410 that is updated based on the mode of the graphical user interface 400. In various examples, the header 410 can include a drop-down control menu, a page title, navigation controls (e.g., a continue button and/or a back button), patient information, a search bar, and/or the like.

FIG. 4 illustrates the graphical user interface 400 in a data selection mode, according to some embodiments. The data selection mode is used to select a data source, a patient, and/or image data for use in planning a medical procedure. Accordingly, the graphical user interface 400 in the data selection mode may include a data source selector 420, a patient selector 430, and a data selector 440. As depicted in fig. 4, the data source selector 420 includes options for loading data from a USB device, a DVD, and/or a network. It will be appreciated that data may be loaded from various other sources, including external and/or local sources (e.g., a local hard drive). The patient selector 430 includes a list of patients whose image data is available from the selected data source. Various patient attributes may be displayed in the list, including patient name, gender, date of birth, unique patient ID, and/or the like. The data selector 440 includes a list of the image data available for the selected patient from the selected data source. Various attributes of the data may be displayed in the list, including a description of the data, the date the data was acquired, and/or a suitability rating indicating the suitability of the image data for planning a medical procedure. The suitability rating may be qualitative and/or quantitative and may be assigned manually and/or automatically. The rating may be presented as a numerical score, a star rating, a percentile, a symbolic representation, and/or the like. In some examples, the suitability rating may be determined based on the quality of the imaging technique used to acquire the image data. Once the image data is selected, the user may continue planning the medical procedure using the selected image data. For example, the user may click and/or tap a load button of a navigation panel 450 to continue.
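
As one hedged illustration of an automatically assigned suitability rating based on imaging quality, the following sketch scores a CT series from its slice geometry; the criteria and cutoffs are assumptions for this example only, since the disclosure states merely that the rating may reflect the quality of the imaging technique.

```python
def suitability_rating(slice_thickness_mm: float,
                       slice_spacing_mm: float,
                       num_slices: int) -> float:
    """Toy 0-5 star rating of a CT series for planning purposes."""
    stars = 5.0
    if slice_thickness_mm > 1.5:
        stars -= 2.0            # thick slices can hide small airways
    if slice_spacing_mm > slice_thickness_mm:
        stars -= 1.5            # gaps between slices
    if num_slices < 200:
        stars -= 1.0            # possibly incomplete chest coverage
    return max(stars, 0.0)
```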

Fig. 5A-5G illustrate the graphical user interface 400 in a hybrid segmentation and planning mode, according to some embodiments. Segmentation refers to a process of analyzing image data (such as the image data selected in the data selection mode) and creating a 3D model from the data. An example of an automated technique for performing CT data segmentation is described in U.S. patent application No. 14/845,031, entitled "Systems and Methods for Pre-Operative Modeling," which is incorporated by reference herein in its entirety. The segmentation process typically occurs over a period of time (e.g., one to three minutes), which may vary depending on a number of factors, including the quality of the CT image data, the size and/or complexity of the CT image data, the level of detail in the 3D model, the available computing resources, and/or the like. In some examples, the hybrid segmentation and planning mode of the graphical user interface 400 may allow a user to plan a medical procedure based on the image data and/or the 3D model while the segmentation process is occurring and before the 3D model is complete. Thus, the process of planning a medical procedure may be accelerated, as the user can start planning the medical procedure without waiting for a potentially lengthy segmentation process to complete.

In some embodiments, the graphical user interface 400 in the hybrid segmentation and planning mode may be divided into one or more frames. As shown in figs. 5A-5G, the graphical user interface 400 includes a control frame 510 and a canvas frame 520. The control frame 510 provides a set of controls and/or indicators for planning the medical procedure. In some examples, the control frame 510 may provide controls for adding, viewing, modifying, and/or deleting one or more features of the model and/or the plan (such as targets, paths, airways, trajectories, and/or hazards). In some examples, the control frame 510 may provide controls to undo and/or redo the most recent changes to the plan. In some examples, the control frame 510 may provide a segmentation progress indicator 515 that indicates the degree of progress through the segmentation process. The segmentation progress indicator 515 may be formatted as a progress bar, an elapsed time indicator, an estimated remaining time indicator, and/or any other suitable progress indicator. In some embodiments, the segmentation progress indicator 515 may disappear when the segmentation is complete.

In some embodiments, the graphical user interface 400 in the hybrid segmentation and planning mode may include a canvas frame 520. As shown in fig. 5A-5G, the canvas frame 520 provides a workspace 522 for selecting, viewing, and/or interacting with image data and/or model data. Illustrative functions that may be performed via the workspace 522 include adding, modifying, and/or deleting planned features (e.g., targets, paths, and/or hazards), manipulating the 3D model, and/or verifying the accuracy of the segmentation process. To accommodate these functions, workspace 522 may transition between multiple interactive views, including a selection view, one or more image views, and one or more model views.

In some examples, the canvas frame 520 may include a tool selector 524 that provides a list of available tools. As depicted in figs. 5A-5G, the tool list includes a movement tool, a magnifying glass tool, a window/level tool, an object drawing tool, a line drawing tool, a trimming tool, a hazard tool, an angle and/or distance measurement tool, an undo/redo tool, and/or the like. In some examples, certain tools may be enabled and/or disabled based on the current view displayed in the workspace 522. For example, tools not used by the current view may be hidden, grayed out, and/or otherwise made non-selectable. In some examples, clicking on a tool may cause a menu to display a list of sub-tools. For example, the object drawing tool may include sub-tools for drawing various 2D and/or 3D objects, such as free-form objects, predefined 2D shapes (e.g., circles, rectangles, ellipses, etc.), 3D shapes (e.g., spheres, 3D ellipsoids, etc.), and/or the like. In some examples, the tool selector 524 may include tools for semi-automatically detecting objects in the underlying image data (e.g., clicking on a point in the image data and automatically identifying the corresponding object using edge detection techniques). Although the tool selector 524 is depicted as a sidebar, it should be understood that the tool selector 524 may be positioned and/or displayed in various formats (including a palette, a header, a footer, a drop-down menu, an auto-hide menu, and/or the like). In some embodiments, the tool selector 524 may be omitted, such as when tool selection is performed using keyboard shortcuts.

FIG. 5A illustrates an example of a selection view displayed in the workspace 522. In the selection view, a set of selections 531-536 is presented as a thumbnail grid. Each of the selections 531-536 corresponds to a different rendering of the image data and/or the representation (e.g., model) data. The renderings may vary based on their perspective, zoom level, data type, style, and/or the like. For example, renderings of image data may be provided from various perspectives (including axial, sagittal, coronal, and/or virtual endoscopic perspectives). In the example depicted in fig. 5A, the selections 535 and 536, which correspond to renderings of the representation data, display a wait indicator because the segmentation has not yet completed and the representation is not ready for display. On the other hand, the selections 531-534, which correspond to renderings of the image data, are populated with actual image data because the graphical user interface 400 allows image data to be displayed, and/or interactive input to be received, before the segmentation is complete. Upon receiving a selection of a rendering, the selected rendering may be displayed via an interactive window. For example, a user input (such as a click or tap) may be received via an expanded view button 537 of the selected rendering to continue. In some examples, the expanded view button 537 may appear when a user hover and/or hold over the corresponding selection is detected, and may otherwise disappear. While the selections 531-536 are depicted as being arranged in a thumbnail grid, various alternatives are possible, such as a list of selections.

Figs. 5B-5F illustrate examples of an interactive window 541 displayed in the workspace 522. The interactive window 541 displays a rendering selected using the selection view. In some examples, a selection sidebar 542 may be displayed alongside the interactive window 541 to allow the user to change to a different rendering without returning to the selection view. For example, the selection sidebar 542 may display a scrollable bar of thumbnail images that generally correspond to the selections 531-536, with the current selection identified by a blue border.

As depicted in fig. 5B, raw image data 543 (e.g., CT image data) is displayed using a first color palette, such as grayscale, and segmentation data 544 (e.g., detected airways) is displayed in a contrasting color or shade, such as pink. In some examples, the segmentation data 544 may be dynamically updated to reflect the segmentation progress while the segmentation is still in progress. For example, as new airways are detected over time, new pink regions may dynamically appear in the interactive window 541.

In some examples, the interactive window 541 may display one or more features of the plan for the medical procedure, such as targets, paths, and/or hazards. These features may include features based on user input, automatically extracted features, semi-automatically extracted features, and/or the like. According to some embodiments, changes to one or more features made in a particular rendering may be dynamically propagated to the other renderings. For example, an object added in one rendering may automatically appear in the other renderings, including the thumbnail images of the selection sidebar 542.

As depicted in fig. 5B, the features include a target 550 identified using a circle tool on a lesion visible in the underlying image data. The size, shape, and/or position of the target 550 may be adjusted to capture the shape of the lesion with a desired level of accuracy. As the size, shape, and/or position of the target 550 is adjusted, statistics 552 corresponding to the target 550 are updated in the control frame 510. In some examples, the target 550 may be named, renamed, and/or deleted using controls provided in the control frame 510. In some examples, additional targets may be identified after the first target has been identified by detecting corresponding user input via the controls provided in the control frame 510. Additionally or alternatively, the tool selector 524 may include one or more tools for adding, modifying, and/or deleting targets. In some examples, identifying the target 550 in one rendering may automatically cause an updated depiction of the target 550 to appear in the other renderings. Accordingly, the parameters of the target 550 may be adjusted from multiple perspectives by detecting user input via whichever of the available renderings is currently displayed.

In FIG. 5C, the interactive window 541 includes a trajectory 560 between the target 550 and an exit location 562. The exit location 562 corresponds to the point at which a medical instrument leaves the anatomical passageways detected by the segmentation process to reach the target 550. In some examples, the exit location 562 is the point of closest approach to the target 550 from the nearest anatomical passageway. The trajectory 560 represents the trajectory of a medical instrument positioned at the exit location 562 to perform one or more interventional steps at the target 550. For example, the instrument may pierce the wall of the anatomical passageway at the exit location. The medical instrument may include, for example, a biopsy needle, an ablation tool, a chemical delivery system, an ultrasound probe, and/or the like. In some examples, the medical instrument may have a maximum trajectory length. For example, a biopsy needle may not be able to take a biopsy at a target that is more than 3 cm from the exit location 562. Accordingly, when the length of the trajectory 560 is greater than 3 cm, the graphical user interface 400 may display an out-of-range warning 564. In some embodiments, the out-of-range warning 564 may be provided based on a threshold, which may include a fixed threshold and/or a variable threshold that is set based on, for example, the type of tool to be used to access the target. For example, a biopsy needle may provide a different insertion depth than an ablation tool or an imaging probe, in which case the threshold may vary accordingly. In another example, different types of biopsy needles may provide different insertion depths. The user may enter the type of tool being used, or the system may automatically detect the tool. As depicted in fig. 5C, the out-of-range warning 564 is presented as a message in the control frame 510.
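
For illustration, the out-of-range check described above might be implemented as in the following sketch; the 3 cm biopsy-needle limit comes from the example above, while the other tool limit and all names are assumptions for this example.

```python
import numpy as np

# illustrative per-tool maximum trajectory lengths in mm; only the
# 30 mm biopsy-needle figure is taken from the text above
MAX_TRAJECTORY_MM = {"biopsy_needle": 30.0, "ablation_tool": 20.0}

def trajectory_warning(exit_point_mm, target_mm, tool: str):
    """Return an out-of-range warning message, or None if in range."""
    length = np.linalg.norm(np.asarray(target_mm, float)
                            - np.asarray(exit_point_mm, float))
    limit = MAX_TRAJECTORY_MM[tool]
    if length > limit:
        return (f"out of range: trajectory is {length:.1f} mm, "
                f"exceeding the {limit:.1f} mm limit for {tool}")
    return None
```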

In some embodiments, multiple trajectories to a given target may be identified, such as alternative trajectories to be used when the trajectory 560 is found to be unreachable and/or otherwise unsuitable during the medical procedure. Consistent with such embodiments, the control frame 510 may include controls for adding alternative trajectories to the target 550. Additionally or alternatively, the tool selector 524 may include one or more tools for adding, modifying, and/or deleting trajectories.

In figs. 5D-5F, the interactive window 541 includes a hazard barrier 570. The hazard barrier 570 is used to facilitate trajectory planning by identifying vulnerable portions of the anatomy located near the target site. Examples of vulnerable portions of the anatomy may include blood vessels, the lung pleura, bullae, and/or the like. For example, puncturing the lung pleura during a medical procedure may pose a risk of pneumothorax to the patient. Consistent with such embodiments, the exit location and/or the trajectory between the exit location and the target location may be constrained to avoid the vulnerable portions of the anatomy. For example, a trajectory may be invalid when it passes within a threshold distance of a vulnerable portion of the anatomy, disrupts a vulnerable portion of the anatomy, and/or the like. In the example depicted in figs. 5D-5F, the hazard barrier 570 provides a warning so that a portion of the pleura near the target location is protected from puncture by the planned trajectory. As depicted in figs. 5D-5F, the hazard barrier 570 is placed using the hazard tool of the tool selector 524. Additionally or alternatively, hazard barriers may be added, modified, and/or removed using controls presented in the control frame 510.

Each of figs. 5D-5F illustrates a different type of hazard barrier 570. In FIG. 5D, the hazard barrier 570 is shown as a planar hazard barrier with a pair of control points 571 and 572 that define a disk in three dimensions. To convey the three-dimensional aspect of the disk, the portion of the disk that projects out of the plane of the interactive window 541 may be rendered in a solid color, while the portion of the disk that projects into the plane of the interactive window 541 may be rendered in a faded and/or translucent color. In fig. 5E, the hazard barrier 570 is shown as a conical hazard barrier, with a pair of outer control points 573 and 574 that define a three-dimensional disk as the cone base and an apex control point 575 that defines the cone height. In FIG. 5F, the hazard barrier 570 is shown as a hemispherical hazard barrier with three control points 576-578 that define a hemisphere. In figs. 5E and 5F, the interactive window 541 further includes the target 550 and the trajectory 560. When the trajectory 560 connects to an exit location that is not in the plane of the underlying image (i.e., when the trajectory 560 projects into and/or out of the plane of the interactive window 541), a projection 579 is displayed to link the trajectory 560 to the modeled channel.
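
By way of illustration, the following sketch checks a candidate trajectory against a planar (disk-shaped) hazard barrier like the one in fig. 5D by sampling points along the trajectory; the clearance value and sampling density are assumptions for this example.

```python
import numpy as np

def point_disk_distance(p, center, normal, radius):
    """Distance from point p to a filled planar disk (all inputs in mm)."""
    c = np.asarray(center, float)
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    v = np.asarray(p, float) - c
    h = v @ n                              # signed height above the disk plane
    r = np.linalg.norm(v - h * n)          # in-plane distance from the center
    return float(np.hypot(h, max(r - radius, 0.0)))

def trajectory_clears_hazard(exit_point, target, center, normal, radius,
                             clearance_mm=5.0, samples=50):
    """Reject a trajectory that passes within clearance_mm of the barrier."""
    a = np.asarray(exit_point, float)
    b = np.asarray(target, float)
    for t in np.linspace(0.0, 1.0, samples):
        if point_disk_distance((1 - t) * a + t * b,
                               center, normal, radius) < clearance_mm:
            return False
    return True
```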

Various other types of hazards may be identified and marked using suitable indicators, such as the hazard barrier 570. For example, an anatomical passageway may include a bend that is too tight to be traversed by certain medical instruments (such as biopsy needles and/or catheters). Thus, a blockage marker may be used to indicate the bend so that the user knows to plan a different route to the target that avoids the bend. Automatic, manual, and/or semi-automatic techniques may be used to determine whether a planned route includes any bends that are too tight. For example, a bend radius that is too tight may be automatically identified in view of the known physical characteristics of the various medical instruments used in the medical procedure. Additionally or alternatively, the user may visually identify a bend that appears to be too tight and/or may perform measurements to confirm that the bend is too tight. In some examples, candidate routes may be automatically ranked based on user-defined route rules and/or feasibility characteristics of the routes, such as the length of the route, the tightest bend encountered along the route, the width of the passageways along the route, the length of the trajectory between the end of the route and the target, and/or the like; a sketch of such a ranking appears below. The user may then select among the candidate routes based on the ranking.
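
The ranking mentioned above might, for illustration, combine the feasibility characteristics as in the following sketch; the weights and feasibility cutoffs are assumptions for this example, not values from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class CandidateRoute:
    route_length_mm: float
    tightest_bend_mm: float      # smallest bend radius along the route
    corridor_width_mm: float     # narrowest passageway along the route
    trajectory_mm: float         # distance from the exit point to the target

def route_score(r: CandidateRoute,
                min_bend_mm: float = 10.0,
                min_corridor_mm: float = 2.0) -> float:
    """Lower is better; infeasible routes score infinity."""
    if r.tightest_bend_mm < min_bend_mm or r.corridor_width_mm < min_corridor_mm:
        return float("inf")      # the instrument cannot traverse this route
    return r.route_length_mm + 2.0 * r.trajectory_mm

def rank_routes(routes):
    """Return the candidate routes ordered from best to worst."""
    return sorted(routes, key=route_score)
```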

In fig. 5G, the interactive window 541 displays a representation (e.g., model) 580 corresponding to the 3D model generated by segmentation of the image data. In some examples, the representation 580 may not be available until the segmentation is complete. In some examples, when the segmentation is incomplete, a partial version of the model 580 may be displayed and may be updated in real time to reflect the ongoing segmentation progress. In some embodiments, a boundary 581 may be displayed around the model 580 as a semi-transparent pattern, a wire-frame pattern, and/or the like. As depicted in fig. 5G, the boundary 581 corresponds to the lung pleura. The appearance (e.g., color, size, texture, and/or the like) of the boundary 581 may be varied to identify various features. For example, the boundary 581 may be colored red to indicate the location of the hazard barrier 570. Various features may also be depicted in the interactive window 541, including the target 550, the trajectory 560, the exit location 562, and/or the like. According to some embodiments, the interactive window 541 may include an orientation icon 582 to identify the viewing perspective of the model 580 relative to the patient's body. In some examples, the appearance of the channels in the model 580 that lie along the route to the target 550 may be changed to indicate that they are along the route. For example, channels on the route may be colored blue, while other channels in the model 580 may be colored gray.

In some examples, an exit angle selector 583 may be provided in the control frame 510. The exit angle selector 583 provides an adjustment control, such as a slider, to adjust the location of the exit location 562 along the anatomical passageway. Adjusting the location of the exit location 562 causes a corresponding adjustment of the exit angle of the trajectory 560 relative to the anatomical passageway. In some examples, it may be desirable to set the exit angle of the trajectory 560 based on various factors and/or metrics, such as a default or "rule of thumb" exit angle (e.g., 45 degrees), the distance between the exit location 562 and the target 550, and/or the distance between the exit location 562 and the hazard barrier 570. Thus, the exit angle selector 583 may speed up the process of defining the trajectory 560 by allowing the user to quickly test a range of exit angles and confirm that the relevant metrics fall within acceptable ranges. For example, the exit angle selector 583 may display the value of the exit angle (e.g., 37 degrees in the example provided), the distance from the exit location 562 to the target 550 (e.g., 2.4 cm in the example provided), and/or the distance from the exit location 562 to the hazard barrier 570 (e.g., 7.4 cm in the example provided). The appearance (e.g., color, texture, size, font, etc.) of the exit angle selector 583 may be varied to alert the user when one or more of the relevant metrics are not within a predetermined range and/or do not meet a predetermined threshold. In some examples, one or more values of the angle adjustment slider may be disabled when the corresponding metric is determined to be outside of the acceptable range.
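
For illustration, the metrics reported by the exit angle selector 583 might be computed as in the following sketch, assuming the local direction of the passageway at the exit location is available from the model; the function and parameter names are assumptions for this example.

```python
import numpy as np

def exit_metrics(exit_point, channel_tangent, target, hazard_point):
    """Compute the metrics shown by the exit angle selector (inputs in mm).

    channel_tangent is the local direction of the passageway at the exit
    location; how it is estimated from the model is not shown here.
    """
    e = np.asarray(exit_point, float)
    traj = np.asarray(target, float) - e
    t_hat = np.asarray(channel_tangent, float)
    t_hat /= np.linalg.norm(t_hat)
    cos_a = traj @ t_hat / np.linalg.norm(traj)
    return {
        "exit_angle_deg": float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))),
        "target_dist_mm": float(np.linalg.norm(traj)),
        "hazard_dist_mm": float(np.linalg.norm(np.asarray(hazard_point, float) - e)),
    }
```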

FIG. 6 illustrates the graphical user interface 400 in a preview mode, according to some embodiments. The preview mode is used to preview the plan of the medical procedure prepared in the hybrid segmentation and planning mode. The graphical user interface 400 in the preview mode displays a simulated live endoscopic view 610, a matched virtual endoscopic view 620, a global anatomical representation (e.g., model) view 630, and a reduced anatomical representation (e.g., model) view 640. The simulated live endoscopic view 610 and the virtual endoscopic view 620 depict renderings of the representation (e.g., model) from inside an anatomical channel. The renderings are from the perspective of a virtual endoscope following the route of the planned medical procedure. The simulated live endoscopic view 610 and the virtual endoscopic view 620 are substantially similar, except that the simulated live endoscopic view 610 includes realistic details (e.g., blood vessels in the anatomical lumen) to simulate an actual camera feed from an endoscope, while the virtual endoscopic view 620 is augmented with directional cues toward the target location, such as contour lines 621, route lines, arrows, and/or the like. Where an anatomical channel branches, the correct channel may be brightened in the virtual endoscopic view 620 to indicate the direction in which the user should turn. One or more of the simulated live endoscopic view 610 and the virtual endoscopic view 620 may display various trajectory metrics, such as the remaining distance to the target location and/or to a hazard.

Global anatomical model view 630 generally corresponds to the 3D perspective view of model 580 depicted in fig. 5G. As depicted in fig. 6, the global anatomical model view 630 includes a model rendering 631 that includes an anatomical model, a boundary, a plurality of target locations, a hazard barrier, and orientation icons. Anatomical model view 630 further includes a depiction of catheter 632 as a green line. Endoscopic views 610 and 620 provide matching views from the distal end of catheter 632.

In some embodiments, the graphical user interface 400 in the preview mode may display the reduced anatomical model view 640. The reduced anatomical model view 640 provides a simplified overview of the planned route for the medical procedure that includes the critical anatomical features of the route. The route path 641 is represented as a straight line. A depiction of the catheter 632 is overlaid on the route path 641 to indicate the progress of the catheter 632 along the route path 641. The anatomical passageway 643 is rendered as a 2D layered projection to provide a simplified indication of the width of the passageway 643. Branches 644 are rendered to show where they connect to the passageway 643, but other details of the branches 644, such as their various sub-branches, are not rendered. A target icon 645 indicating the exit angle and/or nearby hazards is located at the distal end of the route path 641. When the plan for the medical procedure includes multiple targets and/or paths, a selector 646 is included to switch between the multiple targets and/or paths. Embodiments of reduced anatomical representation (e.g., model) views are further discussed in U.S. provisional patent application No. 62/486,879, which is incorporated herein by reference.

As depicted in fig. 6, the reduced anatomical model view 640 serves as a controller that allows the user to navigate through a preview of the route. In particular, the distal end of the catheter 632 includes a control point 650 that can be dragged back and forth along the route path 641. As the control point 650 is dragged back and forth, the endoscopic views 610 and 620 are updated to reflect the viewpoint of the catheter 632 at the updated position of the control point 650. Further, the depiction of the catheter 632 in the global anatomical model view 630 is updated to reflect the shape of the catheter 632 at the updated position of the control point 650. In the example depicted in fig. 6, the control point 650 is shown as a triangular cone, which represents the projected view of the endoscope from the distal end of the catheter. In alternative embodiments, the control point 650 may have various shapes and sizes.
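
By way of illustration, dragging the control point 650 amounts to evaluating an arc-length parameterization of the route, as in the following sketch; the rendering of the views 610, 620, and 630 at the returned position is not shown, and all names here are assumptions for the example.

```python
import numpy as np

def route_interpolator(waypoints_mm):
    """Return a function mapping a slider value s in [0, 1] to a 3D point
    on the piecewise-linear route, parameterized by arc length."""
    pts = np.asarray(waypoints_mm, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= s[-1]                                           # normalize to [0, 1]

    def at(fraction: float):
        return np.array([np.interp(fraction, s, pts[:, i]) for i in range(3)])
    return at

# dragging control point 650 would call at(fraction) with the slider
# position and redraw the endoscopic and model views at the result
```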

Once the planned route or routes have been previewed, the clinician may proceed to save the plan. For example, the clinician may click and/or tap a next button of the header 410 to continue. Alternatively, the clinician may return to an earlier stage of the planning process to make changes as needed.

FIG. 7 illustrates the graphical user interface 400 in a save mode, according to some embodiments. The save mode is used when the planned medical procedure is complete and/or ready to be transferred to a medical instrument for performing the medical procedure. A set of options 710 is presented via the graphical user interface 400. The options 710 may include a transfer option, a discard option, a delete option, and/or a save option. The save options may include saving the plan locally, saving it to an external device, and/or sending the plan over a network to, for example, a cloud storage facility. One or more of the options may require an external storage device to be connected. For example, the transfer option may require a storage device (e.g., a USB device) that is compatible with the medical instrument to which the plan is to be transferred. Accordingly, a message 720 may be displayed to inform the user of the applicable storage requirements.

FIG. 8 illustrates the graphical user interface 400 in a manage mode, according to some embodiments. The manage mode is used to manage the available plans. The available plans may be stored locally, may be stored on an external drive, and/or may be downloaded over a network. A selection grid 810 is displayed that includes thumbnail views of the plans (e.g., renderings of the representation (e.g., model), the planned routes, the target locations, and/or the like). Additionally or alternatively, the selection grid 810 may include patient data, such as the patient name, date of birth, and/or the like. In some examples, the preview mode of the graphical user interface 400 may be used to reload and view a planned procedure. In some examples, a procedure being planned may be saved at any time and reloaded at a later time to continue working in the hybrid segmentation and planning mode of the graphical user interface 400. Accordingly, the selection grid 810 may include an indication of the status of each plan (e.g., transferred, planned, started, and/or the like) and/or the time at which the plan was last saved. In some examples, a selected plan may be viewed, deleted, and/or transferred. In some embodiments, the selection grid 810 may include an option to create a new plan. When a new plan is selected, the graphical user interface 400 may proceed to the data selection mode previously described with respect to fig. 4.

FIG. 9 illustrates the graphical user interface 400 in a review mode, according to some embodiments. The review mode is used to review records of completed medical procedures. After a medical procedure has been performed using a given plan, a record of the procedure may be saved and transferred to the planning workstation. In some embodiments, the recorded procedure files may include video of live endoscopic images captured during the procedure, the corresponding virtual endoscopic images, anatomical representations (e.g., models) showing the catheter movement during the procedure, notes taken by the clinician during the procedure, and/or the like. Accordingly, a viewer 910 may be displayed that includes playback controls (e.g., play, pause, zoom, and/or the like), snapshot controls, annotation and/or bookmark controls, and/or the like.

Fig. 10 is a simplified diagram of a method 1000 for planning a medical procedure, according to some embodiments. According to some embodiments consistent with fig. 1-9, method 1000 may be used to operate a graphical user interface (such as graphical user interface 400) in multiple modes, including a data selection mode, a hybrid segmentation and planning mode, a preview mode, a save mode, a manage mode, and a review mode. In some embodiments, the graphical user interface is interactive and may receive user input via a mouse, keyboard, touch screen, stylus, trackball, joystick, voice command, virtual reality interface, and/or the like.

At process 1010, data is selected via the graphical user interface in a data selection mode. According to some embodiments, selecting data includes selecting a data source using a data source selector (such as data source selector 420), selecting a patient using a patient selector (such as patient selector 430), and selecting data using a data selector (such as data selector 440). The selection may be confirmed by using a load button on the graphical user interface. The data may include imaging data, such as CT data and/or any other type of imaging or patient data.

At process 1020, a medical procedure is planned via the graphical user interface in a hybrid segmentation and planning mode. According to some embodiments, the data selected at process 1010 includes image data that is segmented to generate an anatomical representation (e.g., a model) based on the extracted channels. While the segmentation is in progress, the medical procedure is planned by receiving user input defining features of the plan (e.g., targets, hazards, and/or paths). In some examples, an interactive window, such as the interactive window 541, may provide an interface for the user to add, modify, and/or delete features of the plan. When the segmentation has progressed such that the representation (e.g., model) is ready for viewing, the interactive window may be used to view and/or interact with the representation (e.g., model). In some examples, the target may not have any extracted channels close enough to draw a valid trajectory (e.g., a trajectory that is shorter than the maximum trajectory length) between the target and an extracted channel. In that case, the user may manually identify nearby channels and add them to the representation (e.g., model). An exemplary method for manually adding a connected channel to a representation (e.g., a model) is described in more detail below with reference to FIG. 11.

At process 1030, the planned medical procedure is previewed via the graphical user interface in a preview mode. According to some embodiments, previewing the medical procedure may include viewing a live simulated endoscopic view (such as live simulated endoscopic view 610), a virtual endoscopic view (such as virtual endoscopic view 620), an anatomical model view (such as anatomical model view 630), and/or a simplified model view (such as simplified model view 640). According to some embodiments, the simplified model view may include control points (such as control point 650) to scroll back and forth in a preview of the medical procedure.

At process 1040, the planned medical procedure is transferred to the medical instrument via the graphical user interface in the save mode. According to some embodiments, transferring the planned medical procedure may include installing a storage device compatible with both the planning workstation and the medical instrument. In some examples, a message may be displayed via the graphical user interface to alert the user when a compatible storage device is required. In some examples, the planned medical procedure may be saved during process 1040. In some examples, the planned medical procedure may be transferred to a robotic catheter system. In some examples, after process 1040, method 1000 may proceed to process 1050 to perform the medical procedure according to the plan.

Fig. 11 is a simplified diagram of a method 1100 for modifying an anatomical model to provide access to a target of a medical procedure, in accordance with some embodiments. According to some embodiments, the method 1100 may be performed in a hybrid segmentation and planning mode using a graphical user interface, such as the graphical user interface 400. In some examples, method 1100 may generally be performed after segmentation is complete and the anatomical model is available for viewing and/or manipulation.

Typically, the channels of interest to the user are those that an instrument navigates from a main channel (such as the trachea) through various branches in succession to an exit point along a channel near the target. In some cases, automatic segmentation may not detect all such channels, so the set of channels connected to the model generated by segmentation is incomplete. When the initial model fails to provide satisfactory access to the target (e.g., when the closest exit point is not within a threshold distance, such as the 3 cm previously described with respect to FIG. 5C), the user may desire to connect one or more channels that were initially unconnected to the model. In some cases, automatic segmentation may also detect channels that are not of interest to the user, such as channels that do not lead to the target. Method 1100 provides an example of a technique for identifying channels of interest that are not detected by automatic segmentation and connecting the unconnected channels to the model. Method 1100 further provides an example of a technique for pruning channels that are not of interest to the user from the model.

At process 1110, the distance between the target and the nearest connected channel is measured. According to some embodiments, the distance may be measured automatically, for example, in response to a user defining the target via the graphical user interface. In some examples, the distance may be measured manually via the graphical user interface by clicking on the target and the nearest connected channel.

At process 1120, it is determined whether the measured distance is greater than a predetermined threshold. In some examples, the predetermined threshold may correspond to the maximum range of a medical tool (such as a biopsy needle) used in the medical procedure. In some examples, the predetermined threshold may be a fixed value, such as 3 cm, and/or may vary based on factors such as the type and/or model of medical tool used. When the measured distance is less than the predetermined threshold, the existing model provides satisfactory access to the target, so the model may be saved and method 1100 may terminate at process 1130. When the distance is greater than the predetermined threshold, method 1100 may proceed to process 1140 to identify unconnected channels near the target and augment the model to include the identified channels, as described in more detail below with reference to FIG. 12. Once an identified channel has been connected to the model, method 1100 may repeat processes 1110 and 1120 until a channel within the predetermined threshold distance of the target has been connected to the model.
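
As a rough illustration of the distance check in processes 1110 and 1120, the sketch below assumes that channel centerlines are sampled as (N, 3) point arrays in millimeters; the function and variable names are hypothetical and not from this disclosure.

```python
import numpy as np

MAX_TRAJECTORY_MM = 30.0  # e.g., the 3 cm threshold described above

def nearest_exit_point(target, centerlines):
    """Return (distance, point) for the sampled centerline point closest
    to the target across all connected channels."""
    target = np.asarray(target, dtype=float)
    best_dist, best_pt = np.inf, None
    for line in centerlines:
        pts = np.asarray(line, dtype=float)
        d = np.linalg.norm(pts - target, axis=1)
        i = int(d.argmin())
        if d[i] < best_dist:
            best_dist, best_pt = float(d[i]), pts[i]
    return best_dist, best_pt

# Example: one connected channel; the target sits 25 mm from its tip.
channel = [[0.0, 0.0, float(z)] for z in range(0, 101)]
dist, exit_pt = nearest_exit_point([0.0, 25.0, 100.0], [channel])
needs_augmentation = dist > MAX_TRAJECTORY_MM  # False here: 25 mm <= 30 mm
```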

In some embodiments, the distance between the target and the nearest connected channel may not be the only consideration in determining whether the model provides sufficient access to the target. In some examples, other factors affecting the suitability of the exit point closest to the target may be considered. These factors may include whether the exit angle from the exit point is satisfactory, the presence of tight-radius bends that must be navigated through the connected airways to reach the exit point, the diameter of the anatomical passageways, and/or potential hazards between the exit point and the target. Different paths through different airways may be selected based on these considerations. Thus, in alternative embodiments, optional processes 1150a, 1150b, and 1150c may be performed to evaluate other factors regarding the selected channel, to determine whether the other factors are satisfactory, and to select an alternative channel when the other factors are not satisfactory.

Fig. 12 is a simplified diagram of a method 1200 for augmenting an anatomical model to provide access to a target of a medical procedure, in accordance with some embodiments. At process 1210, an interactive image is displayed via a graphical user interface, such as graphical user interface 400. The interactive image depicts the image data, the connected channels within the image data, and the target within the image data. At process 1220, user input is received identifying at least a portion of an unconnected channel that is closer to the target than the nearest connected channel. In one example, the method 1200 may be used to augment the anatomical model by identifying detected unconnected passageways adjacent to or near the connected passageways of the model and working gradually in a "forward" manner toward the target. In an alternative example, the method 1200 may be used to augment the anatomical model by identifying detected unconnected passageways adjacent to or near the target and working gradually in a "backward" manner toward the connected passageways of the model.

At process 1230, when a suitable unconnected channel is not identified in the initially displayed interactive image, the interactive image may be searched by iteratively rotating the interactive image and determining whether an unconnected channel becomes visible. At process 1240, a rotation point is defined in the graphical user interface by selecting a point in the image data (e.g., by double-clicking on the point). In some examples, the rotation point is displayed by placing a crosshair over the interactive image. In one example, the rotation point is selected as the point along the connected airways closest to the target. In another example, the rotation point is selected as a point on the target. In other examples, the rotation point may be any point in the interactive image. At process 1250, the interactive image is rotated around the rotation point. In some examples, the interactive image may be rotated 360 degrees in three dimensions around the rotation point. At process 1260, a determination is made whether an unconnected channel is identified in the interactive image. When an unconnected channel is not identified, a new rotation point is selected at process 1270 and the interactive image is rotated around the new rotation point to identify an unconnected channel. Processes 1240-1270 may be repeated until an unconnected channel is identified.

When an unconnected channel is identified in the interactive image at process 1230 or process 1260, user input (e.g., a click and/or tap) identifying the unconnected channel may be received at process 1280, and the unconnected channel is connected to the model. In some embodiments, the unconnected channel may be automatically connected to the model by using segmentation software to trace the channel to a connection point with the model. When process 1280 is complete, method 1200 may return to method 1100 to determine whether the newly connected channel provides satisfactory access to the target. Processes 1110-1150 may be repeated until satisfactory access to the target is achieved.

According to some embodiments, an unconnected channel may be only partially identified as the interactive image is rotated around the rotation point during process 1250. After rotating the interactive image to a state in which the unconnected channel is partially identified, it may be helpful to limit the rotation to rotation around a rotation axis rather than unconstrained 3D rotation around the rotation point. The rotation axis may be defined in the graphical user interface by drawing a line between the rotation point and a second point, such as the target position. Restricting rotation to the rotation axis may enhance usability relative to unconstrained rotation about the rotation point alone. Consistent with such embodiments, searching for unconnected channels may be performed by iteratively repeating processes 1240-1270 using any combination of rotation points (when unconstrained rotation is desired) and rotation axes (when limited rotation is desired).
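
A minimal sketch of the two rotation modes follows, assuming the displayed model is held as an (N, 3) point array: the unconstrained mode applies a per-drag axis through the pivot, while the constrained mode fixes the axis to the line between two user-chosen points. All names are illustrative, not from this disclosure.

```python
import numpy as np

def rodrigues(axis, angle_rad):
    """Rotation matrix about a unit axis through the origin (Rodrigues' formula)."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)

def rotate_about_point(points, pivot, axis, angle_rad):
    """Unconstrained mode: rotate (N, 3) points about an axis through `pivot`;
    the axis may change with every drag gesture, giving full 3D rotation."""
    pivot = np.asarray(pivot, dtype=float)
    R = rodrigues(axis, angle_rad)
    return (np.asarray(points, dtype=float) - pivot) @ R.T + pivot

def rotate_about_fixed_axis(points, pivot, second_point, angle_rad):
    """Constrained mode: the axis is fixed by the line from the rotation
    point to a second point (e.g., the target position)."""
    axis = np.asarray(second_point, dtype=float) - np.asarray(pivot, dtype=float)
    return rotate_about_point(points, pivot, axis, angle_rad)
```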

Starting from a rotation point may provide some advantages over starting from a rotation axis. For example, if the user initially provides a rotation axis and a given unconnected channel is positioned in an orientation orthogonal to the rotation axis, the user will not see any change in the appearance of the unconnected channel when rotating the interactive image; that is, the unconnected channel will appear circular, and it will continue to appear circular as the image is rotated around the selected rotation axis. Using a single rotation point, by contrast, provides 360 degrees of rotation in three dimensions. Thus, regardless of the initial orientation of the unconnected channel, the user will eventually rotate the interactive image in such a way that the unconnected channel becomes visible. In addition, selecting a new rotation axis can be difficult, as the user has no guidance on how to draw the new line. Changing the rotation point, however, is simple, as it requires selecting only a single point. The rotation point may be selected at a position close to the connected channel, and if that rotation point proves to be insufficient, it may be easily moved to the target.

In another example, referring again to process 1140 of FIG. 11, the model may alternatively be augmented when the user draws a line from the closest connected channel to the target. Using automated techniques, structures that appear to correspond to unconnected channels may be detected, and the closest such structure may be connected to the model. The user may iteratively continue to draw a line from the now-closest connected channel to the target, with additional unconnected channels being detected and connected to the model, until the closest connected channel falls within the threshold established in process 1120. In one example, when a channel that is not acceptable to the user is connected to the model, the user may select the airway and prune or delete it from the model.

The model may be pruned at any time during segmentation and/or during model augmentation. In some examples, one or more connected channels may be determined to be irrelevant. For example, a connected channel may be determined to be far from the target and/or otherwise less relevant and/or irrelevant to the purpose of the medical procedure. In some examples, it may be desirable to disconnect and delete unrelated channels from the model. For example, removing the unrelated channels may reduce visual clutter and/or may improve performance (e.g., improve load time, render time, and/or the like) by reducing the size of the model. Accordingly, a pruning tool may be provided to the user via the graphical user interface. For example, the pruning tool may be selected via a tool selector (such as tool selector 524). When the pruning tool is enabled and user input (e.g., a user click and/or tap) identifying an unrelated channel is received, the channel may be disconnected and deleted from the model. In some examples, the channel may be disconnected at a point identified by the user. In some examples, the channel may be disconnected at its closest point of connection to the model. In some examples, the identified channel may be disconnected along with any of its sub-branches. According to some embodiments, pruning may be performed at any time during methods 1100 and/or 1200 and/or as a separate process from methods 1100 and/or 1200.
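
As a minimal sketch of the pruning behavior, assuming the airway tree is stored as a parent-to-children adjacency dict of branch ids, the function below removes an identified branch along with all of its sub-branches; the data layout and names are illustrative.

```python
def prune_branch(children, branch_id):
    """Remove branch_id and every descendant from the tree in place,
    returning the set of removed branch ids."""
    stack = [branch_id]
    removed = set()
    while stack:
        b = stack.pop()
        removed.add(b)
        stack.extend(children.pop(b, []))
    # Detach the pruned branch from its parent, if any.
    for kids in children.values():
        if branch_id in kids:
            kids.remove(branch_id)
    return removed

# Example: pruning "right" also removes its sub-branches "rul" and "rml".
tree = {"trachea": ["left", "right"], "right": ["rul", "rml"],
        "rul": [], "rml": [], "left": []}
removed = prune_branch(tree, "right")  # {'right', 'rul', 'rml'}
```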

Fig. 13 is a simplified diagram of a method 1300 for planning a medical procedure using a graphical user interface, according to some embodiments. According to some embodiments consistent with fig. 4-9, the method 1300 may be performed using the graphical user interface 400 in a hybrid segmentation and planning mode, as depicted in fig. 5A-5G. In some examples, method 1300 and/or various processes thereof may be performed before, during, and/or after segmentation of image data to generate a model.

At process 1210, an interactive window, such as interactive window 541, is provided for the user to create a plan for the medical procedure. The interactive window may be displayed via a display system and interacted with via a user interface such as a mouse, trackball, joystick, touch screen, natural user interface (e.g., voice, gestures), augmented/virtual reality interface, and/or the like. According to some embodiments, the interactive window may be displayed in conjunction with one or more other views, such as a tool selector (e.g., tool selector 524), a selection sidebar (e.g., selection sidebar 542), a control frame (e.g., control frame 510), and/or the like.

At process 1310, the image data is displayed via a graphical user interface. In some examples, the image data may correspond to raw image data (e.g., CT data) of a patient. The image data may be pre-selected in a data selection mode of the graphical user interface. In some examples, the image data may be displayed while the image data is segmented using a background segmentation process. Segmentation data generated by a segmentation process (e.g., airways detected in the image data) may be overlaid on the image data. For example, the image data may be displayed in a first color palette (such as grayscale), and the segmentation data may be displayed in a contrasting color (such as pink). As the segmentation of the image data proceeds, the displayed segmentation data may be updated to reflect the progress of the segmentation.
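
As an illustration of the overlay described above, the sketch below tints detected-airway pixels of a grayscale slice with a contrasting color. The array shapes, tint color, and blending factor are assumptions for the example, not values from this disclosure.

```python
import numpy as np

def overlay_segmentation(ct_slice, airway_mask, tint=(255, 105, 180), alpha=0.5):
    """Blend a boolean segmentation mask over a 2D uint8 grayscale slice,
    returning an RGB image with masked pixels tinted (e.g., pink)."""
    rgb = np.stack([ct_slice] * 3, axis=-1).astype(float)
    color = np.array(tint, dtype=float)
    rgb[airway_mask] = (1.0 - alpha) * rgb[airway_mask] + alpha * color
    return rgb.astype(np.uint8)

# Example: synthetic slice with a square "airway" region tinted pink.
slice_ = (np.random.rand(64, 64) * 255).astype(np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
composited = overlay_segmentation(slice_, mask)
```

As segmentation progresses, re-running such a blend with the latest mask would update the displayed overlay to reflect the segmentation's progress.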

At process 1320, a first user input defining one or more features of the plan within the displayed image data is received. According to some embodiments, the one or more features of the plan may include a target of the medical procedure, a hazard of the medical procedure, and/or the like. In some examples, the target may be defined using a target placement tool of a suitable shape (e.g., a circle tool and/or a 3D ellipse tool) provided by the graphical user interface. In some examples, hazards may be defined using hazard barrier placement tools having suitable shapes (e.g., 3D discs, conical hazard barriers, and/or hemispherical hazard barriers) and/or suitable control points for defining the hazard barriers. Examples of hazards may include vulnerable portions of the anatomy (e.g., the lung pleura, blood vessels, bullae, and/or the heart) and/or excessive bends in the anatomical passageways (e.g., bends that are too tight to accommodate passage of a medical instrument, such as a biopsy needle).

At process 1330, an interactive image is displayed via the graphical user interface. The interactive image includes the image data, the connected anatomical passageways detected by segmentation of the image data, and the one or more features defined during process 1320. The connected anatomical passageways form a tree in which each branch can be reached from a main passageway (such as the trachea). Thus, a medical instrument inserted through the main passageway can access the connected anatomical passageways. A user may interact with the interactive image via a user interface such as a mouse, trackball, joystick, touch screen, natural user interface (e.g., voice, gestures), augmented/virtual reality interface, and/or the like. According to some embodiments, the interactive image may be displayed in conjunction with one or more other views, such as a tool selector (e.g., tool selector 524), a selection sidebar (e.g., selection sidebar 542), a control frame (e.g., control frame 510), and/or the like.

At process 1340, a second user input identifying at least a portion of a trajectory of a medical procedure within the interactive image is received. In some examples, the trajectory may be identified by connecting the target to the closest of the connected anatomical passageways. For example, the received second user input may include a line drawn between the target and the closest channel via a line tool provided by the graphical user interface.

At process 1350, a third user input is received that adjusts one or more aspects of the interactive image based at least in part on the defined trajectory. According to some embodiments, process 1350 may generally correspond to method 1200 for augmenting an anatomical model, in which case the third user input may include one or more of the user inputs received during method 1200. For example, process 1350 may include determining a distance represented by the trajectory, e.g., the distance between the closest channel and the target. Consistent with such an example, adjusting the interactive image may include connecting an unconnected channel to the connected channels when the distance is greater than a predetermined threshold (e.g., 3 cm). The unconnected channel may be connected by receiving a fourth user input identifying an unconnected channel that is closer to the target than the nearest channel and connecting the identified channel using an automated technique. In some examples, adjusting the interactive image may include determining an exit angle (e.g., the angle at which a medical instrument pierces the lumen of the channel when traveling to the target from the channel) based on the trajectory and receiving a third user input manipulating a control provided by the graphical user interface (such as a slider) for changing the position of the exit point along the channel. In some examples, the control may provide continuous control over the position of the exit point and/or may provide a real-time updated metric associated with the selected exit point, such as the corresponding exit angle.
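
As an illustration of the exit-angle metric, the sketch below evaluates, for each candidate exit point along a sampled centerline, the angle between the local channel direction and the trajectory to the target; the 70-degree limit and all names are assumptions, not values from this disclosure.

```python
import numpy as np

def exit_angles(centerline, target):
    """Angle (degrees) between the local channel direction and the
    trajectory to the target, for every sample of an (N, 3) centerline."""
    pts = np.asarray(centerline, dtype=float)
    tangents = np.gradient(pts, axis=0)          # local channel direction
    to_target = np.asarray(target, dtype=float) - pts  # candidate trajectories
    cosang = np.sum(tangents * to_target, axis=1) / (
        np.linalg.norm(tangents, axis=1) * np.linalg.norm(to_target, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Example: straight channel along z; slider positions meeting a 70-degree limit.
centerline = np.array([[0.0, 0.0, z] for z in np.linspace(0.0, 50.0, 51)])
angles = exit_angles(centerline, target=[10.0, 0.0, 45.0])
acceptable_positions = np.flatnonzero(angles <= 70.0)
```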

Fig. 14A-14F are further simplified diagrams of a graphical user interface 400 in a branch marking mode according to some embodiments. In this example, the branch marking mode is applied to the lung anatomy, but in other examples, the graphical user interface 400 may be used to mark any suitable anatomical structure. The lungs include a right lung and a left lung, where each lung is divided into lobes, which in turn can be divided into segments and sub-segments. Within each lung lobe are various anatomical structures, including a set of anatomical passageways 1402, which may include a plurality of branches 1403. In the example of figs. 14A-14F, the branch marking mode may be used to identify and mark the lung lobe to which an individual branch 1403 belongs. In alternative embodiments, the branch marking mode is used to identify and mark the lung, lung segment, and/or sub-segment to which an individual branch 1403 belongs. In the branch marking mode, the graphical user interface 400 provides a mechanism for a user to assign labels to the branches 1403, reflecting the section 1404 of the lung (e.g., the lobe of the lung) to which the respective branch 1403 belongs. In an exemplary embodiment, the branch marking mode is used to help register the airway model to the patient anatomy to provide navigation during an image-guided biopsy procedure. Further details regarding registration can be found in U.S. Provisional Application No. 62/486,879, previously incorporated by reference.

The branch marking mode may operate on a 3D model (such as the model described above) or any other suitable model of the anatomical passageways 1402. The model may be created from imaging data including CT data, MRI data, OCT data, x-ray data, and/or other types of imaging or patient data as previously described. In some embodiments, the 3D model includes the set of anatomical passageways 1402 and other anatomical structures, such as ribs, vessels, tumors, lesions, and/or organs (e.g., the heart). In the illustrated embodiment, the 3D model includes lung branches 1403 and the lung pleura 1405, but further embodiments include any type of suitable anatomical passageway 1402 and surrounding structures. The graphical user interface 400 displays elements of the 3D model and may show or hide various elements (e.g., the lung pleura 1405) to improve clarity.

In some examples, the graphical user interface 400 automates aspects of the labeling process, including label selection. Further, in some examples, the graphical user interface 400 automates aspects of label validation, including identifying unlabeled branches 1403 and/or identifying conflicts in user input. In these and other examples, graphical user interface 400 may speed the planning of a medical procedure by streamlining the process of identifying and labeling branches for the user.

Fig. 14A shows an example of a graphical user interface 400 in the branch marking mode before any branch 1403 has been labeled with a section/lobe 1404. In some embodiments, the graphical user interface 400 includes an interactive window 1406 for viewing, selecting, marking, and/or otherwise interacting with the model of the anatomical passageways 1402. The interactive window 1406 may display images representing the model of the anatomical passageways 1402 and surrounding anatomy (e.g., the lung pleura 1405) as a 3D rendering, a wireframe, a cross-section, and/or other suitable representation. In one such embodiment, the interactive window 1406 represents each branch 1403 by a centerline.

In some embodiments, the graphical user interface 400 includes a labeling tool 1408 for selecting labels to be assigned to branches of the model. The labeling tool 1408 may be represented as a palette, header, footer, sidebar, menu, message bar, drop-down menu, pop-up window, and/or other suitable representation. In some embodiments, the labeling tool 1408 displays a list 1410 of labels to apply. The labeling tool 1408 may use highlighting, color, font, outline, emphasis, and/or other suitable indicators to indicate the currently selected label. The labeling tool 1408 may also display a status indicator 1412 for each label that indicates a status such as whether the label has been applied. Status indicator 1412 may indicate whether a label has been applied to more than one branch; in one such embodiment, status indicator 1412 displays a single check mark to indicate that a label has been applied to a single branch and two check marks to indicate that a label has been applied to more than one branch. Additionally or alternatively, the labeling tool 1408 may display a set of interactive objects 1414 (e.g., buttons, check boxes, radio buttons, text-based buttons, etc.) for setting attributes of the respective labels. For example, the user may select an interactive object 1414 to indicate that the branch corresponding to the label is missing and not present in the model and/or anatomy.

The graphical user interface 400 is operable to receive user input and display a cursor 1416 in response to the user input. When the cursor 1416 is located within the boundaries of the labeling tool 1408, the graphical user interface 400 may receive user input to select a label from the list 1410, activate or deactivate an interactive object 1414, and/or take other suitable action. When the cursor 1416 is positioned within the interactive window 1406, the graphical user interface 400 may receive user input to select branches of the model, manipulate (e.g., rotate, translate, zoom, etc.) the image of the model and/or the image data of the surrounding anatomy, and/or take other suitable actions. In an example, the graphical user interface 400 performs a selection in response to a first mouse button pressed within the interactive window 1406 and rotates the perspective view shown in the interactive window by 90° in response to a second mouse button. When rotating the perspective view, the graphical user interface 400 may also rotate a patient orientation indicator 1417, which represents the perspective of the model and surrounding anatomy displayed in the interactive window 1406 relative to the patient.

When a label is selected and the cursor 1416 is positioned over a branch 1403 in the interactive window 1406, the graphical user interface 400 may indicate that the branch 1403 can be marked by changing the representation of the cursor 1416, changing the representation in the interactive window 1406 of the branch 1403 or of the multiple branches extending from the selected branch, and/or providing other suitable indications. Additionally or alternatively, when a label is selected and the cursor 1416 is positioned in the interactive window 1406 but not on any of the branches 1403, the graphical user interface 400 may indicate that the cursor is not on a branch by changing the representation of the cursor 1416 and/or providing other suitable indications.

Examples of the graphical user interface 400 responding to user input are described with reference to figs. 14B-14F. Fig. 14B shows an example of applying a label to a branch 1403 of the anatomical passageways 1402 via the graphical user interface 400. In some embodiments, the labeling tool 1408 of the graphical user interface 400 automatically selects the first label to apply. The labeling tool 1408 may select the first label from those labels in the list 1410 that have not yet been applied to at least one branch. The labeling tool 1408 may also provide an indication that the first label has been selected, such as those described above. The user may select a different label via the labeling tool 1408 to override the automatic selection. In the example shown in figs. 14A-14F, the labels indicate the lobes of the lungs. It should be understood that the labels may indicate other sections of the lung, including the left or right lung, a lung segment, and/or a lung sub-segment.

The graphical user interface 400 may then receive a user selection of a branch 1403 via the interactive window 1406 and, in response, may assign the selected label to the selected branch 1403. Selecting a single branch 1403 may cause the graphical user interface 400 to identify other branches 1403 connected to the selected branch and assign the label to the multiple branches 1403 as a whole. In some examples, a label indicator 1418, such as a flag, mark, text box, and/or other suitable indicator representative of the assigned label, is displayed in the interactive window 1406 alongside the corresponding branch 1403 or branches 1403. The representation of the branch 1403 or branches 1403 in the interactive window 1406 may be colored, outlined, emphasized, deemphasized, or otherwise modified to indicate that a label has been assigned. The graphical user interface 400 may also update the respective status indicator 1412 in the labeling tool 1408 to indicate that the label has been assigned to at least one branch. In the example shown, the first branch 1403 is selected by the user and labeled "right upper lobe". The graphical user interface 400 highlights all branches connected to the first branch 1403 up to the trunk (e.g., the trachea) with a first color. At this point, all highlighted branches are effectively identified and marked as belonging to the "right upper lobe" section/lobe 1404.

As depicted in fig. 14C, when a label is applied to a branch 1403 or multiple branches 1403, the labeling tool 1408 may automatically select the next label in the list 1410 to be applied. In some embodiments, the labeling tool 1408 selects the next label based on the arrangement of branches in the anatomical passageways 1402. For example, the middle lobe label may be selected after assigning the upper lobe label because the upper lobe is adjacent to the middle lobe in the anatomical passageways 1402. In some embodiments, the labeling tool 1408 selects from those labels in the list 1410 that have not yet been applied to any branch. For example, if the middle lobe label has already been used to mark branches, the lower lobe label may be selected after assigning the upper lobe label. By automatically selecting the next label, in these and other embodiments, the user can continue to select branches without moving the cursor out of the interactive window 1406. A sketch of this selection logic appears below.
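
The hedged sketch below selects the next label as the first label, in anatomical order, that has not yet been applied to any branch. The label list and the applied-count bookkeeping are assumptions for illustration.

```python
# Assumed label list in anatomical order; not taken from this disclosure.
LABELS = ["right upper lobe", "right middle lobe", "right lower lobe",
          "left upper lobe", "left lower lobe"]

def next_label(applied_counts):
    """Return the first label not yet applied to any branch, else None."""
    for label in LABELS:
        if applied_counts.get(label, 0) == 0:
            return label
    return None

counts = {"right upper lobe": 1}
assert next_label(counts) == "right middle lobe"
```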

The process of selecting a label and a branch may be repeated. As explained above in the context of fig. 14B, in one example, the first branch 1403 is selected by the user and labeled "right upper lobe". Thus, all branches connected to the first branch 1403 up to the trunk are also labeled "right upper lobe". In response, the graphical user interface 400 selects the next label, "right middle lobe", as shown in fig. 14C. Referring to fig. 14D, the user selects a second branch 1403 to assign the label "right middle lobe". The graphical user interface 400 assigns the label a second color and changes to the second color all branches 1403 that are descendants of the second branch (e.g., the sub-branches distal to and connected to the second branch), identifying and labeling the descendant branches as belonging to the right middle lobe. In one example, when the second branch 1403 is selected, all descendant branches are highlighted in the second color, and a portion of the proximal/parent connecting branches, up to the previously identified section/lobe 1404, changes color as well. In one embodiment, multiple branches 1403 may overlap, possibly belonging to two separate sections/lobes 1404. Overlapping branches may be highlighted in a single specific color reflecting the overlap. This process is repeated for the "right lower lobe" using a third color to identify and mark the branches distal to a third user-selected branch within the right lower lobe. The process may be repeated again for the "left upper lobe" and the "left lower lobe" until all branches 1403 have been identified and labeled as belonging to a section/lobe 1404, as shown in fig. 14E.

As depicted in fig. 14E, to assist the user, in some embodiments, the graphical user interface 400 indicates in the interactive window 1406 those branches 1403 of the anatomical passageways 1402 that do not yet have a label. The graphical user interface 400 may indicate unlabeled branches 1403 using indicators 1420, such as flags, highlights, contours, colors, line widths, and/or other suitable indicators. In one such example, the graphical user interface 400 displays a question-mark flag with a connector that extends to the unlabeled branch 1403. In some embodiments, the graphical user interface 400 indicates the remaining unlabeled branches for the user's reference upon detecting that each label has been applied to at least one branch 1403. When the user selects a label and selects an unlabeled branch, the label may be applied to the branch 1403 and/or the section/lobe 1404 to which it belongs, and the indicator 1420 may be removed, as shown in fig. 14F. In examples where unlabeled-branch detection is performed once each label has been applied to at least one branch, applying a label to an unlabeled branch causes that label to be applied to more than one branch. Accordingly, the graphical user interface 400 may update the corresponding status indicators. In the example of fig. 14E, the status indicator 1412A is updated to display a pair of check marks because the left upper lobe label has been applied to more than one branch. In another example in which unlabeled-branch detection is performed once all labels have been applied to at least one branch, the graphical user interface 400 removes the unlabeled-branch indicators 1420 when a label is deleted from a branch and it is no longer the case that all labels have been applied to at least one branch.
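
A minimal sketch of this unlabeled-branch detection follows, under an assumed branch-to-label dict layout; the names are illustrative.

```python
def unlabeled_branches(branch_labels, all_labels):
    """Return ids of branches still lacking a label, but only once every
    label has been applied to at least one branch (as described above)."""
    used = {lab for lab in branch_labels.values() if lab is not None}
    if not set(all_labels) <= used:
        return []  # labeling still in progress; defer the indicators
    return [b for b, lab in branch_labels.items() if lab is None]

all_labels = ["upper lobe", "lower lobe"]
labels = {"b1": "upper lobe", "b2": "lower lobe", "b3": None}
assert unlabeled_branches(labels, all_labels) == ["b3"]
```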

Fig. 15A and 15B are simplified diagrams of a method 1500 for planning a medical procedure according to some embodiments. According to some embodiments consistent with fig. 14A-14F, method 1500 may be used to operate a graphical user interface, such as graphical user interface 400, in a branch marking mode.

At process 1501 of fig. 15A, imaging data, such as CT data of a patient, is received. At process 1502, a model of the patient anatomy is provided from the imaging data. The model may include a set of anatomical passageways 1402 and/or other anatomical structures, such as organs, vessels, tumors, lesions, and the like. In particular, the anatomical passageways 1402 may include branches 1403 and/or other suitable structures.

At process 1504, an image of the model is displayed via the graphical user interface 400. According to some embodiments, an image of the model is displayed in an interactive window 1406, the interactive window 1406 including a representation of the anatomical passageway 1402 and surrounding anatomy (e.g., lung pleura 1405). These elements of the surrounding anatomy may be displayed or hidden separately to provide a reference frame and improve clarity.

At process 1506, a first label is selected. The first label may be selected automatically and/or in response to a first user input received by the labeling tool 1408 of the graphical user interface. User input may be provided via any suitable input mechanism, including a mouse, keyboard, touch, stylus, trackball, joystick, voice command, virtual reality interface, and/or the like. At process 1508, an indication that the first label has been selected is displayed by the labeling tool 1408 of the graphical user interface.

At process 1510, the graphical user interface 400 may receive a second user input selecting a first branch 1403 for the first label. The second user input may select the first branch 1403 via the interactive window 1406 displaying the model. At process 1511, in some examples, the graphical user interface 400 identifies other branches 1403 connected to the first branch 1403 so that multiple branches 1403 may be labeled in a single process. In one example, this includes identifying those branches 1403 that are descendants of the first branch 1403 (e.g., the sub-branches distal to the first branch) and including the descendants in the section/lobe 1404. In this manner, the label may propagate downstream from the selected branch. In another example, this includes identifying the ancestor branches 1403 (e.g., the parent branch, grandparent branch, etc., proximal to the first branch) up to the main branch. The main branch may be, in the initial case, the trachea, or otherwise a previously identified section/lobe 1404. In this manner, the label may be propagated upstream so that the user can mark a subtree without having to select the root of the subtree. A sketch of this propagation appears below.
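
The hedged sketch below collects the branches that would receive the label under assumed data structures: downstream to all descendants of the selected branch, and upstream through unlabeled ancestors until an already-labeled section or the tree root is reached. The exact boundary behavior at the main branch is an assumption.

```python
def branches_to_label(parent, children, labels, selected):
    """Collect the selected branch, its descendants, and its unlabeled
    ancestors. `parent` maps child -> parent; `children` is the reverse."""
    group = set()
    stack = [selected]
    while stack:                       # downstream propagation
        b = stack.pop()
        group.add(b)
        stack.extend(children.get(b, []))
    up = parent.get(selected)
    while up is not None and labels.get(up) is None:  # upstream propagation
        group.add(up)
        up = parent.get(up)
    return group

# Example: selecting "rul" also gathers its unlabeled ancestors.
parent = {"left": "trachea", "right": "trachea", "rul": "right"}
children = {"trachea": ["left", "right"], "right": ["rul"]}
group = branches_to_label(parent, children, {}, "rul")
# {'rul', 'right', 'trachea'} under these assumptions
```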

At process 1512, the graphical user interface 400 determines whether the label conflicts with a previously applied label. For example, the user may label a sub-branch with the right upper lobe label. If the sub-branch has a parent branch that was previously marked with the left upper lobe label, the algorithm may identify a conflict between the sub-branch and the parent branch. The graphical user interface 400 may refuse to apply the current label and indicate the conflict by highlighting the current label, the currently selected branch, the previous label, and/or the previously selected branch. The user may then be presented with an option to correct the previous label or the current label. Once any conflicts are resolved, at process 1513, the first branch 1403 and any other branches 1403 identified in process 1511 are marked with the first label.
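
The disclosure does not specify the exact conflict rule, so the sketch below shows one plausible, assumed rule matching the example above: a labeled ancestor of opposite laterality (a right-lobe label under a branch already marked as a left lobe) is flagged. All names and the rule itself are illustrative.

```python
def side(label):
    """Laterality prefix of a lobe label, e.g., 'right' in 'right upper lobe'."""
    return label.split()[0]

def find_conflict(parent, labels, branch, new_label):
    """Walk up the tree; return (ancestor, label) for the first labeled
    ancestor whose laterality differs from the label being applied."""
    up = parent.get(branch)
    while up is not None:
        existing = labels.get(up)
        if existing is not None and side(existing) != side(new_label):
            return up, existing
        up = parent.get(up)
    return None

parent = {"sub": "trunkA"}
labels = {"trunkA": "left upper lobe"}
conflict = find_conflict(parent, labels, "sub", "right upper lobe")
# ('trunkA', 'left upper lobe'): reject the label and highlight, per the text
```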

At process 1514, the graphical user interface displays a representation of the first label applied to the first branch 1403 and/or its respective section/lobe 1404. In some such examples, the graphical user interface displays a flag, mark, text box, and/or other suitable indicator in interactive window 1406 to represent the first tab. In some examples, the representation of the first branch 1403 and/or its section/lobe 1404 in the interactive window 1406 is colored, outlined, emphasized, deemphasized, or otherwise modified to indicate that a label has been assigned.

Referring now to FIG. 15B, at process 1516, a determination is made as to whether the labeling tool 1408 has an additional unassigned label. If so, at process 1518, the next label is selected. The next label may be selected automatically from the set of unassigned labels based on the arrangement of the branches 1403 in the anatomical passageways 1402 and/or other suitable criteria. The next label may also be selected in response to user input received by the labeling tool 1408 of the graphical user interface. In some such examples, the user selection overrides the automatic selection. At process 1520, an indication that the next label has been selected is displayed by the labeling tool 1408 of the graphical user interface. At process 1522, the graphical user interface may receive a user input selecting a branch of the anatomical passageways to label. In response, at process 1524, the selected branch is marked with the label. At process 1526, the graphical user interface displays a representation of the label applied to the branch. Processes 1518-1526 may be performed substantially similarly to processes 1506-1514.

At process 1528, the graphical user interface may identify branches 1403 of the anatomical passageways 1402 that have not been assigned a label. This may be performed when it is determined in process 1516 that each label has been assigned to at least one branch. At process 1530, the graphical user interface displays an indicator, such as a flag, highlight, outline, color, line width, and/or other suitable indicator, that identifies each unlabeled branch in the interactive window 1406. At process 1532, user input is received selecting a label. This may be performed substantially similarly to processes 1506 and/or 1518. At process 1534, the selected label is applied to the unlabeled branch and its corresponding section/lobe, and at process 1536, the graphical user interface displays a representation of the label applied to the branch and/or section/lobe. Processes 1534-1536 may be performed substantially similarly to processes 1511-1514. This may be repeated until a label is assigned to every branch.

Fig. 16 is a simplified diagram of a method 1600 for planning a medical procedure, according to some embodiments. Fig. 17A-17P are corresponding diagrams of a graphical user interface 400 during performance of a method 1600 according to some embodiments. According to some embodiments consistent with fig. 1-15B, graphical user interface 400 may include features generally corresponding to similar features depicted in fig. 1-15B, such as interactive header 410, control frame 510, canvas frame 520, workspace 522, tool selector 524, interactive window 541, selection sidebar 542, image data 543, segmentation data 544, and/or the like. In some embodiments, graphical user interface 400 may include a mode selector 1712 (illustratively positioned within interactive header 410) and/or a view selector 1714 (illustratively positioned at a lower portion of interactive window 541) for enabling and/or disabling various features of graphical user interface 400, including various features used during execution of method 1600.

At process 1610, a target 1720 of a medical procedure is added via the graphical user interface 400. An illustrative screenshot corresponding to process 1610 is depicted in fig. 17A and 17B. In some embodiments, target 1720 may generally correspond to target 550. As depicted in fig. 17A, during the addition of a target 1720, a new target menu 1722 may be displayed in the control frame 510. The new target menu 1722 may display instructions for identifying, placing, adjusting, and/or otherwise configuring the target 1720, controls for confirming and/or canceling the addition of the target 1720, and/or the like. When the target 1720 is added, target data 1724 may be displayed in the control frame 510. As depicted in fig. 17B, the target data 1724 can include metrics (e.g., size metrics) associated with the target 1720, controls (e.g., controls for deleting, renaming, and/or editing the target 1720), and/or the like.

At process 1620, the operator may optionally zoom to the target 1720 (and/or other portions of the image data 543) via the graphical user interface 400. Illustrative screenshots corresponding to process 1620 are depicted in figs. 17C-17E. The graphical user interface 400 may provide one or more options for zooming to the target 1720, including non-synchronized zooming, synchronized zooming, auto-zooming, and/or the like. Fig. 17C depicts zooming performed when the synchronized zoom feature of the graphical user interface 400 is not enabled, and fig. 17D depicts zooming performed when the synchronized zoom feature is enabled. When the synchronized zoom feature is not enabled, zooming in and/or out of the image displayed in the interactive window 541 is not accompanied by a matching zoom effect in the thumbnail images displayed in the selection sidebar 542. Conversely, when the synchronized zoom feature is enabled, zooming in and/or out of the image displayed in the interactive window 541 may be accompanied by a matching zoom effect in the thumbnail images displayed in the selection sidebar 542. Fig. 17E depicts zooming performed using the auto-zoom feature of the graphical user interface 400. The auto-zoom feature causes the target 1720 to be automatically centered and/or enlarged in the image displayed in the interactive window 541. The auto-zoom feature may or may not be accompanied by a matching zoom effect in the thumbnail images displayed in the selection sidebar 542.

At process 1630, the operator may optionally edit the target 1720. An illustrative screenshot corresponding to process 1630 is depicted in fig. 17F. As depicted in fig. 17F, during editing of the target 1720, an edit target menu 1726 may be displayed in the control frame 510. The edit target menu 1726 can display instructions for editing the target 1720, controls for confirming and/or canceling edits to the target 1720, and/or the like. In some embodiments, the operator may modify a property (e.g., size, orientation, and/or the like) of the target 1720 via the interactive window 541 and/or via the selection sidebar 542.

At process 1640, a path 1730 to the target 1720 is identified via the graphical user interface 400. Illustrative screenshots corresponding to process 1640 are depicted in figs. 17G-17I. As shown in fig. 17G, path data 1732 corresponding to the selected path may be displayed in the control frame 510. The path data 1732 may include metrics associated with the path 1730 (e.g., the distance between the endpoint of the path and the target 1720, the exit angle at the endpoint of the path, and/or the like), controls (e.g., controls for deleting, renaming, and/or editing the path 1730), and/or the like. In some embodiments, one or more alerts associated with the path may be displayed via the graphical user interface 400. For example, as depicted in fig. 17G, an alert 1734 is displayed when the distance between the endpoint of the path and the closest point of the target 1720 exceeds a predetermined threshold. Similarly, as depicted in fig. 17H, an alert 1736 is displayed when the exit angle at the endpoint of the path exceeds a predetermined threshold. In some embodiments, an endpoint slider 1738 may be displayed in the control frame 510 to allow the operator to adjust the position of the endpoint along the path. In this regard, the operator may determine a position of the endpoint that meets the predetermined thresholds associated with the distance to the target 1720 and/or the exit angle.

At process 1650, one or more channels are optionally lengthened via the graphical user interface 400. For example, one or more channels may be lengthened when an acceptable path to the target 1720 is not identified at process 1640. Illustrative screenshots corresponding to process 1650 are depicted in figs. 17J-17M. As depicted in fig. 17J, an instruction panel 1740 may be displayed in the control frame 510 to provide instructions for extending one or more channels. As depicted in fig. 17K, the operator may draw a path extension 1742 via the interactive window 541. The path extension 1742 may then be rendered as depicted in fig. 17L. In some embodiments, the path extension 1742 may be rendered in a different color, texture, pattern, and/or the like to distinguish the path extension 1742 from the portion of the path determined by segmentation. As depicted in fig. 17M, the updated path to the target 1720 may meet the predetermined thresholds associated with the distance to the target 1720 and/or the exit angle, such that alerts 1734 and/or 1736 are no longer displayed with the path data 1732. A sketch of the corresponding bookkeeping appears below.
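
To make the bookkeeping concrete, the hedged sketch below appends a user-drawn extension to a segmented centerline and re-evaluates the distance and exit-angle alerts; the threshold values, geometry, and names are all assumptions, not values from this disclosure.

```python
import numpy as np

MAX_DIST_MM, MAX_ANGLE_DEG = 30.0, 70.0  # assumed alert thresholds

segmented = np.array([[0.0, 0.0, z] for z in np.linspace(0.0, 40.0, 41)])
drawn = np.array([[0.0, 2.0, 44.0], [0.0, 4.0, 48.0]])  # user-drawn extension
extended = np.vstack([segmented, drawn])  # rendering may tint `drawn` differently

target = np.array([0.0, 6.0, 52.0])
to_target = target - extended[-1]
dist = np.linalg.norm(to_target)                 # ~4.5 mm in this example
tangent = extended[-1] - extended[-2]            # direction at the new endpoint
cosang = tangent @ to_target / (np.linalg.norm(tangent) * dist)
angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
show_distance_alert = dist > MAX_DIST_MM     # False here
show_angle_alert = angle > MAX_ANGLE_DEG     # False: extension points at target
```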

At process 1660, the plan for the medical procedure is reviewed via the graphical user interface 400. An illustrative screenshot corresponding to process 1660 is depicted in fig. 17N. The features depicted in fig. 17N generally correspond to the features of the preview mode of the graphical user interface 400 depicted in fig. 6. Consistent with these embodiments, fig. 17N depicts a virtual endoscopic view 620, a global anatomical model view 630, and a simplified model view 640. As depicted in fig. 17N, the global anatomical model view 630 includes controls, for example, for allowing the operator to pan, zoom, and/or rotate the model view and/or select among different types of images (e.g., CT images). Likewise, the simplified model view 640 includes controls for starting and/or pausing playback of the planned medical procedure, exporting the current plan, and/or the like. In some embodiments, an information panel 1750 may be displayed to provide information associated with the plan, such as a metric associated with the distance to the target 1720.

Some examples of a control unit, such as control unit 130, may include a non-transitory, tangible, machine-readable medium comprising executable code that, when executed by one or more processors (e.g., processor 140), may cause the one or more processors to perform the processes of methods 1000, 1300, and/or 1500 and to render graphical user interface 400. Some common forms of machine-readable media that may include the processes of methods 1000, 1300, and/or 1500 and/or the instructions for rendering graphical user interface 400 are, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.

While illustrative embodiments have been shown and described, a wide range of modifications, changes, and substitutions is contemplated in the foregoing disclosure and, in some instances, some features of the embodiments may be employed without a corresponding use of the other features. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Accordingly, the scope of the invention should be limited only by the attached claims, and it should be understood that the claims should be interpreted broadly, in a manner consistent with the scope of the embodiments disclosed herein.
