Continuous guidewire identification

Document No.: 473715    Publication date: 2021-12-31

Reading note: This technology, Continuous guidewire identification, was designed and created by V·M·A·奥夫雷 and R·弗洛朗 on 2020-03-25. Its main content is as follows: a system (IPS) for supporting image-based navigation, comprising: an input interface (IN) for receiving one or more input images acquired by an X-ray imager (IA) when two or more medical devices (GW1, GW2) are present in a field of view (FoV) of the X-ray imaging apparatus; an image identifier (ID) that identifies the two or more medical devices based on image information in the one or more images; a tagger (TGG) that associates a respective unique tag with each of at least two of the two or more medical devices (GW1, GW2) so identified; and a graphic display generator (GDG) that enables displaying the one or more input images in a video feed on a display device (DD), wherein the tag is included in the video feed.

1. A system (IPS) for supporting image-based navigation, comprising:

an input Interface (IN) for receiving one or more input images acquired by an X-ray Imager (IA) when two or more medical devices (GW1, GW2) are present simultaneously or at one or more different times in a field of view (FoV) of the X-ray imaging apparatus;

an image Identifier (ID) configured to identify the two or more medical devices based on image information in the one or more images, the image Identifier (ID) thereby enabling the two or more devices to be distinguished;

a Tagger (TGG) configured to associate a respective tag with each of at least two of the two or more medical devices (GW1, GW2) so identified; and

a Graphical Display Generator (GDG) configured to display the one or more input images in a video feed on a Display Device (DD), wherein the tag is included in the video feed;

wherein at least one further input image is received at the input Interface (IN) and the Graphical Display Generator (GDG) is configured to display the one or more further input images on the Display Device (DD) as part of the video feed only when the image Identifier (ID) re-identifies at least one of the medical devices (GW1, GW2) in the at least one or more further input images, thereby comprising the same tag as the tag with which the at least one medical device was associated earlier by the Tagger (TGG).

2. The system according to claim 1, wherein the Identifier (ID) comprises a Detector Module (DM) configured to detect emission footprints in the input image due to the presence of the two or more devices in the field of view (FoV).

3. The system according to claim 1 or 2, wherein the Tagger (TGG) is configured to implement a clustering algorithm to process the detected emission footprints into clusters, each cluster corresponding to one of the two or more medical devices.

4. The system according to claim 2 or 3, wherein the Tagger (TGG) is capable of changing the number of clusters depending on whether a new medical device is present or at least one of the two or more medical devices is removed.

5. The system according to any of the preceding claims 2-4, wherein the associating operation by the Tagger (TGG) is based on one or more features including any one or more of the following, alone or in combination: a respective shape of the detected emission footprints, a current respective image position of the detected emission footprints, a current anatomical position of a respective medical device.

6. The system of any one of the preceding claims, wherein at least two of the two or more input images or the at least one further input image are acquired in different imaging geometries.

7. The system according to any of the preceding claims 2-6, wherein the Tagger (TGG) is configured to selectively adjust the influence of one or more of the one or more features on the associating operation according to a change of an imaging geometry of the X-ray Imager (IA).

8. The system of any one of the preceding claims, wherein the input image is a projection image.

9. The system of any one of the preceding claims, wherein at least one of the two medical devices is a guidewire.

10. An Arrangement (AR) comprising: a system (IPS) according to any one of the preceding claims; and the Imaging Arrangement (IA) and/or the Display Device (DD).

11. A method for supporting image-based navigation, comprising:

receiving (S410) one or more input images acquired by an X-ray Imager (IA) when two or more medical devices (GW1, GW2) are present simultaneously or at one or more different times in a field of view (FoV) of the X-ray imaging apparatus;

identifying (S420) the two or more medical devices based on image information in the one or more images, the image Identifier (ID) thereby enabling the two or more devices to be distinguished;

associating (S430) a respective unique tag with each of at least two of the two or more medical devices (GW1, GW2) so identified; and

displaying (S440) the one or more input images in a video feed on a Display Device (DD), wherein the tag is included in the video feed;

the method further comprises the following steps:

receiving at least one further input image; and

displaying the one or more further input images on the Display Device (DD) as part of the video feed only when at least one of the medical devices (GW1, GW2) is re-identified in the at least one further input image, thereby including the same tag as the tag earlier associated with the at least one medical device.

12. A computer program element, which, when being executed by at least one Processing Unit (PU), is adapted to cause the at least one Processing Unit (PU) to carry out the method according to claim 11.

13. A computer readable medium having stored thereon the program element of claim 12.

Technical Field

The present invention relates to a system for supporting image based navigation, a method for supporting image based navigation, a computer program element and a computer readable medium.

Background

In certain medical interventions (e.g., Percutaneous Coronary Intervention (PCI)), a clinician sometimes needs to introduce several (e.g., three or more) medical devices or tools (e.g., guidewires) into a patient. For example, a clinician may begin treating a stenosis in one branch of a coronary artery and then switch to another branch, where the bifurcation must be treated with two guidewires before switching back to the previous branch. In other use cases, the clinician may use antegrade and retrograde guidewires to treat Chronic Total Occlusions (CTOs). These and similar interventions sometimes require the presence of multiple devices at the lesion.

During these difficult and demanding interventions, the clinician needs to switch from manipulating one guidewire to another, either because he or she decides to focus on a different part of the vessel or because he or she wishes to use a guidewire with different physical properties.

The guidewires may extend outside the patient, forming a collection of guidewires lying on the examination table. A fluoroscopy system may be used to assist the clinician with real-time imaging. The real-time video feed (moving picture) may be displayed on a display device on site in the catheter lab. The video feed reveals the emission footprints of the introduced devices. While the clinician may know which device he or she wishes to manipulate by looking at the video feed, it may not be obvious which device corresponds to which footprint shown in the video feed.

For example, the clinician may end up picking up the wrong guidewire. Such a false pick-up is a highly undesirable situation. Early positioning of some guidewires can be difficult. The clinician may have spent several minutes trying to pass through the stenosis or bifurcation or to find a fairly stable "parked" position, only to see that perfect parked position lost because of the wrong pick-up. In particular, if the clinician does inadvertently manipulate the wrong guidewire when beginning the next stage of the intervention, he or she may inadvertently remove that guidewire. Such unfortunate events (even if relatively rare) are very frustrating: the cumbersome positioning must then be performed again from scratch.

Some operators may intentionally shape (e.g., twist) the tip of a guidewire prior to insertion so as to obtain a distinguishing shape "signature". But manually fabricating such features in this manner can be cumbersome and time consuming, and may not fit easily into the clinical workflow.

Disclosure of Invention

Therefore, there may be a need to improve the support for image-based navigation and to address, at least in part, the above-mentioned drawbacks.

The object of the invention is solved by the subject matter of the independent claims, wherein further embodiments are comprised in the dependent claims. It should be noted that the following described aspects of the invention apply equally to the method, the computer program element and the computer readable medium.

According to a first aspect of the present invention, there is provided a system for supporting image-based navigation, comprising:

an input interface for receiving one or more input images acquired by an X-ray imager when two or more medical devices are present in a field of view of the X-ray imaging apparatus simultaneously or at one or more different times;

an image identifier configured to identify the two or more medical devices based on image information in the one or more images, the image identifier thereby enabling distinguishing between the two or more devices;

a tagger configured to associate a respective tag with each of the at least two of the two or more medical devices so identified; and

a graphical display generator configured to enable display of the one or more input images in a video feed on a display device, wherein the tag is included in the video feed;

wherein at least one further input image is received at the input interface and the graphical display generator is configured to display the one or more further input images on the display device as part of the video feed only when the image identifier re-identifies at least one of the medical devices in the at least one further input image, thereby comprising the same tag as the tag with which the at least one medical device was associated earlier by the tagger.

In other words, the tagging by the herein proposed tagger is such that any given one of the two or more devices has its own such tag and maintains the same tag throughout the video feed or at least a portion thereof.

The detector may use an image structure filter with thresholding or a machine learning algorithm. The detector may include a segmentation operation, but in some cases it may be sufficient to find a boundary region such as a bounding box.

Not all devices need to be present in a single input image, but their presence may also be distributed over more than one input image.

According to an embodiment, the identifier comprises a detector module configured to detect emission footprints in the input image due to the presence of the two or more devices in the FoV.

According to one embodiment, the tagger is configured to implement a clustering algorithm to process the detected emission footprints into clusters, each cluster corresponding to one of the two or more medical devices.

According to one embodiment, the tagger is capable of changing the number of clusters depending on whether a new medical device is present or at least one of the two or more medical devices is removed.

According to one embodiment, the associating operation by the tagger is based on one or more features including any one or more of the following, alone or in combination: a respective shape of the detected emission footprints, a current respective image position of the detected emission footprints, a current anatomical position of a respective medical device.

According to an embodiment, at least two of the two or more input images or the at least one further input image are acquired with different imaging geometries.

According to an embodiment, the tagger is configured to selectively adjust the influence of one or more of the one or more features on the associating operation according to a change in an imaging geometry of the X-ray imager.

According to one embodiment, the input image is a projection image.

According to one embodiment, at least one of the two medical devices is a guidewire.

In another aspect, there is provided an arrangement comprising: a system according to any of the above embodiments; and the imaging device and/or the display apparatus.

In another aspect, a method for supporting image-based navigation is provided, comprising:

receiving one or more input images acquired by an X-ray imager when two or more medical devices are present in a FoV of the X-ray imaging apparatus simultaneously or at one or more different times;

identifying the two or more medical devices based on image information in the one or more images, the image identifier thereby being able to distinguish the two or more devices;

associating a respective unique tag with each of the at least two of the two or more medical devices so identified; and

displaying the one or more input images in a video feed on a display device, wherein the tag is included in the video feed;

the method further comprises the following steps:

receiving at least one further input image; and

displaying the one or more further input images on the display device as part of the video feed only when at least one of the medical devices is re-identified in the at least one further input image, thereby including the same tag as the tag previously associated with the at least one medical device.

In another aspect, a computer program element is provided, which, when being executed by at least one processing unit, is adapted to cause the at least one processing unit to carry out the method.

In a further aspect, a computer readable medium is provided, on which the program element is stored.

The proposed system and method make the association between a device and its emission footprint more explicit. The exploratory in-situ manipulation of tools described above can thus be avoided. Each emission footprint is tagged with an identification code that is continuously and consistently associated with its respective medical device. This also reduces the risk of the operator picking up, and inadvertently removing, the wrong device.

Each device is associated with its corresponding tag (identification code), which remains the same throughout the intervention. Usually, the tag encodes the order in which the guidewires were introduced, so as to match the sequence of guidewires present on the examination table. The tag may include, for example, a number displayed in spatial association with the corresponding guidewire footprint on the screen. Instead of, or in addition to, such number codes, color coding may be used: for example, the first-introduced guidewire is shown in red, the second-introduced guidewire in blue, and so on.

This visual aid makes it more intuitive to mentally associate the "guidewire tip on screen" with the "guidewire body in sequence on the table", allowing the clinician to reliably switch guidewires with minimal mental effort.

"tags" can be explicit, appearing (optionally including text or graphics) by annotations in the form of composite box widgets or otherwise; or the label is implicit, e.g. visualized by color-coding overlay marks, border line type changes (bold, dashed, etc.); or the label is a combination of the aforementioned means or any other visual scheme.

As used herein, "probing" may include "segmentation". Segmentation is intended to track the exact delineation of image objects, e.g., emission coverage marks. However, the overlay mark detection may not necessarily rely on segmentation, as in some cases it may be sufficient to find only the bounding box comprising the overlay mark, and not necessarily to trace the boundary of the overlay mark. Certain features, such as locations in the image, can be extracted from such bounding boxes.

"adjusting influence": this can be achieved by adjusting the weights of the cost function. The weights determine the degree to which the comparison associated with a given feature contributes to the total cost.

"imaging geometry": in fluoroscopy, this may include an angulation angle, a rotation angle, etc. In general, this may include any arrangement that changes the position or orientation of the optical axis, an imaginary axis connecting the X-ray source and the X-ray detector.

As used herein, a "user" is someone operating the imaging apparatus or someone using imagery for navigation purposes to assist in positioning the medical device in the patient.

The "(emission) coverage trace" is the portion of the image in the projection image that corresponds to the "shadow" of the object when exposed to X-radiation from a given direction after the emission of the X-radiation through the object.

Drawings

Exemplary embodiments of the invention will now be described with reference to the following drawings, which are not to scale, wherein:

FIG. 1 is a block diagram of an imaging arrangement including an X-ray imaging apparatus;

FIG. 2A is a plan view of a portion of an imaging arrangement;

FIG. 2B is a schematic illustration of emission footprints that can be recorded in an X-ray projection image;

FIG. 3 is a block diagram for an image processing system configured to support image-based navigation;

FIG. 4 is a flow diagram of a computer-implemented method for image-based navigation support;

FIG. 5 is a flow chart providing further details of the method of FIG. 4; and

FIG. 6 is a schematic representation of a graphical display according to one embodiment.

Detailed Description

Referring to fig. 1, a schematic diagram of an arrangement AR for image-based navigation support is shown, which is preferably used in the context of medical interventions.

The arrangement AR comprises an imaging device IA, in particular an X-ray imaging device, which can be operated by a user for obtaining X-ray images Fi of an internal structure of a patient at a region of interest ROI. The region of interest ROI may be a human heart, a lung or another organ or group of organs.

The images Fi (also sometimes referred to herein as a sequence of frames) may be displayed to the user in real time on the display device DD as a moving picture or video feed.

The imaging arrangement AR further comprises an image processing system IPS for processing the imagery. Broadly, the image processing system IPS is a computerized system that processes a received image so as to include in the image one or more visual indications or tags, the one or more visual indications or tags representing respective one or more medical devices. The medical devices are deployed in an intervention. When the images Fi are taken, one or more of these devices GW1 may appear in the field of view FoV of the imager from time to time. Not all of the one or more medical devices will be present in each image. Thanks to the tags provided by the IPS, the user can more easily distinguish which image portion (emission footprint) in the displayed image corresponds to which medical device.

As mentioned, the imaging apparatus IA and the image processing system IPS are primarily contemplated in embodiments herein to support medical interventions such as Percutaneous Coronary Intervention (PCI). Other medical interventions are also envisaged, which are not necessarily performed in relation to the heart of a human or animal. Non-medical applications are likewise envisaged, for example image-based support for inspections and work performed in difficult-to-access mining or piping systems, or inspection of technical equipment such as engines and other complex mechanical equipment that cannot be directly inspected visually; all of these require imaging equipment capable of visually inspecting an obstructed area of interest through a video feed.

In PCI applications, the medical devices may specifically include one or more guidewires that are introduced into the patient PAT through one or more suitable access points (e.g., an incision in the femoral artery or vein at the groin area). The guidewire so introduced is then carefully advanced through the vessel to reach the lesion (e.g., a stenosis in a portion of the patient's coronary artery). Once the tip portion of the guidewire has passed the lesion, it is secured ("parked") there, and a catheter or other tool is then slid along the guidewire to the lesion to perform the procedure. One such procedure may include treating the stenosis by using a balloon catheter to relieve the constriction. Additional guidewires, catheters, or other medical tools may additionally be introduced into the patient through the same entry point or different entry points. The coronary arteries form a complex system of branches and sub-branches of blood vessels. In order for the user to successfully navigate the guidewire to the lesion, navigation support is provided by the acquired images, which are displayed in real time on the display device DD as a video feed or moving picture. This allows the user to observe the location of the one or more medical devices inside the patient as the devices are advanced through the patient or as the intervention is performed.

Reference will now be made in more detail to the imaging device IA, which may be arranged as a C-arm type imaging device as shown in the exemplary embodiment in fig. 1. In the embodiment of fig. 1, the C-arm system IA is mounted on the ceiling CL, but this need not be the case in all embodiments. Alternatively, the imaging device IA is mounted on the floor or on a stand. In a further alternative, the imaging device may be mobile, e.g. wheel-mounted or rail-mounted.

The X-ray imaging apparatus includes an X-ray detector XD and an X-ray source XS. Broadly, in embodiments, but not necessarily in all embodiments, the imaging apparatus comprises a gantry G carrying the X-ray detector XD and the X-ray source XS (e.g. an X-ray tube). The X-ray detector and the X-ray source XS are arranged in an opposing spatial relationship on the gantry G to form an examination region between the X-ray source and the X-ray detector. The patient PAT is located in the examination region such that the region of interest is positioned approximately at the isocenter of the imaging apparatus. During imaging, the patient may lie on the table TB. The table can be adjusted in height H.

During an imaging procedure, X-ray source XS is energized by applying a cathode current and a voltage across the anode and cathode to generate an X-ray beam XB emanating from a focal spot of the anode. The beam leaves the X-ray source, passes through the examination region and thus through the patient tissue at and around the region of interest and then impinges on the X-ray sensitive surface of the X-ray detector XD. The X-ray sensitive surface of the detector may comprise pixel elements that convert impinging X-radiation into intensity values. The intensity values may vary from location to location, and since tissue or tissue types have locally different material densities, differential attenuation of the X-ray beam causes variations in the intensity values.

The intensity values recorded at the detector XD may be mapped to image values according to a color or gray value palette to form a projection image ("frame"). Acquisition circuitry operates to capture different projection images at different time instants in this manner, at a suitable frame rate, during an imaging procedure. An exemplary frame rate contemplated herein is 20-30 fps. In fluoroscopy (as the primary modality contemplated herein), the intensity values may be mapped over a range of values ranging from black through gray values to white, the darker the image value the lower the intensity value. Other mapping schemes may alternatively be used, for example an inverse mapping, where lower intensity values are mapped to lighter image values, as is common in radiography.

The spatial width of the main X-ray beam defines the field of view FoV of imager IA. An object residing in or extending into the field of view (and thus into the X-ray beam) will change the intensity of the X-rays detected locally at the detector. The field of view may be changed by a user request, for example by moving the X-ray source, moving the patient, or by enlarging or limiting the beam width using a collimator (not shown).

The X-ray detector may be arranged as a digital flat panel detector communicatively coupled to the display device DD. The flat panel detector XD may be of the direct conversion type or the indirect conversion type. In an alternative embodiment, the imaging detector may be arranged as an image intensifier coupled to the display device by a camera.

Although the contrast-giving mechanism of the projected image primarily contemplated herein is attenuation, other imaging techniques that utilize other contrast mechanisms in addition to or instead of attenuation, such as phase contrast imaging and/or dark-field imaging, are not excluded herein. In the latter two cases, the imaging device may include additional components, such as an interferometer or other components.

The imaging apparatus comprises a console CC through which a user can determine when to start and stop an imaging procedure, in particular when to energize the X-ray source XS. A pedal may be coupled to the console as a user interface to control the energization or de-energization of the X-ray source or to operate a grid switch to stop or resume exposure to the X-ray beam.

The main propagation direction of the main X-ray beam (without taking scattered radiation into account) is defined by the optical axis OX, which is an imaginary line extending from the focal spot (not shown) of the X-ray source to a central portion of the X-radiation sensitive surface of the X-ray detector XD. The optical axis defines the spatial projection direction.

To better support the user in navigation, the position or spatial orientation of the optical axis, and thus the projection direction, can be changed on user request. In one embodiment this can be achieved by arranging the gantry to be rotatable about one axis or, preferably, about two respective axes perpendicular to each other. Having two such axes of rotation allows 2 degrees of freedom for changing the optical axis. For example, in one geometry, one of the axes of rotation extends into the plane of the drawing of fig. 1 and allows the optical axis to rotate around an angle β. The other rotation axis is parallel to the drawing plane of fig. 1 and allows the orientation to be changed around another angle α, independent of β, as schematically shown in fig. 1. By convention, the axis of α defines "rotation" and the axis of β defines "angulation".

Optionally, the gantry height itself, as indicated by the double arrow H in fig. 1, may also be varied. In addition, the optical axis OX may be translated by moving the gantry along a line accordingly. The position and orientation of the optical axis may also be considered herein to define, at least in part, the imaging geometry. In other words, in embodiments, the imaging apparatus contemplated herein allows a user to change the imaging geometry. The change to the imaging geometry may be requested by a user operating a joystick or other suitable interface unit coupled to the console CC. The interface operative to request a change to the imaging geometry may comprise applying control signals to suitable actuators arranged in relation to the gantry. The actuator acts to change the imaging geometry in response to the control signal.

Other options for changing the imaging geometry may include changing the distance of the detector from the X-ray source and/or changing the distance between the region of interest and the X-ray detector (and thus the X-ray source). The latter variation can be achieved by varying the height h of the table TB on which the patient lies. Changing the height h and/or the source-to-detector distance may be equivalent to a rescaling of the image with a certain magnification factor. Other options for changing the imaging geometry may also include operation of a collimator (not shown) to limit or enlarge the shape or size of the cross-section of the X-ray beam to change the field of view FoV. Yet another option for changing the imaging geometry consists of translating the patient table TB in the X-direction and the Y-direction in a plane parallel to the table surface (one direction parallel to the plane of the drawing of fig. 1 and the other direction extending into the drawing plane).

Generally speaking, the change in geometry changes the spatial relationship of the X-ray source and/or detector with respect to the region of interest. Additionally or alternatively, the field of view may be changed by collimator action, as well as by moving the patient in translation, for example by the patient table TB as described above.

As previously mentioned, for fluoroscopy, rather than acquiring a single image, a series or stream of images ("frames") Fi is typically acquired. The stream of frames Fi may comprise different sequences of such frames. For this purpose, a sequence of frames is defined as a sequence in which all frames have been acquired in a single imaging geometry, in particular during which the position and/or orientation of the optical axis has not changed. In a typical protocol, a user will power up the X-ray source and acquire a sequence of images in a given imaging geometry while the sequence of frames is displayed one after another in a video feed on the display device. The user may request imaging to stop (i.e., the X-ray source is powered off or the X-ray beam is otherwise disabled), for example by a collimator or grid-switching action. The imaging geometry can then be changed, for example by choosing a different angulation. The X-ray beam is then re-activated and/or the X-ray source re-energized to acquire a second sequence of frames, this time in the second imaging geometry, and so on for three or more different such sequences. The entire imaging procedure may then comprise one or more different image sequences, wherein the imaging geometries of two adjoining sequences may be different. It may be the case that all sequences in the imaging procedure are acquired in different imaging geometries, but this need not be so, since the same imaging geometry may be retained across some sequences (even contiguous sequences).

Referring now to fig. 2A, fig. 2A illustrates the manner in which image information is encoded in a recorded X-ray frame F. A medical device GW (e.g., a guidewire) may reside in a field of view FoV of the imager IA. X-ray beam XB emanates from X-ray source XS and interacts with the guidewire. Since the guide wire GW has radio-opacity, the X-ray beam is attenuated. The attenuation so experienced is higher compared to the attenuation caused by the surrounding tissue. This differential attenuation produces an intensity value pattern corresponding to the shape of the guidewire when viewed along a given projection direction in accordance with the currently set imaging geometry. The intensity value pattern thus produced may be referred to herein as emission coverage trace tf (gw) or "shadow map" for a particular medical device, as schematically plotted in fig. 2A in the projection plane pp of the detector XD in a given imaging geometry.

If the surrounding tissue is soft tissue (e.g., blood vessels), its corresponding emission footprint may be recorded with only very low contrast. The contrast of the soft tissue can be enhanced by administering a contrast agent, if desired. Frames acquired while contrast agent is present at the region of interest may be referred to herein as angiograms. Additionally or alternatively, a "road mapping" technique may be used, wherein the vessel boundaries are indicated graphically by superimposed lines representing the vessel tree. Contrast-agent-supported 3D images of the patient obtained earlier, or a general (possibly patient-personalized) vessel model, may be used, after proper registration with the fluoroscopic frame, to achieve road mapping. Road mapping is described for example in US8255037 of the applicant. However, relying only on the contrast imparted by the emission footprints of the devices (with contrast agent administered only occasionally) may sometimes be sufficient to support navigation, so explicit road mapping may not be required.

Users are often faced with the situation that not just one medical device is used, but 2, 3, 4 or more devices of one or more types are introduced into the patient. Four such devices GW1-GW4 are schematically and exemplarily shown in the plan view of fig. 2B. A given image frame shown during a video feed on screen DD may thus comprise a pattern of multiple emission footprints for the multiple devices GW1-GW4, and the user may be overwhelmed as to which device GW1-GW4 corresponds to which emission footprint in the displayed frame. The arrangement of the devices GW1-GW4 in fig. 2B is for illustration purposes, as in reality most devices will be introduced through the same entry point. However, a distribution of footprints similar to that shown on the display device DD in fig. 2B (and also in fig. 6 below) may well occur, e.g. with a "tight" collimation setting focused on a narrow region.

The proposed image processing system IPS addresses this concern by tagging the corresponding image footprints in each frame in a consistent manner across all sequences of the imaging procedure. The tagging may be graphical or textual. Tagging can be explicit or implicit. Implicit graphical tagging may include, for example, displaying the respective shadow maps in different colors, or highlighting the respective shadow maps with different line types (e.g., dashed, dotted, or otherwise). Additionally or alternatively, explicit tagging may be used by displaying a graphical widget (e.g., a box, triangle, circle, oval, or any other geometric shape) in spatial association with a respective emission footprint. The explicit tags may also include textual material that may indicate the temporal order in which the respective devices were introduced into the field of view. Additionally or alternatively, the textual information may include a suitable acronym or full name or some alphanumeric string that provides a clue to the user as to which type of medical device the respective tag is associated with. Alternatively, the tags may simply comprise corresponding distinctive strings of alphanumeric characters to support better differentiation, without necessarily being tied to specific semantics.

The operation of the image processing system IPS will now be explained in more detail with reference to the block diagram of fig. 3. Broadly, the image processing system IPS is configured to perform image-based emission footprint tagging, which is consistent in different geometries and preferably continuously associated with individual respective ones of the medical devices.

The image processing system is computerized. The image processing system may be implemented on a single data processing system PU or on multiple such systems. The processing system(s) implementing the image processing system are schematically shown as PU in fig. 3. The IPS is either integrated in the imaging device IA or is otherwise communicatively coupled (wired or wirelessly) to the imager supplying the frames Fi.

The image processing system IPS may be arranged as one or more software modules, suitably linked and running on one or more processing units PU. Alternatively, the image processing system IPS may be arranged in hardware as a suitably configured microcontroller or microprocessor. The image processing system may be implemented in a distributed manner by a plurality of communicatively coupled processing units. The image processing system may be arranged partly in hardware and partly in software. Some or all of the components of the image processing system IPS described below may reside in one or more suitably communicatively coupled memories MEM. The data processing system PU may be a suitably programmed general purpose computing device. The processing unit may include a graphics processing unit (GPU) to enable fast computation.

The image processing system IPS comprises an input port IN for receiving the images Fi. The image processing system IPS comprises an image identifier ID to identify one or more of the medical devices that may happen to reside in the FoV during imaging. The identifier ID may comprise a detector module DM to detect the emission footprints of the devices GW1-GW4. The identification operation by the identifier ID is based on the detected emission footprints TF, if any.

The image identifier ID attempts to identify the medical device(s) based on the emission footprints detected in imagery received at the input port. As will be explained in more detail below, the identifier ID is configured to perform the identification operation consistently across images, in particular consistently across changes in imaging geometry. The identifier may also be configured to detect when a device is removed from the field of view, when an earlier device is reintroduced into the FoV, or whether new, previously unseen device(s) have been introduced into the field of view during imaging.

Once the corresponding medical device, whether new or previously identified, has been identified, the tagger TGG tags the detected footprint. The tagger TGG associates each footprint found to identify a given device GW with a unique tag, which is retained for that individual device GW.

The image, with the one or more tags (if any) provided by the tagger, is then processed by the graphic display generator GDG. For each given frame, the one or more tags are integrated into the received imagery, e.g., as an overlay or otherwise.

The image so tagged is then output at output port OUT. The tagged imagery may then be displayed on the display device DD by suitable graphics control circuitry to construct a video feed from the potentially tagged imagery.

More specifically, a given tag will be included in the processed image only if the identifier actually identifies the corresponding device in a given frame. The device may have been recorded in one frame but not in another. Its tag is accordingly included in, or omitted from, the corresponding frame.

The operation of the image identifier ID is event driven. The events include: changes in the number of devices in the FoV, changes in the type of GW used, and changes in the imaging geometry. The identifier ID adjusts dynamically and adaptively to these events. The algorithm implemented by the identifier ID may be adjusted in response to any one or more of these events. In an embodiment, a clustering algorithm is used. Given one or more existing clusters, the identifier ID preferably acts conservatively when assigning footprints and defining new clusters: an attempt is first made to assign a detected footprint to one of the existing clusters. A new cluster is declared only if this cannot be achieved at reasonable cost (as measured by a suitably configured cost function in the context of a minimization scheme), as sketched below.
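
The following sketch illustrates this conservative assignment: a detected footprint is assigned to the existing cluster with the lowest cost, and a new cluster (with a fresh tag) is declared only when even the best cost exceeds a threshold. The threshold value, the tag naming scheme and the cost function are assumptions made for illustration.

```python
def assign_or_create(footprint_feats, clusters, cost_fn, new_cluster_cost=10.0):
    """clusters: dict tag -> stored features; returns the tag chosen for this footprint."""
    best_tag, best_cost = None, float("inf")
    for tag, stored in clusters.items():
        c = cost_fn(stored, footprint_feats)
        if c < best_cost:
            best_tag, best_cost = tag, c
    if best_tag is not None and best_cost <= new_cluster_cost:
        return best_tag                      # conservative: reuse the existing cluster/tag
    new_tag = "GW%d" % (len(clusters) + 1)   # declare a new cluster with a fresh tag
    clusters[new_tag] = footprint_feats
    return new_tag
```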

Optionally, the image processing system may comprise circuitry arranged to detect when a change in imaging geometry has occurred.

In general, the operation of the detector module of the identifier ID may include segmenting the recorded footprints (if any). The segmentation can be addressed by hand-crafted computer vision tools (e.g., elongated-structure filters) and thresholding. Alternatively, machine learning may be used, e.g., Marginal Space Learning (MSL), as reported, for example, in "Marginal Space Learning for Efficient Detection of 2D/3D Anatomical Structures in Medical Images" by Zheng Y et al. (Information Processing in Medical Imaging, vol. 21, pp. 411-422, 2009). Deep-architecture neural networks (i.e., neural networks having one or more hidden layers) are also contemplated. Preferably, the neural network is a convolutional neural network.
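
A minimal sketch of the hand-crafted detection path mentioned above (an elongated-structure filter followed by thresholding), here using scikit-image's Frangi vesselness filter as one possible choice; the library, the filter scales and the threshold value are assumptions rather than prescriptions of the text.

```python
import numpy as np
from skimage.filters import frangi
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

def detect_footprints(frame: np.ndarray, threshold: float = 0.05):
    """Return one binary mask per candidate emission footprint in `frame`."""
    # Guidewires appear as thin dark curves; the Frangi filter enhances elongated structures.
    ridgeness = frangi(frame, sigmas=(1, 2, 3), black_ridges=True)
    mask = ridgeness > threshold           # crude thresholding
    mask = skeletonize(mask)               # reduce to centerline-like curves
    labelled = label(mask)                 # connected components = candidate footprints
    return [labelled == region.label for region in regionprops(labelled)]
```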

Clustering of the resulting segmentations can be performed by similarity comparisons between the footprint segmentation results in two successive frames. The comparison may be based on the spatial distance between the image positions of the two footprints from the different frames, since this distance is expected to be small if both segmentation results correspond to the same guidewire. Additionally or alternatively, the similarity of other features may be evaluated, for example the geometry of the footprints. In case the same imaging geometry is used (which is not necessary in this context, as will be explained in more detail below), the footprints from two frames can be expected to be similar to each other if they do correspond to the same device. The comparison of shapes and of positions in the image should be used in combination in order to resolve ambiguities that may arise if devices with the same or similar shape characteristics are used (since any two devices are unlikely to reside at exactly the same position). Optimization methods (e.g., dynamic programming) can be utilized to determine which tagging assignment provides the best similarity between the associated segmentations. Segmentation and identification should preferably be performed in near real-time to achieve a better user experience. A GPU may be used to implement such near real-time processing.
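
The following sketch illustrates one way to implement the frame-to-frame association just described, by combining a positional distance and a shape dissimilarity into a cost matrix and solving the assignment with the Hungarian algorithm (scipy); the text mentions dynamic programming, so this optimizer, the feature encoding and the weights are illustrative substitutions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_feats, curr_feats, w_pos=1.0, w_shape=0.5):
    """prev_feats/curr_feats: lists of dicts with 'tip' (x, y) and a 'shape' vector."""
    cost = np.zeros((len(prev_feats), len(curr_feats)))
    for i, p in enumerate(prev_feats):
        for j, c in enumerate(curr_feats):
            d_pos = np.linalg.norm(np.subtract(p["tip"], c["tip"]))
            d_shape = np.linalg.norm(np.subtract(p["shape"], c["shape"]))
            cost[i, j] = w_pos * d_pos + w_shape * d_shape
    rows, cols = linear_sum_assignment(cost)   # minimum-cost matching
    return list(zip(rows, cols)), cost[rows, cols]
```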

Referring now to the flow chart in fig. 4, the method for supporting image-based navigation is explained in more detail. This method can be understood as the basic operation of the image processing system IPS described above. However, it will be further understood that the method steps illustrated in figs. 4 and 5 are not necessarily tied to the architecture of the image processing system IPS described above with respect to figs. 1-3. More particularly, the method described below may be understood as a teaching in its own right.

In step S410, frames acquired in sequence by the X-ray imaging apparatus IA are received.

When the one or more frames of the sequence are acquired, two or more medical devices (e.g., the mentioned guidewires, catheters, or other tools) are in the field of view of the imaging apparatus. However, it is not required that all medical devices be present together at any given time; they may be present separately at different acquisition times. In other words, the medical devices may be recorded across two or more of the frames, rather than all of the devices being recorded in a single frame (which is of course also contemplated in embodiments). For example, one frame acquired at a certain time may have recorded two or more (in particular all) of the medical devices currently in use. Alternatively, however, one or more of the devices may have been recorded in one frame, while one or more of the other devices have been recorded in one or more different frames. The presence of the devices may change because the user may decide to remove one or more of the devices from the field of view, or the user may choose an imaging geometry setting (e.g., a collimator setting or a tube XS angulation/rotation) such that one or more devices are no longer present in the new FoV, etc.

In step S420, an attempt is made to computationally identify the medical device based on the image information encoded in the received frame. The identification operation enables different medical devices to be distinguished from each other. The image information used for this recognition operation comprises in particular emission footprints encoded and recorded in the frame.

The identifying step S420 itself may comprise two sub-steps. The first is to detect the image footprints themselves as image structures in one or more frames. The detection can be done by segmentation. Segmentation may be accomplished by computer vision techniques (e.g., machine learning, filtering, intensity value thresholding, or other means). However, segmentation is not a strict requirement, as in some cases it may be sufficient to establish a region (e.g., a bounding box, a circle, or another shape) and analyze statistical information (e.g., histogram, standard deviation, CV, etc.) of the pixels within the region. An exact tracing of the footprint boundaries is thus not necessarily required in all embodiments. In this sub-step, the footprints have not yet been associated with the respective medical devices that caused them.

The association of the detected footprints with the respective medical devices is performed in a second sub-step of the identifying step. This second sub-step, which identifies each device with its corresponding footprints, can be implemented by means of a clustering algorithm. The clustering algorithm clusters the footprints into groups, where each group represents exactly one of the medical devices.

The second sub-step of the identifying step S420 will be explained in more detail below; it is based on certain image features that can be extracted in relation to the detected footprints and/or on contextual information. In particular, information about changes in the imaging geometry may be used to inform the clustering operation. Other features may be used in combination or alone, including geometry or size, location in the image, or anatomical location. These features may be stored in the memory MEM in association with the respective clusters, as sketched below. Some or all of such pre-stored features may be retrieved when clustering is performed on the next frame.
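
A minimal sketch of how such per-cluster features might be stored and retrieved between frames; the field names and the choice of a simple dictionary store are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple
import numpy as np

@dataclass
class ClusterRecord:
    tag: str                                         # unique, persistent tag
    shape: Optional[np.ndarray] = None               # shape descriptor of the last footprint
    image_pos: Optional[Tuple[float, float]] = None  # e.g. tip position in pixels
    anatomical_segment: Optional[str] = None         # e.g. "mid RCA", "distal LAD"

cluster_store: Dict[str, ClusterRecord] = {}

def update_cluster(tag: str, shape, image_pos, segment=None):
    rec = cluster_store.setdefault(tag, ClusterRecord(tag))
    rec.shape, rec.image_pos = shape, image_pos
    if segment is not None:
        rec.anatomical_segment = segment
```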

In the identification operation at step S420, it may be sufficient to identify only a portion of a footprint and to calculate features only in relation to that portion. In embodiments, it may be useful to consider a neighborhood around the end of the footprint, as this portion represents the tip of the device (e.g., the guidewire tip), which is typically the clinically relevant portion. Other features may be calculated in relation to the entire footprint.

In the following step S430, a respective unique tag is then associated with each respective medical device identified in step S420.

Each tag can be represented in a manner visually distinct from the other tags, and can be represented by a graphical widget as mentioned above. The type of tag, color coding, etc. may be specified in advance by the user in a setup operation. For example, a menu structure may be presented to the user in a graphical user interface for the user to select a tag style. As previously mentioned, tagging can be explicit or implicit. Implicit tagging may include color coding the identified footprints and/or changing their outline by color or line-type (dashed, dotted, etc.) coding.

Explicit tagging may include placing individual discrete information boxes, circles, ovals, or other geometric widgets in spatial relationship with the corresponding footprints. The tag widget may or may not include textual information that allows a user to better establish the identity of the tagged device when the tag is included in the image.

At step S440, the received frames are displayed on the display device as a video feed (moving picture) in which the tags are included. In particular, this may involve overlaying the graphical widget onto the image, positioned in spatial association with its respective emission footprint and hence with the associated device.
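
As an illustration, the following sketch draws both an implicit tag (color-coded footprint) and an explicit tag (a text widget near the footprint tip) into a frame using OpenCV; the colors, tag text and tip-anchored placement are assumptions for illustration only.

```python
import cv2
import numpy as np

COLORS = [(0, 0, 255), (255, 0, 0), (0, 255, 0), (0, 255, 255)]  # BGR, one per tag

def draw_tags(frame_gray: np.ndarray, footprints, tags):
    """frame_gray: uint8 image; footprints: list of Nx2 int32 point arrays; tags: list of str."""
    canvas = cv2.cvtColor(frame_gray, cv2.COLOR_GRAY2BGR)
    for k, (pts, tag) in enumerate(zip(footprints, tags)):
        color = COLORS[k % len(COLORS)]
        # implicit tag: color-code the footprint (centerline) itself
        cv2.polylines(canvas, [pts.reshape(-1, 1, 2)], False, color, 2)
        # explicit tag: a small text widget placed near the footprint tip
        tip = tuple(int(v) for v in pts[-1])
        cv2.putText(canvas, tag, tip, cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return canvas
```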

The identifying operation S420 may be based on image features that are stored in memory; the stored image features may then be retrieved from memory when subsequent frames are processed in a similar manner.

It is contemplated herein that the tags associated with the respective devices are persistent or continuous throughout the imaging procedure. In particular, a particular tag remains associated with a particular unique medical device throughout the imaging procedure. In more detail, if a subsequent frame is received, it is processed as described above, but the tag for a given medical device will only be included in that frame if the device is positively identified in it. If the device is not identified, because the user has removed the device from the FoV or the FoV has changed due to a change in imaging geometry, the tag for the device will not be integrated into the frame. However, if the device is identified again in a later frame, its tag will be included in that later frame.

Likewise, if a new device is introduced, a new cluster may be defined to accommodate the newly introduced device and a new label associated with the newly introduced device. The new tag is different from all other tags previously assigned to other devices. For example, removal of the device from the FoV can occur when the user "pulls" the device (e.g., guidewire) out of the FoV (especially to remove the device completely from the patient). Some given devices may not be recorded in a given frame if the imaging geometry changes (e.g. the patient table TB is moved and/or the collimator limits the field of view) or if the X-ray source is repositioned by the user to request a new angulation or rotation or translation or a combination thereof.

In summary, if one of the devices is removed from the current FoV or a new device is introduced in the FoV or if a known device not present in a previous FoV is reintroduced into the current FoV, the number of clusters defined by the identifying step is dynamically adjusted accordingly for a given frame recorded for that FoV. In other words, the identifying step and the tagging step are event-driven operations and are dynamically adjusted for the actual image content in any given frame, as will be appreciated from the use cases mentioned above.

The identification operation may be implemented in the context of a numerical optimization procedure. In particular, the clustering operation may be formulated as an optimization procedure to optimize (e.g., minimize) a cost function associated with the clustering. If the number of devices recorded in the frames remains constant, the cost (i.e., the scalar value of the cost function) is expected to remain relatively stable; but the cost is expected to change significantly when a device is removed from the FoV, a new device is introduced, or a known device reappears. This sudden change in cost may provide an indication that the number of clusters considered needs to be increased or decreased. In other words, the adjustment of the number of clusters can be done fully automatically. Such an event may hereinafter be referred to as a "number disturbance" and indicates an event in which a medical device has been removed from, or (re-)introduced into, the field of view.

Another indication of a number-disturbance event is a change in the number of detected footprints. The number of clusters generally corresponds to the number of image footprints detected in a frame. This number can be monitored, and if it changes from one frame to another, the optimization procedure attempts to change the number of clusters accordingly. A local analysis may be performed. For example, if there is a footprint in a certain region of one frame, and then no footprint at the corresponding region in a subsequent frame, the number of clusters may need to be adjusted.

Optionally, the user may provide a signal as input to the algorithm, for example by pressing a button or otherwise. This signal can be used in the identification step to adjust the number of clusters.

Thus, it can be seen that the identifying operation S420 is robust against number-disturbance events. This step is also robust against changes in the imaging geometry.

Robustness of the identification step S420 is achieved because the operation is based specifically on features associated with the detected emission footprints. Features contemplated herein include the geometry (shape) of the detected emission footprints, alone or in combination with other features. Additionally or alternatively, the image position of a footprint in the current frame constitutes another feature of the emission footprint. Suitable reference points in the detected footprint may be used to define its location in the image plane of a given frame. Alternatively, barycentric coordinates may be calculated for the emission footprint. In an embodiment, the terminal end portion of the footprint is identified, and this tip of the footprint is then used to define the image position, as it corresponds to the position of the physical tip of the guidewire, catheter or other device.
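
A small sketch of extracting the image-position feature from a detected footprint, using either the barycentre of the footprint mask or an end point of its centerline as a proxy for the device tip; choosing the end point farthest from the centroid as the "tip" is an illustrative heuristic, not part of the original text.

```python
import numpy as np

def footprint_position(mask: np.ndarray, centerline: np.ndarray) -> dict:
    """mask: binary 2D array; centerline: ordered Nx2 array of (row, col) points."""
    rows, cols = np.nonzero(mask)
    centroid = (rows.mean(), cols.mean())            # barycentric coordinates of the footprint
    ends = np.array([centerline[0], centerline[-1]])
    # pick the end point farthest from the centroid as the presumed device tip
    tip = ends[np.argmax(np.linalg.norm(ends - np.array(centroid), axis=1))]
    return {"centroid": centroid, "tip": tuple(tip)}
```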

A contextual feature of a given emission footprint in a given frame is the anatomical location associated with the footprint and thus with the corresponding medical device. For example, as will be explained in more detail below, in the event of a change in imaging geometry, the anatomical location may be used to ensure continuous and consistent identification of a given medical device.

The anatomical location may be determined by road mapping techniques or by registering the current frame to a 3D image of the region of interest previously recorded by the same or a different imaging modality. In particular, a (suitably segmented) CT data set may be used, which can then be registered with the current image frame, e.g. by forward projection, to identify the anatomical position. Alternatively, an artificial model, preferably personalized to a given patient's region of interest, may be used to establish the anatomical location.

In addition to or instead of road mapping, deep learning (suitably trained using training image data) may be used. In many different embodiments below, it will be described how anatomical locations are derived by machine learning techniques based on deep-architecture neural networks. However, other machine learning techniques are not excluded herein, such as Support Vector Machines (SVMs), other kernel-based methods or decision trees, and so forth. The training data is provided as a pair of inputs and (desired) outputs.

Machine learning may be used by training based on images representing the vessel tree along different projection directions.

In another embodiment, co-registration may be used, wherein the guidewire position is projected onto the corresponding angiogram (if present); the contrast-enhanced coronary artery tree present in the angiogram can then be segmented and divided into its sub-segments so as to define the anatomical part. By training on the following pairs of training data, imaging geometry changes can be taken into account using a machine learning component. Input: the image or a segmentation of the vessels in the image, together with the imaging geometry change specification (e.g., angulation). Output: anatomically labeled branches or sub-branches.

In one embodiment, the anatomical location associated with the device GW position is obtained from the fluoroscopic images only. The machine learning component may be trained based on the following input/output pairs. Input: the fluoroscopic image, the geometry change specification (e.g., angulation), and the current device GW position. Output: an anatomical location specification.

Alternatively, the machine learning method is combined with the road mapping technique mentioned earlier. More specifically, in a first step the device GW location is found in the angiogram by road mapping, which in this case is equivalent to co-registration. The machine learning method is then applied with the following input/output pairs. Input: the angiogram and the device GW location obtained from the fluoroscopic image by road mapping. Output: an anatomical location.

In a further embodiment, more specific than the previously described embodiment, in a first step road mapping is used to obtain the device GW location on the angiogram. In a second step, detection (e.g., segmentation) is performed, and the branches of the vessel tree so found in the angiogram are labeled by machine learning. Finally, the branches of the labeled vessel tree are scanned to find the one in which the device footprint GW is located (see the sketch below).
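The final scanning step may be sketched as follows, assuming the labeled branches are available as binary masks and the footprint as a binary mask of the same size; the function and variable names are illustrative and not part of the method as such.

```python
import numpy as np

def locate_device_branch(footprint_mask, labeled_branches):
    """Return the anatomical label of the vessel branch that contains most of
    the device footprint (hypothetical inputs: a binary footprint mask and a
    dict mapping anatomical labels to binary branch masks)."""
    best_label, best_overlap = None, 0
    for label, branch_mask in labeled_branches.items():
        overlap = np.logical_and(footprint_mask, branch_mask).sum()
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

# toy example on an 8x8 image
fp = np.zeros((8, 8), bool); fp[2:6, 3] = True            # device footprint
branches = {"proximal LAD": np.zeros((8, 8), bool),
            "distal LAD":   np.zeros((8, 8), bool)}
branches["proximal LAD"][0:3, 3] = True
branches["distal LAD"][3:8, 3] = True
print(locate_device_branch(fp, branches))                  # -> "distal LAD"
```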

In a further embodiment, a given device GW footprint is projected into a 3D model of the coronary artery tree. The model may be generic or personalized, obtained for example from 3D images such as CT volumes. The model may be pre-labeled, or the labeling may be accomplished through machine learning. The projection may be obtained directly or via angiography, for example in the following sequence: fluoroscopic image -> angiogram -> model.

In PCI or similar heart-related interventions, the anatomical location is defined by the respective sub-vessel (e.g., mid RCA or distal LAD) in which the guidewire tip is located. In fact, given a certain imaging geometry (e.g. angulation), one roughly knows what the projected coronary tree looks like. One can then identify in which vessel branch the guidewire is navigating in the next frame, and approximately at which distal position the guidewire is located. One can then associate the guidewire footprint residing, for example, in the proximal circumflex in the current frame of the current sequence with the guidewire footprint residing in the proximal circumflex in the next sequence of frames. Devices that happen to reside in the same vessel sub-segment can be disambiguated by assessing which of the two devices is more distal, as illustrated in the sketch below. Here, "distal" refers to the direction downstream along the blood vessel, "proximal" being the opposite direction.
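One hedged way to implement this distality check is to compare the tip positions along the sub-segment centerline; the arc-length parametrization of the centerline and the nearest-point criterion below are assumptions made only for illustration.

```python
import numpy as np

def more_distal(tip_a, tip_b, centerline):
    """Return 0 if tip_a is more distal than tip_b, else 1.
    'centerline' is assumed to be an (N, 2) array of points ordered from
    proximal to distal; distality is measured by the index of the closest
    centerline point to each device tip."""
    def closest_index(tip):
        return int(np.argmin(np.linalg.norm(centerline - tip, axis=1)))
    return 0 if closest_index(tip_a) > closest_index(tip_b) else 1

centerline = np.stack([np.linspace(0, 10, 50), np.zeros(50)], axis=1)
print(more_distal(np.array([7.0, 0.1]), np.array([3.0, -0.2]), centerline))  # -> 0
```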

One or more features (e.g., shape), the position in the image and/or the anatomical location of a given footprint can be associated with that footprint for a given frame and stored in memory. The clustering operation may then include comparing the stored characteristics of an emission footprint in one frame with the stored characteristics of the emission footprints in a subsequent frame. The comparison may be performed by subtracting one footprint from the other, pixel by pixel, across the two frames and suitably quantifying or penalizing the difference, as in the sketch below. A similar comparison by subtraction can be done using the position in the image and/or the anatomical location, if desired.
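A minimal sketch of the pixel-wise comparison, assuming the footprints are stored as binary masks of equal size; the mean absolute difference is only one of the quantification options mentioned above.

```python
import numpy as np

def footprint_difference(mask_prev, mask_next):
    """Pixel-wise subtraction of two binary footprint masks, quantified as
    the mean absolute difference (lower value = more similar footprints)."""
    return float(np.abs(mask_prev.astype(float) - mask_next.astype(float)).mean())

a = np.zeros((16, 16)); a[4:12, 8] = 1.0   # footprint in frame Fj
b = np.zeros((16, 16)); b[5:13, 8] = 1.0   # slightly advanced footprint in Fj+k
print(footprint_difference(a, b))          # small value -> likely the same device
```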

Instead of comparing features in the image domain, it may be preferable to compare descriptors of the footprint shapes. In one embodiment, the two footprints to be compared may be superimposed by registration. The comparison may then be based on a measure of deviation between their centerlines; suitable metrics include the maximum, mean or median deviation.

Alternatively, the shapes of the footprints (e.g., their boundaries or centerlines) are represented as respective coordinates (coefficients) with respect to a set of basis functions (e.g., splines). The comparison is then based on these coordinates, using absolute differences, the Euclidean distance or any other suitable distance measure, as sketched below.
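As a hedged illustration of this coordinate-based comparison, the sketch below uses a fixed-degree Chebyshev basis rather than splines (either is a set of basis functions in the sense above); the degree, the centerline parametrization and the Euclidean distance are illustrative assumptions, not prescribed by the method.

```python
import numpy as np

def shape_descriptor(centerline, degree=6):
    """Fit x(t) and y(t) of a footprint centerline against a fixed Chebyshev
    basis and return the stacked coefficient vector as the shape descriptor."""
    t = np.linspace(-1.0, 1.0, len(centerline))
    cx = np.polynomial.chebyshev.chebfit(t, centerline[:, 0], degree)
    cy = np.polynomial.chebyshev.chebfit(t, centerline[:, 1], degree)
    return np.concatenate([cx, cy])

def shape_distance(centerline_a, centerline_b):
    """Euclidean distance between coefficient vectors (partial shape cost)."""
    return float(np.linalg.norm(shape_descriptor(centerline_a)
                                - shape_descriptor(centerline_b)))

t = np.linspace(0, 1, 40)
gw1 = np.stack([t, 0.3 * np.sin(3 * t)], axis=1)             # curved guidewire
gw2 = np.stack([t, 0.3 * np.sin(3 * t) + 0.01], axis=1)      # nearly the same shape
print(shape_distance(gw1, gw2))                               # small -> similar shapes
```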

The comparison with respect to the anatomical location may be binary: the two footprints are either in the same anatomical segment (e.g., a vessel branch) or they are not. Adjacent segments may also be accepted, provided the device can pass from the distal end of one segment to the proximal end of an adjacent segment between the two frames. Anatomical constraints may be imposed to exclude anatomically impossible movements: for example, a device cannot "jump" from the left coronary artery to the right coronary artery.

The feature comparisons for one or more of the features may be normalized and combined into a single cost value to form a cost function. For example, the partial costs produced by the different feature comparisons may be summed, as in the sketch below.

Each cluster attracts a total cost. The total costs for different candidate clusterings can be computed, and the clustering that attracts the lowest cost, or a cost below a threshold, can be chosen as the correct one. A clustering is defined by assigning all footprints from a pair of frames to clusters. The number of clusters can be used as an additional feature, suitably quantified and included in the cost function.
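A minimal sketch of combining normalized partial costs into a single per-assignment cost; the normalization scales, the binary anatomical term and the unit weights are assumptions for illustration only.

```python
def normalize(raw_cost, scale):
    """Map a raw partial cost to [0, 1] given an expected scale (assumption)."""
    return min(raw_cost / scale, 1.0)

def assignment_cost(d_position_px, d_shape, same_anatomy, w=(1.0, 1.0, 1.0)):
    """Combine the normalized partial costs for position (p), shape (s) and
    anatomy (a) into a single value; weights and scales are illustrative."""
    c_p = normalize(d_position_px, scale=100.0)   # position difference in pixels
    c_s = normalize(d_shape, scale=5.0)           # shape-descriptor distance
    c_a = 0.0 if same_anatomy else 1.0            # binary anatomical term
    return w[0] * c_p + w[1] * c_s + w[2] * c_a

print(assignment_cost(d_position_px=12.0, d_shape=0.4, same_anatomy=True))  # -> 0.2
```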

In general, all detected emission footprints of any pair of consecutive frames Fj, Fj+k (k=1) are clustered; however, some frames may be skipped due to poor image quality or for various other reasons, in which case the pair considered is Fj, Fj+k with k>1. Clustering is then performed based on the features extracted from the footprints in each given pair Fj, Fj+k. Each cluster, representing a certain device, typically comprises at least two footprints, at least one from each of the frames Fj and Fj+k. However, singleton clusters, comprising a footprint from only a single frame, may also occur when one of the frames in the pair includes the footprint of a newly introduced or reintroduced device, or when a device is outside the current FoV.

Broadly speaking, the clustering cost function f(·) can be defined as follows:

f(C) = Σ_χC [ w_p·Δ_p(χC) + w_s·Δ_s(χC) + w_a·Δ_a(χC) ] + G(#C)        (1)

C* = argmin_C f(C)        (2)

wherein:

χC denotes a footprint in frame Fi together with the footprint of the other frame Fi+k to which it is assigned in the given clustering C; the given clustering is generated from an earlier clustering with respect to earlier frames;

C is a given clustering of the footprints from a given frame pair Fi, Fi+k, and #C is the number of clusters it uses;

Δ_p(χC), Δ_s(χC), Δ_a(χC) are the partial costs of the association of χC with its cluster in C;

p, s, a denote the position in the image, the shape and the anatomical feature, respectively, and Δ is a comparator that defines the partial cost. Δ may be implemented by pixel-wise subtraction, absolute-value subtraction, squared subtraction, the Euclidean distance or another suitable criterion (e.g., an Lp norm with p>2);

w_p, w_s, w_a are the corresponding feature weights; and

G(·) is a partial cost on the number of clusters used, which may be suitably normalized.

If anatomically impossible movements are proposed by the algorithm, the anatomical term Δ_a(χC) may be set to a large or infinite value. In this way, anatomical constraints can be imposed.

In the optimization, the clustering (i.e. the assignment of footprints to clusters) is adjusted so that the cost function improves. Preferably, a clustering is found at which the cost function f attains a global or local minimum. Such improvement may be achieved in an iterative manner, where the optimization is configured to converge through one or more iterations towards a local or global minimum. It is sufficient to abort the iteration once a stop condition is satisfied. The stop condition may be implemented as a threshold: if the change in the cost function does not exceed the threshold, the iteration is aborted and the current clustering is output as the "best" clustering achieved by the optimization. Optimization schemes contemplated herein for solving (1), (2) include gradient-based schemes such as conjugate gradients, or derivative-free schemes such as Nelder-Mead. The cost function need not be explicit; implicit optimization schemes (e.g., k-means clustering) or other machine learning schemes (iterative or non-iterative), such as neural networks or support vector machines, may be used. If the number of devices used is relatively small (e.g., 2-3), a purely discrete evaluation of (1), comparing the costs of all candidate clusterings and choosing the one that minimizes (1), may be sufficient and fast enough in some cases; a sketch is given below. The number of clusters can be incremented from 1 up to a reasonable upper limit that may be set by the user. Capping the number of clusters considered in this way can also be applied in any of the above-described successive optimization schemes to reduce the search space and hence the computation time. Although (1), (2) have been formulated as a minimization of a cost function f, this is not intended to exclude the "dual" formulation as a maximization of a utility function, which is equally contemplated herein.
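For a small number of devices, the discrete evaluation mentioned above can be sketched as an enumeration of all one-to-one assignments between the footprints of the two frames; the cost matrix is assumed to hold the combined feature costs, and the handling of newly appearing or removed devices (the G(·) term) is omitted for brevity.

```python
from itertools import permutations

def best_clustering(costs):
    """Discrete evaluation of the clustering cost for a small number of devices.
    costs[i][j] is the combined feature cost of matching footprint i of frame
    Fj+k to the cluster continuing footprint j of frame Fj. All one-to-one
    assignments are enumerated and the cheapest one is returned."""
    n = len(costs)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(costs[i][j] for i, j in enumerate(perm))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

costs = [[0.1, 0.9],   # footprint 0 is cheap to keep in cluster 0
         [0.8, 0.2]]   # footprint 1 is cheap to keep in cluster 1
print(best_clustering(costs))   # -> (0.3..., (0, 1)): identities are preserved
```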

The cost function may be adjusted when any of the above events occurs. In particular, the weights w of individual terms of the cost function may be adjusted upon the occurrence of an event so as to change their influence or contribution.

More specifically, if the imaging geometry changes from one sequence to the next, the cost function needs to be modified to account for the change. The structure of the cost function can, however, be maintained for the frames within a given sequence.

The cost function f needs to be modified in case of a change in the imaging geometry because a correspondence between the shapes and/or positions in the image can then no longer be expected: viewed from a different spatial perspective, the footprint of the very same device may look completely different and may also lie at a completely different position in the image. To account for this, the weights can be used to disable or discount the contribution of the position-in-image and/or shape features and instead emphasize the contribution of the anatomical location. In contrast to the other features, the anatomical location is invariant; in other words, it persists even when an imaging geometry change occurs. It will be appreciated that other such invariants (not necessarily the anatomical location) may be beneficially used to detect and process sequences of different imaging geometries, for example contrast characteristics or the length of the device tip. The position and shape in the image are variant features, as they change under different imaging geometries. For example, after a change in imaging geometry, the "upper right" of one frame may correspond to the "lower left" of the next frame. Also, for devices that are not perfectly symmetric, the shape characteristics may change completely under, for example, a rotation.

By combining the features considered, robustness against the mentioned events can be achieved, which in turn results in consistent and persistent tagging.

Referring now to the flow chart in Fig. 5, the identifying step S420 is explained in more detail. In particular, the process flow of the proposed method may be adjusted based on the acquisition timing of any given frame.

Specifically, at step S42010 it is determined whether the given frame received at step S410 is the first such frame F1 of the sequence (indicated in Fig. 5 by the query "i>1?").

Turning now to the first branch "A" of the process flow according to Fig. 5, in which the given frame is indeed the first frame of a given sequence at a given imaging geometry, it is then determined whether the imaging geometry is the same as in the previous sequence. If so, the same identification operation, based on the same cost function, can be used and the clusters can be identified in step S42020A as in the previous sequence and as explained above for step S420 in the general flow chart of Fig. 4. In an embodiment, the cost constraint on the distance in the image may be relaxed between sequences to account for the larger device movements that may occur, for example, while the X-rays are switched off.

The pre-stored features extracted for the last frame of the previous sequence may be retrieved in order to perform the clustering for the current first frame F1; for all other frames in the current sequence, the features from the respective previous frame are used in the same way. Notably, when processing in branch "A", the anatomical features can be ignored at step S42020A, and the clustering can be done based only on shape similarity and/or position in the image. In particular, the weight w of the anatomical location feature may be set to zero in order to nullify its impact on the clustering when optimizing the cost function (1), as in the sketch below.
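The dynamic (de)activation of features can be sketched as a simple weight update per branch; the weight names and the particular values are assumptions, only their zero/non-zero pattern reflects the description above.

```python
def clustering_weights(first_frame_of_sequence, geometry_changed):
    """Return (w_position, w_shape, w_anatomy) for the cost function (1).
    Branches A and C: rely on position and shape only (w_anatomy = 0).
    Branch B (first frame after a geometry change): rely on anatomy only."""
    if first_frame_of_sequence and geometry_changed:
        return 0.0, 0.0, 1.0      # branch B
    return 1.0, 1.0, 0.0          # branches A and C

print(clustering_weights(first_frame_of_sequence=True, geometry_changed=True))    # (0.0, 0.0, 1.0)
print(clustering_weights(first_frame_of_sequence=False, geometry_changed=False))  # (1.0, 1.0, 0.0)
```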

However, if it is determined that an imaging geometry change has occurred, the clustering algorithm is modified and the process flow proceeds according to branch "B".

In particular, the cost function f may be modified so as to now consider the anatomical location of the footprint(s) in a given frame and to use this anatomical location to associate the footprints with the respective devices in the identifying operation S420. In other words, in step S42020B the anatomical location is determined for a given footprint, and in step S42030B the identification is performed by comparing the anatomical locations of the footprints in the given current frame and in the next frame F2. When processing in branch "B", the shape and/or position-in-image features are disabled by setting one or both of their weights to zero, since correspondence of shape and position in the image is no longer expected. Once the initial clustering relating frames F1 and F2 has been determined, the anatomical features can be "disabled" again when processing the subsequent frames of branch "B", and the remaining clustering can proceed as in branch "A", based only on shape similarity and/or position in the image, until the next geometry change occurs, in which case the flow returns to step S42010.

In branch "B", the determination of the change in imaging geometry may be based on an indication signal received from the imaging apparatus. The indication signal may not only indicate that a change in imaging geometry has occurred; it may also include enriched information specifying how the imaging geometry has changed. For example, the indication signal may specify any one or more of: angulation, rotation angle, translation, patient table height (see the sketch below). Alternatively, the event may be detected directly from the images, for example by machine learning, wherein the machine learning model is trained on the following input-output pairs. Input: two images. Output: a Boolean indication of whether the two images were acquired at the same imaging geometry.
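Where the indication signal carries the enriched geometry parameters listed above, the comparison could be sketched as follows; the field names, record layout and tolerance are assumptions made for illustration only.

```python
def same_imaging_geometry(sig_prev, sig_curr, tol=1e-3):
    """Compare two indication-signal records (dicts with angulation, rotation,
    translation and table height) and return True if the imaging geometry is
    unchanged within a tolerance."""
    keys = ("angulation", "rotation", "translation", "table_height")
    return all(abs(sig_prev[k] - sig_curr[k]) <= tol for k in keys)

prev = {"angulation": 30.0, "rotation": 0.0, "translation": 0.0, "table_height": 95.0}
curr = {"angulation": 45.0, "rotation": 0.0, "translation": 0.0, "table_height": 95.0}
print(same_imaging_geometry(prev, curr))   # False -> process according to branch "B"
```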

If the shape of some or each device is known in 3D (e.g. available as a pre-stored model), the new shape under the new imaging geometry can be pre-computed from the old imaging geometry by forward projection along the direction specified by the indication signal, as sketched below. The cost function can then be changed by updating the shape information. In a similar manner, the position in the image may be updated. Preferably, patient motion, e.g. motion caused by heart beat or breathing, is accounted for by a motion compensation technique.
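A minimal sketch of pre-computing the expected 2D footprint from a pre-stored 3D device model, assuming a simple rotation about one axis followed by a parallel projection; a real system would use the imager's actual projection model, so this is an illustrative simplification.

```python
import numpy as np

def predict_footprint(model_3d, angulation_deg):
    """Forward-project a 3D device model (N x 3 points in some patient frame)
    onto the detector plane after rotating by the reported angulation change;
    a parallel projection along z is assumed for simplicity."""
    a = np.deg2rad(angulation_deg)
    rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    rotated = model_3d @ rot.T
    return rotated[:, :2]          # drop depth -> expected 2D shape

guidewire_3d = np.stack([np.linspace(0, 50, 20),
                         5 * np.sin(np.linspace(0, 3, 20)),
                         np.zeros(20)], axis=1)
new_shape_2d = predict_footprint(guidewire_3d, angulation_deg=30.0)
print(new_shape_2d.shape)          # (20, 2) -- used to update the shape term of f
```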

Turning now to branch "C", in which the current frame is not the first frame of the given sequence (step S42020C), the identification (i.e. clustering) of the footprints to their respective devices can be continued by ignoring the anatomical location and relying purely on the shape-similarity and/or position-in-image features retrieved from the memory MEM.

If it is determined at step S42030C that the current frame is the last frame of the given sequence, the anatomical location is nevertheless determined, associated with the corresponding footprint(s) and committed to the memory MEM. If an imaging geometry change occurs, the anatomical location(s) of the footprints in this last frame can then be used for the clustering in the subsequent sequence. The new sequence can then be clustered according to branch A or B as described above.

For each frame so processed, with the respective devices identified therein, tagging can be applied in step S430 in the consistent and persistent manner described above to form a tagged sequence Fi'.

Within a given sequence, or if no imaging geometry change occurs between two sequences, the anatomical location feature is not required, and preferably only the "internal" features (i.e., shape and/or position in the image) are used for clustering. The anatomical feature is invariant to imaging geometry changes. By using such imaging-geometry-invariant information when needed, the proposed dynamic clustering scheme can be robustly tuned to ensure consistent and persistent labeling across multiple, possibly different, acquired sequences. The features used for clustering are chosen dynamically: the anatomical location feature is included only when needed, in particular for the first frame of a new sequence after a change in imaging geometry, and is otherwise preferably ignored to simplify the optimization.

The additional robustness, described above, against new devices being added and device(s) being removed is achieved by attaching a cost G(·) to the number of clusters. This can be implemented, as described above, by adding an implicit or explicit cost term for the number of clusters to be considered, so that the clustering algorithm behaves conservatively with respect to opening new clusters or reducing their number. Furthermore, by using the shape features in combination with the position-in-image features in the clustering problems (1), (2), previously identified devices that have been removed from the FoV and later reintroduced into it can be robustly re-identified. In embodiments it is assumed that the removal and reintroduction of a given device is due to imaging geometry changes: the device is no longer in the new FoV, or an earlier device re-enters the new FoV, because of the change in imaging geometry. Best performance of the proposed method and system is obtained when the affected device that re-enters the FoV after a change in imaging geometry has not yet been moved. This assumption allows a more robust re-identification of the devices, i.e. re-assignment to the correct clusters and consistent re-labeling, and it is also consistent with clinical practice, where catheters, guidewires or other tools are usually not moved while the FoV is being repositioned.

To resolve such ambiguities related to the (re)appearance or disappearance of medical devices, the processing may be based on an indication signal, e.g. provided by a user, directing the identifying step to change the number of clusters. However, fully automatic embodiments are also envisaged in which, as described above, the evolution of the cost function is observed: a rise in cost may be used as an indication that a new device is present, a known device has re-appeared, or a device has been removed. As described above, it is assumed in embodiments that the "removal" of a device coincides with a change of FoV caused by a change in imaging geometry. If a given footprint cannot be matched at a reasonable cost, it must belong to a device that is not present in the earlier frame of the pair of frames under consideration. A user-definable threshold on the cost function can be used to decide whether to change the number of clusters, as sketched below. Conversely, if an existing cluster cannot be matched to any footprint at a reasonable cost in terms of similarity, the corresponding device must have exited the image.
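A hedged sketch of this thresholding logic; the threshold value and the data layout are assumptions.

```python
def footprints_opening_new_clusters(best_match_costs, threshold=0.5):
    """best_match_costs[i] is the lowest matching cost found for footprint i of
    the current frame against all existing clusters. Footprints whose best cost
    exceeds the threshold are treated as new (or re-appearing) devices and open
    a new cluster; existing clusters left without any acceptable match would,
    conversely, be marked as devices that have exited the image."""
    return [i for i, c in enumerate(best_match_costs) if c > threshold]

print(footprints_opening_new_clusters([0.1, 0.9, 0.2]))   # -> [1]: footprint 1 starts a new cluster
```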

Another complication may arise because in some imaging geometries, in particular along some projection directions, the footprints can overlap substantially or at least partially. Frames with overlapping footprints may not be completely disambiguated.

However, once one of the devices has been advanced so that the two devices no longer overlap, the device can be expected to be re-identified by comparing its pre-overlap shape with its post-overlap shape, possibly taking into account additional geometric features such as length, curvature, etc. If the situation cannot be disambiguated in this way, one possible strategy is to assume that only one of the two guidewires was moved during the overlap period, so that the tags assigned before the overlap occurred to the static and the moving device, respectively, should be retained.

If the overlap-related situation cannot be completely disambiguated, the proposed system can issue a warning by indicating the uncertainty associated with the tag. The relevant overlapping parts may be highlighted by color coding or line-type coding. In addition, a question mark or other indicium may be attached to the associated tag. Optionally, the system IPS may invite the user to re-establish reliable tagging through a suitable interface. For example, a voice command handler may be used to receive commands such as "tag correct" or "switch tags 1 and 3" to request re-tagging.

FIG. 6 is a schematic diagram of a graphical display GD as contemplated herein, in accordance with one embodiment. The graphical display GD is displayed on the display device DD. The GD comprises three emission footprints TF(GW1)-TF(GW3), each with its own distinct label TG1-TG3. FIG. 6 illustrates explicit tagging. Some or all of the tags may include an alphanumeric string (e.g., text) to provide further information. This information may encode, by sequence number, the order in which the respective devices GW1-GW3 were introduced into the FoV. That is, the labeling is performed in chronological order, in the order in which the respective devices appear and are identified by the identifier ID. The temporal order may be encoded by the label (including an increasing number) and/or by color or other means.

The operation of the proposed system and method may be further illustrated by the following use case. Assume that two (or more) devices are recorded in the current frame and that each device is labeled, e.g. by color coding of its footprint, one device being labeled red and one device being labeled blue. Assume further that the device with the blue footprint is removed while the device with the red footprint remains present in subsequent frames. Then, at some point in time during imaging, a new device, not previously seen, is introduced. This new device is tagged with a color other than blue and red, since the blue and red tags are already occupied by the two devices mentioned earlier; the footprint of the new device may be labeled green, for example. If one or both of the two original devices (re)appear, only the red and blue labels will be shown for them. As contemplated herein in embodiments, once tagged, a device retains its unique tag, which preferably persists throughout the imaging procedure, and its tag is not reused for other devices. This behaviour is sketched below.
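The persistent tagging behaviour of the use case can be sketched as a small registry that never reuses a colour once it has been assigned to an identified device; the colour palette and device identifiers are illustrative.

```python
class TagRegistry:
    """Assigns each identified device a unique, persistent colour tag."""
    PALETTE = ["red", "blue", "green", "yellow", "magenta"]

    def __init__(self):
        self.tags = {}                          # device id -> colour

    def tag_for(self, device_id):
        if device_id not in self.tags:          # first appearance: next free colour
            self.tags[device_id] = self.PALETTE[len(self.tags)]
        return self.tags[device_id]             # re-appearance: same colour as before

reg = TagRegistry()
print(reg.tag_for("GW1"))   # red
print(reg.tag_for("GW2"))   # blue  (later removed from the FoV)
print(reg.tag_for("GW3"))   # green (new device: blue and red stay reserved)
print(reg.tag_for("GW1"))   # red   (re-identified device keeps its earlier tag)
```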

One or more features disclosed herein may be configured or implemented as/with circuitry encoded within a computer-readable medium and/or combinations thereof. The circuits may include discrete circuits and/or integrated circuits, Application Specific Integrated Circuits (ASICs), systems on a chip (SOCs) and combinations thereof, machines, computer systems, processors and memory, computer programs.

In another exemplary embodiment of the invention, a computer program or a computer program element is provided, which is characterized in that it is adapted to run the method steps of the method according to one of the preceding embodiments on a suitable system.

Thus, the computer program element may be stored in a computer unit, which may also be part of an embodiment of the present invention. The computing unit may be adapted to perform or cause the performance of the steps of the above-described method. Furthermore, the computing unit may be adapted to operate the components of the apparatus described above. The computing unit can be adapted to operate automatically and/or to run commands of a user. The computer program may be loaded into a working memory of a data processor. Accordingly, a data processor may be equipped to perform the methods of the present invention.

This exemplary embodiment of the invention covers both a computer program that uses the invention from the outset and a computer program that, by means of an update, turns an existing program into a program that uses the invention.

Further, the computer program element may be able to provide all necessary steps to complete the flow of an exemplary embodiment of the method as described above.

According to a further exemplary embodiment of the present invention, a computer-readable medium, for example a CD-ROM, is proposed, wherein the computer-readable medium has a computer program element stored thereon, which computer program element is described by the preceding sections.

A computer program may be stored and/or distributed on a suitable medium, particularly but not necessarily a non-transitory medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.

However, the computer program may also be present on a network, such as the world wide web, and may be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, the computer program element being arranged to perform the method according to one of the previously described embodiments of the present invention.

It has to be noted that embodiments of the invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims whereas other embodiments are described with reference to apparatus type claims. However, unless otherwise indicated, a person skilled in the art will gather from the above and the following description that, in addition to any combination of features belonging to one type of subject-matter, also any combination between features relating to different subject-matters is considered to be disclosed with this application. However, all features can be combined to provide a synergistic effect more than a simple addition of features.

While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.

In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. Although some measures are recited in mutually different dependent claims, this does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope.
