Position planning method for a recording system of a medical imaging device and medical imaging device

Publication No.: 1159786    Publication date: 2020-09-15

Abstract: The technique "Position planning method for a recording system of a medical imaging device and medical imaging device" was created on 2018-11-20 by V.海因里希, H.G.梅尔, H.施威策尔 and C.乌尔里希. Its main content comprises a method for position planning of a recording system of an imaging device with respect to a selectable recording region of a patient, having the following steps: detecting current position information of the recording system and/or of the imaging device and/or setting information of a collimator of the imaging device; detecting current position information of the patient; determining the current course of an X-ray beam that can be emitted by the recording system; determining a current intersection volume and/or a current recording region between the X-ray beam and the patient, in particular the patient envelope, from the current course of the X-ray beam and the patient envelope, the patient envelope being determined from the position information of the patient; displaying the current intersection volume and/or the current recording region as a virtual display element; acquiring a target intersection volume and/or a target recording region by manipulating the virtual display element; and determining a target position of the settings of the recording system and/or of the imaging device and/or of the collimator in such a way that, when the target position is occupied, the target intersection volume and/or the target recording region becomes the current intersection volume and/or the current recording region.

1. A method for position planning of a recording system of an imaging device with respect to selectable recording regions of a patient, having the following steps:

detecting or acquiring current position information of the recording system and/or of the imaging device and/or setting information of a collimator of the imaging device,

detecting or acquiring current position information of the patient,

determining the current course of an X-ray beam that can be emitted by the recording system,

determining a current intersection volume and/or a current recording region between the X-ray beam and the patient, in particular the patient envelope, from the current course of the X-ray beam and the patient envelope, the patient envelope being determined from the position information of the patient,

displaying the current intersection volume and/or the current recording region as a virtual display element,

acquiring a target intersection volume and/or a target recording region by manipulating the virtual display element, and

determining a target position of the settings of the recording system and/or of the imaging device and/or of the collimator in such a way that, when the target position is occupied, the target intersection volume and/or the target recording region becomes the current intersection volume and/or the current recording region.

2. A method for position planning of a recording system of an imaging device with respect to selectable recording regions of a patient, having the following steps:

detecting or acquiring current position information of the recording system and/or of the imaging device and/or setting information of a collimator of the imaging device,

detecting or acquiring current position information of the patient,

determining the current 3D reconstruction volume that the recording system is able to acquire in its current position,

determining a current recording volume from the current 3D reconstruction volume and the patient envelope,

displaying the current recording volume as a virtual display element,

acquiring a target recording volume by manipulating the virtual display element, and

determining a target position of the settings of the recording system and/or of the imaging device and/or of the collimator in such a way that, when the target position is occupied, the target recording volume becomes the current recording volume.

3. Method according to claim 1 or 2, wherein the recording system and/or the imaging device and/or the settings of the collimator are moved to the target position.

4. Method according to any of the preceding claims, wherein the recording system is formed by a C-arm and/or the imaging device is formed by a mobile C-arm device.

5. Method according to claim 1, wherein the recording region is two-dimensional or three-dimensional.

6. Method according to any of the preceding claims, wherein the current intersection volume or the current recording region or the current recording volume and the current patient envelope or the current patient position are displayed together, in particular on a display unit.

7. Method according to claim 6, wherein the display unit is formed by a monitor or a smart device or a virtual or augmented reality display unit, in particular augmented reality glasses.

8. Method according to any of the preceding claims, wherein the virtual display element is designed to be movable and/or changeable in size, position and orientation.

9. The method according to claim 8, wherein the virtual display element can be manipulated by an operator by means of at least one operating element and/or an input unit.

10. Method according to any of the preceding claims, wherein the current position information of the recording system and/or of the imaging device and/or the setting information of the collimator of the imaging device and/or the current position information of the patient is updated periodically or continuously.

11. Method according to any of the preceding claims, wherein a series of multiple target intersection volumes or target recording regions or target recording volumes is acquired and the respective target positions are determined and occupied.

12. Method according to any of the preceding claims, wherein previously acquired recordings of the patient are faded in, in a positionally correct manner, on a display unit.

13. A medical imaging device for performing the method according to any of claims 1 to 12, the medical imaging device having: a recording system with an X-ray detector and an X-ray source; a collimator for collimating an X-ray beam that can be emitted by the X-ray source; a control unit for controlling the method; a calculation unit for determining a current intersection volume or a current recording region or a current recording volume; a display unit for displaying the current intersection volume or the current recording region or the current recording volume as a virtual display element; and an input unit for manipulating the virtual display element.

14. Medical imaging device according to claim 13, having a tracking system for detecting current position information of the recording system and/or of the imaging device and/or of the collimator and for detecting current position information of the patient.

15. The medical imaging device of claim 14, wherein the tracking system is formed by a 3D camera or augmented reality glasses.

16. The medical imaging device of claim 13, formed by a mobile C-arm device.

17. A method for representing a recording region of a recording system of an imaging device, having the following steps:

detecting current position information of the recording system and/or of the imaging device and/or setting information of a collimator of the imaging device,

detecting current position information of the patient,

determining the current course of an X-ray beam that can be emitted by the recording system,

determining a current intersection volume and/or a current recording region between the X-ray beam and the patient, in particular the patient envelope, from the current course of the X-ray beam and the patient envelope, the patient envelope being determined from the position information of the patient, and

displaying the current intersection volume and/or the current recording region.

Technical Field

The invention relates to methods for planning the position of a recording system of a medical imaging device according to claims 1 and 2, and to a device for carrying out such a method according to claim 13.

Background

Today, medical imaging devices such as permanently installed X-ray systems or mobile C-arm devices typically use the following techniques to preview the body region of a patient that is to be imaged:

1. The 2D recording region is displayed by direct projection onto the patient. This is usually achieved with several line lasers or by projecting a bright "window" directly onto the patient surface. For this purpose, projection hardware is installed in the imaging device, for example on the image receiver and/or on the emitter housing.

2. The recording region is previewed virtually on a monitor of the imaging system by means of a polygonal outline, for example a rectangle.

However, these methods are not very flexible and require the user to adjust the position of the imaging system manually until the desired target region is captured. Technique 2 improves this somewhat, but in particular when setting up a 3D scan, the planar projection onto the patient surface still gives only an incomplete impression of the resulting reconstruction volume and of its position relative to the patient. This becomes a problem whenever larger target structures are to be imaged with a scan, especially with a mobile C-arm whose reconstruction volume is spatially very limited (a cube with a side length of about 16 cm in the case of the Cios Spin). Placing the volume so that all structures are detected at once therefore requires some experience on the part of the surgical team in positioning the C-arm optimally for a recording or 3D scan.

Disclosure of Invention

The object of the present invention is to provide a method that overcomes the drawbacks of the prior art; a further object of the invention is to provide an imaging device suitable for carrying out such a method.

According to the invention, this object is achieved by a method for planning the position of a recording system of a medical imaging device according to claim 1, by a method according to claim 2 and by a device according to claim 13. Advantageous embodiments of the invention are the subject matter of the dependent claims.

The method according to the invention for the position planning of a recording system of an imaging device with respect to a selectable recording region of a patient comprises the following steps: detecting or acquiring current position information of the recording system and/or of the imaging device and/or setting information of a collimator of the imaging device; detecting or acquiring current position information of the patient; determining the current course of an X-ray beam that can be emitted by the recording system; determining a current intersection volume and/or a current recording region between the X-ray beam and the patient, in particular the patient envelope, from the current course of the X-ray beam and the patient envelope, the patient envelope being determined from the position information of the patient; displaying the current intersection volume and/or the current recording region as a virtual display element; acquiring a target intersection volume and/or a target recording region by manipulating the virtual display element; and determining the target position of the settings of the recording system and/or of the imaging device and/or of the collimator in such a way that, when the target position is occupied, the target intersection volume and/or the target recording region becomes the current intersection volume and/or the current recording region.

Manipulating a display element is to be understood here as changing the display element in any way, i.e. moving or rotating it, changing its position, size or orientation, and so on.
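
The sequence of the claimed steps can be summarized in a rough, non-authoritative sketch. The following Python skeleton is illustrative only: the tracker, device, collimator and display objects and all of their method names are hypothetical stand-ins that are not part of the disclosure; the skeleton merely mirrors the order of the steps described above.

```python
def plan_recording_position(tracker, device, collimator, display):
    """Illustrative skeleton of the claimed planning steps (hypothetical interfaces)."""
    # Steps 1 and 2: detect current positions of recording system/collimator and patient.
    device_pose = tracker.pose_of("recording_system")   # assumed 4x4 homogeneous transform
    patient_envelope = tracker.envelope_of("patient")   # e.g. a surface or voxel envelope
    aperture = collimator.current_setting()

    # Step 3: current course of the X-ray beam that could be emitted.
    beam = device.beam_course(device_pose, aperture)

    # Step 4: current intersection volume / recording region with the patient envelope.
    current_volume = device.intersection_with(beam, patient_envelope)

    # Steps 5 and 6: show it as a virtual display element and let the operator manipulate it.
    element = display.show(current_volume, patient_envelope)
    target_volume = display.wait_for_manipulation(element)

    # Step 7: device/collimator settings that turn the target volume into the current one.
    return device.solve_target_position(target_volume, patient_envelope)
```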

That is, it is proposed to detect the current position of the patient or of a part of the patient as well as the position (e.g. the contour) of the imaging device (e.g. the mobile C-arm device) or of the recording system (e.g. only the X-ray source and the X-ray detector), or information about the collimator system, once, continuously or at predefined time intervals by suitable known position determination methods (e.g. tracking methods). Such detection can be performed, for example, by means of an external 3D tracking camera and suitable marker structures fixed on the patient and on the imaging system, e.g. the C-arm (outside-in tracking). Corresponding tracking hardware used in mixed reality or augmented reality (AR) glasses, such as the Microsoft HoloLens, may also be used (inside-out tracking). A combination of these two approaches may also be used to improve robustness. Furthermore, other possibilities for detecting the position may be used, or the position may be obtained from already existing data.
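
As an illustration of how poses from such tracking sources can be brought into one common frame, the following Python/numpy sketch composes homogeneous transforms; the frame names and all numerical values are invented for this example and are not taken from the disclosure.

```python
# Sketch: expressing tracked poses in one common frame (4x4 homogeneous transforms).
import numpy as np

def transform(rotation_deg_z: float, translation_xyz) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a z-rotation and a translation."""
    a = np.radians(rotation_deg_z)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0],
                 [np.sin(a),  np.cos(a), 0],
                 [0,          0,         1]]
    T[:3, 3] = translation_xyz
    return T

# Outside-in: a 3D tracking camera sees marker structures on patient and C-arm (example values, m).
world_T_camera   = transform(0,  [0.0, 0.0, 2.5])   # camera mounted 2.5 m above the origin
camera_T_patient = transform(10, [0.2, 0.1, -1.5])  # patient marker as seen by the camera
camera_T_carm    = transform(90, [1.0, 0.0, -1.3])  # C-arm marker as seen by the camera

# Poses in the common world frame.
world_T_patient = world_T_camera @ camera_T_patient
world_T_carm    = world_T_camera @ camera_T_carm

# What the planning actually needs: the C-arm pose relative to the patient.
patient_T_carm = np.linalg.inv(world_T_patient) @ world_T_carm
print(np.round(patient_T_carm, 3))
```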

For example, the detected positions and data are transmitted to the control unit of the imaging device. The control unit thus knows, or can determine, the current relative position of the recording system and the patient with respect to each other. According to the invention, a current intersection volume and/or a current recording region between the X-ray beam and the patient, in particular the patient envelope determined from the position information of the patient, is then determined from the current course of the X-ray beam and the patient envelope. A virtual display element showing the current intersection volume and/or the current recording region is now displayed on a display unit (e.g. a 2D screen or, preferably, stereoscopically on AR glasses). It can also be superimposed on the real patient, for example. In the case of a planned 2D recording, the display element is, for example, the intersection between the patient volume and the collimated direct-beam course of the X-ray beam. According to one embodiment of the invention, the virtual display element can be manipulated by the operator by means of at least one operating element and/or an input unit. The user can manipulate the display element, for example, on a display unit such as a 2D screen using a suitable GUI (graphical user interface), operating elements (e.g. graphical handles (Anfasser), sliders, etc.) and known input devices (e.g. mouse and keyboard, touch screen, etc.). According to one embodiment of the invention, the virtual display element is designed to be movable and/or changeable in size, position and orientation. If AR glasses are used for the display, the recording region can also be manipulated by the user directly manipulating the holographic volume superimposed on the patient, for example by means of gesture control. In addition to gesture control, the position, acceleration and magnetic field sensors commonly found in such glasses can also be incorporated into the interaction in order to influence the alignment of the system, for example by head movements.
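
A minimal sketch of such a manipulable display element, assuming for the purpose of this example a simple box-shaped element described by a center, a size and one orientation angle (these fields and methods are illustrative assumptions, not part of the disclosure):

```python
# Sketch of a manipulable virtual display element (position, size, orientation).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class DisplayElement:
    center: np.ndarray = field(default_factory=lambda: np.zeros(3))                    # mm, patient frame
    size:   np.ndarray = field(default_factory=lambda: np.array([160.0, 160.0, 160.0]))  # mm
    yaw_deg: float = 0.0                                                               # about the vertical axis

    def move(self, delta):
        self.center = self.center + np.asarray(delta, float)

    def resize(self, factor: float):
        self.size = self.size * float(factor)

    def rotate(self, delta_deg: float):
        self.yaw_deg = (self.yaw_deg + delta_deg) % 360.0

# A drag gesture on the touch monitor or a gesture in AR glasses would end up calling
# these methods; here a plain function call stands in for the user input.
element = DisplayElement()
element.move([0.0, 50.0, -20.0])   # shift the planned region by 5 cm along the table, 2 cm down (example)
element.rotate(15.0)
print(element)
```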

According to the invention, the target intersection volume and/or the target recording region is acquired on the system side by manipulating the virtual display element. The target position of the settings of the recording system and/or of the imaging device and/or of the collimator is then determined in such a way that, when the target position is occupied, the target intersection volume and/or the target recording region becomes the current intersection volume and/or the current recording region.

For this purpose, when the target intersection volume and/or the target recording region is confirmed, the control unit can derive movement instructions for the different device axes (e.g. position of the C-arm relative to the patient, orbital/angular angle, height of the vertical axis) and/or collimator settings. Collision sensor parameters can, for example, also be incorporated into the calculation of the device movement in order to automatically determine and, if necessary, dynamically adjust the optimal movement sequence for positioning the device.
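
Under the simplifying assumptions that the manipulated element differs from the current intersection volume only by a translation and that the device axes are aligned with the patient coordinate axes, the required axis movements could be derived as in the following sketch (the axis names are illustrative, not actual device parameters):

```python
# Sketch: deriving axis movements from the manipulated display element.
import numpy as np

def axis_moves(current_center, target_center):
    """Translation the recording system would have to perform so that the
    target intersection volume becomes the current one (pure translation case)."""
    delta = np.asarray(target_center, float) - np.asarray(current_center, float)
    return {
        "lateral_mm":      float(delta[0]),  # move device cart across the table
        "longitudinal_mm": float(delta[1]),  # move device cart along the table
        "vertical_mm":     float(delta[2]),  # adjust the vertical axis
    }

print(axis_moves(current_center=[0, 0, 0], target_center=[-20, 150, 30]))
```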

Instead of a 2D recording, a 3D scan may also be planned. For this purpose, instead of the intersection of the direct beam and the patient volume, the 3D reconstruction volume (e.g. a cube) technically possible with the imaging device is superimposed on the patient or the patient envelope, for example at its original size. The user can then manipulate the position of this volume as described above.
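
For the 3D case, a rough sketch of determining the currently acquirable recording volume as the overlap of the reconstruction cube with the patient envelope; approximating both as axis-aligned boxes is a simplifying assumption made only for this example, and the dimensions are illustrative.

```python
# Sketch: overlap of the device's reconstruction cube with a box-shaped patient envelope.
import numpy as np

def box_overlap(center_a, size_a, center_b, size_b):
    """Return (overlap_min, overlap_max) of two axis-aligned boxes, or None if disjoint."""
    a_min = np.asarray(center_a, float) - np.asarray(size_a, float) / 2
    a_max = np.asarray(center_a, float) + np.asarray(size_a, float) / 2
    b_min = np.asarray(center_b, float) - np.asarray(size_b, float) / 2
    b_max = np.asarray(center_b, float) + np.asarray(size_b, float) / 2
    lo, hi = np.maximum(a_min, b_min), np.minimum(a_max, b_max)
    return (lo, hi) if np.all(hi > lo) else None

# Reconstruction cube of roughly 16 cm side length placed at the isocenter,
# patient envelope approximated as a 500 x 1800 x 250 mm box (example values).
recording_volume = box_overlap([0, 0, 0], [160, 160, 160],
                               [0, 400, 0], [500, 1800, 250])
print(recording_volume)
```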

A further method according to the invention for position planning of a recording system of an imaging device with respect to a selectable recording region of a patient comprises the following steps: detecting or acquiring current position information of the recording system and/or of the imaging device and/or setting information of a collimator of the imaging device; detecting or acquiring current position information of the patient; determining the current 3D reconstruction volume that the recording system is able to acquire in its current position; determining a current recording volume from the current 3D reconstruction volume and the patient envelope; displaying the current recording volume as a virtual display element; acquiring a target recording volume by manipulating the virtual display element; and determining the target position of the settings of the recording system and/or of the imaging device and/or of the collimator in such a way that, when the target position is occupied, the target recording volume becomes the current recording volume.

Expediently, the recording system and/or the imaging device and/or the collimator settings are moved to the target position.

According to one embodiment of the invention, the recording system is formed by a C-arm and/or the imaging device is formed by a mobile C-arm device. The mobile C-arm device has a C-arm which is mounted on a device cart and can be adjusted in various ways, for example rotated and translated. In addition, the device cart can be moved freely, automatically or manually. A fixedly mounted C-arm device likewise has an adjustable C-arm; for example, the C-arm can be arranged on an articulated-arm robot and adjusted in any spatial direction.

According to a further embodiment of the invention, the current intersection volume or the current recording region or the current recording volume and the current patient envelope are displayed together, in particular on a corresponding display unit. Advantageously, the display unit is formed by a monitor or a smart device or a virtual or augmented reality display unit, in particular augmented reality glasses.

According to a further embodiment of the invention, the current position information of the recording system and/or of the imaging device and/or the setting information of the collimator of the imaging device and/or the current position information of the patient is updated periodically or continuously. In this way, reliable functioning of the method can be ensured.

According to a further embodiment of the invention, a series of multiple target intersection volumes or target recording regions or target recording volumes is acquired and the respective target positions are determined and occupied. A corresponding recording can then be made at each target position. Such a series of recordings (panorama) is useful when the target intersection volume and/or target recording region exceeds the size that can be imaged with a single recording (ultimately limited by the size of the X-ray detector). In this case, the control unit can plan the series of recordings and/or calculate the motion vectors required for the sequence over time.
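
A simple way to derive such a series is to tile the oversized target region with overlapping single recordings along one axis; the following sketch uses an assumed field size and overlap that are purely illustrative and not device specifications.

```python
# Sketch: splitting a target region that is longer than one field of view into a
# series of overlapping target positions (panorama) along one axis.
import numpy as np

def panorama_positions(target_start_mm, target_end_mm, field_mm=300.0, overlap_mm=30.0):
    """Centers of the individual recordings along one axis, covering the target span."""
    span = target_end_mm - target_start_mm
    step = field_mm - overlap_mm
    n = max(1, int(np.ceil((span - field_mm) / step)) + 1)   # number of recordings needed
    first = target_start_mm + field_mm / 2
    last  = target_end_mm - field_mm / 2
    return list(np.linspace(first, max(first, last), n))

# A 70 cm long target region imaged with a 30 cm field and at least 3 cm overlap.
print(panorama_positions(0.0, 700.0))
```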

If the normal reconstruction volume of a typical single scan, for example an orbital scan (Orbitalscan) in the case of a C-arm, is not sufficient in terms of size, a 3D panorama can also in principle be calculated here, from which the required orbital curves and motion patterns can be derived for the recording system within the technical limits of this approach.

According to a further embodiment of the invention, previously acquired recordings of the patient are faded in (Einblendung) in a positionally correct manner on the display unit. For better orientation, a preoperatively created 2D recording or 3D scan (e.g. CT or MRT) can, for example, be faded in at the correct position and/or superimposed on the patient at its original size. This can be advantageous when a specific region is to be imaged again, for example for monitoring purposes, e.g. to check the position of an implant, or for surgery on a tumor. It also makes it possible to compare the size of the relevant structure with respect to changes since the last recording. It may furthermore be advantageous if the physician marks the position of the relevant structures in a corresponding planning system before the operation; the control unit then determines, for example intraoperatively, with knowledge of the patient position, the acquisition position of the imaging device (for example a mobile C-arm) required for the intraoperative 2D recording or 3D scan.
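
Conceptually, the positionally correct fade-in only requires expressing the preoperative data set in the current patient frame via a registration transform, for example as in this sketch (the matrix and corner values are invented for illustration and assume the registration is already known):

```python
# Sketch: placing a preoperative 3D data set in the current patient frame so that
# it can be faded in at the correct position.
import numpy as np

preop_T_patient = np.array([  # patient frame -> preoperative data set frame (example values)
    [1, 0, 0, -12.0],
    [0, 1, 0,  45.0],
    [0, 0, 1,   3.5],
    [0, 0, 0,   1.0],
])
patient_T_preop = np.linalg.inv(preop_T_patient)

# Corner points of the preoperative volume (mm, in its own frame) ...
corners_preop = np.array([[0, 0, 0, 1], [256, 256, 180, 1]], float).T
# ... expressed in the current patient frame for the positionally correct overlay.
corners_patient = (patient_T_preop @ corners_preop).T[:, :3]
print(np.round(corners_patient, 1))
```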

The method according to the invention simplifies the clinical workflow by largely eliminating the cumbersome positioning interaction that is common today with imaging devices (e.g. C-arms) and replacing it with a patient- or image-centric approach. The optimum device position is determined automatically. As auxiliary devices, for example, a camera system for volume detection and, if necessary, AR glasses are used. The method serves the user or operator of the medical imaging device as an intuitive, clear and fast positioning aid in order to simplify the workflow and improve patient care.

The invention also comprises a medical imaging device for carrying out the method according to the invention, the medical imaging device having: a recording system with an X-ray detector and an X-ray source; a collimator for collimating an X-ray beam that can be emitted by the X-ray source; a control unit for controlling the method; a calculation unit for determining a current intersection volume or a current recording region or a current recording volume; a display unit for displaying the current intersection volume or the current recording region or the current recording volume as a virtual display element; and an input unit for manipulating the display element.

According to one embodiment of the invention, the device further comprises a tracking system for detecting current position information of the recording system and/or of the imaging device and/or of the collimator and for detecting current position information of the patient. Such a tracking system may be formed, for example, by an (external) 3D camera or by augmented reality glasses. An external 3D tracking camera may, for example, also use suitable marker structures fixed to the patient and to the recording system, e.g. the C-arm (outside-in tracking). Corresponding tracking hardware used in mixed reality or augmented reality (AR) glasses (e.g. Microsoft HoloLens) may also be used (inside-out tracking). A combination of these two approaches may also be used to improve robustness.

The invention also comprises a method for representing a recording region of a recording system of an imaging device, comprising the following steps: detecting current position information of the recording system and/or of the imaging device and/or setting information of a collimator of the imaging device; detecting current position information of the patient; determining the current course of an X-ray beam that can be emitted by the recording system; determining a current intersection volume and/or a current recording region between the X-ray beam and the patient, in particular the patient envelope, from the current course of the X-ray beam and the patient envelope, the patient envelope being determined from the position information of the patient; and displaying the current intersection volume and/or the current recording region.

Drawings

The invention and further advantageous embodiments according to the features of the dependent claims are explained in more detail below with reference to exemplary embodiments that are schematically illustrated in the drawings, without the invention being restricted to these exemplary embodiments.

Fig. 1 shows an imaging device according to the invention with a recording system and a display unit;

fig. 2 shows the display of a current intersection volume, formed from the patient envelope and the X-ray beam, and of a virtual display element in the case of the imaging device according to fig. 1;

fig. 3 shows an enlarged view according to fig. 2 with a superimposed 2D recording;

fig. 4 shows a view of the manual movement of the virtual display element according to figs. 2 and 3;

fig. 5 shows the sequence of a method according to the invention;

fig. 6 shows the display of a current recording volume and a virtual display element in the case of an imaging device for a 3D recording; and

fig. 7 shows an enlarged view according to fig. 3 with a superimposed 3D recording.

Detailed Description

In fig. 1, a medical imaging device with a C-arm 1 is shown, on which C-arm 1 an X-ray detector 2 and an X-ray source 3 are fixed. The X-ray source 3 can emit an X-ray beam 4, additionally shaped or shapeable by a collimator (not shown), which passes through a patient 5 supported on a patient bed 6. The position and/or envelope (Hülle) of the C-arm and the position of the patient 5, for example in the form of a patient envelope, are detected, for example, by a detection system, for example a tracking system with a 3D tracking camera 14. The recording system (C-arm 1 with X-ray source 3 and X-ray detector 2) is adjustable, for example rotatable and translatable, relative to the patient 5. The recording system can be fastened, for example by means of a holder, to the ceiling, to the floor or to a device cart. The imaging device is controlled by a system controller 13, which controls the emission of the X-ray beam and the movement of the recording system, for example on command or automatically. Furthermore, the imaging device has a calculation unit 17 and an operating unit with a display unit, for example a touch monitor 16 with a display 18. The method according to the invention can be performed using such an imaging device.

Fig. 5 shows a flowchart of a method according to the invention. In a first step 21, current position information of the recording system and/or of the imaging device and/or setting information of a collimator of the imaging device is detected or acquired, for example by a detection system. Such a detection system may be formed, for example, by a tracking system with a 3D tracking camera 14. For this purpose, suitable marker structures fixed on the patient and on the imaging system, for example the C-arm, can additionally be used if necessary (outside-in tracking). Corresponding tracking hardware used in mixed reality or augmented reality (AR) glasses (e.g. Microsoft HoloLens) may also be used (inside-out tracking). In this regard, augmented reality glasses 15 are shown by way of example in fig. 1. Furthermore, other possibilities for detecting the position may also be used, or the position may be obtained from already existing data, such as previously acquired X-ray images.

In a second step 22, current position information of the patient 5 or of a part/organ of the patient 5 is detected or acquired. This position information may also be detected by the tracking system. The patient envelope or an organ envelope may, for example, be determined from the position information. In a third step 23, the current course of the X-ray beam that can be emitted by the recording system is determined. The current course of the X-ray beam 4 is determined, for example, using position information of the entire imaging device or only of the recording system and/or information about the collimator. The actual X-ray beam does not yet have to be emitted for this; only a planned or intended setting of, for example, the collimator may be used.
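
For illustration, the collimated direct beam can be modeled as a pyramid spanned by the focal spot and the collimated field on the detector; the following sketch assumes a simplified geometry with invented dimensions that are not taken from the disclosure.

```python
# Sketch: current course of the (collimated) X-ray beam as a pyramid from the
# focal spot to the collimated detector window.
import numpy as np

def beam_pyramid(source, detector_center, detector_u, detector_v, aperture_uv):
    """Return the source point and the four corner points of the collimated
    field on the detector plane; together they bound the direct beam."""
    src = np.asarray(source, float)
    c   = np.asarray(detector_center, float)
    u   = np.asarray(detector_u, float)   # unit vector along detector rows
    v   = np.asarray(detector_v, float)   # unit vector along detector columns
    hu, hv = aperture_uv[0] / 2.0, aperture_uv[1] / 2.0
    corners = [c + su * hu * u + sv * hv * v
               for su in (-1, 1) for sv in (-1, 1)]
    return src, np.array(corners)

# Source 1200 mm from the detector, collimator opened to a 200 x 150 mm field (example values).
src, corners = beam_pyramid(source=[0, 0, -600], detector_center=[0, 0, 600],
                            detector_u=[1, 0, 0], detector_v=[0, 1, 0],
                            aperture_uv=(200.0, 150.0))
print(corners)
```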

In a fourth step 24, a current intersection volume between the X-ray beam 4 and the patient 5/patient envelope and/or a current recording region or a current recording volume is determined as a function of the current course of the X-ray beam and the position of the patient/patient envelope. This may be done, for example, by transmitting the detected positions and data to the control unit 13 of the imaging device. The control unit thus knows, or can determine, the current relative position of the recording system and the patient/patient envelope/organ envelope with respect to each other.
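
A straightforward, illustrative way to obtain such an intersection volume is to sample the patient envelope and keep the sample points that lie inside the beam pyramid; the geometry below (beam axis along z, invented source/detector distances and field size) is an assumption made only for this sketch.

```python
# Sketch: current intersection volume as the set of patient-envelope sample points
# that lie inside the collimated beam (simplified geometry, example values).
import numpy as np

SRC_Z, DET_Z = -600.0, 600.0          # source and detector plane along the beam axis (mm)
HALF_U, HALF_V = 100.0, 75.0          # half of a 200 x 150 mm collimated field

def inside_beam(points):
    """Boolean mask: which points lie inside the beam pyramid."""
    p = np.asarray(points, float)
    t = (p[:, 2] - SRC_Z) / (DET_Z - SRC_Z)      # 0 at the source, 1 at the detector
    with np.errstate(divide="ignore", invalid="ignore"):
        x_det = p[:, 0] / t                      # central projection onto the detector plane
        y_det = p[:, 1] / t
    return (t > 0) & (t <= 1) & (np.abs(x_det) <= HALF_U) & (np.abs(y_det) <= HALF_V)

# Coarsely sampled patient envelope (here simply a box of points around the isocenter).
xs, ys, zs = np.meshgrid(np.arange(-250, 251, 50),
                         np.arange(-400, 401, 50),
                         np.arange(-125, 126, 50), indexing="ij")
envelope = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)
intersection = envelope[inside_beam(envelope)]
print(len(intersection), "sample points inside the current intersection volume")
```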

In a fifth step 25, the current intersection volume 7 and/or the current recording region or the current recording volume is displayed as a virtual display element 8, for example on the display 18 of the touch monitor 16; this is shown in fig. 2 and, enlarged, in fig. 3. It is preferably displayed together with the current patient envelope or the current patient position in order to provide the user with an accurate representation of reality. Instead of a real display unit, for example a monitor, a touch screen or a flat panel, a virtual display unit may also be used. The image region of, for example, a 2D recording is then obtained in a simple manner from the current intersection volume 7. In the 3D case, the currently acquirable recording volume can be determined, for example, from the technically possible 3D reconstruction volume, the position of the imaging device and the patient envelope of the patient 5.
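
The 2D image region belonging to the intersection volume can, for example, be approximated by centrally projecting its sample points onto the detector plane; the sketch below again uses a simplified, assumed geometry with illustrative values.

```python
# Sketch: 2D recording region obtained by central projection of intersection-volume
# sample points onto the detector plane (simplified geometry, example values).
import numpy as np

SRC = np.array([0.0, 0.0, -600.0])   # focal spot
DET_Z = 600.0                        # detector plane

def project_to_detector(points):
    p = np.asarray(points, float)
    s = (DET_Z - SRC[2]) / (p[:, 2] - SRC[2])           # ray scale factor per point
    return SRC[:2] + s[:, None] * (p[:, :2] - SRC[:2])  # x/y on the detector plane

# Illustrative intersection-volume sample points (mm, patient frame).
volume_points = np.array([[-30, -20, -80], [30, 25, -80], [-25, 20, 90], [35, -15, 90]])
uv = project_to_detector(volume_points)
print("2D recording region on the detector:",
      uv.min(axis=0).round(1), "to", uv.max(axis=0).round(1))
```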

In a sixth step 26, the target intersection volume and/or the target recording region or the target recording volume is acquired by manipulating the virtual display element, for example according to a user input of the operator on the touch monitor. As shown in fig. 4, the user/operator may move the virtual display element 8, alternatively change its position or orientation, or otherwise manipulate the virtual display element 8. This can be done, for example, by manually moving the virtual display element 8, as indicated by the hand in fig. 4. It may also be done by mouse clicks or other user input; gesture control or voice control may also be used, for example. If AR glasses are used for the display, the recording region can also be manipulated by the user directly manipulating the holographic volume superimposed on the patient, for example by means of gesture control. In addition to gesture control, the position, acceleration and magnetic field sensors commonly found in such glasses can also be incorporated into the interaction in order to influence the alignment of the system, for example by head movements. As a result of the manipulation, the current intersection volume becomes the target intersection volume, or the current recording region becomes the target recording region, or the current recording volume becomes the target recording volume.

In a seventh step 27, the target position of the settings of the recording system and/or of the imaging device and/or of the collimator is determined in such a way that, when the target position is occupied, the target intersection volume and/or the target recording region becomes the current intersection volume and/or the current recording region. The imaging device, i.e. the control unit 13 and/or the calculation unit 17, then calculates from the target intersection volume which position the C-arm and/or the entire imaging device and/or the collimator has to occupy so that the current intersection volume is formed from the target intersection volume. Alternatively, the imaging device, i.e. the control unit 13 and/or the calculation unit 17, calculates from the target recording region or the target recording volume which position the C-arm and/or the entire imaging device and/or the collimator has to occupy so that the current recording region is formed from the target recording region or the current recording volume from the target recording volume. The C-arm and/or the imaging device and/or the collimator may then be moved, for example automatically, to the respective position and/or setting. For this purpose, the control unit can, for example, derive movement commands for the different device axes (e.g. the position of the C-arm relative to the patient, the orbital/angular angle (Orbital-/Angularwinkel), the height of the vertical axis) and/or settings of the collimator when the target intersection volume and/or the target recording region is confirmed. Collision sensor parameters can also be incorporated into the calculation of the device movement, for example, in order to automatically determine and, if necessary, dynamically adjust an optimal movement sequence for positioning the device.

In an optional eighth step 28, the recording system and/or the imaging device and/or the collimator settings are moved to the target position.

Furthermore, fig. 3 shows, in an enlarged section, a previously acquired 2D recording 9 of the patient 5 faded into the region of the current intersection volume or recording region. The 2D recording 9 can serve, for example, as orientation or as an additional positioning aid for the user or operator.

The imaging device may be a fixedly mounted C-arm device or may for example also be a mobile C-arm device.

In fig. 6, the currently acquirable recording volume 10 is shown for the case in which a 3D acquisition is to be planned. Here, the currently acquirable recording volume 10 is determined by means of the technically possible 3D reconstruction volume 11 of the imaging device and the patient envelope of the patient 5. The imaging device, i.e. for example the C-arm 1, and the patient 5 are also tracked by a detection system, for example a tracking system, as in the two-dimensional case.

In fig. 7, an enlarged section of fig. 6 shows how a 3D volume image previously acquired, for example, by means of angiography or CT is additionally superimposed on the current recording volume.

The current position information of the recording system and/or of the imaging device and/or of the collimator of the imaging device and/or of the patient can be determined once or, preferably, updated periodically or continuously. In this way, reliable functioning of the method can be ensured.

A series of multiple target intersection volumes or target recording regions or target recording volumes may also be acquired, and the corresponding target positions determined and occupied. A corresponding recording can then be made at each target position. Such a series of recordings (panorama) is useful when the target intersection volume and/or target recording region exceeds the size that can be imaged with a single recording, which is ultimately limited by the size of the X-ray detector. The control unit can plan the series of recordings and/or calculate the motion vectors required for the sequence over time.

List of reference numerals

1 C-arm
2 X-ray detector
3 X-ray source
4 X-ray beam
5 patient or patient envelope
6 patient bed
7 intersection volume
8 virtual display element
9 2D recording
10 recording volume
11 possible 3D reconstruction volume
12 3D recording
13 system controller
14 3D tracking camera
15 augmented reality glasses
16 touch monitor
17 calculation unit
18 display
21 first step
22 second step
23 third step
24 fourth step
25 fifth step
26 sixth step
27 seventh step
28 eighth step
