X-ray imaging apparatus and X-ray image processing method

Document No.: 1416407    Publication date: 2020-03-13

Description: This invention, "X-ray imaging apparatus and X-ray image processing method," was created by 吉田贵则 on 2019-09-05. Its main content is as follows: The present invention provides an X-ray imaging apparatus and an X-ray image processing method. The image synthesizing unit of the X-ray imaging apparatus is configured to correct the synthesis target image or the fluoroscopic image based on the movement information of the feature points and the movement information of the pixels, and to synthesize the corrected synthesis target image with the fluoroscopic image, or the synthesis target image with the corrected fluoroscopic image, to generate the synthesized image.

1. An X-ray imaging apparatus comprising:

an imaging unit including an X-ray generating unit that irradiates a subject with X-rays and an X-ray detecting unit that detects the X-rays irradiated from the X-ray generating unit and transmitted through the subject, the imaging unit capturing an X-ray image of the subject;

a fluoroscopic image acquiring unit that acquires a fluoroscopic image which is the X-ray image obtained by fluoroscopy of the subject captured by the imaging unit and which includes a feature point image;

an image synthesizing unit that synthesizes the fluoroscopic image and a synthesis target image, which is the X-ray image captured before a time point at which the fluoroscopic image is captured and is to be synthesized with the fluoroscopic image, to generate a synthesized image;

a reference image acquisition unit that acquires the X-ray image including the feature point image captured before a time point at which the fluoroscopic image is captured as a reference image; and

a movement information acquiring unit that extracts the feature point image from each of the reference image and the fluoroscopic image, acquires movement information of a feature point based on the extracted feature point image, and acquires movement information of at least some pixels among pixels belonging to the reference image based on the reference image and the fluoroscopic image,

wherein the image synthesizing unit is configured to: correct the synthesis target image or the fluoroscopic image based on the movement information of the feature point and the movement information of the pixels, and synthesize the corrected synthesis target image with the fluoroscopic image or synthesize the synthesis target image with the corrected fluoroscopic image to generate the synthesized image.

2. The X-ray imaging apparatus according to claim 1, wherein

the fluoroscopic image acquisition unit is configured to: acquire live images successively generated in real time as the fluoroscopic images, and

the image synthesizing unit is configured to: correct the synthesis target image or the live image based on the movement information of the feature points and the movement information of the pixels, and synthesize the corrected synthesis target image with the live image or synthesize the synthesis target image with the corrected live image to generate the synthesized image.

3. The X-ray imaging apparatus according to claim 2, wherein

the image synthesizing unit is configured to: each time the live image is acquired by the fluoroscopic image acquisition unit, correct the synthesis target image or the live image based on the movement information of the feature points and the movement information of the pixels, and synthesize the corrected synthesis target image with the live image or synthesize the synthesis target image with the corrected live image to generate the synthesized image.

4. The X-ray imaging apparatus according to claim 2, wherein

the image synthesizing unit is configured to: acquire, as the synthesis target image, a difference image between a contrast image, which is the X-ray image in a state in which a contrast medium is administered to a blood vessel of a lower limb of the subject, and a non-contrast image, which is the X-ray image in a state in which a contrast medium is not administered to a blood vessel of the subject, correct the difference image or the live image based on the movement information of the feature points and the movement information of the pixels, and synthesize the corrected difference image with the live image or synthesize the difference image with the corrected live image to generate the synthesized image.

5. The X-ray imaging apparatus according to claim 4, wherein

the image synthesizing unit is configured to: synthesize an inverted image, obtained by performing black-and-white inversion processing on at least a part of the corrected difference image, with the live image, or synthesize an inverted image, obtained by performing black-and-white inversion processing on at least a part of the difference image, with the corrected live image, to generate the synthesized image.

6. The X-ray imaging apparatus according to claim 5, wherein

the image synthesizing unit is configured to: synthesize the inverted image with the live image including an image of at least one of a catheter, a stent, and a guide wire inserted into the subject to generate the synthesized image.

7. The X-ray imaging apparatus according to claim 4, wherein

the reference image acquisition unit is configured to: acquire, as the reference image, the live image captured before the fluoroscopic image is captured at the same imaging position as that of the synthesis target image.

8. The X-ray imaging apparatus according to claim 4, wherein

the reference image acquisition unit is configured to: acquire the contrast image as the reference image.

9. The X-ray imaging apparatus according to claim 4, wherein

the reference image acquisition unit is configured to: acquire the non-contrast image as the reference image.

10. The X-ray imaging apparatus according to claim 1, wherein

the movement information acquiring unit is configured to: in a case where a movement amount from the feature point image of the reference image to the feature point image of the fluoroscopic image exceeds a movement amount threshold, correct the reference image based on the movement information of the feature point, and acquire the movement information of the pixels based on the corrected reference image and the fluoroscopic image.

11. The X-ray imaging apparatus according to claim 1, wherein

the movement information acquiring unit is configured to: extract a plurality of feature point images from the reference image and a plurality of feature point images from the fluoroscopic image, perform correction to move the reference image by an amount corresponding to an average value of the movement amounts from the feature point images of the reference image to the feature point images of the fluoroscopic image, and acquire the movement information of the pixels based on the corrected reference image and the fluoroscopic image.

12. The X-ray imaging apparatus according to claim 1, wherein

the movement information acquiring unit is configured to: acquire, based on the reference image and the fluoroscopic image, a movement map indicating a movement direction and a movement amount of at least some of the pixels belonging to the reference image, and acquire, as the movement information of the pixels, a smoothed movement map in which high-frequency components in a spatial direction of the movement map are suppressed.

13. An X-ray image processing method comprising:

acquiring a fluoroscopic image which is an X-ray image obtained by fluoroscopy of a subject and which includes a feature point image, and acquiring a reference image which is an X-ray image taken before a time point at which the fluoroscopic image is taken and which includes the feature point image,

extracting the feature point image from each of the reference image and the fluoroscopic image,

acquiring movement information of a feature point based on the extracted feature point image, and acquiring movement information of at least a part of pixels among pixels belonging to the reference image based on the reference image and the fluoroscopic image,

correcting a synthesis target image, which is an X-ray image of the subject, or the fluoroscopic image, based on the movement information of the feature points and the movement information of the pixels,

and synthesizing the corrected synthesis target image with the fluoroscopic image or synthesizing the synthesis target image with the corrected fluoroscopic image to generate a synthesized image.

Technical Field

The present invention relates to an X-ray imaging apparatus and an X-ray image processing method.

Background

Conventionally, an X-ray imaging apparatus and an X-ray image processing method are known which synthesize a synthesis target image with a fluoroscopic image to generate a synthesized image. Such an X-ray imaging apparatus is disclosed in, for example, Japanese Patent No. 5366618.

Japanese Patent No. 5366618 discloses an X-ray image diagnostic apparatus that displays road map data (a synthesis target image) and fluoroscopic image data (a fluoroscopic image) in a superimposed manner. In this X-ray image diagnostic apparatus, road map data captured at the imaging-system position closest to the position at which the fluoroscopic image data is captured is extracted from a plurality of reference image data, so as to suppress the positional deviation between the position of the imaging system capturing the fluoroscopic image data and the position of the imaging system capturing the road map data displayed so as to overlap the fluoroscopic image data. Specifically, the X-ray image diagnostic apparatus is provided with a position detection unit, a movement determination unit, a road map data search unit, and a display unit. The position detection unit is configured to detect position information of the imaging system. The movement determination unit is configured to determine, when the fluoroscopic image data is collected, a movement stop state indicating that the imaging system has stopped moving, based on the position information detected by the position detection unit. The road map data search unit is configured to extract, from the plurality of reference image data, the reference image data having the position information closest to the position information of the imaging system at the time point determined as the movement stop state, as the road map data. The display unit is configured to display the road map data and the fluoroscopic image data in a superimposed manner (to display a composite image).

However, in the X-ray image diagnostic apparatus described in Japanese Patent No. 5366618, the subject may move (body movement) relative to the imaging system while the plurality of X-ray images are being captured. As a result, the position of the subject relative to the imaging system at the time point when the movement stop state is determined (when the fluoroscopic image data is captured) may differ from the position of the subject relative to the imaging system at the time when the road map data was captured (a positional displacement between the subject and the imaging system occurs). In this case, even when the road map data captured at the imaging-system position closest to that of the fluoroscopic image data is extracted as the image constituting the composite image, X-ray images in which the subject is displaced from one image to the other due to the subject's movement are displayed superimposed on each other. The road map data (X-ray image) then has to be re-captured, which increases the amount of X-rays irradiated to the subject and, when a contrast agent is used, also increases the amount of contrast agent used. Therefore, in the conventional X-ray image diagnostic apparatus described in Japanese Patent No. 5366618, when a composite image is generated by synthesizing a synthesis target image and a fluoroscopic image captured at different time points, there is a problem in that the composite image cannot be generated appropriately and the amount of X-rays irradiated to the subject increases.

Disclosure of Invention

Problems to be solved by the invention

The present invention has been made to solve the above-described problems, and an object of the present invention is to provide an X-ray imaging apparatus and an X-ray image processing method capable of suppressing an increase in the amount of X-rays irradiated to the subject by appropriately generating the synthesized image even when the subject moves after the synthesis target image is captured, in the case where a synthesis target image and a fluoroscopic image captured at different time points are synthesized to generate the synthesized image.

In order to achieve the above object, an X-ray imaging apparatus according to a first aspect of the present invention includes: an imaging unit including an X-ray generating unit that irradiates a subject with X-rays and an X-ray detecting unit that detects the X-rays irradiated from the X-ray generating unit and transmitted through the subject, the imaging unit capturing an X-ray image of the subject; a fluoroscopic image acquisition unit that acquires a fluoroscopic image which is an X-ray image obtained by fluoroscopy of the subject captured by the imaging unit and which includes a feature point image; an image synthesizing unit that synthesizes the fluoroscopic image and a synthesis target image, which is an X-ray image captured before the time point at which the fluoroscopic image is captured and is to be synthesized with the fluoroscopic image, to generate a synthesized image; a reference image acquisition unit that acquires, as a reference image, an X-ray image including the feature point image captured before the time point at which the fluoroscopic image is captured; and a movement information acquiring unit that extracts the feature point image from each of the reference image and the fluoroscopic image, acquires movement information of a feature point based on the extracted feature point images, and acquires movement information of at least some of the pixels belonging to the reference image based on the reference image and the fluoroscopic image, wherein the image synthesizing unit is configured to: correct the synthesis target image or the fluoroscopic image based on the movement information of the feature point and the movement information of the pixels, and synthesize the corrected synthesis target image with the fluoroscopic image or synthesize the synthesis target image with the corrected fluoroscopic image to generate the synthesized image.

In the X-ray imaging apparatus according to the first aspect of the present invention, as described above, the movement information acquiring unit is configured to acquire the movement information of the feature point based on the feature point image of the reference image, which is an X-ray image captured before the time point at which the fluoroscopic image is captured, and the feature point image of the fluoroscopic image, and to acquire the movement information of the pixels. The image synthesizing unit is configured to correct the synthesis target image or the fluoroscopic image based on the movement information of the feature point and the movement information of the pixels, and to synthesize the corrected synthesis target image with the fluoroscopic image, or the synthesis target image with the corrected fluoroscopic image, to generate the synthesized image. Thus, even when the subject moves with respect to the imaging unit after the synthesis target image and the reference image are captured, the synthesis target image or the fluoroscopic image can be corrected so as to correspond to the movement of the subject. Therefore, it is possible to prevent X-ray images (the synthesis target image and the fluoroscopic image) in which the positions of the feature points of the subject are shifted from each other from being synthesized with each other. As a result, even when the subject moves after the synthesis target image is captured, the synthesized image can be generated appropriately (with the positional deviation between the images suppressed) when the synthesis target image and the fluoroscopic image (X-ray images) captured at different time points are synthesized. This reduces the number of times the synthesis target image has to be re-captured, and thus suppresses an increase in the amount of X-rays irradiated to the subject.

Here, correction based only on a large movement of the subject (movement of the feature point itself) cannot follow fine movements of the subject, while correction based only on small movements of the subject (pixel-level movements) cannot handle a large movement of the feature point itself. In contrast, in the present invention, as described above, the movement information acquiring unit is configured to acquire both the movement information of the feature point and the movement information of the pixels as information for correcting the synthesis target image. Since the movement information of the feature point is acquired as large-scale (macroscopic) movement information in the reference image and the fluoroscopic image, a large movement of the subject can be corrected. Since the movement information of the pixels is acquired as small-scale (microscopic) movement information in the reference image and the fluoroscopic image, a smaller movement of the subject can also be corrected. As a result, the synthesis target image can be corrected more appropriately by performing both the correction for a large movement of the subject and the correction for a relatively small movement of the subject, which complement each other's strengths and weaknesses. Therefore, even when a synthesized image is generated by synthesizing an X-ray image and a synthesis target image captured at different time points, the synthesized image can be generated more appropriately (with the positional displacement between the images further suppressed).

In the X-ray imaging apparatus according to the first aspect, the fluoroscopic image acquiring unit is preferably configured to acquire live images sequentially generated in real time as the fluoroscopic images, and the image synthesizing unit is preferably configured to correct the synthesis target image or the live image based on the movement information of the feature points and the movement information of the pixels, and to synthesize the corrected synthesis target image with the live image or the synthesis target image with the corrected live image to generate the synthesized image. With such a configuration, since the synthesis target image or the live image can be corrected in accordance with changes in the live image, the synthesized image can be generated appropriately even when the live image, which is displayed in real time and changes successively, is synthesized with the synthesis target image. In the present specification, "real time" refers not only to the exact same instant but also to the period during which an operator is using the X-ray imaging apparatus (a state in which the captured X-ray image can be visually confirmed during operation). "Live images sequentially generated" means, for example, that the displayed X-ray image is updated for each frame of a plurality of X-ray images (for example, a moving image) that are continuously acquired.

In this case, the image synthesizing unit is preferably configured to: each time a live image is acquired by the fluoroscopic image acquisition unit, correct the synthesis target image or the live image based on the movement information of the feature points and the movement information of the pixels, and synthesize the corrected synthesis target image with the live image or the synthesis target image with the corrected live image to generate the synthesized image. With such a configuration, the correction can be performed so that the synthesis target image or the live image is updated frame by frame in accordance with the updated live image. Therefore, even when a live image that changes successively (for example, a moving image) is synthesized with the synthesis target image, the synthesized image can be generated more appropriately (with the positional deviation between the images suppressed). As a result, even when the operator performs treatment on the subject while visually checking the synthesized image displayed as a moving image, a synthesized image in which the positional deviation is more effectively suppressed can be generated.

In the X-ray imaging apparatus that generates the synthesized image by synthesizing the synthesis target image with the live image, the image synthesizing unit is preferably configured to: acquire, as the synthesis target image, a difference image between a contrast image, which is an X-ray image in a state in which a contrast medium is administered to a blood vessel of a lower limb of the subject, and a non-contrast image, which is an X-ray image in a state in which a contrast medium is not administered to a blood vessel of the subject; correct the difference image or the live image based on the movement information of the feature points and the movement information of the pixels; and synthesize the corrected difference image with the live image or the difference image with the corrected live image to generate the synthesized image. Here, in such an apparatus, the operator can insert a catheter into a blood vessel of the lower limb of the subject and perform various kinds of treatment while visually checking the synthesized image. In this case, a difference image (a difference image between the contrast image and the non-contrast image) consisting essentially of only the images representing the blood vessels is used as the synthesis target image. In view of this, in the present invention, the image synthesizing unit is configured to correct the difference image or the live image, and to synthesize the corrected difference image with the live image or the difference image with the corrected live image to generate the synthesized image. This makes it possible to provide an X-ray imaging apparatus that can generate a synthesized image in which the positional deviation between the images is effectively suppressed when the operator inserts a catheter into a blood vessel of the lower limb of the subject to perform various kinds of treatment.

In this case, the image synthesizing unit is preferably configured to: synthesize an inverted image, obtained by performing black-and-white inversion processing on at least a part of the corrected difference image, with the live image, or synthesize an inverted image, obtained by performing black-and-white inversion processing on at least a part of the difference image, with the corrected live image, to generate the synthesized image. Here, when the image of the contrasted blood vessel appears black in the synthesis target image and the image of the treatment instrument in the live image appears in the same black as the blood vessel image, it is difficult to distinguish the contrasted blood vessel image from the treatment instrument image in the synthesized image. In contrast, in the present invention, with the above configuration, the substantially black image in the difference image (for example, the image of the contrasted blood vessel) is converted into a substantially white image (for example, an image close to the background color) before being synthesized with the live image, so that the treatment instrument (for example, a catheter, a stent, or a guide wire) at the portion corresponding to the blood vessel in the live image can be displayed in black (a color different from that of the blood vessel image). As a result, the visibility of the portion corresponding to the blood vessel in the live image (the treatment instrument) can be improved, and the operator can appropriately visually confirm the image of the contrasted blood vessel. In the present specification, "black" means, for example, a relatively low luminance value in an image (pixel), and "white" means, for example, a relatively high luminance value in an image (pixel).
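As a minimal sketch of this inversion-and-overlay step (Python with NumPy; the weighted blend, the function name, and the normalization around the image maximum are assumptions, since the patent does not fix the compositing operator here):

```python
import numpy as np

def invert_and_composite(difference_image: np.ndarray, live_image: np.ndarray,
                         weight: float = 0.5) -> np.ndarray:
    """Black-and-white invert the (corrected) difference image and blend it with the live image."""
    # Invert around the maximum value so dark (vessel) pixels become near-white.
    max_value = float(difference_image.max()) if difference_image.size else 1.0
    inverted = max_value - difference_image.astype(np.float32)
    # Weighted blend; one plausible compositing choice, not prescribed by the patent.
    return weight * inverted + (1.0 - weight) * live_image.astype(np.float32)
```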

In the X-ray imaging apparatus that generates the synthesized image by synthesizing the inverted image with the live image, the image synthesizing unit is preferably configured to synthesize the inverted image with a live image including an image of at least one of a catheter, a stent, and a guide wire inserted into the subject to generate the synthesized image. With such a configuration, the inversion makes the contrasted blood vessel appear in a color close to the background, so that the operator can more easily distinguish the blood vessel image from the image of the catheter, stent, or guide wire inserted into the subject.

In the X-ray imaging apparatus that generates the synthesized image by synthesizing the difference image with the live image, the reference image acquiring unit is preferably configured to acquire, as the reference image, a live image captured before the fluoroscopic image at the same imaging position as that of the synthesis target image. Here, the X-ray dose irradiated during imaging may differ between the synthesis target image (the contrast image and the non-contrast image) and the live image (for example, the fluoroscopic image). In that case, if the synthesis target image (difference image) were used as the reference image and the live image as the fluoroscopic image, the same feature point image would have different brightness because of the difference in X-ray dose, and the brightness would have to be corrected when associating (matching) the feature point image of the reference image with that of the fluoroscopic image. In contrast, in the present invention, by configuring the reference image acquiring unit to acquire a live image captured before the fluoroscopic image as the reference image, the feature point image of the reference image (live image) can be associated with the feature point image of the fluoroscopic image (live image) between live images having substantially the same X-ray dose. This makes it possible to correct the synthesis target image while keeping the matching process simple, since the control processing for brightness correction can be omitted.

In the X-ray imaging apparatus that generates the synthesized image by synthesizing the difference image with the live image, the reference image acquiring unit is preferably configured to acquire the contrast image as the reference image. With this configuration, since the contrast image contains the image of the contrasted blood vessel that remains in the difference image (the synthesis target image), and both were captured at the same time point, the synthesis target image can be corrected more accurately than when a live image captured at a time point after the contrast image is acquired as the reference image.

In the X-ray imaging apparatus that generates the synthesized image by synthesizing the difference image with the live image, the reference image acquiring unit is preferably configured to acquire the non-contrast image as the reference image. Here, since the image of the contrasted blood vessel is relatively distinctive, when the contrast image is used as the reference image, the blood vessel image may be extracted as a feature point. In the live image (fluoroscopic image), on the other hand, the blood vessel is not contrasted, and the possibility that it is extracted as a feature point is low, so different feature points might be extracted from the reference image and the fluoroscopic image. In contrast, in the present invention, the reference image acquiring unit is configured to acquire the non-contrast image, which does not contain the image of the contrasted blood vessel, as the reference image, so that the extraction of mutually different feature points from the reference image and the fluoroscopic image can be suppressed. As a result, the feature points can be associated with each other easily, and therefore the movement information of the feature points can be acquired easily. In addition, since the non-contrast image can be captured with a larger X-ray dose than when a live image is acquired as the reference image, the reference image can be a relatively clear image.

In the X-ray imaging apparatus according to the first aspect, the movement information acquiring unit is preferably configured to: when the amount of movement from the feature point image of the reference image to the feature point image of the fluoroscopic image exceeds the movement amount threshold, correct the reference image based on the movement information of the feature point, and acquire the movement information of the pixels based on the corrected reference image and the fluoroscopic image. With this configuration, when the subject moves by a large amount and the position of the feature point moves by a large amount with respect to the imaging unit (when the movement amount exceeds the movement amount threshold), the reference image can be corrected based on the movement information of the feature point. Conversely, when the movement of the subject is small (when the movement amount does not exceed the movement amount threshold), the control processing for correcting the reference image is not performed and the synthesis target image is corrected based only on the movement information of the pixels (correction only for small movements of the subject), so that the control load of the image synthesizing unit can be reduced while the synthesized image is still generated appropriately.

In the X-ray imaging apparatus according to the first aspect, the movement information acquiring unit is preferably configured to: a plurality of feature point images are extracted from a reference image and a plurality of feature point images are extracted from a fluoroscopic image, correction is performed so that the reference image is moved by an amount corresponding to an average value of the amounts of movement from the feature point images of the reference image to the feature point images of the fluoroscopic image, and movement information of pixels is acquired based on the corrected reference image and the fluoroscopic image. With this configuration, since the reference image is corrected based on the average value of the plurality of movement amounts, it is possible to more accurately acquire the movement information of the entire subject captured in the reference image and the fluoroscopic image than in the case of using only the movement amount of one feature point.

In the X-ray imaging apparatus according to the first aspect, the movement information acquiring unit is preferably configured to: acquire, based on the reference image and the fluoroscopic image, a movement map indicating the movement direction and the movement amount of at least some of the pixels belonging to the reference image, and acquire, as the movement information of the pixels, a smoothed movement map in which high-frequency components in the spatial direction of the movement map are suppressed. With this configuration, even if errors occur in the movement map because the movement map is generated for each pixel, the influence of the errors can be reduced by suppressing the high-frequency components in the spatial direction. As a result, the synthesis target image and the fluoroscopic image can be synthesized appropriately in consideration of not only a linear motion of the subject but also a nonlinear (relatively complicated) motion between the two X-ray images captured at different times.
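The patent does not name a specific smoothing filter; as one plausible sketch (Python with NumPy/SciPy, assuming a movement map stored as an (H, W, 2) array of per-pixel (dy, dx) values and an arbitrary Gaussian kernel width), the high-frequency suppression could look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_movement_map(m1: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Suppress high spatial-frequency components of a movement map M1 of shape (H, W, 2)."""
    m2 = np.empty(m1.shape, dtype=np.float32)
    for c in range(m1.shape[-1]):
        # Smooth the dy and dx components separately with a Gaussian low-pass filter.
        m2[..., c] = gaussian_filter(m1[..., c].astype(np.float32), sigma=sigma)
    return m2
```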

In an X-ray image processing method according to a second aspect of the present invention, a fluoroscopic image, which is an X-ray image obtained by fluoroscopy of a subject and which includes a feature point image, is acquired; a reference image, which is an X-ray image captured before the time point at which the fluoroscopic image is captured and which includes the feature point image, is acquired; the feature point image is extracted from each of the reference image and the fluoroscopic image; movement information of a feature point is acquired based on the extracted feature point images, and movement information of at least some of the pixels belonging to the reference image is acquired based on the reference image and the fluoroscopic image; a synthesis target image, which is an X-ray image of the subject, or the fluoroscopic image is corrected based on the movement information of the feature point and the movement information of the pixels; and the corrected synthesis target image is synthesized with the fluoroscopic image, or the synthesis target image is synthesized with the corrected fluoroscopic image, to generate a synthesized image.

The X-ray image processing method according to the second aspect of the present invention, with the above configuration, makes it possible to suppress an increase in the amount of X-rays irradiated to the subject by appropriately generating the synthesized image even when the subject moves after the synthesis target image is captured, in the case where the synthesis target image and a fluoroscopic image captured at different time points are synthesized to generate the synthesized image.

Drawings

Fig. 1 is a diagram illustrating an overall configuration of an X-ray imaging apparatus according to the first to third embodiments.

Fig. 2 is a diagram for explaining acquisition of a displacement amount, acquisition of a movement amount, and generation of a road map fluoroscopic image by the X-ray imaging apparatus according to the first embodiment.

Fig. 3 is a block diagram showing the configuration of an image processing unit according to the first to third embodiments.

Fig. 4 is a diagram for explaining the extraction and correspondence of feature point images according to the first embodiment.

Fig. 5 is a diagram for explaining comparison of pixel values and determination of a pixel value difference minimum pixel according to the first embodiment.

Fig. 6 is a diagram for explaining generation of a movement map according to the first embodiment.

Fig. 7 is a diagram for explaining generation of a smoothing movement map according to the first embodiment.

Fig. 8 is a diagram for explaining a movement map expressed in one dimension according to the first embodiment.

Fig. 9 is a diagram (flowchart) showing a flow of control processing performed by the X-ray imaging apparatus according to the first embodiment.

Fig. 10 is a diagram for explaining acquisition of a displacement amount, acquisition of a movement amount, and generation of a road map fluoroscopic image by the X-ray imaging apparatus according to the second embodiment.

Fig. 11 is a diagram for explaining acquisition of a displacement amount, acquisition of a movement amount, and generation of a road map fluoroscopic image by the X-ray imaging apparatus according to the third embodiment.

Fig. 12 is a diagram for explaining generation of a road map fluoroscopic image of an X-ray imaging apparatus according to a modification of the first to third embodiments.

Detailed Description

Hereinafter, embodiments embodying the present invention will be described based on the drawings.

[ first embodiment ]

The configuration of an X-ray imaging apparatus 100 according to a first embodiment of the present invention will be described with reference to fig. 1 to 8.

(Structure of X-ray photographing apparatus)

As shown in fig. 1, an X-ray imaging apparatus 100 according to the first embodiment includes a top plate 1 on which a subject P is placed, an imaging unit 2 that captures an X-ray image R of the subject P, a control unit 3 that controls the various components of the X-ray imaging apparatus 100, a storage unit 4 that stores the captured X-ray images R and the like, a display unit 5 that displays the X-ray images R and the like, and an operation unit 6 that receives input operations from an operator. In the following description, the "operator" is not limited to the person who performs the treatment of the subject P, and also includes a person who simply operates the X-ray imaging apparatus 100 without directly participating in the treatment of the subject P.

As shown in fig. 2, the X-ray imaging apparatus 100 is configured to continuously capture X-ray images R of the subject P and to acquire, in real time, a live image Rr (moving image) that is generated successively. The X-ray imaging apparatus 100 is configured to capture the live image Rr as a fluoroscopic image obtained by fluoroscopy of the subject P, with the X-ray dose reduced compared with that used for the contrast image Rc and the mask image Rm. Thus, the operator using the X-ray imaging apparatus 100 can perform various kinds of treatment by inserting a treatment instrument such as a catheter into a blood vessel of the subject P (for example, a blood vessel of a lower limb of the subject P) while visually checking the live image Rr (fluoroscopic image) of the subject P.

The X-ray imaging apparatus 100 is configured to display, on the display unit 5, a road map fluoroscopic image Rs obtained by synthesizing a DSA image Rd, which is a difference image between the contrast image Rc and the mask image Rm, with the live image Rr. Here, the X-ray imaging apparatus 100 according to the first embodiment is configured to perform the following two processes when generating the road map fluoroscopic image Rs: correction (macroscopic registration correction) processing in which a feature point image F (see fig. 4) is extracted and correction is performed based on movement information of the feature point (a displacement amount d1 described later) obtained from the extracted feature point image F; and correction (microscopic registration correction) processing performed based on movement information of the pixels (a movement amount d2 described later), which uses the Flex-APS (flexible active pixel shift) technique.

(construction of Each part of X-ray imaging apparatus)

As shown in fig. 1, the top plate 1 is configured as a table on which a subject P is placed. The top board 1 is provided with a driving unit, and the top board 1 is configured to be movable in accordance with an instruction from the control unit 3 based on an input operation of the operation unit 6.

The imaging unit 2 includes an X-ray generation unit 2a that irradiates the subject P with X-rays, and an X-ray detection unit 2b that detects the X-rays irradiated from the X-ray generation unit 2a and transmitted through the subject P. The X-ray generation unit 2a is configured as an X-ray tube device disposed on one side of the top plate 1. The X-ray generation unit 2a includes an X-ray source and is configured to be capable of emitting X-rays when a voltage is applied by an X-ray tube driving device, not shown. The X-ray detection unit 2b is configured as an FPD (flat panel detector) disposed on the other side of the top plate 1 and is configured to be able to detect X-rays. In addition, a collimator 2c for adjusting the irradiation region of the X-rays emitted from the X-ray generation unit 2a is provided in the vicinity of the X-ray generation unit 2a.

The control unit 3 is a computer including a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like. The control unit 3 includes an image processing unit 10, which synthesizes X-ray images R obtained by X-ray imaging of the internal structure of the subject P, based on the detection signals transmitted from the X-ray detection unit 2b, to generate a road map fluoroscopic image Rs.

Further, the control unit 3 is configured to: switching between the roadmap fluoroscopic image photographing mode and the DSA image photographing mode is controlled based on an input operation performed by the operator on the operation unit 6. In the road map fluoroscopic image photographing mode, the control unit 3 controls to photograph the live image Rr, and in the DSA image photographing mode, the control unit 3 controls to photograph the contrast image Rc and the mask image Rm.

The storage unit 4 includes, for example, a nonvolatile memory. The storage unit 4 is configured to store a program used for the processing of the control unit 3, and to store the X-ray images R (the mask image Rm, the contrast image Rc, the DSA image Rd, the live image Rr, and the road map fluoroscopic image Rs) generated by the image processing unit 10, and the like.

The display unit 5 is configured as a liquid crystal display, for example. The display unit 5 is configured to be able to display a later-described road map fluoroscopic image Rs (see fig. 2) generated by the image processing unit 10. For example, the display unit 5 is configured to display the road map see-through image Rs as a moving image.

The operation unit 6 is constituted by, for example, an input button switch, a keyboard, a touch panel, a mouse, or the like.

(configuration of image processing section)

The image processing unit 10 is a computer including a processor used for image processing, such as a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array). The image processing unit 10 functions as an image processing device by executing the image processing program stored in the storage unit 4. The X-ray image processing method described later is executed under the control of the image processing unit 10.

As shown in fig. 3, the image processing unit 10 includes an image acquiring unit 20, a movement information acquiring unit 30, and an image synthesizing unit 40. In fig. 3, the image processing unit 10 is illustrated as a functional block, but is not limited to this example. That is, each part of the image processing unit 10 may be configured as independent hardware (software) or may be configured as one piece of hardware (software).

Structure of image acquisition section

As shown in fig. 2, the image acquisition unit 20 is configured to acquire the X-ray image R captured by the imaging unit 2. The image acquisition unit 20 is configured to acquire the X-ray image R from the imaging unit 2 as a mask image Rm, a contrast image Rc, or a live image Rr. The mask image Rm is an example of the "non-contrast image" in the claims. The live image Rr is an example of the "reference image" and the "fluoroscopic image" in the claims. The image acquiring unit 20 is an example of the "fluoroscopic image acquiring unit" and the "reference image acquiring unit" in the claims.

Here, the X-ray imaging apparatus 100 performs imaging in the order of the mask image Rm, the contrast image Rc, and the live image Rr, for example. That is, in the X-ray imaging apparatus 100, when the operator operates the operation unit 6 (selects the DSA image imaging mode), the lower limb of the subject P is first imaged in a state where the contrast medium is not administered to the blood vessel of the lower limb of the subject P (in a state where the blood vessel is free from the contrast medium), and thereby a mask image Rm which becomes a mask of a DSA image Rd to be described later is imaged. Then, the X-ray imaging apparatus 100 captures a live image Rr.

Specifically, the background structure Ab of the subject P is captured in the mask image Rm, while the blood vessels of the subject P are not clearly captured. Next, in the X-ray imaging apparatus 100, the operator operates the operation unit 6 to image the lower limb of the subject P in a state where a contrast medium has been administered to a blood vessel of the lower limb of the subject P (a state where the contrast medium remains in the blood vessel), and a contrast image Rc including an image Av of the contrasted blood vessel is captured. The background structure Ab includes, for example, the bones and muscles of the subject P. The contrast image Rc may be, for example, an image obtained by applying peak hold to a plurality of continuously captured X-ray images R.

The image acquiring unit 20 is configured to generate a DSA (Digital Subtraction Angiography) image Rd by digital subtraction of the mask image Rm from the contrast image Rc. In the DSA image Rd, the background structure Ab is substantially removed, and the image Av of the contrasted blood vessel remains. In the following description, the DSA image Rd in which the image Av remains is referred to as the blood vessel image Rdb. The DSA image Rd and the blood vessel image Rdb are examples of the "synthesis target image" and the "difference image" in the claims.
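As a minimal sketch of this digital subtraction step (Python with NumPy; the array names and floating-point handling are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def dsa_subtraction(contrast_image: np.ndarray, mask_image: np.ndarray) -> np.ndarray:
    """Subtract the non-contrast (mask) image Rm from the contrast image Rc.

    The static background structure Ab largely cancels out, leaving mainly the
    contrasted vessel image Av (the DSA image Rd / blood vessel image Rdb).
    """
    # Work in float to avoid wrap-around of unsigned detector pixel values.
    return contrast_image.astype(np.float32) - mask_image.astype(np.float32)
```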

After the contrast image Rc and the mask image Rm are captured, the operator operates the operation unit 6 of the X-ray imaging apparatus 100 (selects the fluoroscopic image imaging mode) to start capturing the live image Rr (X-ray image R) in a state where the X-ray dose from the imaging unit 2 is reduced compared with the dose used when the mask image Rm and the contrast image Rc were captured. The X-ray imaging apparatus 100 is configured to capture the live image Rr sequentially (as a moving image) in real time. Specifically, the image acquisition unit 20 generates the live image Rr at a predetermined frame rate by converting the X-ray detection signals sequentially output from the X-ray detection unit 2b into images. The frame rate is, for example, about 15 FPS to 30 FPS. The live image Rr (X-ray image R) is, for example, an image whose pixel values have a predetermined number of gradations (for example, 10 to 12 bits).

In the first embodiment, the image acquisition unit 20 acquires the first live image Rr1 as the reference image, and the first live image Rr1 includes the feature point image F1 (see fig. 4) and is the X-ray image R captured after the time point at which the mask image Rm and the contrast image Rc are captured and before the time point at which the second live image Rr2 is captured. The image acquisition unit 20 is configured to acquire a second live-action image Rr2, which is an X-ray image R obtained by fluoroscopy of the subject P captured by the imaging unit 2 and includes the feature point image F2 (see fig. 4). The first live image Rr1 is an example of the "reference image" in the claims. The second live image Rr2 is an example of "perspective image" in the claims.

The mask image Rm, the contrast image Rc, and the first live image Rr1 are images captured at substantially the same imaging position (preferably, the same imaging position). Specifically, "X-ray images R captured at the same imaging position" are X-ray images R in which the feature point images F described later are located at the same positions (the displacement amount d1 is substantially 0).

Specifically, when the imaging position of the first live image Rr1 is changed by the operator, the DSA image Rd (mask image Rm and contrast image Rc) obtained at the same imaging position is read from the storage unit 4 based on the changed imaging position of the first live image Rr1. Further, the control unit 3 is configured so that the image acquiring unit 20 acquires position information from the imaging unit 2 and from a driving unit (such as an encoder) of the top plate 1, and acquires the imaging position by associating the position information with each X-ray image R.

The first live image Rr1 is a live image Rr at the time point when the road map fluoroscopic image photographing mode is started (immediately after the time point when the contrast image Rc is captured). For example, the first live image Rr1 is the first live image Rr after the start of the road map fluoroscopic image photographing mode. That is, the first live image Rr1 is a fluoroscopic image at a time point when the subject P has not substantially moved (no body movement) relative to the contrast image Rc. The second live image Rr2 is, for example, the latest (current) live image Rr. That is, the second live image Rr2 is an image captured at a time point after the contrast image Rc, the mask image Rm, and the first live image Rr1 are captured. In addition, the second live image Rr2 is an X-ray image R constituting the road map fluoroscopic image Rs.

Structure of mobile information acquisition unit

As shown in fig. 3, in the first embodiment, the movement information acquiring unit 30 includes: a feature point extraction unit 31 that extracts a feature point image F from each of the first live-action image Rr1 and the second live-action image Rr 2; a feature point movement information acquisition unit 32 that acquires movement information E1 based on the feature points of the feature point image F extracted by the feature point extraction unit 31; and a pixel movement information acquisition unit 33 that acquires movement information E2 of at least some of the pixels belonging to the first live image Rr1 based on the first live image Rr1 and the second live image Rr 2.

In the first embodiment, as shown in fig. 4, the feature point extracting unit 31 is configured to extract a plurality of (four in fig. 4) feature point images F1 (F1a, F1b, F1c, and F1d) from the first live image Rr1, and a plurality of (four in fig. 4) feature point images F2 (F2a, F2b, F2c, and F2d) from the second live image Rr2. For example, the feature point extraction unit 31 is configured to extract, from the first live image Rr1 and the second live image Rr2, image regions that have a high similarity to a pattern image stored in the storage unit 4 and a luminance change larger than a predetermined amount, as the feature point images F1 and F2. The extraction method by which the feature point extraction unit 31 extracts the feature point images F1 and F2 is not limited to this example, and a feature point extraction method using a known image processing technique may be used.
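As a rough sketch of one possible luminance-change criterion (Python with NumPy; the pattern-image similarity check is omitted, and the gradient threshold and point count are assumed values, since the patent leaves the exact extraction method open):

```python
import numpy as np

def extract_feature_points(image: np.ndarray, num_points: int = 4,
                           min_gradient: float = 50.0) -> list[tuple[int, int]]:
    """Pick candidate feature-point centers where the local luminance change is large."""
    gy, gx = np.gradient(image.astype(np.float32))
    magnitude = np.hypot(gx, gy)                       # local luminance change
    magnitude[magnitude < min_gradient] = 0.0          # enforce the minimum-change threshold
    # Take the strongest responses as candidate feature-point centers.
    flat_indices = np.argsort(magnitude, axis=None)[::-1][:num_points]
    return [tuple(np.unravel_index(i, image.shape)) for i in flat_indices]
```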

The feature point images F1 and F2 are regions (portions surrounded by circles in fig. 4) constituted by a plurality of pixels. Note that, in fig. 4, the feature point images F1 and F2 are illustrated as circles, but are not limited to circles, and may be other shapes (e.g., rectangular shapes) than circles. For example, the feature point image F1a and the feature point image F2a are images showing the same part in the subject P. Similarly, the feature point images F1b, F1c, and F1d are images showing portions corresponding to the feature point images F2b, F2c, and F2d, respectively.

The feature point movement information acquiring unit 32 is configured to perform a process (matching process) of associating the feature point images F1 with the feature point images F2. For example, by performing pattern matching between the feature point images F1a to F1d and the feature point images F2a to F2d, the feature point movement information acquiring unit 32 associates the feature point image F1a with F2a, F1b with F2b, F1c with F2c, and F1d with F2d. For example, the feature point movement information acquiring unit 32 associates the feature point images having high similarity among the feature point images F1a to F1d and the feature point images F2a to F2d.

The feature point movement information acquiring unit 32 is configured to acquire the amount of movement of the position of the subject P in the second live image Rr2 relative to the position of the subject P in the first live image Rr1. Specifically, the feature point movement information acquiring unit 32 is configured to acquire (calculate) an average value d1 (hereinafter referred to as the "displacement amount d1") of the movement amounts from the feature point images F1a to F1d to the feature point images F2a to F2d. That is, the feature point movement information acquiring unit 32 is configured to calculate the displacement amount d1 of the feature points caused by movement of the subject P and by changes in the position of the imaging unit 2.

Specifically, the feature point movement information acquiring unit 32 acquires the displacement da (dxa, dya) between the center coordinates (x1a, y1a) of the feature point image F1a and the center coordinates (x2a, y2a) of the corresponding feature point image F2a. The feature point movement information acquiring unit 32 acquires the displacements db, dc, and dd from the feature point images F1b to F1d to the feature point images F2b to F2d in the same manner as the displacement da. Then, the feature point movement information acquiring unit 32 acquires the average value of the displacements da to dd as the displacement amount d1, which is included in the movement information E1 of the feature points. Here, "acquiring the average value" means not only calculating the arithmetic mean but also calculating some other average such as a weighted average.
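Assuming the feature points have already been paired as described above, a minimal sketch of the displacement averaging (Python with NumPy; the (x, y) coordinate layout and array shapes are assumptions) might be:

```python
import numpy as np

def average_displacement(centers_ref: np.ndarray, centers_flu: np.ndarray) -> np.ndarray:
    """Compute the macroscopic displacement amount d1.

    centers_ref: (N, 2) center coordinates (x, y) of F1a..F1d in the reference image
    centers_flu: (N, 2) center coordinates of the matched F2a..F2d
    Returns the mean displacement (dx, dy), i.e. the average of da..dd.
    """
    displacements = centers_flu - centers_ref      # da, db, dc, dd
    return displacements.mean(axis=0)              # simple (unweighted) arithmetic mean
```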

Further, the feature point movement information acquiring unit 32 is configured to acquire the displacement amount d1 each time a new second live image Rr2 is acquired. In the first embodiment, the feature point movement information acquiring unit 32 is configured to correct the first live image Rr1 so as to displace it by an amount corresponding to the displacement amount d1 when the displacement amount d1 exceeds the threshold value d1t. That is, when the displacement amount d1 is smaller than the threshold value d1t, the feature point movement information acquiring unit 32 does not correct the first live image Rr1. For example, the feature point movement information acquiring unit 32 is configured to perform, when the displacement amount d1 exceeds the threshold value d1t, image processing that generates a first corrected live image Rr1c by at least one of translating the first live image Rr1 by an amount corresponding to the displacement amount d1 and rotating the first live image Rr1 by an amount corresponding to the displacement amount d1. The threshold value d1t is an example of the "movement amount threshold" in the claims.
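A sketch of this threshold-gated macroscopic correction (Python with SciPy; only the translation case is shown, rotation is omitted, and the (dx, dy) ordering and interpolation settings are assumptions):

```python
import numpy as np
from scipy.ndimage import shift

def macro_correct(reference: np.ndarray, d1: np.ndarray, d1t: float) -> np.ndarray:
    """Translate the reference image Rr1 by d1 only when its magnitude exceeds d1t."""
    if np.hypot(*d1) <= d1t:
        return reference                           # small motion: no macroscopic correction
    # scipy.ndimage.shift expects offsets in (row, column) order, assumed here as (dy, dx).
    return shift(reference.astype(np.float32), (d1[1], d1[0]), order=1, mode="nearest")
```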

The pixel movement information acquiring unit 33 is configured to acquire the movement information E2 of the pixels based on the first corrected live image Rr1c and the second live image Rr2 when the first corrected live image Rr1c has been generated, and based on the first live image Rr1 and the second live image Rr2 when the first corrected live image Rr1c has not been generated. In the following, the case where the first corrected live image Rr1c is generated is described; when it is not generated, "first corrected live image Rr1c" in the following description should be read as "first live image Rr1".

Specifically, the pixel movement information acquiring unit 33 is configured to acquire the movement information E2 of the pixels used for the correction process based on the Flex-APS technique. In the first embodiment, the pixel movement information acquiring unit 33 includes: a movement map generator 33a that acquires a movement map M1 indicating the movement direction and the movement amount of at least some of the pixels belonging to the first corrected live image Rr1c, based on the first corrected live image Rr1c and the second live image Rr2; and a smoothed movement map generator 33b that acquires, as the movement information E2 of the pixels, a smoothed movement map M2 in which high-frequency components in the spatial direction of the movement map M1 are suppressed. The movement map M1 is a movement vector, and the smoothed movement map M2 is a smoothed movement vector.

As shown in fig. 5, the movement map generator 33a is configured to generate the movement map M1, indicating the movement direction and the movement amount of a pixel B1 of the first corrected live image Rr1c, based on the pixel value differences between the pixel value of a pixel B2 of the second live image Rr2 and the pixel values of the pixel B1 corresponding to the pixel B2 and of the pixels B1 belonging to a predetermined peripheral region in the first corrected live image Rr1c. More specifically, the movement map generator 33a is configured to generate the movement map M1 based on the pixel value of the pixel B2 of the second live image Rr2 and the pixel value of the pixel B1 of the first corrected live image Rr1c having the smallest pixel value difference with respect to the pixel B2, that is, the pixel-value-difference minimum pixel B1a.

Specifically, as shown in fig. 6, the movement map generator 33a compares a certain pixel B2 of the second live image Rr2 with nine pixels B1 in total of the first corrected live image Rr1c: the pixel B1 corresponding to the pixel B2 (having the same coordinates) and the eight pixels B1 of its predetermined peripheral region (upper, upper right, right, lower right, lower, lower left, left, and upper left of the corresponding pixel B1). The movement map generator 33a then specifies, among the nine pixels B1 of the first corrected live image Rr1c, the pixel whose pixel value difference from the certain pixel B2 of the second live image Rr2 is smallest (the pixel-value-difference minimum pixel B1a). Here, since the pixel value is a quantitative value that differs depending on the position of the subject P, the pixel value serves as an index of the position of the subject P in the second live image Rr2 and the first corrected live image Rr1c. Therefore, comparing a certain pixel B2 of the second live image Rr2 with the nine pixels B1, that is, the pixel B1 having the same coordinates as the pixel B2 and the pixels B1 around it in the first corrected live image Rr1c, corresponds to checking the positional deviation of the first corrected live image Rr1c with respect to the pixel B2 of the second live image Rr2. The pixel-value-difference minimum pixel B1a of the first corrected live image Rr1c is the pixel most likely to be the shifted counterpart of the pixel B2 of the second live image Rr2. As shown in fig. 6, the movement map generator 33a sets, as the movement map M1 corresponding to the pixel B1 of the first corrected live image Rr1c, the movement direction and the movement amount of the pixel-value-difference minimum pixel B1a when it is moved to the position of the pixel B1 (of the same coordinates) corresponding to the pixel B2 of the second live image Rr2.
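A brute-force sketch of this neighborhood search is shown below, assuming 2-D grayscale arrays; it only illustrates the idea of the movement map M1 and is not the disclosed Flex-APS implementation (border pixels are simply skipped, and the sign convention follows the description above).

```python
import numpy as np

def movement_map(rr1c, rr2):
    """Movement map M1: for each pixel B2 of Rr2, the (dy, dx) movement that brings the
    pixel-value-difference minimum pixel B1a of Rr1c (searched in the 3x3 neighborhood of
    the corresponding pixel B1) onto the position of that corresponding pixel B1."""
    h, w = rr2.shape
    m1 = np.zeros((h, w, 2), dtype=np.int8)  # (dy, dx) per pixel, values in {-1, 0, 1}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbourhood = rr1c[y - 1:y + 2, x - 1:x + 2]
            diff = np.abs(neighbourhood.astype(float) - float(rr2[y, x]))
            dy, dx = np.unravel_index(np.argmin(diff), diff.shape)
            # movement needed to bring B1a back to the position of the corresponding B1
            m1[y, x] = (1 - dy, 1 - dx)
    return m1
```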

As shown in fig. 7, the smoothed movement map generator 33b is configured to generate the smoothed movement map M2 by suppressing high-frequency components in the spatial direction of the movement map M1. Specifically, for each pixel B1 in the first corrected live image Rr1c, the smoothed movement map generator 33b smooths the movement map M1 using that pixel B1 and the pixels B1 around it in the first corrected live image Rr1c. That is, the smoothed movement map generator 33b calculates the smoothed movement map M2 by smoothing the movement map M1 corresponding to each pixel B1 in the first corrected live image Rr1c with the pixel B1 and the eight pixels B1 around it. For the smoothing, the movement map M1 is averaged over the nine pixels B1, for example.

As shown in fig. 8, by performing this smoothing, even when the nine pixels B1 include a movement map that deviates excessively from the others (movement map M1x), averaging the movement maps M1 reduces the influence of the excessively deviating movement map M1x. As a result, high-frequency components in the spatial direction of the movement map M1 are suppressed. In fig. 7, for convenience of illustration, the smoothed movement map M2 is depicted as a vector having the same direction and magnitude as the movement map M1 before smoothing. The smoothing is not limited to simply averaging the movement map M1 over the nine pixels B1. For example, the movement maps M1 of the nine pixels B1 may be plotted by movement amount in each movement direction and then Fourier-transformed to extract high-frequency components; the high-frequency components in the spatial direction of the movement map M1 can also be suppressed by removing the extracted high-frequency components.
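The averaging variant of the smoothing can be sketched as a 3×3 uniform filter applied to each component of M1, as below; the Fourier-based alternative mentioned above is not shown, and the use of SciPy is an assumption.

```python
import numpy as np
from scipy import ndimage

def smoothed_movement_map(m1):
    """Smoothed movement map M2: each component of M1 averaged over the pixel and its
    eight neighbours (3x3 uniform filter), suppressing high spatial-frequency components."""
    m1 = np.asarray(m1, dtype=float)
    m2 = np.empty_like(m1)
    for c in range(2):  # dy and dx components
        m2[..., c] = ndimage.uniform_filter(m1[..., c], size=3, mode="nearest")
    return m2
```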

Then, the smoothed movement map generator 33b associates the generated smoothed movement map M2 with the pixel B1 of the first corrected live image Rr1c corresponding to the pixel B2 of the second live image Rr2. The smoothed movement map generator 33b performs this processing for all the pixels B1 of the first corrected live image Rr1c, so that every pixel B1 of the first corrected live image Rr1c is associated with a smoothed movement map M2. In the first embodiment, information on the movement amount d2 of the first corrected live image Rr1c in the state where all the pixels B1 of the first corrected live image Rr1c are associated with the smoothed movement map M2 is used as the movement information E2 of the pixels, and the movement amount d2 is used for the correction of the DSA image Rd.

(Structure of the image synthesizing unit)

As shown in fig. 3, the image synthesizing unit 40 includes: a blood vessel image correction unit 41 that generates a corrected blood vessel image Rdbc (including the black-and-white inverted image Ava) by correcting the DSA image Rd (blood vessel image Rdb), captured at substantially the same imaging position as the first live image Rr1, based on the movement information E1 (displacement amount d1) of the feature points and the movement information E2 (movement amount d2) of the pixels; and a synthetic image generator 42 that synthesizes the generated corrected blood vessel image Rdbc with the second live image Rr2 to generate the road map fluoroscopic image Rs. The corrected blood vessel image Rdbc and the black-and-white inverted image Ava are examples of the "inverted image" in the claims.

As shown in fig. 2, the blood vessel image correction unit 41 is configured to read the blood vessel image Rdb from the DSA image Rd stored in the storage unit 4. Then, the blood vessel image correction unit 41 performs a process of moving the position of the blood vessel image Rdb (the position of the image Av) by an amount corresponding to the displacement amount d1 (at least one of a translational movement and a rotational movement) with respect to the position before correction. The blood vessel image correction unit 41 further performs a process of shifting parts of the entire image (the entire image Av or the entire DSA image Rd), for example each pixel or each predetermined region, by amounts corresponding to the movement amount d2.

In the first embodiment, the blood vessel image correction unit 41 is configured to generate a corrected blood vessel image Rdbc by performing black-and-white inversion processing on the blood vessel image Rdb corrected based on the movement information E1 of the feature points and the movement information E2 of the pixels. Specifically, the blood vessel image correction unit 41 generates the corrected blood vessel image Rdbc including the black-and-white inverted image Ava by performing black-and-white inversion processing (processing that reverses the brightness) on the blood vessel image Rdb after it has been moved by the amount corresponding to the displacement amount d1 and by the amount corresponding to the movement amount d2.
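A hedged sketch of this correction step is given below: a global shift by d1, a per-pixel warp by the smoothed movement map M2, and a brightness reversal. The warp sign convention, the omission of the rotational component, and the interpolation settings are assumptions made only for illustration.

```python
import numpy as np
from scipy import ndimage

def corrected_vessel_image(rdb, d1, m2):
    """Corrected blood vessel image Rdbc: Rdb shifted by the displacement amount d1,
    warped per pixel by the smoothed movement map M2 (movement amount d2),
    then black-and-white inverted (brightness reversed)."""
    # feature-point based global shift (a rotational component, if any, is omitted here)
    shifted = ndimage.shift(rdb.astype(float), shift=(d1[1], d1[0]), order=1, mode="nearest")
    # per-pixel correction: sample each output pixel from its M2-displaced source position
    # (the sign in front of m2 depends on the convention chosen for M2)
    h, w = rdb.shape
    yy, xx = np.mgrid[0:h, 0:w]
    warped = ndimage.map_coordinates(shifted,
                                     [yy - m2[..., 0], xx - m2[..., 1]],
                                     order=1, mode="nearest")
    # black-and-white inversion so the vessels appear bright on the live image
    return warped.max() + warped.min() - warped
```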

Further, the blood vessel image correction unit 41 is configured to: each time a new second live image Rr2 is acquired by the image acquisition section 20, the blood vessel image Rdb (DSA image Rd) is corrected based on the movement information E1 of the feature points and the movement information E2 of the pixels. That is, the blood vessel image correction unit 41 is configured to: the blood vessel images Rdb are successively corrected in real time to generate corrected blood vessel images Rdbc.

The synthetic image generator 42 is configured to synthesize the corrected blood vessel image Rdbc with the second live image Rr2 to generate the road map fluoroscopic image Rs. Specifically, the synthetic image generator 42 is configured to generate the road map fluoroscopic image Rs so that the corrected blood vessel image Rdbc is displayed superimposed on the second live image Rr2. Further, the synthetic image generator 42 is configured to, each time a new second live image Rr2 is acquired by the image acquisition unit 20, generate the corrected blood vessel image Rdbc by correcting the blood vessel image Rdb based on the movement information E1 of the feature points and the movement information E2 of the pixels, and generate the road map fluoroscopic image Rs by synthesizing the generated corrected blood vessel image Rdbc with the second live image Rr2.

In other words, in the first embodiment, the synthetic image generator 42 is configured as follows: a difference image, i.e., the DSA image Rd, between the contrast image Rc, which is the X-ray image R in a state where a contrast medium is administered to the blood vessels of the lower limb of the subject P, and the mask image Rm, which is the X-ray image R in a state where no contrast medium is administered to the blood vessels of the subject P, is acquired as the blood vessel image Rdb; the blood vessel image Rdb is corrected to the corrected blood vessel image Rdbc based on the feature point movement information E1 and the pixel movement information E2; and the corrected blood vessel image Rdbc is synthesized with the second live image Rr2 to generate the road map fluoroscopic image Rs.
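The difference image itself can be pictured as a plain subtraction of the mask image from the contrast image, as sketched below; actual DSA processing often uses logarithmic subtraction, so this simple form is only an illustrative assumption.

```python
import numpy as np

def dsa_image(contrast_rc, mask_rm):
    """DSA image Rd as the difference image between the contrast image Rc
    (contrast medium administered) and the mask image Rm (no contrast medium).
    A plain subtraction is shown; log-scale subtraction is also common in practice."""
    return contrast_rc.astype(float) - mask_rm.astype(float)
```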

Thus, in the first embodiment, the synthetic image generator 42 is configured to synthesize the corrected blood vessel image Rdbc with the second live image Rr2 including an image Ak of at least one of a catheter, a stent, and a guide wire inserted into the subject P, to generate the road map fluoroscopic image Rs. The road map fluoroscopic image Rs is displayed on the display unit 5 and visually confirmed by the operator.

(X-ray image processing method)

Next, the control processing (X-ray image processing method) of the X-ray image R by the X-ray imaging apparatus 100 according to the first embodiment will be described with reference to fig. 9. Fig. 9 shows a flow of control processing of the X-ray image R by the X-ray imaging apparatus 100. The control processing of the X-ray image R by the X-ray imaging apparatus 100 is executed by the control unit 3 (image processing unit 10).

First, in step S1, the DSA image Rd is acquired in the DSA image photographing mode. That is, the mask image Rm and the contrast image Rc are captured (acquired), and the DSA image Rd is generated based on the mask image Rm and the contrast image Rc.

In step S2, a blood vessel image Rdb is selected (generated) based on the DSA image Rd. For example, the blood vessel image Rdb is selected from the DSA images Rd stored in the storage unit 4 automatically or by an input operation to the operation unit 6.

In step S3, the operator starts the road map fluoroscopic image photographing mode in accordance with an input operation to the operation unit 6, and starts acquiring the live image Rr.

In step S4, the live image Rr acquired immediately after the start of the road map fluoroscopic image photographing mode is held as the first live image Rr1. That is, when the road map fluoroscopic image photographing mode is started, the live image Rr acquired first is stored as the first live image Rr1 in the storage unit 4 or the image processing unit 10.

In step S5, the live image Rr captured after the first live image Rr1 is acquired as the second live image Rr2.

In step S6, a feature point image F is extracted from each of the first live image Rr1 and the second live image Rr2. For example, as shown in fig. 4, the feature point images F1a to F1d are extracted from the first live image Rr1, and the feature point images F2a to F2d are extracted from the second live image Rr2.

In step S7, the movement information E1 (displacement amount d1) of the feature points is acquired. Specifically, as shown in fig. 4, the feature point images F1a to F1d of the first live image Rr1 are associated (matched) with the feature point images F2a to F2d of the second live image Rr2, and the average value (displacement amount) d1 of the displacements da to dd is obtained (calculated). That is, the displacement amount d1 of the feature points caused by the movement of the subject P and by changes in the position of the imaging unit 2 is calculated.

In step S8, it is determined whether the displacement amount d1 exceeds the threshold value d1t. That is, the displacement amount d1 is compared with the threshold value d1t, and when the displacement amount d1 exceeds the threshold value d1t, the routine proceeds to step S9, whereas when the displacement amount d1 does not exceed the threshold value d1t, the routine proceeds to step S10.

In step S9, the first live image Rr1 is corrected based on the displacement amount d1, thereby generating the first corrected live image Rr1c. Thereafter, the process proceeds to step S10.

In step S10, the movement information E2 (movement amount d2) of the pixels is acquired based on the first corrected live image Rr1c and the second live image Rr2. That is, the movement map M1 shown in figs. 5 to 8 is acquired, and the smoothed movement map M2 is acquired as the movement information E2 (movement amount d2) of the pixels.

In step S11, the DSA image Rd (blood vessel image Rdb) is corrected based on the movement information E1 (displacement amount d1) of the feature point and the movement information E2 (movement amount d2) of the pixel, thereby generating a corrected blood vessel image Rdbc including the black-and-white inverted image Ava.

In step S12, the corrected blood vessel image Rdbc and the second live image Rr2 are synthesized to generate a road map fluoroscopic image Rs, and the road map fluoroscopic image Rs is displayed on the display unit 5.

In step S13, it is determined whether or not the road map fluoroscopic image photographing mode is to be continued. For example, if the operation unit 6 has not received an input operation to end the road map fluoroscopic image photographing mode, the road map fluoroscopic image photographing mode is continued, and the process returns to step S5. When the operation unit 6 receives an input operation to end the road map fluoroscopic image photographing mode, the road map fluoroscopic image photographing mode is not continued (is ended), and the process proceeds to step S14. That is, while the road map fluoroscopic image photographing mode is continued, steps S5 to S13 are repeated; each time the second live image Rr2 is acquired, the blood vessel image Rdb is corrected based on the movement information E1 of the feature points and the movement information E2 of the pixels, and the road map fluoroscopic image Rs, in which the corrected blood vessel image Rdb, that is, the corrected blood vessel image Rdbc, is synthesized with the second live image Rr2, is generated.

In step S14, the road map fluoroscopic image photographing mode is ended. Then, the control processing of the X-ray image R by the X-ray imaging apparatus 100 is ended.
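The repeated portion of this flow (steps S5 to S13) can be pictured as the loop sketched below, which strings together the illustrative helper functions from the earlier sketches; the placeholder callables, the termination condition, and the simple alpha blend used for step S12 are all assumptions, not the disclosed implementation.

```python
def roadmap_loop(acquire_live_image, extract_and_match, rdb, rr1, d1t, display):
    """Repetition of steps S5-S13: each time a second live image Rr2 arrives, correct the
    blood vessel image Rdb and display the synthesized road map fluoroscopic image Rs.
    acquire_live_image, extract_and_match and display stand in for the imaging unit,
    the feature point processing and the display unit, respectively."""
    while True:
        rr2 = acquire_live_image()                           # step S5
        if rr2 is None:                                      # step S13: mode ended
            break
        centers1, centers2 = extract_and_match(rr1, rr2)     # step S6
        d1 = displacement_amount(centers1, centers2)         # step S7
        rr1c = correct_first_live_image(rr1, d1, d1t)        # steps S8-S9
        m2 = smoothed_movement_map(movement_map(rr1c, rr2))  # step S10
        rdbc = corrected_vessel_image(rdb, d1, m2)           # step S11
        rs = 0.5 * rr2 + 0.5 * rdbc                          # step S12: simple overlay blend
        display(rs)
```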

[ Effect of the first embodiment ]

In the first embodiment, the following effects can be obtained.

In the first embodiment, as described above, the movement information acquiring unit 30 is configured to: the movement information E1 of the feature points and the movement information E2 of the pixels are acquired based on the feature point image F1 of the first live image Rr1, which is the X-ray image R captured before the time point at which the second live image Rr2 is captured, and the feature point image F2 of the second live image Rr2. The image synthesizing unit 40 is configured to: the DSA image Rd (blood vessel image Rdb) captured at substantially the same imaging position as the first live image Rr1 is corrected based on the movement information E1 of the feature points and the movement information E2 of the pixels, and the corrected DSA image Rd (corrected blood vessel image Rdbc) is synthesized with the second live image Rr2 to generate the road map fluoroscopic image Rs. Thus, even when the subject P moves with respect to the imaging unit 2 after the DSA image Rd (blood vessel image Rdb) and the first live image Rr1 are captured, the DSA image Rd (blood vessel image Rdb) can be corrected so as to coincide with the position of the subject P at the time point when the second live image Rr2 is captured. Therefore, it is possible to suppress synthesizing X-ray images R (the DSA image Rd (blood vessel image Rdb) and the second live image Rr2) in which the position of the subject P is displaced between the images. As a result, even when the subject moves after the synthesis target image is captured, the road map fluoroscopic image Rs can be generated appropriately (with positional displacement between the images suppressed) when the DSA image Rd (blood vessel image Rdb) captured at a different time point is synthesized with the second live image Rr2 (X-ray image R). This eliminates the need to re-capture the DSA image Rd (blood vessel image Rdb), and therefore an increase in the amount of X-rays irradiated to the subject P can be suppressed.

In the first embodiment, as described above, the movement information acquiring unit 30 is configured to: the movement information E1 of the feature points and the movement information E2 of the pixels are acquired as information for correcting the DSA image Rd (blood vessel image Rdb). The movement information E1 of the feature points is acquired as large-range (macroscopic) movement information in the first live image Rr1 and the second live image Rr2, and therefore allows correction for a large-range movement (large movement) of the subject P. Further, the movement information E2 of the pixels is acquired as smaller-range (microscopic) movement information in the first live image Rr1 and the second live image Rr2, and therefore allows correction for a smaller-range movement (small movement) of the subject P. As a result, by performing both the correction for a large movement of the subject P and the correction for a small movement of the subject P, which compensate for each other's strengths and weaknesses, the DSA image Rd (blood vessel image Rdb) can be corrected more appropriately. Therefore, even when the DSA image Rd (blood vessel image Rdb) and the X-ray image R captured at different time points are synthesized to generate the road map fluoroscopic image Rs, the road map fluoroscopic image Rs can be generated more appropriately (with positional displacement between the images further suppressed).

In the first embodiment, as described above, the image acquisition unit 20 is configured to: the live images Rr successively generated in real time are acquired as the second live image Rr2. The image synthesizing unit 40 is configured to: the corrected DSA image Rd (corrected blood vessel image Rdbc) and the live image Rr are synthesized to generate the road map fluoroscopic image Rs. Thus, the DSA image Rd (blood vessel image Rdb) can be corrected in accordance with the change in the live image Rr, and therefore, even when the DSA image Rd is synthesized with the live image Rr which is displayed in real time and which changes sequentially, the road map fluoroscopic image Rs can be appropriately generated.

In the first embodiment, as described above, the image synthesizing unit 40 is configured to: each time the live image Rr is acquired by the image acquisition unit 20, the DSA image Rd (blood vessel image Rdb) is corrected based on the movement information E1 of the feature points and the movement information E2 of the pixels. The image synthesizing unit 40 is configured to: the corrected DSA image Rd (corrected blood vessel image Rdbc) and the live image Rr are synthesized to generate the road map fluoroscopic image Rs. This enables the DSA image Rd (blood vessel image Rdb) to be sequentially updated in accordance with the updated live image Rr. Therefore, even when the DSA image Rd (blood vessel image Rdb) is synthesized with the live image Rr (for example, a moving image) which changes sequentially, the road map fluoroscopic image Rs can be generated more appropriately (with positional displacement between images suppressed). As a result, even when the operator performs treatment of the subject P while visually checking the road map fluoroscopic image Rs displayed as a moving image, the road map fluoroscopic image Rs in which the positional deviation is more effectively suppressed can be generated.

In the first embodiment, as described above, the image synthesizing unit 40 is configured to: a difference image between the X-ray image R, i.e., the contrast image Rc in a state where a contrast medium is administered to the blood vessels of the lower limbs of the subject P, and the mask image Rm in a state where no contrast medium is administered to the blood vessels of the subject P is acquired as the DSA image Rd (blood vessel image Rdb). The image synthesizing unit 40 is configured to: the DSA image Rd (blood vessel image Rdb) is corrected based on the movement information E1 of the feature points and the movement information E2 of the pixels, and the corrected DSA image Rd (corrected blood vessel image Rdbc) is synthesized with the live image Rr to generate the road map perspective image Rs. Thus, the X-ray imaging apparatus 100 can be provided which can generate the road map fluoroscopic image Rs in which the positional deviation between the images is effectively suppressed when the operator performs various treatments by inserting the catheter into the blood vessel of the lower limb of the subject P.

In the first embodiment, as described above, the image synthesizing unit 40 is configured to: the black-and-white inverted image Ava, obtained by performing black-and-white inversion processing on the image Av of the blood vessel in the corrected blood vessel image Rdbc after the image Av has been corrected based on the movement information E1 of the feature points and the movement information E2 of the pixels, is synthesized with the live image Rr to generate the road map fluoroscopic image Rs. Thus, since the substantially black image (blood vessel image Rdb) in the DSA image Rd is synthesized with the live image Rr in a state of being converted into a substantially white image (for example, the black-and-white inverted image Ava), the treatment instrument (for example, a catheter, a stent, a guide wire, or the like) at the portion corresponding to the blood vessel in the live image Rr can be displayed in black (a color different from the black-and-white inverted image Ava). As a result, the visibility of the portion (treatment instrument) corresponding to the blood vessel in the live image Rr can be improved, and the operator can appropriately visually confirm the image of the contrasted blood vessel.

In the first embodiment, as described above, the image synthesizing unit 40 is configured to: the black-and-white inverted image Ava is synthesized with the live image Rr including the image Ak of at least one of a catheter, a stent, and a guide wire inserted into the subject P to generate the road map fluoroscopic image Rs. Thus, by the black-and-white inversion processing, the image of the contrasted blood vessel, which is displayed in substantially the background color (white), and the image Ak of at least one of the catheter, the stent, and the guide wire inserted into the subject P can be visually confirmed by the operator so as to be more easily distinguished from each other.

In the first embodiment, as described above, the movement information acquiring unit 30 is configured to: the live image Rr captured before the second live image Rr2, at substantially the same imaging position as the imaging positions of the contrast image Rc and the mask image Rm, is acquired as the first live image Rr1. Thus, the feature point images F1a to F1d of the first live image Rr1 (live image Rr) can be associated with the feature point images F2a to F2d of the second live image Rr2 (live image Rr) between live images Rr having substantially the same X-ray exposure amount. Since a control process for correcting the luminance can therefore be omitted, complication of the control process for establishing the correspondence can be suppressed accordingly, and the DSA image Rd (blood vessel image Rdb) can be corrected.

In the first embodiment, as described above, the movement information acquiring unit 30 is configured to: when the displacement amount d1 from the feature point images F of the first live image Rr1 to the feature point images F of the second live image Rr2 exceeds the threshold value d1t, the first live image Rr1 is corrected based on the movement information E1 of the feature points, and the movement information E2 of the pixels is acquired based on the corrected first live image Rr1 and the second live image Rr2. Thus, when the subject P moves with a large width with respect to the imaging unit 2 and the positions of the feature points therefore move with a large width (when the displacement amount d1 exceeds the threshold value d1t), the first live image Rr1 can be corrected based on the movement information E1 of the feature points. On the other hand, when the movement of the subject P is small (when the displacement amount d1 does not exceed the threshold value d1t), the DSA image Rd (blood vessel image Rdb) is corrected based on the pixel movement information E2 without performing the control process of correcting the first live image Rr1 (only a small movement of the subject P is corrected), so that the control load of the image synthesizing unit 40 can be reduced and the road map fluoroscopic image Rs can be appropriately generated.

In the first embodiment, as described above, the movement information acquiring unit 30 is configured to: the plurality of feature point images F1a to F1d are acquired from the first live image Rr1 and the plurality of feature point images F2a to F2d are acquired from the second live image Rr2, and the first live image Rr1 is corrected so as to be shifted by an amount corresponding to the average (the displacement amount d1) of the movement amounts from the feature point images F1a to F1d of the first live image Rr1 to the feature point images F2a to F2d of the second live image Rr2. The movement information acquiring unit 30 is configured to: the movement information E2 of the pixels is acquired based on the first corrected live image Rr1c and the second live image Rr2. Thus, since the first live image Rr1 is corrected based on the average of the plurality of movement amounts (displacements da to dd), information on the movement of the entire subject P captured in the first live image Rr1 and the second live image Rr2 can be acquired more accurately than when only the movement amount of one feature point is used.

In the first embodiment, as described above, the movement information acquiring unit 30 is configured to: a movement map M1 indicating the movement direction and the movement amount of at least some of the pixels B1 belonging to the first live image Rr1 is acquired based on the first live image Rr1 and the second live image Rr2, and a smoothed movement map M2 obtained by suppressing the high-frequency components in the spatial direction of the movement map M1 is acquired as the movement information E2 of the pixels. Thus, by acquiring the smoothed movement map M2, in which the high-frequency components in the spatial direction of the movement map M1 are suppressed, as the movement information E2 of the pixels, even if an error occurs in the movement map M1 because the movement map M1 is generated for each pixel B1, the influence of the error can be reduced by suppressing the high-frequency components in the spatial direction. As a result, the DSA image Rd (blood vessel image Rdb) and the second live image Rr2 can be appropriately synthesized in consideration of not only linear motion of the subject P between the two X-ray images R captured at different times but also nonlinear motion (relatively complicated motion).

[ second embodiment ]

Next, the configuration of an X-ray imaging apparatus 200 according to a second embodiment of the present invention will be described with reference to figs. 1, 3, and 10. Unlike the X-ray imaging apparatus 100 according to the first embodiment, which is configured to acquire the movement information E1 of the feature points and the movement information E2 of the pixels based on the first live image Rr1 and the second live image Rr2, the X-ray imaging apparatus 200 according to the second embodiment is configured to acquire the movement information E11 of the feature points and the movement information E12 of the pixels based on the contrast image Rc and the live image Rr12. The same components as those of the first embodiment are denoted by the same reference numerals in the drawings, and the description thereof is omitted. In the second embodiment, the contrast image Rc is an example of the "reference image" in the claims, and the live image Rr12 is an example of the "fluoroscopic image" in the claims.

As shown in fig. 1, the X-ray imaging apparatus 200 according to the second embodiment includes a control unit 203, and the control unit 203 includes an image processing unit 210. As shown in fig. 3, the image processing unit 210 includes an image acquisition unit 220, a movement information acquisition unit 230, and an image synthesis unit 240. In the second embodiment, the image acquisition unit 220 is configured to: the contrast image Rc is acquired as a reference image.

As shown in fig. 10, the movement information acquiring unit 230 is configured to: feature point images are extracted from each of the contrast image Rc and the live image Rr12, and the movement information E11 (displacement amount d11) of the feature points is acquired based on the extracted feature point images. The movement information acquiring unit 230 is configured to: a corrected contrast image Rcc obtained by correcting the contrast image Rc is generated based on the movement information E11 of the feature points, and the movement information E12 (movement amount d12) of at least some of the pixels belonging to the corrected contrast image Rcc is acquired based on the corrected contrast image Rcc and the live image Rr12.

The image synthesizing unit 240 is configured to: the DSA image Rd1 (blood vessel image Rd1b) is corrected based on the movement information E11 of the feature points and the movement information E12 of the pixels, thereby generating a corrected blood vessel image Rd1bc, and the corrected blood vessel image Rd1bc is synthesized with the live image Rr12 to generate a road map fluoroscopic image Rs1. The other structure of the second embodiment is the same as that of the first embodiment.

[ Effect of the second embodiment ]

In the second embodiment, the following effects can be obtained.

In the second embodiment, as described above, the image acquisition unit 220 is configured to: the contrast image Rc is acquired as the reference image. The movement information acquiring unit 230 is configured to: feature point images are extracted from each of the contrast image Rc and the live image Rr12, and the movement information E11 (displacement amount d11) of the feature points is acquired based on the extracted feature point images. Thus, since the contrast image Rc, which is an image including the image Av of the contrasted blood vessel, and the image Ava of the blood vessel included in the corrected blood vessel image Rd1bc are based on images captured at the same time point, the corrected blood vessel image Rd1bc can be generated more accurately than in the case where the feature point image is acquired from a live image Rr captured at a time point after the contrast image Rc. The other effects of the second embodiment are similar to those of the first embodiment.

[ third embodiment ]

Next, the configuration of an X-ray imaging apparatus 300 according to a third embodiment of the present invention will be described with reference to figs. 1, 3, and 11. Unlike the X-ray imaging apparatus 100 according to the first embodiment, which is configured to acquire the movement information E1 of the feature points and the movement information E2 of the pixels based on the first live image Rr1 and the second live image Rr2, the X-ray imaging apparatus 300 according to the third embodiment is configured to acquire the movement information E21 of the feature points and the movement information E22 of the pixels based on the mask image Rm and the live image Rr22. The same components as those of the first embodiment or the second embodiment are denoted by the same reference numerals in the drawings, and the description thereof is omitted. In the third embodiment, the mask image Rm is an example of the "reference image" and the "non-contrast image" in the claims, and the live image Rr22 is an example of the "fluoroscopic image" in the claims.

As shown in fig. 1, the X-ray imaging apparatus 300 according to the third embodiment includes a controller 303, and the controller 303 includes an image processor 310. As shown in fig. 3, the image processing unit 310 includes an image acquisition unit 320, a movement information acquisition unit 330, and an image synthesis unit 340. In the third embodiment, the image acquisition unit 320 is configured to: the mask image Rm is acquired as a reference image.

As shown in fig. 11, the movement information acquiring unit 330 is configured to: feature point images are extracted from each of the mask image Rm and the live image Rr22, and the movement information E21 (displacement amount d21) of the feature points is acquired based on the extracted feature point images. The movement information acquiring unit 330 is configured to: a corrected mask image Rmc obtained by correcting the mask image Rm is generated based on the movement information E21 of the feature points, and the movement information E22 (movement amount d22) of at least some of the pixels belonging to the corrected mask image Rmc is acquired based on the corrected mask image Rmc and the live image Rr22.

The image synthesizing unit 340 is configured to: the DSA image Rd2 (blood vessel image Rd2b) is corrected based on the movement information E21 of the feature points and the movement information E22 of the pixels, thereby generating a corrected blood vessel image Rd2bc, and the corrected blood vessel image Rd2bc is synthesized with the live image Rr22 to generate a road map fluoroscopic image Rs2. The other structure of the third embodiment is the same as that of the first embodiment.

[ Effect of the third embodiment ]

In the third embodiment, the following effects can be obtained.

In the third embodiment, the image acquisition unit 320 is configured to: the mask image Rm is acquired as the reference image. The movement information acquiring unit 330 is configured to: feature point images are extracted from each of the mask image Rm and the live image Rr22, and the movement information E21 (displacement amount d21) of the feature points is acquired based on the extracted feature point images. Since neither the mask image Rm nor the live image Rr22 includes an image of a contrasted blood vessel, extraction of mutually different feature points from the two images can be suppressed. As a result, the feature points can be easily associated with each other, so that the movement information E21 of the feature points can be easily acquired. Further, since the X-ray exposure amount of the mask image Rm can be made larger than that of the live image Rr, the feature point image can be extracted from the relatively clear mask image Rm and the live image Rr22. The other effects of the third embodiment are similar to those of the first embodiment.

[ modified examples ]

The embodiments disclosed herein are considered to be illustrative in all respects, rather than restrictive. The scope of the present invention is defined by the claims, not by the description of the above embodiments, and includes all modifications (variations) within the meaning and range equivalent to the claims.

For example, in the above-described embodiments, an example in which a DSA image and a blood vessel image are used as the synthesis target image of the present invention has been described, but the present invention is not limited to this. That is, an X-ray image other than a DSA image may be used as the synthesis target image, or an X-ray image not including blood vessels may be used as the synthesis target image.

In the above-described embodiments, X-ray images captured at the "same imaging position" have been described on the assumption that the positions of the feature point images are the same, but the present invention is not limited to this. For example, the "same imaging position" may mean that the relative position of the top board with respect to the imaging unit is the same and the relative position between the X-ray generation unit and the X-ray detection unit in the imaging unit is the same.

In the above-described embodiment, an example of generating a road map fluoroscopic image obtained by synthesizing a DSA image (blood vessel image) and a second live view image so as to be displayed in a superimposed manner is described, but the present invention is not limited to this. For example, a composite image may be generated in which a contrast image and a live image are combined so as to be displayed side by side.

In addition, although the above-described embodiment shows an example in which the DSA image (blood vessel image) is corrected each time the second live image is acquired (captured), the present invention is not limited to this. For example, the DSA image (blood vessel image) may be corrected only when an input operation to the operation unit by the operator is accepted.

In the above-described embodiment, an example of imaging the lower limbs of the subject is shown, but the present invention is not limited to this. That is, the X-ray imaging apparatus of the present invention is particularly effective in X-ray imaging of the lower limbs of the subject, and has an effect of being able to appropriately generate a composite image even when X-ray imaging is performed on a part other than the lower limbs of the subject.

In the above-described embodiment, an example is shown in which the blood vessel image is subjected to the black-and-white inversion process when the DSA image (blood vessel image) and the second live view image are synthesized, but the present invention is not limited to this. That is, the blood vessel image may be synthesized in the second live view image without performing black-and-white inversion on the blood vessel image, or the blood vessel image may be synthesized in the second live view image by performing image processing (color changing processing) other than black-and-white inversion on the blood vessel image.

In the above-described embodiment, an example is shown in which the average value of the displacements (displacement amounts) of all the feature point images is acquired as the movement information of the feature points, but the present invention is not limited to this. That is, a displacement may be acquired only for a part of the feature point images among the plurality of feature point images, and the displacement amount may be acquired based on the displacement.

In the above-described embodiment, the example in which the black-and-white inverted image is synthesized with the live view image including the image captured of at least one of the catheter, the stent, and the guide wire inserted into the subject has been described, but the present invention is not limited to this. That is, the black-and-white inverted image may be combined with a live image including an image of the catheter, the stent, and the treatment instrument other than the guide wire.

In addition, the following examples are shown in the above embodiments: in the case where the amount of displacement exceeds the threshold value, the first live image (contrast image or mask image) is corrected based on the movement information of the feature point, but the present invention is not limited thereto. That is, if there is little problem in terms of increase in the control load, it is also possible not to set the threshold value, correct the first live image (contrast image or mask image) based on the movement information of the feature point each time the live image is acquired, and correct the DSA image (blood vessel image) based on the movement information of the feature point and the movement information of the pixel.

In addition, the above-described embodiments show an example in which the image synthesizing unit is configured to correct the DSA image (blood vessel image) and synthesize the corrected DSA image (corrected blood vessel image) with the second live image (or the live image) when generating the road map fluoroscopic image, but the present invention is not limited to this. That is, as in the X-ray imaging apparatus 400 according to the modification shown in fig. 12, the image synthesizing unit 440 may be configured to: when the road map fluoroscopic image Rs3 is generated, the second live image Rr2 is corrected based on the movement information of the feature points (the displacement amount d1) and the movement information of the pixels (the movement amount d2), and the corrected second live image Rr32 is synthesized with the DSA image Rd (blood vessel image Rdb).
