Medical image diagnosis apparatus and alignment method

Document No.: 1678019  Publication date: 2020-01-03

Reading note: This medical image diagnostic apparatus and alignment method was designed and created by 简伟健, 唐喆, 陈颀, and 陈宇 on 2018-06-26. Abstract: The present invention provides a medical image diagnostic apparatus and a registration method. The medical image diagnostic apparatus has a function of performing registration between medical image data, and comprises: an input unit that receives a label assigned to a tubular structure of a subject included in at least one of the medical image data; a generation unit that generates a plurality of superpixels corresponding to a plurality of sub-regions specified in the medical image data, based on the assigned labels; and a registration unit that performs registration between the medical image data using the plurality of generated superpixels.

1. A medical image diagnostic apparatus having a function of performing registration between medical image data, comprising:

an input unit that receives a label assigned to a tubular structure of a subject included in at least one of the medical image data;

a generation unit that generates a plurality of superpixels corresponding to a plurality of sub-regions specified in the medical image data, based on the assigned labels; and

a registration unit that performs registration between the medical image data using the plurality of generated superpixels.

2. The medical image diagnostic apparatus according to claim 1,

the tubular structure is a blood vessel.

3. The medical image diagnostic apparatus according to claim 1,

the medical image data includes 1st image data and 2nd image data,

the registration unit includes:

a center point calculation unit that calculates the center of gravity of a superpixel as the center point of the superpixel;

a matrix generation unit configured to generate a transformation matrix between a 1st tubular structure and a 2nd tubular structure, based on coordinates of center points of at least 2 superpixels of the 1st tubular structure in the 1st image data and coordinates of center points of the corresponding superpixels of the 2nd tubular structure in the 2nd image data;

a coordinate conversion unit configured to generate estimated coordinates of the center point of a 2nd superpixel of the 2nd tubular structure corresponding to a 1st superpixel of the 1st tubular structure, from the coordinates of the center point of the 1st superpixel and the transformation matrix;

a selection unit that selects a plurality of super pixels within a predetermined radius range centered on the estimated coordinates;

a similarity calculation unit that calculates a similarity between the feature of the 1 st super pixel of the 1 st tubular structure and the feature of each of the plurality of super pixels selected by the selection unit; and

a determination unit configured to determine, as the 2nd superpixel, the superpixel whose feature has the highest similarity among the plurality of superpixels of the 2nd tubular structure selected by the selection unit.

4. The medical image diagnostic apparatus according to any one of claims 1 to 3,

further comprising a correction input unit that receives, through assignment of the label, a correction instruction from an operator with respect to the result of the registration by the registration unit,

wherein the generation unit regenerates the superpixels based on the label as corrected by the correction instruction.

5. The medical image diagnostic apparatus according to claim 4,

further comprising a learning unit that learns from the correction instruction received by the correction input unit and reflects the learning result in subsequent generation of superpixels.

6. An alignment method for performing alignment between medical image data, comprising:

receiving a label assignment to a tubular structure of a subject included in at least one of the medical image data;

generating a plurality of superpixels corresponding to a plurality of sub-regions specified in the medical image data, based on the assigned labels; and

performing registration between the medical image data using the generated plurality of superpixels.

7. The alignment method according to claim 6,

the medical image data includes 1st image data and 2nd image data,

the registration between the medical image data includes the steps of:

calculating the center of gravity of the superpixel as the center point of the superpixel;

generating a transformation matrix between a 1st tubular structure and a 2nd tubular structure, based on coordinates of center points of at least 2 superpixels of the 1st tubular structure in the 1st image data and coordinates of center points of the corresponding superpixels of the 2nd tubular structure in the 2nd image data;

generating estimated coordinates of the center point of a 2nd superpixel of the 2nd tubular structure corresponding to a 1st superpixel of the 1st tubular structure, from the coordinates of the center point of the 1st superpixel and the transformation matrix;

selecting a plurality of super pixels within a predetermined radius range centered on the estimated coordinates;

calculating a similarity between a feature of the 1 st super pixel of the 1 st tubular structure and a feature of each of the selected plurality of super pixels; and

determining, as the 2nd superpixel, the superpixel whose feature has the highest similarity among the selected plurality of superpixels of the 2nd tubular structure.

Technical Field

The present invention relates to a medical image diagnostic apparatus and a registration method capable of performing registration between medical image data.

Background

In conventional medical image diagnostic apparatuses, there is a technique of obtaining an image of an entire blood vessel by manually marking tubular tissue, such as a blood vessel, on a single image obtained by scanning with a CT, MR, or ultrasound scanner.

However, when the same patient is scanned by different apparatuses, for example when both a CT scan and an ultrasound scan are performed, it is sometimes necessary to find a lesion site of a blood vessel by comparing the features of the same tissue, for example the same blood vessel, across images of different modalities, that is, a CT scan image and an ultrasound scan image. Because many blood vessels appear in each image, it is difficult for the user to find the corresponding blood vessel in the two scanned images, and rapid and accurate diagnosis is not possible.

Disclosure of Invention

In view of the above, the present invention provides a medical image diagnostic apparatus and an alignment method that can find the same tubular structure between different pieces of medical image data.

The medical image diagnostic apparatus of the present invention has a function of performing registration between medical image data, and includes: an input unit that receives a label assigned to a tubular structure of a subject included in at least one of the medical image data; a generation unit that generates a plurality of superpixels corresponding to a plurality of sub-regions specified in the medical image data, based on the assigned labels; and a registration unit that performs registration between the medical image data using the plurality of generated superpixels.

The registration method of the present invention performs registration between medical image data and includes the steps of: receiving an assignment of a label to a tubular structure of a subject included in at least one of the medical image data; generating a plurality of superpixels corresponding to a plurality of sub-regions specified in the medical image data, based on the assigned labels; and performing registration between the medical image data using the generated plurality of superpixels.

Effects of the invention

The user can easily find the corresponding blood vessel in a plurality of images captured of the same patient by a plurality of imaging methods, achieving registration between images of different modalities, and can therefore make a rapid and accurate diagnosis.

Drawings

Fig. 1 is a block diagram showing a medical image diagnostic apparatus according to embodiment 1 of the present invention.

Fig. 2 is a schematic diagram for superpixel segmentation of a medical image.

Fig. 3 is a structural diagram of a super pixel calculation unit.

Fig. 4 is a diagram for explaining alignment processing between medical image data.

Fig. 5 is a flowchart for explaining the alignment process between medical image data.

Fig. 6 is a flowchart for explaining how to find a corresponding super pixel in a floating image.

Fig. 7 is a schematic diagram showing a state after alignment of medical image data.

Fig. 8 is a block diagram showing the configuration of a medical image diagnostic apparatus according to embodiment 2 of the present invention.

Fig. 9 is an explanatory diagram of correction performed when a super pixel error is calculated in a floating image.

Fig. 10 is a flowchart of the positioning process of medical image data according to embodiment 2 of the present invention.

Fig. 11 is a block diagram showing the configuration of a medical image diagnostic apparatus according to embodiment 3 of the present invention.

Fig. 12 is a flowchart of the positioning process of medical image data according to embodiment 3 of the present invention.

Detailed Description

The medical image diagnostic apparatus and the registration method according to the present invention will be described below with reference to the drawings.

Embodiment 1

A medical image diagnostic apparatus according to embodiment 1 of the present invention will be described below with reference to fig. 1 to 7.

Fig. 1 is a block diagram of a medical image diagnostic apparatus 100.

As shown in fig. 1, the medical image diagnostic apparatus 100 includes a feature extraction unit 101, an image segmentation unit 102, a superpixel labeling unit 103, and a superpixel calculation unit 104.

The feature extraction unit 101 extracts image features such as grayscale value, gradient, shape, position, and Hessian matrix from a medical image selected by a user such as a doctor.
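As a rough illustration of this kind of per-pixel feature extraction, the sketch below computes a reduced feature set (grayscale value, gradient magnitude, and position) with NumPy; the function name and the omission of shape and Hessian features are assumptions for brevity, not the apparatus's actual implementation.

```python
import numpy as np

def extract_features(image):
    """Minimal per-pixel feature sketch: grayscale value,
    gradient magnitude, and (row, column) position."""
    gy, gx = np.gradient(image.astype(float))       # finite-difference gradients
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)           # gradient magnitude
    ys, xs = np.indices(image.shape)                # pixel positions
    # Stack features channel-wise: shape (H, W, n_features)
    return np.stack([image.astype(float), grad_mag, ys, xs], axis=-1)

img = np.random.rand(64, 64)
feats = extract_features(img)
print(feats.shape)  # (64, 64, 4)
```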

The image segmentation unit 102 is configured to perform superpixel segmentation on each medical image selected by the user.

Superpixel segmentation is the process of subdividing an image into multiple image sub-regions, i.e., superpixels, in order to locate objects, boundaries, etc. in the image. Superpixels are small regions composed of a series of pixels with adjacent positions and similar characteristics such as color, brightness, texture and the like.
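A common way to produce such superpixels is SLIC-style clustering in a joint (intensity, position) space. The toy sketch below performs only a single SLIC-like assignment pass from grid seeds, without the iterative center updates of the real algorithm; all names and parameters are illustrative assumptions.

```python
import numpy as np

def toy_superpixels(image, grid=8, compactness=10.0):
    """One-pass SLIC-like assignment: seed centers on a regular grid,
    then assign each pixel to the nearest seed in combined
    (intensity, position) space. Real SLIC iterates this while
    re-estimating the cluster centers."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid).astype(int)
    seeds = [(y, x, image[y, x]) for y in ys for x in xs]
    yy, xx = np.indices((h, w))
    step = max(h, w) / grid                      # spatial normalization
    best = np.full((h, w), np.inf)
    labels = np.zeros((h, w), dtype=int)
    for k, (sy, sx, sv) in enumerate(seeds):
        spatial = ((yy - sy) ** 2 + (xx - sx) ** 2) / step ** 2
        color = (image - sv) ** 2 / compactness ** 2
        d = spatial + color                      # combined distance
        mask = d < best
        best[mask] = d[mask]
        labels[mask] = k
    return labels

img = np.random.rand(32, 32)
labels = toy_superpixels(img, grid=4)
print(labels.shape)  # (32, 32), labels in 0..15
```

In practice a library implementation such as scikit-image's `slic` would be used instead of this hand-rolled pass.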

Fig. 2 shows two images taken of the same patient: the left image of (a) is an ultrasound image P1 and the left image of (b) is a CT image P2. Each shows the same blood vessel: blood vessel vs1 appears in the ultrasound image P1, and blood vessel vs2, identical to vs1, appears in the CT image P2.

In fig. 2, the right image of (a) is a superpixel segmentation map S1 including the blood vessel vs1, an enlarged view of the boxed portion after superpixel segmentation of the ultrasound image P1, and the right image of (b) is a superpixel segmentation map S2 including the blood vessel vs2, an enlarged view of the boxed portion after superpixel segmentation of the CT image P2. The superpixel segmentation maps S1 and S2 are each composed of a plurality of superpixels sp.

In fig. 2(a), only the Y-shaped vessel including blood vessel vs1 is clearly shown, and in fig. 2(b) only the Y-shaped vessel including blood vessel vs2 is clearly shown, but these figures were chosen for convenience of description. In practice many blood vessels may be present in each image at once, and given only the location of blood vessel vs1 in the ultrasound image P1, it is difficult to find the corresponding blood vessel vs2 in the CT image P2 with the naked eye.

In the present invention, the ultrasound image P1 serves as the reference image and the CT image P2 as the floating image. By continuously marking the superpixels constituting a blood vessel on the reference image (ultrasound image P1), the superpixels of the corresponding blood vessel are automatically displayed on the floating image (CT image P2), so that the corresponding blood vessel can be displayed.

The superpixel marking unit 103 receives a mark made by the user with a finger, capacitive pen, or the like on a superpixel segmentation map displayed on the touch panel, and can highlight the superpixel containing the marked point or the superpixels through which the marked line passes; that is, it can generate a plurality of superpixels corresponding to a plurality of sub-regions specified in each piece of medical image data, based on the assigned label. The highlighting may distinguish a superpixel from the others by, for example, changing its color.

For the same structure, such as a blood vessel, appearing in both the reference image and the floating image, the superpixel calculation unit 104 calculates, on the floating image, the superpixels corresponding to the superpixels on the reference image, based on the plurality of superpixels highlighted by continuous scribe marks on the reference image and the starting superpixel marked on the floating image, and highlights them, thereby performing registration between the data of the reference image and the floating image.

Fig. 3 is a structural diagram showing the structure of the superpixel calculation unit 104. The superpixel calculation unit 104 includes a center point calculation unit 1041, a matrix generation unit 1042, a coordinate conversion unit 1043, a selection unit 1044, a feature acquisition unit 1045, a similarity calculation unit 1046, a determination unit 1047, and a decision unit 1048.

The center point calculation unit 1041 can calculate the center of gravity of the superpixel highlighted by the superpixel marking unit 103 as the center point of the superpixel.
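The center of gravity of a superpixel is simply the mean coordinate of its member pixels. A minimal sketch, assuming a NumPy label image where each superpixel is identified by an integer label:

```python
import numpy as np

def superpixel_center(labels, k):
    """Center of gravity (mean pixel coordinate) of superpixel k
    in an integer label image."""
    ys, xs = np.nonzero(labels == k)
    return ys.mean(), xs.mean()

labels = np.zeros((4, 4), dtype=int)
labels[1:3, 1:3] = 5                      # a 2x2 superpixel labeled 5
print(superpixel_center(labels, 5))       # (1.5, 1.5)
```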

Fig. 4 is a diagram for explaining the registration process between the medical image data, in which (a) corresponds to the superpixel segmentation map S1 of fig. 2(a) and (b) corresponds to the superpixel segmentation map S2 of fig. 2(b); in fig. 4, the gray levels in the superpixel segmentation maps S1 and S2 are removed to make the marks clearer.

As shown in fig. 4, in the superpixel segmentation map S1, the center point calculation unit 1041 calculates the center points of the marked superpixels sp1 and sp2 and represents them by coordinates x1 and x2, respectively; in the superpixel segmentation map S2, it calculates the center point of the marked superpixel sp1' and represents it by coordinate y1.

The matrix generation unit 1042 generates a transformation matrix T between blood vessel vs1 of the superpixel segmentation map S1 and blood vessel vs2 of the superpixel segmentation map S2, from the center point coordinates of a plurality of superpixels (at least 2 superpixels) of blood vessel vs1 and the center point coordinates of the corresponding superpixels of blood vessel vs2. The transformation matrix T is, for example, an affine transformation matrix, which can be calculated by the least squares method.
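A least-squares affine fit of this kind can be sketched with NumPy's `lstsq`; note that a full 2-D affine transform is only determined by 3 or more non-collinear point pairs, so the example uses three. The homogeneous-matrix convention is an assumption for illustration.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of matched center-point coordinates.
    Returns a 3x3 homogeneous matrix T with [x', y', 1]^T = T [x, y, 1]^T."""
    n = len(src)
    A = np.hstack([np.asarray(src, float), np.ones((n, 1))])  # (N, 3)
    params, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    T = np.eye(3)
    T[:2, :] = params.T
    return T

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + [2.0, 3.0]                    # a pure translation for the demo
T = estimate_affine(src, dst)
pt = T @ np.array([1.0, 1.0, 1.0])
print(pt[:2])                              # approximately [3. 4.]
```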

The coordinate conversion unit 1043 multiplies the coordinate x2 by the transformation matrix T to obtain the estimated coordinate C(x2) of the superpixel sp2' of the superpixel segmentation map S2.

As shown in fig. 4, the selection unit 1044 selects the superpixels spx', spy', spz', and sp2', each of which lies wholly or partly within the range of radius r centered on the coordinate C(x2). The radius r may of course be set as needed.
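Gathering the candidate superpixels within radius r can be sketched as follows; treating "wholly or partly within the range" as "has at least one pixel within distance r" is an interpretation made for the sketch.

```python
import numpy as np

def select_within_radius(labels, center, r):
    """Labels of superpixels having at least one pixel within
    distance r of the estimated center coordinate."""
    yy, xx = np.indices(labels.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= r ** 2
    return np.unique(labels[mask])

labels = np.arange(16).reshape(4, 4) // 4     # four row-stripe superpixels 0..3
print(select_within_radius(labels, (0, 0), 1.0))  # [0 1]
```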

The feature acquisition unit 1045 acquires, for each of the superpixels spx', spy', spz', and sp2', its own features, such as grayscale value, gradient, shape, position, and Hessian matrix, from the features extracted by the feature extraction unit 101.

For example, when features 1 through d of a superpixel are acquired, the feature of superpixel spx' is represented as the vector aj = (aj1, aj2, …, ajd). Likewise, the feature acquisition unit 1045 acquires the feature a2 = (a21, a22, …, a2d) of superpixel sp2 in the superpixel segmentation map S1.

The similarity calculation unit 1046 calculates the similarity between the features of superpixel sp2 in the superpixel segmentation map S1 and superpixel spx' in the superpixel segmentation map S2. Specifically, the features a2 and aj are combined into one set ak = (a2, aj) = (a21, a22, …, a2d, aj1, aj2, …, ajd), and the features are compared to calculate the overall similarity s2j between a2 and aj.
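The text leaves the similarity measure unspecified; one plausible choice is cosine similarity between the two feature vectors, sketched here as an assumption rather than the patent's prescribed measure.

```python
import numpy as np

def feature_similarity(a, b):
    """Cosine similarity between two superpixel feature vectors;
    1.0 means identical direction, 0.0 means orthogonal features."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a2 = [0.8, 0.1, 0.5]   # feature of sp2 in the reference image
aj = [0.8, 0.1, 0.5]   # feature of a candidate superpixel in the floating image
print(feature_similarity(a2, aj))  # 1.0 for identical features
```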

The determination unit 1047 determines whether or not all superpixels are selected within a range of the radius r centered on the coordinate C (x2) in the superpixel segmentation map S2 of the CT image P2.

The decision unit 1048 compares the similarity between the feature of each of the superpixels spx', spy', spz', and sp2' and the feature of superpixel sp2, and decides the superpixel with the highest similarity as the superpixel corresponding to superpixel sp2 in the CT image P2.

Next, the alignment processing of medical image data will be described with reference to fig. 2, 4, and 5, where fig. 5 is a flowchart for describing the alignment processing of medical image data.

First, the user finds two images taken of the same patient, an ultrasound image P1 (reference image) and a CT image P2 (floating image), in which, by comparing image slice positions or the like, the user determines that the same blood vessel should exist: blood vessel vs1 in the ultrasound image P1 and blood vessel vs2 in the CT image P2. That is, the user obtains a reference image and a floating image in which the same blood vessel exists (step S11).

Then, the feature extraction unit 101 extracts image features such as grayscale value, gradient, shape, position, and Hessian matrix from the ultrasound image P1 and the CT image P2 (step S12).

Then, as shown in the superpixel segmentation map S1 and the superpixel segmentation map S2 in the right portion of fig. 2, the image segmentation unit 102 performs superpixel segmentation on the ultrasound image P1 and the CT image P2 selected by the user (step S13).

Then, as shown in fig. 4(b), in the superpixel segmentation map S2, that is, the CT image P2, the user manually marks a position by touching it with a finger, capacitive pen, or the like; the superpixel marking unit 103 receives the user's mark, that is, the label assignment, and highlights the superpixel sp1' containing the marked point, that is, generates the superpixel sp1' (step S14).

Then, as shown in fig. 4(a), in the superpixel segmentation map S1, that is, the ultrasound image P1, the user manually marks the blood vessel vs1 by continuously scribing with a finger, capacitive pen, or the like along the extending direction of the blood vessel, starting from the position of superpixel sp1 corresponding to superpixel sp1'; the superpixel marking unit 103 receives the user's mark, that is, the label assignment, highlights the superpixels sp1, sp2, and so on through which the scribed line passes, and gradually displays the blood vessel vs1 (step S15).

Then, from the plurality of superpixels highlighted by the continuous marking on the superpixel segmentation map S1 (the ultrasound image P1) and the starting superpixel marked on the superpixel segmentation map S2 (the CT image P2), the superpixel calculation unit 104 calculates, in the superpixel segmentation map S2, the superpixels corresponding to the plurality of superpixels of the ultrasound image P1, and highlights these automatically marked superpixels (step S16).

Next, a method of calculating corresponding superpixels in the CT image P2 will be described with reference to fig. 4 and 6, where fig. 6 is a flowchart for describing how to find corresponding superpixels in the floating image.

First, as shown in fig. 4(a), for the superpixel segmentation map S1, the center point calculation unit 1041 calculates the center point coordinates x1 and x2 of the superpixels sp1 and sp2 marked by the continuous scribe; for the superpixel segmentation map S2, it calculates the center point coordinate y1 of the marked superpixel sp1'. The matrix generation unit 1042 then generates the transformation matrix T between blood vessel vs1 of the superpixel segmentation map S1 and blood vessel vs2 of the superpixel segmentation map S2 from the center point coordinates x1, x2, and y1 (step S161).

Then, as shown in fig. 4(b), in the superpixel segmentation map S2, that is, the CT image P2, the coordinate conversion unit 1043 multiplies the coordinates x2 by the transformation matrix T to obtain the estimated coordinates C(x2) of the superpixel sp2' of the superpixel segmentation map S2 (step S162).

Then, as shown in fig. 4(a), in the superpixel segmentation map S1, that is, the ultrasound image P1, the feature acquisition unit 1045 acquires the feature a2 = (a21, a22, …, a2d) of superpixel sp2, the last, that is, most recent, mark among the scribed marks (step S163).

Then, as shown in fig. 4(b), in the superpixel segmentation map S2, that is, the CT image P2, the selection unit 1044 selects the superpixel spx', part of which lies within the range of radius r centered on the coordinate C(x2), and the feature acquisition unit 1045 acquires the feature aj = (aj1, aj2, …, ajd) of superpixel spx' from the features extracted by the feature extraction unit 101 (step S164).

Then, the similarity calculation unit 1046 combines the features a2 and aj into one set ak = (a2, aj) = (a21, a22, …, a2d, aj1, aj2, …, ajd) and compares the features to calculate the overall similarity s2j between a2 and aj, thereby calculating the similarity between the features of superpixel sp2 in the superpixel segmentation map S1 and superpixel spx' in the superpixel segmentation map S2 (step S165).

Then, the determination unit 1047 determines whether all superpixels within the range of radius r centered on the coordinate C(x2) have been selected (step S166). If not all have been selected (no in step S166), the process returns to step S164; as shown in fig. 4(b), the superpixel spy', spz', or sp2' is selected and its feature acquired in the superpixel segmentation map S2, and the process then proceeds to step S165. If all of the superpixels spx', spy', spz', and sp2' have been selected (yes in step S166), the decision unit 1048 compares the similarity between the feature of each of the superpixels spx', spy', spz', and sp2' and the feature of superpixel sp2, and decides the superpixel with the highest similarity as the corresponding superpixel in the CT image P2 (step S167). Since superpixel sp2' is most similar to superpixel sp2 in gray level (both being blood vessels), shape, and so on, superpixel sp2' in the CT image P2 is determined as the superpixel corresponding to superpixel sp2 in the ultrasound image P1.

Fig. 7 shows the state after registration of the medical image data. When a point in the region of superpixel sp1' (the starting superpixel) is marked in the superpixel segmentation map S2 of the CT image P2 (fig. 7(b)), and the superpixel segmentation map S1 of the ultrasound image P1 is then continuously scribed along the direction of blood vessel vs1 from a point in the region of superpixel sp1 (the starting superpixel) to a point in the region of superpixel sp6 (fig. 7(a)), the superpixel segmentation map S2 of the CT image P2 is automatically marked up to superpixel sp6' as shown in fig. 7(b), and the blood vessel vs2 corresponding to blood vessel vs1 is found automatically. The user can therefore easily find the same blood vessel in the two scanned images, the ultrasound image P1 and the CT image P2, and can make a quick and accurate diagnosis.

The superpixel calculation unit 104 need not include the feature acquisition unit 1045 and the determination unit 1047. When the feature acquisition unit 1045 is omitted, the features of the superpixels are provided by the feature extraction unit 101. When the determination unit 1047 is omitted, step S166 in fig. 6 is skipped; in that case, in step S164, as shown in fig. 4(b), in the superpixel segmentation map S2, that is, the CT image P2, the selection unit 1044 selects all the superpixels spx', spy', spz', and sp2' lying wholly or partly within the radius r centered on the coordinate C(x2), with the feature of each superpixel provided by the feature extraction unit 101. Then, in step S165, the similarity calculation unit 1046 calculates the similarity between superpixel sp2 in the superpixel segmentation map S1 and each of the superpixels spx', spy', spz', and sp2' in the superpixel segmentation map S2, and in step S167 the decision unit 1048 decides the superpixel most similar in feature to superpixel sp2 among spx', spy', spz', and sp2' as the corresponding superpixel in the CT image P2.

The method of calculating the corresponding superpixel was described above using fig. 4 as an example. In fig. 7, when calculating superpixel sp6', the transformation matrix T between blood vessel vs1 and blood vessel vs2 is calculated from the center point coordinates x1, x2, x3, x4, x5 of the superpixels of blood vessel vs1 in the superpixel segmentation map S1 and the center point coordinates y1, y2, y3, y4, y5 of the corresponding superpixels of blood vessel vs2 in the superpixel segmentation map S2. The estimated coordinates of the center point of superpixel sp6' are then obtained by multiplying the center point coordinates x6 of superpixel sp6 by the transformation matrix T, after which the calculation proceeds as in steps S163 to S167 of fig. 6.
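The loop of steps S161 to S167 can also be sketched end to end. All names, the negative-distance similarity, and the toy data below are illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def find_corresponding_superpixel(x_new, T, centers_float, feats_float, feat_ref, r):
    """Illustrative end-to-end search: project the newly marked center
    point with T (step S162), gather floating-image superpixels whose
    centers lie within radius r of the projection (step S164), and return
    the label whose feature vector is most similar, using negative
    Euclidean distance as the similarity (steps S165-S167)."""
    est = (T @ np.append(np.asarray(x_new, float), 1.0))[:2]   # C(x)
    best_label, best_sim = None, -np.inf
    for label, center in centers_float.items():
        if np.linalg.norm(np.asarray(center, float) - est) <= r:
            sim = -np.linalg.norm(np.asarray(feats_float[label], float)
                                  - np.asarray(feat_ref, float))
            if sim > best_sim:
                best_label, best_sim = label, sim
    return best_label

# Toy data: identity transform, three candidate superpixels in the floating image.
T = np.eye(3)
centers = {"spx'": (0.0, 5.0), "spy'": (1.0, 1.0), "sp2'": (0.5, 0.5)}
feats = {"spx'": [0.9], "spy'": [0.2], "sp2'": [0.82]}
match = find_corresponding_superpixel((0.0, 0.0), T, centers, feats, [0.8], r=2.0)
print(match)  # sp2'
```

Here spx' is excluded by the radius test and spy' loses on feature similarity, so sp2' is chosen, mirroring the fig. 4 example.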

Embodiment 2

Next, a medical image diagnostic apparatus 200 according to embodiment 2 of the present invention will be described with reference to fig. 8 to 10.

Only the points different from embodiment 1 will be described, and the same portions as those in embodiment 1 will be given the same reference numerals, and redundant description or simplified description will be omitted.

Fig. 8 is a block diagram of the medical image diagnostic apparatus 200, fig. 9 is an explanatory diagram of correction performed when a superpixel error is calculated in the superpixel segmentation map S2 of the CT image, and fig. 10 is a flowchart of the positioning process of medical image data.

As shown in fig. 8, the medical image diagnostic apparatus 200 includes a feature extraction unit 201, an image segmentation unit 202, a superpixel labeling unit 203, a superpixel calculation unit 204, and a correction input unit 205.

Here, the functions of the feature extraction unit 201, the image segmentation unit 202, the super pixel labeling unit 203, and the super pixel calculation unit 204 are the same as those of the feature extraction unit 101, the image segmentation unit 102, the super pixel labeling unit 103, and the super pixel calculation unit 104 in the first embodiment, and therefore, the description thereof will be simplified.

The feature extraction unit 201 extracts image features such as grayscale value, gradient, shape, position, and Hessian matrix from a medical image selected by a user such as a doctor.

The image segmentation unit 202 is configured to perform superpixel segmentation on each medical image selected by the user.

The super-pixel marking unit 203 receives a mark, i.e., a label assignment, made by a user with a finger, a capacitive pen, or the like on a super-pixel segmentation map displayed on the touch panel, and can highlight a super-pixel where a marked point is located or a super-pixel where a marked line passes.

The super-pixel calculation unit 204 calculates corresponding super-pixels on the floating image based on a plurality of super-pixels highlighted by continuous scribe marks on the reference image, and highlights the corresponding super-pixels.

The correction input unit 205 lets the user (operator) judge whether the calculation result (registration result) of the superpixel calculation unit 204 is appropriate, and accepts a correction instruction, given by assigning a label, when the result is judged inappropriate.

Next, the function of the correction input unit 205 will be described with reference to fig. 9.

In fig. 9(b), in the superpixel segmentation map S2 of the CT image, when a point in the region of superpixel sp1' (the starting superpixel) is marked, superpixel sp1' is generated by the superpixel marking unit 203. In fig. 9(a), in the superpixel segmentation map S1 of the ultrasound image, a continuous scribe is drawn along the direction of blood vessel vs1 from a point in the region of superpixel sp1 (the starting superpixel), via superpixels sp2 and sp3, to a point in the region of superpixel sp4. For superpixels sp2 and sp3, the superpixel calculation unit 204 correctly calculates their corresponding superpixels sp2' and sp3' in blood vessel vs2 of the superpixel segmentation map S2, but for superpixel sp4 it calculates superpixel sp0' as the corresponding superpixel, which is inappropriate. When the user recognizes that this calculation result is an error, that is, that superpixel sp0' does not correspond to superpixel sp4, the user gives a correction instruction, for example by touching the display screen with a finger, capacitive pen, or the like and dragging superpixel sp0' to the correct superpixel sp4', or by clicking superpixel sp0' and then clicking superpixel sp4'. The correction input unit 205 receives this correction instruction from the user.

Next, the alignment processing of medical image data will be described with reference to fig. 9 and 10.

Since steps S21 to S26 are the same as steps S11 to S16 of the first embodiment, redundant description is omitted.

In step S27, when the user judges that the calculation result of the superpixel calculation unit 204 is correct (yes in step S27), the process returns to step S25 and the operation of continuously marking the blood vessel by hand in the ultrasound image, that is, the reference image, continues. When the user judges that the calculation result is inappropriate, that is, an error (no in step S27), the user assigns a label, the correction input unit 205 receives the correction instruction from the user, and the superpixel marking unit 203 highlights the superpixel where the label is located based on the corrected label, thereby generating the superpixel and correcting the erroneous superpixel sp0' to the correct superpixel sp4'; in other words, the correct superpixel corresponding to the label is selected manually (step S28). The process then returns to step S25 to continue manually marking the blood vessel in the reference image.

Therefore, when the calculation result of the superpixel calculation unit 204 is erroneous, the erroneous superpixel can be manually corrected to the correct superpixel, so that the user can easily find the same blood vessel vs2 as the blood vessel vs1 in the ultrasound image, i.e., the reference image, in the CT image, i.e., the floating image, thereby enabling quick and accurate diagnosis.

Embodiment 3

Next, a medical image diagnostic apparatus according to embodiment 3 of the present invention will be described with reference to fig. 11 to 12.

Only the points different from embodiment 2 will be described, and the same portions as embodiment 2 will be given the same reference numerals, and redundant description thereof will be omitted.

Fig. 11 is a block diagram showing the configuration of a medical image diagnostic apparatus 300 according to embodiment 3. Fig. 12 is a flowchart of the alignment process of medical image data.

The medical image diagnostic apparatus 300 includes a feature extraction unit 301, an image segmentation unit 302, a superpixel labeling unit 303, a superpixel calculation unit 304, a correction input unit 305, and a training unit 306.

Here, the functions of the feature extraction unit 301, the image segmentation unit 302, the super pixel labeling unit 303, the super pixel calculation unit 304, and the correction input unit 305 are the same as those of the feature extraction unit 201, the image segmentation unit 202, the super pixel labeling unit 203, the super pixel calculation unit 204, and the correction input unit 205 in the second embodiment, and therefore, the description of these units is omitted.

The training unit 306 trains the superpixel calculation unit 304 as follows: it stores the correction instructions received by the correction input unit 305 together with the superpixel calculation results produced by the superpixel calculation unit 304, assigns a large weight to correct calculation results and a small weight to calculation results that received a correction instruction, so that as the training process iterates, the probability of selecting the correct superpixel becomes progressively higher.
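The weighting scheme described above can be sketched as follows. This is a hypothetical illustration; the patent does not specify the training mechanism, so the class name, the additive weight update, and the `boost`/`penalty` values are all assumptions:

```python
from collections import defaultdict

class CorrespondenceTrainer:
    """Sketch of the training unit 306: accumulate weights per
    (reference, floating) superpixel pair, strengthening confirmed
    results and weakening pairs the user corrected."""

    def __init__(self, boost=1.0, penalty=-1.0):
        self.weights = defaultdict(float)
        self.boost = boost
        self.penalty = penalty

    def record(self, ref_sp, computed_sp, corrected_sp=None):
        if corrected_sp is None:
            # User confirmed the computed pair: give it a large weight.
            self.weights[(ref_sp, computed_sp)] += self.boost
        else:
            # User issued a correction: weaken the erroneous pair and
            # strengthen the manually selected one.
            self.weights[(ref_sp, computed_sp)] += self.penalty
            self.weights[(ref_sp, corrected_sp)] += self.boost

    def bias(self, ref_sp, candidate_sp):
        # Consulted by the next correspondence calculation (step S36)
        # to prefer candidates with higher accumulated weight.
        return self.weights[(ref_sp, candidate_sp)]
```

After the user corrects sp0 ' to sp4 ' for the reference superpixel sp4, the pair (sp4, sp4 ') carries a positive bias and (sp4, sp0 ') a negative one, so the next calculation is more likely to select the correct superpixel.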

Next, the alignment processing of medical image data will be described with reference to fig. 12.

Since steps S31 to S35, S37, and S38 are the same as steps S21 to S25, S27, and S28 of the second embodiment, redundant description is omitted.

In step S39, training section 306 trains superpixel calculation section 304 based on the correction instruction received by correction input section 305 and the calculation result of the correct superpixel calculated by superpixel calculation section 304, and reflects the training result to the generation of the next superpixel so that the correct superpixel is generated in the generation of the next superpixel.

In step S36, superpixel calculation section 304 refers to the training results previously accumulated by training section 306, calculates the superpixel in the CT image, that is, the floating image, corresponding to the superpixel in the ultrasound image, that is, the reference image, and automatically marks it.

In the above-described embodiment, training unit 306 trains superpixel calculation unit 304, whereby the calculation result of superpixel calculation unit 304 can be more accurate, and the user can make a more rapid and accurate diagnosis.

In the above embodiment, the superpixel calculation unit 304 may instead include a learning unit that learns the correction instructions received by the correction input unit 305 and reflects the learning result in the next superpixel generation performed by the superpixel calculation unit 304 of the medical image diagnostic apparatus 300.

In the present invention, registration between an ultrasound image and a CT image is taken as an example, but images obtained by other imaging modalities may also be used. The names "reference image" and "floating image" are used merely for convenience of explanation.

As described above, although the embodiments of the present invention have been described, these embodiments are shown as examples and are not intended to limit the scope of the invention. These new embodiments can be implemented in other various ways, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are included in the inventions described in the claims and the equivalent scope thereof.
