Diagnosis support device, diagnosis support system, and diagnosis support method

Description: This technique, 诊断支援装置、诊断支援系统以及诊断支援方法 (Diagnosis support device, diagnosis support system, and diagnosis support method), was created by 坂本泰一, 大久保到, 清水克彦, 石原弘之, 佐贺亮介, T·亨, C·雅凯, N·哈思, and I·埃 on 2020-03-27. Abstract: The diagnosis support apparatus includes a control unit that performs control of associating a plurality of pixels included in a two-dimensional image of a biological tissue, which is generated using a signal of a reflected wave of an ultrasonic wave transmitted inside the biological tissue through which blood passes, with two or more types including a type of the biological tissue, generating a three-dimensional image of the biological tissue from a pixel group associated with the type of the biological tissue, and displaying the generated three-dimensional image of the biological tissue.

1. A diagnosis assistance device comprising a control unit that performs the following control:

a plurality of pixels included in a two-dimensional image of a biological tissue, which is generated using a signal of a reflected wave of an ultrasonic wave transmitted inside the biological tissue through which blood passes, are associated with two or more types including a type of the biological tissue, a three-dimensional image of the biological tissue is generated from a pixel group associated with the type of the biological tissue, and the generated three-dimensional image of the biological tissue is displayed.

2. The diagnosis support apparatus according to claim 1, wherein,

the two or more types further include a blood cell type,

the control unit excludes a pixel group associated with the blood cell type from the plurality of pixels and generates a three-dimensional image of the biological tissue.

3. The diagnosis support apparatus according to claim 1 or 2, wherein,

the control unit performs the following control: analyzing any one of the generated three-dimensional image of the biological tissue and the pixel group associated with the type of the biological tissue, calculating a thickness of the biological tissue, and displaying the calculated thickness of the biological tissue.

4. The diagnosis support apparatus according to any one of claims 1 to 3,

the two or more types further include a medical instrument type,

the control unit performs the following control: generating a three-dimensional image of the medical instrument from one or more pixels associated with the type of the medical instrument, and displaying the generated three-dimensional image of the biological tissue and the three-dimensional image of the medical instrument in a form distinguishable from each other.

5. The diagnosis support apparatus according to claim 4, wherein,

the control unit executes a 1 st classification process and a 2 nd classification process, the 1 st classification process associating the plurality of pixels included in the two-dimensional image with the medical instrument type and one or more other types,

the 2 nd classification process smoothes the two-dimensional image except for the one or more pixels associated with the medical instrument type by the 1 st classification process, and associates a pixel group included in the smoothed two-dimensional image with one or more types including the biological tissue type.

6. The diagnosis support apparatus according to claim 4, wherein the control unit executes a 1 st classification process and a 2 nd classification process, the 1 st classification process smoothing the two-dimensional image and associating the plurality of pixels included in the two-dimensional image before smoothing with the medical instrument type and one or more other types,

the 2 nd classification process associates a pixel group included in the smoothed two-dimensional image with one or more types including the biological tissue type, except for the one or more pixels associated with the medical instrument type by the 1 st classification process.

7. The diagnosis support apparatus according to any one of claims 4 to 6, wherein the control unit performs the following control:

when two or more pixels displaying different medical instruments are included in the one or more pixels associated with the medical instrument type, a three-dimensional image of the medical instrument is generated for each medical instrument, and the generated three-dimensional image of the medical instrument is displayed so as to be distinguishable for each medical instrument.

8. The diagnosis support apparatus according to any one of claims 4 to 7,

the two-dimensional images are sequentially generated while a transmission position of the ultrasonic waves inside the biological tissue is changed,

the control unit determines whether or not to associate one or more pixels among the plurality of pixels included in a newly generated two-dimensional image with the medical instrument type, based on a result of associating the plurality of pixels included in a previously generated two-dimensional image.

9. The diagnosis support apparatus according to any one of claims 1 to 8,

the control unit associates the plurality of pixels included in the two-dimensional image with the types using a learned model.

10. The diagnosis support apparatus according to any one of claims 1 to 9,

the control unit generates the two-dimensional image by processing the signal of the reflected wave and, each time a new two-dimensional image is generated, generates a three-dimensional image of the biological tissue corresponding to the newly generated two-dimensional image before the next two-dimensional image is generated.

11. The diagnosis support apparatus according to claim 10, wherein the control unit generates the two-dimensional image at a rate of 15 times per second or more and 90 times per second or less.

12. A diagnosis support system is provided with:

the diagnosis support apparatus according to any one of claims 1 to 11; and

a probe that transmits the ultrasonic wave inside the biological tissue and inputs a signal of the reflected wave to the control unit.

13. A diagnosis support method, wherein,

a probe transmits an ultrasonic wave inside a biological tissue through which blood passes,

a diagnosis support apparatus associates a plurality of pixels included in a two-dimensional image of the biological tissue, which is generated using a signal of a reflected wave of the ultrasonic wave, with two or more types including a type of the biological tissue,

the diagnosis support apparatus generates a three-dimensional image of the biological tissue from a pixel group associated with the type of the biological tissue,

a display displays the three-dimensional image of the biological tissue generated by the diagnosis support apparatus.

Technical Field

The invention relates to a diagnosis support device, a diagnosis support system, and a diagnosis support method.

Background

Patent documents 1 to 3 describe techniques for detecting the contour of a heart chamber or blood vessel captured in an image acquired by a medical imaging system such as an MRI system, an X-ray CT system, or a US imaging system, and for segmenting the image region of the heart chamber or blood vessel from other image regions. "MRI" is an abbreviation for magnetic resonance imaging. "CT" is an abbreviation for computed tomography. "US" is an abbreviation for ultrasound.

Documents of the prior art

Patent document

Patent document 1: U.S. Patent Application Publication No. 2010/0215238

Patent document 2: U.S. Patent No. 6385332

Patent document 3: U.S. Patent No. 6251072

Disclosure of Invention

Problems to be solved by the invention

Treatment using IVUS is widely performed for intracardiac, cardiovascular, and lower-limb artery regions, among others. "IVUS" is an abbreviation for intravascular ultrasound. IVUS is an instrument or method that provides a two-dimensional image of a plane perpendicular to the long axis of a catheter.

At present, an operator must perform a procedure while mentally stacking two-dimensional IVUS images to reconstruct the three-dimensional structure, which is an obstacle particularly for young or less experienced doctors. To remove this obstacle, it is conceivable to automatically generate a three-dimensional image representing the structure of a biological tissue such as a heart chamber or blood vessel from two-dimensional IVUS images and display the generated three-dimensional image to the operator.

IVUS uses a high frequency band of about 6 MHz to 60 MHz, so minute particles appear in a two-dimensional IVUS image, and blood cell noise in particular is strongly reflected. Consequently, the conventional method of detecting the contour of a heart chamber or blood vessel captured in an image cannot accurately distinguish the image region of the biological tissue included in a two-dimensional IVUS image from other image regions such as blood cells. Even if a three-dimensional image can be generated by such a method, the structure of the biological tissue it represents is inaccurate, which may compromise the safety of the procedure.

The invention aims to improve the accuracy of a three-dimensional image which is generated from a two-dimensional image of ultrasonic waves and represents the structure of biological tissues.

Means for solving the problems

A diagnosis support apparatus according to an aspect of the present invention includes a control unit that performs control of associating a plurality of pixels included in a two-dimensional image of a biological tissue generated using a signal of a reflected wave of an ultrasonic wave transmitted inside the biological tissue through which blood passes with two or more types including a type of the biological tissue, generating a three-dimensional image of the biological tissue from a pixel group associated with the type of the biological tissue, and displaying the generated three-dimensional image of the biological tissue.

In one embodiment of the present invention, the two or more types further include a blood cell type, and the control unit excludes a pixel group associated with the blood cell type from the plurality of pixels and generates a three-dimensional image of the biological tissue.

In one embodiment of the present invention, the control unit performs control of analyzing any one of a pixel group associated with the type of the biological tissue and a generated three-dimensional image of the biological tissue, calculating a thickness of the biological tissue, and displaying the calculated thickness of the biological tissue.

In one embodiment of the present invention, the two or more types further include a medical instrument type, and the control unit performs control to generate a three-dimensional image of the medical instrument from one or more pixels associated with the medical instrument type and to display the generated three-dimensional image of the biological tissue and the generated three-dimensional image of the medical instrument in a distinguishable form.

In one embodiment of the present invention, the control unit executes a 1 st classification process and a 2 nd classification process, the 1 st classification process relating the plurality of pixels included in the two-dimensional image to the medical instrument type and one or more other types, and the 2 nd classification process smoothing the two-dimensional image except for the one or more pixels related to the medical instrument type by the 1 st classification process and relating a pixel group included in the smoothed two-dimensional image to one or more types including the biological tissue type.

In one embodiment of the present invention, the control unit executes 1 st classification processing and 2 nd classification processing, the 1 st classification processing smoothing the two-dimensional image and associating the plurality of pixels included in the two-dimensional image before smoothing with the medical instrument type and one or more other types, and the 2 nd classification processing associating a group of pixels included in the two-dimensional image after smoothing, excluding the one or more pixels associated with the medical instrument type by the 1 st classification processing, with one or more types including the biological tissue type.

In one embodiment of the present invention, the control unit performs control such that, when two or more pixels displaying different medical instruments are included in the one or more pixels associated with the medical instrument type, a three-dimensional image of the medical instrument is generated for each medical instrument, and the generated three-dimensional image of the medical instrument is displayed so as to be distinguishable for each medical instrument.

In one embodiment of the present invention, the control unit determines whether or not to associate one or more pixels among the plurality of pixels included in a newly generated two-dimensional image with the medical instrument type, based on a result of associating the plurality of pixels included in a previously generated two-dimensional image.

In one embodiment of the present invention, the control unit associates the plurality of pixels included in the two-dimensional image with the types using a learned model.

In one embodiment of the present invention, the control unit generates the two-dimensional image by processing the signal of the reflected wave and, each time a new two-dimensional image is generated, generates a three-dimensional image of the biological tissue corresponding to the newly generated two-dimensional image before the next two-dimensional image is generated.

In one embodiment of the present invention, the control unit generates the two-dimensional image at a rate of 15 times to 90 times per second.

A diagnosis support system according to an aspect of the present invention includes the diagnosis support apparatus, and a probe that transmits the ultrasonic wave inside the biological tissue and inputs a signal of the reflected wave to the control unit.

In a diagnosis support method according to an aspect of the present invention, a probe transmits an ultrasonic wave inside a biological tissue through which blood passes, a diagnosis support apparatus associates a plurality of pixels included in a two-dimensional image of the biological tissue, which is generated using a signal of a reflected wave of the ultrasonic wave, with two or more types including a type of the biological tissue, the diagnosis support apparatus generates a three-dimensional image of the biological tissue from a pixel group associated with the type of the biological tissue, and a display displays the three-dimensional image of the biological tissue generated by the diagnosis support apparatus.

Effects of the invention

According to an embodiment of the present invention, the accuracy of a three-dimensional image representing the structure of a biological tissue generated from a two-dimensional image of an ultrasonic wave is improved.

Drawings

Fig. 1 is a perspective view of a diagnosis support system according to an embodiment of the present invention.

Fig. 2 is a diagram showing an example of classification of a plurality of pixels included in a two-dimensional image according to an embodiment of the present invention.

Fig. 3 is a perspective view of a probe and a drive unit according to an embodiment of the present invention.

Fig. 4 is a block diagram showing a configuration of a diagnosis support apparatus according to an embodiment of the present invention.

Fig. 5 is a flowchart showing an operation of the diagnosis support system according to the embodiment of the present invention.

Fig. 6 is a diagram showing a data flow of the diagnosis support apparatus according to the embodiment of the present invention.

Fig. 7 is a diagram showing an example of input and output of a learned model according to an embodiment of the present invention.

Fig. 8 is a diagram showing an example of mounting a learned model according to an embodiment of the present invention.

Fig. 9 is a diagram showing an example of mounting a learned model according to an embodiment of the present invention.

Fig. 10 is a diagram showing an example of mounting a learned model according to an embodiment of the present invention.

Fig. 11 is a diagram showing an example of a three-dimensional image according to an embodiment of the present invention.

Fig. 12 is a diagram showing an example of a three-dimensional image according to an embodiment of the present invention.

Fig. 13 is a diagram showing an example of a three-dimensional image according to an embodiment of the present invention.

Fig. 14 is a diagram showing a data flow of the diagnosis support apparatus according to the modification of the embodiment of the present invention.

Detailed Description

Hereinafter, an embodiment of the present invention will be described with reference to the drawings.

In the drawings, the same or corresponding portions are denoted by the same reference numerals. In the description of the present embodiment, explanations of the same or corresponding portions are omitted or simplified as appropriate.

The outline of the present embodiment will be described with reference to fig. 1 and 2.

In the present embodiment, the diagnosis support apparatus 11 associates a plurality of pixels included in a two-dimensional image of a biological tissue, generated by processing signals of reflected waves of ultrasonic waves transmitted inside the biological tissue through which blood passes, with two or more types including a biological tissue type. Associating the pixels of a two-dimensional image with types means assigning a label, such as a biological tissue label, to each pixel, or classifying each pixel by type, such as the biological tissue type, in order to identify the type of object, such as biological tissue, shown by each pixel. In the present embodiment, the diagnosis support apparatus 11 generates a three-dimensional image of the biological tissue from the pixel group associated with the biological tissue type. That is, the diagnosis support apparatus 11 generates a three-dimensional image of the biological tissue from the pixel group classified as the biological tissue type. The display 16 then displays the three-dimensional image of the biological tissue generated by the diagnosis support apparatus 11. In the example of fig. 2, the 262,144 pixels of a 512-pixel × 512-pixel two-dimensional image are classified into two or more types, such as a biological tissue type and a blood cell type. In the 4-pixel × 4-pixel area shown enlarged in fig. 2, 8 of the 16 pixels form a pixel group classified as the biological tissue type, and the remaining 8 pixels form a pixel group classified as another type. Fig. 2 enlarges a 4-pixel × 4-pixel group that is part of the 512-pixel × 512-pixel two-dimensional image, and for convenience of explanation the pixel group classified as the biological tissue type is hatched.
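One way to picture this per-pixel classification is as an integer label map over the image; a minimal sketch follows, in which the label codes and the use of NumPy are illustrative assumptions, not part of the patent.

```python
# Per-pixel classification of a 512 x 512 IVUS frame: each pixel carries
# one integer class label. The label codes are hypothetical.
import numpy as np

TISSUE, BLOOD_CELL, INSTRUMENT = 0, 1, 2   # hypothetical label codes

labels = np.zeros((512, 512), dtype=np.uint8)   # one label per pixel, 262,144 in all

# The pixel group associated with the biological tissue type is then just
# a boolean mask over the label map.
tissue_mask = labels == TISSUE
print(tissue_mask.sum())   # number of pixels classified as biological tissue
```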

According to the present embodiment, the accuracy of a three-dimensional image representing the structure of a biological tissue generated from a two-dimensional image of an ultrasonic wave is improved.

In the present embodiment, the diagnosis support apparatus 11 uses a two-dimensional image of IVUS as a two-dimensional image of ultrasound.

IVUS can be used, for example, during interventional procedures, for reasons such as the following:

To determine the biological tissue properties in the heart cavity and the like.

To confirm the position of an indwelling object such as a stent, or the position where the indwelling object is to be placed.

To confirm the positions of catheters, guide wires, and the like other than the IVUS catheter while using the two-dimensional image in real time.

Examples of the catheter other than the IVUS catheter include a catheter for stent indwelling and an ablation catheter.

According to the present embodiment, the operator no longer needs to perform the procedure while mentally stacking two-dimensional IVUS images and reconstructing the three-dimensional structure, so this is no longer an obstacle, especially for young or less experienced physicians.

In the present embodiment, the diagnosis support apparatus 11 is configured so that the positional relationship of catheters other than the IVUS catheter, indwelling objects, and the like, as well as biological tissue properties, can be determined from a three-dimensional image during surgery.

In particular, in the present embodiment, the diagnosis support apparatus 11 is configured so that the three-dimensional image can be updated on the fly for guiding the IVUS catheter.

In a procedure such as ablation, there is a demand to determine the ablation energy in consideration of the thickness of the blood vessel or myocardial region. There is also a need to take the thickness of the biological tissue into account when removing calcification or plaque using an atherectomy instrument or the like. In the present embodiment, the diagnosis support apparatus 11 is configured to be able to display this thickness.

In the present embodiment, the diagnosis support apparatus 11 is configured to continuously provide the three-dimensional structure of the observable region along the blood vessel by continuously updating the three-dimensional image using consecutive IVUS images that are updated from moment to moment.

In order to represent the heart chamber structure from two-dimensional IVUS images, it is necessary to distinguish the blood cell region, the myocardial region, and catheters and the like other than the IVUS catheter within the heart chamber. In the present embodiment, it is possible to display only the myocardial region.

Since IVUS uses a high frequency band of about 6 MHz to 60 MHz, blood cell noise is strongly reflected; nevertheless, the present embodiment makes it possible to distinguish the biological tissue region from the blood cell region.

To process, in real time, two-dimensional IVUS images updated at 15 FPS to 90 FPS into a representation of the heart chamber structure, the time available for processing one image is limited to between 11 msec and 66 msec. In the present embodiment, the diagnosis support apparatus 11 is configured to meet this constraint.
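The per-frame budget follows directly from the frame rate; the short computation below, a sketch for illustration only, reproduces the bounds quoted above.

```python
# Per-frame processing budget: the whole classify/stack/render pipeline
# must finish within one frame period, i.e. 1/FPS seconds.
for fps in (15, 90):
    budget_ms = 1000.0 / fps
    print(f"{fps} FPS -> {budget_ms:.1f} ms per frame")
# 15 FPS -> 66.7 ms per frame
# 90 FPS -> 11.1 ms per frame
```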

In the present embodiment, the diagnosis support apparatus 11 is configured to be able to place, in a three-dimensional space, an image in which the biological tissue properties have been identified, the blood cell region has been removed, or the position of a catheter other than the IVUS catheter has been identified, and to complete the calculation and drawing of the three-dimensional image before the next image frame arrives, that is, to perform the calculation within a time period that preserves immediacy.

In the present embodiment, the diagnosis support apparatus 11 is configured to be able to provide, in addition to the structure, supplementary information that meets the doctor's needs, including information on calcification and plaque.

Referring to fig. 1, the configuration of a diagnosis support system 10 according to the present embodiment will be described.

The diagnosis support system 10 includes a diagnosis support device 11, a cable 12, a drive unit 13, a keyboard 14, a mouse 15, and a display 16.

The diagnosis support apparatus 11 is a computer dedicated to image diagnosis in the present embodiment, but may be a general-purpose computer such as a PC. "PC" is an abbreviation for personal computer.

The cable 12 is used to connect the diagnosis support apparatus 11 and the drive unit 13.

The driving unit 13 is a device for connecting to the probe 20 shown in fig. 3 and driving the probe 20. The drive unit 13 is also referred to as MDU. "MDU" is an abbreviation of motor drive unit. The probe 20 is suitable for use in IVUS. The probe 20 is also called an IVUS catheter or a catheter for image diagnosis.

The keyboard 14, the mouse 15, and the display 16 are connected to the diagnosis support apparatus 11 via arbitrary cables or wirelessly. The display 16 is, for example, an LCD, an organic EL display, or an HMD. "LCD" is an abbreviation for liquid crystal display. "EL" is an abbreviation for electroluminescence. "HMD" is an abbreviation for head-mounted display.

As options, the diagnosis support system 10 further includes a connection terminal 17 and a cart unit 18.

The connection terminal 17 is used to connect the diagnosis support apparatus 11 to an external device. The connection terminal 17 is, for example, a USB terminal. "USB" is an abbreviation for Universal Serial Bus. As the external device, a recording medium such as a magnetic disk drive, a magneto-optical disk drive, or an optical disc drive can be used.

The cart unit 18 is a cart with casters for movement. The diagnosis support apparatus 11, the cable 12, and the drive unit 13 are mounted on the body of the cart unit 18. The keyboard 14, the mouse 15, and the display 16 are placed on the uppermost table of the cart unit 18.

Referring to fig. 3, the structure of the probe 20 and the driving unit 13 according to the present embodiment will be described.

The probe 20 includes a drive shaft 21, a hub 22, a sheath 23, an outer tube 24, an ultrasonic transducer 25, and a relay connector 26.

The drive shaft 21 passes through the sheath 23, which is inserted into a body cavity of a living body, and the outer tube 24, which is connected to the proximal end of the sheath 23, and extends into the hub 22 provided at the proximal end of the probe 20. The drive shaft 21 has, at its distal end, the ultrasonic transducer 25, which transmits and receives signals, and is rotatably provided within the sheath 23 and the outer tube 24. The relay connector 26 connects the sheath 23 and the outer tube 24.

The hub 22, the drive shaft 21, and the ultrasonic transducer 25 are connected so as to move together in the axial direction. Therefore, for example, when the hub 22 is pushed toward the distal side, the drive shaft 21 and the ultrasonic transducer 25 move toward the distal side inside the sheath 23. When the hub 22 is pulled toward the proximal side, the drive shaft 21 and the ultrasonic transducer 25 move toward the proximal side inside the sheath 23, as indicated by the arrow.

The drive unit 13 includes a scanner unit 31, a slide unit 32, and a bottom cover 33.

The scanner unit 31 is connected to the diagnosis support apparatus 11 via the cable 12. The scanner unit 31 includes a probe connector 34 connected to the probe 20, and a scan motor 35 serving as a drive source for rotating the drive shaft 21.

The probe connector 34 is detachably connected to the probe 20 via the insertion port 36 of the hub 22 provided at the proximal end of the probe 20. The proximal end of the drive shaft 21 is rotatably supported inside the hub 22, and the rotational force of the scan motor 35 is transmitted to the drive shaft 21. Signals are also transmitted and received between the drive shaft 21 and the diagnosis support apparatus 11 via the cable 12. Based on the signals transmitted from the drive shaft 21, the diagnosis support apparatus 11 generates a tomographic image of the body lumen and performs image processing.

The slide unit 32 carries the scanner unit 31 so as to allow the scanner unit 31 to move forward and backward, and is mechanically and electrically connected to the scanner unit 31. The slide unit 32 includes a probe holder 37, a slide motor 38, and a switch group 39.

The probe holder 37 is disposed coaxially with the probe connector 34 on the distal side of the probe connector 34, and supports the probe 20 connected to the probe connector 34.

The slide motor 38 is a drive source that generates an axial drive force. Driving the slide motor 38 moves the scanner unit 31 forward and backward, and the drive shaft 21 moves in the axial direction in accordance with that movement. The slide motor 38 is, for example, a servo motor.

The switch group 39 includes, for example, a forward switch and a return switch (pullback switch) that are pressed to move the scanner unit 31 forward and backward, and a scan switch that is pressed to start and end image recording. The switches are not limited to this example; the switch group 39 includes various switches as necessary.

When the forward switch is pressed, the slide motor 38 rotates forward and the scanner unit 31 advances. When the return switch is pressed, the slide motor 38 rotates in reverse and the scanner unit 31 retreats.

When the scan switch is pressed, image recording starts, the scan motor 35 is driven, and the slide motor 38 is driven to move the scanner unit 31 backward. The operator connects the probe 20 to the scanner unit 31 in advance, so that when image recording starts, the drive shaft 21 rotates while moving toward the axial proximal side. When the scan switch is pressed again, the scan motor 35 and the slide motor 38 stop, and image recording ends.

The bottom cover 33 covers the entire circumference of the bottom surface and the bottom-side side surfaces of the slide unit 32, and can move toward and away from the bottom surface of the slide unit 32.

Referring to fig. 4, the configuration of the diagnosis support apparatus 11 according to the present embodiment will be described.

The diagnosis support apparatus 11 includes components such as a control unit 41, a storage unit 42, a communication unit 43, an input unit 44, and an output unit 45.

The control unit 41 is one or more processors. As the processor, a general-purpose processor such as a CPU or a GPU, or a dedicated processor specialized for particular processing, can be used. "CPU" is an abbreviation for central processing unit. "GPU" is an abbreviation for graphics processing unit. The control unit 41 may include one or more dedicated circuits, or one or more dedicated circuits may take the place of one or more processors in the control unit 41. As the dedicated circuit, for example, an FPGA or an ASIC can be used. "FPGA" is an abbreviation for field-programmable gate array. "ASIC" is an abbreviation for application-specific integrated circuit. The control unit 41 executes information processing relating to the operation of the diagnosis support apparatus 11 while controlling each part of the diagnosis support system 10, including the diagnosis support apparatus 11.

The storage unit 42 is one or more memories. As the memory, for example, a semiconductor memory, a magnetic memory, or an optical memory can be used. As the semiconductor memory, for example, a RAM or a ROM can be used. "RAM" is an abbreviation for random access memory. "ROM" is an abbreviation for read only memory. As the RAM, for example, an SRAM or a DRAM can be used. "SRAM" is an abbreviation for static random access memory. "DRAM" is an abbreviation for dynamic random access memory. As the ROM, for example, an EEPROM can be used. "EEPROM" is an abbreviation for electrically erasable programmable read only memory. The memory is used, for example, as primary storage, secondary storage, or cache memory. The storage unit 42 stores information used for the operation of the diagnosis support apparatus 11 and information obtained through the operation of the diagnosis support apparatus 11.

The communication unit 43 is one or more communication interfaces. As the communication interface, a wired LAN interface, a wireless LAN interface, or an image diagnosis interface that receives the IVUS signal and performs A/D conversion can be used. "LAN" is an abbreviation for local area network. "A/D" is an abbreviation for analog to digital. The communication unit 43 receives information used for the operation of the diagnosis support apparatus 11 and transmits information obtained through the operation of the diagnosis support apparatus 11. In the present embodiment, the drive unit 13 is connected to the image diagnosis interface included in the communication unit 43.

The input unit 44 is one or more input interfaces. As the input interface, for example, a USB interface or an HDMI (registered trademark) interface can be used. "HDMI" is an abbreviation for High-Definition Multimedia Interface. The input unit 44 receives operations that input information used for the operation of the diagnosis support apparatus 11. In the present embodiment, the keyboard 14 and the mouse 15 are connected to the USB interface included in the input unit 44, but the keyboard 14 and the mouse 15 may instead be connected to the wireless LAN interface included in the communication unit 43.

The output unit 45 is one or more output interfaces. As the interface for output, for example, a USB interface or an HDMI (registered trademark) interface can be used. The output unit 45 outputs information obtained in accordance with the operation of the diagnosis assisting apparatus 11. In the present embodiment, the display 16 is connected to an HDMI (registered trademark) interface included in the output unit 45.

The functions of the diagnosis support apparatus 11 can be realized by executing the diagnosis support program according to the present embodiment on a processor included in the control unit 41. That is, the functions of the diagnosis support apparatus 11 can be realized by software. The diagnosis support program is a program that causes a computer to execute the steps included in the operation of the diagnosis support apparatus 11 and thereby realize the functions corresponding to those steps. That is, the diagnosis support program is a program that causes a computer to function as the diagnosis support apparatus 11.

The program can be recorded on a computer-readable recording medium. As the computer-readable recording medium, for example, a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory can be used. The program can be distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or a CD-ROM on which the program is recorded. "DVD" is an abbreviation for digital versatile disc. "CD-ROM" is an abbreviation for compact disc read only memory. The program may also be distributed by storing it in the storage of a server and transferring it from the server to another computer via a network. The program may be provided as a program product.

The computer temporarily stores, in memory, a program recorded on a portable recording medium or a program transferred from a server, for example. The computer then reads the program stored in memory with its processor and executes processing according to the read program. The computer may read the program directly from the portable recording medium and execute processing according to the program. The computer may also execute processing sequentially according to the received program each time a program is transferred from the server to the computer. The processing may also be executed by a so-called ASP-type service, which realizes functions only through execution instructions and result acquisition, without transferring the program from the server to the computer. "ASP" is an abbreviation for application service provider. The program here includes information that is provided for processing by an electronic computer and is treated as equivalent to a program. For example, data that is not a direct instruction to a computer but has the property of defining computer processing corresponds to such "content equivalent to a program".

Part or all of the functions of the diagnosis support apparatus 11 can be realized by a dedicated circuit included in the control unit 41. That is, part or all of the functions of the diagnosis support apparatus 11 may be realized by hardware.

Referring to fig. 5, the operation of the diagnosis support system 10 according to the present embodiment will be described. The operation of the diagnosis support system 10 corresponds to the diagnosis support method according to the present embodiment.

Before the flow of fig. 5 starts, the probe 20 is primed by the operator. The probe 20 is then fitted into the probe connector 34 and the probe holder 37 of the drive unit 13, and is connected and fixed to the drive unit 13. The probe 20 is then inserted to a target site in a biological tissue through which blood passes, such as a heart chamber or a blood vessel.

In step S1, a so-called pullback operation is performed by pressing the scan switch included in the switch group 39 and then pressing the return switch included in the switch group 39. The probe 20 transmits ultrasonic waves inside the biological tissue using the ultrasonic transducer 25, which retracts in the axial direction during the pullback operation.

In step S2, the probe 20 inputs the signal of the reflected wave of the ultrasonic wave transmitted in step S1 to the control unit 41 of the diagnosis support apparatus 11.

Specifically, the probe 20 transmits the signal of the ultrasonic waves reflected inside the biological tissue to the diagnosis support apparatus 11 via the drive unit 13 and the cable 12. The communication unit 43 of the diagnosis support apparatus 11 receives the signal transmitted from the probe 20. The communication unit 43 performs A/D conversion on the received signal. The communication unit 43 then inputs the A/D-converted signal to the control unit 41.

In step S3, the control unit 41 of the diagnosis assistance apparatus 11 generates a two-dimensional image of the ultrasonic wave by processing the signal input in step S2.

Specifically, as shown in fig. 6, the control unit 41 executes a task management process PM that manages at least image processing P1, image processing P2, and image processing P3. The function of the task management process PM is implemented, for example, as a function of the OS. "OS" is an abbreviation for operating system. The control unit 41 acquires, as the signal data 51, the signal A/D-converted by the communication unit 43 in step S2. The control unit 41 starts image processing P1 from the task management process PM and processes the signal data 51 to generate a two-dimensional IVUS image. The control unit 41 acquires, as the two-dimensional image data 52, the two-dimensional IVUS image that results from image processing P1.

In step S4, the control unit 41 of the diagnosis support apparatus 11 classifies the plurality of pixels included in the two-dimensional image generated in step S3 into two or more types, including a biological tissue type corresponding to pixels that display biological tissue. In the present embodiment, the two or more types further include a blood cell type corresponding to pixels that display blood cells contained in the blood. The two or more types also include a medical instrument type corresponding to pixels that display a medical instrument, such as a catheter or guide wire other than the IVUS catheter. The two or more types may further include an indwelling-object type corresponding to pixels that display an indwelling object such as a stent. The two or more types may further include a lesion type corresponding to pixels that display a lesion such as calcification or plaque. Each type can be subdivided; the medical instrument type can be divided, for example, into a catheter type, a guide wire type, and other medical instrument types.

Specifically, as shown in figs. 6 and 7, the control unit 41 starts image processing P2 from the task management process PM and classifies the plurality of pixels included in the two-dimensional image data 52 acquired in step S3 using the learned model 61. The control unit 41 acquires, as the classification result 62, the two-dimensional image that results from image processing P2, in which each pixel of the two-dimensional image data 52 has been classified into one of the biological tissue type, the blood cell type, and the medical instrument type.
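As a rough illustration of what the classification result 62 could look like in code, the sketch below turns hypothetical per-class scores into a per-pixel label map; the model output shape, class ordering, and use of NumPy are assumptions, not the patent's specification.

```python
# Turn per-class scores from a segmentation model into a per-pixel
# label map (tissue / blood cell / medical instrument).
import numpy as np

def classify_pixels(logits: np.ndarray) -> np.ndarray:
    """logits: (num_classes, H, W) scores; returns an (H, W) class-index map."""
    return np.argmax(logits, axis=0).astype(np.uint8)

# Random scores stand in for real model output here.
rng = np.random.default_rng(0)
label_map = classify_pixels(rng.standard_normal((3, 512, 512)))
print(label_map.shape, np.unique(label_map))   # (512, 512) [0 1 2]
```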

In step S5, the control unit 41 of the diagnosis support apparatus 11 generates a three-dimensional image of the biological tissue from the pixel group classified as the type of the biological tissue in step S4. In the present embodiment, the control unit 41 generates a three-dimensional image of the biological tissue by excluding the pixel group classified as the blood cell type in step S4 from the plurality of pixels included in the two-dimensional image generated in step S3. Further, the control unit 41 generates a three-dimensional image of the medical instrument from one or more pixels classified as the type of the medical instrument at step S4. Further, when two or more pixels displaying different medical instruments are included in the one or more pixels classified as the medical instrument type in step S4, the control unit 41 generates a three-dimensional image of the medical instrument for each medical instrument.

Specifically, as shown in fig. 6, the control unit 41 executes image processing P2 from the task management process PM, stacking the two-dimensional images whose pixels were classified in step S4 and converting them into three dimensions. The control unit 41 acquires, as a result of image processing P2, the volume data 53 representing the three-dimensional structure for each classification. The control unit 41 then starts image processing P3 from the task management process PM to visualize the acquired volume data 53. The control unit 41 acquires, as the three-dimensional image data 54, a three-dimensional image representing the three-dimensional structure for each classification as the result of image processing P3.
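A minimal sketch of the stacking step follows: each classified frame is written into the volume slice corresponding to the transducer's current axial position. The volume dimensions and the slice-index mapping are illustrative assumptions.

```python
# Stack classified 2-D frames into volume data at their axial positions.
import numpy as np

DEPTH, H, W = 256, 512, 512
volume = np.zeros((DEPTH, H, W), dtype=np.uint8)   # stand-in for volume data 53

def insert_frame(volume: np.ndarray, label_map: np.ndarray, z_index: int) -> None:
    """Reflect one classified frame at the slice for the current position."""
    volume[z_index] = label_map

insert_frame(volume, np.zeros((H, W), dtype=np.uint8), z_index=42)
```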

As a modification of the present embodiment, the control unit 41 may generate the three-dimensional image of the medical instrument from the coordinates of the one or more pixels classified as the medical instrument type in step S4. Specifically, the control unit 41 may hold the coordinates of the one or more pixels classified as the medical instrument type in step S4 as the coordinates of a series of points along the moving direction of the scanner unit 31 of the drive unit 13, and generate, as the three-dimensional image of the medical instrument, a linear three-dimensional model connecting those points along the moving direction of the scanner unit 31. For a medical instrument with a small cross section, such as a catheter, the control unit 41 may, for example, place a three-dimensional model with a circular cross section, as the three-dimensional image of the medical instrument, at the coordinates of the center of the pixel or pixel group classified as the medical instrument type. That is, for a small object such as a catheter, instead of returning the pixels or a region formed by a set of pixels as the classification result, coordinates may be returned as the classification result 62.
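Under that modification, one plausible representation, sketched below, keeps a single centroid per frame and treats the polyline through the per-frame centroids as the linear model of the catheter; the function and variable names are hypothetical.

```python
# Reduce the instrument pixels of each frame to one center coordinate,
# then collect one (x, y, z) point per frame for a catheter centerline.
import numpy as np

def catheter_center(instrument_mask: np.ndarray):
    """Centroid of the pixels classified as the medical instrument type,
    or None if the frame contains no such pixels."""
    ys, xs = np.nonzero(instrument_mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

centerline = []                                # points along the scanner travel
mask = np.zeros((512, 512), dtype=bool)
mask[250:254, 300:304] = True                  # stand-in instrument pixels
center = catheter_center(mask)
if center is not None:
    centerline.append((*center, 42))           # 42 = current slice index
print(centerline)                              # [(301.5, 251.5, 42)]
```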

In step S6, the control unit 41 of the diagnosis support apparatus 11 performs control to display the three-dimensional image of the biological tissue generated in step S5. In the present embodiment, the control unit 41 performs control to display the three-dimensional image of the biological tissue and the three-dimensional image of the medical instrument generated in step S5 in a mutually distinguishable form. When a three-dimensional image of the medical instrument has been generated for each medical instrument in step S5, the control unit 41 performs control to display the generated three-dimensional images so as to be distinguishable for each medical instrument. Under the control of the control unit 41, the display 16 displays the three-dimensional image of the biological tissue and the three-dimensional image of the medical instrument.

Specifically, as shown in fig. 6, the control unit 41 executes 3D display processing P4 to display the three-dimensional image data 54 acquired in step S5 on the display 16 via the output unit 45. By assigning different colors or the like, the three-dimensional image of the biological tissue, such as a heart chamber or blood vessel, and the three-dimensional image of the medical instrument, such as a catheter, can be displayed distinguishably. Either the three-dimensional image of the biological tissue or the three-dimensional image of the medical instrument can be selected with the keyboard 14 or the mouse 15. In this case, the control unit 41 accepts the image-selecting operation via the input unit 44, and causes the display 16 to display the selected image via the output unit 45 while hiding the unselected image. An arbitrary cut plane can also be set with the keyboard 14 or the mouse 15. In this case, the control unit 41 accepts the cut-plane-setting operation via the input unit 44 and causes the display 16 to display, via the output unit 45, the three-dimensional image cut at the selected cut plane.

In step S7, if the scan switch included in the switch group 39 has not been pressed again, the process returns to step S1 and the pullback operation continues. As a result, two-dimensional IVUS images are sequentially generated while the transmission position of the ultrasonic waves inside the biological tissue changes. When the scan switch is pressed again, the pullback operation stops and the flow of fig. 5 ends.

In the present embodiment, image processing P1 and 3D display processing P4 are executed on the CPU, while image processing P2 and image processing P3 are executed on the GPU. The volume data 53 could be stored in a memory area on the CPU side, but it is stored in a memory area on the GPU side in order to avoid data transfers between the CPU and the GPU.

In particular, the classification, catheter detection, image interpolation, and three-dimensional conversion included in image processing P2 are executed on a GP-GPU in the present embodiment, but they may also be executed by an integrated circuit such as an FPGA or an ASIC. "GP-GPU" is an abbreviation for general-purpose graphics processing unit. The individual processes may be executed serially or in parallel, and may also be executed over a network.

In step S4, the control unit 41 of the diagnosis support apparatus 11 extracts the biological tissue region by region recognition rather than by conventional edge extraction. The reason is as follows.

For an IVUS image, one conceivable way to remove the blood cell region is to extract the edge marking the boundary between the blood cell region and the biological tissue region, and to reflect that edge in a three-dimensional space to create a three-dimensional image. However, edge extraction is very difficult in the following respects.

The brightness gradient at the boundary between the blood cell region and the biological tissue region is not constant, and it is difficult to handle every case with a single algorithm.

When a three-dimensional image is formed from edges, complicated structures cannot be represented if the target is the entire heart chamber rather than the blood vessel wall.

In an image in which blood cell regions exist not only inside but also outside the biological tissue, such as a view in which both the left atrium and the right atrium are visible, edge extraction alone is not sufficient.

A catheter cannot be identified by edge extraction alone. In particular, when the catheter is in contact with the biological tissue wall, the boundary with the biological tissue cannot be obtained.

Where a thin wall is sandwiched between regions, it is difficult to tell from edges alone which side is actually biological tissue.

It is difficult to calculate the thickness.

In steps S2 to S6, when performing the three-dimensional processing, the control unit 41 of the diagnosis support apparatus 11 must remove blood cell components, extract the organ portion, reflect that information in a three-dimensional space, and draw a three-dimensional image, and it completes these processes within the image transmission period Tx so that the three-dimensional image can continue to be updated immediately. Here, Tx = 1/FPS. Conventional techniques for providing three-dimensional images cannot achieve such immediate processing; with existing frame-by-frame methods, the three-dimensional image cannot be updated continuously before the next frame arrives.

As described above, in the present embodiment, each time a two-dimensional image is newly generated, the control unit 41 generates the three-dimensional image of the biological tissue corresponding to the newly generated two-dimensional image before the next two-dimensional image is generated.

Specifically, the control unit 41 generates two-dimensional IVUS images at a rate of 15 to 90 images per second, and updates the three-dimensional image at the same rate of 15 to 90 times per second.

In step S4, the control unit 41 of the diagnosis support apparatus 11 can identify even a small object such as a catheter by extracting the regions of objects other than biological tissue through region recognition, rather than through conventional edge extraction.

If the catheter is in contact with the wall, even a person would judge it to be biological tissue from a single image alone.

Because a catheter can be mistaken for a thrombus or an air bubble, it is difficult to distinguish and identify the catheter from a single image alone.

Therefore, the control unit 41 may use past information to identify the catheter position, just as a person would estimate the catheter position using past consecutive images as reference information.

In step S4, even when the probe 20 body at the center of the two-dimensional image is in contact with the wall surface, the control unit 41 of the diagnosis support apparatus 11 can tell them apart by extracting the regions of objects other than biological tissue through region recognition rather than conventional edge extraction. That is, the control unit 41 can distinguish the IVUS catheter itself from the biological tissue region.

In step S4, in order to represent complicated structures, determine biological tissue properties, and find small objects such as catheters, the control unit 41 of the diagnosis support apparatus 11 extracts the biological tissue region and the catheter region rather than edges. For this purpose, the present embodiment adopts a machine learning method. Using the learned model 61, the control unit 41 directly evaluates what kind of feature each pixel of the image exhibits, and reflects the image, with its classifications assigned, in a three-dimensional space set up under predetermined conditions. The control unit 41 stacks this information in the three-dimensional space and, referring to the information stored in the three-dimensionally arranged memory space, converts it into three dimensions and displays a three-dimensional image. These processes are updated on the fly, so the three-dimensional information at the position corresponding to each two-dimensional image is kept up to date. The calculations are performed serially or in parallel; executing them in parallel, in particular, improves time efficiency.

Machine learning means analyzing input data using an algorithm, extracting useful rules, judgment criteria, and the like from the analysis results, and developing the algorithm accordingly. Machine learning algorithms are generally classified into supervised learning, unsupervised learning, reinforcement learning, and so on. In a supervised learning algorithm, a data set is given in which sample ultrasound images are the inputs and the corresponding answer data are the outputs, and machine learning is performed on the basis of that data set. In an unsupervised learning algorithm, machine learning is performed by giving only a large amount of input data. A reinforcement learning algorithm changes the environment on the basis of the solution the algorithm outputs and applies corrections according to how accurate the output solution is. The model obtained by such machine learning is used as the learned model 61.

The learned model 61 is trained in advance by machine learning so that it can identify the types from sample two-dimensional images. The sample ultrasound images, together with images that humans have classified in advance by labeling, are collected at medical institutions, for example at institutions such as university hospitals where many patients gather.

IVUS images contain heavy noise, such as the blood cell region, as well as system noise. Therefore, in step S4, the control unit 41 of the diagnosis support apparatus 11 preprocesses the image before feeding it to the learned model 61. As preprocessing, filtering (smoothing) with various filters, such as simple blur, median blur, Gaussian blur, bilateral filter, or block averaging, may be performed, as may image morphology operations such as dilation and erosion, opening and closing, morphological gradient, or top hat and black hat, as well as flood fill, resizing, image pyramids, thresholding, low-pass filtering, high-pass filtering, or discrete wavelet transforms. However, when such processing is performed on an ordinary CPU, even this processing alone may not finish within 66 msec. It is therefore performed on the GPU. In particular, with the machine learning method called deep learning, which is built from multiple layers, it has been verified that real-time preprocessing is possible by implementing the preprocessing as one of those layers. In that verification, 42 FPS with a classification accuracy of 97% or more was achieved using images of 512 pixels × 512 pixels or larger.
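For concreteness, the sketch below shows a few of the named smoothing and morphology operations using OpenCV; the choice of OpenCV and all parameter values are assumptions for illustration, since the text names the operations but no library.

```python
# Examples of the preprocessing operations listed above, applied to a
# stand-in IVUS frame. Kernel sizes and filter parameters are arbitrary.
import cv2
import numpy as np

frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in frame

median    = cv2.medianBlur(frame, 5)               # median blur
gaussian  = cv2.GaussianBlur(frame, (5, 5), 0)     # Gaussian blur
bilateral = cv2.bilateralFilter(frame, 9, 75, 75)  # edge-preserving smoothing

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(median, cv2.MORPH_OPEN, kernel)   # opening
tophat = cv2.morphologyEx(frame, cv2.MORPH_TOPHAT, kernel)  # top hat
```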

Comparing results with and without preprocessing shows that a preprocessing layer is desirable for extracting the biological tissue region, but that it is preferable to have no preprocessing layer when identifying a small object such as a catheter in the two-dimensional image. Therefore, as a modification of the present embodiment, different image processing P2 may be prepared for each type. For example, as shown in fig. 14, image processing P2a containing a preprocessing layer may be prepared for the biological tissue type, and image processing P2b containing no preprocessing layer may be prepared for the catheter type or for identifying the catheter position.

In this modification, the control unit 41 of the diagnosis support apparatus 11 smooths the two-dimensional image. Smoothing is processing that evens out the shading variation of a pixel group, and it includes the filtering described above. The control unit 41 executes a 1st classification process that classifies the plurality of pixels included in the unsmoothed two-dimensional image into the medical instrument type and one or more other types. The control unit 41 then executes a 2nd classification process that classifies the pixel group included in the smoothed two-dimensional image, excluding the one or more pixels classified as the medical instrument type in the 1st classification process, into one or more types including the biological tissue type. By overlaying the one or more pixels classified in the 1st classification process on the pixel group classified in the 2nd classification process, the control unit 41 can display the medical instrument in the three-dimensional image with high accuracy. As a further modification, the control unit 41 may execute a 1st classification process that classifies the plurality of pixels included in the unsmoothed two-dimensional image into the medical instrument type and one or more other types, and a 2nd classification process that smooths the two-dimensional image excluding the one or more pixels classified as the medical instrument type in the 1st classification process and classifies the pixel group included in the smoothed two-dimensional image into one or more types including the biological tissue type.
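A minimal sketch of this two-pass scheme follows: instruments are classified on the raw frame, tissue on the smoothed frame, and the instrument pixels then override the tissue result. The classifier callables stand in for image processing P2b and P2a, and the label codes are hypothetical.

```python
# Two-pass classification: 1st pass on the raw frame for instruments,
# 2nd pass on the smoothed frame for tissue, then overlay the instruments.
import cv2
import numpy as np

TISSUE, OTHER, INSTRUMENT = 0, 1, 2   # hypothetical label codes

def combine(raw: np.ndarray, classify_instrument, classify_tissue) -> np.ndarray:
    instrument_mask = classify_instrument(raw)   # 1st classification, no smoothing
    smoothed = cv2.medianBlur(raw, 5)            # smoothing for the 2nd pass
    labels = classify_tissue(smoothed)           # 2nd classification
    labels[instrument_mask] = INSTRUMENT         # overlay instrument pixels
    return labels

# Dummy classifiers standing in for the learned models.
frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
out = combine(frame,
              classify_instrument=lambda img: img > 250,
              classify_tissue=lambda img: (img > 128).astype(np.uint8))
print(out.shape)   # (512, 512)
```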

In step S5, the control unit 41 of the diagnosis support apparatus 11 measures and calculates the thickness of the biological tissue using the information of the biological tissue region acquired from the classification result of the image processing P2. The control unit 41 reflects the measurement result in the three-dimensional information so as to indicate the thickness. In step S6, the control unit 41 represents the thickness by adding processing that distinguishes the colors of the three-dimensional structure, for example by layering. The control unit 41 may further convey additional information, such as differences in biological tissue properties, by a display method that changes the color or the like of the three-dimensional biological tissue structure for each type.

As described above, in the present embodiment, the control unit 41 analyzes the pixel group classified as the biological tissue type in step S4 and calculates the thickness of the biological tissue. The control unit 41 performs control to display the calculated thickness of the biological tissue, and the display 16, under this control, displays the thickness. As a modification of the present embodiment, the control unit 41 may instead analyze the generated three-dimensional image of the biological tissue to calculate the thickness.
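One possible way to estimate the thickness from the classified pixel group is sketched below, using a distance transform; this is a hedged illustration of such a measurement, not the calculation prescribed by the embodiment.

    import cv2
    import numpy as np

    def tissue_thickness_mm(tissue_mask: np.ndarray, mm_per_pixel: float) -> float:
        # Distance from each tissue pixel to the nearest non-tissue pixel.
        dist = cv2.distanceTransform(tissue_mask.astype(np.uint8), cv2.DIST_L2, 5)
        # Twice the largest inscribed radius approximates the wall thickness.
        return 2.0 * float(dist.max()) * mm_per_pixel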

The definition of the three-dimensional space in the present embodiment will be explained.

As methods of three-dimensionalization, various operations can be used, such as rendering methods like surface rendering or volume rendering, together with associated techniques such as texture mapping, bump mapping, or environment mapping.
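As one concrete example of surface rendering, the boundary of the tissue voxels can be extracted as a triangular mesh with the marching cubes algorithm; the sketch below assumes scikit-image and a binary tissue volume, a library choice made here only for illustration.

    import numpy as np
    from skimage import measure

    def tissue_surface(volume: np.ndarray, level: float = 0.5):
        # Extract a triangular mesh of the tissue boundary (marching cubes).
        verts, faces, normals, values = measure.marching_cubes(volume, level=level)
        return verts, faces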

The three-dimensional space used in the present embodiment is limited to a size that enables immediate processing. This size must be determined based on the FPS at which ultrasound images are acquired, as specified in the system.

In the present embodiment, the driving unit 13, whose position can be acquired at each moment, is used. The scanning unit 31 of the driving unit 13 moves along a single axis; this axis is defined as the z axis, and the position of the scanning unit 31 at a given moment is z. The z axis is correlated with one axis of a predetermined three-dimensional space, and that axis is defined as the Z axis. Since the z axis and the Z axis are correlated, a point z on the z axis is mapped to a predetermined point Z = f(z) on the Z axis.

The information of the classification result 62 obtained by the image processing P2 is reflected along the Z axis. The XY plane of the three-dimensional space defined here must be able to store all the type information classifiable by the image processing P2. It is also desirable that the luminance information of the original ultrasonic image be included. All the type information of the classification result 62 obtained by the image processing P2 is reflected on the XY plane at the three-dimensional Z-axis position corresponding to the current position of the scanning unit 31.
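An illustrative sketch of such storage follows; the volume dimensions and the use of separate label and luminance arrays are assumptions for illustration.

    import numpy as np

    N_SLICES, H, W = 256, 512, 512  # assumed volume size
    labels = np.zeros((N_SLICES, H, W), dtype=np.uint8)     # type per voxel
    luminance = np.zeros((N_SLICES, H, W), dtype=np.uint8)  # original brightness

    def store_slice(Z: int, label_map: np.ndarray, frame: np.ndarray) -> None:
        # Write the classification result 62 and the original ultrasonic
        # luminance into the XY plane at the Z index corresponding to the
        # current scanning unit position.
        labels[Z] = label_map
        luminance[Z] = frame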

In addition, it is desirable that three-dimensionalization using volume rendering or the like be performed every Tx (= 1/FPS), but since the processing time is limited, the three-dimensional space cannot be infinitely large. That is, the three-dimensional space must be of a size that can be computed within Tx (= 1/FPS).

When one wants to convert a longer range on the driving unit 13 into three dimensions, the possibility of exceeding a computable size must be taken into account. Therefore, in order to keep the range displayed for the driving unit 13 within the above limit, Z = f(z) is specified as an appropriate conversion. The function converting a position on the z axis into a position on the Z axis must be set within the limits of both the moving range of the scanning unit 31 of the driving unit 13 on the z axis and the range in which the volume data 53 can be stored on the Z axis.
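One possible form of f is a linear map from the scanner's travel range to the storable slice indices, clamped so that it never exceeds the volume; this is a hedged example, not the conversion fixed by the embodiment.

    def make_z_to_Z(z_min: float, z_max: float, n_slices: int):
        def f(z: float) -> int:
            t = (z - z_min) / (z_max - z_min)       # normalize travel to [0, 1]
            Z = int(round(t * (n_slices - 1)))
            return max(0, min(n_slices - 1, Z))    # clamp into storable range
        return f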

As described above, in the present embodiment, the control unit 41 of the diagnosis support apparatus 11 classifies a plurality of pixels included in the two-dimensional image, generated by processing the signal of the reflected wave of the ultrasonic wave transmitted inside the biological tissue through which blood passes, into two or more types including the biological tissue type, which corresponds to pixels displaying the biological tissue. The control unit 41 generates a three-dimensional image of the biological tissue from the pixel group classified as the biological tissue type, and performs control to display the generated three-dimensional image. Therefore, according to the present embodiment, the accuracy of the three-dimensional image representing the structure of the biological tissue, generated from the two-dimensional ultrasonic image, is improved.

According to the present embodiment, the three-dimensional image is displayed in real time, so the operator can perform the operation without mentally reconstructing the three-dimensional structure from the two-dimensional images, and a reduction in operator fatigue and a shorter operation time can be expected.

According to the present embodiment, the positional relationship between an inserted object such as a catheter and an indwelling object such as a stent is clarified, reducing failures during the operation.

According to the present embodiment, the properties of the biological tissue can be grasped three-dimensionally, so an accurate operation can be performed.

According to the present embodiment, accuracy is improved by inserting a preprocessing layer inside the image processing P2.

According to the present embodiment, the information of the classified biological tissue region is used to calculate and measure the biological tissue thickness, and the result is reflected in the three-dimensional information.

In the present embodiment, the input image is an ultrasonic image, and the output classifies each pixel, or each region regarded as a set of a plurality of pixels, into two or more types including a catheter body region, a blood cell region, a calcified region, a fibrotic region, a catheter region, a stent region, a myocardial necrosis region, fatty biological tissue, biological tissue between organs, and the like. This classification makes it possible to determine what each portion within a single image represents.

In the present embodiment, at least the classification of the biological tissue types corresponding to the heart and the blood vessel region is predetermined. Learning efficiency can be improved by using, as material for machine learning, supervised training data in which each pixel, or each region regarded as an aggregate of a plurality of pixels, has been classified into two or more types including the biological tissue type.

In the present embodiment, the learned model 61 is constructed as an arbitrary deep learning neural network, such as a CNN, an RNN, or an LSTM. "CNN" is an abbreviation for convolutional neural network. "RNN" is an abbreviation for recurrent neural network. "LSTM" is an abbreviation for long short-term memory.

Fig. 8 shows an example in which the learned model 61 is constructed as an RNN.

In this example, the time series is considered in the classification. When a person determines the position of a small object such as a catheter, the person considers the continuity of the object while changing the position of the ultrasonic element and thereby the imaging position. Similarly, in the image processing P2, a small object such as a catheter can be identified by taking time-axis data into account. In order to take past information into account, past information for a certain period is input to the image processing P2 together with the currently obtained image, and the current classification is performed based on the input information. The learned model 61 in this example is a model that accepts as input at least one previously generated two-dimensional image and the newly generated two-dimensional image, and outputs the classification result 62 of the newly generated two-dimensional image. In fig. 8, the input image at time t-1 is the previously generated two-dimensional image, the input image at time t is the newly generated two-dimensional image, and the output image at time t is the classification result 62 of the newly generated two-dimensional image.
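A minimal PyTorch sketch of such a model follows: as in fig. 8, it takes the previously generated frame (time t-1) and the newly generated frame (time t) and outputs per-pixel type logits for the new frame. The layer sizes are illustrative, and PyTorch is a framework choice assumed here, not one specified by the embodiment.

    import torch
    import torch.nn as nn

    class TwoFrameClassifier(nn.Module):
        def __init__(self, n_types: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),  # 2 channels: frames t-1 and t
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, n_types, 1),                  # per-pixel type logits
            )

        def forward(self, prev_frame: torch.Tensor, new_frame: torch.Tensor):
            x = torch.cat([prev_frame, new_frame], dim=1)   # stack along channel axis
            return self.net(x)                              # classification of frame t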

Fig. 9 is a diagram showing an example in which the learned model 61 is constructed as an LSTM.

In this example, the learned model 61 has a memory module that stores two or more previously generated two-dimensional images. The memory module has the function of retaining past information.

Fig. 10 shows an example that further considers future information.

In this example, the actual current time is treated as a future time t: images from a certain period before time t are input to the image processing P2, the actual previous time is treated as the current time t-1, and classification is performed for the image at time t-1. The learned model 61 in this example is a model that accepts as input at least one previously generated two-dimensional image and the newly generated two-dimensional image, and outputs the classification result 62 of the previously generated two-dimensional image. In fig. 10, the input image at time t-1 is the previously generated two-dimensional image, the input image at time t is the newly generated two-dimensional image, and the output image at time t-1 is the classification result 62 of the previously generated two-dimensional image. The method of fig. 10 may also be applied to the examples of fig. 8 or fig. 9.

As a method for extracting small objects such as catheters in an image by deep learning, methods such as R-CNN, Fast R-CNN, Mask R-CNN, YOLO, or SSD can be applied. "R-CNN" is an abbreviation for region-based convolutional network. "YOLO" is an abbreviation for You Only Look Once. "SSD" is an abbreviation for Single Shot MultiBox Detector.

As a modification of the present embodiment, as shown in fig. 11 or fig. 12, the control unit 41 of the diagnosis support apparatus 11 can determine whether or not to classify one or more pixels among the plurality of pixels included in a newly generated two-dimensional image as the medical instrument type, based on the classification result of the plurality of pixels included in a previously generated two-dimensional image. In this case, the control unit 41 compares the previously generated two-dimensional image with the newly generated two-dimensional image. Then, for example, when the newly generated two-dimensional image includes one or more pixels having a degree of coincidence of 90% or more with the one or more pixels classified as the medical instrument type in the previously generated two-dimensional image, the control unit 41 classifies those one or more pixels in the newly generated two-dimensional image as the medical instrument type.
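One reading of this coincidence criterion is sketched below: instrument pixels in the new frame are accepted when at least 90% of them overlap the instrument pixels of the previous frame. The direction of the ratio is an interpretation; only the 90% threshold follows the text.

    import numpy as np

    def confirm_instrument(prev_mask: np.ndarray, new_mask: np.ndarray) -> bool:
        # prev_mask, new_mask: boolean maps of medical-instrument pixels.
        if new_mask.sum() == 0:
            return False
        overlap = np.logical_and(prev_mask, new_mask).sum()
        return overlap / new_mask.sum() >= 0.9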

Fig. 11 shows an example of grouping.

In this example, when a plurality of objects of an instrument type such as a catheter are expected to be continuously present in the three-dimensional space, the control unit 41 temporarily stores the extraction result of the image processing P2. The control unit 41 then further classifies the extracted target group based on the number of catheters given in advance, taking the results of the time-series determinations into consideration. In fig. 11, the blood vessel 63, the 1st catheter 64, and the 2nd catheter 65 are each independently reflected in the three-dimensional image data 54.
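A hedged sketch of this further grouping follows: extracted instrument pixel coordinates are clustered into the number of catheters given in advance, here with k-means from scikit-learn, a library choice assumed only for illustration.

    import numpy as np
    from sklearn.cluster import KMeans

    def group_catheters(instrument_xy: np.ndarray, n_catheters: int) -> np.ndarray:
        # instrument_xy: (N, 2) coordinates of pixels classified as instrument type.
        # Returns a catheter index (0 .. n_catheters-1) for each pixel.
        return KMeans(n_clusters=n_catheters, n_init=10).fit_predict(instrument_xy)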

Fig. 12 shows an example of noise correction.

Even if the position of a catheter or the like is extracted using the image processing P2, not all extractions are correct. Therefore, in this example, the control unit 41 further considers the results of the time-series determinations and removes apparent errors. In fig. 12, the blood vessel 63, the 1st catheter 64, and the 2nd catheter 65 are each independently reflected in the three-dimensional image data 54. Although the noise 66 is shown in fig. 12 for convenience of explanation, it is not actually reflected in the three-dimensional image data 54.
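One simple way to remove such apparent errors in time series is a majority vote over recent frames, sketched below; the window length is an assumption.

    from collections import deque
    import numpy as np

    class TemporalFilter:
        def __init__(self, window: int = 5):
            self.history = deque(maxlen=window)

        def filter(self, mask: np.ndarray) -> np.ndarray:
            # Keep only pixels detected in more than half of the stored frames.
            self.history.append(mask.astype(np.uint8))
            votes = np.sum(np.stack(self.history), axis=0)
            return votes > (len(self.history) // 2)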

As a modification of the present embodiment, the IVUS two-dimensional image may be generated a plurality of times while the presence, absence, or arrangement of the medical instrument in the biological tissue is changed. In this case, as shown in fig. 13, the control unit 41 of the diagnosis support apparatus 11 determines whether or not to classify one or more pixels among the plurality of pixels included in the newly generated two-dimensional image as the medical instrument type based on the classification result of the plurality of pixels included in the previously generated two-dimensional image.

In the example of fig. 13, the 1st catheter 64 and the 2nd catheter 65 can be reliably detected by comparing images captured with and without a catheter at the same position in the blood vessel 63 and identifying the difference D1. In addition, by comparing images captured with the catheters placed at the same positions in the blood vessel 63 and identifying the difference D2, the 1st catheter 64 and the 2nd catheter 65 can be reliably detected.
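An illustrative sketch of such difference-based detection follows; the threshold value is an assumption for illustration.

    import cv2
    import numpy as np

    def catheter_candidates(with_catheter: np.ndarray, without_catheter: np.ndarray,
                            threshold: int = 30) -> np.ndarray:
        # Pixels that changed strongly between the two acquisitions are
        # candidate catheter pixels (the difference D1 in fig. 13).
        diff = cv2.absdiff(with_catheter, without_catheter)
        return diff > threshold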

As a modification of the present embodiment, instead of the diagnosis support apparatus 11 performing the process of step S3, another apparatus may perform the process of step S3, and the diagnosis support apparatus 11 may acquire the two-dimensional image generated as the result of step S3 and perform the processes from step S4 onward. That is, instead of the control unit 41 of the diagnosis support apparatus 11 processing the IVUS signal to generate the two-dimensional image, another apparatus may process the IVUS signal to generate the two-dimensional image and input the generated two-dimensional image to the control unit 41.

The present invention is not limited to the above-described embodiment. For example, a plurality of blocks described in the block diagrams may be integrated, or one block may be divided. A plurality of steps described in time series may be executed in parallel or in a different order, according to the processing capability of the apparatus executing the steps or as needed. Other modifications may be made without departing from the spirit of the present invention.

For example, the image processing P1, the image processing P2, and the image processing P3 shown in fig. 6 may be performed in parallel.

Description of the reference numerals

10 diagnosis support system

11 diagnosis support device

12 cable

13 drive unit

14 keyboard

15 mouse

16 display

17 connecting terminal

18 vehicle unit

20 probe

21 drive shaft

22 hub

23 sheath layer

24 outer tube

25 ultrasonic vibrator

26 relay connector

31 scanning unit

32 slide unit

33 bottom cover

34 probe connection part

35 scanning motor

36 insertion opening

37 probe clip

38 sliding motor

39 switch group

41 control unit

42 storage unit

43 communication unit

44 input unit

45 output unit

51 signal data

52 two-dimensional image data

53 volume data

54 three-dimensional image data

61 learned model

62 classification result

63 blood vessels

64 1st catheter

65 2nd catheter

66 noise
