Three-dimensional imaging and modeling of ultrasound image data

Document No.: 1144940 | Publication date: 2020-09-11

Description: This technology, "Three-dimensional imaging and modeling of ultrasound image data," was created by Frank William Mauldin, Adam Dixon, and Kevin Owen on 2019-01-08. Abstract: The position and orientation of the ultrasound probe is tracked in three dimensions to provide highly accurate three-dimensional bone surface images that can be used for anatomical assessment and/or surgical guidance. The position and orientation of the treatment applicator may be tracked in three dimensions to provide feedback to align the intended path of the treatment applicator with the desired path of the treatment applicator, or to align the potential treatment field of the treatment applicator with the targeted anatomical site. The three-dimensional bone surface image may be fitted to a three-dimensional model of the anatomical site to provide or display additional information to the user to improve the accuracy of the anatomical assessment and/or surgical guidance.

1. An ultrasound imaging and therapy guidance system comprising:

an ultrasound probe that generates a positionally adjusted ultrasound beam to acquire three-dimensional image data of bony anatomy within a human subject;

an object tracker configured to detect a current position and a current orientation of the ultrasound probe;

a therapy applicator for delivering therapy to the human subject;

a mechanical device coupled to the ultrasound probe and the therapy applicator to set a predetermined relative position of the therapy applicator with respect to the ultrasound probe;

a processor;

a non-transitory computer memory operably coupled to the processor, the non-transitory memory including computer-readable instructions that cause the processor to:

detect a position and orientation of a three-dimensional bone surface location based at least in part on the three-dimensional image data and the current position and the current orientation of the ultrasound probe;

automatically detect a target treatment site positioned relative to the three-dimensional bone surface;

determine an appropriate position and an appropriate orientation of the therapy applicator required to deliver the therapy to the target treatment site; and

generate display data;

a display in electrical communication with the processor, the display generating an image based on the display data, the image comprising:

an indication of the three-dimensional bone surface location;

instantaneously acquired two-dimensional ultrasound image frames mutually aligned with a potential treatment field of the therapy applicator at a current position and a current orientation of the therapy applicator;

an indication of the target treatment site positioned relative to the three-dimensional bone surface; and

a graphical indicator indicating whether the target treatment site and the potential treatment field are aligned.

2. The system of claim 1, wherein the computer readable instructions further cause the processor to automatically detect the target treatment site located relative to the three-dimensional bone surface using a neural network.

3. The system of claim 1, wherein the computer readable instructions further cause the processor to detect the position and the orientation of the three-dimensional bone surface location by fitting the three-dimensional image data to a three-dimensional bone model.

4. The system of claim 3, wherein the image generated by the display further comprises a bone landmark location.

5. The system of claim 3, wherein the computer readable instructions further cause the processor to automatically detect the target treatment site using the three-dimensional bone model.

6. The system of claim 1, wherein the indication of the three-dimensional bone surface location is displayed as a two-dimensional bone surface image with a third dimension encoded to represent the bone surface location along the third dimension.

7. The system of claim 6, wherein the third dimension is graphically encoded to represent the bone surface location along the third dimension.

8. The system of claim 7, wherein the third dimension is color-coded to represent the bone surface locations along the third dimension.

9. The system of claim 1, wherein the appropriate position and the appropriate orientation of the therapy applicator are determined based at least in part on the predetermined relative position of the therapy applicator with respect to the ultrasound probe.

10. The system of claim 1 or 9, wherein:

the object tracker is configured to detect the current position and the current orientation of the therapy applicator, and

the appropriate position and the appropriate orientation of the therapy applicator are determined based at least in part on the current position and the current orientation of the therapy applicator.

11. The system of claim 1, wherein the image generated by the display further includes a current position and a current orientation of the potential treatment field.

12. The system of claim 1, wherein the image generated by the display further comprises the current position and the current orientation of the therapy applicator.

13. The system of claim 1, wherein the image generated by the display further comprises size and orientation information of the bony anatomy calculated from the three-dimensional bone surface locations.

14. The system of claim 1, wherein the therapy applicator comprises a needle guide, a needle, an ablation instrument, and/or a high intensity focused ultrasound transducer.

15. The system of claim 1, wherein the target treatment site comprises an epidural space, a subarachnoid space, or a medial branch nerve.

16. The system of claim 1, wherein the ultrasound probe is configured to be manually positionally adjusted by a user.

17. The system of claim 1, wherein the ultrasound probe is configured to be automatically positionally adjusted via a mechanical motorized mechanism.

18. The system of claim 1, wherein the object tracker includes an inductive proximity sensor.

19. The system of claim 1, wherein the object tracker includes ultrasound image processing circuitry.

20. The system of claim 19, wherein the ultrasound image processing circuitry is configured to determine the relative change in the current position of the ultrasound probe by comparing sequentially acquired ultrasound images in the three-dimensional image data.

21. The system of claim 1, wherein the object tracker includes an optical sensor.

22. The system of claim 21, wherein the object tracker comprises a fixed optical emitter that produces a swept laser detected by the optical sensor, the optical sensor being disposed on the ultrasound probe.

23. The system of claim 1, wherein the object tracker includes an integrated position sensor.

24. The system of claim 23, wherein the integrated position sensor comprises an electromechanical potentiometer, a linear variable differential transformer, an inductive proximity sensor, a rotary encoder, an incremental encoder, an accelerometer, and/or a gyroscope.

25. The system of claim 1, wherein the three-dimensional bone surface locations comprise three-dimensional spinal bone locations.

26. The system of claim 1, wherein the positionally adjusted ultrasound beam is adjusted by mechanical movement of the ultrasound probe and/or by electronic steering of the ultrasound beam.

27. A method for guiding a therapy applicator, comprising:

positionally adjusting an ultrasound beam produced by an ultrasound probe on a human subject to acquire three-dimensional image data of bony anatomy within the human subject;

detecting a current position and a current orientation of the ultrasound probe using an object tracker while performing position adjustment on the ultrasound beam;

determining a position and orientation of a three-dimensional bone surface location based at least in part on the three-dimensional image data and the current position and the current orientation of the ultrasound probe;

automatically detecting a target treatment site positioned relative to the three-dimensional bone surface;

determining an appropriate position and an appropriate orientation of the therapy applicator required to deliver therapy to the target treatment site;

displaying an image on a display in electrical communication with a computer, the image comprising:

an indication of the three-dimensional bone surface location;

instantaneously acquired two-dimensional ultrasound image frames mutually aligned with a potential treatment field of the therapy applicator at a current position and a current orientation of the therapy applicator;

an indication of the target treatment site positioned relative to the three-dimensional bone surface; and

a graphical indicator indicating whether the target treatment site and the potential treatment field are aligned.

28. The method of claim 27, further comprising: automatically detecting, in a computer, the target treatment site positioned relative to the three-dimensional bone surface using a neural network.

29. The method of claim 27, further comprising: fitting the three-dimensional image data to a three-dimensional bone model.

30. The method of claim 29, further comprising: determining the location and the orientation of the three-dimensional bone surface using the three-dimensional bone model.

31. The method of claim 30, further comprising: identifying a bone landmark location using the three-dimensional bone model.

32. The method of claim 31, wherein the image comprises the bone landmark location.

33. The method of claim 30, further comprising: automatically detecting the target treatment site using the three-dimensional bone model.

34. The method of claim 27, wherein the indication of the three-dimensional bone surface location is displayed as a two-dimensional bone surface image with a third dimension encoded to represent the bone surface location along the third dimension.

35. The method of claim 34, further comprising: graphically encoding the third dimension to represent the bone surface location along the third dimension.

36. The method of claim 35, further comprising: color coding the third dimension to represent the bone surface location along the third dimension.

37. The method of claim 27, further comprising: mechanically coupling a mechanical device, which is coupled to the ultrasound probe, to the therapy applicator, the mechanical device setting a predetermined relative position of the therapy applicator with respect to the ultrasound probe.

38. The method of claim 37, further comprising: determining the appropriate position and the appropriate orientation of the therapy applicator based at least in part on the predetermined relative position of the therapy applicator with respect to the ultrasound probe.

39. The method of claim 27 or 38, further comprising:

detecting the current position and the current orientation of the therapy applicator using the object tracker; and

determining the appropriate position and the appropriate orientation of the therapy applicator based at least in part on the current position and the current orientation of the therapy applicator.

40. The method of claim 27, wherein the image further comprises a current position and a current orientation of the potential treatment field.

41. The method of claim 27, wherein the image further comprises the current position and the current orientation of the therapy applicator.

42. The method of claim 27, wherein the image further comprises size and orientation information of the bony anatomy calculated from the three-dimensional bone surface location.

43. The method of claim 27, wherein the therapy applicator comprises a needle guide, a needle, an ablation instrument, and/or a high intensity focused ultrasound transducer.

44. The method of claim 27, wherein the target treatment site comprises an epidural space, a subarachnoid space, or a medial branch nerve.

45. The method of claim 27, wherein positionally adjusting the ultrasound beam comprises mechanically moving the ultrasound probe.

46. The method of claim 27, further comprising: positionally adjusting the ultrasound probe using a mechanical motorized mechanism.

47. The method of claim 27, wherein positionally adjusting the ultrasound beam comprises electronically scanning the ultrasound beam.

48. The method of claim 27, wherein the object tracker includes an inductive proximity sensor.

49. The method of claim 27, wherein the object tracker includes ultrasound image processing circuitry.

50. The method of claim 49, further comprising: determining, using the ultrasound image processing circuitry, a relative change in the current position of the ultrasound probe by comparing sequentially acquired ultrasound images in the three-dimensional image data.

51. The method of claim 27, wherein the object tracker includes an optical sensor.

52. The method of claim 51, wherein the object tracker comprises a fixed optical emitter that produces a swept laser detected by the optical sensor, the optical sensor being disposed on the ultrasound probe.

53. The method of claim 27, wherein the object tracker includes an integrated position sensor.

54. The method of claim 53, wherein the integrated position sensor comprises an electromechanical potentiometer, a linear variable differential transformer, an inductive proximity sensor, a rotary encoder, an incremental encoder, an accelerometer, and/or a gyroscope.

55. The method of claim 27, wherein the three-dimensional bone surface location comprises a three-dimensional spinal bone location.

56. The method of claim 27, wherein the current position and the current orientation of the ultrasound probe are detected using the object tracker.

57. The method of claim 27, further comprising:

acquiring two-dimensional ultrasound image data of the bony anatomy at a plurality of ultrasound probe locations; and

combining the two-dimensional ultrasound image data and the ultrasound probe locations to form the three-dimensional image data.

58. The method of claim 57, wherein the two-dimensional image data comprises pixels, and the method further comprises determining a three-dimensional location of each pixel based on the ultrasound probe locations.

59. The method of claim 27, further comprising: performing a bone enhancement process to enhance any bone and/or bone features in the ultrasound image.

60. The method of claim 27, further comprising:

receiving a user interface event; and

recording a reference position of the ultrasound probe based on a time at which the user interface event was received.

Technical Field

The present application relates generally to three-dimensional rendering of bone images acquired by ultrasound imaging.

Background

Medical ultrasound is commonly used to facilitate needle injection or probe insertion procedures, such as central venous line placement or various spinal anesthesia procedures. Commonly implemented techniques include using ultrasound imaging to locate anatomical landmarks (e.g., blood vessels or bone structures), followed by marking the patient's skin with a surgical marker near the ultrasound transducer. The ultrasound transducer is then removed, the needle is positioned in a position relative to the marking site, and the needle is then inserted.

Needle insertion, probe placement, and therapy delivery procedures require knowledge of the subcutaneous three-dimensional anatomy to ensure accurate placement of the therapeutic instrument. However, existing medical ultrasound systems are typically configured to provide only two-dimensional cross-sectional views of the subcutaneous anatomy. Therefore, navigating the treatment instrument three-dimensionally with reference to only two-dimensional cross-sectional views of the anatomical structure is technically challenging. Furthermore, few medical ultrasound systems provide visual cues to a practitioner to assist in determining whether a treatment device is aligned with a target anatomical site. Current systems cannot provide a medical provider with visual guidance for determining whether a treatment device is aligned with a target treatment site without complex registration to images from other 3D imaging modalities (e.g., CT or MRI).

Limitations of existing medical ultrasound systems mean that medical practitioners must undergo extensive training to compensate for the lack of real-time three-dimensional image guidance. This training burden contributes to an inadequate supply of medical practitioners who are qualified to perform interventional procedures.

Disclosure of Invention

The exemplary embodiments described herein have innovative features, no single one of which is essential or solely responsible for their desirable attributes. The following description and the annexed drawings set forth in detail certain illustrative implementations of the disclosure, indicative of several exemplary ways in which the various principles of the disclosure may be implemented. However, these illustrative examples are not exhaustive of the many possible embodiments of the disclosure. Without limiting the scope of the claims, some advantageous features will now be outlined. Other objects, advantages and novel features of the disclosure will be set forth in the following detailed description of the disclosure when considered in conjunction with the drawings, which are intended to illustrate and not to limit the invention.

One aspect of the invention relates to an ultrasound imaging and therapy guidance system comprising: an ultrasound probe that generates a positionally adjusted ultrasound beam to acquire three-dimensional image data of bony anatomy within a human subject; an object tracker configured to detect a current position and a current orientation of the ultrasound probe; a therapy applicator for delivering therapy to a human subject; a mechanical device coupled to the ultrasound probe and the therapy applicator to set a predetermined relative position of the therapy applicator with respect to the ultrasound probe; a processor; a non-transitory computer memory operably coupled to the processor. The non-transitory memory includes computer readable instructions that cause the processor to: detecting a position and orientation of a three-dimensional bone surface location based at least in part on the three-dimensional image data and a current position and a current orientation of the ultrasound probe; automatically detecting a target treatment site positioned relative to a three-dimensional bone surface; determining an appropriate position and an appropriate orientation of the therapy applicator required to deliver therapy to the target treatment site; and generating display data. The system also includes a display in electrical communication with the processor, the display generating an image based on the display data, the image including: an indication of three-dimensional bone surface location; two-dimensional ultrasound image frames acquired instantaneously, mutually aligned with a potential treatment field of the treatment applicator at a current position and a current orientation of the treatment applicator; an indication of a target treatment site located relative to a three-dimensional bone surface; and a graphical indicator indicating whether the target treatment site and the potential treatment field are aligned.

In one or more embodiments, the computer readable instructions further cause the processor to automatically detect a target treatment site located relative to the three-dimensional bone surface using the neural network. In one or more embodiments, the computer readable instructions further cause the processor to detect a position and an orientation of the three-dimensional bone surface location by fitting the three-dimensional image data to a three-dimensional bone model. In one or more embodiments, the image generated by the display further includes a bone landmark location. In one or more embodiments, the computer readable instructions further cause the processor to automatically detect the target treatment site using the three-dimensional bone model.

In one or more embodiments, the indication of the three-dimensional bone surface location is displayed as a two-dimensional bone surface image with a third dimension encoded to represent the bone surface location along the third dimension. In one or more embodiments, the third dimension is graphically encoded to represent the bone surface location along the third dimension. In one or more embodiments, the third dimension is color-coded to represent the bone surface location along the third dimension.

In one or more embodiments, the appropriate position and appropriate orientation of the treatment applicator is determined based at least in part on a predetermined relative position of the treatment applicator with respect to the ultrasound probe. In one or more embodiments, the object tracker is configured to detect a current position and a current orientation of the therapy applicator, and determine the appropriate position and the appropriate orientation of the therapy applicator based at least in part on the current position and the current orientation of the therapy applicator.

In one or more embodiments, the images generated by the display also include the current position and current orientation of the potential treatment field. In one or more embodiments, the image generated by the display also includes the current position and current orientation of the treatment applicator. In one or more embodiments, the image generated by the display further includes size and orientation information of the bony anatomy calculated from the three-dimensional bone surface locations.

In one or more embodiments, the treatment applicator includes a needle guide, a needle, an ablation instrument, and/or a high intensity focused ultrasound transducer. In one or more embodiments, the target treatment site includes an epidural space, a subarachnoid space, or a medial branch nerve. In one or more embodiments, the ultrasound probe is configured to be manually position adjusted by a user. In one or more embodiments, the ultrasound probe is configured to be automatically positionally adjusted by a mechanical motorized mechanism.

In one or more embodiments, the object tracker includes an inductive proximity sensor. In one or more embodiments, the object tracker includes ultrasound image processing circuitry. In one or more embodiments, the ultrasound image processing circuitry is configured to determine the relative change in the current position of the ultrasound probe by comparing sequentially acquired ultrasound images in the three-dimensional image data.

In one or more embodiments, the object tracker includes an optical sensor. In one or more embodiments, the object tracker includes a fixed optical emitter that produces a swept laser detected by the optical sensor, which is disposed on the ultrasound probe. In one or more embodiments, the object tracker includes an integrated position sensor. In one or more embodiments, the integrated position sensor includes an electromechanical potentiometer, a linear variable differential transformer, an inductive proximity sensor, a rotary encoder, an incremental encoder, an accelerometer, and/or a gyroscope. In one or more embodiments, the three-dimensional bone surface locations include three-dimensional spinal bone locations.

In one or more embodiments, the position of the position-adjusted ultrasound beam is adjusted by mechanical movement of the ultrasound probe and/or electrical steering of the position-adjusted ultrasound beam.

Another aspect of the invention relates to a method for guiding a therapy applicator, comprising: positionally adjusting an ultrasound beam produced by an ultrasound probe on a human subject to acquire three-dimensional image data of bony anatomy within the human subject; detecting a current position and a current orientation of the ultrasound probe using an object tracker while performing position adjustment on the ultrasound beam; determining a position and orientation of a three-dimensional bone surface location based at least in part on the three-dimensional image data and a current position and a current orientation of the ultrasound probe; automatically detecting a target treatment site positioned relative to a three-dimensional bone surface; determining an appropriate position and an appropriate orientation of the therapy applicator required to deliver therapy to the target treatment site; displaying an image on a display in electrical communication with the computer, the image comprising: an indication of three-dimensional bone surface location; two-dimensional ultrasound image frames acquired instantaneously, mutually aligned with a potential treatment field of the treatment applicator at a current position and a current orientation of the treatment applicator; an indication of a target treatment site located relative to a three-dimensional bone surface; and a graphical indicator indicating whether the target treatment site and the potential treatment field are aligned.

In one or more embodiments, the method further comprises using a neural network in a computer to automatically detect a target treatment site located relative to the three-dimensional bone surface.

In one or more embodiments, the method further comprises fitting the three-dimensional image data to a three-dimensional bone model. In one or more embodiments, the method further comprises determining the position and orientation of the three-dimensional bone surface using the three-dimensional bone model. In one or more embodiments, the method further comprises identifying a bone landmark location using the three-dimensional bone model. In one or more embodiments, the image includes bone landmark localization. In one or more embodiments, the method further comprises automatically detecting the target treatment site using the three-dimensional bone model.

In one or more embodiments, the indication of the three-dimensional bone surface location is displayed as a two-dimensional bone surface image with a third dimension encoded to represent the bone surface location along the third dimension. In one or more embodiments, the method further includes graphically encoding the third dimension to represent the bone surface location along the third dimension. In one or more embodiments, the method further includes color coding the third dimension to represent the bone surface location along the third dimension.

In one or more embodiments, the method further includes mechanically coupling a mechanical device coupled to the ultrasound probe with the treatment applicator, the mechanical device setting a predetermined relative position of the treatment applicator with respect to the ultrasound probe. In one or more embodiments, the method further includes determining the appropriate position and the appropriate orientation of the treatment applicator based at least in part on a predetermined relative position of the treatment applicator with respect to the ultrasound probe. In one or more embodiments, the method further includes detecting a current position and a current orientation of the therapy applicator using the object tracker; and determining an appropriate position and an appropriate orientation of the treatment applicator based at least in part on the current position and the current orientation of the treatment applicator.

In one or more embodiments, the image also includes a current position and a current orientation of the potential treatment field. In one or more embodiments, the image further includes the current position and the current orientation of the treatment applicator. In one or more embodiments, the image further includes size and orientation information of the bony anatomy calculated from the three-dimensional bone surface locations.

In one or more embodiments, the treatment applicator includes a needle guide, a needle, an ablation instrument, and/or a high intensity focused ultrasound transducer. In one or more embodiments, the target treatment site includes an epidural space, a subarachnoid space, or a medial branch nerve. In one or more embodiments, positionally adjusting the ultrasound beam includes mechanically moving the ultrasound probe.

In one or more embodiments, the method further comprises performing position adjustment of the ultrasound probe using a mechanical motorized mechanism. In one or more embodiments, positionally adjusting the ultrasound beam includes electronically scanning the ultrasound beam.

In one or more embodiments, the object tracker includes an inductive proximity sensor. In one or more embodiments, the object tracker includes ultrasound image processing circuitry. In one or more embodiments, the method further includes determining, using the ultrasound image processing circuitry, a relative change in a current position of the ultrasound probe by comparing sequentially acquired ultrasound images in the three-dimensional image data.

In one or more embodiments, the object tracker includes an optical sensor. In one or more embodiments, the object tracker includes a fixed optical emitter that produces a swept laser detected by the optical sensor, which is disposed on the ultrasound probe. In one or more embodiments, the object tracker includes an integrated position sensor. In one or more embodiments, the integrated position sensor includes an electromechanical potentiometer, a linear variable differential transformer, an inductive proximity sensor, a rotary encoder, an incremental encoder, an accelerometer, and/or a gyroscope.

In one or more embodiments, the three-dimensional bone surface locations include three-dimensional spinal bone locations. In one or more embodiments, the current position and the current orientation of the ultrasound probe are detected using the object tracker.

In one or more embodiments, the method further comprises acquiring two-dimensional ultrasound image data of the bony anatomy at multiple ultrasound probe locations, and combining the two-dimensional ultrasound image data and the ultrasound probe locations to form three-dimensional image data. In one or more embodiments, the two-dimensional image data includes pixels, and the method further includes determining a three-dimensional location of each pixel based on the ultrasound probe locations. In one or more embodiments, the method further includes performing a bone enhancement process to enhance any bone and/or bone features in the ultrasound image.

In one or more embodiments, the method further comprises receiving a user interface event; and recording a reference position of the ultrasound probe based on the time of receipt of the user interface event.

Drawings

For a fuller understanding of the nature and advantages of the present inventive concept, reference should be made to the following detailed description of the preferred embodiment taken together with the accompanying drawings in which:

FIG. 1 is a block diagram of a system for guiding an ultrasound probe and a therapy applicator in accordance with one or more embodiments;

FIG. 2 is a flow diagram illustrating a method for tracking and/or guiding an ultrasound probe and a therapy applicator in accordance with one or more embodiments;

FIG. 3 is a representative illustration of a display that graphically identifies regions within a human subject that have not been adequately scanned with an ultrasound probe;

FIG. 4 is a display of an exemplary 3D spine model, or exemplary 3D spine data, superimposed with spine analysis based on the 3D spine model for guiding an epidural injection;

FIG. 5 illustrates a display for guiding a needle along an appropriate needle trajectory in accordance with one or more embodiments;

FIG. 6 is a perspective view of a mechanical system including a mechanical device mechanically coupled to an ultrasound probe and a needle;

FIG. 7 illustrates an example of a three-dimensional display of spinal anatomy along an anterior-posterior line of sight; and

FIG. 8 shows a two-dimensional display of potential treatment fields aligned with the treatment site.

Detailed Description

Aspects of the present invention relate to an ultrasound system combined with three-dimensional (3D) position tracking that enables highly accurate 3D bone surface rendering for the purpose of anatomical assessment and/or surgical guidance (e.g., guiding a treatment applicator such as a needle and/or device during energy-based ablation). In some embodiments, the invention includes one or more of features (a) - (e). Other embodiments may include additional, fewer, and/or different features.

In feature (a), when the ultrasound probe is positionally adjusted proximate to a target region on the human subject (e.g., to acquire image data of bony anatomy proximate to the target region), a 3D bone image may be generated by tracking (e.g., with a position tracking system) the spatial position and optionally orientation of the ultrasound probe. Bone images may be automatically annotated, such as by providing an indication of joint or bone feature location, fracture location, indication of optimal needle insertion angle, indication of possible needle or treatment sites, and/or indication and degree of scoliosis and/or other bone anatomical abnormalities.

In feature (b), the 3D bone image is fitted to a model of the target anatomy, and this model may optionally be displayed together with the actual bone positioning.

In feature (c), real-time feedback may be provided to the user during the ultrasound probe scan (e.g., while acquiring the 3D bone image) so that the 3D anatomy proximate the target region is scanned at all positions and/or orientations necessary to achieve a 3D display of the reconstructed bone with annotation and/or model fitting information.

In feature (d), the position tracking system tracks the therapy applicator in addition to the ultrasound transducer. After the 3D bone information is constructed, the system can guide the treatment applicator to the desired location in real time. For example, the therapy applicator may be a needle, a needle guide, a catheter, an ultrasound system or probe (with or without a needle guide), a radio frequency ablation probe, or a High Intensity Focused Ultrasound (HIFU) transducer. The desired treatment site may be the epidural space, the facet joint, or the sacroiliac joint. In some embodiments, real-time guidance may include guiding the therapy applicator while therapy is being applied, for example, during energy-based ablation. The desired location may be a user-specified location, such as by indicating a location on the 3D bone reconstruction at which treatment should be applied. The system then guides the treatment applicator to the desired location so that the desired treatment site receives treatment when the treatment applicator is activated. Alternatively, the desired location may be provided automatically by the system. This may be the single optimal location for the treatment applicator to deliver treatment precisely to the desired treatment site, or the system may provide several candidate locations (e.g., at different intervertebral spaces).
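
As a concrete illustration of how the system of feature (d) might decide whether the treatment applicator is aimed at the desired treatment site, the sketch below checks whether the applicator's projected straight-line path passes within a small tolerance of the target. This is only a minimal geometric example, not the guidance method of this disclosure; the function name, the tolerance value, and the assumption that the tracker supplies a tip position and a unit pointing vector in a common coordinate frame are all illustrative.

import numpy as np

def path_aligned_with_target(tip, direction, target, tol=0.003):
    """Return True if the applicator's projected straight-line path passes
    within `tol` metres of the target treatment site.

    tip, direction -- applicator tip position and unit pointing vector
                      reported by the object tracker (illustrative inputs)
    target         -- target treatment site position, same coordinate frame
    """
    tip = np.asarray(tip, dtype=float)
    target = np.asarray(target, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    t = np.dot(target - tip, d)   # distance along the path to the point of closest approach
    if t < 0:                     # target lies behind the applicator
        return False
    closest = tip + t * d
    return float(np.linalg.norm(target - closest)) <= tol

# Applicator at the origin pointing along +z; target 1 mm off-axis at 50 mm depth
print(path_aligned_with_target([0, 0, 0], [0, 0, 1], [0.001, 0.0, 0.05]))   # True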

In feature (e), the ultrasound system, bone positioning and/or therapy applicator may be displayed in a Virtual Reality (VR) environment, an Augmented Reality (AR) environment or a Mixed Reality (MR) environment. Any of these environments may be displayed on the VR headset and/or a conventional computer screen, and/or on a screen attached to the ultrasound probe, and/or on a screen attached to the therapy applicator.

In VR, the user may be presented with a simulated 3D environment in which a stereoscopic head-mounted display and/or some other visual stimulation method is used to create the illusion of depth. If the display is unable to convey depth information, the VR display is simply a virtual 3D environment presented on a two-dimensional (2D) display (e.g., a monitor). This display limitation also applies to the following definitions of AR and MR systems.

In AR, some form of realism may be presented to a user by superimposing simulated ("virtual") 2D or 3D data in a visual environment. The combination of real content and virtual content may be accomplished using a camera to capture real content and/or by combining virtual content with the user's conventional vision using a transparent screen and/or other methods to inject visual information into the user's field of view. The visual environment may include a simulated 3D environment or a virtual 3D environment as described above with respect to VR.

MR is similar to AR in that real and simulated content are presented to the user seamlessly; however, in this modality, virtual and real entities can interact in real time. For example, a virtual ball may bounce off a real physical wall, or augmented anatomical information may move in space as a physical object location (e.g., a skin surface) is sensed. For the purposes of this application, AR includes MR as a subset.

Fig. 1 is a block diagram of a system 10 for guiding an ultrasound probe and a therapy applicator in accordance with one or more embodiments. System 10 includes an optional mechanical device 102, an ultrasound probe 104, an optional probe display 108, an object tracking system 112, an optional treatment applicator 116, an optional treatment applicator display 118, a fiducial 124, a camera 130, a main processing unit 136, a display 140, a computer memory 150, and a user interface device 160.

The ultrasound probe 104 includes one or more ultrasound transducers for imaging a target anatomical region within the subject. Exemplary ultrasound transducers include single-element transducers, linear arrays, curved arrays, two-dimensional arrays, and capacitive micromachined ultrasound transducers (CMUTs), all of which are commercially available and known to those skilled in the art. In operation, a user places the ultrasound probe 104 on the skin of a subject proximate a target anatomical region, e.g., prior to a therapeutic procedure (e.g., an epidural anesthesia procedure, an ultrasound therapy procedure, a surgical procedure, etc.) or as part of a diagnostic procedure (e.g., spinal anatomy analysis). The user then moves or scans (e.g., mechanically and/or electronically) the ultrasound probe 104 along the skin of the subject in the vicinity of the target anatomical region to acquire ultrasound images of the target anatomical region. Positionally adjusting the ultrasound probe 104 also positionally adjusts the ultrasound beam used to produce the ultrasound images. In another exemplary embodiment using a two-dimensional array transducer, the ultrasound beam produced by the ultrasound transducer may be electronically positionally adjusted using a programmable electronic transmit circuit that applies a time delay to particular elements of the two-dimensional array (e.g., adjusts the relative phase of the drive signals to particular elements of the two-dimensional array). Such operations for producing three-dimensional ultrasound image data without mechanical motion of a two-dimensional transducer array are known to those skilled in the art and are readily available commercially. In this embodiment, the positional adjustment of the ultrasound beam is tracked based on knowledge of the time delays applied to the elements of the two-dimensional array, for example as disclosed in U.S. Patent No. 6,419,633, entitled "2D Ultrasonic Transducer Array for Two Dimensional and Three Dimensional Imaging," and U.S. Patent No. 5,329,496, entitled "Two-Dimensional Array Ultrasonic Transducers," which are incorporated herein by reference. The acquired ultrasound images may be displayed on an optional probe display 108 (disposed on or integrated in the ultrasound probe 104) and/or on the display 140.
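
The electronic position adjustment described above relies on per-element transmit delays. The following sketch shows one conventional way such delays could be computed for a planar two-dimensional array focused or steered toward a point; the array size, pitch, and speed of sound are illustrative assumptions, not parameters taken from this disclosure.

import numpy as np

def steering_delays(n_x, n_y, pitch, focus_point, c=1540.0):
    """Per-element transmit delays (seconds) that focus/steer a planar
    two-dimensional array toward focus_point = (x, y, z) in metres, with z
    the depth axis and the elements on a regular grid in the z = 0 plane.
    Delays are offset so the earliest-firing element has zero delay.
    """
    xs = (np.arange(n_x) - (n_x - 1) / 2.0) * pitch
    ys = (np.arange(n_y) - (n_y - 1) / 2.0) * pitch
    ex, ey = np.meshgrid(xs, ys, indexing="ij")

    fx, fy, fz = focus_point
    path = np.sqrt((fx - ex) ** 2 + (fy - ey) ** 2 + fz ** 2)   # one-way path lengths

    # Elements farther from the focus must fire earlier
    return (path.max() - path) / c

# Example: 32 x 32 array, 300 um pitch, focus steered 10 mm off-axis at 40 mm depth
delays = steering_delays(32, 32, 300e-6, (0.01, 0.0, 0.04))
print(delays.shape, delays.max() * 1e9, "ns maximum delay")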

During ultrasound imaging, the ultrasound probe 104 is tracked in three-dimensional space using the object tracking system 112. The object tracking system 112 may generally track the ultrasound probe 104 in three-dimensional space using various methods. For example, 3D tracking may be achieved by tracking two or more locations on the ultrasound probe 104, which in some embodiments may include tracking two or more locations on a rigid portion of the ultrasound probe 104. The object tracking system 112 may also track the ultrasound probe 104 in only one or two dimensions if the ultrasound probe 104 is mechanically constrained in the other dimensions, such as by a mechanical frame or guide of the kind implemented in commercially available three-dimensional rocking ultrasound transducers. Additionally or alternatively, the object tracking system 112 may track the ultrasound probe 104 in three-dimensional space by tracking the position and orientation of the ultrasound probe 104 using an integrated position sensor (e.g., one that resolves gravity onto three orthogonal axes). Additionally, the object tracking system 112 may optionally utilize ultrasound data processing circuitry to calculate changes in relative position by comparing sequentially acquired 2D images and/or 3D volumes using speckle tracking and/or image similarity tracking techniques, both well known in the art. These techniques are described, for example, in U.S. Patent No. 6,012,458, entitled "Method and Apparatus for Tracking Scan Plane Motion in Free-hand Three-dimensional Ultrasound Scanning Using Adaptive Speckle Correlation," and U.S. Patent No. 6,728,394, entitled "Dynamic Measurement of Object Parameters," which are incorporated herein by reference.

The object tracking system 112 may use an optical tracking system, a magnetic-based tracking system, a radio or acoustic tracking system, a camera-based tracking system, a position sensor, and/or ultrasound image processing circuitry to determine the position and orientation of the ultrasound probe 104. The optical tracking system may include one or more fixed optical transmitters with optical synchronization pulses, and a swept laser that is detected by a light sensor on the target device (i.e., the ultrasound probe 104). An example of such an optical tracking system is the HTC Vive™ Lighthouse tracking system available from HTC Corporation of Taiwan.

The magnetic-based tracking system may include multiple pairs of fixed and moving coils or other magnetic field sensors that may be used to determine the relative position of the moving coil based on the variable mutual inductance of each pair of fixed and moving coils or the magnetic field measured by the sensors. The mutual inductance or magnetic field value is a function of the separation distance between each pair of stationary and moving coils, or sensors. Examples of magnetic field 3D tracking systems include those described in U.S. Patent No. 6,774,624, entitled "Magnetic Tracking System," which is incorporated herein by reference, and the tracking products sold by Polhemus (Colchester, VT, USA) and NDI Medical, LLC (Ontario, Canada).

A radio or acoustic tracking system can track the position of an object on a smaller scale using the time of flight between a fixed transmitter and a mobile receiver (and/or a fixed receiver and a mobile transmitter), optionally using correlation methods to fine-tune the distance estimate. These systems emit radio-frequency signals or acoustic energy and typically use time-of-flight delays and/or variations in the received waves, together with propagation models, to estimate position and/or orientation; sensing range and accuracy are limited substantially only by the signal-to-noise ratio. In some embodiments, the radio or acoustic tracking system may function similarly to a Global Positioning System (GPS).
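
For illustration, a time-of-flight tracker of the kind described here could estimate position by trilateration from several fixed transmitters. The sketch below linearizes the range equations and solves them by least squares; the beacon layout and the acoustic speed of sound in air (343 m/s) are assumptions for the example only.

import numpy as np

def trilaterate(anchors, times_of_flight, c=343.0):
    """Least-squares position estimate from times of flight to fixed
    transmitters ("anchors"). Subtracting the first anchor's range equation
    from the others linearises the problem in the unknown position, which is
    then solved with ordinary least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(times_of_flight, dtype=float) * c
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (p0 - anchors[1:])
    b = (ranges[1:] ** 2 - r0 ** 2
         + np.sum(p0 ** 2) - np.sum(anchors[1:] ** 2, axis=1))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Four non-coplanar acoustic beacons and a simulated receiver at (1, 2, 0.5) m
beacons = [(0, 0, 3.0), (4, 0, 2.5), (0, 4, 2.8), (4, 4, 3.2)]
true_pos = np.array([1.0, 2.0, 0.5])
tofs = [np.linalg.norm(true_pos - np.array(p)) / 343.0 for p in beacons]
print(trilaterate(beacons, tofs))   # approximately [1.0, 2.0, 0.5]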

A camera-based tracking system includes one or more cameras attached to one or both of a fixed object and a moving object. The images from the camera may be analyzed to determine the relative positions of the fixed structure or object and the moving structure or object within the camera field of view.

In some embodiments, the position sensor may be integrated or disposed on or in the ultrasound probe 104 (e.g., in a housing of the ultrasound probe 104), or the ultrasound probe 104 may be attached or affixed to an object that includes such a position sensor (e.g., integrated therein or disposed on or in the object), and the distance between the object and the ultrasound probe 104 is known. The position sensor is capable of tracking the relative movement of the ultrasound probe through 3D space. Examples of position sensors include electromechanical potentiometers, linear variable differential transformers, inductive proximity sensors, rotary encoders, incremental encoders, and inertial tracking using integrated accelerometers and/or gyroscopes.

Additionally or alternatively, the 2D and/or 3D position of the ultrasound probe 104 may be tracked using speckle tracking or other image processing-based methods for motion tracking (e.g., block tracking) of sequentially acquired 2D/3D ultrasound data sets. Such ultrasound image-based tracking may be performed, at least in part, by ultrasound image processing circuitry disposed in object tracking system 112 or operatively connected to object tracking system 112.
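
A minimal sketch of image-based motion tracking between sequentially acquired frames is shown below. It uses phase correlation to estimate an in-plane translation, which is a coarse stand-in for the speckle- or block-tracking techniques referenced above; the frame size and synthetic test data are illustrative.

import numpy as np

def frame_shift(frame_a, frame_b):
    """Estimate how far frame_b is translated relative to frame_a, in pixels
    (row, column), using phase correlation -- a coarse, illustrative stand-in
    for speckle/block tracking of sequentially acquired frames.
    """
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12                 # normalised cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(corr.shape, dtype=float)
    # Peaks beyond the midpoint correspond to negative shifts (FFT wrap-around)
    return np.where(peak > shape / 2.0, peak - shape, peak)

# Synthetic check: frame_b is frame_a shifted down 3 rows and right 5 columns
rng = np.random.default_rng(0)
a = rng.standard_normal((128, 128))
b = np.roll(a, (3, 5), axis=(0, 1))
print(frame_shift(a, b))   # approximately [3. 5.]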

In some embodiments, an optional mechanical device 102 may be used to constrain the position of the treatment applicator 116. The mechanical device 102 sets the position of the ultrasound probe 104 relative to the treatment applicator 116. If the exact dimensions of the mechanical device are known, the exact position of the treatment applicator relative to the ultrasound probe is also known. One example of such a mechanical device is the hardware integrated into commonly used ultrasound needle guides; such needle guides have a clamp or similar mechanical mechanism that fixes the position of the needle guide relative to the ultrasound probe. Another example is a mechanical frame that holds both the ultrasound probe 104 and the High Intensity Focused Ultrasound (HIFU) therapy applicator 116.

Fig. 6 is a perspective view of a mechanical system 60 including a mechanical device 600, the mechanical device 600 mechanically coupled to an ultrasound probe 610 and a needle 620. The mechanism includes a first portion 602 and a second portion 604, the first portion 602 and the second portion 604 being removably attached (e.g., using a clamp, screw, or other attachment mechanism). The first portion 602 and the second portion 604 are arranged around the ultrasound probe 610 to rigidly hold the ultrasound probe 610 therebetween. The needle 620 passes through the aperture 606 defined in the arm 608 of the first portion 602 of the mechanism 600. Thus, the mechanism 600 sets the relative position and orientation of the ultrasound probe 610 and the needle 620. It should be noted that in other embodiments, the needle 620 may be another treatment applicator (e.g., treatment applicator 116).

Returning to fig. 1, the data output of the ultrasound probe 104 and the object tracking system 112 are provided to a computer that includes a main processing unit 136. The main processing unit 136 may process the data and output image data to the display 140 and/or the optional probe display 108, as described herein. The display 140 may be a two-dimensional display (e.g., a computer monitor) or a three-dimensional display, such as a virtual reality helmet that may be worn by a user.

The object tracking system 112 may also track the optional treatment applicator 116 and/or fiducial markers 124 in three-dimensional space. The object tracking system 112 may track the treatment applicator 116 in three-dimensional space in the same or substantially the same manner as it tracks the ultrasound probe 104. The fiducial markers 124 may be markers in absolute space that are independent of subsequent subject movement, and/or they may be markers physically attached to an object (e.g., the ultrasound probe 104 and/or the therapy applicator 116) and thus may be tracked as they subsequently move. In some embodiments, a fiducial marker may be physically attached to the human subject. The object tracking system 112 may track the three-dimensional position and optionally the orientation of the fiducial markers 124 in three-dimensional space, as described further below.

The camera 130 is in electrical communication with the main processing unit 136. The camera 130 may be static (i.e., a camera mounted at a fixed location) or dynamic such that its position is also tracked in 3D space (e.g., a camera worn by the user, such as a forward facing camera that is part of a virtual reality helmet, such as an HTC Vive™ helmet). The camera 130 may be used to capture images of the human subject and/or the device user such that, for example, if a virtual reality headset is used in surgery on the back of the subject, the back of the human subject may be displayed in addition to other information (e.g., 3D spine model fitting, 3D bone composite images, fiducials 124, user annotations, analysis, and/or therapy applicator 116, among other items) while the device user's arm holds the therapy applicator 116.

Alternatively, the treatment applicator 116 may incorporate integrated position sensors or may be attached to a mechanism incorporating integrated position sensors that are capable of tracking the position of the treatment applicator 116 through 3D space and the relative position of the treatment applicator 116 with respect to the ultrasound probe 104. For example, the therapy applicator 116 may be a needle that may be attached to a needle guide rigidly mounted on the ultrasound probe 104. The needle guide may contain a rotary encoder mechanism by which the relative angular trajectory of the needle with respect to the ultrasound probe may be measured. In addition, the linear travel of the needle through the needle guide and into the human subject may be measured by a linear variable differential transformer integrated in the needle guide.
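
Given the encoder readings described above, the needle tip can be located in the probe's coordinate frame with simple geometry. The sketch below assumes an illustrative guide geometry (a pivot offset from the transducer face and rotation within the imaging plane); the offset, angle, and insertion length are placeholders, not dimensions from this disclosure.

import numpy as np

def needle_tip_in_probe_frame(pivot_offset, angle_rad, insertion_len):
    """Needle-tip position expressed in the ultrasound probe's coordinate frame.

    Assumed (illustrative) geometry: the needle pivots about a point located at
    pivot_offset (x, y, z in metres, z = probe depth axis) and rotates within
    the x-z imaging plane. angle_rad would come from the guide's rotary encoder
    and insertion_len from its linear (e.g., LVDT) sensor.
    """
    direction = np.array([np.sin(angle_rad), 0.0, np.cos(angle_rad)])
    return np.asarray(pivot_offset, dtype=float) + insertion_len * direction

# Pivot 25 mm lateral to the transducer face, 20-degree trajectory, 48 mm advanced
print(needle_tip_in_probe_frame((0.025, 0.0, 0.0), np.deg2rad(20.0), 0.048))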

The computer memory 150 comprises a non-transitory computer memory operatively coupled to the main processing unit 136. Memory 150 may store computer programs or applications, instructions, and/or data sets that may enable main processing unit 136 to perform the functions described herein.

The user interface device 160 may include a mouse, touch screen, virtual buttons, mechanical buttons, microphone (e.g., for receiving voice commands), or other device that allows a user to interact with a computer.

Other aspects of the system 10 will be described in conjunction with the flowchart 20 of FIG. 2, which shows a method for tracking and/or guiding an ultrasound probe and a therapy applicator in accordance with one or more embodiments.

In step 204, the three-dimensional position and orientation of the ultrasound probe 104 is tracked as the user of the ultrasound probe 104 places it on and/or moves the ultrasound probe 104 along the skin of the human subject proximate to the target anatomical region. As described above, the three-dimensional position and orientation of the ultrasound probe 104 may be tracked using the object tracking system 112.

In step 208, the main processing unit 136 calculates the three-dimensional positions of the ultrasound image pixels. The locations of ultrasound image pixels in one-dimensional (1D), two-dimensional (2D), or three-dimensional (3D) space have a fixed spatial relationship with respect to the position and orientation of the ultrasound probe 104 (e.g., the ultrasound transducer) at a particular instant in time. For example, at one instant, the position and orientation of the ultrasound probe may be described by a three-dimensional position vector (e.g., r_0 = (0.5 m, 10 m, 10 m) on the x, y, z axes) and a set of three orthogonal unit vectors (e.g., i, j, k) such that each individual ultrasound pixel 'n' (of N total pixels) has a spatial position described by r_n = r_0 + a_n*i + b_n*j + c_n*k, where a_n, b_n, and c_n describe the pixel position relative to the probe on three arbitrary but fixed orthogonal axes. Using this information, a linear transformation in 3D space can be constructed to calculate the instantaneous position of each ultrasound image pixel of a 1D, 2D, or 3D ultrasound image.
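
The transformation r_n = r_0 + a_n*i + b_n*j + c_n*k can be applied to all pixels at once as a small matrix operation, as in the sketch below; the field-of-view dimensions in the example are arbitrary.

import numpy as np

def pixel_positions_3d(r0, i_hat, j_hat, k_hat, a, b, c):
    """World-space positions of ultrasound image pixels.

    r0                  -- probe position (3,) from the object tracker
    i_hat, j_hat, k_hat -- orthonormal probe axes (3,) from the object tracker
    a, b, c             -- per-pixel offsets (N,) along those axes (a_n, b_n, c_n),
                           fixed by the probe and image geometry
    Returns an (N, 3) array of r_n = r_0 + a_n*i + b_n*j + c_n*k.
    """
    R = np.column_stack([i_hat, j_hat, k_hat])    # 3x3 rotation (probe axes as columns)
    offsets = np.column_stack([a, b, c])          # N x 3
    return np.asarray(r0, dtype=float) + offsets @ R.T

# A 2D image plane spanned by i (lateral) and k (depth), so b_n = 0 for every pixel
lat = np.linspace(-0.02, 0.02, 128)               # 4 cm lateral field of view
dep = np.linspace(0.0, 0.05, 256)                 # 5 cm depth
A, C = np.meshgrid(lat, dep, indexing="ij")
pos = pixel_positions_3d(r0=[0.5, 10.0, 10.0],
                         i_hat=[1, 0, 0], j_hat=[0, 1, 0], k_hat=[0, 0, 1],
                         a=A.ravel(), b=np.zeros(A.size), c=C.ravel())
print(pos.shape)                                  # (32768, 3)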

The imaging field of view and ultrasound image pixels (e.g., 'n') may occupy a known spatial region projected from the ultrasound transducer element. The spatial relationship between the probe and the field of view can be derived from known geometric relationships inherent to ultrasound probe design. In some embodiments, the ultrasound probe may naturally create a 3D image. In other embodiments, the ultrasound probe forms a 3D image by combining separate 2D images.

In step 212, the ultrasound probe 104 acquires ultrasound images of the target anatomical region, which may include bone and/or bone features, at a first location. After acquiring the ultrasound images, the main processing unit 136 performs a bone enhancement process in step 214 to enhance any bone and/or bone features in the ultrasound images. Bone enhancement processing may be performed using any method known in the art, such as phase coherence between adjacent ultrasound echoes from the same bone surface, directional log-Gabor filtering, and rank-reduction methods that enhance bone reflections. In another example, bone shadowing and other physical aspects of acoustic/ultrasound interaction with bone structures may be used to enhance bone features, such as described in U.S. Patent Application Publication No. 2016/0012582, entitled "Systems and Methods for Ultrasound Imaging," published on January 14, 2016; U.S. Patent Application Publication No. 2016/0249879, entitled "Systems and Methods for Ultrasound Imaging of Regions Containing Bone Structure," published on September 1, 2016; and/or PCT Application No. PCT/17/47472, entitled "Ultrasound and Bone Shadow Detection and Imaging," published on August 18, 2017, which are incorporated herein by reference.
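
For illustration only, the toy filter below scores each pixel by its brightness combined with the darkness of the acoustic-shadow region beneath it. It is a simplistic heuristic standing in for the referenced bone-enhancement methods (phase coherence, directional log-Gabor filtering, rank reduction, shadow modeling), and the window length is an arbitrary assumption.

import numpy as np

def bone_enhance(bmode, shadow_len=40, eps=1e-6):
    """Toy bone-likelihood map for one B-mode frame (rows = increasing depth).

    Heuristic only: a bone surface tends to appear as a bright reflector with an
    acoustic shadow beneath it, so each pixel is scored by its own brightness
    multiplied by the darkness of the window below it.
    """
    img = bmode.astype(float)
    img /= img.max() + eps
    rows, cols = img.shape
    score = np.zeros_like(img)
    for r in range(rows):
        below = img[r + 1:r + 1 + shadow_len, :]
        if below.size:
            shadow = below.mean(axis=0)       # mean intensity under row r
        else:
            shadow = np.ones(cols)            # no window at the bottom edge -> zero score
        score[r, :] = img[r, :] * (1.0 - shadow)
    return score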

In optional step 226, fiducial markers 124 are generated and tracked. Using the ultrasound probe 104 tracked in step 204, optionally while it is in contact with the patient, the instantaneous position and orientation of the ultrasound probe 104 may be recorded in response to a user interface event such that the position of the tip of the ultrasound probe 104 is stored as a reference position (e.g., as a fiducial marker 124). For example, the tip of the ultrasound probe 104 may be positioned at the sacrum, or alternatively at a bony prominence above the intergluteal cleft, and a user interface button may then be pressed to record the reference position of the tip of the ultrasound probe 104 (e.g., as the fiducial marker 124). Additionally, by capturing several or many fiducial markers 124 associated with a single subject, a surface, such as the skin surface of the back or along the length of the ultrasound probe 104, may be captured.

In one example, the three-dimensional position of the fiducial marker 124 and/or fiducial position is tracked using the object tracking system 112 and/or a still camera (e.g., camera 130). The three-dimensional positions of the fiducial markers 124 and/or fiducial locations may be tracked using a method similar to the tracking of the ultrasound probe 104 in step 204. As described above, the user interface button may be activated to indicate the position of the fiducial marker 124 in space, which may be tracked using the object tracking system 112 and/or a still camera (e.g., camera 130).

In another example, the three-dimensional position of the fiducial markers 124 and/or fiducial locations may be tracked using a method similar to tracking the ultrasound probe 104 in step 204. The trackable object may be affixed to the skin of a human subject, and the trackable object may be used as a permanent fiducial marker 124 that will track the subject's motion in real time.

In yet another example, a three-dimensional position of a trackable or fixed position object (which may operate as fiducial markers 124) affixed to the skin of the subject may be tracked with a camera (e.g., camera 130) and/or object tracking system 112. In some embodiments, the object affixed to the skin of the subject may include a sticker with a spatially coded identification and/or a color coded identification. Spatially and/or color coded identification can be used to determine the instantaneous position of an object (e.g., sticker) being tracked, and knowledge of the geometry of the camera imaging can be used to track the reference position in real time as the subject and object move. The camera may be a spatially fixed camera or a "dynamic" camera, such as a forward facing camera on a virtual reality helmet.
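
One possible way to wire up the fiducial capture of step 226 is sketched below: when a user-interface event arrives, the current probe pose is read from the tracker and the probe-tip position is stored as a fiducial. The callback name, the pose format (origin plus rotation matrix), and the tip offset are illustrative assumptions.

import numpy as np

class FiducialRecorder:
    """Store a probe-tip position as a fiducial whenever a UI event arrives.

    get_probe_pose is assumed to return (r0, R): the tracked probe origin (3,)
    and a 3x3 rotation matrix; tip_offset is the probe tip expressed in the
    probe's own frame. Both names and formats are illustrative.
    """

    def __init__(self, get_probe_pose, tip_offset):
        self.get_probe_pose = get_probe_pose
        self.tip_offset = np.asarray(tip_offset, dtype=float)
        self.fiducials = []

    def on_button_press(self):
        r0, R = self.get_probe_pose()
        tip_world = np.asarray(r0, dtype=float) + np.asarray(R, dtype=float) @ self.tip_offset
        self.fiducials.append(tip_world)
        return tip_world

# Probe held upright with its tip 30 mm beyond the tracked origin
recorder = FiducialRecorder(lambda: (np.array([0.1, 0.2, 0.0]), np.eye(3)),
                            tip_offset=[0.0, 0.0, 0.03])
print(recorder.on_button_press())   # e.g. a sacrum fiducial at [0.1, 0.2, 0.03]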

In step 218, the image and the position information are combined to form a 3D composite image. In this step, the outputs of steps 214 and 208 (respectively, the bone-enhanced ultrasound data sets from successive captures and the 3D position of each pixel from each capture) are combined to produce a set of bone-enhanced ultrasound pixels, each pixel corresponding or registered to a particular location in 3D space. This processing may be performed by, and/or may involve, the ultrasound probe 104, the 3D object tracking system 112, the main processing unit 136, and the optional fiducial markers 124. In addition to the bone-enhanced ultrasound data, the combined data set may include the optional fiducial (reference) locations. The processing of step 218 is commonly referred to as "freehand" 3D imaging by those skilled in the ultrasound art.

In some embodiments, step 218 may include using the 3D position information of the ultrasound probe 104 provided by the object tracking system 112 to collectively register the 2D frames of ultrasound image data into the 3D volume. In some embodiments, this may be accurate to about 1 mm.
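
As a concrete illustration of this "freehand" compounding step, the sketch below (Python with NumPy) maps each pixel of one tracked 2D frame into world coordinates using a probe pose. The pixel spacing, the pose format (translation plus rotation matrix), and the image-plane axis conventions are illustrative assumptions, not the actual interfaces of the system described here.

```python
import numpy as np

def frame_pixels_to_world(frame, pixel_spacing_mm, probe_position_mm, probe_rotation):
    """Map every pixel of one 2D bone-enhanced frame into 3D world coordinates.

    frame             : (rows, cols) array of bone-enhanced intensities
    pixel_spacing_mm  : (row_spacing, col_spacing) in millimeters
    probe_position_mm : (3,) world-space translation of the image origin
    probe_rotation    : (3, 3) rotation from the image plane to world space
    Returns (N, 3) world coordinates and the matching (N,) intensities.
    """
    rows, cols = frame.shape
    r_idx, c_idx = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # Image-plane coordinates: lateral along columns, depth along rows,
    # zero elevation because a single frame is planar.
    plane = np.stack([c_idx * pixel_spacing_mm[1],
                      r_idx * pixel_spacing_mm[0],
                      np.zeros_like(r_idx, dtype=float)], axis=-1).reshape(-1, 3)
    world = plane @ probe_rotation.T + probe_position_mm
    return world, frame.reshape(-1)

# Compound one synthetic frame captured with the probe tilted by 15 degrees.
theta = np.deg2rad(15.0)
rotation = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(theta), 0.0, np.cos(theta)]])
points, values = frame_pixels_to_world(np.random.rand(64, 128), (0.3, 0.3),
                                       np.array([10.0, 0.0, 50.0]), rotation)
```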

Additionally or alternatively, step 218 may include a data-dependent frame-to-frame registration operation (e.g., blob tracking) to better align image features in the 2D frame of ultrasound image data into the 3D volume. This would be an iterative semi-rigid registration operation that would preserve the spatial relationship between image features but reduce the registration error to approximately sub-millimeter error.

Additionally or alternatively, step 218 may include applying some sort of persistence mapping or other method to improve the specificity of bone feature detection in the multi-sampled volumetric region. A persistence map or other method may exclude false positive bone features that are not present in all samples of the same region.
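
A minimal sketch of one possible persistence map is shown below, assuming bone detections have already been quantized to voxel indices; the voxel grid and the acceptance fraction are illustrative choices rather than the system's actual parameters.

```python
import numpy as np

def persistence_filter(detections_per_pass, grid_shape, min_fraction=0.75):
    """detections_per_pass: list of (N_i, 3) integer voxel indices, one entry per
    pass of the probe over the region. Returns a boolean volume that is True only
    where a bone detection persisted across most passes."""
    hit_count = np.zeros(grid_shape, dtype=np.int32)
    n_passes = max(len(detections_per_pass), 1)
    for voxels in detections_per_pass:
        seen = np.zeros(grid_shape, dtype=bool)
        seen[tuple(voxels.T)] = True      # each pass votes at most once per voxel
        hit_count += seen
    # A fuller implementation would normalize only over passes that actually
    # swept each voxel; here every pass is assumed to have covered the region.
    return (hit_count / n_passes) >= min_fraction
```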

Techniques for combining images and positional information to generate a 3D composite image have been described, for example, in the following documents, which are incorporated herein by reference: (1) R.Rohling, A.Gee, L.Berman, "A composition of free three-dimensional ultrasound reconstruction techniques," Medical Image Analysis,3(4): 339-; and (2) O.V.Solberg, F.Lindseth, H.Torp, R.E.Blake, T.A.N.Hemes, "Freehand 3D Ultrasound reconstruction algorithms-review)," Ultrasound in medicine & Biology,33(7): 991-.

As illustrated by the following examples, there are several alternative ways to generate a 3D composite image in step 218, which may be used alone or in some arbitrary combination. For example, the data sets from each bone enhanced ultrasound capture (e.g., the output of step 214) may be viewed as separate subsets of 3D samples that may be searched and analyzed in future processing steps. In another example, if the data set from each bone enhanced ultrasound capture is taken as a scalar value placed in 3D space, the spatial frequency characteristics of the data set can be used together with Nyquist-Shannon (Nyquist-Shannon) sampling theory to resample the 3D data, resulting in a uniform or non-uniform 3D scalar field to simplify further analysis.

In yet another example, the data set from each bone enhanced ultrasound capture may be considered a vector value, as each scalar value from a single ultrasound frame also has a direction associated with the corresponding acoustic wave front, given by the acoustic wave propagation theory. The bone enhanced pixel values from step 214 may have varying sensitivities based on the angle that the acoustic wave vector makes with the bone surface. This means that the vector data set in 3D space contains more abundant information that can be used to improve subsequent analysis.

The data set from each bone enhanced ultrasound capture (e.g., the output of step 214) may be combined with the 3D data set resulting from the previous 2D scan using several methods, which may be used alone or in some arbitrary combination, such as in example (a) - (c) below.

Example (a) includes additively combining the 3D data to support a "density" function in 3D space.

Example (b) includes using the existing 3D data as "prior" probabilities of three-dimensional bone surface locations in space. The data obtained from a single scan can then be used to iteratively update the 3D bone surface probability function in space. Furthermore, to filter out "false positive" bone surfaces (e.g., due to loss of ultrasound probe contact), the 3D bone surface probability volume function may also have an "age" parameter. The age parameter may be used to retire bone surface locations that have a fairly low probability (e.g., less than 25%) and that are not reinforced (i.e., do not have their probability increased) by subsequent scans over a period of time (e.g., over a certain number of scans). The probability data may also improve the accuracy of real bone surfaces composited from several to many scans, where the position in space and the bone detection are effectively averaged or compounded over many partially independent measurements. The compounding of the bone probability may be a non-linear function of the probabilities of the existing and new scans, as well as of the age history of the scans forming the existing probabilities. A sketch of one such update rule appears after example (c) below.

Example (c) includes using other a priori bone probabilities. For example, if skin surface fiducial markers and/or some bone fiducial markers (e.g., hip anatomy) are identified, these may be used to modify the probability of bone in space. Similarly, once the 3D model of optional step 230 has been at least partially constructed, it may also modify the bone probabilities, for example so that bone surfaces are more likely near anatomy identified by the model as a spinous process and less likely near anatomy identified by the model as an intervertebral space.
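
The following is a minimal sketch, in Python with NumPy, of the probability update with an age parameter described in example (b). The class name, the Bayesian-style update rule, and the retirement thresholds are illustrative assumptions rather than the specific algorithm used by the system.

```python
import numpy as np

class BoneProbabilityVolume:
    """Per-voxel probability of bone, updated scan by scan, with an age counter
    that retires stale low-probability detections (possible false positives)."""

    def __init__(self, shape, prior=0.1, retire_prob=0.25, retire_age=20):
        self.p = np.full(shape, prior)          # current P(bone) per voxel
        self.age = np.zeros(shape, dtype=int)   # scans since last reinforcement
        self.retire_prob = retire_prob
        self.retire_age = retire_age

    def update(self, likelihood_bone, scanned_mask):
        """likelihood_bone: per-voxel evidence in [0, 1] that this scan saw bone.
        scanned_mask: True where this scan actually sampled the volume."""
        p, l = self.p, likelihood_bone
        posterior = (l * p) / (l * p + (1.0 - l) * (1.0 - p) + 1e-9)
        self.p = np.where(scanned_mask, posterior, p)
        reinforced = scanned_mask & (posterior > p)
        self.age = np.where(reinforced, 0, self.age + 1)
        # Retire stale, low-probability surfaces that were never re-confirmed.
        stale = (self.p < self.retire_prob) & (self.age > self.retire_age)
        self.p = np.where(stale, 0.0, self.p)
```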

In step 222, ultrasound image data is acquired at the next or a subsequent location (i.e., after the ultrasound image was acquired at the first location in step 212). The ultrasound probe 104 is able to capture ultrasound data sets at successive positions and successive times, with some control over and/or reporting of the time at which the data sets were captured so that they can be registered in space (e.g., in steps 208 and 218). This can be done, for example, by controlling the timing of data capture to coincide with the physical location, or by continuously and repeatedly capturing ultrasound frames while accurately recording the timing of the ultrasound frames relative to the motion tracking sample instants. The ultrasound probe 104, the object tracking system 112, and the main processing unit 136 may be involved in this processing step.

In step 230, the landmark anatomy is automatically detected by a model-based or data-based algorithm. In one embodiment, this may include a model fitting algorithm that matches the synthesized 3D image (e.g., the output of step 218) to a 3D model, such as disclosed in U.S. patent No. 10,134,125 entitled Systems and Methods for Ultrasound Imaging, which is incorporated herein by reference. In some embodiments, the 3D composite image formed in step 218 is fitted to the 3D model with optimization to satisfy certain constraints. Such constraints may include a priori knowledge of the type of anatomy being imaged, such as the lumbar, thoracic, or other particular bony anatomy.

Thus, a shape model based approach may be used. Shape models typically identify points of interest in an image (e.g., bone points or bone surfaces) and compare those points to a prototype set (e.g., a template) of one or more points or surfaces that conform to the shape of, for example, known anatomical features. Linear and/or non-linear transformations may be parametrically applied to the shape or template and used to match points of interest in the image, where closeness of fit is used as a metric to determine whether the image matches a particular anatomical structure. Further constraints may include portions of the anatomy traced by the fiducial markers 124 (and tracked in step 226), e.g., specific vertebrae, pelvic extremities, etc. Furthermore, prior statistical knowledge of the target anatomy and mechanical constraints can be used to assist in model fitting, such as statistical distribution of vertebral dimensions, separation distance between bones (e.g., between adjacent vertebrae), and/or intervertebral bending angles.

Model fitting and registration techniques are known to those skilled in the ultrasound and/or image processing arts. For example, open-source software, such as the Insight Segmentation and Registration Toolkit (https://itk.org/, available from the U.S. National Library of Medicine), may be used to obtain 3D registration software that uses algorithms such as point set registration. Furthermore, pre-existing images of any modality may be used to constrain the 3D model fitting, such as applying CT and/or MRI data sets to limit the 3D model parameters.
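
To make the "closeness of fit" idea concrete, the sketch below scores a candidate transform of a template point set against detected bone points using nearest-neighbor distances (SciPy's cKDTree is assumed to be available). The parameterization (uniform scale, a single yaw angle, and translation) is an illustrative simplification; a full implementation would search a richer parameter space, as in the optimization examples that follow.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_error(params, template_pts, bone_pts):
    """params = (scale, yaw_rad, tx, ty, tz); lower error means a better fit.
    template_pts: (M, 3) shape template (e.g., a vertebra outline).
    bone_pts    : (N, 3) bone points detected in the 3D composite image."""
    scale, yaw, tx, ty, tz = params
    c, s = np.cos(yaw), np.sin(yaw)
    rotation = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    transformed = scale * (template_pts @ rotation.T) + np.array([tx, ty, tz])
    # Closeness of fit: mean squared distance from each transformed template
    # point to the nearest detected bone point.
    distances, _ = cKDTree(bone_pts).query(transformed)
    return float(np.mean(distances ** 2))
```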

There are various methods that can be used for the optimization process of 3D model fitting, such as in optimization examples (1) - (3).

The optimization example (1) includes a parameter space search. For example, the parameter space of the 3D model is heuristically, linearly, and/or adaptively searched, such as by varying parameters such as vertebra position, size, and/or orientation, until the observations from step 218 are well-fitted in a least squares sense.

Optimization example (2) includes maximum likelihood model fitting using prior knowledge and Bayesian analysis. This example may be implemented by searching the parameter space of a constrained 3D model, such as a multi-vertebra spine, and finding the set of parameters (e.g., the location, orientation, and/or size parameters of each vertebra) that maximizes the probability of obtaining the input data set (from step 218) from a given set of 3D model parameters, given the a priori likelihood of any given set of parameters.

Optimization example (3) includes deep learning methods (e.g., neural networks, convolutional neural networks, and/or bayesian inference convolutional neural networks) of different designs. After sufficient training, a deep learning analysis may be implemented using such deep learning methods to classify the observation data as belonging to a particular anatomical structure (e.g., lumbar, sacral) and to identify individual 2D and/or 3D features within the observation data that correspond to a "good fit" of the observation data based on the training set.

In step 230, the 3D bone model fit and the 3D image of the bone anatomy may optionally be used as a prior probability model for a secondary model fit of the nearby soft tissue anatomy. In some embodiments, the soft tissue structure may be the target of a therapeutic intervention (e.g., a shoulder capsule), while in other embodiments, additional anatomical information may simply be provided to assist in a medical procedure (e.g., the location of the lungs). The soft tissue information contained in the 2D ultrasound images acquired in step 212 may be post-processed to extract image features (e.g., edge detection, shape detection) before fitting to a 2D or 3D model optimized to meet certain constraints. Such constraints may include anatomical information contained in the 3D bone model fit and the 3D images of the bony anatomy. In addition, the constraints may include a priori knowledge of the type of anatomy being imaged, such as shoulder joints, thoracic vertebrae, ribs, or other particular bony anatomy. Further constraints may include the portions of the anatomy traced by the fiducial markers 124 (and tracked in step 226), e.g., a particular vertebra, pelvic extremity, joint location, etc. Furthermore, prior statistical knowledge of the target anatomy and mechanical constraints may be used to assist in model fitting, such as statistical distributions of rib or vertebra size, separation distance between bones (e.g., between adjacent vertebrae), and/or intervertebral bending angles. Furthermore, pre-existing images of any modality may be used to constrain the 3D model fitting, such as applying CT and/or MRI data sets to limit the 3D model parameters. There are a number of methods that can be used for the optimization process of 3D model fitting, such as in the examples listed above.

In step 232, the user may annotate the image data. The image data is preferably displayed in human readable form while the view can be manipulated (zoom, pan, rotate, change projection, etc.) so that the user can annotate positions, lines, regions and/or volumes in the 3D model. Any annotations performed by the user are co-registered with the 3D image data and/or the 3D model so that in subsequent processing steps the annotations can be used seamlessly with other data sources.

In step 234 (via placeholder a in flowchart 20), a 3D rendering of the image and/or model is generated for display on a display (e.g., display 140 and/or optional probe display 108). In this step, some combination of the 3D composite image formed in step 218, the 3D model from step 230, and/or the user annotations from step 232 is rendered under user control (zoom, pan, rotate, etc.) so that the user can usefully view the entire 3D registered data set or some subset of it. Different components in the display (e.g., display 140 and/or optional probe display 108) may be rendered in a variety of different ways consistent with state-of-the-art 3D rendering techniques, such as the following.

Generally, the simplest way to implement 3D rendering is to use an optimized 3D rendering framework, such as those available from the Khronos Group Inc., Unity Technologies ApS, or Epic Games, Inc., which render surfaces, points, and objects in 3D space with customized textures, lighting, etc. Various 3D rendering algorithms and kits are readily available and known to those skilled in the ultrasound and/or image processing arts; they include The Visualization Toolkit (https://www.vtk.org/).

As described above, the 3D rendering may take the form of a fully interactive 3D volume in which the user may zoom, pan or rotate the entire volume. The 3D volume may also be configured to be viewed from a particular vantage point, for example, where the spinal anatomy is viewed along an anterior-posterior line of sight to provide a "bird's eye" view of the spine. In this case, the 3D volume may be rendered as a maximum intensity projection within the plane, or as a two-dimensional image with a third dimension encoded to indicate the value of each bone surface in the third dimension. For example, the third dimension may be graphically encoded, such as by color mapping, contours, or other graphics attributed to the values of each bone surface in the third dimension.

Fig. 7 illustrates an example of a three-dimensional display 70 of spinal anatomy 700 along an anterior-posterior line of sight. In the illustrated display 70, the third dimension corresponds to the depth of the bone surface from the patient's skin. The depth of the bone surface from the patient's skin is shown in the display 70 by the color of the bone surface. For example, the depth is shown progressing from a lighter color for bone surface 710 (closer to the skin surface) to a darker color for bone surface 720 (further from the skin surface). Bone surface 730 has an intermediate color, indicating a depth between those of bone surfaces 710 and 720. In addition, display 70 shows an optional crosshair 740 indicating an automatically detected treatment site and an optional automatically detected treatment applicator 750, which may be the same as treatment applicator 116.
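
A minimal sketch of how such a depth-encoded "bird's eye" image could be computed from compounded bone points is shown below; the bin size, axis conventions, and the choice to keep the shallowest surface per bin are illustrative assumptions. The resulting depth map can then be color-mapped for display with a standard plotting library.

```python
import numpy as np

def depth_encoded_projection(bone_points_mm, bin_mm=1.0):
    """bone_points_mm: (N, 3) array with columns (left-right, head-foot, depth from skin).
    Returns a 2D array holding, per bin, the depth of the shallowest bone surface."""
    x, y, z = bone_points_mm.T
    xi = ((x - x.min()) / bin_mm).astype(int)
    yi = ((y - y.min()) / bin_mm).astype(int)
    depth_map = np.full((yi.max() + 1, xi.max() + 1), np.nan)
    for col, row, depth in zip(xi, yi, z):
        # Keep the bone surface closest to the skin in each bin; NaN bins have
        # no bone and can be rendered as transparent background.
        if np.isnan(depth_map[row, col]) or depth < depth_map[row, col]:
            depth_map[row, col] = depth
    return depth_map
```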

In step 234, the composite image generated in step 218 may be rendered as a set of surfaces with optional transparency (mesh, polygon, etc. as desired), or as a point cloud (with variable point size and transparency as desired). External optional lighting and other effects may be applied as desired. In step 234, the 3D fitted model generated in step 230 is rendered most simply as a series of 3D objects in space, with the surface texture depending on the nature and user importance of each 3D object.

Further, in step 234, the user annotations from step 232 may be displayed with the rendered composite image and/or with the rendered 3D fitted model as points, objects, regions, lines, or volumes in 3D space, co-registered with the other items in that space.

In step 238 (via placeholder a in flowchart 20), an analysis of the 3D image and/or model is computed. In this step, the parameters of the 3D model (from step 230), the 3D composite image (from step 218), and/or the user annotations (from step 232) are analyzed to produce useful information for one or more purposes. For example, the computed analysis may be used to help diagnose a disease state, disease progression, or other health metric that may be inferred from one or more inputs (e.g., outputs from steps 230, 218, and/or 232). Examples of such analyses include vertebral size, intervertebral distance, intervertebral rotation, measures of scoliosis, scoliosis progression over time, and other disease or health markers.
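
As an illustration of two such analyses, the sketch below computes intervertebral spacing and a simple lateral-curvature angle from fitted vertebral centroids. This is not the clinical Cobb angle, and the axis conventions are assumptions; it only indicates the kind of computation step 238 might perform.

```python
import numpy as np

def intervertebral_spacing(centroids_mm):
    """centroids_mm: (N, 3) fitted vertebral centroids ordered caudal to cranial."""
    return np.linalg.norm(np.diff(centroids_mm, axis=0), axis=1)

def lateral_curvature_deg(centroids_mm):
    """Largest angle between successive inter-vertebral direction vectors,
    projected onto the coronal (left-right vs. head-foot) plane."""
    coronal = centroids_mm[:, [0, 1]]                 # drop the depth axis
    segs = np.diff(coronal, axis=0)
    segs = segs / np.linalg.norm(segs, axis=1, keepdims=True)
    dots = np.clip(np.einsum("ij,ij->i", segs[:-1], segs[1:]), -1.0, 1.0)
    angles = np.degrees(np.arccos(dots))
    return float(angles.max()) if len(angles) else 0.0
```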

In another example, the computed analysis may be used to help plan and/or guide a treatment process, such as a needle insertion or an energy-based treatment. Examples of such analyses include: measuring the clearance of a needle insertion path from the nearest bone surface in a given intervertebral space (which may indicate, for example, the difficulty of administering neuraxial anesthesia at that location), identifying an appropriate needle insertion site and trajectory (line), or identifying the depth of certain anatomical features (e.g., the epidural space) from the skin.

In yet another example, the computed analysis may be used for real-time guidance in 3D space. Examples of such real-time guidance include feeding back to the user data such as proximity to fiducial markers, annotations, and/or 3D model locations such as the spinal midline, and the relative angle of an external object (e.g., treatment applicator 116) to, for example, an appropriate needle insertion trajectory.

In step 242 (via placeholder a in flowchart 20), the locations of the 3D structure requiring additional scan information are determined and/or identified. In this step, the current state of the three-dimensional composite image of step 218 and/or all or part of the three-dimensional model from step 230 is used to estimate to what extent different portions of the three-dimensional space corresponding to the anatomical structure of interest have been adequately sampled by the ultrasound beam (from ultrasound probe 104). If the ultrasound beam is moved rapidly through a region of the target anatomy, the region may not be sampled sufficiently to satisfy Nyquist sampling, to ensure sufficient oversampling, and/or to provide a signal-to-noise ratio sufficient for subsequent processing, based on the known spatial resolution of the imaging system.

In one example, step 242 may be performed by maintaining a volume density function in 3D space and incrementally filling the volume density as the ultrasound plane or volume passes through it. The current state of the volume density may be indicated to the user interactively (e.g., graphically, by voice, etc.). The current state of the volume may include fully sampled locations and insufficiently sampled locations. There are many ways to determine sufficient volume sampling. One approach is to assert a minimum number of 3D ultrasound pixel samples per volume element, e.g., 25 pixels per cubic centimeter or other volume element. Additionally, more intelligent sampling metrics may include continuity with existing fully sampled volumes (e.g., showing gaps within, but not limited to, the volume range), or the use of adaptive volume sampling thresholds that depend on location and on variables such as bone surface density, the information (e.g., entropy) content of volume cells, or data statistics, as well as estimates of what type of anatomical structure a volume cell contains. This can be used to let the user "draw in" missing regions or "erase" undersampled regions by indicating where scanning or additional scanning is required, respectively. This method is illustrated in fig. 3, which is a representative illustration of a display graphically identifying an undersampled region 300 within a human subject 310 that has not been adequately scanned with an ultrasound probe. When the ultrasound probe 104 acquires sufficient data for the undersampled region 300, the undersampled region 300 is removed from the display.
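
A minimal sketch of this volume-density bookkeeping is shown below, using the 25-samples-per-cubic-centimeter figure mentioned above as a default threshold; the class structure and the way frame samples are binned to voxels are illustrative assumptions.

```python
import numpy as np

class SamplingDensityMap:
    """Tracks how many ultrasound pixel samples have landed in each voxel so
    that undersampled regions can be highlighted to the user."""

    def __init__(self, grid_shape, voxel_volume_cm3, min_samples_per_cm3=25):
        self.counts = np.zeros(grid_shape, dtype=np.int32)
        self.min_samples = min_samples_per_cm3 * voxel_volume_cm3

    def add_frame_samples(self, voxel_indices):
        """voxel_indices: (N, 3) integer voxel coordinates hit by one frame."""
        np.add.at(self.counts, tuple(voxel_indices.T), 1)

    def undersampled_mask(self, region_mask=None):
        """True where more scanning is still required (optionally restricted to
        a mask covering the anatomy of interest)."""
        mask = self.counts < self.min_samples
        return mask if region_mask is None else (mask & region_mask)
```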

Additionally or alternatively, step 242 may be performed by providing a visual indicator to the user indicating where to move the ultrasound probe 104 in order to maximize sampling efficiency (e.g., left, up, down, etc. from the current location). Sampling efficiency is defined as the amount of volume that can be adequately sampled per unit time.

Additionally or alternatively, step 242 may be performed by using the volume density (e.g., by maintaining a volume density function over 3D space, as described above) or some other sampling status indicator to provide the user with a real-time 3D rendering with a level of detail indicative of the sampling progress. This may be achieved by blurring the undersampled regions, while the fully sampled regions may have a higher resolution, or alternatively, by using color coding or some other visual indication to help the user fill the sampling space.

Additionally or alternatively, step 242 may be performed by feeding back the progress of the sampling to the user in the form of a 3D model display. For example, the undersampled vertebrae may have a different appearance (color, resolution, etc.) than the fully sampled vertebrae, thereby guiding the user to acquire more data for the undersampled vertebrae.

In step 246, the 3D position of the treatment applicator 116 is tracked. This step is the same or substantially the same as step 204, except that the object being tracked is a treatment applicator, such as a needle guide or an object capable of directing energy to a target (e.g., an RF ablator, high intensity focused ultrasound (i.e., HIFU) element). The object tracking system 112 may be used to track the 3D position and orientation of the treatment applicator 116.

In step 250, a desired treatment application site relative to the 3D image structure is input by a user (e.g., via a user interface such as a mouse, touch screen, keyboard, or other user interface). Once the 3D composite image has been formed (step 218), fitted to the 3D model (step 230), analyzed (step 238), and/or annotated by the user (step 232), the user may indicate the location, line, region, and/or volume to which treatment should be applied. Some examples of methods for indicating a location at which to apply treatment include: (1) pointing at a target, region, or small volume to indicate a needle tip target; (2) pointing at a target, region, or small volume to indicate a needle insertion point target; (3) a line depicting the pointing direction, the angle of insertion of the needle, and/or the final needle tip target; and/or (4) the volume or area to which anesthesia or energy treatment should be applied.

In step 254, a combination of one or more of the following is displayed to the user (e.g., on display 140 and/or optional treatment applicator display 118, which is disposed on or integrated into optional treatment applicator 116): the human subject (or a portion thereof, such as an anatomical region of interest), the device operator (or a portion thereof, such as the operator's arm or hand), the ultrasound transducer/probe 104 (or a portion thereof, such as the tip of ultrasound probe 104), the current (e.g., temporally acquired) ultrasound image frame, a 2D fluoroscopy-like bone structure image, a 2D or 3D depth-encoded synthetic bone structure image, a 3D model of a bone structure, the locations of bone structures requiring additional scan data, analyses computed from the 3D image or model, the current location of the treatment applicator, directional indicators for navigating the treatment applicator to a desired location, depictions of potential treatment fields, fiducial markers, and/or user annotations.

If an appropriate treatment application trajectory has been previously specified, it may be displayed as a directional indicator for navigating the treatment, e.g., a graphic showing line segments for the appropriate needle trajectory, the skin entry point, and/or the final needle target point, as well as analyses such as the needle angle error (azimuth and/or elevation), the distance from the needle tip to the target tip location, and/or the expected effective area of the therapeutic agent (e.g., anesthesia, directed energy application, etc.). The current target trajectory of the therapy applicator may also be shown, with the intent that the two line segments (e.g., the appropriate needle trajectory and the current target trajectory) should eventually match. The region of bone or other anatomical structure with which the current treatment applicator trajectory would intersect may also be highlighted in real time. An example of this display is shown in fig. 7.
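
A minimal sketch of the kind of trajectory feedback described above is shown below: it compares a planned needle line with the currently predicted line and reports azimuth/elevation angle errors and the tip-to-target distance. The axis conventions (azimuth in the x-y plane, elevation toward z) are illustrative assumptions.

```python
import numpy as np

def trajectory_errors(planned_entry, planned_tip, current_entry, current_tip):
    """All inputs are (3,) points in millimeters; returns angle and tip errors."""
    planned = planned_tip - planned_entry
    current = current_tip - current_entry

    def azimuth_elevation(v):
        # Azimuth measured in the x-y plane, elevation out of that plane.
        az = np.degrees(np.arctan2(v[1], v[0]))
        el = np.degrees(np.arctan2(v[2], np.hypot(v[0], v[1])))
        return az, el

    az_planned, el_planned = azimuth_elevation(planned)
    az_current, el_current = azimuth_elevation(current)
    return {
        "azimuth_error_deg": az_current - az_planned,
        "elevation_error_deg": el_current - el_planned,
        "tip_distance_mm": float(np.linalg.norm(current_tip - planned_tip)),
    }
```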

In some embodiments, the display may display the mutual alignment of (a) the currently or instantaneously acquired two-dimensional ultrasound image frames and (b) the potential treatment field of the treatment applicator at its current position and current orientation.

In all cases, with arbitrary and/or customizable transparency, the current ultrasound image frame (e.g., a 2D image with optional bone enhancement) can be displayed within the 3D image at the correct orientation and position of the ultrasound scan plane, or alternatively as a planar image at an arbitrary, user-settable location in the 3D scene. If the therapy applicator 116 (e.g., needle, RF ablation needle, etc.) intersects the 2D ultrasound image, the therapy applicator 116 may be specifically detected and rendered in the correct orientation with respect to the 3D volume and the ultrasound plane. Furthermore, if an injectate is expelled from the needle, and if the 2D ultrasound plane intersects the path of the injectate, the injectate can be detected and rendered in the correct orientation with respect to the 3D volume and the ultrasound plane. If an energy treatment device (e.g., RF ablation or HIFU) is used instead of a needle, the energy field of the device (e.g., the expected spatial extent of the energy effect) may be rendered similarly. The potential treatment field may include the expected path of the injectate and the expected spatial extent of the energy effect from the energy treatment device. The locations where additional scan data are needed (as in step 242) may be displayed at their actual locations in the 3D domain; in particular, if a virtual reality helmet is used, the areas requiring additional scanning may be visually displayed as an augmented overlay superimposed on the actual image of the human subject.

If the ultrasound probe has an attached display (e.g., optional probe display 108), and/or if the optional treatment applicator has an attached display (e.g., optional treatment applicator display 118), either or both of these screens may be used to display any of the 2D and/or 3D data described above in real time, either separately or in addition to an external 2D or virtual reality display. The attached display may also be used to display information relating to the relative positioning of the ultrasound probe 104 and the target location. If a virtual reality headset is used, one or more virtual 2D displays can be generated in 3D VR space, which can be statically placed with respect to the headset, the probe, and/or 3D space.

Fig. 8 is a two-dimensional display 80 of a potential treatment field aligned with the treatment site. In the display 80, the automatically detected therapy applicator 800 is shown extending toward a targeted anatomical feature 810 (e.g., a bone surface, organ, etc.). Using aspects of the invention described herein, the system automatically determines the position and orientation of the treatment applicator 800 (e.g., using the object tracking system 112) and the three-dimensional location of the bony anatomy (e.g., as discussed above with respect to flowchart 20), such as the spinal midline 820, which may be used as an anatomical reference plane (i.e., the spinal midline 820 does not exist as part of the physical anatomy, but is an imaginary line that serves as a reference relative to the physical anatomical features). When the potential treatment field 805 of the treatment applicator 800 is aligned with the targeted anatomical feature 810, as shown in fig. 8, the system may provide a visual and/or audible indication of that alignment (e.g., by changing the color of the targeted anatomical feature 810, flashing a light, producing a sound, etc.).
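
A minimal sketch of an alignment test that could drive such an indicator is shown below: the potential treatment field is treated as a ray along the applicator axis and is considered aligned when it passes within a tolerance of the target location. The tolerance value and the ray model are illustrative assumptions.

```python
import numpy as np

def field_aligned_with_target(applicator_pos, applicator_dir, target_pos, tol_mm=3.0):
    """True if the treatment axis passes within tol_mm of the target location."""
    direction = applicator_dir / np.linalg.norm(applicator_dir)
    to_target = target_pos - applicator_pos
    along = float(np.dot(to_target, direction))
    if along <= 0.0:
        return False              # the target lies behind the applicator
    # Perpendicular miss distance from the treatment axis to the target.
    miss = np.linalg.norm(to_target - along * direction)
    return bool(miss <= tol_mm)
```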

Example A - Guidance of an epidural anesthesia procedure

In this example, the objective is to guide a Tuohy needle into the epidural space in the lumbar spine of the patient for placement of a catheter to provide continuous anesthesia. The current standard of care is palpation of the spinal anatomy to identify the intervertebral space and insert a needle, followed by a "loss-of-resistance" technique in which a syringe is used to sense the pressure reduction as the needle reaches the target epidural space. To improve the accuracy of the procedure, the user may scan the patient using the ultrasound probe 104 with the attached screen 108 while the probe is tracked by the object tracking system 112. As the user scans, a bone-enhanced (step 214) 3D composite image is compiled (step 218), and an intermediate 3D model fit (step 230) and an indication of scan density sufficiency (step 242) are calculated, all of which are displayed in 3D (step 254) in real time on the display 140 (e.g., laptop display, external display, virtual reality helmet, etc.) and/or optional probe display 108. Scan density may be indicated using color coding, for example by highlighting the target anatomy in blue (or another color) to the extent that the scans in that region have sufficient density.

Optionally, one or more fiducials 124 may be created and tracked (step 226), for example, by interacting with a user interface on the ultrasound probe as the probe tip is brought into contact with the left and right pelvic extremities and/or the bony prominence above the intergluteal cleft. These fiducials 124 will be added to the combined image on the display 140 and/or optional treatment applicator display 118 (step 254).

Once a sufficient level of scan density has been achieved on the target anatomy (e.g., lumbar spine), 3D model fitting (step 230) may identify the lumbar vertebrae, with the intervertebral spaces highlighted, along with analyses based on the 3D fit (step 238), such as intervertebral space dimensions, appropriate needle trajectories to the epidural space at each candidate lumbar intervertebral space, depth to the epidural space, and minimum clearance to the bone surface for each needle trajectory. Fig. 4 is a display 40 of an exemplary 3D spine model or exemplary 3D spine data, with spine analyses based on the 3D spine model superimposed on it for guiding an epidural injection. Although display 40 is shown in two dimensions, it should be noted that display 40 may also show the same information in three dimensions.

FIG. 5 illustrates a display 50 for guiding a needle along an appropriate, satisfactory, user-selected, or automatically-selected needle trajectory 500 (collectively, "appropriate needle trajectory"), in accordance with one or more embodiments. In some embodiments, the appropriate needle trajectory 500 is a subsequent and/or future needle trajectory required to deliver therapy to the target treatment site. The user may identify the appropriate needle trajectory 500 using the display 140 and/or the optional treatment applicator display 118 and a particular analysis of the procedure (e.g., the analysis shown in fig. 4). For example, at the location of the current treatment applicator 116, analyses such as the vertebrae below it (e.g., L1-L5), the lateral distance to the spinal midline, and/or the epidural space depth 530 may be displayed. Although the display 50 is shown in two dimensions, it should be noted that the display 50 may also show the same information in three dimensions.

Once the appropriate needle trajectory 500 has been identified, the tracked treatment applicator 116 (tracked in step 246) may be used to guide the needle 510 to the desired appropriate needle trajectory 500. As the treatment applicator 116 (in this case, the tracked needle guide 516) moves, the current (or predicted) needle track 520 may be displayed (in step 254) on the display 50 (which may include the display 140 and/or the optional treatment applicator display 118), including the current skin entry point 522 and the current (or predicted) needle end point 524. The display 50 also shows an appropriate needle track 500, including an appropriate skin entry point 502 and an appropriate needle end point 504. Displaying these data and images may assist the operator in moving and/or orienting the treatment applicator 116 so that the appropriate needle trajectory 500 is achieved. In some embodiments, the display 50 may include an arrow indicating the direction to translate and/or rotate the treatment applicator 116 to align the current needle track 520 with the appropriate needle track 500. For example, the display 50 may include a first arrow 540 and a second arrow 550, the first arrow 540 indicating the direction to translate the treatment applicator 116 to achieve the appropriate needle trajectory 500, and the second arrow 550 indicating the direction to rotate the treatment applicator 116 to achieve the appropriate needle trajectory 500. Each arrow 540, 550 may be colored or displayed differently to avoid confusion for the user. More or fewer arrows may be provided (e.g., based on the number of dimensions in which the current needle trajectory 520 is misaligned with the appropriate needle trajectory 500). An example of a System and Method for angularly aligning a Probe with a target Probe angle is disclosed in U.S. patent application No. 15/864,395 entitled "System and Method for Angular Alignment of a Probe at a target Location", filed on 8.1/2018, which is incorporated herein by reference.

At this point, the conventional loss-of-resistance technique may be used for needle insertion. Optionally, if mechanically possible, the treatment applicator 116 may track the distal end of the needle, tracking needle insertion depth in real time with visual and/or audible feedback from a laptop computer. The treatment applicator 116 may track the tip of the needle in several ways. One way is to use geometry to calculate the needle tip position when the position and orientation of the treatment applicator 116 are known and the needle is rigid (does not bend).
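
A minimal sketch of that rigid-geometry calculation is shown below: with the tracked pose of the needle guide, a fixed offset to the bore entrance, and the insertion depth, the tip of a rigid needle lies along the bore axis. The local-frame conventions and parameter names are illustrative assumptions.

```python
import numpy as np

def needle_tip_position(guide_position, guide_rotation, bore_offset_mm, insertion_depth_mm):
    """guide_position     : (3,) tracked position of the needle guide
    guide_rotation     : (3, 3) tracked orientation (columns are the guide's local axes)
    bore_offset_mm     : (3,) offset from the tracked point to the bore entrance,
                         expressed in the guide's local frame
    insertion_depth_mm : how far the rigid needle has been advanced through the bore"""
    bore_axis_world = guide_rotation[:, 2]               # assume local z is the bore axis
    bore_entrance = guide_position + guide_rotation @ bore_offset_mm
    return bore_entrance + insertion_depth_mm * bore_axis_world
```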

Optionally, a virtual reality helmet may be used as the display 140 (or in addition to the display 140) during some or all portions of the procedure. During the 3D scan, the probe and patient may be displayed using a head-mounted camera, along with the ultrasound image plane and other aspects of step 254. During the application of treatment, the user may use the VR headset to view the intended and appropriate needle trajectory at any angle by moving their head around the treatment applicator 116. Various virtual heads-up displays may be placed around the scene to provide any kind of surgical feedback that is desired.

It should be noted that the ultrasound probe 104 and/or the therapy applicator 116 may be positioned by a machine (such as a robotic actuator) algorithmically based on direct input from a user or based on information provided herein. For example, a robot, rather than a human user, may automatically move the therapy applicator 116 to a position deemed appropriate for reaching a desired target based on the output of the technique.

Example B - Spinal anatomy analysis for disease state assessment

In this example, the objective is to scan a patient to create a 3D model of his/her spinal anatomy and to visualize it without the need for ionizing radiation (e.g., X-rays, CT scans, etc.) or expensive procedures (e.g., MRI). The 3D spine model may be used to extract analyses and assess the presence or extent of disease states. One exemplary use of this technique is to diagnose or track the progression of adolescent scoliosis. The main means of such diagnosis is currently X-ray imaging. However, repeated exposure of a child to X-rays is undesirable, and a front-line care provider may not have ready access to an X-ray machine and may instead use other methods with limited accuracy (e.g., measuring external spinal angulation). Therefore, an inexpensive and accurate spinal analysis system as described in this example would be an improvement over the current standard of care.

In this embodiment, to build the 3D spine anatomy model, the main processing unit 136 (e.g., a computer such as a laptop computer) will instruct the user to move the ultrasound probe 104 to the sacrum and begin scanning there in a bone enhanced mode (step 214). While scanning, the user will see the 3D composite image (step 218) and 3D intermediate model (step 230) built in real time on the display 140 (e.g., computer/laptop display) and/or optional probe display 108, along with an indication of the scan density (step 242). Once sufficient scan density has been established near the sacrum, the computer directs the user to move up to the lowest vertebra L5 and scan it. Likewise, once sufficient scan density has been established, the user will be directed to the next vertebra L4, and so on, until a desired number of vertebrae have been scanned.

At this point, the 3D composite image (step 218) should be sufficient for a full-spine 3D model to be developed (step 230), along with the spine-related analyses (step 238). The analyses associated with the full spine model may include relative vertebral positions, intervertebral spacing, and spinal curvature measurements in one or more axes. In addition, data from previous scans over time can be incorporated to show spinal changes and the progression of disease states over time.

The display 140 and/or the optional probe display 108 may now be used to display the combined spine model in 3D space, along with the analyses derived from it, and optionally an animation that includes models generated from previous scans and/or the evolution over time of metrics derived from those analyses.

If a virtual reality helmet is available, it may be used during any or all of the stages of this example (e.g., as display 140 or in addition to display 140). First, during the scan, the helmet may use the front-facing camera 130 to allow the user to see the back of the patient, in addition to the composite 3D image (step 218), the 3D model (step 230), and the other portions of the 3D display listed in step 254 and/or discussed above with respect to step 254. During this stage, the virtual reality display may also highlight the vertebrae that have already been scanned, as well as the likely location of the next vertebra to be scanned, and otherwise guide the scanning process. Once the scan is complete, the user may view the full 3D display (shown in step 254) from any angle by walking around the patient, with the anatomy shown "inside" the patient. In addition, the patient may observe what the user sees, and/or review the spinal scan in the virtual reality environment after the scan, along with previous scans, including animation over time and/or annotations of analytical information.

This general method of diagnosing disease states through 3D analysis of bone anatomy may be extended by performing two or more scans, with the patient asked to move through a range of positions (e.g., back extension or forward flexion) for the subsequent scans. Two or more scans may be used to assess the range of motion of which the bony anatomy is capable, and may be used as part of a historical record to assess disease state progression and/or to provide feedback on the effect of various treatments.

Example C - Recording the bony anatomy in 3D in a standard format for later review

In this example, the aim is to scan a patient to build a 3D model of bones and other anatomical structures and to save this scan information for subsequent review, possibly by a different person and possibly at a different location. An advantage of the method is that a technician can obtain the anatomical structure model by scanning the patient, and one or more medical professionals with advanced training can subsequently and interactively review the model data at any location. If the anatomical model data is stored in a standard volume, surface, or other format, such as those provided by the Digital Imaging and Communications in Medicine (DICOM) standard (available at http://www.dicomstandard.org/), any user of the data may use existing or new tools to browse, send, and store the data, such as a PACS (picture archiving and communication system).

Since the data set is 3D in nature, a virtual reality system can readily be used to navigate the data, control analysis and display, and annotate the data. Alternatively, the data may be studied and annotated using non-VR tools. In one possible variation, multiple users may view, annotate, and control the displayed 3D data in real time using network communications for collaborative medical analysis. This example is similar to the workflow of echocardiography, where a sonographer collects large amounts of data from a cardiac scan, which a cardiologist later examines using a PACS system and standard tools. Likewise, a technician may use an ultrasound system with the bone enhancement techniques and 3D position tracking described in this disclosure to obtain a 3D anatomical model from a patient, and an orthopedist or other medical professional may then use the PACS system to analyze and examine the data.

Examples of illustrative embodiments

Example 1. an ultrasound imaging and therapy guidance system, comprising: an ultrasound probe that generates a positionally adjusted ultrasound beam to acquire three-dimensional image data of bony anatomy within a human subject; an object tracker configured to detect a current position and a current orientation of the ultrasound probe; a therapy applicator for delivering therapy to a human subject; a mechanical device coupled to the ultrasound probe and the therapy applicator to set a predetermined relative position of the therapy applicator with respect to the ultrasound probe; a processor; a non-transitory computer memory operably coupled to the processor. The non-transitory memory includes computer readable instructions that cause the processor to: detecting a position and orientation of a three-dimensional bone surface location based at least in part on the three-dimensional image data and a current position and a current orientation of the ultrasound probe; automatically detecting a target treatment site positioned relative to a three-dimensional bone surface; determining an appropriate position and an appropriate orientation of the therapy applicator required to deliver therapy to the target treatment site; and generating display data. The system also includes a display in electrical communication with the processor, the display generating an image based on the display data, the image including: an indication of three-dimensional bone surface location; two-dimensional ultrasound image frames acquired instantaneously, mutually aligned with a potential treatment field of the treatment applicator at a current position and a current orientation of the treatment applicator; an indication of a target treatment site located relative to a three-dimensional bone surface; and a graphical indicator indicating whether the target treatment site and the potential treatment field are aligned.

Example 2. the system of example 1, wherein the computer readable instructions further cause the processor to automatically detect the target treatment site located relative to the three-dimensional bone surface using a neural network.

Example 3. the system of examples 1 or 2, wherein the computer readable instructions further cause the processor to detect the position and orientation of the three-dimensional bone surface location by fitting the three-dimensional image data to a three-dimensional bone model.

Example 4 the system of any of examples 1-3, wherein the image generated by the display further comprises a bone landmark location.

Example 5 the system of any of examples 1-4, wherein the computer readable instructions further cause the processor to automatically detect the target treatment site using a three-dimensional bone model.

Example 6 the system of any of examples 1-4, wherein the indication of the three-dimensional bone surface location is displayed as a two-dimensional bone surface image with a third dimension encoded to represent the bone surface location along the third dimension.

Example 7. the system of example 6, wherein the third dimension is graphically encoded to represent positioning along the bone surface of the third dimension.

Example 8 the system of example 6 or 7, wherein the third dimension is color coded to indicate positioning along a bone surface of the third dimension.

Example 9. the system of any of examples 1-8, wherein the appropriate position and the appropriate orientation of the treatment applicator is determined based at least in part on a predetermined relative position of the treatment applicator with respect to the ultrasound probe.

Example 10 the system of any of examples 1-9, wherein the object tracker is configured to detect a current position and a current orientation of the treatment applicator, and determine the appropriate position and the appropriate orientation of the treatment applicator based at least in part on the current position and the current orientation of the treatment applicator.

Example 11 the system of any of examples 1-10, wherein the images generated by the display further include a current position and a current orientation of the potential treatment field.

Example 12 the system of any of examples 1-11, wherein the image generated by the display further includes a current position and a current orientation of the therapy applicator.

Example 13 the system of any of examples 1-12, wherein the image generated by the display further comprises size and orientation information of the bony anatomy calculated from the three-dimensional bone surface locations.

Example 14 the system of any of examples 1-13, wherein the treatment applicator comprises a needle guide, a needle, an ablation instrument, and/or a high intensity focused ultrasound transducer.

Example 15 the system of any one of examples 1-14, wherein the target treatment site comprises an epidural space, a subarachnoid space, or a medial branch nerve.

Example 16. the system of any of examples 1-15, wherein the ultrasound probe is configured to be manually position adjusted by a user.

Example 17. the system of any of examples 1-16, wherein the ultrasound probe is configured to be automatically positionally adjusted via a mechanical motorized mechanism.

Example 18. the system of any of examples 1-17, wherein the object tracker includes an inductive proximity sensor.

Example 19. the system of any of examples 1-18, wherein the object tracker includes ultrasound image processing circuitry.

Example 20 the system of example 19, wherein the ultrasound image processing circuitry is configured to determine the relative change in the current position of the ultrasound probe by comparing sequentially acquired ultrasound images in the three-dimensional image data.

Example 21. the system of any of examples 1-20, wherein the object tracker includes an optical sensor.

Example 22 the system of example 21, wherein the optical sensor comprises a fixed optical emitter and a swept laser that is detected by the optical sensor, the optical sensor disposed on the ultrasound probe.

Example 23. the system of any of examples 1-22, wherein the object tracker includes an integrated position sensor.

Example 24. the system of example 23, wherein the integrated position sensor comprises an electromechanical potentiometer, a linear variable differential transformer, an inductive proximity sensor, a rotary encoder, an incremental encoder, an accelerometer, and/or a gyroscope.

Example 25 the system of any one of examples 1-24, wherein the three-dimensional bone surface positioning comprises a three-dimensional spinal bone positioning.

Example 26 the system of any of examples 1-25, wherein the position adjustment of the position adjusted ultrasound beam is performed by mechanical movement of the ultrasound probe and/or electrical steering of the position adjusted ultrasound beam.

Example 27 a method for guiding a therapy applicator, comprising: positionally adjusting an ultrasound beam produced by an ultrasound probe on a human subject to acquire three-dimensional image data of bony anatomy within the human subject; detecting a current position and a current orientation of the ultrasound probe using an object tracker while performing position adjustment on the ultrasound beam; determining a position and orientation of a three-dimensional bone surface location based at least in part on the three-dimensional image data and a current position and a current orientation of the ultrasound probe; automatically detecting a target treatment site positioned relative to a three-dimensional bone surface; determining an appropriate position and an appropriate orientation of the therapy applicator required to deliver therapy to the target treatment site; displaying an image on a display in electrical communication with the computer, the image comprising: an indication of three-dimensional bone surface location; two-dimensional ultrasound image frames acquired instantaneously, mutually aligned with a potential treatment field of the treatment applicator at a current position and a current orientation of the treatment applicator; an indication of a target treatment site located relative to a three-dimensional bone surface; and a graphical indicator indicating whether the target treatment site and the potential treatment field are aligned.

Example 28 the method of example 27, further comprising: a neural network is used in a computer to automatically detect a target treatment site located relative to a three-dimensional bone surface.

Example 29 the method of example 27 or 28, further comprising: the three-dimensional image data is fitted to a three-dimensional bone model.

Example 30 the method of example 29, further comprising: the position and orientation of the three-dimensional bone surface is determined using the three-dimensional bone model.

Example 31. the method of example 29 or 30, further comprising: a three-dimensional bone model is used to identify bone landmark locations.

Example 32 the method of example 31, wherein the image comprises bone landmark localizations.

Example 33 the method of any one of examples 30 to 32, further comprising: a target treatment site is automatically detected using a three-dimensional bone model.

Example 34 the method of any of examples 27-33, wherein the indication of the three-dimensional bone surface location is displayed as a two-dimensional bone surface image with a third dimension encoded to represent the bone surface location along the third dimension.

Example 35. the method of example 34, further comprising: the third dimension is graphically encoded to represent positioning along the bone surface of the third dimension.

Example 36. the method of example 34 or 35, further comprising: the third dimension is color coded to indicate positioning along the bone surface of the third dimension.

Example 37 the method of any one of examples 27 to 36, further comprising: a mechanical device coupled to the ultrasound probe is mechanically coupled with the treatment applicator, the mechanical device setting a predetermined relative position of the treatment applicator with respect to the ultrasound probe.

Example 38. the method of example 37, further comprising: the appropriate position and the appropriate orientation of the treatment applicator are determined based at least in part on a predetermined relative position of the treatment applicator with respect to the ultrasound probe.

Example 39 the method of any one of examples 27 to 38, further comprising: detecting a current position and a current orientation of the therapy applicator using the object tracker; and determining an appropriate position and an appropriate orientation of the treatment applicator based at least in part on the current position and the current orientation of the treatment applicator.

Example 40 the method of any of examples 27 to 39, wherein the image further comprises a current position and a current orientation of the potential treatment field.

Example 41 the method of any one of examples 27 to 40, wherein the image further includes a current position and a current orientation of the treatment applicator.

Example 42 the method of any of examples 27 to 41, wherein the image further comprises size and orientation information of the bony anatomy calculated from the three-dimensional bone surface locations.

Example 43 the method of any one of examples 27 to 42, wherein the treatment applicator comprises a needle guide, a needle, an ablation instrument, and/or a high intensity focused ultrasound transducer.

Example 44 the method of any one of examples 27-43, wherein the target treatment site comprises an epidural space, a subarachnoid space, or a medial branch nerve.

Example 45 the method of any one of examples 27 to 44, wherein positionally adjusting the ultrasound beam comprises mechanically moving an ultrasound probe.

Example 46 the method of any one of examples 27 to 45, further comprising: the position of the ultrasound probe is adjusted using a mechanical motorized mechanism.

Example 47 the method of any one of examples 27 to 46, wherein positionally adjusting the ultrasound beam comprises electronically scanning the ultrasound beam.

Example 48 the method of any one of examples 27 to 47, wherein the object tracker includes an inductive proximity sensor.

Example 49 the method of any one of examples 27 to 48, wherein the object tracker includes ultrasound image processing circuitry.

Example 50 the method of example 49, further comprising: using ultrasound image processing circuitry, a relative change in the current position of the ultrasound probe is determined by comparing sequentially acquired ultrasound images in the three-dimensional image data.

Example 51 the method of any one of examples 27 to 50, wherein the object tracker includes an optical sensor.

Example 52. the method of example 51, wherein the optical sensor comprises a fixed optical emitter and a swept laser that is detected by the optical sensor, the optical sensor disposed on the ultrasound probe.

Example 53 the method of any of examples 27 to 52, wherein the object tracker includes an integrated position sensor.

Example 54 the method of example 53, wherein the integrated position sensor comprises an electromechanical potentiometer, a linear variable differential transformer, an inductive proximity sensor, a rotary encoder, an incremental encoder, an accelerometer, and/or a gyroscope.

Example 55 the method of any one of examples 27-54, wherein the three-dimensional bone surface positioning comprises three-dimensional spinal bone positioning.

Example 56 the method of any one of examples 27 to 55, wherein the current position and the current orientation of the ultrasound probe are detected using an object tracker.

Example 57 the method of any one of examples 27 to 56, further comprising: acquiring two-dimensional ultrasound image data of the bony anatomy at the locations of the plurality of ultrasound probes; and combining the two-dimensional ultrasound image data and the ultrasound probe positioning to form three-dimensional image data.

Example 58 the method of any one of examples 27 to 57, wherein the two-dimensional image data includes pixels, and the method further comprises determining a three-dimensional location of each pixel based on the ultrasound probe location.

Example 59 the method of any one of examples 27 to 58, further comprising: performing a bone enhancement process to enhance any bone and/or bone features in the ultrasound image (see the illustrative sketch following these examples).

Example 60 the method of any one of examples 27 to 59, further comprising: receiving a user interface event; and recording a reference position of the ultrasound probe based on the time of receipt of the user interface event.

These non-limiting examples may be combined in any combination or permutation.
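
A minimal sketch of the frame-to-frame comparison recited in Example 50, assuming phase correlation is used to recover the in-plane pixel shift between consecutive B-mode frames; the disclosure does not mandate this particular registration algorithm, and the function and parameter names below are illustrative only.

```python
import numpy as np

def estimate_inplane_shift(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Estimate the (row, column) pixel shift between two sequentially
    acquired B-mode frames using phase correlation (illustrative only)."""
    rows, cols = prev_frame.shape
    window = np.outer(np.hanning(rows), np.hanning(cols))  # soften frame edges

    f_prev = np.fft.fft2(prev_frame * window)
    f_curr = np.fft.fft2(curr_frame * window)

    # Normalized cross-power spectrum: its inverse FFT peaks at the shift
    # of the current frame relative to the previous frame.
    cross_power = f_curr * np.conj(f_prev)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.abs(np.fft.ifft2(cross_power))

    dims = np.array(correlation.shape, dtype=float)
    peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape),
                    dtype=float)
    # Unwrap shifts larger than half the frame size (FFT periodicity).
    return np.where(peak > dims / 2.0, peak - dims, peak)  # (d_row, d_col) in pixels
```

Accumulating these per-frame shifts, scaled by the known pixel spacing, captures only in-plane motion; in the system above it would supplement, rather than replace, the other position sensors recited in the examples.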
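
A minimal sketch of the swept-laser timing referenced in Example 52, assuming a lighthouse-style arrangement in which the fixed emitter rotates a laser plane at a constant rate, so the interval between a synchronization pulse and the sweep crossing a probe-mounted photodiode maps linearly to an angle; this timing model and the function name are assumptions, not part of the disclosure.

```python
import math

def sweep_angle(t_sync: float, t_hit: float, sweep_period: float) -> float:
    """Convert the delay between a base-station sync pulse (t_sync) and the
    laser sweep striking the photodiode (t_hit) into an angle about the
    rotor axis, assuming a constant rotation rate (illustrative only)."""
    return 2.0 * math.pi * ((t_hit - t_sync) % sweep_period) / sweep_period
```

Two such angles from orthogonal sweeps define a ray from the emitter toward the sensor, and rays to several photodiodes distributed on the ultrasound probe constrain its full position and orientation.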
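
A minimal sketch of the per-pixel mapping recited in Examples 57 and 58, assuming the object tracker reports each frame's pose as a 4x4 homogeneous transform from the probe's image plane to a fixed room frame; the coordinate convention, pixel spacing, and function name are assumptions.

```python
import numpy as np

def pixels_to_world(frame: np.ndarray,
                    image_to_world: np.ndarray,
                    pixel_spacing=(0.0003, 0.0003)) -> np.ndarray:
    """Assign a 3-D world position to every pixel of a 2-D ultrasound frame.

    frame          -- 2-D array of grayscale intensities (rows x cols)
    image_to_world -- 4x4 homogeneous transform for this frame, as reported
                      by the object tracker (image plane assumed at z = 0)
    pixel_spacing  -- (row, col) spacing in meters (assumed values)

    Returns an (N, 4) array of [x, y, z, intensity] points.
    """
    rows, cols = frame.shape
    r_idx, c_idx = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")

    # Pixel centers expressed in the probe's image plane.
    x_local = c_idx.ravel() * pixel_spacing[1]
    y_local = r_idx.ravel() * pixel_spacing[0]
    ones = np.ones_like(x_local, dtype=float)
    local = np.stack([x_local, y_local, np.zeros_like(ones), ones])  # (4, N)

    world = (image_to_world @ local)[:3].T            # (N, 3) world positions
    return np.column_stack([world, frame.ravel()])    # keep intensity per point
```

Repeating this for every tracked frame and concatenating the results is one way the two-dimensional image data and ultrasound probe locations could be combined into the three-dimensional image data of Example 57.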
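
A minimal sketch of one plausible bone-enhancement heuristic for Example 59: bone surfaces in B-mode images tend to appear as bright reflectors with an acoustic shadow beneath them, so the score below blends each pixel's brightness with the darkness of the column segment under it. The disclosure does not specify the enhancement filter; the weights, window depth, and function name are assumptions.

```python
import numpy as np

def bone_enhance(frame: np.ndarray, shadow_depth: int = 40, weight: float = 0.5) -> np.ndarray:
    """Emphasize likely bone surfaces in a B-mode frame (illustrative only)."""
    img = frame.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize to [0, 1]

    rows, _ = img.shape
    shadow = np.zeros_like(img)
    for r in range(rows):
        below = img[r + 1:r + 1 + shadow_depth, :]
        if below.size:
            shadow[r, :] = 1.0 - below.mean(axis=0)  # darker below -> stronger cue

    # Blend the brightness and shadow cues into a single enhancement map.
    return (1.0 - weight) * img + weight * img * shadow
```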

Having thus described several aspects and embodiments of the present invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention as described herein. For example, various other means and/or structures for performing the functions and/or obtaining the results and/or one or more of the advantages described herein will be readily apparent to those of ordinary skill in the art, and each of these variations and/or modifications is considered to be within the scope of the embodiments described herein.

Those skilled in the art will recognize many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that inventive embodiments may be practiced otherwise than as specifically described. In addition, any combination of two or more features, systems, articles, materials, kits, and/or methods described herein, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.

The above-described embodiments may be implemented in various ways. One or more aspects and embodiments of the present application that involve the performance of a process or method may utilize program instructions executable by a device (e.g., a computer, a hardware processor, or another device) to perform, or control performance of, the process or method.

In this regard, the various inventive concepts may be embodied as a non-transitory computer memory and/or non-transitory computer readable storage medium (or multiple non-transitory computer readable storage media) (e.g., a computer memory, one or more floppy disks, optical disks, magnetic tapes, flash memories, circuit structures in field programmable gate arrays or other semiconductor devices, or other tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement one or more of the various embodiments described above.

The computer-readable medium can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement one or more of the above-described aspects. In some embodiments, the computer readable medium may be a non-transitory medium. The non-transitory computer memory or medium may be operatively coupled to a hardware processor and may include instructions for performing one or more aspects of the present invention.

The terms "program," "software," "application," and "application" are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects described above. In addition, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the present application need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present C

Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. In various embodiments, the functionality of the program modules may be combined or distributed as desired.

In addition, the data structures may be stored in any suitable form on a computer readable medium. For simplicity of illustration, the data structure may be shown with fields that are related by location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that convey the relationship between the fields. However, any suitable mechanism may be used to establish relationships between information in fields of a data structure, including through the use of pointers, tags, or other mechanisms that establish relationships between data elements.
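
As a concrete, non-limiting illustration of the point above, the relationship between a tracked ultrasound frame and its probe pose can be stored either positionally (parallel containers whose shared index conveys the link) or explicitly (a record that holds both values); the Python names below are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass
import numpy as np

# Positional form: the relationship is conveyed by a shared index.
frames = []   # frames[i] is the i-th 2-D B-mode image
poses = []    # poses[i] is the 4x4 probe pose acquired with frames[i]

# Explicit form: the relationship is conveyed by a reference inside the record.
@dataclass
class TrackedFrame:
    image: np.ndarray   # 2-D B-mode frame
    pose: np.ndarray    # 4x4 probe pose reported by the object tracker
```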

Also, as described, some aspects may be implemented as one or more methods. The actions performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts concurrently, even though shown as sequential acts in illustrative embodiments.
