Ultrasound method and apparatus

Document No.: 863454    Publication date: 2021-03-16

Description: This technology, "Ultrasound method and apparatus", was designed and created by R. Eckersley, J. Hajnal, A. Gomez and L. Peralta Pereira on 2019-06-28. The main content is as follows: The described aspects and embodiments provide an ultrasound method, an ultrasound apparatus, and a computer program product operable to perform the method. The ultrasound method comprises: configuring two or more separate ultrasound transmitters to transmit signals into a coincidence region; configuring a receive array to receive a wavefront representing the transmitted signal from each of the two or more transmitters after the transmitted signal has interacted with the medium located within the coincidence region; analyzing each received wavefront to determine the relative spatial location of each of the two or more separate ultrasound transmitters; and using the determined relative spatial location of each of the two or more separate ultrasound transmitters to coherently combine the wavefronts received at the receive array from each of the two or more transmitters after the transmitted signals interact with the medium located within the coincidence region. Thus, in some embodiments, the method provides a multi-transducer ultrasound imaging system by providing a robust way of accurately locating the transducers in the system in order to beamform the final image. The described methods and apparatus may improve imaging quality in terms of resolution, depth penetration, contrast, and signal-to-noise ratio (SNR).

1. An ultrasound method comprising:

configuring two or more separate ultrasound transmitters to transmit signals into a coincidence region;

configuring a receive array to receive a wavefront representing the transmitted signal from each of the two or more transmitters after the transmitted signal interacts with a medium located within the coincidence region;

analyzing each of the received wavefronts to determine a relative spatial location of each of the two or more separate ultrasound transmitters; and

after the transmit signals interact with the medium located within the region of coincidence, coherently signal combining the wavefronts received at the receive array from each of the two or more transmitters using the determined relative spatial locations of each of the two or more separate ultrasound transmitters.

2. The ultrasound method according to claim 1, wherein the analyzing comprises:

selecting one or more parameters that define the relative spatial location of each of the two or more separate ultrasound transmitters.

3. The ultrasound method according to claim 2, wherein the analyzing comprises:

using the received wavefront to make an initial guess of one or more parameters defining the relative spatial location of each of the two or more separate ultrasound transmitters.

4. The ultrasound method according to claim 2, wherein the analyzing comprises:

receiving, from one or more directional sensors disposed at each ultrasound transmitter, an indication of one or more parameters defining the relative spatial location of each of the two or more separate ultrasound transmitters.

5. The ultrasound method according to any of claims 2 to 4, wherein the parameters comprise: a combination of one or more parameters that enable determination of the relative spatial position of each of the two or more separate ultrasound transmitters.

6. The method of claim 5, wherein the parameters include one or more of: the location of one or more scatterers within the medium in the coincidence region; the relative angle between the ultrasound transmitters; the relative distance of the ultrasound transmitters to the receive array; a speed of sound within the medium located within the coincidence region.

7. The method of any one of claims 2 to 6, wherein the analyzing comprises: increasing a correspondence between the received wavefronts by refining the parameters defining the relative spatial locations of each of the two or more separate ultrasound transmitters.

8. The method of claim 7, wherein the correspondence comprises: correlation between the received wavefronts.

9. The method of claim 7 or 8, further comprising: using the refined parameters to select the relative spatial locations to be used when performing the coherent signal combining.

10. A computer program product operable, when executed on a computer, to perform the ultrasound method of any of claims 1 to 9.

11. An ultrasound device, comprising:

two or more separate ultrasound transmitters configured to transmit signals into a coincidence region;

a receive array configured to receive a wavefront representing the transmitted signal from each of the two or more transmitters after the transmitted signal interacts with a medium located within the coincidence region;

position processing logic configured to analyze each of the received wavefronts and determine a relative spatial position of each of the two or more separate ultrasound transmitters; and

signal combining logic configured to coherently signal combine the wavefronts received at the receive array from each of the two or more separate ultrasound transmitters using the determined relative spatial locations of each of the two or more separate ultrasound transmitters after the transmit signals interact with the medium located within the coincidence region.

12. The ultrasound device according to claim 11, wherein the two or more separate ultrasound transmitters are positioned such that their signal volumes at least partially overlap within the coincidence region.

13. The ultrasound device according to claim 11 or 12, wherein the two or more separate ultrasound transmitters are configured to transmit signals into the coincidence region substantially simultaneously.

14. The ultrasound device according to claim 11 or 12, wherein the two or more separate ultrasound transmitters are configured to transmit signals into the coincidence region sequentially.

15. The ultrasound device according to any one of claims 11 to 14, wherein the signal transmitted by each of the two or more transmitters comprises a plane wave.

16. The ultrasound device according to any one of claims 11 to 15, wherein the device further comprises:

at least one further receive array configured to receive the wavefront representing the transmit signal from each of the two or more transmitters after the transmit signal interacts with a medium located within the coincidence region;

wherein the position processing logic is configured to analyze each of the received wavefronts received at each receive array and determine a relative spatial position of each of the two or more separate ultrasound transmitters; and

wherein the signal combining logic is configured to use the determined relative spatial locations of each of the two or more separate ultrasound transmitters and each receive array to coherently reconstruct an image of a medium located within the coincident imaging region, by combining the wavefronts received at each of the receive arrays from each of the two or more transmitters after the transmit signals interact with the medium located within the coincidence region.

17. The ultrasound device according to claim 16, wherein at least one of the two or more separate ultrasound transmitters and one or more of the receive arrays are co-located to form an ultrasound transducer.

Technical Field

The described aspects and embodiments provide an ultrasound method and an ultrasound apparatus, as well as a computer program product operable to perform the method.

Background

Ultrasound is a widely used analytical tool. Advantages of ultrasound compared to other possible analytical tools include safety and low cost. However, conventional ultrasound systems may produce information that may be difficult to evaluate, for example, due to the limited resolution and viewpoint-related artifacts inherent to commonly used ultrasound transducers. Ultrasound imaging using typical ultrasound transducers can be particularly challenging, for example, if imaging at large depths is sought.

Disclosure of Invention

A first aspect provides an ultrasound method comprising: configuring two or more separate ultrasound transmitters to transmit signals to the coincident regions; configuring a receiving array to receive a wavefront representing the transmitted signal from each of the two or more transmitters after the transmitted signal has interacted with the medium located within the coincidence region; analyzing each received wavefront to determine the relative spatial location of each of the two or more separate ultrasound transmitters; and using the determined relative spatial location of each of the two or more separate ultrasound transmitters to coherently signal combine wavefronts received at the receive array from each of the two or more transmitters after the transmit signals interact with the medium located within the coincidence region.

Various mechanisms are known to improve data acquired using ultrasound techniques. Such mechanisms include, for example, compound data acquisition methods and system configurations, methods and system configurations that expand the field of view, and configurations that operate to increase the effective aperture of an ultrasound data acquisition system.

This first aspect recognizes that typical ultrasound transducers, including transmit and receive arrays, are typically designed for a particular application. For example, in a clinical or medical environment, the transducer is sized to allow an operator to hold and move the transducer, and is shaped and sized so that it can remain in contact with the surface of a human or animal body as it moves around the surface of the body. Other applications of ultrasound may have similar limitations with respect to the physical size of the ultrasound transmitter and/or receiver. Due to physical limitations, the data that can be acquired by ultrasound techniques may be limited. It is well known that increasing the effective aperture can improve the image created from acquired data, for example in optical and radio frequency systems.

The creation of extended-aperture ultrasound systems may be limited by complexity and expense, and ultrasound transducers with the large physical size needed to provide a large aperture have limited adaptability to different applications.

This first aspect recognizes that a method may be implemented using a simple ultrasound assembly that allows one or more challenges in ultrasound applications to be addressed. The method according to the first aspect recognises that one of the challenges in an ultrasound system may be the accurate and precise location of the transmit and receive elements in the system. A first aspect provides a method for locating critical elements in an ultrasound system based on information acquired by the system. In particular, rather than requiring knowledge or maintenance of the particular physical location of one or more elements forming the ultrasound system, the first aspect provides a method of determining the physical location by using ultrasound waves transmitted and received by the elements of the system while the elements are acquiring information about the medium under investigation using ultrasound methods. The method according to the first aspect may provide a mechanism that may both determine the locations of key operational elements of the ultrasound system and improve the interpretation of the data acquired by the ultrasound system after those locations have been determined.

The first aspect provides an ultrasound method. The ultrasound method may comprise a medical or clinical ultrasound method. The ultrasound method may comprise a medical ultrasound imaging method. The method may comprise the steps of: two or more separate ultrasound transmitters are configured to transmit signals into the coincident regions. Those emitters may comprise point emitters or emitting elements or arrays of emitters. The transmit array may include a plurality of transmit elements. In either case, the signals transmitted by two or more ultrasound transmitters pass through regions that at least partially overlap or coincide. The region may comprise an imaging region in which the medium to be investigated may be placed.

The method of the first aspect may comprise the steps of: the receiving array is configured to receive a wavefront representing the transmitted signal from each of the two or more transmitters after the transmitted signal interacts with the medium located within the coincidence region. The receive array may include a plurality of receiver elements configured to receive the transmitted signal after it is scattered by the medium under study. The method may include the step of analysing each received wavefront received by the receive array. Analyzing the form of the received wavefront at the receive array may allow the relative spatial location of each of the two or more separate ultrasound transmitters to be determined. Analyzing each wavefront received by the receive array may include analyzing one or more wavefronts received at the receive array based on signals transmitted by the first ultrasound transmitter and analyzing one or more wavefronts received at the receive array from the second ultrasound transmitter. Wavefronts received from the first and second ultrasound transmitters may be compared.

Then, the method may include: after the transmit signals interact with the medium located within the coincidence region, the determined relative spatial locations of each of the two or more separate ultrasound transmitters are used to coherently signal combine wavefronts received at the receive array from each of the two or more transmitters. Thus, by analyzing the received wavefronts over a time window to determine the relative spatial positions of the separate ultrasound transmitters, coherent signal combining may be performed, and thus it may be possible to obtain an improved image of the medium within the overlap region.

The method of the first aspect may be performed with as few as two effectively separate ultrasound transmitters. The transmitters may be remote from one another and/or physically separate. The receive array may be co-located with a transmitter or may be remote from the transmitters.

This first aspect recognises that the use of the ultrasound signals themselves to calculate the relative positions of the transmitters means that the physical positions of the ultrasound transmitters in space need not be known or constrained with precision (e.g. using a translation stage or similar device). An important requirement is that the signals from the transmitters received at the receive array at least partially overlap in the region of interest. In other words, if the transmitters are directed at the same (overlapping) volume of the target medium, the positions of the transmitters may be determined using the method of the first aspect and the ultrasound signals received at the receive array.

In one embodiment, the analysis includes: selecting one or more parameters that define the relative spatial location of each of the two or more separate ultrasound transmitters. Thus, any set of parameters that together define the location of the transmitters in space may be selected. In one embodiment, a set of parameters and a set of possible ranges for each parameter are selected. An initial "seed" guess within the relevant range of relative transmitter positions may be used as a starting point for the optimization method according to the first aspect.

In one embodiment, the analysis includes: using the received wavefront to make an initial guess of one or more parameters defining the relative spatial location of each of the two or more separate ultrasound transmitters. That is, a rough guess of the relative transmitter positions can be made based on the received wavefronts. For example, the wavefronts received from a scatterer within the medium following transmissions from each transmitter may be identified. A distance can then be estimated, because the difference in the receive times of two received wavefronts scattered by the same scatterer is due to the difference in the travel times from each transmitter to the common scatterer.

In one embodiment, the analysis includes: an indication of one or more parameters defining a relative spatial location of each of two or more separate ultrasound emitters is received from one or more directional sensors disposed at each ultrasound emitter. Thus, an initial guess may be provided by the one or more physical positioning sensors provided, which initial guess may be refined by the method according to the first aspect. For example, those sensors may be located on the emitter body.

In one embodiment, the parameters include: a combination of one or more parameters that allows the determination of the relative spatial position of each of two or more separate ultrasound transmitters. Thus, a combination of angles and distances and other similar parameters may be selected.

In one embodiment, the parameters include one or more of: the location of one or more scatterers within the medium in the coincidence region; the relative angles between the ultrasound transmitters; the relative distances of the ultrasound transmitters to the receive array; the speed of sound within the medium located in the coincidence region. In one embodiment, the parameters include: the location of one or more scatterers within the medium in the coincidence region; the relative angles between the ultrasound transmitters; the relative distances of the ultrasound transmitters to the receive array; and the speed of sound (or an equivalent parameter) in the medium located in the coincidence region.

In one embodiment, the analysis includes: increasing the correspondence between the received wavefronts by refining the parameters defining the relative spatial location of each of the two or more separate ultrasound transmitters. In one embodiment, the correspondence includes a correlation between the received wavefronts. Thus, the analysis step of the method of the first aspect may be performed using an iterative process. Various criteria may be used to stop the iterative refinement process. The stopping criterion may comprise a selected number of iterations. The stopping criterion may comprise a measure of fit passing a selected threshold. The stopping criterion may comprise the fitted parameters reaching a maximum or minimum value, or their rate of change flattening out.

In one embodiment, the method further comprises: using the refined parameters to select the relative spatial positions to be used when performing the coherent signal combining. Thus, once the precise spatial locations of the transmitters are calculated, coherent signal combining may be performed on the information originating from each transmitter and received at the receive array. That is, signals from two or more ultrasound transmitters received at the receive array may be matched.

Some embodiments of the first aspect may provide an ultrasound method comprising: configuring two or more separate ultrasound transmitters to transmit signals into a coincidence region; configuring a receive array to receive a wavefront representing the transmitted signal from each of the two or more transmitters after the transmitted signal has interacted with the medium located within the coincidence region; analyzing each received wavefront to determine an indication of the relative spatial location of each of the two or more separate ultrasound transmitters; and calculating one or more characteristics of the medium located within the coincidence region using the determined indications of the relative spatial locations of each of the two or more separate ultrasound transmitters. In some embodiments, the one or more characteristics may comprise the speed of sound within a (sub-)region of the medium. In some embodiments, the one or more characteristics may include a density map of a region of the medium. It is understood that wavefront aberrations caused by inhomogeneous media limit the quality of ultrasound images and are an important obstacle to achieving diffraction-limited resolution using large aperture transducers [18]. An embodiment of the method according to the first aspect may assume that the speed of sound is constant along the propagation path. However, since the speed of sound is a parameter that can be optimized, in some embodiments the described method can be applied to non-uniform media where the speed of sound varies in space. In this case, the medium may be modeled, for example, by piecewise continuous layers. The optimization method can be applied in a recursive manner, dividing the field of view into suitable sub-regions with different speeds of sound. A more accurate sound speed estimate may allow improved beamforming and higher-order phase aberration correction. Furthermore, a map of sound speeds within the medium can be used for tissue characterization.

Embodiments of this first aspect allow for a system that avoids the need for pre-calibration and/or advance knowledge of the relative positions of the two or more separate ultrasound transmitters arranged to transmit signals into the coincidence region. In particular, it is not necessary to perform a direct transmission from transmitter to receiver to calculate their relative positions; rather, the data obtained from the scattering medium under investigation may be used to calculate the relative positions of the transmitters. Using scatterers in the medium under study to determine the relative positions of the transmitters provides an efficient mechanism that ensures the geometry is always supported (provided that a coincidence region exists).

Some embodiments of this first aspect provide a method that uses shared information (e.g., dominant scatterers or other salient features in the received cross-transducer data) to enable the localization of the aperture even if there are no clear point targets in the medium under study. In some configurations, an external source of primary scatterers, such as a low concentration of microbubbles, may be used to assist in correlation between the received cross-transducer data.

Embodiments of this first aspect recognize that although the typical aperture (formed within each individual transmitter/receiver array) may be limited to the maximum available size set by the dispersion of the speed of sound in the medium under study, some embodiments may include a "super aperture" formed by multiple transmitter/receiver arrays and not constrained by that same maximum size.

A second aspect provides a computer program product operable, when executed on a computer, to perform the ultrasound method of the first aspect.

A third aspect provides an ultrasound device comprising: two or more separate ultrasound transmitters configured to transmit signals to the coincident region; a receive array configured to receive a wavefront representing an emission signal from each of two or more emitters after the emission signal interacts with a medium located within the coincidence region; position processing logic configured to analyze each received wavefront and determine a relative spatial position of each of two or more separate ultrasound transmitters; and signal combining logic configured to coherently signal combine wavefronts received at the receive array from each of the two or more transmitters using the determined relative spatial locations of each of the two or more separate ultrasound transmitters after the transmit signals interact with the medium located within the coincidence region.

In one embodiment, the two or more separate ultrasound transmitters are positioned such that their signal volumes at least partially overlap within the coincidence region. In other words, the two or more separate ultrasound transmitters are positioned such that the field of view, or cone-like field of view, of each separate ultrasound transmitter at least partially overlaps the fields of view of the other transmitters within the coincidence region.

In one embodiment, the ultrasound signal comprises a pulsed ultrasound signal. The repetition rate of the ultrasound pulses may depend on the depth to be imaged within the medium of interest. Higher pulse repetition frequencies provide denser temporal sampling of the medium of interest.

In one embodiment, the two or more separate ultrasound transmitters are configured to transmit signals into the coincidence region substantially simultaneously. In another embodiment, the two or more separate ultrasound transmitters are configured to transmit signals into the coincidence region sequentially. Depending on the application, an appropriate transmission mode may be selected. Simultaneous transmission may increase computational complexity but may also increase the sensitivity of the information collected by the receive array.

In one embodiment, the signal transmitted by each of the two or more transmitters comprises a plane wave. In one embodiment, the signal transmitted by each of the two or more transmitters comprises a signal from a point ultrasound source. In one embodiment, the transmitted signal comprises a known wave configuration. The transmitted signal may comprise any suitable known wave configuration, such as a sine wave or similar waveform.

In one embodiment, the apparatus further comprises: at least one further receive array configured to receive a wavefront representing the transmit signal from each of the two or more transmitters after the transmit signal has interacted with the medium located within the coincidence region; wherein the position processing logic is configured to analyze each received wavefront received at each receive array and determine a relative spatial position of each of two or more separate ultrasound transmitters; and wherein the signal combination logic is configured to use the determined relative spatial locations of each of the two or more separate ultrasound transmitters and each of the receive arrays to perform coherent image reconstruction of the medium located within the coincidence imaging region by combining wavefronts received at each of the receive arrays from each of the two or more transmitters after the transmit signals interact with the medium located within the coincidence region. Thus, the same analysis can be performed using two or more receive arrays, effectively increasing the receive aperture.

In one embodiment, at least one of two or more separate ultrasound transmitters and one or more receive arrays are collocated to form an ultrasound transducer. In one embodiment, each of two or more separate ultrasound transmitters and a receive array are co-located to form an ultrasound transducer.

Further specific and preferred aspects are set out in the accompanying independent and dependent claims. Features of the dependent claims may be combined with those of the independent claims as appropriate and in combinations other than those explicitly set out in the claims. In particular, features of the first aspect may be incorporated into the third aspect, and vice versa, as appropriate.

Where an apparatus feature is described as being operable to provide a function, it will be understood that this includes an apparatus feature that provides the function or is adapted or configured to provide the function.

Drawings

Embodiments of the invention will now be further described with reference to the accompanying drawings, in which:

FIG. 1 is a geometric representation of a multi-transducer beamforming scheme;

FIG. 2 schematically shows an experimental set-up comprising two ultrasonic transducers;

FIG. 3 shows the experimental setup of FIG. 2 in more detail;

FIG. 4 graphically illustrates a coherent multi-transducer image obtained using initial estimates and optimal values of parameters, the data corresponding to the data shown in Table I;

FIG. 5 is a box plot of normalized values of optimal parameters defining a rigid body transformation between a coordinate system and acoustic velocity during an experiment;

FIG. 6 shows a model image of a wire phantom obtained using a single transducer, a model image of a wire phantom obtained by non-coherently combining data acquired from two ultrasound transducers, and a model image of a wire phantom obtained by coherently combining data acquired from two ultrasound transducers;

FIGS. 7 and 8 show respective cross sections of the Point Spread Function (PSF) at the depth of the scatterer shown in FIG. 6;

FIG. 9 shows a comparison of PSF and k-space representations of envelope detection obtained using a single transducer and coherent multiple transducers;

FIG. 10 shows PSF and k-space representations of envelope detection for a multi-transducer ultrasound method without and with apodization, which compounds 121 plane waves covering a total angular range of 60 °;

FIG. 11 shows a set of individual sub-images forming the final "multi-coherent" image;

figure 12 shows experimental images of alignment phantoms obtained by different methods;

FIG. 13 is a schematic of the common field of view (FoV) of two probes T1 and T2;

FIG. 14 shows an example of a sound velocity diagram for a propagation medium having a muscle layer 8mm thick and a fat layer 25mm thick;

FIG. 15 is a schematic of the spatial position of two linear arrays;

FIG. 16 is a schematic illustration of the spatial locations of two linear arrays and their fields of view at different imaging depths;

FIG. 17 is a conventional aperture image;

FIG. 18 shows simulated PSF and lesion images from aberration-free media for increasing the effective aperture and gap of the CMTUS system;

FIG. 19 compares the computed image quality measurement indices of the CMTUS method and the single probe system;

FIG. 20 compares the CMTUS image with a single probe system at two different imaging depths (100mm and 155 mm);

FIG. 21 compares calculated quality measure indicators as a function of imaging depth;

FIG. 22 is a comparison of simulated images obtained with a conventional-aperture 1-probe system (a-d), 2 probes (e-h) and the CMTUS method (i-l) through an increasingly thick aberrating layer;

FIG. 23 shows simulated delayed radio frequency data for a medium with an adipose layer thickness of 35 mm;

FIG. 24 is a comparison of calculated quality measure indicators for different imaging methods;

figure 25 shows a comparison of phantom images obtained under control conditions and through paraffin samples, using 1 probe and using CMTUS;

FIG. 26 shows a comparison of calculated quality measure indicators, Lateral Resolution (LR), contrast, and contrast-to-noise ratio (CNR) for two different acquisition techniques;

FIG. 27 compares experimental point target images; and

figure 28 shows experimentally delayed radio frequency data obtained using different beamforming parameters.

Detailed Description

Before one particular embodiment is described in detail, a general overview of methods and apparatus utilizing the concepts is provided.

In imaging systems generally, expanding the aperture has the potential to improve imaging performance [1]. When using ultrasound as an analysis tool, especially in a clinical setting, the aperture size may be limited by the complexity and expense associated with extended-aperture systems. Furthermore, ultrasound transducers with the large physical dimensions needed to allow large apertures have limited adaptability to different applications.

Taking the clinical application of ultrasound imaging as an example, a typical clinical ultrasound probe is controlled and moved by a physician to adapt to the contours and shape of the human body. Physical ultrasound transducer size becomes a compromise between cost, ergonomics and image performance. It may be useful to provide a method of improving the quality of ultrasound images without changing the size of a conventional ultrasound probe.

In synthetic aperture ultrasound imaging [2], [3], improvements associated with a wider coherent aperture have been shown. In these devices, the extended aperture is obtained by mechanically moving and tracking the ultrasound transducer. The detailed position and orientation tracking information is used to identify the relative position and orientation of the acquired ultrasound images, which are then merged into the final image [4]. However, noise and calibration errors of the tracking system can propagate into the coherent image reconstruction, resulting in image degradation. In fact, sub-wavelength positioning accuracy is required to combine information from multiple poses. Achieving such accuracy is challenging in conventional ultrasound calibration, and more accurate calibration techniques [3], [5] are required for practical implementation. Furthermore, the feasibility of this technique in vivo is limited by long acquisition times (greater than 15 minutes per image), which may disrupt the coherent aperture [6]. Resolution is affected by motion artifacts, tissue deformation, and tissue aberrations, all of which worsen with increasing effective aperture [7].

Methods according to some aspects and embodiments may provide a fully coherent multi-transducer ultrasound imaging system. The system may be composed of a plurality of ultrasound transducers that are synchronized, freely arranged in space, and configured to transmit Plane Waves (PW). By coherently integrating the different transducers, a larger effective aperture can be obtained in both transmission and reception and an improved final image can be formed. As previously mentioned, the coherent combination of information obtained by the different transducers requires that the location of the transmitter and receiver within the system be known and sub-wavelength accuracy achieved.

In general, a method is described that enables accurate, sub-wavelength localization of the ultrasound transmitters (and receivers) in a multi-transmitter system. The multiple transducers in a multi-transducer ultrasound imaging system can be located, without the use of external tracking devices, based on the spatial coherence of the backscattered echoes from the same point scatterer received by the same transducer. The use of Plane Waves (PW) produces a higher-energy wave field than synthetic aperture methods, thereby increasing penetration. Higher frame rates can also be achieved using plane waves [8].

The principles of classical plane wave imaging, the terms used, and an overview of multi-transducer beamforming are summarized below. A method of accurately calculating the spatial positions of different transducers is described. Experimental phantom measurements are described and show the corresponding results obtained by the multi-transducer system. The results were compared to conventional plane wave imaging using a single transducer and incoherent composite images from multiple transducers.

Theory of the invention

Ultrasound image quality can be improved by reducing the F-number, which represents the ratio of the focal depth to the aperture size. Enlarging the aperture is a straightforward way to improve imaging performance. Thus, if the information from different transducers can be coherently combined, the aperture size of the system is greatly increased and an enhanced image can be expected.

In one possible coherent multi-transducer approach, a single transducer is used for each transmission to generate a Plane Wave (PW) that insonifies the entire field of view (FoV) of the transmitting transducer. Echoes scattered from the medium are recorded using all of the transducers forming part of the multi-transducer system. The data acquisition sequence is performed by transmitting from each individual transducer in turn. Following conventional plane wave imaging methods, knowing the position of each transducer (and taking into account the complete transmit and receive path lengths), the coherent summation of the data acquired by the multiple transducers can be used to form a larger aperture and obtain an image.

Multi-transducer notation and beamforming

A 3D architecture consisting of N matrix arrays, freely placed in space and having a partially shared field of view (FoV), was studied. This architecture represents the positioning of multiple ultrasound transducers. The transducers may be located anywhere in space, provided that their fields of view at least partially overlap. The transducers are synchronized (in other words, in this configuration, the trigger and sample times in the transmit and receive modes of the ultrasound transducers are the same). The ultrasound transducers are configured to transmit plane waves into the medium in turn. The structure is such that each transmitted wave is received by all the transducers, including the transmitting transducer. Thus, a single plane wave transmission produces N sets of radio frequency data, one for each receiving transducer.

The architecture is described using the following terminology:

points are indicated by capital letters (e.g., P);

vectors representing relative positions are represented by bold lower case letters (e.g., r);

unit vectors are marked with a hat (^); and

matrices are written in bold upper case letters (e.g., R).

The indexing convention is to use i for the transmit transducer, j for the receive transducer, h for the transducer element, and k for the scatterer. Other indices are also illustrated as used.

The device consists of N matrix array transducers T_i (i = 1, ..., N), each with H elements, as shown in fig. 1. The local coordinate system of T_i is defined by the axes {x^_i, y^_i, z^_i} and the origin O_i, where O_i is located at the center of the transducer surface and z^_i is normal to the transducer face, pointing away from transducer i. The plane wave transmitted by transducer T_i is defined by a plane P_i, which can be characterized by its normal n^_i and the origin O_i. The radio frequency data received by transducer j on element h at time t is denoted T_iR_j(h; t). The generated image and all transducer coordinates are defined in a world coordinate system arbitrarily positioned in space, except for specific references to the local coordinate system of a transducer, in which case the superscript i is used.

Fig. 1 is a geometric representation of a multi-transducer beamforming scheme. In the example shown in fig. 1, transducer T_1 emits a plane wave and T_2 receives the echo scattered from Q_k onto element h. Using the above notation, plane wave imaging beamforming [8] can be extended to the multi-transducer scheme shown in fig. 1. Assuming that transducer T_i emits a plane wave, the image point at Q_k can be beamformed from the echoes received by transducer T_j according to the following formula:

where c is the speed of sound of the medium and D is the distance over which the wave travels, which can be divided into a transmit distance and a receive distance:

where d_T is the distance between the point and the plane (transmit distance) and d_{R:h} is the distance between the point and receiving element h (receive distance). These distances can be calculated as follows:

and

where ||·|| is the usual Euclidean distance and R_j = [x^_j y^_j z^_j] is a 3x3 rotation matrix, which is parameterized by three rotation angles:

which, together with the offset O_j, characterizes the position and orientation of transducer T_j with 6 parameters [9].

After the total distance is calculated, equation (1) may be evaluated for each transmit-receive transducer pair, and a total beamformed image S(Q_k) may be obtained by coherently adding the individual beamformed images:

Calculation of transducer position

In order to perform the above-described coherent multi-transducer compounding, the location and orientation of each imaging transducer is required. This allows the time of flight of the wave transmitted to any receiving transducer to be calculated. This section describes a method for using the coherence of the received radio frequency data to accurately calculate these positions when the multiple transducers simultaneously receive the same transmitted (and scattered) wave. The method assumes that, except at the scatterer positions Q_k (k = 1, ..., K), the medium is substantially homogeneous, and all transducers are considered identical.

Consider the following transmit sequence:

a plane wave is transmitted by transducer T_i and received by the N transducers making up the multi-transducer system;

a plane wave is then transmitted by transducer T_j and is also received by all transducers;

this process continues until all N transducers have transmitted in sequence.

During the time each transmitter is operating in turn, it is assumed that the system and medium under study remain completely stationary.

When all the transducers are used to transmit in turn, the wavefields generated by the same scatterer and received by the same transducer T_j must be correlated, or have spatial covariance [10]. That is, for each element h, the only difference in timing is the transmit time (the receive times are equal because the receiving transducer is the same). When the transmit time difference is compensated, the signals received at element h will be correlated in time.

One method comprises finding the "best" parameters for which the temporal correlation between the radio frequency data sets received by the shared receiving transducer is greatest over all scatterers in the common field of view.

Since the reception time also depends on the speed of sound c in the medium and on the scatterer positions Q_k, the unknown parameters are:

θ = {c, Q_1, ..., Q_K, φ_1, O_1, ..., φ_N, O_N}    (6)

Note that, since the parameters defining the spatial positions of the transducers depend on the definition of the world coordinate system, the vector of unknown parameters can be reduced by defining the world coordinate system to be the same as the local coordinate system of one of the transducers.

The similarity between the signals received by the same element can be calculated using the normalized cross-correlation (NCC):

where y_{i,h,j,k} is the signal backscattered from Q_k and received by element h of transducer j when T_i transmits, which can be calculated as follows:

where T is the transmitted pulse length in time.

Then considering all the elements, the overall similarity between the radio frequency data received by the same transducer j can be calculated according to

where ỹ_{i,h,j,k} denotes the envelope of the signal y_{i,h,j,k}, computed using the Hilbert transform, and W_{i,h,j,k} is defined as:

The function W_{i,h,j,k} represents the weighting of each element of transducer j relative to the remaining elements of the same transducer.

If the intra-transducer channel correlation were not taken into account, undesirable situations (in which the wave reception times are wrong but are affected in a similar way for the different transmitting transducers) could yield low disparity, and hence apparently good similarity, for incorrect parameters.

Summing over all receiving transducers and scatterers yields the final cost function to be maximized:

The "optimal" parameters include the relative positions and orientations of all the transducers involved, the speed of sound in the medium, and the locations of the scatterers in the medium; they can be found by a search algorithm that maximizes the cost function:

The cost function in equation (12) can be maximized using a gradient-based optimization method [11].

Methods

Figure 2 schematically shows an experimental setup comprising two ultrasound transducers. The method was experimentally tested using two identical linear arrays with a partially shared field of view (FoV) of an ultrasound phantom. The two linear arrays are located in the same plane (y = 0). In such a 2D architecture, the parameters defining the transducer position and orientation reduce to a rotation angle and a 2D translation [9].

The experimental sequence starts with the transducer 1 emitting a plane wave into the region of interest, where 5 scatterers are located within the common field of view of the transducer 1 and the transducer 2.

In this system, both transducers receive the backscattered ultrasound field (T1R1 and T1R2). Under the same conditions, the sequence is repeated: transducer 2 transmits and the backscattered echoes T2R1 and T2R2 are acquired by both transducers.

Phantom

The acquisition was performed on a custom-made wire target phantom (200 μm diameter wires) immersed in distilled water. The phantom is located within the overlapping imaging regions of the multiple transducers, so that all scatterers are within the common field of view.

Experimental setup

The experimental setup included two synchronized 256-channel ultrasound advanced open platform (ULA-OP 256) systems (MSD Laboratory, University of Florence, Italy) [12], each ULA-OP 256 system being used to drive an ultrasound linear array consisting of 144 piezoelectric elements with a 6 dB bandwidth from 2 MHz to 7.5 MHz (imaging transducer LA332, Esaote, Florence, Italy). The probes were carefully aligned in the same elevation plane by precision optomechanical means prior to acquisition. Each probe is supported by a 3D-printed housing structure connected to a double-tilt rotation stage and then mounted on an xyz translation and rotation stage (Thorlabs, USA). The imaging planes (y = 0) of the two transducers are defined by two parallel wires immersed in the water tank.

Figure 3 shows the experimental setup of figure 2 in more detail. The components shown in fig. 3 are marked with letters: (A) linear array, (B) 3D-printed probe holder, (C) double-tilt rotation stage, (D) rotation stage, and (E) xyz translation stage.

Pulse sequencing and experimental scheme

Two independent experiments were performed. First, both probes were mounted and fixed in the opto-mechanical device described above for static acquisition. The second experiment was a free-hand demonstration, in which both probes were held and controlled by the operator. The movement of the transducers was carefully limited to the same elevation plane, i.e. y = 0, and two common targets were kept within the common field of view.

Two different types of pulse sequences are used.

In the static experiment, 121 plane waves covering a total sector angle of 60° (from -30° to 30°, in steps of 0.5°) were transmitted from the 144 elements of each probe at a frequency of 3 MHz with a pulse repetition frequency of 4000 Hz, in an alternating sequence for each probe, i.e. only one transducer transmits at a time while both probes receive. The total sector angle of the transmitted plane waves was chosen to be approximately the same as the angle defined between the probes. Raw radio frequency data backscattered from depths down to 77 mm were acquired at a sampling frequency of 39 MHz. No apodization was applied on either transmit or receive. The total time for this sequence is 60.5 milliseconds.

During the unconstrained demonstration, plane waves at 21 angles (from -5° to 5°, in steps of 0.5°) were transmitted from each probe, and the raw radio frequency data backscattered from depths down to 55 mm were acquired. The rest of the setup was the same as in the fixed-probe experiment. The total acquisition time for this sequence was 1 second.

Data processing

The initial estimate of the parameters θ_0 = {c, Q_1, ..., Q_K, φ_1, O_1, φ_2, O_2} required to initialize the optimization algorithm is chosen as follows:

the speed of sound of the propagation medium is chosen according to the literature; in the case of water, c = 1496 m/s [13].

Taking the world coordinate system to coincide with that of transducer 1 (φ_1 = 0, O_1 = [0, 0]), the parameters {φ_2, O_2} defining the position of transducer 2 are calculated using point-based image registration [14].

For the scatterer positions Q_k, the initial values are calculated by best fitting the one-way geometric delays of the echoes returned from the targets, as in [15].

All targets within a common field of view are used for optimization.

For static experiments, only one set of optimal parameters is needed because there is no motion, and all the radio frequency data corresponding to plane waves transmitted at different angles can be beamformed by using the same optimal parameters. However, to validate the optimization algorithm, 121 optimal parameter sets were calculated, one for each emission angle.

For the unconstrained demonstration, each frame is generated using a different set of optimization parameters, with each subsequent optimization being initialized with the optimal values of the previous frame. The proposed method is compared to conventional B-mode imaging using a single transducer and to non-coherent combining of B-mode images acquired by the two independent transducers. Images obtained in the static experiments were used for the image performance analysis. The individual images obtained in the sequence (T1R1, T1R2, T2R1, T2R2) are added to obtain the final image:

S(Q_k) = s_{1,1}(Q_k) + s_{1,2}(Q_k) + s_{2,1}(Q_k) + s_{2,2}(Q_k)    (13)

The spatial resolution is calculated from the Point Spread Function (PSF) of a single scatterer. The axial-lateral plane for the 2D PSF analysis is selected by finding the location of the peak in the elevation dimension from the envelope-detected data. The lateral and axial PSF profiles are taken through the center of the point target. The lateral resolution is then evaluated by measuring the PSF width at the -6 dB level, and the axial resolution is taken as the PSF size at the -6 dB level in the axial (depth) direction.

Furthermore, the performance of the proposed multi-transducer system in terms of image quality, e.g. resolution, is described in the frequency domain, or k-space, representation. The axial-lateral radio frequency PSF is extracted from the beamformed data and its k-space representation is computed using a 2D Fourier transform. While the axial resolution is determined by the transmitted pulse length and the transmit aperture function, the lateral response of the system can be predicted by the convolution of the transmit and receive aperture functions [16].

Results

In static experiments, the 121 optimal parameter sets calculated for each emission angle converged to the same result. The initial values and the best values obtained are summarized in table I below.

Table I initial estimates and optima of system parameters

FIG. 4 graphically illustrates a coherent multi-transducer image obtained using initial estimates and optimal values of parameters, which data corresponds to the data shown in Table I. It can be seen that after the optimization method is implemented, the blurring effect on the PSF in the image obtained using the initial estimation of the position parameters can be compensated.

The convergence shown in table I and fig. 4 was also verified by the results of the unconstrained experiments. In this case, each emission angle is optimized over the total acquisition time. After calculating the initial estimate of the position parameter of the first transmitted PW, each subsequent optimization is initialized with the optimal value of the previous transmit event.

Fig. 5 is a box plot of normalized values of the optimal parameters defining the rigid body transformation between coordinate system and acoustic velocity during the experiment. As can be predicted, the rotation parameter and the translation parameter exhibit a larger range of values, whereas the speed of sound in the medium can be considered to be substantially constant. The average value of the optimal sound speed in the acquisition time is 1466.00m/s, and the standard deviation is 0.66 m/s.

FIG. 6 shows the B-mode image of the wire phantom obtained using a single transducer (T1R1), the image obtained by incoherently combining the data acquired from the two ultrasound transducers (envelope-detected images T1R1 and T2R2), and the image obtained by coherently combining the data acquired from the two transducers (T1R1, T1R2, T2R1, T2R2).

By comparing the images from the single transducer with the images from the multi-transducer approach, it can be seen that the reconstructed image of the linear object is significantly improved.

The PSFs of the three images can be compared. Figs. 7 and 8 show the corresponding cross-sections of the PSF at the scatterer depth for each image shown in fig. 6, using a single plane wave at 0° and 121 plane waves compounded over a total angular range of 60°, respectively.

To analyze the multi-transducer approach, a world coordinate system is used in which the best resolution and a more conventional PSF shape can be achieved. This coordinate system is defined by rotating the coordinate system of transducer T_1 onto the bisector of the angle between the two transducers. In this coordinate system, the best possible resolution is aligned with the x-axis. The incoherent multi-transducer results show that the optimization is effective, because the optimized parameters are used to incoherently compound the envelope-detected sub-images T1R1 and T2R2. For the PSF generated by compounding 121 plane waves over a total angular range of 60°, an apodization effect in the multi-coherent PSF, which emphasizes low lateral frequencies, was analyzed. The performance of all these methods is summarized in Table II.

TABLE II imaging Performance of different methods

It can be seen that the lateral resolution of the coherent multi-transducer acquisition is best, while the worst lateral resolution corresponds to the incoherent image generated by combining the independent images acquired by the two transducers.

The characteristics of the side lobes differ greatly, and they are more pronounced in the coherent multi-transducer approach. When a single plane wave is used, the largest difference is between the second side lobes, which are 13 dB higher than in the conventional single-transducer method, while the difference in the first side lobes is 3.5 dB. This indicates that, although significant image improvement can be achieved, the image may be affected by side lobes. Apodization results in a significant reduction of the first side lobe and a 65% improvement in resolution over conventional images acquired by a single transducer.

Fig. 9 shows a comparison of the envelope-detected PSF and the k-space representations obtained using a single transducer and using the coherent multi-transducer method. In the k-space representation, the PSF obtained using a single transducer (T1R1) and the coherent compound image obtained by the two transducers are analyzed. Fig. 9 shows the corresponding result using a single plane wave at 0°. The image is presented in the local coordinate system of transducer 1. An important consequence of linear systems is that the superposition principle can be applied. As expected, the overall k-space representation shows an extended lateral region corresponding to the sum of the four individual k-spaces forming the image in the coherent multi-transducer approach.

It will be appreciated that, since the two transducers are identical but have different spatial locations, they exhibit the same k-space response (same transmit and receive aperture functions) at different spatial positions. The discontinuity in the system aperture, caused by the separation between the transducers, produces gaps in spatial frequency space. These gaps may be filled by compounding plane waves over a range of angles similar to the angle defined between the two transducers.

FIG. 10 shows the PSF and k-space representations of envelope detection for the multi-transducer ultrasound method with and without apodization, compounding 121 plane waves covering a total angular range of 60°. In particular, fig. 10 shows the resulting PSF and the corresponding continuous k-space after compounding 121 angles (which define a total sector of 60°) with a 0.5° spacing. The topography of the continuous k-space can be reshaped by weighting the data from the different images that are combined into the final image. A more conventional transfer function showing reduced side lobes may be created by emphasizing the low lateral spatial frequencies dominated by the sub-images T1R2 and T2R1. FIG. 10 shows the PSF, and its corresponding k-space representation, obtained by weighting the sub-images T1R1, T1R2, T2R1 and T2R2 with the vector [1; 2; 2; 1].

Discussion

The described research introduces a new synchronized multi-transducer ultrasound system and method that can significantly outperform conventional plane wave ultrasound imaging by coherently combining all the individual images acquired by the different transducers. In addition to the extended field of view allowed by the use of multiple transducers, the improvement in resolution is demonstrated experimentally.

Furthermore, the final image formed by the coherent combination of the sub-images may exhibit characteristics different from those of a single image. For example, within the common field of view of the multiple transducers, the final image may have a region providing the best performance, and the quality of the final image may deteriorate outside that region (where fewer transducers share a common field of view). The worst region of the final image is typically defined by the properties of the individual images and corresponds to the portion of the combined "final" image with no overlapping fields of view.

Different emission beam profiles (e.g., diverging waves) can increase the overlapping fields of view and expand the high resolution area of the final image.

The significant difference between the k-space representations of the single transducer approach and the multi-transducer approach shown in the figures further explains the difference in imaging performance. The more extensive the k-space representation, the higher the resolution [17 ].

The overall response of a multi-transducer system can be explained by the rotation and translation properties of the 2D Fourier transform. Its total extent determines the highest spatial frequency represented in the image and thus the resolution. The relative amplitude of the spatial frequency representation (i.e., the topography of k-space) determines the texture of the imaged object. Weighting the data from different transducers can reshape k-space to emphasize certain spatial frequencies and allows a more conventional system response to be created.
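
For intuition, the k-space representation discussed here is simply the 2D spatial Fourier transform of the PSF. The sketch below only illustrates how such a representation could be inspected, assuming a complex beamformed PSF patch is available; the function and argument names are illustrative, not part of the described system.

```python
import numpy as np

def kspace_magnitude(psf_patch, dz, dx):
    # 2D spatial-frequency magnitude of a PSF patch. The lateral extent of this
    # spectrum indicates the achievable lateral resolution, and weighting the
    # sub-images before summation reshapes its topography.
    K = np.fft.fftshift(np.fft.fft2(psf_patch))
    kz = np.fft.fftshift(np.fft.fftfreq(psf_patch.shape[0], d=dz))
    kx = np.fft.fftshift(np.fft.fftfreq(psf_patch.shape[1], d=dx))
    return np.abs(K), kz, kx
```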

The presence of uniformly spaced unfilled regions in the k-space response of the system may indicate the presence of grating lobes [16] in the spatial impulse response of the system. Sparse arrays (such as the dual transducer system described above) can produce gaps in the k-space response. If k-space has negligible gaps, the magnitude response of k-space becomes smooth and continuous within a limited region. This is the motivation for finding and using a good spatial distribution of the transducers in the system and suggests that while it may be beneficial to compound plane waves at different angles, it may not always be necessary to produce an improved image.

Wavefront aberrations caused by non-homogeneous media can limit the quality of ultrasound images and are an important obstacle to achieving diffraction limited resolution using large aperture transducers [18 ]. The above-described method and apparatus have been tested in a homogeneous medium where the speed of sound is constant along the propagation path. However, since the speed of sound is a parameter that can be optimized, the method can be applied to a non-uniform medium in which the speed of sound varies in space. For example, in this case, the medium may be modeled with piecewise continuous layers. The optimization method can be applied in a recursive manner, dividing the field of view into suitable sub-regions with different speeds of sound. A more accurate estimate of the speed of sound may allow improved beamforming and allow higher order phase aberration correction. Furthermore, sound velocity maps are very meaningful for tissue characterization [19], [20 ].

To successfully improve the PSF, the multi-transducer approach described above requires coherent alignment of the backscattered echoes from multiple transmit and receive locations. This requires precise knowledge of all transducer positions, which is practically impossible to obtain by manual measurement or by using electromagnetic or optical trackers [21]. The above-described method allows accurate and robust transducer localization based on the spatial coherence of backscattered echoes originating from the same scatterer and received by the same transducer. The precise transducer locations required to generate the coherent image are calculated by optimizing this spatial coherence. Using the gradient descent method requires an initial estimate of the parameters that is sufficiently close to the global maximum of the cost function. The distance between maxima, which corresponds to the pulse length, determines the tolerance. For the experimental setup described above, this is approximately 1.5 μs (corresponding to 2.19 mm). This tolerance can be achieved by image registration [14]. In fact, in the unconstrained case, assuming that the registration is accurate at some initial instant, a valid initial guess is ensured provided that the transducer movement between two transmissions is relatively small. The method has been validated in an unconstrained demonstration.
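
The localization step can be pictured as a generic cost-function optimization. The sketch below only illustrates the idea (maximizing the normalized cross-correlation of echoes aligned under candidate parameters); `delay_echoes` is a hypothetical user-supplied routine, the parametrization is an assumption, and any local optimizer (gradient descent, or a simplex search such as Nelder-Mead [11]) could be substituted.

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    # Zero-lag normalised cross-correlation of two echo traces.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def negative_coherence(params, rf_data, delay_echoes):
    # params      : candidate geometry / speed of sound, e.g. [x2, z2, theta2, c]
    # rf_data     : raw channel data from the sequential transmissions
    # delay_echoes: hypothetical routine returning the echoes from the common
    #               scatterer, received by the same transducer, delayed using
    #               the candidate parameters (correct parameters -> alignment)
    echoes = delay_echoes(rf_data, params)
    pairs = [(i, j) for i in range(len(echoes)) for j in range(i + 1, len(echoes))]
    return -np.mean([ncc(echoes[i], echoes[j]) for i, j in pairs])

# Illustrative call: the initial guess must lie within the ~half-pulse-length
# tolerance discussed above (e.g. from image registration or a prior frame).
# result = minimize(negative_coherence, x0=initial_guess,
#                   args=(rf_data, delay_echoes), method="Nelder-Mead")
```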

It should be appreciated that the experimental setup and the related method described above are limited in that they assume all transducers lie in the same plane, i.e. they share the same imaging plane. Prior to the imaging acquisition, an alignment step was performed to obtain the images shown in the figures. A 3D matrix array can be used to overcome these limitations and to build a higher resolution volume than current ultrasound transducer aperture sizes allow. It will also be appreciated that in order for the described optimization algorithm to converge to a unique solution, N point scatterers (as many as the number of transducers) may be required in the common field of view. In practice, there may be a number of significant scatterers in the medium, so this limitation is not significant. Although this method has been validated for point scatterers, different scatterer types may require different methods.

The different transmit and receive paths may experience unique clutter effects [22], generating spatial incoherent noise and PSF distortion, which may form the basis for further work.

In conventional plane wave imaging, the frame rate is limited by the propagation time and the decay time, which depend on the speed of sound and the attenuation coefficient. In the experimental setup described above, the minimum time between two insonifications was about 94 μs. Thus, the maximum frame rate is limited to 10.7 kHz, which may be further reduced when different compound angles are used. In the multi-transducer approach, the frame rate is reduced by the number of transducers to Fmax/N.
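
As a quick check of the frame-rate arithmetic (values taken from the text; the exact figure depends on rounding of the 94 μs interval):

```python
t_min = 94e-6                        # minimum time between insonifications [s]
f_max = 1.0 / t_min                  # ~10.6 kHz, quoted above as 10.7 kHz
n_transducers = 2
f_max_multi = f_max / n_transducers  # alternating transmissions: Fmax / N
print(f"Fmax ~ {f_max/1e3:.1f} kHz, multi-transducer ~ {f_max_multi/1e3:.1f} kHz")
```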

Figure 11 shows a set of individual sub-images forming the final "multi-coherent" image. These data are obtained by individually beamforming the 4 sets of radio frequency data obtained from a complete sequence, i.e. probe T1 transmits a plane wave at 0° while both probes receive (T1R1, T1R2), and the transmission is repeated using probe T2 (T2R1, T2R2). The optimal parameters for reconstructing the image are O2 = [41.10, 25.00] mm, c = 1437.3 m/s. The straight lines represent the fields of view of transducers T1 (vertical) and T2 (tilted).

Figure 12 shows experimental images of the alignment phantom obtained by different methods. FIG. 12(a) shows coherent plane wave compounding of 41 plane waves using transducer T1; FIG. 12(b) shows coherent plane wave compounding of 41 plane waves using transducer T2; FIG. 12(c) shows the coherent multi-transducer approach with a single plane wave transmitted at 0° from each transducer; FIG. 12(d) shows the coherent multi-transducer approach with additional compounding and 41 plane waves transmitted per transducer. The optimal parameters for reconstructing the multi-coherent image are O2 = [41.10; 25.00] mm, c = 1437.3 m/s. The straight lines represent the fields of view of transducers T1 (vertical) and T2 (tilted).

The results obtained from the anechoic lesion phantom are shown in Figs. 11 and 12, where the field of view (FoV) of each transducer is represented by a vertical line and an oblique line (T1 and T2, respectively). Figure 11 shows the individual sub-images forming the final multi-coherent image, obtained by beamforming the 4 radio frequency data sets acquired in a single cycle of the imaging process, i.e. probe T1 transmits a plane wave at 0° while both probes receive (T1R1, T1R2), and the transmission is repeated using probe T2 (T2R1, T2R2). These sub-images can be reconstructed after finding the relative positions of the probes through the optimization. A direct result of combining these 4 sub-images is the extended field of view of the multi-coherent image. Fig. 12(c) shows the multi-coherent image obtained by coherently compounding the 4 sub-images. It can be seen that any overlapping regions in the sub-images help to improve the resolution of the final multi-coherent image, as predicted by the k-space representation, because an effective enlarged aperture is created.

Figure 12 compares the use of coherent plane wave compounding with a single transducer (T1R1 and T2R2, compounding 41 plane wave angles) against the coherent compounding of the radio frequency data acquired by the two transducers (using equation (6)), transmitting a single plane wave at 0° from each transducer and transmitting 41 plane waves from each transducer.

Table II: Imaging performance of different methods evaluated using control phantoms

Table II above shows the corresponding imaging metrics in terms of lateral resolution, contrast, CNR, and frame rate. To reconstruct the coherent multi-transducer images, initial estimates of the parameters were chosen as described above and 3 strong scatterers produced by nylon wires were used in the optimization. It can be seen that, in general, the multi-coherent image has better defined edges, making boundaries easier to delineate than in an image obtained by a single transducer. The reconstructed image of the linear object is significantly improved, the speckle size is reduced, and the anechoic regions are easily identified from the phantom background. The coherent multi-transducer approach greatly improves resolution at the expense of slightly lower contrast, without sacrificing frame rate. For a single transducer using coherent compounding, the lateral resolution measured at the first target location was 1.555 mm (at a frame rate of 260 Hz). Using the multi-probe image (without additional compounding), the resolution improved to 0.713 mm (and the frame rate to 5350 Hz). In the case of a single transducer, the lesion was visible with a contrast of -8.26 dB and a CNR of 0.795, whereas in the multi-transducer coherent image both measurements were slightly reduced (without additional compounding) to -7.251 dB and 0.721, respectively. Compounding 41 plane waves on each probe improves these two metrics to -8.608 dB and 0.793. These results indicate that target detectability is a function of both resolution and contrast.

The relationship between imaging depth and the included angle between the two probes was also studied. Fig. 13 shows a spatial representation of the fields of view of the two linear arrays and the common depth of field measured at the intersection of the centers of the two individual fields of view. The depth of the common field of view as a function of the angle between the two probes is shown for a plane wave transmitted at 0°. As can be seen from Fig. 13, the greater the angle between the probes, the greater the imaging depth.

The described architecture introduces a coherent multi-transducer ultrasound system that significantly outperforms a single-transducer architecture through the coherent combination of signals obtained by different synchronized transducers sharing a common field of view. Although the described experiments were performed as a 2D demonstration using linear arrays, the proposed architecture extends to a third spatial dimension. The use of a matrix array capable of volume acquisition is useful for true 3D implementations. Since the multi-coherent image is formed from 4 sets of radio frequency data acquired in two successive transmissions, it should be appreciated that tissue and/or probe motion must not disrupt coherence between successive acquisitions. To ensure this, high frame rate acquisition is useful. Although the described structure uses plane waves, different transmit beam profiles (e.g., diverging waves) may increase the overlapping fields of view, thereby expanding the final high resolution image. In fact, there is a complex interplay between field of view and resolution gain when one probe is moved relative to the other.

In the proposed method, the overlap of the insonated regions makes it possible to determine the relative probe positions. Any overlap in the transmit or receive sensitivity fields contributes to improved resolution due to the increased aperture of the transducer assembly. The final image achieves an extended field of view, but the resolution increases only in the region of the overlapping fields. This gain is highest in the center, where the overlap includes both transmission and reception by the two probes. There is also an improvement, albeit smaller, in regions where the overlap exists only in the transmit or the receive field (see Figs. 11 and 12). Thus, there are different net gains at different locations. In a similar manner, this also determines the imaging depth achieved by the described method. Although the relative positions of the individual transducers and the angle of the transmitted plane waves determine the depth of the common field of view (see Fig. 12), improved imaging sensitivity is expected in the depth region because the effective receive aperture is larger than in a single-probe system.

The resolution enhancement depends primarily on the effective extended aperture, rather than on compounding plane waves at different angles. The results show that in the coherent multi-transducer approach a trade-off is required between resolution and contrast [18]. Although large gaps between the probes may result in an enlarged aperture that improves resolution, the contrast may be affected due to side lobe effects associated with the resulting discontinuous aperture. Further coherent compounding can be used to improve contrast by reducing side lobes. Fig. 12 shows that the detectability of the target is determined by both resolution and contrast [29]. The difference in imaging performance is further explained by the difference between the k-space representations of the single-transducer approach and the coherent multi-transducer approach; the more extensive the k-space representation, the higher the resolution [30]. The relative amplitude of the spatial frequency representation, i.e., the topography of k-space, determines the texture of the imaged object. Weighting the individual data from different transducers can reshape k-space to emphasize certain spatial frequencies, thus potentially creating a more conventional response for the system. Furthermore, the presence of uniformly distributed unfilled regions in the k-space response of the system may indicate the presence of grating lobes [28] in the spatial impulse response of the system. Sparse arrays may produce gaps in the k-space response. Only with minimal separation between transducers will the k-space amplitude response become smooth and continuous over an extended region. This indicates that there is an interplay between the relative spatial positioning of the individual transducers and the angle of the emitted plane waves; one or both of them may determine the achievable resolution and contrast in the final image [18].

The relative position data can be used to determine the range of plane wave angles to use and to alter these angles in real time to adaptively alter system performance. In real-life applications, resolution and contrast can be affected by a complex combination of probe spacing and angle, aperture width, transmitted plane wave angle, and imaging depth. It should be understood that different factors may determine the image performance of the system. Image enhancement associated with increasing aperture size is well described [12 ]. However, in clinical practice, the aperture is limited, as expanding the aperture generally means increasing the cost and complexity of the system. The described embodiments use conventional equipment and image-based calibration to extend the effective aperture size while increasing the amount of radio frequency data (data x N) received.

According to the described architecture, the estimated time for the "first" initialization of the system is less than 1 minute, comparable to the other calibration methods [31], [32 ]. Once the algorithm is properly initialized, the subsequent run time of the optimization can be greatly reduced. For example, in an unconstrained experiment, each optimization is initialized with the output of the previous acquisition, which is 4 times faster than the first optimization.

With respect to data volume, similar to very large data 3D and 4D ultrafast imaging [33], in the proposed multi-transducer approach, the computation may be a bottleneck for real-time imaging. Graphics Processing Unit (GPU) based platforms and high speed buses are key to future implementations of these new imaging modes [34 ].

In addition to system complexity, large aperture arrays present ergonomic operating problems and have limited flexibility to accommodate different applications. In the described configuration, the enlarged aperture is a result of stacking multiple freely placed transducers together, which allows for greater flexibility. The small array is easily attached to the skin and conforms to the body shape. While the use of multiple probes may increase the operational difficulty of an individual performing a scan, the multiple probes may be operated by using a single, possibly adjustable, multi-probe mount that allows the operator to grasp the multiple probes with only one hand while remaining pointed at the same region of interest. Such probe holders have proven to be a potential device for incoherent combining of multiple images for extended field of view imaging [4 ].

The described methods and structures may provide a different strategy in ultrasound, in which large assemblies of individual arrays may be operated coherently together. In order to successfully improve the PSF, the multi-transducer approach according to this architecture requires coherent alignment of the backscattered echoes from multiple transmit and receive locations. This can be achieved by knowing the position of all transducers exactly, which in practice is not possible by manual measurement or by using electromagnetic or optical trackers [35]. The described method provides an accurate and robust means of transducer localization by using sequential transmissions from each transducer of the system and maximizing the coherence of backscattered echoes caused by the same point scatterers and received by the same transducer.

Spatial calibration, equivalent to that used to provide unconstrained tracked ultrasound for image-guided applications [31], [32], helps to ensure the performance of the described multi-coherent ultrasound method. It will be appreciated that the use of the gradient descent method requires an initial estimate of the parameters (including the position of the calibration target) that is sufficiently close to the global maximum of the cost function. The distance between maxima of the NCC, which corresponds to the pulse length, determines this tolerance. For the experimental setup described above, this is about 1.5 μs (corresponding to 2.19 mm). This tolerance can in practice be achieved by image registration [27]. Indeed, in the unconstrained case, and assuming that the registration is accurate at some initial time, a valid initial guess is ensured if the transducers move relatively little in the time between two transmissions and share a common field of view. In plane wave imaging, the frame rate is limited only by the round-trip travel time, which depends on the speed of sound and the depth. For the experimental setup described, the minimum time between two insonifications is about 94 μs. Thus, the maximum frame rate is limited to Fmax = 10.7 kHz; in the described multi-transducer coherent method, the maximum frame rate is reduced to Fmax/N, depending on the number of probes. To guarantee the unconstrained performance of the described embodiment of the multi-transducer approach, a fully coherent summation must be achieved over the successive transmissions of the N transducers of the system. However, this condition is no longer satisfied when an insonated object moves between transmit events. In other words, unconstrained performance is limited by the maximum speed at which the probes move. Considering that coherence breaks at the speed at which the observed displacement exceeds half the pulse wavelength per frame [26], the maximum probe speed is Vmax = λFmax/(2N), which is 1.33 m/s in the example shown here. This speed is far beyond the hand movement speed of a typical operator during a conventional scan, so that a coherent summation over two consecutive transmissions can be achieved. The method has been validated in an unconstrained demonstration.
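
The quoted motion limit follows directly from the stated quantities. Below is a small worked check, assuming a 3 MHz transmit centre frequency and the speed of sound in water used elsewhere in the text:

```python
c = 1496.0             # speed of sound in water [m/s] (reference value used below)
f0 = 3e6               # transmit centre frequency [Hz]
lam = c / f0           # pulse wavelength, ~0.5 mm
f_max = 10.7e3         # maximum frame rate [Hz]
n = 2                  # number of transducers
v_max = lam * f_max / (2 * n)     # Vmax = lambda * Fmax / (2N)
print(f"Vmax ~ {v_max:.2f} m/s")  # ~1.33 m/s, matching the value quoted above
```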

Wavefront aberrations caused by non-uniform media can greatly limit the quality of medical ultrasound images and are a major obstacle to obtaining diffraction-limited resolution using large aperture transducers [36]. The technique described in this work was tested in a scattering medium, assuming that the speed of sound along the propagation path is constant. However, since the speed of sound is one of the optimized parameters, the technique is applicable to a non-uniform medium [18] in which the speed of sound varies in space. In this case, the medium can be modeled by piecewise continuous layers. The optimization method can be applied in a recursive manner, dividing the field of view into sub-regions with different speeds of sound. A more accurate speed of sound estimate will improve the beamforming and allow higher order phase aberration correction. It should be understood that "speed of sound" maps would be of great interest in tissue characterization [37], [38].

In addition, multiple transducers allow the same region to be insonified from different angles, which may give insight into aberration problems and help test new algorithms for clutter removal.

The approach presented herein has been formulated and validated for detectable, isolated point scatterers within a common imaging region, which are not always available in practice. Although the theory has been formulated for point-like scatterers, the method relies on a measure of coherence, which may be more forgiving, as shown in the phantom results of Figure 12. This shows that the method works in the presence of prominent local features that can be identified, and that the concept of maximizing the coherence of the data received by each receiver array when insonated by different transmitters can be applied more widely. Indeed, optimization based on spatial coherence without point targets may be more robust [39]-[41], owing to the expected decorrelation of speckle with receiver position. This may also result in increased computational efficiency. Spatial coherence measurements have previously been used in applications such as phase aberration correction [42], flow measurement [43], and beamforming [44]. Alternatively, isolated point scatterers may be artificially generated by other techniques, such as the addition of microbubble contrast agents [45].

Ultrasound super resolution imaging recognizes that individual, spatially isolated bubbles can be considered point scatterers in the acoustic field [46] and can be precisely located [47]. The feasibility of the coherent multi-transducer approach in complex media may benefit from new approaches based primarily on spatial coherence [20], [40] and from the potential use of microbubbles.

The described structure may provide a new coherent multi-transducer ultrasound imaging system and a robust method of accurately positioning multiple transducers. The sub-wavelength localization accuracy required to combine information from multiple probes is achieved by optimizing the coherence function of the backscattered echoes originating from the same point scatterers, insonated in turn by all transducers and received by the same transducer, without the use of external tracking equipment.

The described theory applies to multiple 2D arrays placed in 3D, and the method is experimentally validated in a 2D architecture using a pair of linear arrays and an ultrasound phantom. Improvements in imaging quality have been shown. In general, the performance of the multi-transducer approach is superior to plane wave imaging using a single linear array. The results show that coherent multi-transducer imaging has the potential to improve the quality of ultrasound images in a variety of situations.

As described above, a coherent multi-transducer ultrasound imaging system (CMTUS) achieves an extended effective aperture (super aperture) by coherent combination of multiple transducers. As described above, by coherently combining Radio Frequency (RF) data acquired by a plurality of synchronized transducers that in turn transmit Plane Waves (PW) into a common field of view, an image of improved quality can be obtained. In this coherent multi-transducer ultrasound (CMTUS) approach, optimal beamforming parameters, including transducer location and average speed of sound in the medium under study, can be derived by maximizing the coherence of the received radio frequency data using cross-correlation techniques. Thus, a discontinuous large effective aperture (super aperture) is created, significantly improving imaging resolution. At the same time, creating a large aperture using multiple arrays rather than using a single large array may be more flexible for different situations (e.g., typical intercostal imaging applications where the acoustic window is narrow), and the discontinuity dictated by the spatial separation between the multiple transducers may determine the overall performance of the CMTUS method. It will be appreciated that there is a trade-off between resolution and contrast due to the aperture discontinuity.

It is recognized that, since the average speed of sound in the medium under study is optimized by the CMTUS method, it may be desirable to further improve beamforming with some higher order phase aberration correction.

Heterogeneous media

The k-Wave Matlab toolbox was used to simulate nonlinear wave propagation through non-uniform media (Treeby and Cox, 2010; Treeby et al., 2012). A CMTUS system formed by two identical linear arrays (similar to the experimentally available arrays) was simulated as follows:

Each array had a center frequency of 3 MHz, 144 active elements per array during transmit and receive, a pitch of 240 μm, and a kerf of 40 μm. For plane waves, the modeled transducer has an infinite axial focus and all 144 elements are fired simultaneously. Apodization on the transducer is modeled by applying a Hanning filter across the width of the transducer. Table IV summarizes the simulation parameters that define each linear array.

TABLE IV

Simulations were performed for each transmit event, i.e., each plane wave at a particular angle. A total of 7 transmit simulations were performed per linear array to generate a plane wave data set covering a total sector angle of 30° (from -15° to 15°, in steps of 5°). In the case of CMTUS, this results in a total of 14 transmit events (7 plane waves per array). This plane wave sequence was chosen so that its resolution matches that of a focused system with an F-number of 1.9, thus reducing the number of angles required by 6 and optimizing the simulation time without affecting the resolution. The spatial grid is fixed at 40 μm (six grid points per wavelength) and the time step corresponds to a Courant-Friedrichs-Lewy (CFL) condition of 0.05 relative to a propagation velocity of 1540 m/s. The received signals are downsampled at 30.8 MHz. Channel noise was introduced into the simulated radio frequency data as Gaussian noise with a signal-to-noise ratio of 35 dB at 50 mm imaging depth.
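
For reference, the time step implied by the stated grid spacing and CFL number can be computed directly (a simple check, not part of the original simulation code):

```python
dx = 40e-6       # spatial grid spacing [m]
c_ref = 1540.0   # reference propagation speed [m/s]
cfl = 0.05       # stated CFL number
dt = cfl * dx / c_ref   # time step, ~1.3 ns
fs_sim = 1.0 / dt       # internal sampling rate, ~770 MHz
print(f"dt ~ {dt*1e9:.2f} ns, simulation rate ~ {fs_sim/1e6:.0f} MHz "
      f"(receive data stored at 30.8 MHz after downsampling)")
```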

Ultrasound pulses propagate through heterogeneous scattering media described by tissue maps (speed of sound, density, attenuation, and nonlinearity). A medium defined only by the properties of generic soft tissue was used as the control case. To model the scattering properties observed in vivo, sub-resolution scatterers were added to the tissue map. A total of 15 scatterers of 40 μm diameter were added per resolution cell, each with random spatial position and amplitude (defined by a 5% difference in speed of sound and density from the surrounding medium), so as to fully develop speckle. Three point-like targets and an anechoic lesion were included in the medium to allow measurement of basic metrics for comparing the imaging quality of the different scenes. A circular anechoic lesion of 12 mm diameter located at the aperture center (common field of view) of both arrays was modeled as a scatterer-free region. The point targets were simulated as circles of 0.2 mm diameter with a 25% difference in speed of sound and density from the surrounding tissue to produce appreciable reflections. The same realization of scatterers is used across the different simulations so as to preserve the speckle pattern in the CMTUS system; any change in the image quality metrics therefore depends on the overlying tissue, the imaging depth, and changes in the acoustic field.
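
A minimal sketch of how such a scatterer map could be generated is shown below. It only illustrates the idea (random sub-resolution perturbations of sound speed and density of up to 5%); the parameter names and the resolution-cell bookkeeping are assumptions, not the original simulation code.

```python
import numpy as np

def add_subresolution_scatterers(c_map, rho_map, dx, res_cell_area,
                                 n_per_cell=15, max_contrast=0.05, seed=0):
    # Sprinkle point-like perturbations of sound speed and density (up to
    # +/- max_contrast of the local value) at random grid positions so that
    # fully developed speckle appears in the simulated images.
    rng = np.random.default_rng(seed)
    nz, nx = c_map.shape
    n_cells = (nz * nx * dx * dx) / res_cell_area
    n_scat = int(n_per_cell * n_cells)
    iz = rng.integers(0, nz, n_scat)
    ix = rng.integers(0, nx, n_scat)
    amp = max_contrast * rng.uniform(-1.0, 1.0, n_scat)
    c_map, rho_map = c_map.copy(), rho_map.copy()
    c_map[iz, ix] *= (1.0 + amp)
    rho_map[iz, ix] *= (1.0 + amp)
    return c_map, rho_map
```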

The k-Wave Matlab toolbox uses a Fourier collocation method to compute the spatial derivatives and numerically solve the governing model equations, which requires discretization of the simulation domain into an orthogonal grid. A continuously defined sound source and medium therefore need to be sampled on this computational grid, and staircasing errors occur when the source is not perfectly aligned with the simulation grid. To minimize these errors, the transmit array is always aligned with the computational grid, i.e., simulations are performed in the local coordinate system of the transmitting array. This means that, to simulate the transmit sequence of array T2, the propagation medium (including the sub-resolution scatterers) is transformed into the local coordinate system of probe T2 using the same transformation matrix that defines the relative positions of the two transducers in space. Fig. 14 shows a sample tissue map with the transducers, point-like targets and anechoic lesion locations represented in the two local coordinate systems.
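
The transformation into the local frame of the transmitting array amounts to a 2D rigid transform of the scatterer (and target) coordinates. Below is a minimal sketch under that assumption; the function name, argument names, and example values are illustrative only.

```python
import numpy as np

def to_local_frame(points, origin, theta):
    # Map (N, 2) coordinates from the world / T1 frame into the local frame of
    # an array whose origin lies at 'origin' and which is rotated by 'theta'
    # radians relative to the world frame (the inverse rotation is applied).
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return (np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)) @ R

# Example: express a scatterer at [10 mm, 50 mm] in the frame of a probe
# located at [46.6 mm, 12.3 mm] and rotated by 30 degrees (illustrative values).
local = to_local_frame([[0.010, 0.050]], [0.0466, 0.0123], np.deg2rad(30.0))
```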

Fig. 14 shows an example of a speed of sound map of a propagation medium having a muscle layer of 8 mm thickness and a fat layer of 25 mm. The positions of the ultrasound probes, the point-like targets and the anechoic lesion are shown. FIG. 14(a) shows the medium in the local coordinate system of array T1, used to simulate the radio frequency data T1R1 and T1R2, i.e. when array T1 transmits. FIG. 14(b) shows the medium in the local coordinate system of array T2, used to simulate the radio frequency data T2R1 and T2R2, i.e. when array T2 transmits. In this example, the angle between the probes defining their position in space is 60°, and the corresponding imaging depth is 75 mm.

Discontinuous effective aperture of CMTUS

The above demonstrates that the discontinuous effective aperture obtained by CMTUS determines the quality of the resulting image. To study the effect of the discontinuous aperture determined by the relative position of the CMTUS arrays in space, different CMTUS systems with different spatial configurations were modeled. Simulations were performed in the same control medium, considering only soft tissue. The relative position of the probes was modified to change the angle between the arrays while keeping the imaging depth fixed at 75 mm. Array T1 is always centered on the x-axis of the simulation grid, while array T2 rotates around the center of the propagation medium. Different cases were then simulated in which the two arrays of the CMTUS system were positioned at angles from 30° to 75° in steps of 15°.

Fig. 15 shows a schematic diagram of the probes in space, with the different spatial parameters marked (angle θ between the probes, gap, and resulting effective aperture Ef). Note that at larger angles, both the effective aperture of the system defined by the two probes and the gap between them increase. The relationship between probe position and the resulting effective aperture and gap is shown in Figure 15.

CMTUS imaging penetration

The imaging penetration of CMTUS was studied by changing the local orientation of the arrays and using the same control propagation medium (soft tissue only). For a given effective aperture (fixed gap), each probe is rotated about its center by the same angle but in opposite directions. Thus, a given rotation, e.g. T1 in the negative direction and T2 in the positive direction, results in a deeper common field of view, while the opposite rotation results in a shallower one. Figure 16 shows that the imaging depth depends on the orientation of the transducers (defined by the position of the common field of view of the two arrays). Using this scheme, four different imaging depths were simulated: 57.5 mm, 75 mm, 108 mm and 132 mm.

FIG. 16 shows the two linear arrays T1 and T2 and their fields of view at different imaging depths. The imaging depth is obtained by steering the linear arrays by the same angle but in opposite directions. Three different cases are shown: (a) 57.5 mm imaging depth; (b) 75 mm imaging depth; (c) 108 mm imaging depth. The circle represents the center of the common field of view, which defines the imaging depth in CMTUS.

CMTUS through aberrating media

To investigate the effect of aberrating inhomogeneities in the medium, three different tissues (generic soft tissue, fat and muscle) were defined in the propagation medium. The imaging depth was set at 75 mm and the array configuration in space defined an effective aperture of 104.7 mm and a gap of 45.3 mm. The acoustic properties assigned to each tissue type were selected from the literature and are listed below:

The medium defined with soft tissue properties alone was used as the control case. The clutter effect was then analyzed using heterogeneous media, in which two layers with the acoustic properties of muscle and fat were introduced into the control medium. In the different study cases, the thickness of the muscle layer was set to 8 mm, while the thickness of the fat layer ranged from 5 mm to 35 mm. Fig. 14 shows an example of a propagation medium with a muscle layer of 8 mm and a fat layer of 25 mm.

In vitro experiments

Sequences similar to those used in the simulations were used to image the phantom. The imaging system consisted of two 256-channel ultrasound advanced open platform (ULA-OP 256) systems (MSD Laboratory, University of Florence, Italy). The systems are synchronized, i.e. they have the same trigger time and sample time in transmit mode and receive mode. Each ULA-OP 256 system is used to drive an ultrasound linear array made of 144 piezoelectric elements with a -6 dB bandwidth in the range of 2 MHz to 7.5 MHz (imaging transducer LA332, Esaote, Florence, Italy). The two probes were mounted on xyz translation and rotation stages (Thorlabs, USA) and carefully aligned in the same elevation plane (y = 0). For each probe in the alternating sequence, i.e. only one probe transmitting at a time with both receiving, 7 plane waves covering a total sector angle of 30° (from -15° to 15°, in steps of 5°) were transmitted at 3 MHz, with a pulse repetition frequency (PRF) of 1 kHz. Radio frequency data backscattered to a depth of 135 mm were acquired at a sampling frequency of 19.5 MHz. Apodization was not employed in either transmit or receive. A subset of the simulation results was verified experimentally in vitro. A phantom made of three point targets and an anechoic region was imaged using the imaging system and pulse sequence described. The mean speed of sound of the phantom was 1450 m/s. The phantom was immersed in a water tank to ensure good acoustic coupling. To induce aberrations, a 20 mm thick paraffin layer was placed between the probes and the phantom. The measured speed of sound of the paraffin was 1300 m/s.

A control experiment was first performed without the paraffin sample. After the control scan, the paraffin sample was placed on the phantom without moving the phantom or the housing. The target was then scanned as before. The paraffin sample was placed directly on the phantom and coupled to the transducers through water. After scanning and removing the paraffin sample, a final control scan was performed to verify the registration of the phantom, the housing and the transducers.

Data processing

Simulated and experimentally obtained radio frequency data were processed in different combinations to study image quality. For single-probe systems, beamforming of the radio frequency data was performed by a delay-and-sum method using conventional coherent plane wave compounding. Beamforming for multiple transducers was performed as described above.
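
Delay-and-sum beamforming of a single plane-wave transmission received by a linear array can be sketched as below. This is a generic, single-probe illustration only (not the multi-transducer beamformer, which additionally uses the optimized probe positions and speed of sound); the array geometry and variable names are assumptions.

```python
import numpy as np

def das_plane_wave(rf, fs, c, pitch, angle, z_grid, x_grid):
    # rf    : (n_samples, n_elements) channel data for one plane-wave transmit
    # fs    : sampling frequency [Hz];  c: assumed speed of sound [m/s]
    # pitch : element pitch [m];        angle: plane-wave steering angle [rad]
    n_samples, n_el = rf.shape
    x_el = (np.arange(n_el) - (n_el - 1) / 2) * pitch        # element x positions
    img = np.zeros((z_grid.size, x_grid.size))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            t_tx = (z * np.cos(angle) + x * np.sin(angle)) / c   # transmit delay
            t_rx = np.sqrt(z ** 2 + (x - x_el) ** 2) / c         # receive delays
            idx = np.round((t_tx + t_rx) * fs).astype(int)
            ok = (idx >= 0) & (idx < n_samples)
            img[iz, ix] = rf[idx[ok], np.nonzero(ok)[0]].sum()   # delay and sum
    return img

# Coherent plane-wave compounding then sums the images beamformed from several
# steering angles before envelope detection and log compression.
```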

For each simulation case, the CMTUS image was generated using the optimal beamforming parameters, calculated by maximizing the cross-correlation of backscattered signals from a common target captured by the various receiving elements as described above. For the simulated RF data, given that the actual position of the arrays in space is known, an additional image (denoted as 2 probes) was beamformed assuming a speed of sound of 1540 m/s and using the true spatial positions of the array elements. Note that in an experimental situation this is not possible, because the actual position of the arrays in space is not known a priori. Finally, the data corresponding to the sequence in which array T1 both transmits and receives (i.e. T1R1, denoted here as 1 probe) were used as a baseline of array performance, to provide a point of comparison with current coherent plane wave compounding methods in both the simulation and experimental scenarios. Note that for all cases except CMTUS, the data were beamformed using an assumed value of the speed of sound (1540 m/s for the simulated data and 1450 m/s for the experimental data).

To make the comparison of the imaging modes as fair as possible in terms of transmitted energy, the CMTUS and 2-probe images were obtained by compounding only 6 different plane waves, while the images of the 1-probe system were generated by compounding the total number of transmitted plane waves (i.e. 7 plane waves from -15° to 15° in steps of 5°). In the same way, the CMTUS and 2-probe images are the result of compounding radio frequency data in which array T1 transmits plane waves at zero and positive angles (0°, 5°, 10°) and array T2 transmits plane waves at zero and negative angles (0°, -5°, -10°). Since the CMTUS optimization is based on a pair of transmissions (one for each array), an even number of transmissions was set. Furthermore, the use of two arrays transmitting at opposite angles helps ensure the performance of CMTUS, since the overlap of the insonated regions is a necessary condition for determining the relative probe positions.

For each generated image, the lateral resolution (LR), contrast, and contrast-to-noise ratio (CNR) were measured to quantify the effect of aperture size and clutter. The LR is calculated from the point spread function (PSF) of the intermediate point-like target. The axial slice for 2D PSF analysis is selected by finding the location of the peak in the elevation dimension of the envelope-detected data. The lateral and axial PSF profiles are taken through the center of the point target and aligned with the primary resolution direction. LR was then evaluated by measuring the width of the PSF at the -6 dB level. Contrast and CNR are measured from the envelope-detected images. The contrast and CNR were calculated as:

Contrast = 20 log10(μi/μo)

where μi and μo are the means of the signal inside and outside the region, respectively. All image metrics are calculated before applying the logarithmic compression.
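
The metrics above can be computed from the envelope-detected image as in the sketch below. The contrast follows the expression given; the CNR expression is not reproduced in the text, so the common definition |μi − μo|/σo is assumed here, and mask handling and names are illustrative.

```python
import numpy as np

def contrast_db(env, inside, outside):
    # Contrast = 20*log10(mu_i / mu_o), on envelope data before log compression.
    return 20.0 * np.log10(env[inside].mean() / env[outside].mean())

def cnr(env, inside, outside):
    # Assumed definition (not given explicitly in the text): |mu_i - mu_o| / sigma_o.
    return abs(env[inside].mean() - env[outside].mean()) / env[outside].std()

def lateral_resolution(profile, dx):
    # -6 dB width of a lateral PSF profile (linear amplitude), in units of dx.
    p = np.asarray(profile, dtype=float)
    above = np.nonzero(p / p.max() >= 10 ** (-6.0 / 20.0))[0]
    return (above[-1] - above[0]) * dx
```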

Results

A. Simulation results

Comparative example: conventional aperture imaging

The sequence corresponding to array T1 in both transmission and reception (i.e. T1R1, 1 probe) provides a baseline of imaging quality through the different scenes.

Figure 17 shows the images produced at a depth of 75 mm without any aberrating layer in the propagation medium. A speed of sound of 1540 m/s was used to reconstruct these images. The lateral resolution of the point target (Fig. 17(b)) was 1.78 mm and the lesion was visible (Fig. 17(c)), with a contrast of -16.78 dB and a CNR of 0.846. Note that while the lesion is easily identified from the background, it is difficult to outline its edges.

Discontinuous effective aperture of CMTUS

Figure 18 shows simulated PSF and lesion images from the same non-aberrating medium for increasing effective aperture and gap of the CMTUS system. It can be seen that the PSF depends on the size of the effective aperture and on the gap between the probes. As expected, the width of the main lobe of the PSF decreases with increasing effective aperture size. However, although the width of the main lobe decreases with the enlarged aperture, the amplitude of the side lobes increases with the corresponding increase in the gap in the aperture, thereby affecting the contrast, as seen in the lesion images. The effect of the side lobes on image quality can be seen in Fig. 18, where for the effective aperture with a 64.1 mm gap the side lobe amplitude rises markedly, approaching the main lobe amplitude and affecting the lesion image.

Fig. 19 compares the corresponding calculated image quality metrics (LR, contrast and CNR) as a function of the obtained effective aperture. The results show that both the main lobe width of the PSF and the lateral resolution decrease with increasing effective aperture. Since an increased effective aperture also implies a larger gap between the probes, contrast and resolution follow opposite trends. In general, CMTUS produces the best lateral resolution in all cases, but shows a drop in contrast at the particular imaging depth of 75 mm compared with the 1-probe system. At the maximum simulated effective aperture, the best resolution is 0.34 mm, while the contrast and CNR are reduced to minima of -15.51 dB and 0.82, respectively. Fig. 19 shows the lateral point spread function at the depth of the peak point intensity and along the main direction extracted from Fig. 18, together with the corresponding quality metrics calculated as a function of the effective aperture of the CMTUS system: lateral resolution (LR) measured at -6 dB from the lateral point spread function, contrast measured on Fig. 18, and contrast-to-noise ratio (CNR).

CMTUS imaging penetration

Figure 20 compares the CMTUS images with those of a 1-probe system at two different imaging depths (100 mm and 155 mm). Image degradation with depth is clearly observed in all cases. However, at greater depths, the 1-probe system shows a greater level of degradation. At the maximum imaging depth shown (155 mm), point targets and lesion can still be identified in the CMTUS image, but not in the 1-probe image.

Fig. 21 summarizes the image metrics calculated as a function of imaging depth. As expected, in both systems all image metrics deteriorate at greater imaging depths. However, the results show that the dependence on imaging depth differs between the 1-probe case and CMTUS. The slope of the LR-depth curve is significantly greater for the 1-probe system than for the CMTUS method, indicating that the smaller the aperture, the faster the resolution degrades with imaging depth. Although the contrast and CNR of both systems appear to be affected in a similar manner at shallower imaging depths (< 100 mm), the loss in the contrast metrics is less pronounced in the CMTUS system at depths greater than 100 mm, where the CMTUS method outperforms the 1-probe system not only in resolution but also in contrast. Thus, the enlarged effective aperture produced by CMTUS increases the sensitivity of the imaging system, especially at larger imaging depths.

CMTUS through aberrating media

FIG. 22 is a comparison of simulated images obtained by the conventional aperture 1-probe system (a-d), the 2-probe system (e-h) and the CMTUS method (i-l) through an aberrating layer of increasing thickness (fat layer thickness of 0 mm, 10 mm, 25 mm and 35 mm). The 1-probe images use 7 plane wave transmissions; the 2-probe images use 6 plane wave transmissions; and the CMTUS images use 6 plane wave transmissions.

Fig. 22 shows simulated images of the control case (propagation medium with soft tissue only) and simulated images obtained through aberrating layers of different thicknesses. Different methods are compared, namely the 1-probe, the 2-probe and the CMTUS methods. It can be seen that the PSF and contrast of the 2-probe images are significantly degraded in the presence of aberrations compared with the control case. This effect is clearly seen on the point target imaged through a 35 mm thick fat layer, where the results show that an enlarged aperture offers no advantage in resolution if aberrations are not corrected. In fact, in the presence of aberrations it is not possible to coherently reconstruct the image using two separate transducers (the 2-probe case).

FIG. 23 shows simulated delayed radio frequency data for a medium with a 35 mm thick fat layer, backscattered from a point-like target. The 4 delayed backscatter echoes from the same point-like target (T1R1, T1R2, T2R1, T2R2) are coherently summed using different beamforming parameters: FIG. 23(a) 2 probes; FIG. 23(b) CMTUS.

Figure 23 shows an example of delayed echoes from a point-like target for the 2-probe and CMTUS cases, corresponding to a propagation medium with a 35 mm thick fat layer. These flattened backscatter echoes are obtained by delaying the 4 backscatter echoes from the same point-like target (T1R1, T1R2, T2R1, T2R2) with the corresponding beamforming parameters. It is worth noting that in the 2-probe case the different echoes are not correctly aligned, so that they interfere when coherently added together. However, after optimizing the beamforming parameters in CMTUS, all echoes are better aligned and can be coherently added together, minimizing the effects of aberrations. A similar effect can be seen in the anechoic lesion. Although differences in the background speckle pattern are observed between the different imaging methods, a higher loss of contrast due to aberrations is only appreciable in the 2-probe images. In contrast, no significant effect of aberrations on imaging quality was found for either the 1-probe system or the CMTUS system. Although both systems can image through the aberrating layer, they show significant differences. CMTUS shows a more detailed image than the 1-probe system: the speckle size is reduced and the different tissue layers are visible only in the CMTUS image.

Fig. 24 is a comparison of the calculated quality metrics between the different imaging methods. Fig. 24 shows the quality metrics, lateral resolution (LR), contrast, and contrast-to-noise ratio (CNR), calculated as a function of the clutter (fat layer) thickness. Three different methods are compared: 1-probe coherent plane wave compounding using 7 plane wave transmissions, 2 probes using 6 plane wave transmissions, and CMTUS using 6 plane wave transmissions. The imaging metrics are shown in relation to the thickness of the fat layer. As expected, in the absence of aberrations the resolution increases with increasing aperture size. In this case, the worst lateral resolution, 1.78 mm, corresponds to the single-probe system (the system with the smallest aperture size), while the 2-probe and CMTUS images both show a similar lateral resolution of 0.40 mm. The trends show that, if the aberrations are not corrected, the imaging metrics related to aperture size do not improve significantly for thicker fat layers. At clutter thicknesses greater than 10 mm, the image quality of the system formed by the 2 transducers without aberration correction (2 probes) is significantly reduced, while the CMTUS imaging metrics are not affected by aberration errors, follow the same trend as the conventional aperture (1 probe), and provide a constant resolution value over clutter thickness without any significant loss of contrast. For the thickest simulated fat layer, the resolution of the 1-probe image and of the CMTUS image were 1.7 mm and 0.35 mm, respectively, whereas in the 2-probe case it was no longer possible to reconstruct the point target to measure resolution. The contrast and CNR also show a similar apparent loss for the 2-probe images, which exhibit a contrast of -10.84 dB and a CNR of 0.69, while these values are significantly better for the 1-probe image (-18.44 dB contrast and 0.87 CNR) and the CMTUS image (-17.41 dB contrast and 0.86 CNR).

Experimental results

Coherent plane wave imaging using conventional aperture imaging (a single probe) provides a reference for image quality with and without the paraffin layer. To reconstruct these images, the reference speed of sound in water of 1496 m/s was used and 7 plane waves were compounded.

Fig. 25 shows experimental images of the control case (a, c) and the paraffin case (b, d). Two different approaches are compared: 1-probe coherent plane wave compounding using 7 plane wave transmissions (a, b), and CMTUS using 6 plane wave transmissions (c, d). Figure 25 shows a comparison of phantom images acquired in the control case and through the paraffin sample using 1 probe and CMTUS. The CMTUS image is reconstructed using the optimal beamforming parameters, including the average speed of sound, and compounding 6 plane waves. All images are shown with the same dynamic range. In both cases, i.e. the 1-probe image and the CMTUS image, the observed variation between the control image and the paraffin image is small, consistent with the simulation results. The optimal beamforming parameters for reconstructing the CMTUS image are {c = 1488.5 m/s; θ2 = 30.04°; r2 = [46.60, 12.33] mm} for the control case, and {c = 1482.6 m/s; θ2 = 30.00°; r2 = [46.70, 12.37] mm} for the paraffin case. All values change only slightly, with the drop in the average speed of sound corresponding to the lower sound propagation velocity of the paraffin.

Fig. 26 shows a comparison of the calculated quality metrics, lateral resolution (LR), contrast, and contrast-to-noise ratio (CNR), measured experimentally for the two acquisition techniques. Two different approaches are compared: 1-probe coherent plane wave compounding using 7 plane wave transmissions and CMTUS using 6 plane wave transmissions. Fig. 26 summarizes the calculated image metrics for the control case and the paraffin case. Little change is observed in any of the imaging metrics. Although minimal image degradation caused by the aberrating layer is observed with CMTUS, the overall image quality is improved compared with the conventional single aperture, and the observed image degradation follows the same trend.

Fig. 27 compares the experimental point target images. The first point target, at a depth of 85 mm, is described using the lateral PSF with and without the paraffin layer. In neither case is a significant effect of aberrations observed in the PSF. The PSFs with and without the paraffin layer are similar in shape and consistent with the shapes observed in the simulations. In general, the CMTUS method results in a PSF with a significantly narrower main lobe but larger amplitude side lobes than the 1-probe conventional imaging system.

Fig. 27 shows the experimental point target images. Column (a) corresponds to the control case and column (b) corresponds to the paraffin case. The first row corresponds to the 1-probe system and the middle row corresponds to CMTUS. The bottom row shows the corresponding lateral point spread functions for the two cases shown: 1-probe system (dashed line) and CMTUS (solid line). The 1-probe images use 7 plane wave transmissions; the CMTUS images use 6 plane wave transmissions.

Fig. 28 shows the coherent summation of the delayed echoes from a point target before and after optimization. The effect of the paraffin layer can be clearly seen. When the beamforming parameters, including the average speed of sound, are optimized by the CMTUS method, all echoes are better aligned, minimizing the aberrating effect of the paraffin. Figure 28 shows experimentally delayed radio frequency data acquired from the phantom with the paraffin sample. The CMTUS flattened backscatter echoes from a point-like target are obtained by coherently adding the 4 delayed backscatter echoes from the same point-like target (T1R1, T1R2, T2R1, T2R2) using different beamforming parameters: (a) initial guess values, (b) optimal values.

Discussion

The significance of imaging using the CMTUS method with two linear arrays is investigated here by simulation and experiment. The analysis shows that the performance of CMTUS depends on the relative position of the arrays, that its sensitivity advantage increases with increasing imaging depth, and that the extended aperture persists in the presence of aberrations. These findings indicate that the extended effective aperture produced by CMTUS may provide advantages in resolution and contrast if the distance between transducers is limited, thereby improving image quality at large imaging depths even with the acoustic clutter imposed by tissue layers of different sound speeds. The contrast advantage is less significant than the improvement achieved in resolution.

Simulation results show that discontinuous effective apertures may reduce contrast when the gap in the aperture is greater than a few centimeters. In probe design, a half wavelength spacing between elements is required to avoid unwanted grating lobes in the array response. Furthermore, previous studies have shown that contrast does not continue to increase uniformly at larger aperture sizes, unlike resolution. However, although contrast may be reduced by significant aperture discontinuities, the main lobe resolution continues to improve at larger effective apertures. Since the detectability of lesions is generally a function of contrast and resolution, the enlargement of the aperture size provides benefits even if contrast is limited. The narrow main lobe allows fine sampling of high resolution targets, thereby improving the visibility of the margins of clinically relevant targets. In addition, the extended aperture may improve attenuation-limited image quality when imaging at greater depths. In those challenging cases, CMTUS showed not only an improvement in resolution but also an improvement in contrast at larger imaging depths.

The results are consistent with the hypothesis that, in the absence of aberrations, the size of the aperture determines the resolution. However, previous work has shown that, although improved resolution is predicted, there are practical limitations to the resolution improvement obtained with larger apertures. Inhomogeneities cause variations in side lobes and focal quality, limiting the improvement in resolution. The resulting degradation is mainly regarded as a change in arrival time, referred to as phase aberration. The outer elements of large transducers are subject to severe phase errors due to the aberrating layer of varying thickness, limiting the advantage that can be obtained from large arrays.

The findings presented here are consistent with these previous studies, and the aperture size is practically limited in the presence of aberrating clutter. Nonetheless, the CMTUS method takes into account the average speed of sound in the medium and shows promise for extending the effective aperture beyond the practical limits imposed by clutter. A more accurate speed of sound estimate will improve beamforming and allow higher order phase aberration correction. However, other challenges presented by aberrations remain.

Both phase aberration and reverberation are the main factors that cause image quality degradation. Although the phase aberration effect is caused by variations in sound speed due to tissue inhomogeneities, reverberation is caused by multiple reflections in the inhomogeneous medium, creating clutter that distorts the wavefront of interest. For fundamental wave imaging, reverberation has proven to be a significant cause of image quality degradation, and a major cause of the superiority of harmonic ultrasound imaging over fundamental wave imaging. It is envisaged that redundancy in large arrays plays a role in averaging multiple realizations of the reverberant signal in order to provide a mechanism for reducing clutter.

While certain choices made in the described experimental design may not translate directly into clinical practice, it will be appreciated that they do not prejudice the conclusions drawn from the above results. For example, the available H6J test equipment favored the selection of frequencies higher than those traditionally used in abdominal imaging (1-2 MHz). Furthermore, although both the simulated and the experimental phantoms are simplified models of real human tissue, they capture the main potential causes of ultrasound image degradation, including attenuation, gross sound-speed error, phase aberration, and reverberation clutter.

Although illustrative embodiments of the present invention have been disclosed in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope of the invention as defined by the appended claims and their equivalents.
