Method and system for coherent compound motion detection using channel coherence and transmit coherence


The invention provides a method and system for coherent compound motion detection using channel coherence and transmit coherence. The present disclosure provides a method for generating an ultrasound image, the method comprising: transmitting, by a plurality of transmitters in a transducer, at least two transmit beams at different angles, wherein at least portions of the transmit beams cover an overlap region; and receiving, by a plurality of sensors of the transducer, reflected signals of the transmit beams. The method further comprises: calculating channel coherence of the received signals to produce one or more channel coherence images; and calculating transmit coherence of the received signals to produce one or more transmit coherence images. Information from at least one of the channel coherence images and at least one of the transmit coherence images is combined to identify a moving object. The received signals from the different transmissions in the overlap region are then processed to produce a final image that is compensated for the moving object.

1. A method for generating an ultrasound image, the method comprising:

transmitting, by a plurality of transmitters in a transducer, at least two transmit beams at different angles, wherein at least portions of the transmit beams cover an overlap region;

receiving, by a plurality of sensors of the transducer, reflected signals of the transmit beams;

calculating a channel coherence of the received signal to produce one or more channel coherence images;

calculating transmit coherence of the received signal to produce one or more transmit coherence images;

combining information from at least one of the channel coherence images and at least one of the transmit coherence images to identify a moving object; and

processing the received signals from different transmissions in the overlap region to produce a final image compensated for the moving object.

2. The method of claim 1, wherein calculating the channel coherence comprises performing one or more of the following operations on the received signal: delaying, weighting, and phase shifting the received signal.

3. The method of claim 1, wherein calculating the channel coherence is performed with a respective received signal at each sensor of the transducer.

4. The method of claim 1, wherein calculating the transmit coherence comprises using scan line data representing the overlapping region covered by the transmit beam.

5. The method of claim 4, wherein the scan lines are generated by one or more of processing, weighting, delaying, phase shifting, and summing the respective receive signals for calculating transmit coherence.

6. The method of claim 1, wherein:

calculating the channel coherence using a coherence factor process comprising abs(SumChannelData/SumAbsChannelData), wherein "abs" is the absolute value operator, SumChannelData is the sum of the channel data in the receive beamforming, and SumAbsChannelData is the sum of the absolute values of the respective channel data taken before the channel data are summed, and

calculating the transmit coherence using a coherence factor process comprising abs(SumTransmitData/SumAbsTransmitData), wherein SumTransmitData is the sum of the data from the different transmissions used for coherent compounding, and SumAbsTransmitData is the sum of the absolute values of the respective transmit data taken before the summing in the coherent compounding.

7. The method of claim 1, wherein identifying the moving object comprises:

generating a difference value (CFdiff) for a first portion P of an image by subtracting a corresponding first portion of the transmit coherence image (CFtx) from a corresponding first portion of the channel coherence image (CFch); and

comparing the difference value with a threshold,

wherein when the difference value is above the threshold, the first portion P of the image is considered part of the moving object and is set to the compensated image Pm, and

wherein when the difference value is not above the threshold, the first portion P of the image is not considered part of the moving object and is set to the uncompensated image P.

8. The method of claim 7, wherein the first portion is a pixel.

9. The method of claim 7, wherein an output first portion Po is generated by the formula: Po = Pm × CFdiff + (1 - CFdiff) × P, where Pm is the compensated first portion and P is the uncompensated first portion.

10. The method of claim 1, wherein the channel coherence is calculated using one of the following processes: a generalized coherence factor, a phase coherence factor, or a sign coherence factor.

11. The method of claim 1, wherein the transmit coherence is calculated using one of the following processes: a generalized coherence factor, a phase coherence factor, or a sign coherence factor.

12. The method of claim 1, wherein generating the one or more transmit coherence images comprises using retrospective transmit beamforming.

13. A system for generating an ultrasound image, the system comprising:

a plurality of transmitters in a transducer, the plurality of transmitters configured to transmit at least two transmit beams at different angles, wherein at least portions of the transmit beams cover an overlap region;

a plurality of sensors of the transducer configured to receive reflected signals of the transmit beams;

a processor configured to:

calculate a channel coherence of the received signal to produce one or more channel coherence images;

calculate transmit coherence of the received signal to produce one or more transmit coherence images;

combine information from at least one of the channel coherence images and at least one of the transmit coherence images to identify a moving object; and

process the received signals from different transmissions in the overlap region to produce a final image compensated for the moving object; and

a display configured to display the final image.

14. The system of claim 13, wherein the processor is configured to calculate the channel coherence by performing one or more of the following operations on the received signal: delaying, weighting, and phase shifting the received signal.

15. The system of claim 13, wherein the processor is configured to calculate the channel coherence using a respective receive signal at each sensor of the transducer.

16. The system of claim 13, wherein the processor is configured to calculate the transmit coherence using scan line data representing the overlapping region covered by the transmit beam, and to generate the scan lines by one or more of processing, weighting, delaying, phase shifting, and summing the respective receive signals for calculating transmit coherence.

17. The system of claim 13, wherein the processor is configured to:

calculating the channel coherence using a coherence factor process comprising abs(SumChannelData/SumAbsChannelData), wherein "abs" is the absolute value operator, SumChannelData is the sum of the channel data in the receive beamforming, and SumAbsChannelData is the sum of the absolute values of the respective channel data taken before the channel data are summed, and

calculating the transmit coherence using a coherence factor process comprising abs(SumTransmitData/SumAbsTransmitData), wherein SumTransmitData is the sum of the data from the different transmissions used for coherent compounding, and SumAbsTransmitData is the sum of the absolute values of the respective transmit data taken before the summing in the coherent compounding.

18. The system of claim 13, wherein the processor is configured to identify the moving object by:

generating a difference value (CFdiff) for a first portion P of an image by subtracting a corresponding first portion of the transmit coherence image (CFtx) from a corresponding first portion of the channel coherence image (CFch); and

comparing the difference value with a threshold,

wherein when the difference value is above the threshold, the first portion P of the image is regarded as part of the moving object and set to the compensated image Pm, and

wherein when the difference value is not above the threshold, the first portion P of the image is not regarded as part of the moving object and is set to the uncompensated image P.

19. The system of claim 18, wherein the processor is configured to generate an output first portion Po using the following equation: Po = Pm × CFdiff + (1 - CFdiff) × P, where Pm is the compensated first portion and P is the uncompensated first portion.

20. The system of claim 13, wherein the processor is configured to generate the one or more transmit coherence images using retrospective transmit beamforming.

Technical Field

Certain embodiments relate to ultrasound imaging. More particularly, certain embodiments relate to methods and systems for providing coherent compound motion detection using channel coherence and transmit coherence.

Background

Ultrasound imaging is a medical imaging technique for imaging organs and soft tissue in the human body. Ultrasound imaging uses real-time, non-invasive high frequency sound waves to produce a series of two-dimensional (2D) images and/or three-dimensional (3D) images.

During ultrasound-based imaging of a patient, the image quality may sometimes be degraded. Considerable effort is therefore required to provide accurate and clear images.

Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with some aspects of the present disclosure as set forth in the remainder of the present application with reference to the drawings.

Disclosure of Invention

A system and/or method for coherent compound motion detection using channel coherence and transmit coherence, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.

These and other advantages, aspects, and novel features of the present disclosure, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.

Drawings

Fig. 1 is a block diagram of an exemplary ultrasound system that may be used to provide coherent compound motion detection using channel coherence and transmit coherence, in accordance with various embodiments.

Fig. 2 is a diagram of an experimental ultrasound contrast imaging setup, according to various embodiments.

Fig. 3 is a display of an exemplary ultrasound image using coherent compounding, in accordance with various embodiments.

Fig. 4 is a display of an exemplary ultrasound image using incoherent compounding, according to various embodiments.

Fig. 5 is a display of an exemplary transmit coherence image, according to various embodiments.

Fig. 6 is a display of an exemplary channel coherence image, in accordance with various embodiments.

Fig. 7 is a display of an exemplary coherence difference image, according to various embodiments.

Fig. 8 is a display of the example coherence difference image of fig. 7 after processing to detect regions with moving tissue, according to various embodiments.

Fig. 9 is a flow diagram illustrating exemplary steps that may be used to provide coherent compound motion detection using channel coherence and transmit coherence, in accordance with various embodiments.

Detailed Description

Certain embodiments may be found in methods and systems that may be used in an exemplary ultrasound system that provides coherent compound motion detection using channel coherence and transmit coherence. Various embodiments have the technical effect of improving an ultrasound image by accurately determining whether pixels in the ultrasound image suffer from phase cancellation due to motion.

The foregoing summary, as well as the following detailed description of certain embodiments, will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware (e.g., a general purpose signal processor or a block of random access memory, hard disk, or the like) or multiple pieces of hardware. Similarly, the programs may be stand alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. It should be understood that the various embodiments are not limited to the arrangements and instrumentality shown in the drawings. It is to be understood that the embodiments may be combined, or that other embodiments may be utilized and that structural, logical, and electrical changes may be made without departing from the scope of the various embodiments. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims and their equivalents.

As used herein, an element or step recited in the singular and proceeded with the word "a" or "an" should be understood as not excluding plural said elements or steps, unless such exclusion is explicitly recited. Furthermore, references to "exemplary embodiments," "various embodiments," "certain embodiments," "representative embodiments," etc., are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Furthermore, unless explicitly stated to the contrary, embodiments "comprising," "including," or "having" an element or a plurality of elements having a particular property may include additional elements not having that property.

Also as used herein, the term "image" broadly refers to both a visual image and data representing a visual image. However, many embodiments generate (or are configured to generate) at least one viewable image. Further, as used herein, the phrase "image" is used to refer to ultrasound modes, such as B-mode (2D mode), M-mode, three-dimensional (3D) mode, CF mode, PW Doppler, CW Doppler, MGD, and/or sub-modes of B-mode and/or CF, such as Shear Wave Elasticity Imaging (SWEI), TVI, Angio, B-flow, BMI_Angio, and in some cases MM, CM, TVD, where "image" and/or "plane" includes a single beam or multiple beams.

Further, as used herein, the term "processor" or "processing unit" refers to any type of processing unit that can perform the computations required by the various embodiments, such as a single- or multi-core CPU, an Accelerated Processing Unit (APU), a graphics board, a DSP, an FPGA, an ASIC, or a combination thereof.

It should be noted that various embodiments of generating or forming images described herein may include processes for forming images that include beamforming in some embodiments, and do not include beamforming in other embodiments. For example, the image may be formed without beamforming, such as by multiplying a matrix of demodulated data by a matrix of coefficients, such that the product is an image, and wherein the process does not form any "beams. In addition, the formation of an image may be performed using a combination of channels (e.g., synthetic aperture techniques) that may result from more than one transmit event.

In various embodiments, the processing to form images is performed in software, firmware, hardware, or a combination thereof, including, for example, ultrasound beamforming, such as receive beamforming. One specific implementation of an ultrasound system having a software beamformer architecture formed in accordance with various embodiments is shown in fig. 1.

Fig. 1 is a block diagram of an exemplary ultrasound system that may be used to provide coherent compound motion detection using channel coherence and transmit coherence, in accordance with various embodiments. Referring to fig. 1, a block diagram of an exemplary ultrasound system 100 is shown. The ultrasound system 100 includes a transmitter 102, an ultrasound probe 104, a transmit beamformer 110, a receiver 118, a receive beamformer 120, a plurality of A/D converters 122, an RF processor 124, an RF/IQ buffer 126, a user input device 130, a signal processor 132, an image buffer 136, a display system 134, and an archive 138.

The transmitter 102 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to drive the ultrasound probe 104. The ultrasound probe 104 may include a two-dimensional (2D) array of piezoelectric elements. The ultrasound probe 104 may include a set of transmit transducer elements 106 and a set of receive transducer elements 108 that generally constitute the same elements. In certain embodiments, the ultrasound probe 104 is operable to acquire ultrasound image data covering at least a substantial portion of an anatomical structure, such as a heart, a blood vessel, or any suitable anatomical structure.

The transmit beamformer 110 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to control the transmitter 102 to drive the set of transmit transducer elements 106 through the transmit sub-aperture beamformer 114 to transmit ultrasonic transmit signals into a region of interest (e.g., a human, an animal, a subsurface cavity, a physical structure, etc.). The transmitted ultrasound signals may be backscattered from structures in the object of interest, such as blood cells or tissue, to generate echoes. The echoes are received by the receiving transducer elements 108.

The set of receive transducer elements 108 in the ultrasound probe 104 may be used to convert the received echoes into analog signals, which are sub-aperture beamformed by a receive sub-aperture beamformer 116 and then communicated to a receiver 118. The receiver 118 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to receive signals from the receive sub-aperture beamformer 116. The analog signals may be communicated to one or more of the plurality of A/D converters 122.

The plurality of A/D converters 122 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to convert the analog signals from the receiver 118 into corresponding digital signals. The plurality of A/D converters 122 is disposed between the receiver 118 and the RF processor 124. Notwithstanding, the present disclosure is not limited in this regard. Accordingly, in some embodiments, the plurality of A/D converters 122 may be integrated within the receiver 118.

The RF processor 124 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to demodulate the digital signals output by the plurality of A/D converters 122. According to one embodiment, the RF processor 124 may include a complex demodulator (not shown) that may be used to demodulate the digital signals to form I/Q data pairs representative of the corresponding echo signals. The RF data (which may be, for example, I/Q signal data, real-valued RF data, etc.) may then be passed to an RF/IQ buffer 126. The RF/IQ buffer 126 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to provide temporary storage of the RF or I/Q signal data generated by the RF processor 124.

Thus, in various embodiments, the RF processor 124 may process real-valued RF data or any other equivalent representation of the data, with an appropriate RF/IQ buffer 126.

The receive beamformer 120 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to perform digital beamforming processing, for example to sum delayed, phase shifted, and/or weighted channel signals received from the RF processor 124 via the RF/IQ buffer 126 and output a beamformed signal. The delayed and/or phase shifted and weighted channel data may be summed to form a scan line output from the receive beamformer 120, where the scan line may be, for example, complex or non-complex. The particular delays for the channels may be provided, for example, by the RF processor 124 or any other processor configured to perform this task. The resulting processed information may be the beam-summed signal that is output from the receive beamformer 120 and passed to the signal processor 132. According to some embodiments, the receiver 118, the plurality of A/D converters 122, the RF processor 124, and the receive beamformer 120 may be integrated into a single beamformer, which may be digital. In various embodiments, the ultrasound system 100 includes a plurality of receive beamformers 120.
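By way of non-limiting illustration, the delay-and-sum operation described above may be sketched as follows in Python/NumPy. This is a minimal sketch, not the beamformer of the ultrasound system 100: the function and variable names are assumptions introduced here, the per-channel delays are held constant over depth for brevity (in practice they vary with range for dynamic focusing), and the phase correction for complex baseband data is reduced to a single per-channel term.

    import numpy as np

    def delay_and_sum(channel_data, delays, phases, weights, fs):
        """Minimal delay-and-sum sketch: align, weight, and sum channel data.

        channel_data : (n_channels, n_samples) complex baseband samples
        delays       : (n_channels,) focusing delays in seconds (assumed
                       constant over depth here, for brevity)
        phases       : (n_channels,) phase corrections in radians
        weights      : (n_channels,) apodization weights
        fs           : sampling rate in Hz
        Returns one complex-valued scan line of length n_samples.
        """
        n_channels, n_samples = channel_data.shape
        t = np.arange(n_samples)
        scanline = np.zeros(n_samples, dtype=complex)
        for ch in range(n_channels):
            src = t - delays[ch] * fs  # delayed (fractional) sample positions
            # Linear interpolation of real and imaginary parts separately.
            aligned = (np.interp(src, t, channel_data[ch].real)
                       + 1j * np.interp(src, t, channel_data[ch].imag))
            scanline += weights[ch] * aligned * np.exp(1j * phases[ch])
        return scanline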

The user input device 130 may be used to enter patient data, scan parameters, settings, select protocols and/or templates, etc. In an exemplary embodiment, the user input device 130 is operable to configure, manage and/or control the operation of one or more components and/or modules in the ultrasound system 100. In this regard, the user input device 130 may be used to configure, manage and/or control the operation of the transmitter 102, ultrasound probe 104, transmit beamformer 110, receiver 118, receive beamformer 120, RF processor 124, RF/IQ buffer 126, user input device 130, signal processor 132, image buffer 136, display system 134 and/or archive 138. User input device 130 may include one or more buttons, one or more rotary encoders, a touch screen, motion tracking, voice recognition, a mouse device, a keyboard, a camera, and/or any other device capable of receiving user instructions. In certain embodiments, for example, one or more of the user input devices 130 may be integrated into other components (such as the display system 134 or the ultrasound probe 104). For example, the user input device 130 may include a touch screen display.

The signal processor 132 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to process ultrasound scan data (i.e., summed IQ signals) to generate an ultrasound image for presentation on the display system 134. The signal processor 132 is operable to perform one or more processing operations according to a plurality of selectable ultrasound modalities on the acquired ultrasound scan data. In an exemplary embodiment, the signal processor 132 may be used to perform display processing and/or control processing, and the like. As echo signals are received, the acquired ultrasound scan data may be processed in real time during a scanning session. Additionally or alternatively, the ultrasound scan data may be stored temporarily in the RF/IQ buffer 126 during a scanning session and processed in a less real-time manner in online or offline operation. In various implementations, the processed image data may be presented at the display system 134 and/or stored at the archive 138. The archive 138 may be a local archive, a Picture Archiving and Communication System (PACS), or any suitable device for storing images and related information.

The signal processor 132 may be one or more central processing units, microprocessors, microcontrollers, or the like. The signal processor 132 may be an integrated component, for example, or may be distributed across various locations. In an exemplary embodiment, the signal processor 132 may be capable of receiving input information from the user input device 130 and/or the archive 138, generating output that may be displayed by the display system 134, and manipulating the output in response to input information from the user input device 130, among other things. The signal processor 132 may be capable of performing, for example, any of the methods and/or sets of instructions discussed herein in accordance with the various embodiments.

The ultrasound system 100 may be used to continuously acquire ultrasound scan data at a frame rate appropriate for the imaging situation in question. Typical frame rates are in the range of 20-120 frames per second, but rates may be lower or higher. The acquired ultrasound scan data may be displayed on the display system 134 at the same frame rate, or at a slower or faster display rate. An image buffer 136 is included for storing processed frames of acquired ultrasound scan data that are not scheduled for immediate display. Preferably, the image buffer 136 has sufficient capacity to store at least several minutes' worth of frames of ultrasound scan data. The frames of ultrasound scan data are stored in a manner that facilitates retrieval according to their order or time of acquisition. The image buffer 136 may be embodied as any known data storage medium.

Display system 134 may be any device capable of communicating visual information to a user. For example, the display system 134 may include a liquid crystal display, a light emitting diode display, and/or any suitable display or displays. The display system 134 may be operable to present ultrasound images and/or any suitable information. For example, the ultrasound images presented at display system 134 may include markers, tracking identifiers, and/or any suitable information.

The archive 138 may be one or more computer-readable memories integrated with the ultrasound system 100 and/or communicatively coupled (e.g., over a network) to the ultrasound system 100, such as a Picture Archiving and Communication System (PACS), a server, a hard disk, a floppy disk, a CD-ROM, a DVD, a compact memory, a flash memory, a random access memory, a read-only memory, an electrically erasable and programmable read-only memory, and/or any suitable memory. The archive 138 may include, for example, a database, library, information set, or other memory accessed by the signal processor 132 and/or incorporated into the signal processor 132. For example, the archive 138 can store data temporarily or permanently. The archive 138 may be capable of storing medical image data, data generated by the signal processor 132, and/or instructions readable by the signal processor 132, among others. In various embodiments, archive 138 stores, for example, ultrasound image data, labeled ultrasound images, identification instructions, segmentation instructions, labeling instructions, and tracking instructions.

The components of the ultrasound system 100 may be implemented in software, hardware, firmware, etc. The various components of the ultrasound system 100 may be communicatively connected. The components of the ultrasound system 100 may be implemented separately and/or integrated in various forms. For example, the display system 134 and the user input device 130 may be integrated as a touch screen display. Furthermore, although the ultrasound system 100 is described as including the RF processor 124 and the signal processor 132, various embodiments of the present disclosure may use only one processor. Various implementations may refer to each of the RF processor 124 and the signal processor 132 as a processor. In addition, there may be other processors to additionally perform the tasks described as being performed by the RF processor 124 and the signal processor 132, and all of these processors may be referred to as "processors" for convenience of description.

Fig. 2 is a diagram of an experimental ultrasound contrast imaging setup, according to various embodiments. Referring to fig. 2, an ultrasound probe 200 is shown, which may be similar to the ultrasound probe 104 of the ultrasound system 100. The ultrasound probe 200 may have an imaging portion 210 in which there is a tube 203 with flowing contrast agent 201. Within the imaging portion there are also a number of medium intensity fixed scatterers (not shown) and a strong fixed point scatterer 202.

The channel coherence image may be generated, for example, in the receive beamformer 120 (fig. 1). Depending on how coherent compounding is implemented, a transmit coherence image may be generated, for example, in the signal processor 132 (fig. 1) or the receive beamformer 120. However, as graphics processing and software beamforming techniques evolve, the boundary between beamforming and signal processing becomes increasingly blurred.

For coherent compounding, it may also be assumed that the scan line data is compounded after channel summation in the beamforming process, but an embodiment is also possible where coherent compounding is performed at the channel data level. The exemplary embodiment also assumes the use of coherent compounding of focused transmissions, but any other form of coherent compounding may be used.

Channel coherence and transmit coherence methods can be used to provide better ultrasound images. After the classical beamforming delays have been applied, a channel coherence method measures the alignment of the signals across all or part of the channels. In general, these methods may partially or completely ignore the amplitude of the incoming echo on each channel. In particular, these methods may output very low values for echoes arriving off-axis from the intended beamforming direction, allowing them to effectively suppress side lobe artifacts (referred to as "side lobes" for short) in ultrasound images. The output image from applying a coherence method can then be used alone, or can be combined with images from other types of beamforming, e.g., delay-and-sum.

The transmit coherence method may be similar to the channel coherence method. For example, a transmit coherence method may be applied when a coherent compounding method is used. Coherent compounding methods combine data from multiple transmissions during image reconstruction. The difference from the channel coherence approach is that the transmit coherence approach measures the alignment of data from different transmit events, rather than the alignment between data on different channels from the same transmit event. The type of data whose alignment is measured is typically scan line data (delayed channel data summed over some or all of the channels), but may also be channel data.

When viewed at a high level, with the channel coherence and transmit coherence quantized to, for example, high or low, a reconstructed pixel in an ultrasound image may represent one of four possible cases. The first case is that the channel coherence is low and the transmit coherence is low. This may occur, for example, when the data used to reconstruct the pixel is noise or comes from off-axis scatterers (side lobe data). The second case is that the channel coherence is low and the transmit coherence is high. This is unlikely, but may occur, for example, when the data used to reconstruct the pixel comes from off-axis scatterers (side lobe data) and the motion of the scatterers cancels out the phase shift between the combined transmissions caused by the off-axis scatterers.

The third case is that the channel coherence is high and the transmit coherence is low. Generally, this may occur when the scatterers imaged in the reconstructed pixel move. This motion may cause the coherently combined transmissions to be out of phase and degrade the quality of the coherent compounding. Compensating for motion when reconstructing the pixel (or region) may result in a better final ultrasound image. The fourth case is that the channel coherence is high and the transmit coherence is high. This indicates good-quality echo data from stationary scatterers.

The third case is the one in which the joint use of channel coherence and transmit coherence is of interest. It is known from the literature that when coherent compounding is performed in pixels where the data contains a large amount of signal from off-axis scatterers (side lobe data), the data from the different combined transmissions will be phase shifted relative to each other. If that phase shift is compensated for before summing the transmit data, side lobes are reconstructed/amplified, which is undesirable. From the phase shift alone, it is therefore not possible to determine whether it stems from motion or from strong off-axis scatterers. However, high channel coherence indicates that there are no strong off-axis scatterers, and therefore low transmit coherence in such a pixel is likely to be motion-induced.

Thus, knowing that a pixel images a moving scatterer, phase and/or delay compensation can be applied to realign the transmit data before summing them, thereby obtaining a better final ultrasound image. As previously described, although various types of data may be used, for ease of reference it is assumed that complex baseband data is used. An example of phase compensation is then to estimate the average phase shift ThetaAvg between the data from adjacent transmissions used to reconstruct a pixel, and then to rotate the data of each transmission back by TransmitIndex × ThetaAvg before summing the transmit data. The phase may be estimated by calculating the phase of the correlation between the complex-valued data from the transmissions used to reconstruct the pixel. Various embodiments of the present disclosure for motion compensation in the third case are described with respect to figs. 3-9.
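By way of non-limiting illustration, the phase compensation described above may be sketched as follows, assuming complex baseband data for a single pixel, one sample per combined transmission. The function name and data layout are assumptions introduced here; the lag-one correlation estimator is one realization of the correlation-phase estimate described above.

    import numpy as np

    def motion_compensated_sum(transmit_data):
        """Phase-align data from the transmissions contributing to one pixel.

        transmit_data : (n_transmits,) complex baseband samples for one
                        pixel, one entry per combined transmit event.
        Returns the coherent sum after removing the estimated linear
        phase progression (ThetaAvg per transmit) caused by motion.
        """
        # Average phase shift between adjacent transmissions, estimated
        # from the lag-one correlation of the complex data.
        corr = np.sum(transmit_data[1:] * np.conj(transmit_data[:-1]))
        theta_avg = np.angle(corr)
        # Rotate transmit k back by k * theta_avg, then sum coherently.
        idx = np.arange(transmit_data.size)
        return np.sum(transmit_data * np.exp(-1j * theta_avg * idx))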

Fig. 3 is a display of an exemplary ultrasound image using coherent compounding, in accordance with various embodiments. Referring to fig. 3, an ultrasound image 300 is shown: the resulting image after, for example, performing a Retrospective Transmit Beamforming (RTB) process on the received echo data, i.e., an image generated using coherent compounding of data from several focused transmit beams. The imaged scene has the arrangement shown in fig. 2. A contrast agent 301 and a fixed point scatterer 302 are shown. Also shown is a medium intensity scatterer region 304, which may be considered a uniform speckle region generally outside of the tube 203, rather than a bright point scatterer such as the one in the lower left portion of fig. 3.

In the case of moving tissue (e.g., contrast agent 301), the pixel intensity may be reduced compared to the intensity that would result if the contrast agent were stationary. This is because the motion is likely to cause the data from the different transmissions combined in coherent compounding to be out of phase. However, it may not be possible to determine from the ultrasound image 300 alone whether a pixel (or region) shows lower intensity due to, for example, phase cancellation (due to motion), or whether the intensity is naturally lower because the imaged scatterers are weak (reflecting/scattering less ultrasound energy back to the probe).

Fig. 4 is a display of an exemplary ultrasound image using incoherent compounding, according to various embodiments. Referring to fig. 4, an ultrasound image 400 is shown: the resulting image after, for example, performing Incoherent Retrospective Transmit Beamforming (iRTB) processing on the received echo data, i.e., an image generated using incoherent compounding of data from several focused transmit beams. Incoherent compounding reconstructs pixels by combining the absolute values of the data from the transmissions rather than the complex-valued data; coherent compounding combines the complex data. Also shown are a contrast agent 401 and a fixed point scatterer 402.

By using absolute values, potentially destructive interference when combining data from multiple transmissions is avoided. Thus, motion may not matter, as the intensity of a pixel may be the same whether or not the tissue is moving.
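By way of non-limiting illustration, the difference between the two compounding schemes may be sketched as follows (names and data layout are assumptions introduced here): coherent compounding sums the complex transmit data, so out-of-phase contributions can cancel, while incoherent compounding sums magnitudes and is therefore insensitive to motion-induced phase shifts.

    import numpy as np

    def coherent_compound(transmit_data):
        # Complex-valued sum over the transmit axis; motion-induced phase
        # shifts between transmissions can cancel destructively here.
        return np.abs(np.sum(transmit_data, axis=0))

    def incoherent_compound(transmit_data):
        # Sum of magnitudes over the transmit axis; immune to phase
        # cancellation, but side lobes are not suppressed by interference.
        return np.sum(np.abs(transmit_data), axis=0)

    # transmit_data: (n_transmits, ...) complex scan line data per pixel.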

The ultrasound image 400 may not be affected by motion artifacts; a disadvantage, however, is that the side lobes are too large. This can be seen by comparing the pixel intensities in the regions to the left/right of the point scatterers 302 and 402 in the coherent RTB image in fig. 3 and the incoherent RTB image in fig. 4, respectively. The intensity of the regions to the sides of the scatterer 402 should be as low as possible. Thus, it can be seen that coherent RTB is better than incoherent RTB in this regard.

Although one embodiment is described as using complex-valued data (I/Q data), various embodiments may use real-valued RF data or any other equivalent representation of that data.

Fig. 5 is a display of an exemplary transmit coherence image, according to various embodiments. Referring to fig. 5, a transmit coherence image 500 (CFtx) is shown, generated after the appropriate delays have been applied to all transmit data to be summed, but before the actual summation takes place. Also shown are a contrast agent 501 and a fixed point scatterer 502.

The coherence of the data from the summed transmissions can be checked by using, for example, a Coherence Factor (CF). For example, a transmit coherence value may be calculated for each pixel in the image as abs(SumTransmitData/SumAbsTransmitData), where the sums run over all transmissions contributing to the pixel. It should be noted that "abs" is the absolute value operator, SumTransmitData is the sum of the data from the different transmissions used for coherent compounding, and SumAbsTransmitData is the sum of the absolute values of the respective transmit data, taken before the summation in coherent compounding is performed. Thus, dividing a coherent RTB image by an incoherent RTB image can be used to generate a transmit coherence image (CFtx), and this can be referred to as a coherence factor process. Additionally, other coherence factors (e.g., a generalized coherence factor, a phase coherence factor, a sign coherence factor, etc.) may be used in determining channel/transmit coherence. These processes are not described here, as they are well known.
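By way of non-limiting illustration, this coherence factor computation may be sketched as follows. The same function yields a channel coherence image when applied along the channel axis of delayed channel data, and a transmit coherence image when applied along the transmit axis of aligned scan line data. The epsilon guard against division by zero and all names are assumptions added here.

    import numpy as np

    def coherence_factor(data, axis=0, eps=1e-12):
        """Coherence factor CF = |sum(data)| / sum(|data|), in [0, 1].

        Applied along the channel axis of delayed channel data this gives
        a channel coherence image (CFch); applied along the transmit axis
        of aligned scan line data it gives a transmit coherence image
        (CFtx). A value of 1 indicates perfectly aligned data.
        """
        num = np.abs(np.sum(data, axis=axis))
        den = np.sum(np.abs(data), axis=axis) + eps  # avoid divide-by-zero
        return num / den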

The resulting transmit coherence image (CFtx) is shown in fig. 5. It can be seen that the pixel intensity is lower in the region with flowing contrast agent 501. However, this is also the case for the side lobe regions flanking the point scatterer 502.

It therefore remains difficult to distinguish the low intensities caused by side lobes from the low intensities caused by motion.

Fig. 6 is a display of an exemplary channel coherence image, in accordance with various embodiments. Referring to fig. 6, a channel coherence image 600 (CFch) is shown, illustrating the coherence of the channel data. That is, the channel coherence image (CFch) shows how well the data align across all channels after the beamforming delays are applied and before the channels are summed during beamforming. Also shown are a contrast agent 601 and a fixed point scatterer 602.

The channel coherence image 600 may be generated in a similar manner to the transmit coherence image. For example, the channel coherence value for a pixel may be calculated as abs(SumChannelData/SumAbsChannelData), where the sums run over all channels contributing to the reconstructed pixel. Note that, as described above, "abs" is the absolute value operator, SumChannelData is the sum of the channel data in receive beamforming, and SumAbsChannelData is the sum of the absolute values of the respective channel data, taken before the channel data are summed. The channel coherence image 600 is not affected by motion, since all of the data for this coherence factor image comes from a single transmission rather than from different transmissions; this image alone therefore cannot detect motion. However, the channel coherence image (CFch) takes lower values in areas affected by side lobes, as can be seen in fig. 6. Although there may be other causes of low channel coherence, side lobes are the most likely cause.

It should be noted that various embodiments of the present disclosure may use different coherence measures, such as a generalized coherence factor, a phase coherence factor, a sign coherence factor, and so forth. Thus, various embodiments may use different methods to generate the channel coherence image and the transmit coherence image, for example a first method for the channel coherence image and a second method for the transmit coherence image.

Fig. 7 is a display of an exemplary coherence difference image, according to various embodiments. Referring to fig. 7, an ultrasound image 700 is shown that illustrates the difference between the channel coherence image (CFch) and the transmit coherence image (CFtx). Also shown are a contrast agent 701 and a fixed point scatterer 702.

One way to determine the motion region may be to subtract the transmit coherence image (CFtx) from the channel coherence image (CFch):

CFdiff = CFch - CFtx (Equation 1)

The CFdiff value (difference) may then be compared to a threshold, where pixels with CFdiff values above the threshold are considered to contain motion; in other words, such a pixel is considered part of a moving object. This may be done on a pixel-by-pixel basis or for any other grouping of pixels, for example.

Other embodiments may use, for example, a ratio between the transmit coherence image and the channel coherence image.

Thus, as can be seen from fig. 7, pixels with moving tissue (contrast agent 701) have a higher intensity than pixels at the sides of the point scatterer 702. Pixels within the tube 203 of flowing contrast agent 701 are therefore an example of case 3, while pixels to the sides of the point scatterer 702 are an example of case 1, since side lobes typically have low channel coherence and low transmit coherence, per the four cases classified above.

Fig. 8 is a display of the example coherence difference image of fig. 7 after processing to detect regions with moving tissue, according to various embodiments. Referring to fig. 8, an ultrasound image 800 is shown, showing a thresholded coherence difference image. Also shown are a contrast agent 801 and a fixed point scatterer 802.

To generate the thresholded difference image, CFdiff may be compared to a threshold value, such that pixels (or regions) with CFdiff values above the threshold are considered moving tissue. For example, in an exemplary embodiment of the present disclosure, the threshold value may be zero (or some other value). After identifying the pixels with moving tissue, phase and/or delay errors caused by motion can be corrected in only those pixels.

Various implementations may also use soft weighting of the corrected image, where pixels are weighted by, for example, floating point numbers. As an example, a pixel (or region) may have two values: a motion-compensated value Pm and an uncompensated value P, where P may be the initial value. The output pixel (or region) value Po can then be written as:

Po = Pm × CFdiff + (1 - CFdiff) × P (Equation 2)

Pm may also be, for example, an incoherent RTB image. The incoherent RTB image is then weighted according to the likelihood of moving tissue given by CFdiff. In this way, the motion resilience of incoherent RTB can be obtained while high side lobes are avoided.

Hard weighting may also be used, in which either the motion-compensated pixel value Pm or the uncompensated pixel value P is selected outright.
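By way of non-limiting illustration, Equation 1, the threshold test, and the soft and hard weighting options may be sketched together as follows. The clipping of CFdiff to [0, 1] is an assumption added here so that the soft weights of Equation 2 stay in range; all names are illustrative.

    import numpy as np

    def blend_compensated(P, Pm, CFch, CFtx, threshold=0.0, soft=True):
        """Combine uncompensated (P) and motion-compensated (Pm) images
        using the coherence difference CFdiff = CFch - CFtx.

        soft=True  : Po = Pm*CFdiff + (1 - CFdiff)*P   (Equation 2)
        soft=False : hard selection of Pm where CFdiff exceeds threshold
        """
        # Equation 1, clipped to [0, 1] so soft weights stay in range
        # (the clipping is an assumption added for this sketch).
        CFdiff = np.clip(CFch - CFtx, 0.0, 1.0)
        if soft:
            return Pm * CFdiff + (1.0 - CFdiff) * P
        moving = CFdiff > threshold
        return np.where(moving, Pm, P)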

As can be seen in fig. 8, the pixels with moving tissue (contrast agent 801) have a higher intensity than the pixels at the sides of the point scatterer 802. The loss of intensity in coherent compounding can thus be further compensated according to the motion in the indicated regions, for example by applying phase compensation in these pixels, or by using incoherent summation or the like in these pixels.

For example, a soft mask using soft weighting may be used to weight the compensated pixels based on the CFdiff intensity.

Fig. 9 is a flow diagram illustrating exemplary steps that may be used to provide coherent compound motion detection using channel coherence and transmit coherence, in accordance with various embodiments. Referring to fig. 9, a flow chart 900 having blocks 902 through 916 is shown.

In block 902, the channel data for each transmit event is delayed and/or phase shifted and weighted by, for example, the RF processor 124 and/or the receive beamformer 120. Classical delay-and-sum (DAS) and Retrospective Transmit Beamforming (RTB) delays (or those of any other coherent combining technique) may be applied. DAS delays/phases are applied to steer the scan lines in the appropriate direction for each transmission, so that RTB can be performed later in the process. This may mean that the scan lines are steered in the same direction for all transmissions to be combined in that scan line direction. DAS delays/weights/phases may vary over channel and range. The RTB delay/phase/weighting may remain constant across all channels, but differ over range.

In block 904, a channel coherence image (CFch) may be generated by, for example, the receive beamformer 120. This can be done, for example, by calculating abs(SumChannelData/SumAbsChannelData), where the sums run over all channels contributing to the reconstructed pixel along the scan line.

In block 906, the delayed and/or phase shifted and weighted channel data are summed to form a complex valued scanline output from the receive beamformer 120.

In block 908, a transmit coherence image CFtx is calculated, for example, by the signal processor 132. This may be done, for example, by calculating abs(SumTransmitData/SumAbsTransmitData) as described above, where TransmitData is the scan line data from all the transmissions that are to be combined to create an output pixel.

In block 910, the weighted, delayed, and/or phase shifted scan line data from all the transmissions to be combined to create each pixel are summed to form a complex-valued coherent RTB image. This may be done, for example, by the signal processor 132.

In block 912, a difference image CFdiff between the channel coherence image and the transmit coherence image is generated, for example, by the signal processor 132. This may be generated, for example, as CFdiff = CFch - CFtx.

In block 914, a compensated RTB image is generated, for example, by the signal processor 132. This may be, for example, an image in which the phase shift between all combined scan lines is calculated and compensated; this compensates for motion artifacts but also reconstructs side lobes. Alternatively, it may be an incoherent RTB image in which the absolute values of the scan lines are combined rather than the complex values; such an image is not affected by motion but has high side lobes.

In block 916, the compensated RTB image is mixed with the (uncompensated) coherent RTB image, for example, by the signal processor 132. The value of CFdiff is used to determine which pixels are affected by motion, so that for those pixels the values from the compensated RTB image are used instead of the values from the uncompensated image. Another method may use soft weighting of the compensated RTB image, as in Equation 2:

Po = Pm × CFdiff + (1 - CFdiff) × P,

where Po is the output image, Pm is the compensated image, and P is the uncompensated image.

Other methods may also be used to determine Po. For example, in addition to calculating a completely uncompensated image and a completely compensated image, compensation pixels may also be calculated where compensation pixels are needed. Thus, the compensation may be for one or more pixels, regions in the image, or the entire image.
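By way of non-limiting illustration, blocks 902 through 916 may be sketched end-to-end as follows for pre-aligned data. Everything in the sketch is an assumption layered on the flow chart: the channel data are taken to be already delayed, phase shifted, and weighted per block 902; the channel coherence image is averaged over transmissions (one of several possible choices); and block 914 reuses the lag-one correlation phase alignment sketched earlier.

    import numpy as np

    def rtb_motion_compensated_image(channel_data):
        """End-to-end sketch of blocks 902-916 for pre-aligned data.

        channel_data : (n_transmits, n_channels, n_pixels) complex
                       baseband data, assumed already delayed, phase
                       shifted, and weighted per block 902 so that
                       channels and transmissions align per pixel.
        """
        eps = 1e-12

        # Block 904: channel coherence per transmission, averaged over
        # transmissions to form one CFch image (an assumed choice).
        cf_ch = np.mean(
            np.abs(channel_data.sum(axis=1))
            / (np.abs(channel_data).sum(axis=1) + eps),
            axis=0)

        # Block 906: sum channels -> one complex scan line sample per
        # transmission and pixel.
        scanlines = channel_data.sum(axis=1)      # (n_transmits, n_pixels)

        # Block 908: transmit coherence image across transmissions.
        cf_tx = (np.abs(scanlines.sum(axis=0))
                 / (np.abs(scanlines).sum(axis=0) + eps))

        # Block 910: coherent RTB image (uncompensated magnitude).
        P = np.abs(scanlines.sum(axis=0))

        # Block 912: coherence difference image (Equation 1, clipped).
        cf_diff = np.clip(cf_ch - cf_tx, 0.0, 1.0)

        # Block 914: compensated image via the lag-one correlation phase
        # alignment sketched earlier, applied per pixel.
        corr = np.sum(scanlines[1:] * np.conj(scanlines[:-1]), axis=0)
        theta = np.angle(corr)                    # (n_pixels,)
        idx = np.arange(scanlines.shape[0])[:, None]
        Pm = np.abs(np.sum(
            scanlines * np.exp(-1j * theta[None, :] * idx), axis=0))

        # Block 916: soft blend per Equation 2.
        return Pm * cf_diff + (1.0 - cf_diff) * P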

As can be seen, the present disclosure provides a method for generating an ultrasound image, the method comprising: transmitting, by a plurality of transmitters in a transducer, at least two transmit beams at different angles, wherein at least portions of the transmit beams cover an overlap region; and receiving reflected signals of the transmit beams by a plurality of sensors of the transducer.

Channel coherence of the received signal may be calculated for generating one or more channel coherence images, and transmit coherence of the received signal may be calculated for generating one or more transmit coherence images. Information from at least one of the channel coherence images and at least one of the transmit coherence images can be combined to identify moving objects. The received signals from the different transmissions in the overlapping region may be processed to produce a final image that is compensated for moving objects.

Calculating channel coherence may include performing one or more of the following operations on the received signal: delaying, weighting, and phase shifting the received signal. Calculating channel coherence may be performed using the respective received signals at each sensor of the transducer. Calculating transmit coherence may include using scan line data representing the overlap region covered by the transmit beams, wherein the scan lines may be generated by one or more of processing, weighting, delaying, phase shifting, and summing the respective receive signals used for calculating transmit coherence.

The method further includes calculating channel coherence using a coherence factor process that includes abs(SumChannelData/SumAbsChannelData), where "abs" is the absolute value operator, SumChannelData is the sum of the channel data in the receive beamforming, and SumAbsChannelData is the sum of the absolute values of the respective channel data before the channel data are summed. The method also provides for calculating the transmit coherence using a coherence factor process that includes abs(SumTransmitData/SumAbsTransmitData), where SumTransmitData is the sum of the data from the different transmissions used for coherent compounding, and SumAbsTransmitData is the sum of the absolute values of the respective transmit data before the summation in coherent compounding.

Identifying the moving object includes generating a difference value (CFdiff) for a first portion P of the image by subtracting the corresponding first portion of the transmit coherence image (CFtx) from the corresponding first portion of the channel coherence image (CFch), and comparing the difference value (CFdiff) to a threshold value. When the difference is above the threshold, the first portion P of the image is regarded as part of the moving object and set to the compensated image Pm. When the difference is not above the threshold, the first portion P of the image is not regarded as part of the moving object and is set to the uncompensated image P. The first portion may be, for example, a pixel or a group of pixels.

For example, an output first portion Po may be generated using the following formula: Po = Pm × CFdiff + (1 - CFdiff) × P, where Pm is the compensated first portion and P is the uncompensated first portion.

The channel coherence is calculated using one of the following processes: a generalized coherence factor, a phase coherence factor, or a sign coherence factor. The transmit coherence is likewise calculated using one of these processes. Generating the one or more transmit coherence images can include using retrospective transmit beamforming.

The present disclosure may also provide a system 100 for generating ultrasound images, where the system 100 may include a plurality of transmitters (transmit transducer elements) 106 in a transducer (probe) 104 configured to transmit at least two transmit beams at different angles, where at least portions of the transmit beams cover an overlap region. The system 100 may also include a plurality of sensors (receive transducer elements) 108 of the transducer configured to receive reflected signals of the transmit beams. The system 100 can include one or more processors 124/120/132 configured to calculate channel coherence of the received signals to produce one or more channel coherence images and to calculate transmit coherence of the received signals to produce one or more transmit coherence images. The one or more processors 124/120/132 may be configured to combine information from at least one of the channel coherence images and at least one of the transmit coherence images to identify a moving object, and to process the received signals from different transmissions in the overlap region to produce a final image that is compensated for the moving object. The system 100 may also include a display 134 configured to display the final image.

The one or more processors 124/120/132 may be configured to calculate channel coherence by performing one or more of the following operations on the received signal: delaying, weighting, and phase shifting the received signal. The one or more processors 124/120/132 may be configured to calculate channel coherence using the respective received signals at each sensor of the transducer (probe) 104. The one or more processors 124/120/132 may be configured to calculate transmit coherence using scan line data representing the overlap region covered by the transmit beams, and may generate the scan lines used in calculating transmit coherence by one or more of processing, weighting, delaying, phase shifting, and summing the corresponding receive signals.

The one or more processors 124/120/132 may be configured to calculate channel coherence using a coherence factor process that includes abs(SumChannelData/SumAbsChannelData), where "abs" is the absolute value operator, SumChannelData is the sum of the channel data in receive beamforming, and SumAbsChannelData is the sum of the absolute values of the respective channel data before the channel data are summed. The one or more processors 124/120/132 may also calculate transmit coherence using a coherence factor process that includes abs(SumTransmitData/SumAbsTransmitData), where SumTransmitData is the sum of the data from the different transmissions used for coherent compounding, and SumAbsTransmitData is the sum of the absolute values of the respective transmit data before the summation in coherent compounding.

The one or more processors 124/120/132 may be configured to identify moving objects by generating a difference value (CFdiff) for a first portion P of the image, obtained by subtracting the corresponding first portion of the transmit coherence image (CFtx) from the corresponding first portion of the channel coherence image (CFch), and comparing the difference value to a threshold value. When the difference is above the threshold, the first portion P of the image may be regarded as part of the moving object and set to the compensated image Pm. When the difference is not above the threshold, the first portion P of the image may not be regarded as part of the moving object and is set to the uncompensated image P.

The one or more processors 124/120/132 may be configured to generate an output first portion Po using the following equation: Po = Pm × CFdiff + (1 - CFdiff) × P, where Pm is the compensated first portion and P is the uncompensated first portion. The one or more processors 124/120/132 may be configured to generate the one or more transmit coherence images using retrospective transmit beamforming.

As used herein, the term "circuitry" refers to physical electronic components (i.e., hardware) as well as configurable hardware, and any software and/or firmware ("code") that is executable by and/or otherwise associated with the hardware. For example, as used herein, a particular processor and memory may comprise first "circuitry" when executing one or more first codes and may comprise second "circuitry" when executing one or more second codes. As used herein, "and/or" means any one or more of the items in the list joined by "and/or". As an example, "x and/or y" means any element of the three-element set {(x), (y), (x, y)}. As another example, "x, y, and/or z" means any element of the seven-element set {(x), (y), (z), (x, y), (x, z), (y, z), (x, y, z)}. The term "exemplary", as used herein, means serving as a non-limiting example, instance, or illustration. As used herein, the terms "e.g." and "for example" introduce a list of one or more non-limiting examples, instances, or illustrations. As used herein, a circuit is "operable to" and/or "configured to" perform a function whenever the circuit includes the necessary hardware and code (if needed) to perform the function, regardless of whether performance of the function is disabled or not enabled by certain user-configurable settings.

Other embodiments may provide a computer readable device and/or a non-transitory computer readable medium, and/or a machine readable device and/or a non-transitory machine readable medium having stored thereon machine code executable by a machine and/or a computer program having at least one code section to cause the machine and/or the computer to perform steps as described herein to facilitate ultrasound operator interaction.

Accordingly, the present disclosure may be realized in hardware, software, or a combination of hardware and software. The present disclosure may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.

Various embodiments may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) replication takes place in different physical forms.

While the disclosure has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the scope thereof. Therefore, it is intended that the disclosure not be limited to the particular embodiments disclosed, but that the disclosure will include all embodiments falling within the scope of the appended claims.
