System, method, and computer-readable medium for processing and compounding ultrasound images in the presence of motion

Document No.: 1776121   Publication date: 2019-12-03

Reading note: This disclosure, "System, method, and computer-readable medium for processing and compounding ultrasound images in the presence of motion," was created by 甘小方 and 谭伟 on 2017-02-10. Its main content includes: an ultrasound imaging method and system. The method includes: for each steering angle of a plurality of steering angles including a reference steering angle, transmitting acoustic energy to a target region at the particular steering angle, receiving acoustic reflections, and converting the acoustic reflections into an image, the image being associated with the particular steering angle; computing motion information based on the image associated with the reference steering angle; and generating a compounded ultrasound image based on the image associated with each steering angle of the plurality of steering angles and based on the motion information.

1. An ultrasound imaging method, comprising:

for each steering angle of a plurality of steering angles including a reference steering angle:

transmitting acoustic energy to a target region at the particular steering angle;

receiving acoustic reflections; and

converting the acoustic reflections into an image, the image being associated with the particular steering angle;

computing motion information based on the image associated with the reference steering angle; and

generating a compounded ultrasound image based on the image associated with each steering angle of the plurality of steering angles and based on the motion information.

2. The method according to claim 1, wherein generating the compounded ultrasound image includes:

for each steering angle of the plurality of steering angles, applying a particular weighting to the image associated with the particular steering angle to generate a weighted image associated with the particular steering angle, wherein the particular weighting is based on the motion information; and

combining the weighted images associated with the plurality of steering angles to generate the compounded ultrasound image.

3. The method according to claim 1, wherein the image associated with each particular steering angle includes an H-by-W array of pixels, where H is the image height in number of pixels and W is the image width in number of pixels, and wherein each pixel has a pixel value.

4. The method according to claim 3, wherein computing the motion information includes computing a difference between a pre-existing image and the image associated with the reference steering angle to generate a difference image, wherein the difference image includes, for each pixel coordinate (i, j), where 1 ≤ i ≤ H and 1 ≤ j ≤ W:

DifferenceImage(i, j) = | PreExistingImage(i, j) − ReferenceSteeringAngleImage(i, j) |.

5. The method according to claim 4, wherein computing the motion information further includes filtering the difference image with a low-pass filter to generate a filtered difference image having pixels Dis(i, j).

6. The method according to claim 5, further comprising:

for each steering angle of the plurality of steering angles other than the reference steering angle (rsa), computing a weight for each pixel of the image associated with the particular steering angle, wherein the plurality of steering angles includes N steering angles designated 1 ≤ k ≤ N, and wherein the weight of each pixel (i, j) of the image associated with the particular steering angle k is computed as:

W_k,k≠rsa(i, j) = C_k,k≠rsa · f(Dis(i, j)), if Dis(i, j) ≤ TH

W_k,k≠rsa(i, j) = 0, if Dis(i, j) > TH

where:

f is a function that inverts pixel values such that f(Dis(i, j)) becomes smaller as Dis(i, j) becomes larger,

C_k is a predetermined value, where C_k becomes smaller as the steering angle becomes larger and where ∑_{k=1}^{N} C_k = 1, and

TH is a predetermined threshold.

7. The method according to claim 6, wherein generating the compounded ultrasound image includes computing, for each pixel (i, j):

CompoundImage(i, j) = ∑_{k=1}^{N} W_k(i, j) · Image_k(i, j),

where Image_k is the image associated with steering angle k, and where:

W_k,k=rsa(i, j) = 1 − ∑_{k≠rsa} W_k(i, j), if Dis(i, j) ≤ TH

W_k,k=rsa(i, j) = 1, if Dis(i, j) > TH.

8. The method according to claim 1, wherein the reference steering angle is zero degrees.

9. The method according to claim 1, wherein the motion information is based on motion of the target region.

10. The method according to claim 1, wherein generating the compounded ultrasound image includes performing envelope detection, compounding, and post-processing by a graphics processing unit.

11. The method according to claim 1, wherein the motion information is further computed based on a previously compounded ultrasound image.

12. An ultrasound system, comprising:

a transducer configured to, for each steering angle of a plurality of steering angles including a reference steering angle:

transmit acoustic energy to a target region at the particular steering angle,

receive acoustic reflections, and

convert the acoustic reflections into radio-frequency (RF) data;

front-end circuitry configured to, for each steering angle of the plurality of steering angles, process the radio-frequency (RF) data associated with the particular steering angle to generate an image associated with the particular steering angle; and

a computing device configured to:

generate motion information based on the image associated with the reference steering angle, and

generate a compounded ultrasound image based on the motion information and the image associated with each steering angle of the plurality of steering angles.

13. The system according to claim 12, wherein generating the compounded ultrasound image includes:

for each steering angle of the plurality of steering angles, applying a particular weighting to the image associated with the particular steering angle to generate a weighted image associated with the particular steering angle, wherein the particular weighting is based on the motion information; and

combining the weighted images associated with the plurality of steering angles to generate the compounded ultrasound image.

14. The system according to claim 12, wherein the image associated with each particular steering angle includes an H-by-W array of pixels, where H is the image height in number of pixels and W is the image width in number of pixels, and wherein each pixel has a pixel value.

15. The system according to claim 14, wherein generating the motion information includes:

computing a difference between a pre-existing image and the image associated with the reference steering angle to generate a difference image, wherein the difference image includes, for each pixel coordinate (i, j), where 1 ≤ i ≤ H and 1 ≤ j ≤ W:

DifferenceImage(i, j) = | PreExistingImage(i, j) − ReferenceSteeringAngleImage(i, j) |.

16. The system according to claim 15, wherein generating the motion information further includes:

filtering the difference image with a low-pass filter to generate a filtered difference image having pixels Dis(i, j).

17. The system according to claim 16, wherein the computing device is further configured to compute, for each steering angle of the plurality of steering angles other than the reference steering angle (rsa), a weight for each pixel of the image associated with the particular steering angle, wherein the plurality of steering angles includes N steering angles designated 1 ≤ k ≤ N, and wherein the weight of each pixel (i, j) of the image associated with the particular steering angle k is computed as:

W_k,k≠rsa(i, j) = C_k,k≠rsa · f(Dis(i, j)), if Dis(i, j) ≤ TH

W_k,k≠rsa(i, j) = 0, if Dis(i, j) > TH

where:

f is a function that inverts pixel values such that f(Dis(i, j)) becomes smaller as Dis(i, j) becomes larger,

C_k is a predetermined value, where C_k becomes smaller as the steering angle becomes larger and where ∑_{k=1}^{N} C_k = 1, and

TH is a predetermined threshold.

18. The system according to claim 17, wherein generating the compounded ultrasound image includes computing, for each pixel (i, j):

CompoundImage(i, j) = ∑_{k=1}^{N} W_k(i, j) · Image_k(i, j),

where Image_k is the image associated with steering angle k, and where:

W_k,k=rsa(i, j) = 1 − ∑_{k≠rsa} W_k(i, j), if Dis(i, j) ≤ TH

W_k,k=rsa(i, j) = 1, if Dis(i, j) > TH.

19. The system according to claim 12, wherein the reference steering angle is zero degrees.

20. The system according to claim 12, wherein the motion information is based on motion of the target region.

Technical Field

The present disclosure relates generally to ultrasound imaging and, more particularly, to systems, methods, and computer-readable media for processing and compounding ultrasound images in the presence of motion.

Background

Ultrasound systems have become a popular diagnostic tool due to their wide range of applications. In particular, because of their non-invasive and non-destructive nature, ultrasound systems are used extensively in the medical industry. Modern high-performance ultrasound systems and techniques are commonly used to produce two-dimensional or three-dimensional images of internal features of an object (for example, a human organ).

An ultrasound system typically transmits and receives ultrasound signals using a probe containing a wide-bandwidth transducer. When used with the human body, the ultrasound system forms images of internal human tissue by electrically exciting an acoustic transducer element or array of acoustic transducer elements to generate ultrasound signals that travel into or across the body. The ultrasound signals produce ultrasound echo signals reflected from body tissue, which presents discontinuities to the propagating ultrasound signals. The various ultrasound echo signals return to the transducer elements and are converted into electrical signals, which are amplified and processed to produce ultrasound data for an image of the tissue.

An ultrasound system uses an ultrasound probe containing a transducer array to transmit and receive ultrasound signals, and forms ultrasound images based on the received ultrasound signals. The technique of transmitting ultrasound signals by steering the ultrasound beam at various angles has been used to obtain ultrasound images with multiple views of a target region of interest.

In addition, an ultrasound imaging system may include an ultrasound imaging unit and an image processing unit. The ultrasound imaging unit can control the transmission of ultrasound signals to a target region (such as tissue) and form data based on the echo signals generated by the transmitted ultrasound signals. The ultrasound imaging unit can also control the transmission of ultrasound signals at various steering angles. Using the echo signals, the ultrasound imaging unit and the image processing unit can generate a compounded image of the target region by combining the echoes of different steering angles with a technique known as spatial compounding. In some cases, the target region may move during the ultrasound procedure, adversely affecting the echo signals of one or more steering angles and degrading the compounded image of the target region. Accordingly, there is a need for systems and methods that determine motion of the target region during an ultrasound procedure and apply motion compensation to the compounded image of the target region.

Summary

The present disclosure relates to processing and compounding ultrasound images in the presence of motion. In one aspect, an ultrasound imaging method operates, for each steering angle of several steering angles including a reference steering angle (RSA), to transmit acoustic energy to a target region at the particular steering angle, receive acoustic reflections, and convert the acoustic reflections into an image, where the image is associated with the particular steering angle. The ultrasound imaging method computes motion information based on the image associated with the reference steering angle, and generates a compounded ultrasound image based on the image associated with each steering angle and based on the motion information.

In one embodiment, generating the compounded ultrasound image includes: for each steering angle, applying a particular weighting to the image associated with the particular steering angle to generate a weighted image associated with the particular steering angle, where the particular weighting is based on the motion information; and combining the weighted images associated with the steering angles to generate the compounded ultrasound image.

In another embodiment, the image associated with each steering angle has an H-by-W array of pixels, where H is the image height in number of pixels and W is the image width in number of pixels, and where each pixel has a pixel value. In one embodiment, computing the motion information includes computing the difference between a pre-existing image and the image associated with the reference steering angle (RSA) to generate a difference image, and filtering the difference image with a low-pass filter to generate a filtered difference image having pixels Dis(i, j).
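The difference-image and low-pass filtering steps just described can be sketched in Python. This is a minimal illustration under stated assumptions, not code from the disclosure: a 3x3 box filter stands in for whatever low-pass filter an implementation would actually use.

```python
def difference_image(pre_existing, rsa_image):
    """Per-pixel absolute difference between the stored pre-existing (base)
    image and the image associated with the reference steering angle."""
    H, W = len(pre_existing), len(pre_existing[0])
    return [[abs(pre_existing[i][j] - rsa_image[i][j]) for j in range(W)]
            for i in range(H)]


def box_low_pass(img, radius=1):
    """Simple box-filter low pass (border pixels clamp to the edge),
    producing the filtered difference image Dis(i, j)."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            acc, n = 0.0, 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii = min(max(i + di, 0), H - 1)
                    jj = min(max(j + dj, 0), W - 1)
                    acc += img[ii][jj]
                    n += 1
            out[i][j] = acc / n
    return out
```

Large values of Dis(i, j) then indicate pixels where the target region likely moved between the base image and the current reference-angle image.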

In one aspect, the disclosed ultrasound imaging method operates, for each steering angle other than the reference steering angle, to compute the weight of each pixel of the image associated with the particular steering angle, where the steering angles include N steering angles designated 1 ≤ k ≤ N, and where the weight of each pixel (i, j) of the image associated with the particular steering angle k is computed as:

W_k,k≠rsa(i, j) = C_k,k≠rsa · f(Dis(i, j)), if Dis(i, j) ≤ TH

W_k,k≠rsa(i, j) = 0, if Dis(i, j) > TH

where:

f is a function that inverts pixel values such that f(Dis(i, j)) becomes smaller as Dis(i, j) becomes larger,

C_k is a predetermined value, where C_k becomes smaller as the steering angle becomes larger and where ∑_{k=1}^{N} C_k = 1, and

TH is a predetermined threshold.
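As a hedged illustration of this weighting rule, the following sketch uses f(d) = 1 − d/TH as one possible decreasing function f; that choice, and any particular C_k value, are assumptions for illustration only, since the disclosure constrains just their general behavior.

```python
def steered_weight(dis_ij, c_k, th):
    """Weight W_k(i, j) for a non-reference steering angle k:
    C_k * f(Dis(i, j)) when Dis(i, j) <= TH, and 0 when Dis(i, j) > TH.
    Here f(d) = 1 - d/th is one possible decreasing function f."""
    if dis_ij > th:
        return 0.0                  # too much motion: exclude this angle
    f = 1.0 - dis_ij / th           # decreases as Dis(i, j) grows
    return c_k * f
```

At Dis = 0 (no detected motion) the steered image contributes its full C_k share; as Dis approaches TH its contribution tapers to zero, and beyond TH it is dropped entirely.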

In yet another embodiment, generating the compounded ultrasound image includes computing, for each pixel (i, j):

CompoundImage(i, j) = ∑_{k=1}^{N} W_k(i, j) · Image_k(i, j),

where Image_k is the image associated with steering angle k, and where:

W_k,k=rsa(i, j) = 1 − ∑_{k≠rsa} W_k(i, j), if Dis(i, j) ≤ TH

W_k,k=rsa(i, j) = 1, if Dis(i, j) > TH.

In another embodiment, the reference steering angle is zero degrees, which corresponds to the steering angle at which acoustic energy is transmitted from all transducer elements simultaneously. In one embodiment, the motion information is further computed based on a previous compounded ultrasound image.

According to at least one aspect of the disclosure, an ultrasound system includes a transducer configured to, for each steering angle of several steering angles including a reference steering angle, transmit acoustic energy to a target region at the particular steering angle, receive acoustic reflections, and convert the acoustic reflections into radio-frequency (RF) data. The ultrasound system further includes: front-end circuitry configured to, for each steering angle, process the RF data associated with the particular steering angle to generate an image associated with the particular steering angle; and a computing device configured to generate motion information based on the image associated with the reference steering angle and to generate a compounded ultrasound image based on the motion information and the image associated with each steering angle.

In another embodiment of the ultrasound system, generating the compounded ultrasound image includes: for each steering angle, applying a particular weighting to the image associated with the particular steering angle to generate a weighted image associated with the particular steering angle, where the particular weighting is based on the motion information; and combining the weighted images associated with the steering angles to generate the compounded ultrasound image.

In another embodiment of the ultrasound system, the image associated with each particular steering angle has an H-by-W array of pixels, where H is the image height in number of pixels and W is the image width in number of pixels, and where each pixel has a pixel value. In one embodiment, generating the motion information includes computing the difference between a pre-existing image and the image associated with the reference steering angle (RSA) to generate a difference image, and filtering the difference image with a low-pass filter to generate a filtered difference image having pixels Dis(i, j).

In one aspect of the ultrasound system, the computing device is further configured to compute, for each steering angle other than the RSA, the weight of each pixel of the image associated with the particular steering angle, where the steering angles include N steering angles designated 1 ≤ k ≤ N, and where the weight of each pixel (i, j) of the image associated with the particular steering angle k is computed as:

W_k,k≠rsa(i, j) = C_k,k≠rsa · f(Dis(i, j)), if Dis(i, j) ≤ TH

W_k,k≠rsa(i, j) = 0, if Dis(i, j) > TH

where:

f is a function that inverts pixel values such that f(Dis(i, j)) becomes smaller as Dis(i, j) becomes larger,

C_k is a predetermined value, where C_k becomes smaller as the steering angle becomes larger and where ∑_{k=1}^{N} C_k = 1, and

TH is a predetermined threshold.

In another embodiment of the ultrasound system, generating the compounded ultrasound image includes computing, for each pixel (i, j):

CompoundImage(i, j) = ∑_{k=1}^{N} W_k(i, j) · Image_k(i, j),

where Image_k is the image associated with steering angle k, and where:

W_k,k=rsa(i, j) = 1 − ∑_{k≠rsa} W_k(i, j), if Dis(i, j) ≤ TH

W_k,k=rsa(i, j) = 1, if Dis(i, j) > TH.

In one embodiment of the ultrasound system, the reference steering angle is zero degrees, which corresponds to the steering angle at which acoustic energy is transmitted from all transducer elements simultaneously. In one embodiment, the motion information is further computed based on a previous compounded ultrasound image.

This Summary is provided as a brief introduction to the disclosure. It does not identify key or essential features, and it does not define or limit the scope of the disclosure in any way.

Brief Description of the Drawings

Aspects of the disclosure are described below with reference to the accompanying drawings, which are incorporated in and constitute a part of this specification, in which:

Fig. 1 illustrates the top-level architecture of one embodiment of an ultrasound imaging system according to aspects of the disclosure;

Fig. 2 illustrates one embodiment of the imaging system of the ultrasound imaging system according to aspects of the disclosure;

Fig. 3 illustrates a block diagram of one embodiment of the operation of a beamforming unit according to aspects of the disclosure;

Fig. 4 illustrates a flowchart of one embodiment of a spatial compounding method according to aspects of the disclosure; and

Fig. 5 illustrates the spatial compounding method of Fig. 4 applied to multiple ultrasound images, according to aspects of the disclosure.

Detailed Description

The present disclosure relates to performing spatial compounding using motion information to reduce blurring. Spatial compounding is an ultrasound imaging technique that directs ultrasound waves to a target region at different angles and then combines the images generated from each angle to obtain an image of the target region. Spatial compounding can produce an image of better quality than imaging the target region at only one angle. However, difficulties arise in spatial compounding when the target region is prone to movement, such as when imaging the heart, lungs, or a fetus. In such cases, spatial compounding may result in an excessively blurred image.

An ultrasound probe is an electronically reusable device with precise waveform timing and complex waveform shaping, and with an array of transducer elements that can transmit analog or digital data to an imaging system. By using individual transducer elements aimed at the target region at various angles and processing the information obtained by the individual transducer elements, the imaging system can generate multiple ultrasound images, which can be combined to produce a single ultrasound image of the target region of higher quality than the individual, uncombined ultrasound images. As discussed in further detail below, various embodiments of this ultrasound imaging system provide for spatial compounding in the presence of motion.

Fig. 1 illustrates a block diagram of an ultrasound imaging system 100 according to an embodiment of the present disclosure. The ultrasound imaging system 100 includes: a transducer unit 110, which includes a transducer array of transducer elements 105a-z; front-end circuitry 120; a communication interface 130; a computing device 140; and a display 150. The transducer unit 110 is a device that converts acoustic energy/sound waves into electrical signals and converts electrical signals into acoustic energy/sound waves. As used herein, the term "sound" includes waves having audible frequencies (that is, at 20 kHz or below) and waves having ultrasonic frequencies (that is, above 20 kHz). The transducer unit 110 includes the array of transducer elements 105a-z, which transmit ultrasound waves toward the target region at particular and/or predetermined angles referred to as "steering angles." Ultrasound waves propagate omnidirectionally in the absence of a waveguide. When all transducer elements 105a-z are activated simultaneously, the ultrasound waves generated by all transducer elements 105 spread outward from the transducer elements at the same time. This is referred to as a zero-degree steering angle. However, each transducer element 105a-z can be controlled independently, and by activating particular transducer elements at different times, the transducer unit can adjust the steering angle of the ultrasound transmission without physically moving the transducer unit. By the time a later-activated wave begins to propagate, the earlier-activated wave will have traveled some distance, forming a wavefront that emerges at an angle relative to the transducer unit 110. Thus, the steering angle can be adjusted electronically using transducer timing, without physically moving the transducer unit 110.
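The electronic steering described above can be made concrete with a small delay calculation. This sketch assumes a linear array with uniform element pitch and a homogeneous speed of sound; both are illustrative assumptions rather than details from the disclosure.

```python
import math


def steering_delays(num_elements, pitch_m, angle_deg, c=1540.0):
    """Per-element firing delays (seconds) that tilt the transmitted
    wavefront by angle_deg for a linear array; c is an assumed speed of
    sound in soft tissue.  A zero-degree angle yields all-zero delays,
    i.e. every element fires simultaneously."""
    dt = pitch_m * math.sin(math.radians(angle_deg)) / c
    delays = [k * dt for k in range(num_elements)]
    shift = min(delays)                 # make all delays non-negative
    return [d - shift for d in delays]
```

For a positive angle the delays increase monotonically across the array; a negative angle simply reverses the firing order, which is how the same hardware produces steering angles on either side of zero.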

The transducer array 105 receives ultrasound waves reflected, or echoed, from the target region, and the transducer unit 110 converts the received ultrasound waves into electrical signals. The electrical signals converted by the transducer unit 110 may be in the form of radio-frequency (RF) signals. As shown in Fig. 1, the transducer unit 110 interfaces or couples with the front-end circuitry 120 via communication lines "T." As used herein, a communication line "T" denotes a line capable of carrying RF data, image data, or other electrical signals and providing an interface between components of the ultrasound imaging system 100.

Referring now to the front-end circuitry 120, the front-end circuitry 120 includes a receiver (not shown) that receives RF signals from the transducer unit 110, a transmitter (not shown) that sends RF signals to the transducer unit 110, and a front-end processor 125. As described below, the front-end circuitry 120 performs particular processing of the RF signals. The front-end processor 125 can perform imaging using particular ultrasound waveforms, beam patterns, receiver filtering techniques, and amplification and demodulation schemes. The front-end circuitry 120 also converts digital signals into analog signals, and vice versa. The front-end circuitry 120 interfaces and couples with the transducer unit 110 via transmission lines "T," and also interfaces with the communication interface 130 and the computing device 140 via transmission lines "T." The communication interface 130 is an interface device that allows the front-end circuitry 120 to communicate with the computing device 140, and may include a universal serial bus (USB), such as USB 3.0, or another bus interface or protocol capable of interfacing with computers and electronic devices.

In the illustrated embodiment, the computing device 140 includes a central processing unit (CPU) 145 and a graphics processing unit (GPU) 147. The CPU 145 and the GPU 147 provide image processing and post-processing of the information received from the front-end circuitry 120, and can control or direct other operations. In some embodiments, the computing device 140 can be a personal computer, a laptop computer, or another computing device.

In some embodiments, the transducer unit 110 and the front-end circuitry 120 are contained in a single device that interfaces with the computing device 140 via the communication interface 130. In other embodiments, the transducer unit 110 and the front-end circuitry 120 are contained in separate devices. In one embodiment, the communication interface 130, the computing device 140, and/or the display 150 can be contained in one device. In another embodiment, the computing device 140 and the display 150 can be separate devices. Other configurations are contemplated to be within the scope of the disclosure.

Referring now to Fig. 2, a block diagram of an imaging system 200 including the computing device 140 and the display 150 is illustrated. The system 200 is used to generate compounded images. The system 200 communicates with the transducer unit and/or front-end circuitry (Fig. 1, 110, 120) to receive RF/image data, and performs processing on the RF/image data to generate compounded images. The computing device 140 includes the CPU 145, a first-in-first-out (FIFO) buffer 134, and the GPU 147, and may include the display 150. In other embodiments, the display 150 may, for example, be a standalone monitor external to the computing device 140. The CPU 145 can serve at least as a USB host controller 136. The GPU 147 is capable of handling operations such as beamforming, envelope detection, image compounding, and image post-processing, performed respectively by a beamforming unit 141, an envelope detection unit 142, a compounding unit 143, and an image post-processing unit 144. As used herein, the term "beamforming" shall refer to receive beamforming, unless usage or context indicates that transmit beamforming or another meaning is intended in any particular instance.

The CPU 145 controls the USB host controller 136 to receive RF/image data from the transducer unit and/or front-end circuitry. As RF/image data is received, the CPU 145 sends and writes the RF/image data to the FIFO buffer 134. The RF/image data in the FIFO buffer 134 is then processed by the GPU 147 in the order in which it was received, so that the first RF/image data received is the first processed. The FIFO buffer 134 is coupled with the CPU 145 and the GPU 147. As long as the FIFO buffer 134 has storage space available, the CPU 145 stores each piece of RF/image data in the FIFO buffer 134.
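The FIFO staging behavior just described, writing frames while space is available and consuming them in arrival order, can be sketched as follows; the `RfFifo` class and its capacity check are illustrative assumptions, not an API from the disclosure.

```python
from collections import deque


class RfFifo:
    """Bounded first-in-first-out staging buffer: frames are consumed in
    arrival order, and writes are refused when no space is available."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._q = deque()

    def write(self, frame):
        """Store one RF/image frame; returns False when the buffer is
        full, so the producer must wait for the consumer to catch up."""
        if len(self._q) >= self.capacity:
            return False
        self._q.append(frame)
        return True

    def read(self):
        """Remove and return the oldest frame, or None when empty."""
        return self._q.popleft() if self._q else None
```

The first-written frame is always the first read, matching the requirement that the GPU process RF/image data in the order the CPU received it.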

Turning now to the GPU 147, the beamforming unit 141 performs further processing of the RF/image data by delaying certain signals received by particular transducer elements 105a-z to compensate for certain transducer elements being farther from the target region than others, so as to align in time the signals received by each transducer element 105a-z. After the signals of the various transducer elements are aligned, the beamforming unit 141 combines the data from each transducer element to generate RF/image data of a single image, as shown in Fig. 3. The envelope detection unit 142 is configured to detect the envelope of the signal generated by the beamforming unit 141, so as to remove the carrier signal. The envelope detection unit 142 is used to detect peaks in the received RF/image data, and log compression is used to reduce the dynamic range of the received RF/image data. The compounding unit 143 is used to analyze whether motion that may affect the compounded image is present in the received RF/image data, to correct for the motion, and to combine the images generated by the beamforming unit 141. After the compounding unit 143 performs the combining processing, a compounded ultrasound image is generated. The image post-processing unit 144 is configured to automatically enhance the generated ultrasound image for a medical professional or technician. The medical professional or technician can control the degree to which the generated compounded image is post-processed. The resulting image can then be displayed on the screen of the display 150.
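The envelope-detection and log-compression steps performed by the envelope detection unit can be sketched as follows. Full-wave rectification followed by a short moving average is used here as a simple stand-in for whatever envelope detector an implementation actually uses, and the 60 dB dynamic range is an illustrative constant.

```python
import math


def detect_envelope(rf, window=5):
    """Crude envelope detection: full-wave rectify the RF samples, then
    smooth with a moving average to strip the carrier oscillation."""
    rect = [abs(s) for s in rf]
    half = window // 2
    n = len(rect)
    return [sum(rect[max(0, i - half):min(n, i + half + 1)]) /
            len(rect[max(0, i - half):min(n, i + half + 1)])
            for i in range(n)]


def log_compress(env, dynamic_range_db=60.0):
    """Map envelope amplitudes into a reduced dynamic range in dB,
    clipped at -dynamic_range_db relative to the peak amplitude."""
    peak = max(env) or 1.0
    out = []
    for e in env:
        db = 20.0 * math.log10(e / peak) if e > 0 else -dynamic_range_db
        out.append(max(db, -dynamic_range_db))
    return out
```

Compressing to decibels relative to the peak is what lets weak tissue echoes remain visible next to strong specular reflections on an 8-bit display.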

Fig. 2 is merely exemplary, and in various embodiments the operations described in conjunction with Fig. 2 can be handled by processor(s), software, programmable logic devices, integrated circuits, ASICs, circuitry, and/or other configurations of hardware. For example, in certain embodiments, a CPU, microprocessor, or digital signal processor can handle some or all of the operations. In certain embodiments, some or all of the operations, such as the beamforming and envelope detection operations, can be performed by hardware or circuitry. Other configurations not specifically disclosed herein are contemplated to be within the scope of the disclosed technology.

Referring now to Fig. 3, Fig. 3 illustrates an exemplary block diagram 300 of beamforming, including filtering, time delay, and summation, as performed when the beamforming unit 141 of the GPU 147 processes RF/image data. As shown in Fig. 3, RF/image data 305 represents the data generated by the transducer elements 105a-z and read from the FIFO buffer 134. The RF/image data 305 is filtered via filters 310 to reduce noise, and the filters 310 can take the form of finite impulse response filters, infinite impulse response filters, or median filters, each as would be understood by one of ordinary skill in the art. The filtered data is delayed by time delay units 315, which generate data 317 corresponding to the RF/image data 305. As described above, the beamforming unit 141 uses time delays to compensate for certain transducer elements 105a-z being farther from the target region than other transducer elements 105a-z, so as to align in time the signals received by each transducer element 105a-z. For example, when the transducer unit 110 receives ultrasound reflections, transducer elements 105a-z closer to the target region will receive the reflections before transducer elements 105a-z farther from the target region. Thus, the transducer elements 105a-z will not receive the reflections at the same moment. The time delays operate to align in time the RF/image data 305 from the transducer elements 105a-z. After the signals of the transducer elements 105a-z are aligned, a summation unit 320 combines all the output data 317 for a particular steering angle to generate beamformed output data 325. Each beamformed output data 325 is, for a particular steering angle, a separate image having a pixel matrix H pixels high and W pixels wide.
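A minimal delay-and-sum sketch of the delay/sum portion of this pipeline follows; integer sample delays and rectangular (no) apodization are simplifying assumptions, and the filtering stage is omitted.

```python
def delay_and_sum(channels, delays):
    """Align each per-element trace (data 317) by its integer sample
    delay and sum across channels, producing one beamformed output
    trace (data 325).  Samples shifted past the end are treated as
    zero, as if the recording window had simply ended."""
    n = len(channels[0])
    out = [0.0] * n
    for trace, d in zip(channels, delays):
        for i in range(n):
            src = i + d                 # advance later-arriving channels
            if 0 <= src < n:
                out[i] += trace[src]
    return out
```

When the delays match the true arrival-time differences, the echo samples add coherently (as in the test below, where three one-sample echoes align into a single peak), while uncorrelated noise adds incoherently.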

Referring again to the beamforming unit 141 of Fig. 2 and the block diagram 300 of Fig. 3, the beamforming unit 141 can be implemented by a programmable logic device. The programmable logic device filters, interpolates, demodulates, phases, apodizes, delays, and/or sums the received signals, these being the operations of the beamforming unit 141. The programmable logic device controls the ultrasound waveform characteristics and delays, and generates the ultrasound waveforms from memory. The programmable logic device can also implement relative delays between waveforms, as well as filtering, interpolation, modulation, phasing, and apodization. The programmable logic device can control the beamforming unit 141 to perform functions to process multiple signals associated with a multi-element, electronically scanned transducer array.

Referring now to Fig. 4, a flowchart of one embodiment of a method 400 of spatial compounding is illustrated. The flowchart begins at step 405, where multiple ultrasound images are formed at multiple steering angles. As described in detail in connection with Fig. 2 and Fig. 3, the beam forming unit 141 of Fig. 2 is used to generate the beam-formed output data 325 of Fig. 3. Each set of beam-formed output data 325 is a different ultrasound image having a different steering angle. There may be N different steering angles, one of which is designated as the reference steering angle (RSA). In one embodiment, the RSA may be the zero-degree steering angle. In another embodiment, the RSA may be a non-zero-degree steering angle. An ultrasound image having a particular steering angle comprises an H-by-W matrix of pixels, where H denotes the image height in pixels and W denotes the image width in pixels. At step 405, N ultrasound images having N different steering angles are formed.

In one embodiment, method 400 may proceed to step 408 and generate a spatially compounded image without using any motion information, as is known in the art, and then, at step 427, store and display the compounded image. In another embodiment, method 400 may proceed to steps 410-425 to compute motion information and to generate the spatially compounded image based on that motion information, as described below.

Step 410 requires that a stored, pre-existing image be available for the purpose of a difference computation, the difference computation being used to generate information about motion at the target region. The stored, pre-existing image serves as a baseline image, and differences from the baseline image are treated as motion. After step 405, if no stored, pre-existing image is available for the difference computation, method 400 instead proceeds to step 408. For example, for the first iteration of the spatial compounding operation, method 400 may proceed to step 408. In a second iteration of the spatial compounding operation, in which the compounded image from the first iteration is available for use as the baseline image, method 400 may proceed to step 410. In the second iteration, the compounded image from the first iteration is used in the difference computation of step 410.

At step 410, the stored image is selected along with the ultrasound image having the RSA, and a difference computation is performed between the two selected images. In one embodiment, the stored image may be a pre-existing, previously compounded image. In one embodiment, the stored image may be a pre-existing ultrasound image associated with a steering angle other than the RSA.

The difference computation is performed between each pixel coordinate (i, j) of the stored, pre-existing image and the corresponding pixel coordinate (i, j) of the ultrasound image having the reference steering angle, based on equation (1) below, where (i, j) denotes a pixel coordinate, with 1 ≤ i ≤ H and 1 ≤ j ≤ W:

Difference image(i, j) = | pre-existing image(i, j) − reference steering angle image(i, j) |    (1)

At step 415, preprocessing is performed on the difference image. In one embodiment, the difference image computed according to equation (1) above is filtered. The resulting filtered difference image is denoted Dis, and each of its pixels is denoted Dis(i, j). In one embodiment, the filter may be a low-pass filter that reduces noise in the difference image, such as a 5×5 median filter or a Gaussian smoothing filter. Other filters are contemplated to be within the scope of this disclosure. This filtered difference image provides an indication of motion of the target region during imaging and serves as the motion information. In one embodiment, the difference image need not be filtered and itself serves as the motion information. The difference computation of equation (1) is exemplary; other ways of determining motion information are contemplated to be within the scope of this disclosure.
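Steps 410-415 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: it uses a hand-rolled 3×3 median filter for brevity (the disclosure suggests, e.g., a 5×5 median or Gaussian filter), and the names (motion_map, prior, reference) are hypothetical.

```python
import numpy as np

def motion_map(prior, reference):
    """Equation (1): per-pixel absolute difference between the stored,
    pre-existing image and the RSA image, followed by a median filter
    (step 415) to suppress noise. Returns the filtered map Dis."""
    diff = np.abs(prior.astype(float) - reference.astype(float))
    # 3x3 median filter with replicated edges (brevity; the disclosure
    # suggests e.g. a 5x5 median or a Gaussian smoothing filter)
    p = np.pad(diff, 1, mode="edge")
    h, w = diff.shape
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)
```

An isolated single-pixel spike (noise rather than motion) is rejected by the median, while a uniform intensity change survives as motion information.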

Next, at step 420, a weight matrix is generated for each ultrasound image of each non-RSA steering angle based on equations (2)-(7) below:

W_{k, k≠rsa}(i, j) = C_{k, k≠rsa} · f(Dis(i, j))    (2)

where k is a steering angle identifier with 1 ≤ k ≤ N, N is the number of steering angles, and rsa is the reference steering angle. The coefficient C_k is a predetermined coefficient associated with steering angle k. The function f(Dis(i, j)) is a function applied to each pixel coordinate (i, j) of the filtered difference image. C_k serves as an angle-specific coefficient applied to every pixel of the image associated with steering angle k, and f(Dis(i, j)) serves as a pixel-specific weight applied to pixel coordinate (i, j) of the ultrasound image of each steering angle. Equation (2) thus computes the weight W_{k, k≠rsa}(i, j) for steering angle k and pixel coordinate (i, j) using the angle-specific coefficient and the pixel-specific weight. In one embodiment, C_k becomes larger as steering angle k approaches the RSA:

Equation (3) generates an intermediate variable P_k that decreases as steering angle k deviates from rsa and is maximal (equal to 1) when the steering angle is rsa. Equation (4) generates C_k and normalizes the C_k values so that the sum of C_k over all steering angles k is 1, as shown in equation (5). In other words, as a steering angle deviates from the RSA, the weight given to each pixel coordinate (i, j) of that steering angle decreases.

In one embodiment, f(Dis(i, j)) and Dis(i, j) have an inverse relation, so that as the value of Dis(i, j) increases, the value of f(Dis(i, j)) decreases. In this way, for a particular steering angle k, as the value of Dis(i, j) increases, the value of W_{k, k≠rsa}(i, j) generally decreases. Pixels reflecting greater motion thus contribute more lightly to the spatial compounding, and pixels reflecting lesser motion or no motion contribute more heavily to the spatial compounding.
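The weighting of equation (2) can be sketched as follows. The disclosure leaves the exact forms of P_k, C_k, and f open, so both functional forms below are assumptions for illustration: a 1/(1 + |k − rsa|) falloff for P_k and the inverse relation f(d) = 1/(1 + d).

```python
import numpy as np

def angle_coefficients(n_angles, rsa):
    """C_k: larger for steering angles near the RSA, normalized so that
    sum_k C_k == 1 (equation (5)). The 1/(1 + |k - rsa|) falloff is an
    illustrative choice for the intermediate variable P_k of equation (3)."""
    k = np.arange(n_angles)
    p = 1.0 / (1.0 + np.abs(k - rsa))   # P_k: maximal (= 1) at k == rsa
    return p / p.sum()                  # equation (4): normalization

def pixel_weight(c_k, dis):
    """Equation (2): W_k(i, j) = C_k * f(Dis(i, j)), with the assumed
    inverse relation f(d) = 1/(1 + d), so more motion -> smaller weight."""
    return c_k / (1.0 + dis)
```

Under these choices the coefficients sum to one, peak at the reference angle, and a pixel showing motion receives a smaller weight than a static pixel at the same angle.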

Next, at step 422, pixel coordinates (i, j) whose motion information Dis(i, j) exceeds a predetermined threshold are excluded from the compounding process. Where the value of Dis(i, j) at a pixel coordinate (i, j) exceeds the predetermined threshold, it is determined that the ultrasound images exhibit too much motion at or around pixel coordinate (i, j) to be acceptable for compounding, and therefore that compounding the ultrasound images at those pixel coordinates (i, j) would not produce an image of the target region of sufficient quality. An example of such a threshold (TH) is shown in equation (6):

TH = f_1(f_s)    (6)

where f_s is the frame rate of the ultrasound imaging system, and f_1 is an inverse function of f_s, so that f_1(f_s) becomes smaller as f_s becomes larger. For example, the threshold may be TH = a + b/f_s, where a and b are constants.

Where a pixel coordinate is excluded from the compounding process in this manner, the weight W_{k, k≠rsa}(i, j) assigned to pixel coordinate (i, j) of each non-reference steering angle k is set to 0, and the weight W_{k=rsa}(i, j) assigned to pixel coordinate (i, j) of the reference steering angle rsa is set to 1, as shown in equation (7):

W_{k, k≠rsa}(i, j) = 0,  W_{k=rsa}(i, j) = 1    (7)

The weight metric W_k(i, j) for pixel coordinate (i, j) and steering angle k is included in a weight table W_k, which is used during the spatial compounding of step 425. At step 425, spatial compounding is performed, as described in further detail in the description of Fig. 5, and a spatially compounded image is generated and stored in memory. Next, at step 427, the spatially compounded image is stored and displayed. The weighting technique of equations (2)-(7) is exemplary; other weighting schemes are contemplated to be within the scope of this disclosure.
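The threshold exclusion of step 422 can be sketched as follows, operating on a stack of per-angle weight tables. The specific constants a and b are illustrative, as is the name apply_motion_threshold; only the TH = a + b/f_s form and the 0/1 reassignment of equation (7) come from the disclosure.

```python
import numpy as np

def apply_motion_threshold(weights, dis, rsa, fs, a=0.1, b=10.0):
    """Equations (6)-(7). TH = a + b/fs decreases as the frame rate fs
    increases. Where Dis(i, j) > TH, compounding is disabled at that
    pixel: every non-reference weight is set to 0 and the reference
    weight to 1, so only the RSA image contributes there.
    weights: (N, H, W) stack of per-angle weight tables W_k."""
    th = a + b / fs                  # equation (6), illustrative f_1
    excluded = dis > th              # pixels with too much motion
    out = weights.copy()
    out[:, excluded] = 0.0           # equation (7): W_k -> 0 for all k
    out[rsa, excluded] = 1.0         #               W_rsa -> 1
    return out
```

At f_s = 10 frames per second and these constants, TH = 1.1, so a pixel with Dis = 5 is excluded while its neighbors keep their computed weights.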

Next, method 400 proceeds to step 430, and a new ultrasound image is generated at another steering angle. Method 400 then returns to step 410, where the previous compounded image from step 425 is used in the difference computation, and the motion information is determined at step 415. As described in more detail below in the description of Fig. 5, steps 420-425 then operate to generate a new spatially compounded image using the motion information determined at step 415 and the new ultrasound image formed at step 430.

Thus, spatial compounding is performed using all of the images having the various steering angles, but images whose pixels have higher weights have a greater influence on the final compounded image than images whose pixels have lower weights. Moreover, when the spatially compounded image is generated, static portions of the images are compounded with greater weight than portions of the images exhibiting motion, thereby reducing motion blur in the final spatially compounded image. The operational flow of Fig. 4 is exemplary; unless otherwise stated, the operations may be performed in a different order and/or may be performed in parallel.

Referring now to Fig. 5, a diagram 500 illustrating the spatial compounding of step 425 of Fig. 4 is provided. As shown in Fig. 5, multiple images 505 (I_1-I_N) associated with different steering angles 1 to N are shown as multiple H-by-W pixel matrices. As described above with respect to steps 410-422 of Fig. 4, the motion information and the weights for each steering angle are computed. In step 425 of Fig. 4, the images 505 (I_1-I_N) are weighted and combined according to equation (8) to form a spatially compounded image denoted Res:

Res(i, j) = Σ_{k=1}^{N} W_k(i, j) · I_k(i, j)    (8)

where Res(i, j) corresponds to pixel coordinate (i, j) in the compounded image 520 generated by spatially compounding the multiple images 505 (I_1-I_N), and each weight table W_k(i, j) is applied to the pixels of the corresponding image I_k. The weight tables W_k(i, j) are determined at steps 420-422 of Fig. 4. The spatially compounded image Res1 is stored and displayed.
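The per-pixel weighted combination of step 425 can be sketched as follows. A minimal sketch assuming the weight tables have already been computed and stacked alongside the steered images; the function name is hypothetical.

```python
import numpy as np

def spatial_compound(images, weights):
    """Per-pixel weighted sum over steering angles: each weight table
    W_k is applied elementwise to the corresponding steered image I_k,
    and the weighted images are summed to form Res.
    images, weights: (N, H, W) arrays."""
    return (weights * images).sum(axis=0)
```

For example, compounding two single-pixel images with values 2 and 4 under weights 0.25 and 0.5 yields 0.25·2 + 0.5·4 = 2.5 at that pixel.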

Fig. 5 also illustrates step 430 of Fig. 4, in which a new ultrasound image is generated at another steering angle. As shown in Fig. 5, compounded image 520 (Res1) corresponds to the compounding of the multiple images 505 (I_1-I_N). When a new ultrasound image I_1' becomes available (at step 430 of Fig. 4), it replaces image I_1, and a new compounded image 520 (Res2) is generated (at step 425 of Fig. 4) by compounding the multiple images 505 (I_2-I_N and I_1') using the new image I_1'.

The disclosed system sequentially generates the ultrasound images having steering angles 1 to N. After generating the ultrasound image having steering angle N, the system cycles back to steering angle 1 and repeats. Each image I_k includes, for a particular steering angle, all of the RF/image data 305 from all of the transducer elements 105a-z of the transducer array 105. Initially, the first spatially compounded image is generated based on images I_1 through I_N. When the system returns again to steering angle 1, the new image I_1' replaces the first image I_1. Viewed over time, the "window" of spatial compounding changes from using images I_1 through I_N to using images I_2 through I_N and I_1'. Those skilled in the art will recognize this approach as a so-called "sliding window" technique. In the next iteration, the system generates a new image I_2' at steering angle 2, and the new image I_2' replaces the first-generated image I_2. Viewed over time, the window of spatial compounding then changes from using images I_2 through I_N and I_1' to using images I_3 through I_N, I_1', and I_2' (I_1' and I_2' replace I_1 and I_2, respectively, so that the same number N of steering angles is always used in a compounding cycle for the compounding target). Thus, as the steering angle of the transducer unit 110 changes and sweeps through angles 1-N, the compounding and display operations of imaging system 200 continuously compound and update the display of the compounded image by removing the prior image having a given steering angle and including the new image having the same steering angle. In this way, the initial latency for generating and displaying the first spatially compounded image Res1 is N beam forming periods, but thereafter the latency for generating and displaying another spatially compounded image Res2 is shortened to one beam forming period, thereby increasing the display frame rate of the ultrasound imaging. Based on the foregoing systems, methods, and devices, by using the imaging and ultrasound devices and methods disclosed and illustrated herein, clearer and more complete spatially compounded images are achieved for a target region that is in motion. The spatial compounding operation of Fig. 5 is exemplary, and variations are contemplated to be within the scope of this disclosure. For example, spatial compounding need not be performed each time a new ultrasound image becomes available; spatial compounding may instead be performed for every two new ultrasound images, or at another frequency or interval. Similarly, motion information need not be generated each time a new ultrasound image becomes available, and may be generated at another frequency or interval.
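The sliding-window scheme described above can be sketched with a fixed-length queue of the N most recent steered images: after the initial N-frame latency, each new image replaces the oldest image at its steering angle and a fresh composite is produced every frame. Uniform averaging stands in for the motion-adaptive weight tables of Fig. 4, and all names are illustrative.

```python
from collections import deque
import numpy as np

def sliding_window_compound(frames, n_angles):
    """Yield a composite after each new steered image once N images are
    available (the "sliding window"): the newest image at a steering
    angle displaces the oldest image in the window. A plain mean stands
    in for the motion-weighted compounding of step 425."""
    window = deque(maxlen=n_angles)   # holds the N most recent images
    for frame in frames:
        window.append(frame)          # oldest image drops out at maxlen
        if len(window) == n_angles:
            yield np.mean(window, axis=0)
```

With N = 3 and four incoming frames, the first composite appears only after three frames, and every subsequent frame yields a new composite, which mirrors the latency behavior described above.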

A computer or computing device may be embodied in one or more ultrasound imaging systems, or in one or more electronic devices or servers, with the ultrasound imaging system operating to run one or more processors. It is therefore to be understood that the disclosure is not limited to the particular forms illustrated, and that the appended claims are intended to cover all alternatives, modifications, and variations that do not depart from the spirit and scope of the embodiments described herein. Particular embodiments of devices, of systems comprising such devices, and of methods of using such devices are described herein. These detailed embodiments, however, are merely examples of the disclosure, which may be embodied in various forms. Therefore, the specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to employ the disclosure in various ways with any appropriate specific structure.

The detailed description is provided with reference to the accompanying drawings. Those skilled in the art will recognize that these descriptions are illustrative and are not limiting in any way. Other embodiments of the disclosure will be understood by those skilled in the art having the benefit of this disclosure to be within the scope of the disclosed technology. While several embodiments of the disclosure have been shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as an illustration of particular embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the appended claims.
