System and method for adaptively configuring dynamic range for ultrasound image display
Reading note: This technology, "System and method for adaptively configuring dynamic range for ultrasound image display", was created on 2018-12-21 by D. W. Clark, F. G. G. M. Vignon, D. P. Adams, R. A. 西夫莱, and K. Radhakrishnan. Its main content is as follows: An ultrasound imaging system according to the present disclosure may include an ultrasound probe, a display unit, and a processor configured to: receive source image data having a first dynamic range, wherein the source image data comprises logarithmically compressed echo intensity values based on ultrasound echoes detected by the ultrasound probe; generate a histogram of at least part of the source image data; generate a cumulative density function for the histogram; receive an indication of at least two points on the Cumulative Density Function (CDF); and cause the display unit to display an ultrasound image representing the source image data displayed according to a second dynamic range.
1. An ultrasound imaging system comprising:
an ultrasound probe operable to detect ultrasound echoes;
a display unit operable to display an ultrasound image based on the ultrasound echoes; and
a processor communicatively coupled to the ultrasound probe and the display unit and configured to:
receive source image data having a first dynamic range, wherein the source image data comprises logarithmically compressed echo intensity values based on the ultrasound echoes detected by the ultrasound probe;
generate a histogram of at least a portion of the source image data;
generate a cumulative density function for the histogram;
receive indications of at least two points on the Cumulative Density Function (CDF);
define a second dynamic range that is less than the first dynamic range based on the at least two points; and
cause the display unit to display an ultrasound image representing the source image data displayed according to the second dynamic range.
2. The ultrasound imaging system of claim 1, wherein the processor is configured to: receive an indication of two points on the CDF, and define, based on the two points, a linear mapping function for mapping a portion of the first dynamic range to the second dynamic range.
3. The ultrasound imaging system of claim 2, wherein the processor is configured to map 16-bit source image data to 8-bit image data for display using the linear mapping function.
4. The ultrasound imaging system of claim 1, wherein the processor is configured to: receive an indication of a first point on the CDF corresponding to a desired percentage of black pixels, and receive an indication of a second point on the CDF corresponding to a desired percentage of pixels having pixel values at or below mid-gray.
5. The ultrasound imaging system of claim 4, wherein the processor is configured to define the second dynamic range by setting a minimum value of the second dynamic range at a pixel value corresponding to a data value of the first point on an x-axis of the CDF, wherein the processor is configured to: determine a data value on the x-axis of the CDF corresponding to the second point to define a span of input values, and further define the second dynamic range by setting a maximum value of the second dynamic range at a pixel value corresponding to twice the span of input values.
6. The ultrasound imaging system of claim 4, further comprising a memory storing the desired percentage of black pixels and the desired percentage of pixels having pixel values at or below mid-gray.
7. The ultrasound imaging system of claim 6, wherein the processor is configured to automatically define a second dynamic range for each of a plurality of temporally successive ultrasound images based on the desired percentage of black pixels and the desired percentage of pixels having pixel values at or below mid-gray stored in memory.
8. The ultrasound imaging system of claim 6, wherein the memory stores pairs of values for the desired percentage of black pixels and the desired percentage of pixels having pixel values at or below mid-gray, each pair associated with a particular clinical application or a standard view associated with the particular clinical application.
9. The ultrasound imaging system of claim 6, further comprising one or more user controls configured to adjust the desired percentage of black pixels and the desired percentage of pixels having pixel values at or below mid-gray.
10. The ultrasound imaging system of claim 1, wherein the processor is further configured to:
divide at least part of the source image data into a plurality of depth zones;
generate a histogram and a corresponding CDF for each depth zone; and
define a second dynamic range associated with the depth associated with each depth zone.
11. The ultrasound imaging system of claim 10, wherein the processor is configured to: define a plurality of depth-related second dynamic ranges using the CDFs, and interpolate between the minimum values associated with each of the plurality of depth-related second dynamic ranges and between the maximum values associated with each of the plurality of depth-related second dynamic ranges to derive additional depth-related second dynamic ranges.
12. The ultrasound imaging system of claim 1, wherein the processor is further configured to apply a time gain compensation to the logarithmically compressed echo intensity values to produce the source image data.
13. The ultrasound imaging system of claim 1, wherein the processor is further configured to spatially filter, temporally filter, or spatio-temporally filter the source image data prior to generating the histogram.
14. A method of configuring an ultrasound imaging system for display, the method comprising:
receiving source image data having a first dynamic range, wherein the source image data comprises logarithmically compressed echo intensity values based on echo signals received by an ultrasound probe;
generating a histogram of at least a portion of the source image data;
generating a cumulative density function for the histogram;
receiving indications of at least two points on the Cumulative Density Function (CDF);
defining a second dynamic range that is less than the first dynamic range based on the at least two points; and
generating for display an ultrasound image representing the source image data, wherein the ultrasound image is configured to display the source image data according to the second dynamic range.
15. The method of claim 14, wherein receiving an indication of at least two points and defining a second dynamic range comprises: receiving an indication of a first point and a second point, and defining, based on the first point and the second point, a linear mapping function for mapping a portion of the first dynamic range to the second dynamic range.
16. The method of claim 15, wherein mapping the portion of the first dynamic range to the second dynamic range comprises mapping 16-bit source image data to 8-bit image data for display.
17. The method of claim 14, wherein a first one of the at least two points corresponds to a percentage of pixels in the source image data to be assigned pixel values in black, and wherein a second one of the at least two points corresponds to a percentage of pixels in the source image data to be assigned pixel values at and below a middle gray pixel value.
18. The method of claim 14, wherein receiving indications of at least two points comprises: retrieving the at least two points from a memory of the ultrasound imaging system, wherein a first of the at least two points comprises a desired percentage of black pixel values and a second of the at least two points comprises a desired percentage of gray pixel values at or below mid-gray.
19. The method of claim 14, further comprising applying a time gain compensation to the logarithmically compressed echo intensity values to produce the source image data.
20. The method of claim 14, further comprising passing the source image data through a spatial filter, a temporal filter, or a spatio-temporal filter prior to generating the histogram.
21. The method of claim 14, further comprising:
dividing the source image data into a plurality of depth zones;
generating a histogram and a Cumulative Density Function (CDF) of the source image data associated with each of the plurality of depth zones; and
defining a depth-dependent reduced dynamic range for each of the plurality of depth zones.
22. A non-transitory computer-readable medium comprising processor-executable instructions that, when executed by one or more processors of a medical imaging system, cause the one or more processors to perform the method of any of claims 14 to 21.
Technical Field
The present disclosure relates generally to ultrasound systems and methods for adjusting the dynamic range for the display of ultrasound images (e.g., for B-mode and M-mode imaging).
Background
In ultrasound B-mode (2D or 3D) or M-mode imaging, the echo intensities are typically logarithmically compressed for display. This produces a visual image in which echo intensity ratios are expressed as luminance differences, and in which gain adjustments and the speckle and noise variance are independent of echo amplitude. In general, the displayed dynamic range is less than the full dynamic range of the echo signal, even after Time Gain Compensation (TGC). Pixels within the displayed dynamic range are typically shown as shades of gray, while pixels outside the displayed dynamic range are shown as saturated white or black. For aesthetic images, a significant portion of the pixels are typically black or very dark, especially in cardiac or obstetrical applications where a large portion of the image represents fluid. Usually a very small fraction of the pixels (sometimes none) are saturated white. The displayed dynamic range is obtained by applying an offset ("gain"), a scaling ("compression"), and clipping to the logarithmic intensity data. The log offset is typically depth-dependent (TGC) and is controlled either manually by the user or non-adaptively by existing systems. System noise is generally consistent and predictable, and can therefore be controlled automatically by algorithms in the system. Logarithmic scaling is typically controlled manually by the user, and automatic setting of the gain (log offset) by the system generally works well where noise is the dominant undesired component, such as at deep depths. However, in many cases (particularly at shallow depths), clutter from reverberation or side lobes can be the main unwanted component, and this is highly dependent on the patient and the view. The intensity of the tissue echo is also highly dependent on the patient and the view.
As a result, designers and manufacturers of ultrasound systems continue to seek improvements in such devices and systems, and in particular in algorithms for configuring the displayed dynamic range.
Disclosure of Invention
The present disclosure relates generally to ultrasound systems and methods for configuring dynamic range for display of ultrasound images (e.g., for B-mode and M-mode imaging). According to examples herein, adaptive techniques are described for mapping a full dynamic range to a displayed dynamic range.
An ultrasound imaging system according to the present disclosure may include: an ultrasound probe operable to detect ultrasound echoes; a display unit operable to display an ultrasound image based on the ultrasound echoes; and a processor communicatively coupled to the ultrasound probe and the display. The processor may be configured to: receive source image data having a first dynamic range, wherein the source image data comprises logarithmically compressed echo intensity values based on the ultrasound echoes detected by the ultrasound probe, generate a histogram of at least part of the source image data, generate a cumulative density function for the histogram, receive indications of at least two points on the Cumulative Density Function (CDF), and cause the display unit to display an ultrasound image representing the source image data displayed according to a second dynamic range.
In some embodiments, the processor may be configured to receive an indication of only two points and define a linear mapping function based on the two points. The linear mapping function may be used to map a portion of the first dynamic range to the second dynamic range. For example, the processor may be configured to derive a mapping function (e.g., a linear mapping function based on CDFs associated with one or more incoming ultrasound images) for mapping 16-bit source image data to 8-bit image data for display.
In some embodiments, the processor may be configured to: receive an indication of a first point on the CDF corresponding to a desired percentage of black pixels, and receive an indication of a second point on the CDF corresponding to a desired percentage of pixels having pixel values at or below mid-gray. A low clipping value and a high clipping value for the portion of the first dynamic range that maps to the second dynamic range may be derived from the two points. For example, the low clipping value, and thus the minimum value, of the second dynamic range may be based on the first point (e.g., set at the pixel value corresponding to the desired percentage of black pixels), and the high clipping value, and thus the maximum value, of the second dynamic range may be derived from the second point (e.g., if a mid-gray percentage is specified, the high clipping value may be defined by doubling the pixel value corresponding to the mid-gray percentage).
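By way of a non-limiting illustration, the derivation of the low and high clipping values from two points on the CDF may be sketched as follows; the function name, bin count, and interpolation approach are illustrative assumptions, not details of the disclosure:

```python
import numpy as np

def derive_clip_points(source, black_pct, midgray_pct, n_bins=256):
    # Histogram the log-compressed source data and form a normalized CDF.
    hist, edges = np.histogram(source, bins=n_bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                      # CDF rises monotonically toward 1
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Invert the CDF at the two requested percentiles (y-axis -> x-axis).
    x_black = np.interp(black_pct, cdf, centers)   # low clip / minimum value
    x_gray = np.interp(midgray_pct, cdf, centers)  # mid-gray input value
    # High clip: low clip plus twice the span between the two points.
    x_high = x_black + 2.0 * (x_gray - x_black)
    return x_black, x_high
```

For uniformly distributed 16-bit data, requesting 25% black pixels and 50% at or below mid-gray yields clip points near one quarter and three quarters of the input range.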
In some embodiments, the ultrasound system may include non-volatile memory that stores one or more of the inputs to the histogram-CDF process. For example, the memory may store the desired percentage of black pixels and the desired percentage of pixels having pixel values at or below mid-gray, or any other desired percentage of pixels of certain pixel values.
In some embodiments, the processor may be configured to automatically define a second or reduced dynamic range for each of a plurality of temporally successive (in some cases, not strictly successive) ultrasound images based on the same values for the at least two points (e.g., based on the same values for the desired percentage of black pixels and the desired percentage of pixels having pixel values at or below mid-gray) stored in the memory. Thus, while the desired pixel percentiles may not change between different images (e.g., in a given imaging application or for a given view), the display may be dynamically or adaptively adjusted based on the particular distribution of pixel values in each image.
In some embodiments, the memory (e.g., of the ultrasound system) may store pairs of values for the desired percentage of black pixels and the desired percentage of pixels having pixel values at or below mid-gray, each pair associated with a particular clinical application or a standard view associated with the particular clinical application. In some embodiments, the system may further include one or more user controls (e.g., in the form of mechanical or soft controls, such as slider, knob, or touch screen controls) configured to adjust the desired percentage of black pixels and the desired percentage of pixels having pixel values at or below mid-gray.
In some embodiments, the processor may be configured to derive a depth-dependent mapping function for mapping image data associated with any one of a plurality of depth zones to a reduced dynamic range. To perform depth-dependent analysis, the processor may divide the source image data into sets of samples, each associated with a given depth of tissue. These sample sets (also referred to as transverse (or spatially transversely correlated) sample sets) may lie along straight or arcuate lines depending on the physical properties of the transducer (e.g., the geometry of the array, such as a sector phased array, a curved array, a linear array, etc.). In some embodiments, the processor may be configured to: partition at least a portion of the source image data into a plurality of depth zones (e.g., sets of lateral samples at particular depths), generate a histogram and a corresponding CDF for each depth zone, and define a second dynamic range associated with the depth associated with each depth zone. In some embodiments, such as in the case of a sector phased array or a curved array, the set of transverse (or spatially correlated) samples at a particular depth may include echo intensity data along one or more adjacent circular arcs, or a portion thereof. In some examples, for example in the case of a linear (non-phased) array, the set of transverse samples may include pixel data along a given pixel line or multiple rows of pixel lines, or a portion thereof. In some examples, the depth-related analysis may be performed on a pixel line basis, regardless of the geometry of the source data.
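A minimal sketch of the per-depth-zone histogram/CDF analysis, assuming the source data is arranged as a (depth x lateral) array and that equal-height contiguous zones suffice (the zone count and bin count are illustrative assumptions):

```python
import numpy as np

def per_zone_cdfs(source, n_zones=4, n_bins=256):
    # Rows are depth samples; split the image into contiguous depth zones.
    zones = np.array_split(source, n_zones, axis=0)
    results = []
    for zone in zones:
        hist, edges = np.histogram(zone, bins=n_bins)
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]                  # normalize so each CDF ends at 1
        results.append((cdf, edges))
    return results
```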
In some embodiments, the processor may be configured to: define a plurality of depth-related second dynamic ranges using a CDF-based process, and interpolate between the minimum values associated with each of the plurality of depth-related second dynamic ranges and between the maximum values associated with each of the plurality of depth-related second dynamic ranges to derive additional depth-related second dynamic ranges. In some embodiments, the processor may be further configured to apply a time gain compensation to the logarithmically compressed echo intensity values (e.g., prior to histogramming) to produce the source image data. In some embodiments, the processor may be further configured to spatially filter, temporally filter, or spatio-temporally filter the source image data prior to generating the histogram.
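The interpolation between per-zone minima and maxima described above may be sketched as follows (linear interpolation and the argument names are assumptions):

```python
import numpy as np

def interpolate_clip_values(zone_depths, zone_mins, zone_maxs, n_depths):
    # Interpolate the per-zone clip values to every depth sample so each
    # image row receives its own depth-related second dynamic range.
    depths = np.arange(n_depths)
    mins = np.interp(depths, zone_depths, zone_mins)
    maxs = np.interp(depths, zone_depths, zone_maxs)
    return mins, maxs
```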
Methods according to some examples herein may include: receiving source image data having a first dynamic range, and generating a histogram of at least a portion of the source image data. The source image data may include logarithmically compressed echo intensity values (i.e., echo intensity values after log compression) generated in response to echo signals received by the ultrasound probe. The method may further comprise: generating a cumulative density function for the histogram, receiving an indication of at least two points on the Cumulative Density Function (CDF), defining a second dynamic range, smaller than the first dynamic range, based on the at least two points, and generating for display an ultrasound image representing the source image data, wherein the ultrasound image is configured to display the source image data according to the second dynamic range.
In some examples, receiving an indication of at least two points and defining a second dynamic range may include: receiving an indication of a first point and a second point, and defining, based on the first point and the second point, a linear mapping function for mapping a portion of the first dynamic range (e.g., a portion of the pixel values associated with the first dynamic range) to the second dynamic range (e.g., the pixel values associated with the second dynamic range). In some examples, the mapping may involve mapping 16-bit source image data to 8-bit image data. In some examples, a first of the at least two points may correspond to a percentage of pixels in the source image data to be assigned black pixel values, and a second of the at least two points may correspond to a percentage of pixels in the source image data to be assigned pixel values at and below a mid-gray pixel value.
In some examples, the method may include: dividing the source image data into a plurality of depth zones, generating a histogram and a Cumulative Density Function (CDF) of the source image data associated with each of the plurality of depth zones, and defining a depth-related reduced dynamic range for each of the plurality of depth zones. In some examples, each zone of the plurality of depth zones may correspond to a set of lateral samples at a given depth. In some embodiments, the lateral set may lie along an arcuate line (or arc) or along a line or row of pixels of the ultrasound image. In other examples, each zone may correspond to a set of lateral samples along a plurality of axially adjacent lines (arcuate or straight) of pixel data.
In some examples, the at least two points may be retrieved from a memory of the ultrasound imaging system. In some examples, the at least two points may be retrieved from the plurality of pairs of input points based on an imaging application (e.g., heart, breast, obstetrics, etc.) or based on image data associated with a particular view (e.g., standard cardiac view). In some examples, the method may further include applying a time gain compensation to the logarithmically compressed echo intensity values to produce the source image data. In some examples, the method may further include filtering the source image data, for example, using a spatial filter, a temporal filter, or a spatiotemporal filter, prior to generating the histogram.
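Retrieval of stored point pairs per clinical application might look like the following sketch; the application names and percentage values here are invented placeholders, not values from the disclosure:

```python
# Hypothetical preset table keyed by clinical application.
PRESET_POINTS = {
    "cardiac": {"black_pct": 0.40, "midgray_pct": 0.70},
    "obstetric": {"black_pct": 0.35, "midgray_pct": 0.65},
    "breast": {"black_pct": 0.20, "midgray_pct": 0.55},
}

def lookup_points(application):
    # Fall back to a generic default when no preset pair is stored.
    default = {"black_pct": 0.30, "midgray_pct": 0.60}
    return PRESET_POINTS.get(application, default)
```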
The method according to any of the examples disclosed herein may be embodied in a computer-readable medium comprising processor-executable instructions that, when executed by a system (e.g., a system configured to display and/or acquire medical images), may cause the system to perform a process embodied in the computer-readable medium.
Features from any of the disclosed embodiments may be used in combination with each other without any limitation. Furthermore, other features and advantages of the present disclosure will become apparent to those of ordinary skill in the art upon consideration of the following detailed description and the accompanying drawings.
Drawings
Fig. 1A shows a histogram of logarithmically compressed data for full dynamic range (e.g., 16-bit) image data.
FIG. 1B shows the histogram of the log-compressed image data of FIG. 1A but at a reduced dynamic range (e.g., 8 bits).
Fig. 2 shows a block diagram of a process for adjusting the dynamic range for the display of medical image data.
Fig. 3 illustrates a block diagram of an ultrasound imaging system in accordance with the principles of the present disclosure.
Fig. 4 shows a diagram of a process involving applying a treatment to an input (source data with full DR) to obtain an output (image data for display with adjusted DR) in accordance with the principles of the present disclosure.
Fig. 5 shows a diagram of a process for determining a treatment to be applied to full DR image data.
Fig. 6 shows an example of linear mapping for 16-bit image data that can be mapped to 8-bit image data.
Fig. 7A, 7B, and 7C illustrate examples of user controls for adjusting the dynamic range for display in accordance with the principles of the present disclosure.
Detailed Description
The following description of certain exemplary embodiments is merely exemplary in nature and is in no way intended to limit the disclosure, its application, or uses. In the following detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the present system. Furthermore, for the purpose of clarity, when particular features will be apparent to those skilled in the art, a detailed description of such features will not be discussed so as not to obscure the description of the present system. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present system is defined only by the claims.
As shown in fig. 1A and 1B, the displayed dynamic range 120 is less than the full dynamic range 110 of the echo signal. As shown in fig. 1B, pixels within the displayed dynamic range are typically shown as shades of gray (i.e., pixel values between 0 and 255), while pixels outside the displayed dynamic range are shown as saturated white (i.e., pixel values of 255) or black (i.e., pixel values of 0). For aesthetic images, a significant portion of the pixels are typically black or very dark, especially in cardiac or obstetrical applications where a large portion of the image represents fluid. Usually a very small fraction of the pixels (sometimes none) are saturated white.
The displayed dynamic range is defined by applying an offset 112 (e.g., in response to input via a "gain" knob) and a scaling 114 (in response to a "compression" input) to crop the full dynamic range of the log intensity data. As shown in fig. 1A and 1B, a 16-bit "full dynamic range" image may be mapped to an 8-bit image for display according to specified offset (gain) and scaling (compression) settings. The gain and compression inputs are typically independent of the Time Gain Compensation (TGC) controls (typically provided in the form of 8 sliders for depth-dependent gain or contrast adjustment). However, even with these controls on existing ultrasound systems, images may be displayed sub-optimally, and/or more manual adjustment by the sonographer may be required to obtain an optimal display than is desirable for an efficient workflow.
Fig. 2 illustrates a block diagram of a process for adaptively adjusting the Dynamic Range (DR) for display of ultrasound images in accordance with the principles of the present disclosure. The process begins by receiving full dynamic range image data (block 202), in this example, 16-bit image data. Although the example in fig. 2 is described with reference to 16-bit image data as the source (raw) image data and 8-bit image data as the output image data, it will be understood that these bit sizes are arbitrary and are provided for illustration only. The techniques can be equally applicable to inputs and outputs of any size, for example 12-bit, 32-bit, 36-bit, or any other integer-based or non-integer-based (i.e., floating point) inputs and outputs. It will also be understood that while the examples illustrate a reduction in bit size between the input and output, this too is provided for illustration only. In other examples, the source and output data may both be of the same size (e.g., 16-bit or other sized input/output); the treatment applied to adjust the dynamic range of the image data according to examples herein is not intended solely, or even primarily, to change the word length of the input and output, but is generally intended to improve image quality, e.g., to remove unnecessary data (or clutter) from the source image data before the image data is presented on the display.
As described herein, a treatment 201 may be applied to the source image data (e.g., as shown by arrow 203) to reduce undesirable or unnecessary information (such as clutter) in the image. The determination of the treatment 201 to be applied to the source image data involves: generating one or more histograms based on the source image data, calculating a Cumulative Density Function (CDF) for each histogram, and defining minimum and maximum values for a reduced dynamic range, also referred to herein as the dynamic range for display (displayed DR), based on two or more points on the CDF. In the example in fig. 2, the source image data (in block 202) is divided into depth zones (as shown in block 208), and a histogram and corresponding CDF are generated for each depth zone (as shown in block 204). However, it will be understood that the processes described herein may be performed on the entire source image data without dividing it into multiple depth zones; i.e., the treatment 201 may be derived using a single depth zone corresponding to the complete source image data set. It will also be understood that the source image data need not be the complete image data set required to produce a complete ultrasound image, but may be a portion thereof, such as a portion associated with a region of interest in an image, a portion around the center of an image, a portion along one or more scan lines of an image, or any other portion. Once the minimum and maximum values of the displayed dynamic range have been defined based on the CDF, the treatment is applied to the source image data to crop the full dynamic range and produce output image data having the DR for display, e.g., as shown in block 220.
As further shown in fig. 2, process 200 may involve several optional steps. For example, the source image data (at block 202) may initially be spatially filtered, temporally filtered, or spatio-temporally filtered (as shown in block 206) to blur or smooth the image. For example, the source image data may be spatially low-pass filtered prior to histogramming to reduce intensity variance caused by speckle and noise (e.g., applying histogram blurring). As shown in the block diagram in fig. 2, any such spatial and/or temporal smoothing may be applied only to source data along an analysis path 205 (e.g., the signal path associated with defining the treatment 201 to be applied). Such filtering may not be applied to the source image data along the image generation path 207 and thus may not affect (e.g., blur) the image that is ultimately displayed. That is, once a reduced dynamic range (e.g., DR for display, or displayed DR) has been defined at the output of signal path 205, the treatment 201 (e.g., reduction of dynamic range) may be applied to the unfiltered (pre-filter) source image data supplied via signal path 207. In some embodiments, histograms for two or more temporally related (not necessarily successive) image frames may be temporally averaged to reduce temporal variation prior to calculating the CDF; such temporal averaging may reduce cardiac cycle fluctuations, for example in the case of cardiac imaging.
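The spatial low-pass filtering applied on the analysis path before histogramming may be illustrated with a simple box blur; the kernel size and edge handling are assumptions, and a practical implementation might use a different smoothing kernel:

```python
import numpy as np

def box_blur(img, k=3):
    # Mean filter over a k x k neighborhood with edge-replicated padding;
    # applied only to the analysis copy of the data, not the displayed image.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```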
As further shown in fig. 2, the source image data may optionally be divided into depth bands, e.g., 2, 3, 4, 6, 8, 10, 20, or any other number of depth bands, as shown in block 210. A histogram may be generated for each depth band and a corresponding CDF may be calculated for each of the histograms associated with each depth band, as shown in block 204. The CDF may be calculated using any known technique, for example, by integrating and normalizing each histogram to obtain a CDF that monotonically increases from 0 to 1. A monotonic function (such as the CDF) is obtained from the corresponding histogram to enable an inverse mapping of the variables; i.e., the selection of two points along the y-axis yields two points on the x-axis. In other examples, invertible functions other than the CDF may be used. In this example, the input or selection of two desired percentage values (e.g., full black and mid-gray) enables the algorithm to determine the minimum and maximum values for the reduced dynamic range. For example, if full black and mid-gray percentages are specified, the algorithm maps the full black percentage from the y-axis to a low clipping point (or minimum pixel value, i.e., the pixel value associated with fully saturated black) on the x-axis, and further maps the mid-gray percentage from the y-axis to a corresponding midpoint pixel value. The high clipping point, or maximum pixel value of the reduced dynamic range, can then be defined as twice the midpoint pixel value. For example, in the case of mapping 16-bit image data (denoted D16) to 8-bit image data (denoted D8), where the desired percentiles for full black and mid-gray are denoted B and G, respectively, an example adaptive linear mapping of input image data to output image data can be defined in terms of D16, B, and G.
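The equation referenced in this paragraph did not survive the translation. A reconstruction consistent with the surrounding definitions (low clip at the inverse CDF of B, high clip at the low clip plus twice the span to the inverse CDF of G, which reduces to "twice the midpoint pixel value" when the low clip is zero) would be, as a hedged reconstruction rather than the original text:

```latex
x_B = \mathrm{CDF}^{-1}(B), \qquad x_G = \mathrm{CDF}^{-1}(G)

D8 = \min\!\left(255,\; \max\!\left(0,\; \operatorname{round}\!\left(\frac{255\,(D16 - x_B)}{2\,(x_G - x_B)}\right)\right)\right)
```

Under this form, an input value of $x_B$ maps to fully saturated black (0), an input of $x_G$ maps to mid-gray (about 128), and inputs at or above $x_B + 2(x_G - x_B)$ saturate at white (255).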
Different pairs of input points can be used for the linear mapping according to other examples, or, in still other examples, more than two points may be used for a non-linear or piecewise-linear mapping. As will be appreciated, in the case of a linear mapping function, two input points on the CDF can be used to derive conventional log offset (high clipping point) and scale (low clipping point) values, but the process described herein is dynamic or adaptive in that different offset and scale values can be applied to different images by virtue of differences in the cumulative distribution of pixel values between the images. In other words, instead of using the same log offset and scale values, and thus the same range of pixel values, for the displayed DR of each incoming image (unless manually adjusted by the user), as is the case in conventional systems, a system according to the present disclosure may hold the same percentages of certain pixel values constant across different images; the resulting displayed DR may then differ from image to image, as may the distribution of pixel values in any given image.
As described, a histogram and corresponding CDF may be generated for each of a plurality of depth bands, e.g., by analyzing or histogramming all pixels in a given depth band, and log offset and scale values for each depth in an image may then be computationally obtained (e.g., by interpolation between the analyzed samples). In other examples, the samples at each pixel line (whether straight or curved) may be analyzed independently, e.g., by generating a histogram and CDF at each depth.
As further shown in fig. 2 and as described, the process 200 receives as input at least two points on the CDF (e.g., see block 222). In some examples, the two points may be a desired percentage of full black pixels and a desired percentage of mid-gray pixels. That is, a first point may be indicated on the CDF to select the percentage of pixels on the output image that should be assigned a "black" pixel value, and a second point may be indicated on the CDF to select the percentage of pixels that should be assigned pixel values at or below mid-gray. The two inputs (e.g., full black and medium gray levels or percentages) may be used to derive the minimum and maximum values of the reduced DR. For example, the two inputs (e.g., two points indicated on the CDF) may define a linear mapping function for mapping a percentage of pixel values of the source image to pixel values to be included in the reduced DR, as further described with reference to fig. 5. In some examples, the two points indicated on the CDF may be other than the full black and medium gray percentages; e.g., they may correspond to two gray-level percentages, full black and full white percentages, medium gray and full white, or either the full black or full white input together with a gray-value input at some intermediate position between the two full saturation levels (black and white). In other examples, more than two points may be specified, and a non-linear or piecewise-linear mapping may be used to define the minimum and maximum values of the corresponding histogram.
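A minimal sketch of this inverse-CDF lookup, assuming full black and mid-gray percentages as the two inputs (the function and variable names are illustrative, not from the disclosure):

```python
import numpy as np

def clip_points_from_cdf(hist, black_pct, gray_pct):
    """Map two CDF percentages to low/mid/high pixel values.

    hist: histogram of the log-compressed source data (one bin per value).
    black_pct: fraction of pixels to render as fully saturated black.
    gray_pct: fraction of pixels to render at or below mid-gray.
    """
    cdf = np.cumsum(hist) / np.sum(hist)        # monotonic, 0..1
    low = int(np.searchsorted(cdf, black_pct))  # inverse-CDF: y-axis -> x-axis
    mid = int(np.searchsorted(cdf, gray_pct))
    high = low + 2 * (mid - low)                # double the black-to-mid span
    return low, mid, high
```

`low` and `high` are the minimum and maximum of the reduced DR; a linear map between them assigns `mid` to mid-gray.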
In some examples, the two points specified on the CDF may be converted back to conventional log offset and scale values (as shown in block 212), in this case for each depth band or, in the case of a single depth band, log offset and scale values to be applied to the DR of the complete source data set. The log offset and scale at each depth band may be interpolated (as shown in block 214) to define the log offset and scale at each depth of the source image data. The process 201 may then be applied to the source image data, as shown in block 218, to crop the full DR to the DR for display and to produce output image data for display (block 220). As mentioned, the partitioning of the source image data into depth bands is optional; in some examples, a reduced DR may be defined by operating on the complete source image data set, and interpolation may optionally be used to define log offsets and scales for depths other than those defined based on a single histogram and CDF. In other examples, the process may be performed at each depth of the image data, thus omitting the steps at blocks 212 and 214, although this technique may be more computationally intensive than the examples in which a smaller number of depth bands is used for the histograms.
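One way to sketch the per-depth-band computation and the interpolation of blocks 212/214, assuming a 2D image whose rows correspond to depth samples (all names are illustrative):

```python
import numpy as np

def per_depth_clip_points(image, n_bands=4, black_pct=0.4, gray_pct=0.85):
    """Compute low/high clipping points per depth band, then interpolate
    them to every depth row of the image (illustrative sketch)."""
    bands = np.array_split(np.arange(image.shape[0]), n_bands)
    centers, lows, highs = [], [], []
    for rows in bands:
        hist, edges = np.histogram(image[rows], bins=256)
        cdf = np.cumsum(hist) / hist.sum()
        low = edges[np.searchsorted(cdf, black_pct)]
        mid = edges[np.searchsorted(cdf, gray_pct)]
        centers.append(rows.mean())      # band center depth
        lows.append(low)
        highs.append(low + 2 * (mid - low))
    depths = np.arange(image.shape[0])
    # interpolate band values to every depth (analogue of blocks 212/214)
    return np.interp(depths, centers, lows), np.interp(depths, centers, highs)
```

Each depth row then gets its own low/high clipping pair, so the displayed DR can vary smoothly with depth.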
The dynamic range adjustment method, for example as described with reference to fig. 1, may be incorporated in an ultrasound system to provide adaptive DR adjustment for the display of ultrasound images. In some examples, the ultrasound system may be an imaging system, e.g., one including hardware and/or software components for ultrasound image data acquisition. In other embodiments, the ultrasound system may be an analysis workstation, such as a post-acquisition review workstation, that includes hardware and/or software components for the display and/or analysis of ultrasound images. The examples herein are equally applicable to any system configured to acquire and/or display medical images of any imaging modality (e.g., ultrasound, CT, MRI, etc.).
Fig. 3 illustrates a block diagram of an ultrasound imaging system constructed in accordance with the principles of the present disclosure. The
As shown, the
The transmission of ultrasound pulses from the
Another function that may be controlled by the transmit
The signal processor 32 can process the received echo signals in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation.
The
The output (e.g., images) from the scan converter 930, the multi-plane reformatter 932 and/or the volume renderer 934 may be coupled to an image processor 936 for further enhancement, buffering and temporary storage before being displayed on an image display 938. In some embodiments, for example, when performing image fusion of 2D real-time ultrasound data with preoperative image data, the system may include or be communicatively coupled to preoperative data source 968. The 2D images from the scan converter 930 may first pass through a registration and fusion processor 964, which may be configured to correct for motion-induced misalignments in real-time before fusing and sending the combined images downstream, e.g., to an image processor and/or graphics processor. The graphics processor 940 may generate a graphics overlay for display with the image. These graphic overlays can contain, for example, standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes, the graphics processor may be configured to receive input from the user interface 924, such as a typed patient name or other annotation. In some embodiments, the system 100 may be configured to receive user input via the user interface 924 in order to set parameters of the algorithms described herein. In some embodiments, one or more functions of at least one of a graphics processor, an image processor, a volume renderer, and a multi-plane reformatter may be combined into integrated image processing circuitry (the operation of which may be divided among multiple processors operating in parallel), rather than the specific functions described with reference to each of these components being performed by discrete processing units. 
Furthermore, although the processing of echo signals is discussed with reference to a B-mode processor and a Doppler processor, for example, for the purpose of generating a B-mode image or a Doppler image, it will be understood that the functions of these processors may be integrated into a single processor.
In some embodiments, the
The
Fig. 6 shows a process similar to that described with reference to fig. 5, but more specifically for the example of 16-bit image data being mapped to 8-bit image data. In fig. 6, the cumulative density function 602 generated for any given histogram associated with 16-bit source image data (or a portion or depth band thereof) is shown in the upper portion of the figure. The linear mapping function is defined based on two inputs (e.g., a desired percentage of black pixels, indicated by 604, and a desired percentage of pixels at or below mid-gray, indicated by 606, on the CDF). The values for the black and medium gray percentages are used to define the low clipping point (or minimum, indicated by 612) and the high clipping point (or maximum, indicated by 614) within the full DR, and the pixel values of the full DR between the low clipping point and the high clipping point are then mapped (e.g., equally distributed) to the pixel values of the reduced DR (e.g., to produce an 8-bit image). The lower part of fig. 6 illustrates this cropping of a full (16-bit) DR image in terms of the well-understood log offset and scale parameters. For the resulting clipped DR, pixels below the low clipping point of the full DR will all be mapped to black (i.e., assigned a value corresponding to a black pixel value, or 0 in this case), and pixels above the high clipping point of the full DR will all be mapped to white (i.e., assigned a value corresponding to a white pixel value, or 255 in this case), with the remaining pixel values distributed equally in between along the range of available values within the reduced DR (in this case, from 0 to 255).
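The clipping and rescaling described for fig. 6 can be sketched as follows (an illustrative fragment, not the disclosed implementation; values below the low clipping point saturate to 0 and values above the high clipping point saturate to 255):

```python
import numpy as np

def apply_reduced_dr(d16, low, high):
    """Clip 16-bit data to [low, high] and rescale to 8-bit for display.

    Pixels below `low` saturate to black (0), pixels above `high` to
    white (255), and the rest are distributed linearly over 0..255.
    """
    d16 = np.asarray(d16, dtype=np.float64)
    out = (d16 - low) / (high - low) * 255.0
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

For example, with a low clipping point of 1000 and a high clipping point of 9000, a source value of 5000 (the midpoint) maps to mid-gray.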
In some embodiments, the system may be configured to automatically apply the appropriate presets (e.g., the user may toggle an automatic dynamic range adjustment button to ON, and each image may be automatically enhanced based on the presets, with toggling the button to OFF turning off the functionality of the dynamic range controller). In some embodiments, additionally or alternatively, the system may be equipped with one or more user controls for providing input to the dynamic range controller. For example, the user interface may be configured to include one or more user controls (e.g., soft controls, such as controls implemented via a touch screen, or mechanical controls, such as knobs, dials, buttons, sliders, etc.) to enable a user to specify one or more of the at least two points on the CDF.
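The toggle-and-preset behavior might be glued together roughly as follows (a speculative sketch; the class and method names are invented for illustration and are not part of the disclosure):

```python
class DynamicRangeController:
    """Illustrative controller: presets feed the CDF-based mapping when ON."""

    def __init__(self, black_pct=0.40, gray_pct=0.85):
        self.enabled = False          # toggled via the ON/OFF user control
        self.black_pct = black_pct    # preset percentages; user sliders or
        self.gray_pct = gray_pct      # view-recognition presets may override

    def toggle(self, on):
        self.enabled = on

    def process(self, frame, adjust):
        """`adjust` is any (frame, black_pct, gray_pct) -> frame mapping."""
        if not self.enabled:
            return frame              # OFF: pass the image through unchanged
        return adjust(frame, self.black_pct, self.gray_pct)
```

User controls (knobs, sliders, touch input) would simply update `black_pct` and `gray_pct` before the next frame is processed.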
For example, as shown in fig. 7A-7C, the user interface may include one or more sliders for adjusting the desired full black and medium gray levels. Fig. 7A shows a graphical user interface on a display of the ultrasound system showing a
Referring back to fig. 5, a linear mapping function may be used to define the offset and scale for the displayed dynamic range, cropping the histogram of the logarithmically compressed image data (shown by curve 510) based on the selected points on the CDF. As shown, the percentage of pixels to be mapped to white may be defined by doubling the span of input data values (on the horizontal axis) corresponding to the mid-gray level, thus defining a high clipping point or offset. In one particular example, the inputs for black and medium gray may be 40% and 85%, which, when the span is doubled, may place approximately 98% of the pixels below full white. In other examples, different percentages for the black and medium gray values may be used.
Pixel values associated with the percentage of pixels in the full dynamic range that fall below the specified full black level will be mapped to black, thus defining a low clipping point or scale. The high clipping point and the low clipping point may also be interchangeably referred to as the maximum and minimum values of the dynamic range, which define the reduced DR (or the DR for display). This process of generating a histogram, calculating a CDF for the histogram, and defining the boundaries of the displayed DR based on the input points may be repeated at each of a plurality of depth bands, and interpolation may be used to derive a reduced DR for each depth associated with a given image. The reduced DR is then applied to the source image data to generate an ultrasound image for display such that the displayed image includes pixel values only within the reduced dynamic range.
The shape of the histogram may differ for any incoming image (i.e., reflect differences in the distribution of pixel values in any given image); thus, the mapping to a reduced DR based on two or more input points of a linear (or non-linear) mapping function (such as a desired percentage of black, medium gray, white, or any other pixel level) may adaptively adjust each image to provide a better display of the image data. Accordingly, in B-mode ultrasound imaging according to examples of the present disclosure, full dynamic range logarithmic data is histogrammed and the desired percentiles are mapped to certain points of the displayed gray scale range, such as full black or medium gray. As described, the data may be low-pass filtered spatially and/or temporally prior to histogramming to reduce variance due to speckle, noise, or heartbeat. The histograms and/or the desired percentiles may be a function of depth, for example where histograms are generated for multiple depth bands and corresponding desired percentiles (which may differ between depth bands) are applied to each depth band. Histogram-based adaptivity of gain and dynamic range may provide more consistency and robustness than conventional non-adaptive control. In some examples, after such histogram-based dynamic range adjustment, the percentiles of pixels at certain gray levels may be modified by downstream processing (such as scan conversion, adaptive spatial filtering, persistence, or gray maps). Alternatively, these processing steps may be applied upstream on the source image data, e.g., before the histogram-based dynamic range adjustment. As described, the desired pixel percentiles may be preset or programmed into the system, which may set values based on automatic view recognition (such as AP4, PLAX, PSAX, etc.), user input, or machine learning.
Additionally or alternatively, the desired percentile may be user selectable or adjustable (e.g., via one or more knobs, sliders, text entry, or other user controls), and the preprogrammed settings (e.g., desired percentile level) for a given system may be further refined over time based on machine learning.
In view of this disclosure, it is noted that the various methods and devices described herein may be implemented in hardware, software, and firmware. In addition, various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those skilled in the art can implement the present teachings in determining their own techniques and needed equipment to implement these techniques, while remaining within the scope of the present disclosure. The functionality of one or more of the processors described herein may be combined into a fewer number or single processing unit (e.g., CPU or GPU), or alternatively, they may be distributed among a larger number of processing units, and may use Application Specific Integrated Circuits (ASIC) or general purpose processing circuits programmed to perform the functions described herein in response to executable instructions. A computer program (e.g., executable instructions) may be stored/distributed on any suitable computer readable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
It will be appreciated that any of the examples, embodiments, or processes described herein can be combined with one or more other examples, embodiments, and/or processes or can be separated and/or performed in a separate device or device portion in accordance with the present systems, devices, and methods. Finally, the above-discussion is intended to be merely illustrative of the present system and should not be construed as limiting the claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments are contemplated by those skilled in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative manner and are not intended to limit the scope of the claims.