Photon interaction characteristics from subsets of pixels

Document serial number: 1814804  Publication date: 2021-11-09

Reading note: This technology, 来自像素子集的光子相互作用特性 (Photon interaction characteristics from subsets of pixels), was designed and created by 布雷恩·威廉·哈里斯 and 富田秀文 on 2021-05-06. Its main content includes: One embodiment provides a method comprising: receiving a photon interaction occurring within a photon detector pixel array, wherein the photon detector pixel array comprises a plurality of pixels; determining a photoelectron cloud generated by the photon interaction, wherein the photon detector pixel array comprises an electric field, wherein electrostatic repulsive forces disperse the photoelectrons into the photoelectron cloud; identifying a subset of the plurality of pixels associated with the photon interaction, wherein each pixel of the subset corresponds to a pixel activated by the photoelectron cloud, wherein the subset of the plurality of pixels comprises a center pixel and a plurality of neighboring pixels, wherein the center pixel comprises the pixel having the highest amplitude response to the photon interaction; and determining a characteristic of the photon interaction from the photoelectron cloud, wherein the characteristic comprises at least one of: the time, location, and energy of the interaction. Other aspects are described and claimed.

1. A method, comprising:

receiving a photon interaction occurring within a photon detector pixel array, wherein the photon detector pixel array comprises a plurality of pixels;

determining a photoelectron cloud generated by the photon interaction, wherein the photon detector pixel array comprises an electric field, wherein electrostatic repulsive forces disperse the photoelectrons into the photoelectron cloud;

identifying a subset of the plurality of pixels associated with the photon interaction, wherein each pixel of the subset corresponds to a pixel activated by the photoelectron cloud, wherein the subset of the plurality of pixels comprises a center pixel and a plurality of neighboring pixels, wherein the center pixel comprises the pixel having the highest amplitude response to the photon interaction; and

determining a characteristic of the photon interaction from the photoelectron cloud, wherein the characteristic comprises at least one of: time, location and energy of the interaction.

2. The method of claim 1, wherein the photon detector array comprises an anode and a cathode that generate the electric field, and wherein the electrostatic repulsion occurs within the photoelectron cloud.

3. The method of claim 1, wherein the center pixel receives a negative charge current induction.

4. The method of claim 1, wherein each of the plurality of neighboring pixels receives a positive charge current induction.

5. The method of claim 4, wherein the positive charge current induction is proportional to a portion of the photoelectron cloud.

6. The method of claim 1, wherein the characteristic comprises a location of the photon interaction, and wherein determining the location of the interaction comprises comparing pulse heights from at least two neighboring pixels.

7. The method of claim 1, wherein the characteristic comprises an intensity of the photon interaction, wherein the intensity is based on adding the responses of the center pixel and the plurality of neighboring pixels.

8. The method of claim 1, wherein the characteristic comprises a location of the photon interaction, and wherein determining the location of the interaction comprises comparing time delays from at least two neighboring pixels.

9. The method of claim 1, wherein the responses used to determine the characteristic do not include responses from a cathode of the photon detector pixel array.

10. The method of claim 1, wherein the photon detector pixel array comprises a pixelated semiconductor detector array comprising CdZnTe.

11. An apparatus, comprising:

a photon detector pixel array comprising a plurality of pixels;

a processor operatively coupled to the photon detector pixel array;

a storage device storing instructions for execution by the processor to:

receive a photon interaction occurring within the photon detector pixel array, wherein the photon detector pixel array comprises a plurality of pixels;

determine a photoelectron cloud generated by the photon interaction, wherein the photon detector pixel array comprises an electric field, wherein electrostatic repulsive forces disperse the photoelectrons into the photoelectron cloud;

identify a subset of the plurality of pixels associated with the photon interaction, wherein each pixel of the subset corresponds to a pixel activated by the photoelectron cloud, wherein the subset of the plurality of pixels comprises a center pixel and a plurality of neighboring pixels, wherein the center pixel comprises the pixel having the highest amplitude response to the photon interaction; and

determine a characteristic of the photon interaction from the photoelectron cloud, wherein the characteristic comprises at least one of: the time, location, and energy of the interaction.

12. The apparatus of claim 11, wherein the photon detector array comprises an anode and a cathode that generate the electric field, and wherein the electrostatic repulsion occurs within the photoelectron cloud.

13. The apparatus of claim 11, wherein the center pixel receives a negative charge current induction.

14. The apparatus of claim 11, wherein each of the plurality of neighboring pixels receives a positive charge current induction.

15. The apparatus of claim 14, wherein the positive charge current induction is proportional to a portion of the photoelectron cloud.

16. The apparatus of claim 11, wherein the characteristic comprises a location of the photon interaction, and wherein determining the location of the interaction comprises comparing pulse heights from at least two neighboring pixels.

17. The apparatus of claim 11, wherein the characteristic comprises an intensity of the photon interaction, wherein the intensity is based on adding the responses of the center pixel and the plurality of neighboring pixels.

18. The apparatus of claim 11, wherein the characteristic comprises a location of the photon interaction, and wherein determining the location of the interaction comprises comparing time delays from at least two neighboring pixels.

19. The apparatus of claim 11, wherein the responses used to determine the characteristic do not include responses from a cathode of the photon detector pixel array.

20. A product, comprising:

a storage device storing code, the code being executable by a processor and comprising:

code that receives photon interactions that occur within a photon detector pixel array, wherein the photon detector pixel array comprises a plurality of pixels;

code to determine a photoelectron cloud generated by the photon interaction, wherein the photon detector pixel array comprises an electric field, wherein electrostatic repulsive forces disperse the photoelectrons into the photoelectron cloud;

code that identifies a subset of the plurality of pixels associated with the photon interaction, wherein each pixel of the subset corresponds to a pixel that is activated by the photoelectron cloud, wherein the subset of the plurality of pixels comprises a center pixel and a plurality of neighboring pixels, wherein the center pixel comprises the pixel having the highest magnitude response to the photon interaction; and

code to determine a characteristic of the photon interaction from the photoelectron cloud, wherein the characteristic comprises at least one of: time, location and energy of the interaction.

Technical Field

The present application relates generally to imaging, and more particularly to determining characteristics of photon interactions at sub-pixelation (sub-pixel) resolution.

Background

Imaging devices perform many different functions, such as medical imaging, security screening, image capture, and the like. The imaging source may be a radiation source, visible light, invisible light, or any type of source that the imaging device is capable of detecting. For example, in a medical environment, a patient may be injected with a radiopharmaceutical tracer and an imaging device may capture gamma photon radiation emitted from the patient's body for diagnostic analysis. The imaging device may comprise a gamma camera sensitive to the emission source, for example a camera comprising a specific substance or object sensitive or reactive to the emission source. The camera may contain individual pixels that allow the imaging device to determine the location, energy, time, and intensity of the emitted signal.

Disclosure of Invention

In summary, one aspect provides a method comprising: receiving a photon interaction occurring within a photon detector pixel array, wherein the photon detector pixel array comprises a plurality of pixels; determining a photoelectron cloud generated by the photon interaction, wherein the photon detector pixel array comprises an electric field, wherein electrostatic repulsive forces disperse the photoelectrons into the photoelectron cloud; identifying a subset of the plurality of pixels associated with the photon interaction, wherein each pixel of the subset corresponds to a pixel activated by the photoelectron cloud, wherein the subset of the plurality of pixels comprises a center pixel and a plurality of neighboring pixels, wherein the center pixel comprises the pixel having the highest amplitude response to the photon interaction; and determining a characteristic of the photon interaction from the photoelectron cloud, wherein the characteristic comprises at least one of: the time, location, and energy of the interaction.

Another aspect provides an apparatus comprising: a photon detector pixel array comprising a plurality of pixels; a processor operatively coupled to the photon detector pixel array; and a storage device storing instructions executable by the processor to: receive a photon interaction occurring within the photon detector pixel array; determine a photoelectron cloud generated by the photon interaction, wherein the photon detector pixel array comprises an electric field, wherein electrostatic repulsive forces disperse the photoelectrons into the photoelectron cloud; identify a subset of the plurality of pixels associated with the photon interaction, wherein each pixel of the subset corresponds to a pixel activated by the photoelectron cloud, wherein the subset of the plurality of pixels comprises a center pixel and a plurality of neighboring pixels, wherein the center pixel comprises the pixel having the highest amplitude response to the photon interaction; and determine a characteristic of the photon interaction from the photoelectron cloud, wherein the characteristic comprises at least one of: the time, location, and energy of the interaction.

Another aspect provides a product comprising: a storage device storing code, the code being executable by a processor and comprising: code that receives a photon interaction occurring within a photon detector pixel array, wherein the photon detector pixel array comprises a plurality of pixels; code that determines a photoelectron cloud generated by the photon interaction, wherein the photon detector pixel array comprises an electric field, wherein electrostatic repulsive forces disperse the photoelectrons into the photoelectron cloud; code that identifies a subset of the plurality of pixels associated with the photon interaction, wherein each pixel of the subset corresponds to a pixel activated by the photoelectron cloud, wherein the subset of the plurality of pixels comprises a center pixel and a plurality of neighboring pixels, wherein the center pixel comprises the pixel having the highest magnitude response to the photon interaction; and code that determines a characteristic of the photon interaction from the photoelectron cloud, wherein the characteristic comprises at least one of: the time, location, and energy of the interaction.

The foregoing is a summary and thus may contain simplifications, generalizations, and omissions of detail; accordingly, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting.

For a better understanding of the embodiments, together with other and further features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying drawings. The scope of the invention is indicated in the appended claims.

Drawings

Fig. 1 shows a flow chart of an example embodiment.

FIG. 2 illustrates an example embodiment of the generation of an electron cloud.

FIG. 3 illustrates an example embodiment of sub-pixelation correction.

FIG. 4 shows example count data from three consecutive pixels.

Fig. 5 shows example data for a 2 × 2 sub-pixelation factor.

Fig. 6 shows example data for a center pixel and eight surrounding neighboring pixels.

Fig. 7 shows an example of an information processing apparatus circuit.

Detailed Description

It will be readily understood that the components of the embodiments as generally described and illustrated in the figures herein could be arranged and designed in a wide variety of different configurations in addition to the described example embodiments. Thus, the following more detailed description of example embodiments, as represented in the figures, is not intended to limit the scope of the claimed embodiments, but is merely representative of example embodiments.

Reference throughout this specification to "one embodiment" or "an embodiment" (or the like) means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or the like in various places throughout this specification are not necessarily all referring to the same embodiment.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that the various embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects.

Users of imaging devices often desire high spatial, temporal and energy resolution image output. For example, medical images with high spatial, temporal and energy resolution may affect the treatment of a patient by guiding a physician to a location of interest within the patient. Many imaging devices utilize cameras that are sensitive to the type of emission being imaged in order to accurately capture images. To capture an image, a camera image is divided into discrete regions or picture elements (pixels), where each pixel may represent a location and intensity within the captured image.

As an example, in a nuclear medicine (molecular imaging) environment, a patient may be injected with a radiopharmaceutical tracer and an imaging device (gamma camera) may capture emissions of gamma photon radiation from the patient's body for diagnostic analysis. The detector in the gamma camera may include a semiconductor direct conversion material, such as CdZnTe, CdTe, HgI2, or Si. An array of gamma photon detector pixels comprising semiconductor direct conversion detector materials has advantages over scintillator photon detector gamma cameras, including superior energy and spatial resolution. However, a disadvantage of such pixelated semiconductor detector arrays is distortion of the energy spectrum of individual pixels, where the recorded energy of some counts is lower than the photopeak due to hole carrier trapping or charge sharing with neighboring pixels. Since image formation typically accepts only counts in the energy range immediately surrounding the photopeak, counts in the lower-energy spectral tail are not included in the image. This means that such a gamma camera is significantly less efficient than a scintillator camera, even when the thickness of each camera provides the same stopping power for the gamma photons. The present invention provides a novel solution to the charge-sharing and hole-trapping spectral tail problems.

The principles underlying the present invention depend on a detailed, specific understanding of photon interactions in CdZnTe detectors and of signal formation in pixelated CdZnTe detectors. However, the present invention can be understood at a high level. When a gamma photon is incident on a pixelated CdZnTe detector (preferably from the cathode side), the photon may Compton scatter zero or more times before depositing its remaining energy in a photoelectric interaction. These interactions may occur within a single pixel or multiple pixels. The interaction is directly converted into a charge cloud of electrons and holes. The detector is biased to produce an electric field of typically about 100 volts per millimeter, with the pixelated anode at ground potential and the typically monolithic cathode at a high negative voltage. Thus, holes are accelerated toward the cathode and electrons are accelerated toward the pixelated anode. Since hole mobility is generally much smaller than electron mobility, the time it takes to sweep out holes is longer than for electrons, and there is a greater probability that holes are trapped in crystal defects. When the pixel pitch is smaller than the thickness of the detector, the device is much more sensitive to electrons than to holes due to the "small pixel effect".
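The electron/hole asymmetry described above can be illustrated with a small drift-time calculation. This is a hedged sketch: the mobility values below are typical order-of-magnitude assumptions for illustration, not figures from this disclosure.

```python
# Illustrative carrier drift times in a biased semiconductor detector.
# Mobility values are assumptions chosen for illustration only.
MU_E_CM2_PER_VS = 1000.0  # assumed electron mobility, cm^2/(V*s)
MU_H_CM2_PER_VS = 50.0    # assumed hole mobility, much smaller

def drift_time_us(thickness_mm: float, field_v_per_mm: float,
                  mobility_cm2_per_vs: float) -> float:
    """Time (microseconds) for a carrier to traverse the detector
    thickness at constant drift velocity v = mu * E."""
    field_v_per_cm = field_v_per_mm * 10.0
    velocity_cm_per_s = mobility_cm2_per_vs * field_v_per_cm
    thickness_cm = thickness_mm / 10.0
    return thickness_cm / velocity_cm_per_s * 1e6

# A hypothetical 5 mm thick detector at the ~100 V/mm field noted above:
t_e = drift_time_us(5.0, 100.0, MU_E_CM2_PER_VS)  # electrons: ~0.5 us
t_h = drift_time_us(5.0, 100.0, MU_H_CM2_PER_VS)  # holes: far slower
```

The much longer hole transit time is what makes hole trapping, and hence the spectral tail, significant.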

When the electron cloud approaches the anode, an induced voltage is detected across a plurality of anode pixels. When the charge cloud reaches the anode plane, charge accumulates on one or a few pixels. Any adjacent pixel that detects the induced voltage will then detect a voltage of opposite polarity, so that the integral over time for any non-charge-accumulating pixel will be zero. Thus, there are a number of ways in which signals can be shared between multiple pixels: charge can be shared when the electron charge cloud overlaps multiple pixels, photon interactions may have occurred in multiple pixels due to Compton scattering or K-escape X-rays, and transient induced voltages can be detected on multiple neighboring pixels. Of course, hole charge accumulates on the cathode, and this information can be used to estimate the depth of interaction of the incident photon. However, the present invention specifically does not use any cathode signal to determine the nature of the photon interaction. In addition, the present invention uses only the positive and negative peak amplitudes of the anode pixel signals. This is a great simplification, making it a relatively simple matter to determine the interaction characteristics by combining information from the peak signal amplitudes of multiple anode pixels.

As with any such device, there is the problem of determining the location and energy of the signal on the detector. Photons or particles may enter the receiving imaging unit such that interaction of an incident photon or particle with the imaging unit material results in a signal being generated at multiple pixels or detection areas. This problem may occur, for example, when a photon enters the imaging unit on an angled trajectory. A photon entering the detection unit may hit one or more pixels. In other words, a photon may enter the detector at an angle and traverse one or more pixel detection regions before its trajectory terminates.

Current systems may have difficulty attributing detected charge to the correct interaction on a pixel or subset of pixels, resulting in less accurate images. Currently, many imaging devices rely on the signal or signals from a single pixel to identify the location of an interaction. The imaging technique may acquire signals of individual pixels from a detection unit of the imaging device. In this way, the imaging unit may receive a "pixelated" image of the received signal. Thus, one central pixel may have a higher value, while the neighboring pixels may have lower values. However, data on how the neighboring pixel values are associated with the central pixel value may be lost in such imaging techniques. For example, when a photon enters the imaging detection unit, the photon may interact with a plurality of pixels, thereby generating a signal from every pixel with which it interacts. The pixel with the dominant interaction may provide a signal indicating that it received the greatest energy from the photon, while neighboring pixels may have smaller energy values. However, it may be difficult to determine exactly where the photon impinged within the pixel region. The loss of data from neighboring pixels, or even of the resolution within the pixel itself, reduces the resolution of the imaging unit. Lower imaging-unit resolution may in turn reduce treatment efficiency. For example, the patient may need further imaging, a diagnosis may be missed, imaging time may be longer, costs may increase, and so forth.

Accordingly, embodiments provide a system and method for determining photon interactions with a pixel array at the sub-pixelation level. In one embodiment, a photon interaction may be received within a photon detector pixel array. The photonic pixel array may include a plurality of pixels. A photoelectron cloud may be generated. Under an electric field (E-field), the photoelectron cloud may drift through a detector or CdZnTe (CZT) crystal. The photoelectron cloud may drift toward the electron sensor within the E-field. Along the drift path, electrostatic repulsion may cause the photoelectrons to diffuse. The electrostatic repulsive force arises because the photoelectrons all carry the same charge. The diffusion may be primarily perpendicular to the direction of motion of the photoelectron cloud. This diffusion may cause the photoelectron cloud to disperse before reaching the electron sensor region. The E-field may be between the anode and the cathode. The photoelectron cloud may be detected by the array of photon detector pixels. In one embodiment, the method or system may identify a subset of the plurality of pixels associated with the photon interaction. Each pixel of the subset may correspond to a pixel activated by the photoelectron cloud. In one embodiment, the subset of the plurality of pixels may include a center pixel and a plurality of neighboring pixels. In one embodiment, the center pixel may be the pixel with the highest-magnitude response to the photon interaction. In one embodiment, the method or system may determine characteristics of the photon interaction from the photoelectron cloud. The characteristics may include the time, location, energy, etc. of the interaction.
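The subset-identification step outlined above can be sketched in code. This is a hypothetical illustration: the dictionary representation, the threshold, and the 3x3 neighborhood are assumptions for the sketch, not details specified by this disclosure.

```python
def identify_subset(responses, threshold=0.0):
    """responses: dict mapping (row, col) -> peak signal amplitude.
    Returns (center, neighbors): the center pixel is the activated pixel
    with the highest-amplitude response; neighbors are the other activated
    pixels in the surrounding 3x3 neighborhood."""
    active = {p: a for p, a in responses.items() if a > threshold}
    center = max(active, key=active.get)
    cr, cc = center
    neighbors = [p for p in active
                 if p != center and abs(p[0] - cr) <= 1 and abs(p[1] - cc) <= 1]
    return center, neighbors
```

For example, a pixel reporting amplitude 100 surrounded by pixels reporting 20, 5, and 3 would be selected as the center, with the other three returned as its neighbors.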

Such systems and methods provide technological improvements to current imaging techniques. Without requiring signals from both the cathode and the anode of the detector, the embodiments described herein capture information from both the center anode pixel and the adjacent anode pixels. Using these values, the system can achieve sub-pixel resolution, providing images with higher resolution by more accurately identifying characteristics of the interaction without requiring cathode signals, which may be difficult to obtain. The system may use signals from adjacent anodes to account for charge sharing between the center pixel and adjacent pixels. Typically, such shared events would not be counted. The information can be recombined and the characteristics of the interaction corrected. By correcting the characteristics of the interaction, the energy resolution can be improved. These improvements may be important for medical imaging, reducing imaging-agent dose to patients, reducing examination/procedure time, and the like.
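The recombination of shared charge can be sketched as a simple sum over the subset. This is an illustrative sketch under an assumed convention, not the disclosed correction: positive neighbor amplitudes are treated as shared charge, while negative amplitudes (transient induction only, whose time integral is zero) are excluded.

```python
def corrected_energy(center_amp, neighbor_amps):
    """Recombine a charge-shared event: sum the center-pixel amplitude and
    the positive (charge-collecting) contributions from its neighbors.
    Negative amplitudes are taken to be transient induction and ignored."""
    return center_amp + sum(a for a in neighbor_amps if a > 0.0)
```

A shared event split 120/15/5 across three pixels would thus be restored to a single count of 140, moving it out of the low-energy tail and back toward the photopeak.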

The illustrated example embodiments will be best understood by reference to the drawings. The following description is intended by way of example only, and only shows certain exemplary embodiments.

The pixelated detectors, gamma cameras, and/or pixelated arrays of various embodiments may be provided as part of different types of imaging systems, for example, Nuclear Medicine (NM) imaging systems such as Positron Emission Tomography (PET) imaging systems, Single Photon Emission Computed Tomography (SPECT) imaging systems, and/or X-ray Computed Tomography (CT) imaging systems, and the like. The system may be secured to a one-piece housing that further includes a rotor oriented about a central bore of the housing. The rotor is configured to support one or more pixelated cameras, such as, but not limited to, gamma cameras, SPECT detectors, multi-layer pixelated cameras (e.g., Compton cameras), and/or PET detectors. It should be noted that when the medical imaging system comprises a CT camera or an X-ray camera, the medical imaging system further comprises an X-ray tube for emitting X-ray radiation toward the detector. In various embodiments, the camera is formed from a pixelated detector, as described in more detail herein. The rotor is also configured to rotate axially about the inspection axis. Operation and control of the imaging system may be performed in any manner known in the art. It should be noted that various embodiments may be implemented in connection with imaging systems that include a rotating gantry or a stationary gantry.

In one embodiment, the imaging device may be mounted in a position for secure scanning. For example, the apparatus may be in an airport security checkpoint, baggage inspection location, or the like. The apparatus may include a plurality of X-ray sources and a plurality of pixelated photon detector arrays. In one embodiment, the imaging device may be permanently anchored, mobile, or fully portable. For example, the imaging device may be a handheld device used by a first responder, security or assessment team. Other uses outside of a secure environment are contemplated and disclosed. As will be appreciated by those skilled in the art, healthcare imaging and security screening are merely examples. Other possible applications of the techniques as described herein are also possible and contemplated.

In one embodiment, the receiving device may contain a sensor sensitive to radioactive particles or photons. The receiving device may record a detection event (also referred to as an interaction) on a sensor array located in the receiving device. Each sensor in the array may be represented as a pixel in the final image. During imaging, photons or particles may strike one or more pixel detection cells. In one embodiment, signals received from one or more pixel detection units may be used to determine a characteristic of a photon interaction. In a healthcare environment, this may allow a healthcare professional to achieve better imaging in less time and administer a lower dose of radioactive tracer to the patient, which may result in better treatment planning and lower medical costs; for example, better resolution may be achieved and the duration of the imaging procedure may be reduced.

Embodiments of the imaging device may be used in a healthcare environment, in security screening, in manufacturing, or in any application where an imaging device may be used. For example, the imaging device may be a radiation imaging device in which a radioactive substance (emitting particles or photons) is introduced into and emitted from a patient. Another example may include a portal at an airport, or an access device that scans for radiation or other material of interest for security purposes. As another example, an imaging device may be used by a first responder to assess the safety of environmental conditions and/or locations. Other uses are contemplated and disclosed.

Referring to FIG. 1, at 101, an embodiment may receive or capture a photon interaction occurring within a photon detector pixel array. Photons may enter the device from the cathode side of the cell and travel toward the anode (see FIG. 2). Receiving or capturing the interaction may include receiving one or more signals from the one or more pixel detection units indicating that an interaction has occurred with the one or more pixel detection units. For readability, the discussion herein will refer to photons as the object that causes the interaction and produces a signal. However, it should be understood that the object may comprise photons, light of any spectrum, radioactive particles, or any type of energy that can be detected by the detection unit. The photon detector pixel array may be one or more pixel detector units. The photon detector pixel array may be organized in any configuration (e.g., grid, brick pattern, interspersed pattern, etc.). The photon detector pixel array may be oriented in a flat plane, a curved plane, or the like. In other words, the photon detector pixel array may be arranged in a manner suitable for detecting interactions from the emission source, and may differ for different applications. For example, photons from the emission source may interact with one or more pixels on a photonic pixel array that is part of an imaging unit in a medical environment.

At 102, in one embodiment, the system or method may determine a photoelectron cloud generated by the photon interaction. In one embodiment, the device may have a cathode and an anode (see FIG. 2 for an example device configuration). Photons may enter the device at the cathode end. In one embodiment, the cathode or anode may be a single plane across the device or module. The plane may be flat or have a curvature suitable for the imaging application. The cathode may be maintained at a negative high voltage and/or be AC-coupled. The anode may have pixelated detection elements, be electrically grounded, and/or be DC-coupled. In one embodiment, a CdZnTe (CZT) crystal or other type of semiconductor material is between the cathode and the anode. For ease of readability, reference will be made herein to CZT crystals, but the described system is not so limited, as any type of semiconductor or imaging material may be utilized, depending on the application of the imaging device.

In one embodiment, photon interaction with the CZT or other semiconductor material may generate electron and hole clouds. As the electron cloud drifts toward the anode, the size of the electron cloud may increase due to electrostatic repulsion. The initial size of the photoelectron cloud, expressed as a number of electron-hole pairs, can be approximated as N ≈ E_γ / 4.64 eV, where E_γ is the deposited photon energy. In one embodiment, the electron cloud may drift under the electric field (E-field) through the CZT crystal in a direction toward the anode and the pixel detection array. The electron cloud may deliver a negative charge signal to one or more pixels and/or associated ASIC channels. One or more pixels may be connected to a charge-sensitive preamplifier and/or a shaping amplifier. When charge is detected, the pixel and associated electronics can be triggered for data collection.
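The pair-count approximation above can be evaluated directly; the 140.5 keV example energy below is an illustrative choice (a common SPECT gamma energy), not a value taken from this disclosure.

```python
PAIR_CREATION_ENERGY_EV = 4.64  # approximate energy per electron-hole pair in CZT

def num_electron_hole_pairs(photon_energy_kev):
    """Approximate cloud size: N ~ E_gamma / 4.64 eV for a photon whose
    energy E_gamma (given in keV) is fully absorbed in the crystal."""
    return photon_energy_kev * 1000.0 / PAIR_CREATION_ENERGY_EV

n = num_electron_hole_pairs(140.5)  # e.g. a 140.5 keV gamma: ~3e4 pairs
```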

In embodiments with semiconductor detector materials, both sides of the photon detector pixel array may have metal electrodes deposited on the semiconductor detector crystals. The first side may comprise a plurality of pixels, also referred to as a pixelated side, which may be arranged in a grid pattern. The side may be coupled to readout electronics that may capture signals from the pixelated side. In the case of CdZnTe or CdTe, which has a much greater electron mobility than hole mobility, the pixelated side may be the anode side of the array and provide an anode signal. In some configurations, the side may be connected to ground potential. In one embodiment, the second side of the detector pixel array may be substantially opposite the first side, e.g. in the case of a thick sheet detector the first side may be the bottom side and the second side may be the top side, typically the side from which gamma photons may be incident on the detector. The second side of the detector pixel array may be a cathode and may be connected to a negative voltage bias.

In one embodiment, the center pixel, defined as the pixel receiving the largest count, may receive a negative charge induction signal. In contrast, neighboring pixels surrounding the center pixel may receive positive charge induction signals. The example of fig. 2 shows an electron cloud offset to the left, which affects the center pixel more on that side. As an example, the fig. 2 inset shows example charge induction measured by a center pixel and two adjacent pixels. The center pixel receives the maximum pulse height or count (shown as a solid line) and a negative charge induction. Both neighboring pixels receive a positive charge induction. For example, the left pixel has a higher amplitude and a shorter time delay than the right pixel. Such data may indicate that the center pixel receives the charge cloud at a location closer to the left than to the right. This is example data; real events are more complex, as discussed herein. For example, a center pixel may have eight neighboring pixels around it, which can be used to further refine the determination of the location of the induced charge.

At 103, in one embodiment, the method or system may identify a subset of the plurality of pixels associated with the photon interaction. In one embodiment, pixels (see FIG. 3) refer to discrete locations on the imaging hardware surface, which may be only a subset of the imaging area. The subset of pixels may correspond to pixels activated by the photoelectron cloud. The subset of pixels may include a center pixel and a plurality of neighboring pixels. The center pixel may be defined as the pixel with the highest-amplitude response or count to the photon interaction. Data or communications from one or more pixels may be used to form an image that is synthesized from the one or more pixels.

In one embodiment, the system and method may identify a plurality of pixels associated with a photon interaction. For example, when a photon interacts with a detector, one or more pixels produce a signal corresponding to the interaction. As an example, as a photon moves through the pixel array, it interacts with different pixels. Each of these pixels then generates a signal indicative of some form of interaction or contact. In one embodiment, a center pixel may be identified. The center pixel may be associated with a "rest" place for the photon (e.g., the location of a photoelectric interaction). In other words, the photon has stopped moving through the pixel array. The identification of the center pixel may be performed using one or more characteristics. For example, the center pixel may be identified as the pixel having the highest energy detected from the photon event. However, the pixel with the highest detected energy may not be unique. As an example, two pixels may provide the same highest energy value if they share the same level of interaction. In this case, one of the pixels may simply be characterized as the center pixel.
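
The center-pixel selection described above can be sketched as follows. This is a hypothetical sketch, not the original implementation: pixel responses are modeled as a mapping from (column i, row j) coordinates to measured amplitudes, and ties are broken by simply taking the first pixel encountered, as the text allows.

```python
# Sketch of identifying the center (trigger) pixel as the pixel with
# the highest-amplitude response. Names and data layout are
# illustrative assumptions, not from the original disclosure.

def find_center_pixel(responses):
    """Return the (i, j) of the pixel with the maximum response.

    If two pixels share the same maximum, the first one encountered
    is simply characterized as the center pixel.
    """
    return max(responses, key=lambda ij: responses[ij])

# Three pixels in a row; the middle one saw the largest signal:
responses = {(4, 7): 120.0, (5, 7): 980.0, (6, 7): 310.0}
print(find_center_pixel(responses))  # -> (5, 7)
```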

In addition to the center pixel, the system may also identify one or more neighboring pixels. In one embodiment, the identified one or more neighboring pixels may be in any physical location relative to the center pixel. In other words, the neighboring pixels need not be the immediate neighbors of the central pixel or directly adjacent to the central pixel. Rather, one or more neighboring pixels may be identified as pixels that receive less energy from a photon than the energy received by the central pixel. In other words, as a photon moves through the pixel array, it may interact with pixels other than the center pixel, for example, by compton scattering. These pixels can be identified as neighboring pixels. One or more neighboring pixels may be in any type of configuration relative to the center pixel. For example, the neighboring pixels may be in a "ring" or "box" configuration around the center pixel. As another example, one or more neighboring pixels may be located on one or more sides of the center pixel. As a final example, the neighboring pixel may be a single pixel adjacent to the center pixel. Each of the neighboring pixels may have a different signal with respect to each other and/or the central pixel. In other words, each of the signals from the neighboring pixels may be the same, different, or a combination thereof relative to other neighboring pixels and/or the center pixel.

The imaging device may use a number of methods to detect communication events from the pixels. For example, in a consumer camera, a pixel represents the intensity and wavelength of visible light detected by the pixel. As another example, radiation imaging devices, radiation detectors, and the like used in cancer screening use one type of atomic particle or photon emitted by a source and measurable by a sensor associated with a circuit to provide a location and intensity (or count density) of the detected radioactive particle or photon. Using the communication events from the pixels, an image may be created based on the location, intensity, energy, or wavelength of the communication events from the pixels. In other words, embodiments may use the signals transmitted from the pixels to create an image based on information contained within the signals during imaging. Data may be collected from multiple pixels to create an image of a larger area.

Referring to FIG. 3, in one embodiment, each pixel may be divided into sub-pixels or sub-pixelated regions. In one embodiment, the pixels may be laid out in a grid-like structure having rows and columns. For ease of illustration, square pixels are shown; however, different geometries and interlocking shapes may be used. For row and column nomenclature, a column may be defined as i and a row as j. For example, the center pixel may be given a location identifier of "i, j", with the left pixel as "i-1, j" and the top pixel as "i, j+1". Alternatively, the columns and rows may simply be given numerical identifiers. For example, columns and rows may be identified in numerical order from left to right and bottom to top.

For example, the central pixel may be where the interaction indicates the 2D location of the photon "rest". The central pixel is the pixel that provides the highest energy signal with respect to photon interaction. The neighboring pixels represent pixels that provide an energy signal that is not as large as the energy signal of the central pixel. For example, a center pixel (also referred to as a trigger pixel) may be defined as "i, j", the adjacent pixel to the left in the box or ring around the center pixel as "i-1, j", and the adjacent pixel above as "i, j + 1".

The pixels may be further subdivided or sub-pixelated. For example, a single pixel may be divided into 2 × 2 sub-pixelated regions (see fig. 3). For example, a conventional 3 × 3 pixel region may be converted to a "virtual" 6 × 6 sub-pixelated region using processing methods. In this particular example, a single pixel would have four sub-pixels within the pixel boundary. In other words, a single pixel may be divided into four sub-pixelated regions. Other divisions are contemplated and disclosed, with 2 × 2 sub-pixelation being an illustrative example. As another example, a sub-pixelation factor of 3 × 3 or greater may be used. Sub-pixelation provides higher resolution with lower electrical noise and/or production cost.
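
The mapping from physical pixels to the "virtual" sub-pixelated grid can be sketched as below. This is an illustrative assumption: the 2 × 2 factor matches the example above, but the coordinate convention and function name are invented for illustration.

```python
# Illustrative mapping from a pixel coordinate plus a sub-pixel
# position to a "virtual" sub-pixelated grid, as in the 2x2 example
# above: a 3x3 pixel region becomes a 6x6 sub-pixelated region.

SUBDIV = 2  # 2x2 sub-pixelation; a factor of 3x3 or greater works the same way

def to_subpixel(i, j, sub_i, sub_j, subdiv=SUBDIV):
    """Map pixel (i, j) plus sub-position (sub_i, sub_j), each in
    [0, subdiv), onto coordinates of the virtual sub-pixelated grid."""
    return (i * subdiv + sub_i, j * subdiv + sub_j)

# The top-right quadrant... pixel (2, 2) of a 3x3 region lands on a
# 6x6 virtual grid:
print(to_subpixel(2, 2, 1, 0))  # -> (5, 4)
```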

At 104, in one embodiment, the systems and methods may determine a characteristic of the photon interaction from the photoelectron cloud. The characteristics may include time, location, energy, etc. In one embodiment, the method or system may receive a trigger signal as described above. For example, the system may receive a trigger at a center pixel such as pixel (i, j). The system can then obtain the negative energy or negative charge induction of neighboring pixels (e.g., NE(i-1, j) and NE(i+1, j)). The system may perform a calibration step. The calibration may be represented by NE'(i, j) = (NE(i, j) - baseline(i, j)) × gain(i, j). In addition, a correction factor may be used. The correction may include an under-range cut. The correction can be expressed as: (NE'(i+1, j) - NE'(i-1, j)) / (NE'(i-1, j) + NE'(i+1, j)). In an embodiment, the correction factor may be applied to the trigger pixel (i, j).
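
The calibration and correction steps above can be sketched as follows, under the reading NE'(i, j) = (NE(i, j) - baseline(i, j)) × gain(i, j). The per-pixel baseline and gain maps are assumed calibration inputs, and the dictionary layout is illustrative; the under-range cut is omitted.

```python
# Sketch of the calibration and left-right correction factor described
# above. Data structures and names are illustrative assumptions.

def calibrate(ne, baseline, gain, i, j):
    """Baseline-subtract and gain-correct the induced signal NE(i, j)."""
    return (ne[(i, j)] - baseline[(i, j)]) * gain[(i, j)]

def lr_correction(ne_cal, i, j):
    """Left-right asymmetry of the calibrated neighbor signals:
    (NE'(i+1, j) - NE'(i-1, j)) / (NE'(i-1, j) + NE'(i+1, j))."""
    left, right = ne_cal[(i - 1, j)], ne_cal[(i + 1, j)]
    return (right - left) / (left + right)

# Trigger at (5, 7); the left neighbor saw three times the induced
# charge of the right neighbor, so the factor is negative (leftward):
ne_cal = {(4, 7): 30.0, (6, 7): 10.0}
print(lr_correction(ne_cal, 5, 7))  # -> -0.5
```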

Referring to fig. 4, an illustrative example of sample data from a center pixel and two neighboring pixels is shown. For example, at a peak count of 59.0 keV, the bottom adjacent pixel, the center pixel, and the top adjacent pixel are shown. The example data demonstrates photopeak data and pixel gaps. For example, the actual physical gap between pixels may be 75 μm. The example data shows an apparent gap of up to 200 μm, which may be due primarily to charge sharing when the photon beam illuminates the area between the gaps. In some cases, most electron clouds are split in two (and thus registered to two or more pixels), with the energy per electron cloud being significantly lower. Since the y-axis is the peak count from a particular energy window, no such signal is collected. The width of the pixel gap can be determined using an X-ray tube source, for a clean high-count-rate spectrum at the desired energy, and a moving fixture for translating the pixel array in the x, y, and z axes. For example, the X-ray tube may be fixed and the detector system moved in small increments (e.g., 20 μm-40 μm). Alternatively, the X-rays may be moved relative to a static detector system. This method can be used to apply sub-pixelation techniques to the final data to assess the accuracy of, or fine-tune, the method.

Referring to fig. 5, in one embodiment, the system and method may use a 2 × 2 sub-pixelation technique. Fig. 5 statistically represents an exemplary embodiment of 2 × 2 sub-pixelation. For example, if the photon and the resulting photoelectron cloud appear on the side closer to the center pixel, the system and method may identify the location with sub-millimeter accuracy. The technique may also use a collimator in conjunction with a photon source to achieve this level of resolution and accuracy. In the example shown, the photon source is moving across the array in a direction from row 33 to row 34. The count may be plotted over distance for each pixel using the 2 × 2 sub-pixelation technique. In this example, the peak count is at 59.0 keV. However, other peak counts, such as 67.2 keV, may be used.

Referring to fig. 6, the amount of induced charge can be plotted over distance (measured in μm). In this example, the center pixel is located at the center of the graph, and data for eight adjacent pixels are shown. The trigger pixel (center pixel) is labeled #126. The center pixel receives the largest signal from the photon cloud. Referring to pixel #127, it registers more induced charge than pixel #125, indicating that the photon interaction is located closer to #127 than to #125. This technique allows spatial resolution better than the hardware pixel pitch of the array.

In one embodiment, the system or method may identify whether a characteristic of the interaction may be determined. The system may determine many different characteristics for the interaction, for example, the characteristics may include time, location (possibly including depth), energy, intensity, and the like. To determine the characteristic, the system may receive signals from one or more pixels (e.g., a center pixel and neighboring pixels). For example, photons may not enter the detector pixel array at right angles of incidence. Thus, when a photon is traveling through the detector, the photon may interact with more than one pixel. In other words, as photons enter the detector pixel plane, the interaction may "share" characteristics (i.e., energy) with one or more neighboring pixels. Not only the signal received from the central pixel but also the signals received from the neighboring pixels can be used to determine the different characteristics. The system may use these signals to directly identify the characteristics or may attribute these signals to signals from other pixels. The system may determine one or more characteristics simultaneously or at different times.

In one embodiment, the determined characteristic may include a depth of interaction. In one embodiment, the depth of interaction may be determined by first constructing a multi-dimensional space (of two or more dimensions) whose axes are peak signal amplitude responses: 1) the positive polarity of the center pixel, 2) the positive polarity of a neighboring pixel, and, optionally, 3) the negative polarity of a neighboring pixel. The next step is to identify one or more clusters within the multi-dimensional space that represent one or more mechanisms of depth-dependent inter-pixel charge sharing or hole trapping.

Each of these pixel and sub-pixellated signals may also have an associated amplitude that represents, for example, the interaction energy of the signals. Accordingly, the signal from the pixel may include a signal having a peak amplitude of the positive polarity signal and a peak amplitude of the negative polarity signal. Using these signals from the center pixel and the neighboring pixels, the system can determine the time, location, energy, and depth of interaction, for example, by clustering these signals in a multidimensional space. As described above, the systems and methods described herein capture only the peak amplitude signal from the anode portion of the detector. Thus, by analyzing and correlating the amplitude peaks of the positive and negative polarity signals from all pixels (e.g., the center pixel and the adjacent pixels), the system can determine at what depth the interaction occurred. Thus, the system can determine the location characteristics, including the depth of interaction.
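
The depth-of-interaction clustering idea above can be sketched minimally: each event becomes a point in a space of peak amplitudes, and events are grouped by nearest cluster. This is an illustrative assumption, not the disclosed method: the clustering here is a plain nearest-centroid assignment, and the centroid values are invented, not calibration data.

```python
# Minimal sketch: assign an event, described by peak amplitudes along
# the axes (center-pixel positive polarity, neighbor positive polarity,
# neighbor negative polarity), to the nearest of several clusters that
# track depth-dependent charge sharing. Centroids are assumed values.

def assign_cluster(event, centroids):
    """Return the index of the nearest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda k: dist2(event, centroids[k]))

# Two assumed depth clusters: shallow (little sharing) vs. deep (more
# sharing and hole trapping); the event below lands in the second.
centroids = [(1.0, 0.1, 0.05), (0.7, 0.4, 0.25)]
print(assign_cluster((0.75, 0.35, 0.2), centroids))  # -> 1
```

In practice the clusters would be learned from calibration data rather than fixed by hand; the sketch only shows the assignment step.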

In addition to the signal from the center pixel, the system may use signals from one or more neighboring pixels to determine other characteristics. For example, using signals from one or more neighboring pixels in addition to the signal from the center pixel may allow better resolution with respect to characteristics such as time, location, energy, and the like. Determination of some of these characteristics may be done using conventional techniques, except that the signals from the neighboring pixels are considered together with the signal of the center pixel, which provides a more accurate or more precise determination of the characteristics.

For example, the system may determine the location of the interaction in two dimensions more accurately than conventional systems and methods. For example, the interaction of photons with neighboring pixels can refine the location of the photon to sub-pixel resolution, not just pixel resolution. As an example, referring to fig. 2, the interaction occurs to the left of the imaginary center line of the center pixel. Using information from neighboring pixels, the system can identify that the interaction occurred on the left side of the center pixel, rather than just identifying that the interaction occurred at the center pixel. For example, by examining the signals from neighboring pixels, the system can determine which neighboring pixels have higher signals than others. Because pixels closer to the interaction will have higher signals, if the interaction occurs off-center, neighboring pixels closer to the interaction will provide higher signals than pixels farther away. Thus, by identifying which pixels have higher signals, the system can determine on which side of the pixel the interaction occurred.

For example, the system may identify the sub-pixel location information using a weighted average. As an illustrative example, if the detector pixel array receives a photon interaction in which one adjacent pixel receives 2/3 of the signal that falls outside the center pixel and another adjacent pixel receives the remaining 1/3, the system may determine where the event occurred along the line through the two pixels by weighting the two contributions about the center pixel. In other words, the interaction may not fall in the center of the pixel area, and the neighboring pixels allow a more precise location of the interacting photon to be determined.
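
The weighted-average (centroid) localization above can be sketched as follows. This is a minimal sketch under stated assumptions: the shared fractions on the two opposing neighbors weight the offset from the center-pixel midline, and the coordinate convention and pitch units are illustrative.

```python
# Sketch of weighted-average sub-pixel localization from the charge
# shared with the two opposing neighbors, as in the 2/3 vs. 1/3
# example above. Names and sign convention are assumptions.

def subpixel_offset(left_share, right_share, pitch=1.0):
    """Offset of the interaction from the center-pixel midline,
    positive toward the right neighbor, in units of pixel pitch."""
    total = left_share + right_share
    if total == 0:
        return 0.0  # no shared charge: assume the pixel center
    return 0.5 * pitch * (right_share - left_share) / total

# 2/3 of the out-of-center charge on the right, 1/3 on the left:
# the event sits about 1/6 of a pitch right of the midline.
print(subpixel_offset(1 / 3, 2 / 3))  # -> ~0.167
```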

As another example of a more accurate or precise determination of a characteristic, the system may determine a more precise energy of the interaction. When a photon interacts with a pixel, neighboring pixels may receive a portion of the interaction. This is called charge sharing. Thus, the system can attribute the charge received by the neighboring pixels to the center pixel to provide a more accurate representation of the actual energy of the interaction. To provide such a more accurate representation, the system may correct the amount of energy received from the center pixel. The correction may include adding the shared charge of one or more neighboring pixels to the response of the center pixel. In other words, if the pixel array detects a photon interaction, the charge detected by the neighboring pixels can be added to the charge value of the center pixel. As an example, if a photon interacts with the detector pixel array such that 80% of the charge is received at the center pixel and 20% of the charge is received at neighboring pixels, the 20% received by the neighboring pixels may be attributed back to the center pixel.
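
The charge-sharing energy correction above reduces to a simple sum. The sketch below is illustrative: the signal layout and the 80%/20% split follow the example in the text, while the function name and units (keV) are assumptions.

```python
# Sketch of the charge-sharing energy correction: charge detected by
# neighboring pixels is attributed back to the center pixel so the
# summed value better represents the full interaction energy.

def corrected_energy(center_charge, neighbor_charges):
    """Add the shared charge of the neighbors to the center response."""
    return center_charge + sum(neighbor_charges)

# 80% of a 59.0 keV photon's charge at the center pixel, 20% split
# across two neighbors; the corrected sum recovers the full energy:
print(corrected_energy(0.80 * 59.0, [0.12 * 59.0, 0.08 * 59.0]))  # -> ~59.0
```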

If one or more characteristics cannot be determined for the interaction at 104, the system may ignore the interaction and receive information about the new interaction at 101. On the other hand, if the system can determine one or more characteristics at 104, the system can record data related to the interaction at 105. The recorded data may be analyzed in real time or saved for later analysis. Further, the recorded data may be used by a system as described herein to generate one or more images of an object being scanned using the imaging device.

Accordingly, various embodiments described herein represent technological improvements to imaging devices that may require high sensitivity and resolution with respect to the material being imaged. One embodiment allows the use of sub-pixelation to determine the characteristics of photon interactions. Using the techniques described herein, a more complete image may be obtained with a shorter imaging procedure and/or a lower radiation dose. Such a system enables more accurate imaging, less equipment downtime, and lower costs associated with the imaging process.

While various other circuits, circuitry, or components may be used in an information processing device in accordance with any of the various embodiments described herein with respect to an instrument for determining characteristics of an electron cloud from a subset of pixels, an example is shown in fig. 7. The device circuitry 10' may include a system on a chip design found, for example, in a particular computing platform (e.g., mobile computing, desktop computing, etc.). Software and processor(s) are combined in a single chip 11'. As is well known in the art, processors include internal arithmetic units, registers, cache memory, buses, I/O ports, and the like. Internal buses and the like depend on different vendors, but essentially all the peripheral devices (12') may attach to a single chip 11'. The circuitry 10' combines the processor, memory control, and I/O controller hub all into a single chip 11'. Also, systems 10' of this type do not typically use SATA or PCI or LPC. Common interfaces, for example, include SDIO and I2C.

There is a power management chip 13', for example a battery management unit (BMU), which manages power supplied, for example, via a rechargeable battery 14', which rechargeable battery 14' may be recharged by a connection to a power source (not shown). In at least one design, a single chip (e.g., 11') is used to provide BIOS-like functionality and DRAM memory.

The system 10 ' generally includes one or more of a WWAN transceiver 15 ' and a WLAN transceiver 16 ' for connecting to various networks, such as telecommunications networks and wireless internet devices (e.g., access points). Further, the device 12' typically includes, for example, transmit and receive antennas, oscillators, PLLs, and the like. The system 10 'includes an input/output device 17' for data input and display/rendering (e.g., a computing location located away from a single beam system that a user may easily access). The system 10 ' also typically includes various storage devices, such as flash memory 18 ' and SDRAM 19 '.

It will be appreciated from the foregoing that the electronic components of one or more systems or devices may include, but are not limited to, at least one processing unit, memory, and a communication bus or communication means that couples various components (including the memory) to the processing unit. The system or device may include or have access to a variety of device-readable media. The system memory may include device-readable storage media in the form of volatile and/or nonvolatile memory, such as read-only memory (ROM) and/or random access memory (RAM). By way of example, and not limitation, system memory may also include an operating system, application programs, other program modules, and program data. The disclosed system may be used in embodiments of an instrument for determining characteristics of an electron cloud from a subset of pixels.

As will be appreciated by one skilled in the art, various aspects may be embodied as a system, method or device program product. Accordingly, aspects may take the form of an entirely hardware embodiment or an embodiment containing software that may all generally be referred to herein as a "circuit," module "or" system. Furthermore, aspects may take the form of a device program product embodied in one or more device-readable media having device-readable program code embodied therewith.

It should be noted that the various functions described herein may be implemented using instructions stored on a device readable storage medium, such as a non-signal storage device, for execution by a processor. A storage device may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a storage device is not a signal, and "non-transitory" includes all media except signal media.

Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Program code for performing operations may be written in any combination of one or more programming languages. The program code may execute entirely on a single device, partly on a single device, as a stand-alone software package, partly on a single device and partly on another device, or entirely on other devices. In some cases, the devices may be connected by any type of connection or network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected by other devices (e.g., by the internet using an internet service provider), by a wireless connection (e.g., near field communication), or by a hardwired connection (e.g., by a USB connection).

Example embodiments are described herein with reference to the accompanying drawings, which illustrate example methods, apparatus, and program products in accordance with various example embodiments. It will be understood that acts and functions may be implemented, at least in part, by program instructions. These program instructions may be provided to a processor of a device, special purpose information processing device, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the device, implement the functions/acts specified.

Note that the values provided herein should be construed to include equivalent values as indicated by the use of the term "about". Equivalent values will be obvious to a person skilled in the art, but at least include the values obtained by ordinary rounding of the last significant digit.

It is worthy to note that although specific blocks are used in the figures, and a specific order of blocks has been shown, these are non-limiting examples. In some cases, two or more blocks may be combined, one block may be split into two or more blocks, or some blocks may be reordered or reorganized as appropriate, as the explicitly illustrated examples are for descriptive purposes only and should not be construed as limiting.

As used herein, the singular forms "a", "an" and "the" may be construed to include the plural forms "one or more", unless expressly specified otherwise.

The disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limiting. Many modifications and variations will be apparent to those of ordinary skill in the art. The example embodiments were chosen and described in order to explain the principles and practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.

Thus, although the illustrative example embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the description is not limiting, and that various other changes and modifications may be affected therein by one skilled in the art without departing from the scope or spirit of the disclosure.
