Conditional reset image sensor for threshold monitoring

Document No.: 1492812    Publication date: 2020-02-04

Reading note: This technology, "Conditional reset image sensor for threshold monitoring," was designed and created by T. Vogelsang, M. Guidash, Song Xue, M. Smirnov, C. M. Smith, J. Endsley, and J. E. Ha on 2014-03-14. Its main content is summarized as follows: Various embodiments of the present invention relate to a threshold-monitored, conditional-reset image sensor. An image sensor architecture with multi-bit sampling is implemented within an image sensor system. A pixel signal generated in response to light incident on a photosensitive element is converted to a multi-bit digital value representing the pixel signal. If the pixel signal exceeds a sampling threshold, the photosensitive element is reset. During an image capture period, digital values associated with pixel signals that exceed the sampling threshold are accumulated into the image data.

1. An integrated-circuit image sensor, comprising:

a plurality of photosensitive elements that accumulate charge in response to incident light;

a shared floating diffusion enabling readout of each of said photosensitive elements; and

row lines each associated with a respective row of the photosensitive elements, and column lines each associated with a respective column of the photosensitive elements, such that activating a selected one of the row lines and activating a selected one of the column lines switchably couples the shared floating diffusion to a respective one of the photosensitive elements.

2. The integrated-circuit image sensor of claim 1 further comprising: a plurality of switching circuits, each switching circuit switchably coupling a respective one of the photosensitive elements to the shared floating diffusion and having a column input coupled to a respective one of the column lines and a row input coupled to a respective one of the row lines.

3. The integrated-circuit image sensor of claim 1 further comprising: row logic operable in a first mode to sequentially activate each of the row lines to enable readout of the rows of photosensitive elements in respective successive time periods.

4. The integrated-circuit image sensor of claim 1 wherein the row logic is further operable in a second mode to activate two or more of the row lines in parallel to enable readout of two or more of the rows of the photosensitive elements in parallel.

5. The integrated-circuit image sensor of claim 1 further comprising: readout circuitry to read out a signal corresponding to a charge level of the shared floating diffusion and to activate one or more of the column lines if the charge level of the shared floating diffusion exceeds a threshold.

6. The integrated-circuit image sensor of claim 1 wherein the plurality of photosensitive elements comprises a subset of photosensitive elements within the integrated-circuit image sensor, the subset comprising at least four photosensitive elements arranged in two rows and two columns.

7. A method of operation within an integrated circuit image sensor, the method comprising:

activating a selected row line of a plurality of row lines, each of the row lines associated with a respective row of photosensitive elements within a group of pixels having a shared floating diffusion;

activating a selected column line of a plurality of column lines, each of the column lines associated with a respective column of photosensitive elements within the pixel group; and

switchably coupling a first photosensitive element to the shared floating diffusion, the first photosensitive element included in the row of photosensitive elements associated with the selected row line and included in the column of photosensitive elements associated with the selected column line.

8. A method of operation within an integrated circuit image sensor, the method comprising:

activating a plurality of row line-column line combinations during a first period to determine charge levels accumulated within a subset of two or more photosensitive elements in a group of at least four photosensitive elements in a partial readout operation; and

conditionally activating, based on results of the partial readout operation, the plurality of row line-column line combinations during a second period to enable generation, in a full readout operation, of a digital value representative of a total charge level accumulated within the subset of photosensitive elements.

9. An integrated-circuit image sensor, comprising:

an array of photosensitive elements;

row lines each associated with a respective row of the photosensitive elements, and column lines each associated with a respective column of the photosensitive elements; and

control circuitry to: (i) activate a plurality of row line-column line combinations during a first period to determine, in a partial readout operation, charge levels accumulated within a subset of two or more photosensitive elements of a group of at least four of the photosensitive elements, and (ii) conditionally, based on results of the partial readout operation, activate the plurality of row line-column line combinations during a second period to enable generation, in a full readout operation, of a digital value representative of a total charge level accumulated within the subset of photosensitive elements.

10. A method of operation within an integrated circuit image sensor, the method comprising:

switchably coupling a first amplifier to a pixel during a first pixel readout operation; and

switchably coupling a second amplifier to the pixel during a second pixel readout operation, the second amplifier producing a different gain than the first amplifier.

Technical Field

The present disclosure relates to the field of electronic image sensors, and more particularly to a sampling architecture for use in such image sensors.

Background

Digital image sensors, such as CMOS or CCD sensors, include a plurality of light-sensitive elements ("photosensors"), each configured to convert photons incident on the photosensor ("captured light") into an electrical charge. The charge can then be converted into image data representing the light captured by each photosensor. The image data includes a digital representation of the captured light and may be manipulated or processed to produce a digital image that can be displayed on a viewing device. The image sensor is implemented in an integrated circuit ("IC") having a physical surface that may be divided into a plurality of pixel regions (e.g., one or more photosensors and accompanying control circuitry) configured to convert light into electrical signals (charge, voltage, current, etc.). For convenience, a pixel region within an image sensor may also be referred to as an image pixel ("IP"), and an aggregate of pixel regions or image pixels will be referred to as an image sensor region. Image sensor ICs also typically include areas outside the image sensor region, such as certain types of control, sampling, or interface circuitry. Most CMOS image sensors contain A/D (analog-to-digital) circuitry to convert the pixel electrical signals into digital image data. The A/D circuitry may be one or more ADCs (analog-to-digital converters) located within or at the periphery of the image sensor region.

Drawings

The various embodiments disclosed herein are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:

FIG. 1 illustrates a cross-section of a portion of an image sensor in accordance with one embodiment;

FIG. 2 illustrates a portion of array circuitry of an analog pixel image sensor having multiple pixel signal thresholds according to one embodiment useful in, for example, the layout of FIG. 1;

FIG. 3 illustrates an example image sensor readout circuit configured to convert pixel signals to multi-bit digital values in accordance with one embodiment useful, for example, with the embodiments of FIGS. 1 and 2;

FIG. 4 illustrates an example circuit diagram embodiment of an image sensor system having a multi-bit architecture in accordance with one embodiment using, for example, the cross-section of FIG. 1 and the circuitry of FIGS. 2 and 3;

FIG. 5 illustrates another example circuit block diagram of an image sensor system architecture having an array of readout circuits located at the periphery of an IP array in accordance with one embodiment using, for example, the cross-section of FIG. 1 and the circuitry of FIGS. 2 and 3;

FIG. 6a illustrates a top view of a pixel array IC in an exemplary dual-layer alternative to the image sensor system architectures of FIGS. 4 and 5, in accordance with one embodiment using, for example, the array circuitry of FIG. 2;

FIG. 6b illustrates a top view of a preprocessor IC in an exemplary dual-layer alternative to the image sensor system architectures of FIGS. 4 and 5, according to one embodiment using readout circuitry such as that of FIG. 3;

FIG. 6c illustrates a partial cross-section of the pixel array IC of FIG. 6a and the pre-processor IC of FIG. 6b in an example two-layer image sensor system, according to one embodiment;

FIG. 7 illustrates operation of an image sensor readout circuit, such as the readout circuit of FIG. 3, in accordance with one embodiment;

FIG. 8 illustrates data flow in an image capture system according to one embodiment useful with the system described herein;

FIG. 9 illustrates various temporal sampling strategies for use with an image sensor readout circuit, such as the readout circuit of FIG. 3, in accordance with one embodiment;

FIG. 10 illustrates one embodiment of a modified 4-transistor pixel in which performing a lossless over-threshold detection operation enables a conditional reset operation in conjunction with correlated double sampling;

FIG. 11 is a timing diagram illustrating exemplary pixel cycles within the progressive readout pixel of FIG. 10;

FIGS. 12 and 13 illustrate exemplary electrostatic potential diagrams for the photodiode, transfer gate, and floating diffusion structures of FIG. 10 below their corresponding schematic cross-sectional views;

FIG. 14 illustrates one embodiment of an image sensor 300 with an array of progressive readout pixels;

FIGS. 15A-15C illustrate alternative column readout circuit embodiments that may be employed in conjunction with the progressive readout pixels described with reference to FIGS. 10-14;

FIG. 16 illustrates a four-pixel, shared floating diffusion image sensor architecture in which the row and column transfer-gate control lines disclosed in the embodiments of FIGS. 10-14 can be applied in a manner that achieves multiple decimation modes without requiring additional array traversal control lines;

FIG. 17 illustrates an exemplary physical layout of the four-pixel architecture shown in FIG. 16;

FIGS. 18A and 18B illustrate Color Filter Array (CFA) patterns that may be employed with respect to the four-pixel architecture of FIGS. 16 and 17;

FIGS. 19 and 20 set forth timing diagrams illustrating exemplary stages of a full-resolution pixel readout operation and a binning-mode pixel readout operation, respectively, within an image sensor incorporating the 2x2 four-pixel arrangement shown in FIG. 16;

FIG. 21 illustrates an alternative binning strategy that may be performed on 4x1 quad-pixel blocks in conjunction with a color filter array;

FIG. 22 illustrates a column interconnect architecture that may be applied for voltage binning to enable analog signal readout from a selected column of a 4x1 quad-pixel block;

FIG. 23 illustrates an exemplary timing diagram for a binned-mode readout operation within the 4x1 quad-pixel architecture of FIGS. 21 and 22;

FIG. 24 illustrates a more detailed embodiment of an image sensor having an array of 4x1 quad-pixel blocks operable in the decimation (binning) mode described with reference to FIGS. 21-23;

FIGS. 25A-25C illustrate one embodiment of a selectable-gain (or multi-gain) readout circuit that may be used to implement a high-gain partial readout and an approximately unity-gain full readout within a pixel column;

FIG. 26 sets forth an exemplary timing diagram illustrating the alternating application of a common-source gain configuration and a source-follower gain configuration during hard reset, integration, partial readout, and (conditional) full readout operations within the multi-gain architecture of FIG. 25A;

FIG. 27 illustrates an alternative embodiment of a selectable gain (or multi-gain) readout circuit that can be used to achieve both a high-gain partial readout and an approximately unity-gain full readout within a pixel column;

FIG. 28 illustrates one embodiment of an image sensor having a pixel array disposed between upper and lower readout circuits;

FIG. 29 illustrates one embodiment of an imaging system having a conditional reset image sensor along with an image processor, memory, and display;

FIG. 30 illustrates an exemplary sequence of operations that may be performed within the imaging system of FIG. 29 in conjunction with image processing operations;

FIG. 31 illustrates an exemplary log-log plot of Exposure Value (EV) versus light intensity (in lux) for a sum-find image reconstruction technique and a weighted-average image reconstruction technique;

FIG. 32 depicts an example of a first HDR mode of operation in the context of 1080P video imaging at a 60Hz frame rate;

FIG. 33 depicts an example of a second HDR mode of operation in the context of 1080P video imaging at a 60Hz frame rate;

FIG. 34 illustrates an exemplary mode of operation in which the dynamic range is further extended without disturbing the integration time ratio indicated in FIG. 33;

FIG. 35 demonstrates an exemplary "preview mode" that may be suitable, for example, when the sensor is operating in a reduced power/data rate mode to enable a user to compose an image, set zoom, etc.;

FIG. 36 illustrates an exemplary 30Hz capture mode that implements (for the same imager example as shown in FIGS. 32-35) up to 8 sub-exposure captures per frame period and additional flexibility in the exposure strategy;

FIG. 37 illustrates examples of two different 120Hz timing possibilities, including: (i) a 4ms, 4ms sequence with equal sub-exposure time and one conditional read/reset for each frame; and (ii) a 0.75ms, 4ms sequence with one conditional read/reset for each frame;

FIG. 38 is an exemplary timing diagram illustrating an interleaved capture pattern that may be applied to strictly grouped sub-exposures;

FIG. 39 is an exemplary timing diagram for a mode similar to that shown in FIG. 32 (i.e., all sub-exposure periods are 4ms long), but in which 4 sub-exposure periods occur in parallel;

FIG. 40 is an exemplary timing diagram for operating a sensor in a data rate mode in which the maximum row rate is fast enough to support approximately 4.5 array scans per frame;

FIG. 41 illustrates a block diagram for an image sensor capable of operating with a variable exposure schedule and thus facilitating performance according to the variable timing arrangement illustrated in FIG. 40;

FIG. 42 illustrates an exemplary line buffer display format that provides flexibility for various sensor operating modes;

FIG. 43 depicts an exemplary schedule for an image sensor having a higher ADC bandwidth than the channel rate so that scanning can be expedited when data can be successfully compressed and more than 4 exemplary scans can be completed in one frame;

FIG. 44 illustrates an exemplary gating window bounded by two unconditional readout passes such that the gated light is dominant and the ambient light integration time is uniform (equal) for all pixels;

FIG. 45 illustrates an exemplary exposure timing diagram for a spatial hybrid exposure mode; and

FIG. 46 illustrates an exemplary sub-exposure histogram sequence and exposure optimization algorithm.

Detailed Description

In some image sensors, electrical information representing the photon response and resulting from light incident on a pixel region (referred to herein as the "pixel signal") is converted into digital image data values by readout circuitry. The readout circuitry may reside within the image sensor or may be located external to the image sensor. In some approaches, readout circuitry may be located within the image sensor for use with one or more pixel regions adjacent or proximate to the readout circuitry. For readout circuitry located outside the image sensor, pixel signals of one or more pixel regions associated with the readout circuitry may be transferred from the pixel regions to the readout circuitry.

Each readout circuit samples a pixel region, receives a pixel signal from the sampled pixel region, and converts the pixel signal into a multi-bit digital value representing the pixel signal. If the pixel signal or a digital value representing the pixel signal exceeds a sampling threshold, the pixel signal stored at the associated pixel region is reset (e.g., by resetting the photosensitive element associated with the pixel region). If the pixel signal or the digital value does not exceed the sampling threshold, the pixel signal stored at the pixel region is not reset. Herein, sampling a pixel region and resetting the pixel signal at the pixel region only when the pixel signal exceeds a sampling threshold is referred to as "lossless sampling with conditional reset".
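The following minimal Python sketch illustrates this lossless-sampling-with-conditional-reset behavior. It is purely illustrative: the idealized one-LSB-per-photon ADC, the photon counts, and the threshold of 20 are assumptions chosen for clarity, not values taken from the embodiments described here.

```python
def sample_with_conditional_reset(accumulated_charge, threshold):
    """Return (digital_value_or_None, charge_after_sampling)."""
    digital_value = accumulated_charge            # idealized ADC: 1 LSB per photon
    if digital_value >= threshold:
        return digital_value, 0                   # over threshold: read out and reset
    return None, accumulated_charge               # under threshold: keep integrating losslessly

charge = 0
image_value = 0
for photons in (4, 9, 12, 3):                     # photons arriving in four sampling periods
    charge += photons
    reading, charge = sample_with_conditional_reset(charge, threshold=20)
    if reading is not None:
        image_value += reading                    # accumulate only over-threshold conversions
print(image_value, charge)                        # -> 25 3 (25 accumulated, 3 still integrating)
```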

Image sensor overview

FIG. 1 illustrates a partial cross-section of an image sensor 25 useful in one embodiment. In the image sensor 25, light that passes through the microlens array 10 and the color filter array 12 (useful for color imaging) is incident on the silicon portion 20 of the image sensor. The use of microlenses (or other focusing optics) and color filters is optional and is shown here for illustration purposes only. The silicon 20 contains a photodiode (not shown) for collecting charge generated by photons absorbed in the silicon, and an access transistor (also not shown) for operating the photodiode. The pixel array IC wiring 14 provides connections for routing signals and supplying voltages within the array. As shown, the image sensor 25 is a backside-illuminated (BSI) sensor, because light enters the silicon from the side of the integrated circuit opposite the wiring layers and primary active circuit structure. Optionally, the pixel array IC wiring 14 may be arranged between the color filter array 12 and the silicon 20 (with the primary active circuit structure within the "top" of the silicon as oriented in FIG. 1) in order to achieve a front-side-illuminated (FSI) sensor.

The image sensor 25 includes a plurality of IPs ("image pixels"), shown as IP1 through IP3, on which light collected by the respective lenses of the microlens array 10 is incident. Each IP comprises one or more photodiodes embedded within the silicon 20. At least some of the photons entering the silicon 20 are converted into electron-hole pairs in the silicon, and the resulting electrons (or holes, in alternative embodiments) are collected by the IP. For the sake of brevity, the description herein refers to this process as capturing light at the IP and converting the light into image data. Each IP of the image sensor represents a portion of the surface area of the image sensor, and the IPs may be organized into various arrays of columns and rows. In CMOS or CCD image pixel technology, each IP (e.g., each photodiode) converts light incident on the IP into electrical charge and includes readout circuitry configured to convert the electrical charge into a voltage or current. In one embodiment, the light captured by each IP of the image sensor represents one pixel of image data for the associated digital image, although in other embodiments image data from multiple IPs is combined to represent a smaller number (one or more) of pixels (downscaling).

The image sensor 25 may include components external to the IP array. Similarly, portions of the IP array may include components that do not convert light into electrical charge. The area defined by the IPs in the aggregate is referred to as the image sensor region. As described herein, an image sensor may include amplifiers, analog-to-digital converters ("ADCs"), comparators, controllers, counters, accumulators, registers, transistors, photodiodes, and the like. In different architectures, some of these components may be located within or outside the image sensor region, and some components may be located on an accompanying integrated circuit. In these embodiments, lenses (such as the lenses in the microlens array 10) may be configured to direct light toward the actual light-sensing elements within the IPs rather than onto, for example, amplifiers, comparators, controllers, or other components.

As noted above, the image sensor may include an array of multiple IPs. Each IP captures and stores a corresponding charge in response to light (e.g., one or more photons). In one embodiment, when the IP is sampled, if a pixel signal representing the charge stored at the IP exceeds a sampling threshold, the pixel signal is converted to a digital value representing the pixel signal and the charge stored by the IP is reset. Alternatively, when the IP is sampled, a pixel signal representing the charge stored at the IP is converted to a digital value representing the pixel signal, and if the digital value exceeds a sampling threshold, the charge stored by the IP is reset. In other embodiments, the analog-to-digital conversion is started and, once enough of the conversion has been completed to determine whether the threshold has been exceeded, a determination is made whether to continue the conversion. For example, in a successive approximation register ("SAR") ADC, if the threshold is aligned with a most-significant-bit pattern, then as soon as that pattern is resolved a determination can be made either to continue the conversion and reset the pixel, or to stop the conversion. The determination of whether the pixel signal or the digital value representing the pixel signal exceeds the sampling threshold may be made using a comparator configured to compare the pixel signal or the digital value with the sampling threshold.

FIG. 2 illustrates an analog pixel image sensor having multiple pixel signal thresholds according to one embodiment. The image sensor of FIG. 2 is a CMOS sensor and includes an IP array 40. The IP array may include any number of columns and rows, with any number of IPs per column and per row. In FIG. 2, an IP column 50 is highlighted; IP column 50 is representative of all or a subset of the IP columns in the IP array. IP column 50 includes a plurality of IPs communicatively coupled via column line 55. IP 60 is highlighted in FIG. 2; IP 60 is representative of the IPs in the IP array.

IP 60 includes a photodiode 65 along with control elements that enable the photodiode to be precharged in preparation for exposure and then sampled after exposure. In operation, transistor 70 is turned on to couple the cathode of the photodiode to a voltage source and thereby "precharge" the cathode of the photodiode to a precharge voltage. At or before the beginning of the exposure period, transistor 70 is turned off. With transistor 70 off, the cathode voltage is incrementally discharged in response to photon irradiation, reducing the photodiode potential V_DET in proportion to the amount of light detected. At the end of the exposure period, access transistor 72 is turned on to enable a signal representing the photodiode potential to be amplified/driven onto column line 55 via follower transistor 74 as pixel signal 80.

ADC 85 is communicatively coupled to IP column 50 via column line 55. In the embodiment of FIG. 2, the ADC is located at the edge of the pixel array 40 and may be located within or external to the image sensor on which the IP array is arranged. The ADC receives pixel signal 80 (representing the analog photodiode potential) from IP 60. The ADC digitizes the pixel signal to generate a 3-bit digital value ("Pix[2:0]") representing the pixel signal. The ADC includes 7 pixel thresholds, Threshold1 through Threshold7 (referred to herein as "V_T1 to V_T7"). If the pixel signal is less than V_pre but greater than V_T1, the ADC converts the pixel signal to the digital value "000". Pixel signals less than V_T1 but greater than V_T2 are converted to the digital value "001", pixel signals between V_T2 and V_T3 are converted to the digital value "010", and so on, until pixel signals less than V_T7 are converted to "111".

In the embodiment of FIG. 2, the potential difference between successive pixel thresholds is approximately the same (e.g., V_T3 - V_T4 ≈ V_T5 - V_T6); in other words, the pixel thresholds are linearly distributed between V_T1 and V_T7. Additionally, in the embodiment of FIG. 2, the potential difference between V_pre and V_T1 is greater than the potential difference between successive pixel thresholds (e.g., V_pre - V_T1 > V_T3 - V_T4), although in other embodiments all steps are equal. Selecting V_T1 so that V_pre - V_T1 > V_T3 - V_T4 reduces the effect of, for example, dark noise when sampling the IP. In the embodiment of FIG. 2, the potential difference between V_T7 and V_floor may also be greater than the potential difference between successive pixel thresholds (e.g., V_T7 - V_floor > V_T3 - V_T4). Finally, rather than linear threshold spacing, a given embodiment may space the thresholds exponentially, e.g., with each threshold spacing twice the next threshold spacing. For a system that accumulates multiple ADC samples to form an image, the exponentially spaced conversions are converted to linear values prior to accumulation.

V_floor represents a pixel saturation threshold below which the cathode voltage of the photodiode 65 no longer discharges linearly in response to photon illumination. Graph 95 shows the conversion of pixel signals to digital values for pixel signals within the linear sensitivity region 90. It should be noted that the maximum number of detectable photons (e.g., the pixel saturation point) is proportional to the capacitance of the photodiode, and thus proportional to its physical size. As a result, in conventional sensor designs, the footprint of the photodiode is determined by the dynamic range required in a given application and does not scale significantly as process geometries decrease.

During capture of an image, in one embodiment, the IPs of a given row (or rows) in IP column 50 and in each other column of IP array 40 are sampled in turn, and the pixel signals associated with each IP are converted to digital values using the one or more ADCs associated with each column. During an image capture period, the digital values output by the ADC are accumulated (conditionally accumulated, in some embodiments, as explained above) and stored. In addition to the IP illustrated in FIG. 2, other types and configurations of IP may be used in the image sensor system. For example, arrangements of transistors other than that of transistors 70, 72, and 74 may be used. Additionally, although one ADC 85 is shown in FIG. 2 for the IP column 50, in other embodiments more than one ADC may be used per IP column, with different ADCs serving different portions of the array rows for that column. Additional combinations of ADCs (in the form of readout circuits) and IPs are described in more detail below. Finally, the output of the ADC (e.g., Pix[2:0] in the embodiment of FIG. 2) can be of any multi-bit length, and the pixel signal can be compared against any number of thresholds distributed in any manner between V_pre and V_floor.
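To make the threshold-based conversion concrete, the Python sketch below models the 3-bit conversion described for FIG. 2. The voltage values (V_pre = 3.0 V, V_floor = 1.0 V, and seven evenly spaced thresholds with larger guard bands at the top and bottom) are assumptions chosen for illustration; the embodiment above does not specify numeric levels.

```python
# Assumed levels: V_pre = 3.0 V, V_floor = 1.0 V, V_T1 .. V_T7 evenly spaced.
V_PRE, V_FLOOR = 3.0, 1.0
THRESHOLDS = [2.6, 2.4, 2.2, 2.0, 1.8, 1.6, 1.4]         # V_T1 .. V_T7
assert V_FLOOR < min(THRESHOLDS) < max(THRESHOLDS) < V_PRE

def convert_3bit(pixel_voltage):
    """Count how many thresholds the falling pixel voltage has crossed (0..7)."""
    crossed = sum(1 for v_t in THRESHOLDS if pixel_voltage < v_t)
    return crossed                                        # 0 -> "000", 7 -> "111"

print(format(convert_3bit(2.9), "03b"))                   # between V_pre and V_T1 -> "000"
print(format(convert_3bit(2.5), "03b"))                   # between V_T1 and V_T2 -> "001"
print(format(convert_3bit(1.2), "03b"))                   # below V_T7 -> "111"
```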

Image sensor system with multi-bit sampling and conditional reset

FIG. 3 illustrates an example image sensor readout circuit configured to convert a pixel signal to a multi-bit digital value in accordance with one embodiment. The embodiment of FIG. 3 illustrates IP 100, IP memory 116, and readout circuit 110, which includes ADC/comparator circuit 112 (hereinafter "ADC/comparator") and adder 114. It should be noted that the modules of FIG. 3 may include additional, fewer, and/or different components in other embodiments. For example, the ADC and comparator may be implemented as separate components, and the adder may be located outside the readout circuit.

IP 100 denotes an IP in an image sensor, and may be, for example, IP 60 of FIG. 2. IP 100 receives one or more control signals, for example, from external control logic. The control signals may enable the IP to start image capture, e.g., by resetting the IP to V_pre and enabling exposure of the IP's photosensitive element to light so that charge is stored relative to V_pre. Similarly, the control signals may enable the IP to end image capture, for example, by disabling exposure of the photosensitive element of the IP to light after an image capture period has elapsed. The control signals may also enable a pixel signal to be output by the IP and subsequently converted by a readout circuit to a digital value representing the pixel signal (referred to herein as "sampling the IP" or "sampling the pixel signal"). As described above, the pixel signal may be a representation of the integrated charge (e.g., a source-follower voltage, an amplified voltage, or a current having a component proportional to the integrated charge).

IP 100 receives a reset signal, for example, from external control logic. The reset signal resets the charge stored by the IP to V_pre, for example, at the beginning of an image capture cycle. The IP also receives a conditional reset signal from ADC/comparator 112 (in some circuits, the conditional reset and the initialization reset are provided using common circuitry). The conditional reset signal resets the charge stored by the IP in response to the pixel signal exceeding a sampling threshold when the IP is sampled, for example, during image capture. It should be noted that in other embodiments the conditional reset signal is received from a different entity. In one embodiment, the ADC/comparator may determine that the pixel signal exceeds the sampling threshold and may enable the external control logic to output a conditional reset signal to the IP; in such an embodiment, the reset signal (a row-wise signal) and the conditional reset signal (a column-wise signal) may be ANDed by the IP to initiate all resets. For the sake of brevity, the remainder of the description is limited to examples in which the ADC/comparator provides the conditional reset signal to the IP.

The readout circuit 110 receives, for example, a threshold signal, a sampling signal (or "sampling enable signal"), a comparison signal (or "comparison enable signal"), a remainder signal (or "remainder enable signal"), and a reset signal from an external control logic, and receives a pixel signal from the IP 100. An IP memory element 116 corresponding to IP 100 receives a read signal that selects IP 100 for reading/writing by adder 114 and for external reading. The ADC/comparator 112 samples the IP 100 in response to receiving one or more sampling signals. During image capture, the ADC/comparator receives sampled signals at various sampling periods, e.g., periodically, or according to a predefined sampling period pattern (referred to herein as a "sampling strategy"). Alternatively, the sampling signal received by the ADC/comparator may include a sampling policy, and the ADC/comparator may be configured to sample the IP based on the sampling policy. In other embodiments, the IP receives one or more sampled signals and outputs a pixel signal based on the received sampled signals. In still other embodiments, the IP outputs the pixel signal periodically or according to a sampling strategy, or the ADC/comparator samples the pixel signal independently of the received sampling signal periodically or according to a sampling strategy. The ADC/comparator may request the pixel signal from the IP before sampling the pixel signal from the IP.

During sampling of the IP, the ADC/comparator 112 receives the pixel signal from the IP and converts the pixel signal to a multi-bit digital value representative of the pixel signal (optionally, in some embodiments, based on the pixel signal exceeding a sampling threshold). If the pixel signal exceeds the sampling threshold, the ADC/comparator outputs a conditional reset signal for resetting the charge stored at the IP. If the pixel signal does not exceed the sampling threshold, the ADC/comparator does not output a conditional reset signal for resetting the charge stored at the IP. The sampling threshold may be changed during image capture and received via a threshold signal, or may be predetermined or preset for a given image capture. One sampling threshold may be used during multiple image captures, different sampling thresholds may be used for different image captures, and multiple sampling thresholds may be used during a single image capture. In one embodiment, the sampling threshold is changed in response to a detected change in light conditions (e.g., the sampling threshold may be decreased in response to low light conditions and may be increased in response to high light conditions).
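As one illustration of the threshold-adjustment idea in the preceding paragraph, the sketch below lowers the sampling threshold in low light and raises it in high light. The scene-level estimate, comparison points, and scale factors are all hypothetical; the embodiments above do not prescribe any particular adaptation rule.

```python
# Hypothetical adaptation rule: decrease the threshold in low light, increase it
# in high light. Levels and factors below are assumptions for illustration only.
def adapt_threshold(scene_level, base_threshold=20):
    """scene_level: rough photons-per-sampling-period estimate for the scene."""
    if scene_level < base_threshold // 4:
        return max(1, base_threshold // 2)   # low light: lower threshold, reset sooner
    if scene_level > base_threshold * 4:
        return base_threshold * 2            # bright light: raise threshold, reset less often
    return base_threshold

print(adapt_threshold(2), adapt_threshold(50), adapt_threshold(200))   # -> 10 20 40
```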

In one embodiment, the sampling threshold is an analog signal threshold. In this embodiment, the ADC/comparator 112 comprises an analog comparator and compares the pixel signal to a sampling threshold to determine whether the pixel signal exceeds the sampling threshold. If the pixel signal includes a voltage representing the charge stored by the IP 100, then the sampling threshold is exceeded if the pixel signal is below the sampling threshold. Taking the embodiment of FIG. 2 as an example, if the sampling threshold of the ADC/comparator is Threshold4, then the pixel signal will exceed the sampling threshold only if the pixel signal comprises a voltage lower than the voltage associated with Threshold4.

In one embodiment, the sampling threshold is a digital signal threshold. In this embodiment, the ADC/comparator 112 includes a digital comparator, and first converts the pixel signal into a digital value representing the pixel signal. The ADC/comparator then compares the digital value to a sampling threshold to determine whether the pixel signal exceeds the sampling threshold. Taking the embodiment of FIG. 2 as an example, for the sampling threshold "101", if the ADC/comparator converts the pixel signal to the digital value "001" (indicating that the pixel signal is between Threshold1 and Threshold2), the pixel signal does not exceed the sampling threshold and the conditional reset signal is not output. However, if the ADC/comparator converts the pixel signal to the digital value "110" (indicating that the pixel signal is between Threshold6 and Threshold7), the pixel signal exceeds the sampling threshold and a conditional reset signal is output.

In another embodiment, the sampling threshold is a digital signal threshold that can be estimated prior to a full digital conversion of the pixel signal. This may be advantageous in some embodiments or use cases to allow for faster conditional reset of pixels and/or power savings by avoiding unnecessary ADC operations. For example, with successive approximation register ADCs, multiple clock cycles are used to resolve a digital representation of the pixel signal. The first clock cycle resolves the most significant bit, the second clock cycle resolves the next most significant bit, and so on until all bit positions have been resolved. Taking the embodiment of FIG. 2 as an example, for a sampling threshold of "100", after the first SAR ADC clock cycle, a determination may be made whether the threshold is met. For the sampling threshold "110", after the second SAR ADC clock cycle, a determination may be made whether the threshold is met. For embodiments having a bit depth of, for example, 6 bits or 8 bits, making a reset determination after 1 or 2 conversion cycles may result in significant time/power savings, which may be achieved by selecting a sampling threshold having one or more LSBs of 0.

In one embodiment, a row-wise comparison signal is supplied to each ADC/comparator "compare" signal input and signals the ADC/comparator to perform the comparison on the appropriate clock cycle. When the compare signal is asserted, the comparison is performed based on the current state of the analog-to-digital conversion. If the comparison at the ADC/comparator 112 meets the threshold, a conditional reset signal is asserted to IP 100 and adder 114, and the SAR ADC continues to convert the pixel signal. If the threshold is not met, the conditional reset signal is not asserted and may be used in conjunction with the compare signal to gate the clock signal of the SAR ADC and terminate the conversion.
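The following sketch models the early threshold decision in a SAR conversion. It is a simplified behavioral model, not the circuit described above: a real SAR ADC compares the pixel signal against a DAC output, whereas this model resolves an integer value MSB-first. The 3-bit depth and the threshold codes mirror the FIG. 2 example.

```python
def sar_convert_with_early_decision(value, bits=3, threshold_code=0b100):
    """Resolve bits MSB-first; stop once the over/under-threshold decision is final."""
    code = 0
    for i in reversed(range(bits)):                  # bit positions, MSB first
        if value >= code | (1 << i):                 # SAR trial: keep the bit if value covers it
            code |= 1 << i
        # If every threshold bit below position i is zero, the unresolved low bits
        # cannot change the comparison, so the decision is already final here.
        if (threshold_code >> i) << i == threshold_code:
            return code >= threshold_code, bits - i  # (over_threshold, cycles used)
    return code >= threshold_code, bits

print(sar_convert_with_early_decision(5, threshold_code=0b100))   # (True, 1): decided after the MSB cycle
print(sar_convert_with_early_decision(5, threshold_code=0b110))   # (False, 2): decided after two cycles
```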

The ADC/comparator 112 outputs to the adder 114 a digital value (referred to herein as a "digital conversion") representing the pixel signal received by the ADC/comparator. The ADC/comparator 112 may output a digital conversion in response to the pixel signal associated with the digital conversion exceeding a sampling threshold. The conditional reset signal may be used as an enable signal, signaling adder 114 to load the digital conversion and add it to the IP memory 116 location corresponding to IP 100 (which, in this embodiment, is selected from a plurality of such addresses by the address selection of the sense line). In other embodiments, the ADC/comparator outputs a digital conversion during each sampling of IP 100, regardless of whether the pixel signal associated with the digital conversion exceeds a sampling threshold. In these embodiments, the adder may be configured to accumulate digital conversions associated with pixel signals that exceed the sampling threshold and disregard digital conversions associated with pixel signals that do not exceed the sampling threshold. Alternatively, for example, if the threshold is set to "001" in FIG. 2, the adder may unconditionally add the digital conversion to IP memory 116 each time IP 100 is read out, while still producing the correct result.

In one embodiment, the ADC/comparator 112 also outputs a digital conversion in response to receiving a remainder signal assertion (without the compare signal being asserted). The remainder signal assertion is associated with the end of image capture and enables the ADC/comparator to output a full digital conversion to the adder 114 regardless of whether the pixel signal associated with the digital conversion exceeds the sampling threshold and asserts a conditional reset. The remainder signal can prevent loss of image information associated with light that is received by IP 100 but whose pixel signal does not exceed a threshold at the end of the capture period. Without it, if the pixel signal representing such received light did not exceed the sampling threshold, the ADC/comparator would not output the digital conversion associated with the pixel signal and would not reset the charge stored by the IP with a conditional reset signal (which is also triggered by the assertion of the remainder signal). In embodiments where the ADC/comparator outputs a digital conversion to the adder regardless of whether the pixel signal associated with the digital conversion exceeds the sampling threshold, the adder may receive the remainder signal and may be configured to accumulate, in response, the digital conversion associated with the pixel signal received at the end of the capture period.

Adder 114 is configured to accumulate the digital conversions received during the capture period. As discussed above, in embodiments where the ADC/comparator 112 outputs a digital conversion only if the associated pixel signal exceeds the sampling threshold, the adder adds all received digital conversions (including the additional digital conversion output by the ADC/comparator in response to receiving the remainder signal) into IP memory 116. In embodiments where the ADC/comparator outputs a digital conversion for each received pixel signal, the adder accumulates only the digital conversions associated with pixel signals that exceed the sampling threshold, plus the digital conversions output by the ADC/comparator in response to receiving the remainder signal, adding them to IP memory 116; such an embodiment requires the adder to know when the pixel signal exceeds the sampling threshold and when the remainder signal is received, and for the sake of brevity it is not discussed further here.

Adder 114 receives reset/add control signaling, for example, from external control logic. In response to receiving a reset signal (e.g., at the beginning of an image capture cycle), the adder stores all zeros to the selected IP memory location 116, resetting the accumulation of received digital conversions that forms the image data for that location.

In an alternative embodiment, the adder is located external to the readout circuit 110. For example, the ADC/comparator may output a stream of conversions over a digital channel (e.g., multiplexed with conversions from other ADCs) to a separate circuit that provides the accumulation function. In this case, the ADC/comparator must also output a symbol indicating "no conversion," which may be 0. One possibility is for circuitry in the digital channel interface (e.g., PHY 134 in FIG. 4) to encode the digital conversions to reduce bandwidth. In one embodiment, "no conversion" is output as "00", a conversion that exceeds the highest ADC threshold is output as "01", and all other ADC conversions are output as "1xxxxx", where each x represents one of the resolved bits of the ADC conversion and the number of x positions is equal to the bit depth of the ADC.
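A possible encoding along these lines is sketched below. The 5-bit depth and the exact rule for the "01" symbol (treating a full-scale conversion as having exceeded the highest ADC threshold) are assumptions for illustration, not a definitive specification of the channel format.

```python
ADC_BITS = 5                                  # assumed bit depth for illustration
MAX_CODE = (1 << ADC_BITS) - 1

def encode_conversion(conversion):
    """Encode one ADC result for the digital channel; None means 'no conversion'."""
    if conversion is None:
        return "00"                           # threshold not met, nothing to transmit
    if conversion >= MAX_CODE:
        return "01"                           # conversion exceeded the highest ADC threshold
    return "1" + format(conversion, f"0{ADC_BITS}b")   # "1" prefix + resolved ADC bits

print(encode_conversion(None), encode_conversion(31), encode_conversion(9))   # -> 00 01 101001
```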

In one embodiment, the IP is configured to output a pixel signal and receive a conditional reset on the same line. In this embodiment, the IP and the ADC/comparator 112 alternately drive the pixel signal and the conditional reset on the shared line. For example, the IP may output a pixel signal on the shared line during a first portion of the sampling period, and may receive a conditional reset on the shared line during a second portion of the sampling period. Similarly, the ADC/comparator may receive the threshold signal, the sampling signal, and the remainder signal on a shared line. For example, the ADC/comparator may receive the threshold signal at the beginning of image capture, may receive the sampling signals during the image capture period, and may receive the remainder signal at the end of the image capture period. It should also be noted that the reset signal received by the IP may be the same reset signal received by adder 114 and may be received on a shared line.

FIG. 4 illustrates an example embodiment of an image sensor system having a multi-bit architecture in accordance with one embodiment. Image sensor system 120 of FIG. 4 includes an image sensor region 125, an array of readout circuits 130, control logic 132, and a physical signaling interface 134. In other embodiments, the image sensor system may include fewer, additional, or different components than illustrated in the embodiment of FIG. 4 (e.g., the readout circuitry may have the memory 116 integrated therewith). The image sensor system shown in FIG. 4 may be implemented as a single IC, or may be implemented as multiple ICs (e.g., the image sensor region and the readout circuit array may be located on separate ICs). Further, various components (such as the readout circuit array, control logic, and physical signaling interface) may be integrated within image sensor region 125.

For purposes of example, assume that image sensor system 120 and a host IC (not shown in FIG. 4) communicatively coupled to the image sensor system form the primary image acquisition component within a camera (e.g., a camera or video camera within a mobile device, a pocket camera, a digital SLR camera, a standalone or platform-integrated webcam, a high-definition camera, a security camera, an automotive camera, etc.). The image sensor IC and host IC may be more generally deployed, alone or in conjunction with similar or different imaging components, in virtually any imaging system or device, including but not limited to metrology instruments, medical instruments, gaming systems or other consumer electronics devices, military imaging systems, transportation-related systems, space-based imaging systems, and the like. The operation of an image sensor system generally involves capturing an image or frame by exposing the IPs to light, converting the charge stored as a result of the exposure into image data, and outputting the image data to a storage medium.

The image sensor region 125 includes an IP array 127 comprising N rows (indexed from 0 to N-1) and M columns (indexed from 0 to M-1). Physical signaling interface 134 is configured to receive commands and configuration information from a host IC (e.g., a general-purpose or special-purpose processor, an application-specific integrated circuit (ASIC), or any other control component configured to control an image sensor IC) and to provide the received commands and configuration information to control logic 132. The physical signaling interface is also configured to receive image data from the readout circuit array 130 and output the received image data to the host IC.

The control logic 132 is configured to receive commands and configuration information from the physical signaling interface 134 and to transmit signals configured to manipulate the operation and function of the image sensor system 120. For example, in response to receiving a command to capture an image or frame, the control logic may output a series of exposure signals (configured to cause the IPs to reset) and sampling signals (configured to cause readout circuits in readout circuit array 130 to sample pixel signals from the IPs in IP array 127) to enable capture of the image or frame by the image sensor system. Similarly, in response to receiving a command to initialize or reset the image sensor system, the control logic may output a reset signal configured to reset each IP in the IP array, causing each IP to disregard any accumulated charge. The control signals generated by the control logic identify particular IPs within the IP array for sampling, may control the functionality of the readout circuits associated with the IPs, or may control any other functionality associated with the image sensor system. The control logic is shown in FIG. 4 as being outside of the image sensor region 125, but as noted above, all or part of the control logic may be implemented locally within the image sensor region.

The control logic 132 outputs a control signal and a reset signal for each IP in the image sensor region 125. As shown in the embodiment of FIG. 4, each image pixel IP[X][Y] receives a row-parallel Cntrl[X] signal (corresponding to a "row" select control signal for each IP) and a row-parallel Reset[X] signal from the control logic to reset the IP, where "X" and "Y" refer to the coordinates of the IP within the image sensor region. While each of the control and reset signals received at any given IP is indexed as only 1 bit in the embodiment of FIG. 4, it is to be understood that the indexing is for simplicity only, and these signals may be of virtually any width or size.

Readout circuit array 130 includes M readout circuits, each configured to receive pixel signals from one column of IPs in IP array 127. It should be noted that in other embodiments the readout circuit array may include a plurality of readout circuits configured to receive pixel signals from each IP column, as discussed with reference to FIGS. 5a, 5b, and 5c. A pixel signal bus couples the IPs in each IP column of IP array 127 to the readout circuit associated with that IP column within the readout circuit array. Each IP is configured to output the pixel signal generated by the IP onto the pixel signal bus, and each readout circuit is configured to sample the pixel signals from the IPs in the IP column associated with the readout circuit. For example, readout circuit 0 is configured to sample pixel signals from pixel signal bus 0, and so on. Each readout circuit in the readout circuit array may repeatedly sample pixel signals from the IPs in the IP column associated with the readout circuit (e.g., sequentially sampling pixel signals from consecutive IPs during multiple processing periods), or may sample pixel signals according to a predetermined non-sequential order. In one embodiment, a readout circuit may sample multiple pixel signals simultaneously. Although not illustrated in the embodiments of FIGS. 3 and 4, the readout circuit may additionally include a memory configured to store the accumulated digital values prior to outputting the accumulated values as image data.
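The sketch below gives a rough behavioral model (assumed structure, not taken from the embodiments above) of row-sequential sampling by per-column readout circuits: selecting a row presents one IP per column, and each column's readout circuit converts, conditionally accumulates, and conditionally resets that IP.

```python
N_ROWS, M_COLS = 4, 3
THRESHOLD = 20
charge = [[0] * M_COLS for _ in range(N_ROWS)]   # charge currently held by IP[x][y]
accum = [[0] * M_COLS for _ in range(N_ROWS)]    # per-IP accumulated image data

def scan_array():
    """One array scan: row x is selected, and each column's readout circuit samples it."""
    for x in range(N_ROWS):                      # Cntrl[x] selects row x
        for y in range(M_COLS):                  # readout circuit y samples pixel signal bus y
            conversion = charge[x][y]            # idealized multi-bit digital conversion
            if conversion >= THRESHOLD:          # over threshold: accumulate and conditionally reset
                accum[x][y] += conversion
                charge[x][y] = 0

charge[2][1] = 25                                # suppose IP[2][1] has integrated 25 photons
scan_array()
print(accum[2][1], charge[2][1])                 # -> 25 0
```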

The conditional reset bus couples the IPs in each IP column in IP array 127 to the readout circuit associated with each IP column. After sampling the pixel signal from an IP in the IP column, the readout circuit associated with the IP column generates a conditional reset signal if the sampled pixel signal exceeds a sampling threshold. For example, if an IP in an IP column outputs a pixel signal to the readout circuit associated with the IP column via the pixel signal bus coupling the IP to the readout circuit, and if the readout circuit determines that the pixel signal exceeds a sampling threshold, the readout circuit outputs a conditional reset signal to the IP via the conditional reset bus coupling the readout circuit to the IP, and the IP resets the charge stored at the IP. As described above, the pixel signal bus and conditional reset bus may be implemented as a shared bus, with Cntrl[X] enabling pixel signals to be output from row X to the shared bus and Reset[X] enabling conditional reset of pixels in row X from the shared bus, although such an embodiment is not further described for the sake of brevity.

Control logic 132 generates readout control signals for the readout circuits in the readout circuit array 130. The readout control signals may control the readout circuit's sampling of pixel signals from the IPs in IP array 127, conversion of the sampled pixel signals into digital values, accumulation of the digital values, output of the accumulated digital values, and reset of the adder. The readout control signals may include a threshold signal, a sampling signal, a comparison signal, a remainder signal, a readout signal, and a reset/add signal for each readout circuit in the readout circuit array, as described with reference to FIG. 3.

The control logic 132 is configured to generate readout control signals for the readout circuitry array 130 to enable image capture during an image capture period. The control logic may generate a reset prior to the image capture cycle, or when a particular IP memory location for the image capture cycle is first used, so that the accumulator of each readout circuit 110 resets the IP memory location. At the beginning of an image capture cycle, the control logic may generate a threshold signal for each readout circuit; as discussed above, the threshold signal is used by each readout circuit to determine the threshold to compare with the pixel signal in order to conditionally reset the IP associated with the pixel signal and accumulate the digital values associated with the pixel signal. During an image capture period, the control logic may generate a series of sampling signals configured to enable the readout circuitry to sample the pixel signals from an IP associated with the readout circuitry. In one embodiment, the control logic generates the sampled signal according to one or more sampling strategies. The sampling strategy will be described in more detail below. At the end of the image capture period, the control logic generates a remainder signal configured to enable each readout circuit to accumulate digital values representative of the pixel signals regardless of whether the pixel signals exceed the sampling threshold. During an image capture period, the control logic generates readout signals configured to cause each readout circuit to output an accumulated digital value representing a sampled pixel signal exceeding an associated sampling threshold as image data. The control logic may also generate a reset signal after each image capture cycle to reset the accumulated digital values within each readout circuit.
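The order in which the control logic might issue these readout control signals over one image capture cycle can be summarized by the illustrative generator below; the signal names follow the text above, but the sequencing helper itself is an assumption, not part of the described embodiments.

```python
def capture_cycle_signals(num_sampling_periods):
    """Yield (signal, purpose) pairs in the order described above."""
    yield ("reset/add", "clear the IP memory locations for this capture")
    yield ("threshold", "set the sampling threshold for each readout circuit")
    for n in range(num_sampling_periods):
        yield ("sample", f"sampling period {n}: conditional read-out / conditional reset")
    yield ("remainder", "accumulate sub-threshold residue at the end of capture")
    yield ("readout", "output the accumulated digital values as image data")
    yield ("reset/add", "reset the accumulated values for the next capture")

for signal, purpose in capture_cycle_signals(num_sampling_periods=4):
    print(f"{signal:>9}: {purpose}")
```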

The control logic may also generate a pause signal and a resume signal configured to cause the IP and readout circuitry to pause and resume image capture, and also generate any other signals needed to control the IP and the function of the readout circuitry in the array of readout circuitry. For each readout circuit, the image data output by the readout circuit is a digital representation of the light captured by each IP in the IP column associated with the readout circuit. The image data is received by the physical signaling interface for subsequent output to the host IC.

FIG. 5 illustrates an example image sensor system architecture with arrays of readout circuits located at the periphery of an IP array, according to one embodiment. In the architecture of FIG. 5, six readout circuit arrays (140a, 140b, 140c, 140d, 140e, and 140f) are located around an image sensor region 145 comprising an IP array. Unlike the embodiment of FIG. 4, in which one readout circuit array 130 is located on one side of the image sensor region 125, the readout circuit arrays 140 of FIG. 5 are located on all sides of the image sensor region 145. The readout circuit arrays may be located within an IC that also contains the image sensor region, or may be located on one or more separate ICs. For example, each readout circuit array may be located at the periphery of the image sensor IC, or may be located in a dedicated readout circuit array IC adjacent to the image sensor IC.

In the foregoing embodiment of FIG. 4, each readout circuit in readout circuit array 130 is coupled to an IP column in IP array 127. In the embodiment of FIG. 5, each readout circuit array 140x is coupled to a set of 6 IPs drawn from partial rows and partial columns of the image sensor region 145. For example, readout circuit array 140a is coupled to IP1, IP2, IP3, IP7, IP8, and IP9. Each readout circuit array 140x includes one or more readout circuits. In one embodiment, each readout circuit array includes 6 readout circuits, where each readout circuit in the array is coupled to one IP. In such an embodiment, each readout circuit samples only the IP to which it is coupled. More typically, each readout circuit is shared by the IPs of a block comprising a number of rows and one or more columns. Although control logic is not illustrated in the embodiment of FIG. 5, each readout circuit array may be coupled to common control logic, or each readout circuit array may be coupled to dedicated control logic. Further, although a physical signaling interface is not illustrated in the embodiment of FIG. 5, each readout circuit array may output image data to a common physical signaling interface via a shared bus, or may output image data to a dedicated physical signaling interface coupled to each readout circuit array via a dedicated bus.

FIG. 6a illustrates a top view of a pixel array IC in an example dual-layer image sensor system architecture, according to one embodiment. The pixel array IC of FIG. 6a includes peripheral circuitry 162 surrounding the IP array. The IP array includes row control circuitry 164 and 4 IP row groups (IP row groups 0 through 3). Each IP row group spans the width of the array and comprises one quarter of the rows in the array, and the row control circuitry provides the control and reset signals required to operate the IPs (e.g., signals that enable the IPs to be reset and selected for readout, as well as any other signals discussed herein).

FIG. 6b illustrates a top view of a preprocessor IC in an example dual-layer image sensor system architecture, according to one embodiment. The preprocessor IC of FIG. 6b includes peripheral circuitry 172 surrounding the readout circuitry. The readout circuitry includes a physical signaling interface 175 (which may alternatively be on the pixel array IC 160), readout control circuitry 176, 4 readout circuit arrays (readout circuit arrays 0 through 3), and accompanying memory groups 0A/B, 1A/B, 2A/B, and 3A/B. Each readout circuit array includes one or more readout circuits (including an ADC, adder, and reset logic for each IP column) connected to corresponding rows in the associated memory group. When a particular IP row is selected in an IP row group of the pixel array IC, the corresponding row in the corresponding memory group is selected on the preprocessor IC.

Fig. 6c illustrates a cross-section of the pixel array IC of fig. 6a and the pre-processor IC of fig. 6b in an example dual-layer image sensor system according to an embodiment. In the embodiment of fig. 6c, the pixel array IC 160 is located above the pre-processor IC 170 such that the bottom surface of the pixel array IC is coupled to the top surface of the pre-processor IC. A microlens array 180 and a color filter array 182 are located over the pixel array IC. The pixel array IC and the pre-processor IC are coupled via pixel array IC wiring 184 and pre-processor IC wiring 186. By locating the pixel array IC above the pre-processor IC, the percentage of the image sensor system's surface area that is capable of capturing light is increased. In a single-layer IC architecture comprising an IP array and one or more readout circuit arrays, by contrast, the portion of the single-layer IC occupied by the readout circuit arrays cannot capture light, reducing the percentage of the silicon die used to capture light incident on the IC; this forces the camera module to occupy a larger area than the lens and imaging array alone, increasing the cost and size of the camera module. In contrast, the top layer of the embodiment of FIG. 6c does not include readout circuit arrays, so the die size of the top-layer IC is reduced to nearly the size of the IP array. Light incident on the top layer passes through the microlens array and the color filter array, is captured by the IPs in the IP array, and signals representing the captured light are sampled by the readout circuit arrays via the pixel array IC wiring and the pre-processor IC wiring.

FIG. 7 illustrates the operation of an image sensor readout circuit, such as the readout circuit of FIG. 3, according to one embodiment. In the example embodiment of fig. 7, the image is captured over 16 sampling periods. During the image capture period, the ADC of the example embodiment of fig. 7 converts the pixel signal to a 5-bit digital value, and the accumulator accumulates the 5-bit digital values into a 9-bit digital value. Further, in the embodiment of fig. 7, the ADC converts the received pixel signal into a digital value representing the pixel signal such that each additional photon detected by the IP increases the digital value by 1. For example, if the IP detects 5 photons after a reset, the pixel signal generated by the IP is converted to the value "00101" by the ADC. It should be emphasized that in other embodiments, the ADC converts the received pixel signal to a digital value such that a given number of additional photons detected by the IP increases the digital value by 1. In the embodiment of fig. 7, the pixel signals are analog voltages and thus are not shown in fig. 7 for the sake of brevity.
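As a minimal sketch of this conversion gain, the hypothetical helper below maps a photon count to a 5-bit code, with an optional `photons_per_lsb` parameter standing in for the coarser-gain embodiments; the names and values are illustrative, not part of the described circuit.

```python
# Hypothetical sketch of the conversion gain described above: one ADC code per
# detected photon, or (in other embodiments) one code per N photons.
def photons_to_code(photons, photons_per_lsb=1, bits=5):
    return min(photons // photons_per_lsb, 2**bits - 1)   # clip at the 5-bit ceiling

print(format(photons_to_code(5), "05b"))                     # "00101", as in the text
print(format(photons_to_code(5, photons_per_lsb=2), "05b"))  # "00010" with a coarser gain
```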

At the start of the image capture cycle (sampling period 0), a control signal is received that resets the IP coupled to the readout circuit and starts the exposure. In the embodiment of fig. 7, the "start exposure" control signal also resets the value stored in the memory element corresponding to the IP to zero. In addition, a threshold signal is received that sets the readout circuit's sampling threshold to the pixel-signal equivalent of 20 photons.

During the first sampling period, 4 photons are detected by the IP. The IP generates a pixel signal representing the charge collected by the photosensitive element within the IP in response to detecting the 4 photons, and the ADC converts the pixel signal to the digital value "00100". Since the 4 detected photons do not trigger the sampling threshold of 20 photons ("10100"), the accumulator does not accumulate the digital value "00100" and the charge stored by the IP is not cleared (the IP is not reset). It should be noted that in the column "photons (delta-accumulated)", the first term indicates the number of photons detected by the IP during a particular sampling period and the second term indicates the number of photons accumulated since the last conditional reset of the IP.

During sampling period 2, 7 additional photons are detected by the IP. The charge stored by the IP increases from the charge generated in response to detecting 4 photons during sampling period 1 to the charge generated in response to 11 accumulated photons (4 photons during sampling period 1 and 7 photons during sampling period 2). The pixel signal generated by the IP in response to the stored charge is converted into the digital value "01011". Since a total of 11 photons does not trigger the sampling threshold of 20 photons, the accumulator does not accumulate the digital value "01011" and the IP is not reset. Similarly, during sampling period 3, 2 additional photons are detected by the IP, and the charge stored by the IP increases to the charge generated in response to 13 accumulated photons (4 photons during sampling period 1, 7 during sampling period 2, and 2 during sampling period 3). The pixel signal generated by the IP in response to the increased stored charge is converted into the digital value "01101". Since the accumulated 13 photons do not trigger the sampling threshold of 20 photons, the accumulator does not accumulate the digital value "01101" and the IP is not reset.

During sampling period 4, 11 additional photons are detected by the IP. The charge stored by the IP increases to the equivalent of 24 accumulated photons (4 during sampling period 1, 7 during sampling period 2, 2 during sampling period 3, and 11 during sampling period 4). The pixel signal generated by the IP in response to the stored charge is converted to the digital value "11000". Since the accumulated 24 photons exceed the sampling threshold of 20 photons, the adder accumulates the digital value "11000" into the memory element for the IP and the IP is reset.

The 14 photons detected during sampling period 5 do not exceed the sampling threshold of 20 photons, so the digital value "01110" produced by the ADC is not accumulated and the IP is not reset. The 8 photons detected during sampling period 6 bring the cumulative total to 22 photons detected by the IP (14 photons during sampling period 5 and 8 during sampling period 6), so the adder accumulates the digital value "10110" (producing a running total of "000101110" in the memory element) and the IP is reset.

This process is repeated for each of the 16 sampling periods. The digital values produced by the ADC during sampling periods 10, 14, and 15 are each accumulated in response to the number of accumulated photons detected by the IP exceeding the sampling threshold of 20 photons. Accordingly, the IP is reset prior to the sampling periods that follow (sampling periods 11, 15, and 16). During sampling period 16, 19 photons are detected by the IP, which does not exceed the sampling threshold of 20 photons. However, during sampling period 16 a residue signal is also received, directing the accumulator to accumulate the digital value produced by the ADC (residue value 190, "10011") regardless of the threshold. Thus, the adder adds the value "10011" to the accumulated value "001111011" held in the memory element to produce image data 195, "010001110". Finally, during sampling period 16 a reset signal is received that enables the readout circuit to output the image data; following the output of the image data, the value stored at the accumulator is reset to zero.
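The Python sketch below reproduces this conditional-reset accumulation under simplifying assumptions (ideal conversion of one code per photon, no noise). The photon counts for sampling periods 1-6 and 16 are taken from the walk-through above; the remaining counts are assumed so that the accumulation events fall in periods 10, 14, and 15 and the final image data matches "010001110" (142).

```python
# Simplified model of the FIG. 7 readout: accumulate the ADC value and reset
# the IP only when the pixel signal exceeds the sampling threshold, plus a
# final residue accumulation in the last sampling period.
SAMPLE_THRESHOLD = 20   # photon-equivalent sampling threshold
ADC_BITS = 5            # ADC output width
ACC_BITS = 9            # accumulator / memory element width

def capture(photons_per_period):
    accumulated = 0                      # memory element for this IP
    pd_charge = 0                        # charge held on the photodiode, in photons
    last = len(photons_per_period) - 1
    for i, photons in enumerate(photons_per_period):
        pd_charge += photons
        adc_value = min(pd_charge, 2**ADC_BITS - 1)          # ADC conversion
        if adc_value > SAMPLE_THRESHOLD or i == last:        # over threshold, or residue
            accumulated = min(accumulated + adc_value, 2**ACC_BITS - 1)
            pd_charge = 0                                    # conditional (or final) reset
    return accumulated

periods = [4, 7, 2, 11, 14, 8, 5, 6, 7, 8, 9, 4, 5, 7, 26, 19]   # assumed arrivals
print(capture(periods))   # 142, i.e. "010001110"
```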

FIG. 8 illustrates pixel information flow in an image capture system, according to one embodiment. During the course of an image capture cycle, the IP 200 detects photons and outputs pixel signals 202 to readout circuitry. In response, readout circuitry 204 converts the received pixel signals to digital values representing the received pixel signals, and for each digital value associated with a pixel signal that exceeds a sampling threshold, the digital value is accumulated and the IP is reset. After the image capture period, the accumulated digital values are output as image data 206.

Post-processing module 208 receives image data 206 and performs one or more processing operations on the image data to produce processed data 210. In one embodiment, a response function is used to transform the image data 206 according to a desired response. For example, the image data may be transformed using a linear function or a logarithmic function of the brightness of the light detected by the IP. The processed data is then stored in memory 212 for subsequent retrieval and processing. The IP 200, readout circuitry 204, post-processing module, and memory may reside within a single IC or within separate, coupled ICs.
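A minimal sketch of such a response transform is shown below, assuming a 9-bit accumulated input and an 8-bit processed output; the function name, the logarithmic curve, and the constants are illustrative assumptions rather than the described post-processing module.

```python
# Minimal sketch of a post-processing response transform: map the 9-bit
# accumulated image data to an 8-bit output with a linear or logarithmic
# response. Function name, curve, and constants are illustrative assumptions.
import math

def apply_response(image_data, mode="linear", max_in=511, max_out=255):
    x = min(max(image_data, 0), max_in) / max_in       # normalize to [0, 1]
    if mode == "log":
        y = math.log1p(1023 * x) / math.log1p(1023)    # compress highlights
    else:
        y = x                                          # linear response
    return round(y * max_out)

print(apply_response(142, "linear"), apply_response(142, "log"))   # 71 208
```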

Fig. 9 illustrates various temporal sampling strategies used by an image sensor readout circuit, such as the readout circuit of fig. 3, in accordance with one embodiment. In the embodiment of fig. 9, the image is captured with an image capture period 220 equivalent to 16 time units. For each of the three sampling strategies illustrated, "x" indicates the sampling of a given IP by the readout circuit.

In sampling strategy 1, the readout circuit samples the IP after each of the 16 time units. In sampling strategy 2, the readout circuit samples the IP after every 4 time units. Since the frequency at which the readout circuit samples the IP in sampling strategy 2 is lower than the frequency at which the readout circuit samples the IP in sampling strategy 1, the IP in sampling strategy 2 is more likely to saturate than the IP in sampling strategy 1. However, the resources (processing, bandwidth, and power) required to implement sampling strategy 2 (4 total samples) may be lower than the resources required to implement sampling strategy 1 (16 total samples), because the frequency at which the readout circuit samples the IP in sampling strategy 2 is only 25% of the frequency at which the readout circuit samples the IP in sampling strategy 1.

In sampling strategy 3, the readout circuit samples the IP after time units 1, 2, 4, 8, and 16. The exponential spacing of the samples in sampling strategy 3 provides both short sampling periods (e.g., the period between time unit 0 and time unit 1) and long sampling periods (e.g., the period between time unit 8 and time unit 16). By allowing both short and long sampling periods, sampling strategy 3 preserves the dynamic range of sampling strategy 1 with nearly as few samples as sampling strategy 2 (5 samples for sampling strategy 3 versus 4 samples for sampling strategy 2). Other sampling strategies not illustrated in fig. 9 may also be implemented by the readout circuitry in the image sensor systems described herein. Depending on the overall length of the exposure period, or on other scene- or user-dependent factors, different sampling strategies may be selected to meet desired power, SNR, dynamic range, or other performance parameters.
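The sampling schedules of the three strategies can be expressed compactly as the time units at which the readout circuit samples the IP; the helper names below are assumptions used only for illustration.

```python
# The three temporal sampling strategies of fig. 9, expressed as the time units
# (within a 16-unit exposure) at which the readout circuit samples the IP.
def uniform_schedule(exposure=16, step=1):
    """Strategies 1 and 2: sample after every `step` time units."""
    return list(range(step, exposure + 1, step))

def exponential_schedule(exposure=16):
    """Strategy 3: exponentially spaced samples."""
    times, t = [], 1
    while t <= exposure:
        times.append(t)
        t *= 2
    return times

print(uniform_schedule(16, 1))    # strategy 1: 16 samples
print(uniform_schedule(16, 4))    # strategy 2: 4 samples
print(exponential_schedule(16))   # strategy 3: 5 samples, short and long periods
```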

High SNR image sensor with lossless threshold monitoring

While the three-transistor (3T) pixel architecture shown in fig. 2 is suitable for many applications, a modified architecture in which a transfer gate is disposed between the photodiode and the source-follower node "VDET" (i.e., in fig. 2, between photosensitive element 65 and element 74) provides a number of advantages. First, the now-isolated floating diffusion at the gate of the source follower can be reset (e.g., coupled to VDD) without disturbing the charge state of the photodiode, thereby enabling a correlated double sampling (CDS) operation in which the noise floor of the floating diffusion is sampled before charge integration and then subtracted from the subsequent sampling of the photodiode potential, canceling the noise and significantly improving SNR. Another advantage is that, counterintuitively, the switched connection between the photodiode and the source follower (i.e., via the transfer gate) enables a more compact pixel design, as the source follower, reset transistor, and access transistor can be shared among multiple photodiodes. For example, only 7 transistors are required to implement a group of 4 "4T" pixels with shared source follower, reset transistor, and access transistor (i.e., 4 transfer gates plus 3 shared transistors), achieving an average of 1.75 transistors per pixel (1.75T).

In terms of pixel readout, the direct connection between the photodiode and the source follower in a 3T pixel enables the charge state of the photodiode to be read out without disturbing the ongoing photocharge integration. This "lossless readout" capability is particularly advantageous in the context of the conditional reset operation described above, since a 3T pixel can be sampled following an integration period and conditionally permitted to continue integrating charge (i.e., not be reset) if the sample indicates that the charge level remains below a predetermined threshold. In contrast, the charge transfer between the photodiode and the floating diffusion that forms part of a 4T pixel readout disturbs the state of the photodiode, presenting a challenge for conditional-reset operation.

In many of the embodiments described below in connection with figs. 10-14, a modified 4T pixel architecture is operated in a manner that dissociates the reset threshold from pixel sample generation, enabling a lossless (and still CDS-based) over-threshold determination. That is, rather than reading out the net level of charge accumulated within the photodiode (i.e., a pixel sampling operation) and conditionally resetting the photodiode based on that readout (as in a 3T pixel sampling operation), a preliminary over-threshold sampling operation is performed to detect an over-threshold state within the photodiode, and a full photodiode readout (i.e., pixel sample generation) is conditionally performed according to the result of that preliminary over-threshold detection. In effect, instead of conditionally resetting the photodiode according to a pixel value obtained from a full photodiode readout, the full photodiode readout itself is conditionally performed based on a preliminary, lossless determination of whether the threshold has been exceeded; in at least one embodiment, this is achieved by decoupling the conditional-reset threshold from pixel value generation.
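The behavioral sketch below models this two-step readout under simple assumptions (charge in arbitrary units, a fixed spill barrier set by the partially-on transfer gate, and a conditional-reset threshold applied to the spilled charge); all names and values are illustrative, not the described circuit.

```python
# Behavioral sketch of the two-step "progressive readout": a partial transfer
# spills charge only above a barrier (the lossless over-threshold test); a full
# transfer and digitization occur only if the spilled charge exceeds the
# conditional-reset threshold. Units, names, and constants are assumptions.
def progressive_readout(pd_charge, barrier=600, reset_threshold=50, fd_noise=0.0):
    """Return (updated photodiode charge, digitized sample or None)."""
    fd = fd_noise                                # floating diffusion after reset
    reset_sample = fd                            # CDS reset-state sample
    overflow = max(pd_charge - barrier, 0)       # partial (VTGpartial) transfer
    fd += overflow
    pd_charge -= overflow
    if (fd - reset_sample) <= reset_threshold:   # under threshold: keep integrating
        return pd_charge, None
    fd += pd_charge                              # full (VTGfull) transfer resets the PD
    pd_charge = 0
    return pd_charge, round(fd - reset_sample)   # noise-cancelled, digitized sample

print(progressive_readout(400))   # under threshold -> (400, None)
print(progressive_readout(700))   # over threshold  -> (0, 700)
```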

Fig. 10 illustrates one embodiment of a modified 4T pixel 250, referred to herein as a "progressive readout pixel," in which a lossless threshold-detection operation is performed to enable a conditional reset operation in conjunction with correlated double sampling. As explained more fully below, the over-threshold detection involves a limited readout of the photodiode state which, when it indicates an over-threshold condition, triggers a more complete readout of the photodiode state. That is, pixel 250 is read out progressively, from the limited over-threshold detection readout to a full readout, with the latter performed conditionally according to the over-threshold detection result.

Still referring to fig. 10, the progressive readout pixel 250 includes a transfer gate 251 disposed between a photodiode 260 (or any other practical photosensitive element) and a floating diffusion node 262, and a transfer enable transistor 253 coupled between a transfer gate row line (TGr) and the transfer gate 251. The gate of the transfer enable transistor 253 is coupled to a transfer gate column line (TGc) such that, when TGc is activated, the potential at TGr is supplied to the gate of the transfer gate 251 via the transfer enable transistor 253, thereby enabling charge accumulated within the photodiode 260 to be transferred to the floating diffusion 262 and sensed by the pixel readout circuitry. More specifically, the floating diffusion 262 is coupled to the gate of the source follower 255 (an amplification and/or charge-to-voltage conversion element), the source follower 255 itself being coupled to the power supply rail (VDD in this example) and to a readout line, Vout, to enable output of a signal representing the floating diffusion potential to readout logic external to the pixel.

As shown, a row select transistor 257 is coupled between the source follower and the readout line to enable multiplexed access to the readout line by the corresponding rows of pixels. That is, a row select line ("RS") is coupled to the control input of the row select transistor 257 within each pixel of a corresponding row, and the row select lines are operated on a one-hot basis to select one row of pixels at a time for sensing/readout operations. A reset transistor 259 is also provided within the progressive readout pixel to enable the floating diffusion to be switchably coupled to the power supply rail (i.e., when the reset gate line (RG) is asserted) and thereby reset. The photodiode itself may be reset along with the floating diffusion by fully turning on the transfer gate 251 and reset transistor 259 in parallel (e.g., by asserting TGc while TGr is high), or simply by connecting the photodiode to a previously reset floating diffusion.

Fig. 11 is a timing diagram illustrating an exemplary pixel cycle within the progressive readout pixel of fig. 10. As shown, the pixel cycle is divided into 5 periods or phases corresponding to the different operations performed, with the final progressive readout produced in the last two phases. In the first phase (phase 1), a reset operation is performed within the photodiode and the floating diffusion by asserting logic-high signals on the TGr, TGc, and RG lines in parallel to turn on the transfer enable transistor 253, transfer gate 251, and reset transistor 259, thereby switchably coupling the photodiode 260 to the power supply rail via transfer gate 251, floating diffusion 262, and reset transistor 259 (the illustrated sequence may begin with an unconditional reset, e.g., at the start of a frame, or with the conditional read/reset operation described above). To conclude the reset operation, the TGr and RG signals (i.e., the signals applied on the like-named signal lines) are lowered, turning off transfer gate 251 (and reset transistor 259) and thereby enabling the photodiode to accumulate (or integrate) charge in response to incident light in the ensuing integration phase (phase 2). Finally, although the row select signal goes high during the reset operation shown in fig. 11, this is merely a consequence of a row decoder implementation that raises the row select signal whenever a given row address is decoded in connection with a row-specific operation (e.g., raising the TGr and RG signals during the reset of a given row). In an alternative embodiment, the row decoder may include logic to suppress assertion of the row select signal during reset, as shown by the dashed RS pulse in fig. 11.

At the end of the integration phase, the floating diffusion is reset (i.e., by pulsing the RG signal to couple the floating diffusion to the power supply rail) and then sampled by a sample-and-hold element within the column readout circuitry. In effect, this reset-and-sample operation (shown as phase 3 in fig. 11) samples the noise level of the floating diffusion; in the illustrated embodiment it is carried out by asserting the row select signal for the pixel row of interest (i.e., the "ith" pixel row, selected by RSi) while pulsing the reset-state sample-and-hold signal (SHR) to convey the state of the floating diffusion to a sample-and-hold element (e.g., a switch-accessed capacitive element) within the column readout circuit via readout line Vout.

After the noise sample is acquired in phase 3, an over-threshold detection operation is performed in phase 4 by raising the TGr line to a partially-on, "over-threshold detection" potential, VTGpartial, while the transfer enable transistor 253 is switched on (i.e., by asserting the logic-high TGc signal, although TGc is already on in this embodiment). With this operation, illustrated graphically in figs. 12 and 13, VTGpartial is supplied to the transfer gate 251 to switch the transfer gate to a "partially on" state ("TG partially on"). Referring to figs. 12 and 13, electrostatic potential diagrams for the photodiode 260 (a pinned photodiode in this example), the transfer gate 251, and the floating diffusion 262 are shown beneath their corresponding schematic cross-sections. It should be noted that the depicted electrostatic potential levels are not intended to be an accurate representation of the levels produced in an actual or simulated device, but rather illustrate the general (or conceptual) operation of this pixel readout phase. When VTGpartial is supplied to the transfer gate 251, a relatively shallow channel potential 271 is formed between the photodiode 260 and the floating diffusion 262. In the example of fig. 12, the level of charge accumulated within the photodiode at the time of the over-threshold detection operation (phase 4) does not rise to the threshold level required for charge to overflow (i.e., be transferred) to the floating diffusion via the shallow channel potential of the partially-on transfer gate. Accordingly, because the accumulated charge level does not exceed the overflow threshold established by supplying VTGpartial to the control node of transfer gate 251, there is no overflow from the photodiode to the floating diffusion, and the accumulated charge remains undisturbed within the photodiode. By contrast, in the example of fig. 13, the higher level of accumulated charge does exceed the overflow threshold, causing a portion of the accumulated charge (i.e., the subset of charge carriers above the electrostatic potential of the partially-on transfer gate) to overflow into floating diffusion node 262, with the remainder of the accumulated charge retained within the photodiode, as shown at 272.

Still referring to figs. 11, 12, and 13, before the end of over-threshold detection phase 4, the charge level of the floating diffusion is sampled and held within a signal-state sample-and-hold element (i.e., in response to assertion of signal SHS) to produce a threshold test sample, the difference between the signal-state sample and the previously obtained reset-state sample, which is evaluated against a conditional reset threshold. In one embodiment, the conditional reset threshold is an analog threshold (e.g., compared with the threshold test sample in a sense amplifier in response to assertion of a compare/convert strobe) set or programmed to a level above the sampling noise floor, but low enough to enable detection of even a minimal charge overflow via the shallow transfer gate channel. Alternatively, the threshold test sample may be digitized in response to assertion of the compare/convert signal (e.g., within an analog-to-digital converter that is also used to generate the final pixel sample value) and then compared with a digital conditional reset threshold, again set (or programmed) above the noise floor but low enough to enable detection of trace charge overflow. In either case, if the threshold test sample indicates that no detectable overflow has occurred (i.e., the threshold test sample value is less than the conditional reset/overflow threshold), the photodiode is deemed to be in the under-threshold state shown in fig. 12, and the TGc line is held low in the ensuing conditional readout phase (phase 5, the final phase), disabling transfer gate 251 for the remainder of the progressive readout operation. This, in effect, disables further readout from the photodiode and allows it to continue integrating charge undisturbed for at least one further sampling period. Conversely, if the threshold test sample indicates an overflow event (i.e., the threshold test sample is greater than the conditional reset/overflow threshold), the TGc line is pulsed during the conditional readout phase while the fully-on "remainder transfer" potential, VTGfull, is applied to the TGr line, thereby enabling the remainder of the charge within photodiode 260 (i.e., charge 272 as shown in fig. 13) to be transferred to floating diffusion 262 via the full-depth transfer gate channel (273). Thus, between the over-threshold transfer in phase 4 and the remainder transfer in phase 5, the entire charge accumulated within the photodiode since the hard reset in phase 1 is transferred to the floating diffusion, where it can be sensed in a pixel readout operation. In the illustrated embodiment, the pixel readout operation is effected by sequentially pulsing the SHS signal and the compare/convert strobe during conditional readout phase 5, although either or both of these pulses may optionally be suppressed in the absence of an over-threshold detection. It should be noted that the conditional readout of the photodiode (i.e., effected by pulsing TGc while VTGfull is supplied on TGr) effectively resets the photodiode (i.e., drains all of its charge to the floating diffusion), whereas suppression of the conditional readout leaves the integration state of the photodiode undisturbed.
The conditional readout operation performed in phase 5 therefore either resets the photodiode in preparation for the next sampling period (sub-frame) or refrains from resetting it, enabling cumulative integration over the subsequent sampling period. In either case, a new integration phase follows phase 5, with phases 2 through 5 repeated for each sub-frame of the overall frame (or exposure) interval before the hard reset is repeated for a new frame. In other embodiments that permit cumulative integration across frame boundaries, the hard reset operation may be executed once to initialize the image sensor and then omitted for an indeterminate period thereafter.

Fig. 14 illustrates an embodiment of an image sensor 300 having a progressive readout pixel array 301, sequencing logic 303, a row decoder/driver 305, and column readout circuitry 307. While pixel array 301 is shown as comprising four rows and two columns of shared-element pixels, other embodiments may include many more pixel rows and columns to implement, for example, a multi-megapixel or gigapixel image sensor. The column readout circuitry 307 (of which two readout circuits are depicted) and the row decoder/driver 305 may likewise be scaled to match the number of pixels in the pixel array.

In the illustrated embodiment, each column of the pixel array is populated with shared-element pixels in which every group of 4 pixels forms a quad-pixel cell 310; each pixel contains a respective photodiode 260 (PD1-PD4), transfer gate 251, and transfer enable transistor 253, while sharing a floating diffusion node 312, reset transistor 259, source follower 255, and row select transistor 257. With this arrangement, the average transistor count per pixel is 2.75 (i.e., 11 transistors / 4 pixels), thereby enabling a relatively efficient 2.75T-pixel image sensor.

As shown, row decoder/driver 305 outputs a shared row select signal (RS) and reset gate signal (RG) to each row of quad-pixel cells 310, and outputs independent row transfer gate control signals (TGr1-TGr4) to the drain terminals of the respective transfer enable transistors 253. Where the row decoder/driver 305 sequences incrementally through the rows of the array (e.g., pipelining reset, integration, and progressive readout operations with respect to the rows of pixel array 301 so that the rows are read out one after another), the row decoder/driver may include logic to assert the RG, RS, and TGr signals at the appropriate time for each row (e.g., synthesizing those signals with respect to a row clock from sequencing logic 303). Alternatively, row decoder/driver 305 may receive individual timing signals corresponding to each or any of the RG, RS, and TGr signals, multiplexing any individual enable pulse onto the corresponding RG, RS, or TGr line of the selected row at the appropriate time. In one embodiment, the row decoder/driver receives, from an on-chip or off-chip programmable voltage source 309, transfer gate control voltages corresponding to the off, partially-on, and fully-on states shown in figs. 11, 12, and 13 (i.e., VTGoff, VTGpartial, VTGfull), switchably coupling each of the different control voltages to a given row of transfer gates at the appropriate times, for example, as shown in fig. 11. In alternative embodiments, more than one voltage source 309 may be provided within image sensor 300 so that the transfer gate control voltages can be calibrated locally, compensating for control voltage variations and/or performance variations (i.e., non-uniformity) across the pixel array.

Still referring to the embodiment of fig. 14, the column readout circuitry 307 includes a bank of readout circuits 315, each implementing a threshold comparator and a relatively low bit depth analog-to-digital converter (e.g., a 4-to-10-bit ADC, although lower or higher bit depths may be employed) to perform the over-threshold detection and conditional sampling operations, respectively, discussed in connection with figs. 11-13. In one embodiment, the threshold comparator and ADC are implemented by separate circuits, enabling pixel sample values to be generated independently of the conditional reset threshold applied in the over-threshold determination. In this way the conditional reset threshold is decoupled from the reference signal used in the ADC conversion ("ADC Vref"), freeing the conditional reset threshold and the ADC reference voltage to be adjusted dynamically and independently (e.g., by reprogramming a threshold reference generator) before or during sensor operation, thereby enabling calibration and/or compensation for changing operating conditions or sub-optimal imaging results. In an alternative embodiment, the threshold comparator may be implemented as part of the ADC (e.g., applying a reference used in resolving digital sample values as the conditional reset threshold), potentially reducing the footprint of the column readout logic through a more compact circuit design.

In the illustrated embodiment, the sequencing logic delivers a column clock, sample-and-hold strobes (SHR and SHS, used to enable signal storage within the sample-and-hold elements at the front end of the ADC/threshold comparator), and the compare/convert strobe to the column readout logic to establish the operational timing shown, for example, in fig. 11. That is, during the over-threshold detection phase (phase 4), the readout circuit for a given pixel column asserts (or holds) the TGc line (e.g., in response to assertion of a TGcEn signal from sequencing logic 303, via logic OR gate 316), so that when the row decoder/driver switches the TGr line for a given pixel row to the partially-on potential (i.e., supplies VTGpartial to the transfer gates of that pixel row), the over-threshold detection operation described above is enabled. The threshold comparator within each readout circuit then evaluates the threshold test sample (i.e., the state of the shared floating diffusion 312 after VTGpartial has been supplied to the transfer gate of a given photodiode) against the conditional reset threshold to produce a binary over-threshold result. If an over-threshold condition is detected, the readout circuit raises the TGc signal again a short time later (i.e., in conjunction with the fully-on TGr potential, VTGfull) to enable a conditional readout operation in which the photodiode state is read out in full onto Vout and the photodiode is reset, and an analog-to-digital conversion operation is performed in response to assertion of the compare/convert strobe to generate the digitized pixel sample.

Readout circuitry

Figs. 15A-15C illustrate alternative column readout circuit embodiments that may be used in conjunction with the exemplary progressive readout pixels described above. Fig. 15A, for example, illustrates a column readout circuit 350 formed by a sample-and-hold bank 351, an analog-to-digital converter (ADC) 353, a sense amplifier 355, and an ADC enable gate 357. The sample-and-hold (S/H) bank 351 includes switching elements and analog storage elements (e.g., capacitive elements) that sample and hold the reset state and signal state of a selected pixel (conveyed via the column "Vout" line) in response to assertion of the reset-state and signal-state control signals. In one embodiment, the pixel reset-state and signal-state signals are output differentially (e.g., signal state minus reset state) from S/H bank 351, so that sense amplifier 355 and ADC 353 receive a measurement signal reflecting the extent to which the floating diffusion signal state falls below the variable (i.e., noisy) reset level. In the illustrated embodiment, sense amplifier 355 and ADC 353 receive separate reference signals ("SA Ref" and "ADC Ref") for the over-threshold detection and ADC operations, respectively. More specifically, when the compare strobe ("compare") is pulsed, a threshold comparison is triggered within sense amplifier 355, producing a logic-high or logic-low comparison result depending on whether the S/H output exceeds the sense amplifier reference signal (i.e., the overflow or conditional reset threshold described above), and thus whether the noise-corrected pixel signal state exceeds that reference. The comparison result is fed back to the pixel column as the conditional reset signal discussed above and is also supplied to logic gate 357 to enable the analog-to-digital conversion operation within ADC 353. That is, if sense amplifier 355 signals an over-threshold condition (a logic "1" comparison result in this example), the ensuing conversion strobe ("convert") is passed through logic AND gate 357 (i.e., gated by the high sense amplifier output) to the convert-enable input of ADC 353, triggering the ADC operation. In one embodiment, a buffer 359 is provided to store the resulting N-bit ADC value (e.g., an 8-bit to 12-bit value in various embodiments, although higher or lower resolutions may be employed) together with the comparison result from sense amplifier 355, which forms a validity bit "V" that qualifies the ADC content of buffer 359 as valid or invalid data. Thus, if no detectable overflow occurs in the pixel being read out, the logic-low comparison result not only suppresses the ADC operation (saving power), but also qualifies the readout buffer contents, allowing the outgoing data stream to be compressed. This outcome is indicated in the timing waveforms at 360 by the dashed ADC data transfer, meaning that ADC data is generated and transferred only if the pixel measurement exceeds the overflow threshold (V = 1).
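A schematic sketch of this comparator-gated conversion and validity-bit qualification is shown below; the function, the stand-in 10-bit quantizer, and the numeric values are assumptions for illustration only.

```python
# Sketch of the fig. 15A flow: the sense-amplifier comparison gates the ADC
# conversion and supplies the validity bit that qualifies the buffered result.
def column_readout_15a(signal_state, reset_state, sa_ref, adc):
    measurement = signal_state - reset_state          # differential S/H output
    if measurement <= sa_ref:                         # under threshold
        return {"valid": 0, "adc": None}              # conversion suppressed
    return {"valid": 1, "adc": adc(measurement)}      # convert strobe enabled

quantize_10b = lambda v: max(0, min(1023, round(v)))  # stand-in 10-bit ADC
print(column_readout_15a(180.0, 12.0, sa_ref=40.0, adc=quantize_10b))  # valid sample
print(column_readout_15a(30.0, 12.0, sa_ref=40.0, adc=quantize_10b))   # dropped/compressed
```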

Fig. 15B illustrates an alternative readout circuit embodiment 365 that omits the sense amplifier and instead employs ADC circuit 353 both to perform the threshold comparison and, if necessary, to generate the ADC data corresponding to a full pixel readout. As before, S/H bank 351 outputs a measurement signal reflecting the difference between the signal state and the reset state during both the overflow (partial readout) operation and the full readout operation. When the compare strobe ("compare") is asserted, it is supplied via logic OR gate 368 to the convert-enable input of the ADC to enable an ADC operation with respect to the partial readout (i.e., the readout obtained with VTGpartial supplied to the transfer gate of the selected pixel, as discussed above). If the ADC output (a multi-bit digital value) exceeds a digital threshold, comparator 367 asserts the conditional reset/over-threshold signal (e.g., a logic "1" in the illustrated example), enabling the ensuing conversion strobe ("convert") to pass through logic AND gate 369 (and logic OR gate 368) to trigger another ADC operation, this time with respect to the measurement signal acquired during the full readout operation. As in the embodiment of fig. 15A, the conditional reset signal is driven back to the pixel column to enable the full readout (and pixel reset) operation within the target pixel, and is also output to readout buffer 359 for storage as the validity bit qualifying the corresponding ADC data content of that buffer. Although the compare strobe, conversion strobe, and data transfer waveforms of fig. 15B are illustrated as matching those of fig. 15A, a somewhat longer delay may be imposed between the compare strobe and the conversion strobe to account for the additional time required to digitize the S/H partial-readout measurement within the ADC. In both cases, the interval between the compare strobe and the conversion strobe may differ from that shown, for example, to align the readout timing with the pixel operations described above (e.g., as shown in fig. 11).

Fig. 15C illustrates a variation (375) of the readout circuit embodiment of fig. 15B. In general, the readout sequence proceeds as discussed with reference to fig. 15B, except that the partial-readout ADC output is latched within readout buffer 377; if an under-threshold condition is detected (i.e., no conditional reset is output and thus no subsequent full-readout ADC operation occurs), the digitized partial-readout measurement is transmitted off-chip along with an over-threshold bit (OT) indicating whether an over-threshold condition was detected. If the partial-readout ADC output exceeds the overflow threshold, the full-readout measurement is digitized in a second ADC operation and stored in the readout buffer, overwriting the partial-readout ADC value. With this operation, an effective pixel readout value reflecting either the partial readout (OT=0) or the full readout (OT=1) is transmitted to the external destination regardless of whether the overflow threshold was exceeded, allowing the sequence of partial-readout values to be accumulated (integrated) into the final pixel value. It should be noted that storage and transmission of the OT bit may be omitted, particularly in embodiments in which the ADC measurements are aggregated or combined without regard to whether they were acquired in full or partial readout operations.
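Under the assumption that the digital threshold of fig. 15B is reused here, the sketch below illustrates the fig. 15C data path: the partial-readout code is always produced, the full-readout code overwrites it only on an over-threshold result, and downstream logic simply sums the per-sub-frame values. All names and values are hypothetical.

```python
# Sketch of the fig. 15C variant: the partial-readout measurement is always
# digitized; a full-readout conversion overwrites it only when the threshold is
# exceeded, and the OT bit records which value was sent.
def column_readout_15c(partial_meas, full_meas, digital_threshold, adc):
    partial_code = adc(partial_meas)                  # first conversion, latched
    if partial_code <= digital_threshold:
        return {"OT": 0, "value": partial_code}       # send partial-readout value
    return {"OT": 1, "value": adc(full_meas)}         # overwrite with full readout

adc10 = lambda v: max(0, min(1023, round(v)))
subframes = [column_readout_15c(30, 30, 64, adc10),   # sub-frame 1: under threshold
             column_readout_15c(90, 640, 64, adc10)]  # sub-frame 2: over threshold
print(sum(s["value"] for s in subframes))             # downstream accumulation: 670
```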

Image decimation and pixel binning

The various conditional reset image sensor embodiments described herein may operate in a decimation mode that produces less than the maximum image resolution. For example, in one embodiment an image sensor capable of generating an 8MP (8 megapixel) output in still-image mode produces a 2MP output in a decimated high-definition (HD) video mode, a decimation ratio of 4:1 (higher or lower resolutions may apply in each mode, and other decimation modes and decimation ratios may be implemented in alternative embodiments; also, if the still-frame and video-frame aspect ratios differ, some regions of the sensor may not be used at all in one mode or the other).

While post-digitization logic may be provided to decimate full-resolution data (e.g., on-chip logic at the output of the ADC bank, or off-chip processing logic), in a number of embodiments pixel charge aggregation, or "binning," within the pixel array and/or voltage binning within the sample-and-hold storage elements is applied to achieve pre-digitization (i.e., pre-ADC, and thus analog) decimation, eliminating die-area-consuming and power-consuming digital binning logic and, in many cases, yielding an improved signal-to-noise ratio in the decimated output.
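A rough numeric illustration of the SNR benefit, assuming a shot-noise-limited signal plus a fixed rms read noise per ADC conversion: binning four pixels in the analog domain incurs the read noise once, whereas summing four separately digitized pixels accumulates four independent read-noise contributions. The values below are illustrative assumptions.

```python
# Rough SNR comparison for a 4:1 decimation: analog binning (one conversion)
# versus digital summation of four separately converted pixels.
import math

signal = 4 * 100.0                       # aggregate photo-signal, arbitrary units
shot_noise = math.sqrt(signal)           # shot noise of the aggregate signal
read_noise = 4.0                         # rms read noise added per ADC conversion

snr_analog  = signal / math.hypot(shot_noise, read_noise)       # binned, 1 conversion
snr_digital = signal / math.hypot(shot_noise, 2 * read_noise)   # 4 conversions: sqrt(4)x
print(round(snr_analog, 1), round(snr_digital, 1))              # ~19.6 vs ~18.6
```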

Fig. 16 illustrates a quad-pixel, shared-floating-diffusion image sensor architecture in which the row and column transfer gate control lines (TGr and TGc) disclosed in various embodiments can be operated in a manner that enables multiple decimation modes without additional array-traversing control lines. More specifically, by centering the shared floating diffusion 401 among 4 pixels (each including a respective photodiode PD1-PD4, transfer enable transistor 403.1-403.4, and transfer gate 404.1-404.4) and splitting the column transfer gate control line TGc into separate odd and even column enable lines (TGc1 and TGc2, each coupled to a respective logic-OR column line driver 421, 423), all or any subset of the pixels can be charge-binned in a decimation mode, and each pixel can also be operated and read out individually in a non-decimating (full resolution) mode.

In the particular embodiment shown, the shared floating diffusion 401 (illustrated as 2 interconnected portions for drawing simplicity) is switchably coupled to the 4 pixel photodiodes PD1-PD4 by respective transfer gates 404.1-404.4, each controlled by a different TGr/TGc signal pair within the control signal matrix. That is, transfer gate 404.1 is controlled by transfer enable transistor 403.1 via control signals TGr1/TGc1, transfer gate 404.2 is controlled by transfer enable transistor 403.2 via control signals TGr2/TGc1, transfer gate 404.3 is controlled by transfer enable transistor 403.3 via control signals TGr1/TGc2, and transfer gate 404.4 is controlled by transfer enable transistor 403.4 via control signals TGr2/TGc2. As in the shared-element pixel arrangement described above, the shared floating diffusion 401 is coupled to a shared source follower 405, row select transistor 407, and reset transistor 409, enabling a relatively compact quad-pixel layout. Moreover, as shown in the exemplary physical layout of fig. 17, the 4 transfer gates ("TG") may be physically disposed at the corners of the centralized floating diffusion (FD), with the transfer enable transistors, reset gate, source follower, and row select transistor formed at the periphery of the quad-pixel layout, achieving a highly compact quad-pixel footprint that can be repeated in the row and column dimensions across a multi-megapixel array.

Figs. 18A and 18B illustrate color filter array (CFA) patterns that may be employed with the quad-pixel architecture of figs. 16 and 17 and that may dictate the decimation patterns applied. With the CFA pattern of fig. 18A, for example, the green corner pixels (G) containing photodiodes PD1 and PD4 (i.e., PD1 and PD4 disposed beneath green color filter elements) may be binned in a 4:3 charge-binning decimation mode; with the CFA pattern of fig. 18B, which includes white, green, red, and blue color filters, both pairs of corner pixels within each quad-pixel (i.e., the pixels containing photodiodes PD1 and PD4 and the pixels containing photodiodes PD2 and PD3) may be charge-binned in a 4:2 decimation mode. Other charge-binning arrangements may also be used with other CFA patterns and/or with black-and-white (or grayscale) imaging.

Figs. 19 and 20 present timing diagrams illustrating exemplary phases of a full-resolution (non-binned) pixel readout operation and a binned-mode pixel readout operation, respectively, within the image sensor of fig. 16 with its 2x2 quad-pixel arrangement. For purposes of illustration, it is assumed in each timing diagram that different readout gain configurations are applied during the partial readout (threshold test) and full readout operations, with separate sets of sample-and-hold elements used to capture the reset-state and signal-state samples during those readouts. Examples of different gain configuration circuits and their advantages are described below with reference to figs. 25A-25C, 26, and 27.

Turning first to the full-resolution readout of fig. 19, a reset operation is performed in phase 1 (depicted at the bottom of the timing diagram) by fully asserting the transfer gate row signal (TGri) for the row being read out, as shown at 420, together with the odd and even transfer gate column signals (TGc1, TGc2), thereby supplying the full-readout potential to the transfer gates of the odd and even columns within the selected row and enabling charge transfer from the corresponding photodiodes to the shared floating diffusion (i.e., resetting the photodiodes to an initial state in preparation for charge integration). After the TGri signal is lowered, a reset-enable signal (RG) is pulsed at 422 to switch on the reset transistor and thereby reset the floating diffusion. During integration phase 2 (not shown to scale owing to its length), charge is integrated/accumulated within the photodiodes according to the brightness of the incident light. During odd-column threshold-test phase 3a, the RG signal is pulsed a second time at 424 to reset the floating diffusion, and the reset-state sample-and-hold signals SHRsa and SHRadc are pulsed at 426 and 428 while row select line RSi is high, enabling the floating diffusion reset state to be sampled within the sample-and-hold elements for the sense amplifier and the ADC, respectively. After the floating diffusion reset state has been sampled, the even-column transfer gate signal (TGc2) is lowered (while TGc1 remains high) and TGri is raised to the VTGpartial potential to enable a threshold-test readout with respect to the odd-column pixels. At 430, the signal-state sample-and-hold signal SHSsa is raised to capture a sample of the floating diffusion state (i.e., any charge that has overflowed into it) within the sample-and-hold element for the sense amplifier, and at 432 the compare strobe ("compare") is pulsed to enable the sense amplifier component of the readout circuit to generate a comparison result between the floating diffusion signal state (less the reset state) and the conditional reset (overflow) threshold.

Immediately after the floating diffusion signal state is captured at 432, and before the row transfer gate signal is raised to the fully-on potential (VTGfull), the odd-column transfer gate signal (TGc1) is lowered in odd-pixel conditional readout phase 4a. More specifically, if the comparison indicates an under-threshold condition, TGc1 is held low while TGri is raised to the VTGfull potential, suppressing the full pixel readout and allowing the charge integrated within the photodiode during integration phase 2 to remain undisturbed and to serve as the initial state for the subsequent integration period (i.e., continued integration). Conversely, if the sense amplifier comparison indicates an over-threshold condition (i.e., the charge accumulated during integration phase 2 exceeds the conditional reset threshold), TGc1 is raised as shown by the dashed pulse at 434 while VTGfull is supplied to TGri, conveying VTGfull to the odd-pixel transfer gates and thereby enabling a full pixel readout operation. Shortly thereafter, just before the end of the odd-pixel conditional readout, the signal-state sample-and-hold signal SHSadc is pulsed (as shown at 436) to capture a sample of the odd-pixel readout signal within the signal-state sample-and-hold element for the ADC. At 438, after the odd-pixel readout signal has been captured in the ADC sample-and-hold element, the conversion strobe is pulsed to trigger an ADC operation with respect to the difference between the reset-state and signal-state samples captured within the ADC sample-and-hold elements.

At the end of the odd-pixel conditional readout (i.e., phase 4a), the row transfer gate signal is lowered so that, in the immediately following even-pixel threshold-test phase 3b, assertion of the odd-column transfer gate signal TGc1 at 440 drives the odd-pixel transfer gates low (ensuring isolation between photodiode and floating diffusion), thereby enabling the floating diffusion to be reset by the RG pulse at 442 without disturbing the odd-column pixel state. Still within phase 3b, the even-column transfer gate signal is raised at 446 while the SHRsa pulse is asserted at 448 to obtain a reset-state sample of the floating diffusion. As in the odd-pixel threshold test, the row transfer gate signal TGri is raised to the partially-on potential (VTGpartial) at 450 (while TGc2 remains high) so that, if an over-threshold condition exists within a photodiode, charge can spill over from the even-pixel photodiode to the floating diffusion. At 452, SHSsa is pulsed to sample the even-pixel signal state, and at 454 the compare strobe is pulsed to enable an even-pixel over-threshold determination (even-pixel signal state less the floating diffusion reset state) within the sense amplifier. As with the odd pixels, if the sense amplifier comparison indicates an over-threshold condition, the even-column transfer gate signal is asserted at 456 while the TGri potential is raised to the fully-on level (VTGfull), enabling a full readout of the even-pixel signal state, after which the SHSadc and conversion strobe signals are asserted (at 458 and 460, respectively) to produce the even-pixel ADC result. If the sense amplifier comparison indicates an under-threshold condition, the TGc2 pulse at 456 is suppressed to avoid disturbing the state of the even-pixel photodiode, leaving the charge on the photodiode intact for continued integration.

Still referring to fig. 19, in data transfer phase 5, the ADC values for the even and odd pixels of row i are transferred one after another to an on-chip or off-chip image processing destination. As discussed above, in the event of an under-threshold condition for a given pixel, the analog-to-digital conversion for that pixel may be suppressed and/or the ADC output omitted from the outgoing data stream. In either case, the data transfer for a selected pixel row may be pipelined with the pixel readout operations of subsequent rows, for example by transferring row i-1 data while the various readout phases are performed for the pixels of row i.

In the binned-mode readout timing diagram of fig. 20, the hard reset and integration operations (phase 1 and phase 2) are performed as described above with reference to fig. 19, as is the floating diffusion reset at the beginning of threshold-test phase 3 (i.e., RG is asserted while TGc1 and TGc2 are high, and the reset state is sampled in response to assertion of the SHRsa and SHRadc signals). Thereafter, partial readout operations are performed one after another with respect to the corner pixels (i.e., in the example shown, the pixels containing photodiodes PD1 and PD4) by driving TGr1 to the partially-on state at 476 while TGc1 is asserted and TGc2 is deasserted, and then driving TGr2 to the partially-on state at 478 while TGc2 is asserted and TGc1 is deasserted. In this way, any overflow charge from photodiodes PD1 and PD4 is aggregated in the floating diffusion and thus captured within the sense amplifier sample-and-hold element when SHSsa is asserted at 480. Accordingly, when the compare strobe is asserted at 482, the aggregated overflow charge from PD1 and PD4 (less the floating diffusion reset state) is compared with the conditional reset/conditional readout threshold. If the comparison indicates an over-threshold condition, TGc1 and TGc2 may be pulsed one after another at 484 and 486 (with VTGfull asserted on the corresponding row lines TGr1 and TGr2, respectively, simultaneously with each pulse) to transfer the remainder of the charge accumulated within the corner photodiodes (PD1 and PD4) to the floating diffusion, thereby charge-binning the pixel integration results and resetting each pixel in preparation for the next charge integration period. Consequently, when the SHSadc signal is pulsed at 488, the photodiode charge binned (aggregated) within the floating diffusion is captured within the signal-state sample-and-hold element for the ADC, enabling an ADC operation (less the floating diffusion reset state) with respect to the combined charge from the corner pixels when the conversion strobe is pulsed at 490. The digitized pixel values (i.e., ADC outputs) generated for row i may be transferred to off-chip or on-chip processing logic during readout of the next pair of pixel rows.

Still referring to FIG. 20, if the sense amplifier comparison indicates an under-threshold condition, the TGc1 and TGc2 assertions shown at 484 and 486 are suppressed to avoid disturbing the contents of the subject photodiodes, thereby allowing continued integration during the subsequent sub-frame period(s). Although the timing sequence shown produces a binned result from the corner pixels containing photodiodes PD1 and PD4 (i.e., the northwest and southeast corners in the layouts of figs. 16 and 18), the waveforms output onto signal lines TGc1 and TGc2 may be swapped to produce a binned result from the corner pixels containing photodiodes PD2 and PD3. Further, the aggregate (binned) charge within all four photodiodes may be read out by performing additional partial readout operations in phase 3 (i.e., repeating the partially-on pulses of TGr1 and TGr2 but reversing the assertion sequence of column transfer gate signals TGc1 and TGc2), followed, if an over-threshold result is detected, by additional full readout operations in phase 4 (i.e., repeating the fully-on pulses of TGr1 and TGr2, again with the assertion sequence of TGc1 and TGc2 reversed).

Fig. 21 illustrates an alternative binning strategy that may be applied to an aggregation of the 4x1 quad-pixel blocks 310 and the color filter array (CFA) segment shown at 500. In the illustrated embodiment, each quad-pixel block 310 (shown at 310.1 through 310.4 with respect to the CFA segment) is implemented generally as described with reference to fig. 14 and may be read out according to any of the readout techniques described with reference to figs. 14 and 15A-15C. As shown, CFA segment 500 (i.e., a portion of the sensor-wide CFA sufficient to demonstrate the CFA pattern) places like-colored filter elements at the corner pixels of each 3x3 pixel group. Thus, green filter elements are disposed over the shaded pixels 'G', blue filter elements over the striped pixels 'B', and red filter elements over the stippled pixels 'R'. In this arrangement, each pair of like-filtered pixels disposed within the same quad-pixel block (i.e., covered by color filter elements of the same color, R, G, or B) permits charge binning within the block's shared floating diffusion, as described in further detail below. Further, referring to fig. 22, "voltage binning" of the two pairs of charge-binned pixels may be effected within sample-and-hold circuitry 553 by virtue of the fixed column offset between like-filtered pixel pairs coupled to the same row lines (a two-column pitch in the illustrated example) and by switching elements provided at the column readout points of pixel array 551 (i.e., switching elements 561 and 562 within sample-and-hold circuitry 553), whereby the 4 corner pixels of each 3x3 pixel group are merged (i.e., aggregated, binned) before being digitized within the ADC elements of SA/ADC block 555.
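The arithmetic of this 4:1 aggregation can be sketched as follows, with voltage binning modeled as simple charge-sharing (averaging) of the two column samples, so the single ADC result represents the four-pixel aggregate at half gain; all values and names are illustrative assumptions.

```python
# Illustrative arithmetic for the 4:1 aggregation: 2:1 charge binning in each
# shared floating diffusion, then 2:1 voltage binning (modeled as charge-sharing
# averaging) across the two columns in the shared sample-and-hold element.
def charge_bin(q_a, q_b):
    """Two like-colored photodiodes dump into one shared floating diffusion."""
    return q_a + q_b

def voltage_bin(v_col_j, v_col_j2):
    """Two column samples shared on one S/H element; assumed simple averaging."""
    return (v_col_j + v_col_j2) / 2

fd_j  = charge_bin(100, 120)      # corner pixels of one quad-pixel block (column j)
fd_j2 = charge_bin(110, 130)      # corner pixels of the block two columns over (j+2)
print(voltage_bin(fd_j, fd_j2))   # one ADC conversion for all four pixels -> 230.0
```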

Fig. 23 illustrates an exemplary timing diagram for a binned-mode readout operation within the 4x1 quad-pixel architecture of figs. 21 and 22. In the illustrated example, the row lines for row i and row i+2 are operated in lock-step to achieve 2:1 charge binning within the shared floating diffusion of a given quad-pixel block. More specifically, the row signals for pixel rows 1 and 3 of a 4x1 quad-pixel block (or of a row of such quad-pixel blocks) are asserted in unison, followed by the lock-step assertion of the row signals for pixel rows 2 and 4, before proceeding to the assertion of the row signals for the next row of 4x1 quad-pixel blocks. Lateral connections are established within the sample-and-hold switching elements (e.g., at 561 and 562 of the sample-and-hold block 553 shown in fig. 22) to enable 2:1 voltage binning, and thus an overall 4:1 analog signal summation and the accompanying image decimation.

More specifically referring to fig. 23, the row select signal (RS), reset gate signal (RG), and row transfer gate signals (TGr1 and TGr3, or "TGr1,3") are operated to reset the photodiodes and shared floating diffusions of the selected rows of pixels during hard reset phase 1; charge integration is allowed during integration phase 2; during threshold test phase 3, it is determined whether the charge-stitched and voltage-stitched charge accumulation within each column-interleaved aggregate of 4 pixels (i.e., the 3x3 corner pixels described with reference to figs. 21 and 22) exceeds the conditional reset threshold; and, if an over-threshold condition is detected, the fully charge-stitched and voltage-stitched accumulated charge within the subject pixel aggregate is conditionally read out and digitized in conditional readout phase 4, prior to transferring the digitized pixel values to downstream (on-chip or off-chip) processing logic in output phase 5. Considering these phases one by one, in hard reset phase 1 the row transfer gate signals TGr1 and TGr3 are pulsed to the full transfer potential VTGfull (as shown at 570) while the column transfer gate signal TGc is raised, thereby transferring the accumulated charge from photodiodes PD1 and PD3 to their shared floating diffusion node. After the photodiode charge has been transferred to the floating diffusion, the reset signal RG is pulsed at 572 to clear charge from the floating diffusion in preparation for the charge integration of phase 2. At the beginning of threshold test phase 3, the reset signal is pulsed again (574) to reset the floating diffusion, and then, at 576 and 578 (while RSi is asserted), signals SHRsa and SHRadc are pulsed to capture samples of the floating diffusion reset state within the sample-and-hold elements for the sense amplifier and the ADC. At 580, TGr1 and TGr3 are raised to the partial-on transfer potential VTGpartial to enable charge to spill over to the shared floating diffusion if an over-threshold condition exists in the photodiode of the subject pixel. Then, at 582, the SHSsa signal is pulsed and the laterally interconnecting switching elements (e.g., transistors) within the sample-and-hold bank are switched to a conducting state to capture the signal state of the floating diffusion nodes of the associated columns (columns j and j+2 in the illustrated embodiment) within the shared sample-and-hold element, thereby voltage-stitching the two charge-stitched overflow samples. The threshold test phase is concluded by lowering the TGc signal and asserting the compare strobe at 584 to trigger a threshold comparison within the sense amplifier, comparing the aggregate overflow charge from the 4 charge/voltage-binned pixels to the conditional reset threshold. If the comparison indicates an over-threshold condition, then while VTGfull is applied to the TGr1 and TGr3 lines, the TGc signal is pulsed at 586 (thereby enabling complete readout of photodiodes PD1 and PD3 to the shared floating diffusion within the corresponding quad-pixel block), and then at 588 the SHSadc signal is raised to capture the signal state of the floating diffusion nodes of the switch-interconnected pixel columns within the signal-state sample-and-hold element for the ADC (i.e., voltage-stitching the charge-stitched floating diffusion contents).
Thereafter, at 590, the convert strobe is pulsed to trigger ADC operation on the voltage/charge-stitched signal state captured within the sample-and-hold circuit (if present), after which the ADC output is transferred in output phase 5. As discussed above, if an over-threshold condition is not detected in threshold test phase 3, the ADC and data transfer operations may be suppressed to save power and reduce signaling bandwidth.

Fig. 24 illustrates a more detailed embodiment of an image sensor 600 having an array of 4x1 quad-pixel blocks 601 operable in the decimation (stitching) mode described with reference to figs. 21-23. As in the embodiment of fig. 14, the row decoder/driver 605 receives transfer gate voltages (e.g., VTGpartial, VTGfull, and VTGoff) from an on-chip or off-chip voltage source 309, together with row address values and a row clock (for controlling row signal timing) from sequencing logic 603, and outputs row control signals RG, RS, and TGr1 through TGr4 in response. The sequencing logic additionally outputs a set of readout control signals to the column readout circuitry 607, including a column clock signal (which may be composed of multiple timing/control signals for timing operation of the sense amplifiers, ADCs, memory buffers, etc. within the column readout circuitry 607), the compare and convert strobe signals described above, a column transfer gate enable signal (TGcEn), and SHR and SHS signals (which may include separate signals for the sense amplifier and ADC sample-and-hold elements). The sequencing logic also outputs a decimation mode signal ("Dec mode") to both the column readout circuitry 607 and the row decoder/driver 605 to enable/disable the charge-stitching and voltage-stitching operations described above. For example, in one embodiment, the decimation mode signal may be placed in one of at least two possible states (e.g., according to a decimation mode setting within programmable configuration register 604), including: a stitching-disabled state in which pixel rows and pixel columns are operated individually to enable full-resolution image readout; and a stitching-enabled state in which the row decoder/driver asserts row signal pairs (e.g., TGr1/TGr3 and TGr2/TGr4) in lock-step to achieve charge stitching within the shared floating diffusion structures, and in which the column readout lines (Vout) for even and odd column pairs are coupled laterally to enable voltage stitching within the sample-and-hold elements.

Still referring to the embodiment of fig. 24, in addition to the sense amplifiers 617 and TGc logic gates 619 (which generally operate as described above), the column readout circuitry 607 also includes a set of column eclipse detection circuits 615, each coupled to receive a pixel reset signal from sample-and-hold block 609 and each including circuitry for determining whether a photodiode measurement (whether stitched or full-resolution) exceeds a saturation threshold. If a given eclipse detector 615 (e.g., implemented by a threshold comparator) detects a saturation condition (i.e., the saturation threshold is exceeded), the eclipse detector raises an eclipse signal at an enable input of ADC circuitry 611 to disable ADC operation therein. The eclipse signal is also output to line memory elements 621 to qualify the ADC output, being recorded within line memory elements 621 as a logic '1' mask bit if a saturation condition is detected (thereby indicating that the ADC output is invalid and should instead be represented by the maximum readout value), and as a logic '0' mask bit otherwise. The mask bit and the under-threshold bit recorded for each pixel column together qualify the corresponding ADC output as follows (where 'X' indicates a don't-care state):

TABLE 1

Under-threshold | Eclipse (mask) | ADC value | Interpretation
0 | 0 | Invalid | Under-threshold: assume ADC output is 0
X | 1 | Invalid | Saturated (eclipsed): assume ADC output is all '1's
1 | 0 | Valid | Over-threshold, but not saturated
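As a concrete illustration, the mapping of Table 1 onto reconstructed pixel values can be expressed in a few lines; the sketch below keys on semantic flags rather than raw register bits (so it does not depend on the exact bit polarity), and the 8-bit ADC width is an assumption for illustration.

```python
# Illustrative expansion of the per-column flags of Table 1 into pixel values.
ADC_MAX = 0xFF  # assumed 8-bit ADC full-scale code

def reconstruct_pixel(is_eclipsed: bool, is_under_threshold: bool, adc_code: int) -> int:
    """Expand the eclipse (mask) and under-threshold flags recorded with each ADC output."""
    if is_eclipsed:            # saturation detected: ADC code is invalid, substitute full scale
        return ADC_MAX
    if is_under_threshold:     # conditional-read threshold not exceeded: no charge was read out
        return 0
    return adc_code            # over-threshold but unsaturated: ADC code is valid

print(reconstruct_pixel(False, False, 0x6E))   # -> 110 (valid conversion)
print(reconstruct_pixel(True, False, 0x00))    # -> 255 (eclipsed)
```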

Still referring to fig. 24, when a stitching mode is set that enables voltage stitching between column pairs (e.g., between even-numbered columns and between odd-numbered columns), the sense amplifier and ADC within one column of each stitched column pair may be disabled to conserve power, with the transmitted data stream being decimated accordingly.

Dynamic gain pixel readout

As briefly mentioned in connection with figs. 19 and 20, different gains may be applied during partial and full readout operations. That is, because the overflow charge sampled during a partial readout may be very small (i.e., the charge integration level barely exceeds the conditional reset threshold), it may be advantageous to apply a relatively high gain during partial readout. Conversely, because a full readout may yield anything between the minimum and maximum charge integration levels, a significantly lower gain may be applied to map those charge levels onto the minimum and maximum ADC output values. Thus, in a number of embodiments herein (including the embodiments described above with reference to figs. 19-24), different gains are applied by the column readout circuitry during partial readout operations and full readout operations.

Fig. 25A illustrates an embodiment of a selectable-gain (or multi-gain) readout circuit that can be used to achieve high-gain partial readout and near-unity-gain full readout within a column of pixels. More specifically, in the illustrated embodiment, multiplexers 651 and 653 are used to establish, depending on the state of multiplexer control signals CS and SF, either a common-source amplifier configuration (gain approximately gm*RL, where gm is the transconductance of gain transistor M1, RL is the load resistance, and '*' denotes multiplication) or a source follower configuration (unity or approximately unity gain). In the common-source amplifier configuration (CS=1, SF=0), multiplexer 653 couples column line Col2 to the voltage supply rail Vdd through load resistor RL (655), while multiplexer 651 couples column line Col1 to ground. As shown, Col2 is coupled to the drain terminal of row select transistor 683, so that Vout1 varies according to the current flowing through transistor M1, which is in turn a function of the supplied gate voltage (the floating diffusion charge level) and the transconductance of the transistor. More specifically, as can be understood from fig. 25B (which illustrates the common-source gain configuration), Vout1 is given by Vdd − IM1*RL, so that the gain Vout1/VFD approaches gm*RL, where gm represents the transconductance of transistor M1. Therefore, by properly designing M1 and/or RL, a common-source gain substantially greater than unity can be achieved, thereby increasing sensitivity to the small charge levels that may spill over to the floating diffusion structure during a partial readout operation. Note that reset transistor 685 is also coupled to the Col2 line, thereby enabling the floating diffusion to be pulled up (i.e., reset) in response to RG signal assertion while in the common-source gain configuration.
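For a rough sense of scale, the arithmetic below evaluates the common-source gain relation discussed above; the transconductance and load-resistance values are illustrative assumptions only, chosen to show a gain well above unity alongside the 0.85x source-follower gain quoted for fig. 25C.

```python
# Illustrative small-signal gains for the two configurations of Fig. 25A (assumed values).
gm = 0.5e-3     # assumed transconductance of M1: 0.5 mA/V
RL = 20e3       # assumed load resistance RL: 20 kilohms

gain_cs = gm * RL     # |dVout1/dVFD| in the common-source configuration
gain_sf = 0.85        # source-follower gain quoted in the text for Fig. 25C

print(f"common-source gain ~ {gain_cs:.1f}x, source-follower gain ~ {gain_sf:.2f}x")
```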

In the source follower configuration (SF=1, CS=0), multiplexer 653 couples current source 657 to the Col2 line, and multiplexer 651 couples column line Col1 to Vdd, thereby establishing M1 as a source follower amplifier (i.e., the output voltage at the M1 source, and thus Vout2, follows the floating diffusion voltage applied at the gate of M1), as shown in fig. 25C. More specifically, to maintain a substantially constant current through the Col2 line, the feedback loop that regulates the constant current source raises or lowers the potential at Vout2 as needed to offset any change in the conductance of transistor M1. Thus, assuming a substantially linear transconductance in M1, the current source raises and lowers Vout2 in a substantially linear relationship with increases and decreases in the floating diffusion potential, thereby maintaining a substantially constant ratio between Vout2 and VFD. In the illustrated embodiment, that proportionality constant is slightly less than unity in the source follower configuration (i.e., 0.85 in the particular example depicted, although other proportionality constants, including unity gain, may be realized in alternative embodiments or other programmed configurations).

Still referring to fig. 25A, separate sets of sample-and-hold elements (e.g., sets of capacitive elements and switching elements) 669 and 671 are coupled to the Vout1 and Vout2 nodes, respectively, to accommodate the different gain configurations applied during partial and full readout operations, with separate sets of reset-state and signal-state sampling enable signals applied to the two sample-and-hold circuits. In the example shown, partial-readout sample-and-hold circuit 669 (controlled by signals SRcs and SScs in the common-source gain configuration) provides a differential output (i.e., signal-state sample minus reset-state sample) to sense amplifier circuit 675, while full-readout sample-and-hold circuit 671 (controlled by signals SRsf and SSsf in the source-follower gain configuration) provides a differential output to ADC 677. As with all embodiments having both sense amplifiers and ADCs, the sense amplifiers may be omitted and the ADC applied during both partial and full readout operations, as discussed with reference to figs. 15B and 15C. In such an ADC-only implementation, the outputs of sample-and-hold circuits 669 and 671 may be multiplexed to the input of ADC 677 according to the states of the CS and SF signals. In embodiments where the CS and SF signals always have complementary states, a single signal may alternatively be used to switch between the common-source and source-follower gain configurations.

Fig. 26 presents an exemplary timing diagram illustrating the alternating application of the common-source and source-follower gain configurations during the hard reset, integration, partial readout, and (conditional) full readout operations within the multi-gain architecture of fig. 25A. As shown, the common-source enable signal (CS) is asserted at 686 while the hard-reset RG pulse is applied (i.e., in preparation for charge integration), and again at 688 while RG is pulsed to reset the floating diffusion in preparation for reset-state sampling. During at least part of the charge integration period, signal gain may be disabled altogether to conserve power (i.e., both the SF and CS control signals lowered, as shown), although one or both gain modes may in practice be applied during this period to enable operations in other rows of pixels. During reset-state sampling, the common-source and source-follower gain configurations may be enabled one after the other (i.e., CS held high while SF is low, and then the configuration inverted), as shown at 690 and 692, with reset-state sampling signals SRcs and SRsf pulsed at 694 and 696, respectively, to capture reset-state samples within the separate sample-and-hold circuits provided for the two gain configurations while each configuration is in place. Thereafter, CS is raised (and SF lowered) at 698 to apply the common-source gain configuration during the partial readout operation (effected by raising TGr to the partially conductive state at 700 while TGc remains high, and concluded by assertion of the SScs signal and the compare strobe), and SF is then raised (and CS lowered) at 702 to apply the source-follower gain configuration during the immediately following conditional full readout operation (effected by raising TGr to the full readout potential at 704 while conditionally pulsing the TGc signal, and concluded by assertion of the SSsf signal and the convert strobe).

Reflecting on the multi-gain architecture described with reference to figs. 25A-25C and 26, it should be noted that other gain configurations, or combinations of gain configurations, may be used in alternative embodiments. For example, as shown in fig. 27, two different common-source gains may be achieved by coupling different pull-up resistors (RL1 and RL2) to the Col2 line via multiplexer 701 and then selecting one gain or the other (i.e., by appropriate assertion of control signals CS1 and CS2), generally as described with reference to fig. 26. In another embodiment, a programmable gain amplifier may be coupled to the Col2 line and/or the Col1 line and switched between programmed settings to achieve different partial and full readout gains. More generally, any practical configuration or architecture that enables adjustment of the gain applied during partial and full readout operations may be employed in alternative embodiments.

Image sensor architecture, system architecture

Fig. 28 illustrates an embodiment of an image sensor having a pixel array 731 disposed between upper and lower readout circuits 732.1 and 732.2. The readout circuits are coupled to respective halves of the pixel rows in the array and are operable in parallel, thereby halving the time required to scan through the rows of the pixel array. In one embodiment, the rows of pixels are distributed between the upper and lower readout circuits according to the physical half of the pixel array in which they reside. For example, all upper pixel rows (i.e., above the physical midpoint) may be coupled to the upper readout circuitry and all lower pixel rows to the lower readout circuitry, thereby approximately halving the overall column line length (reducing the capacitance, noise, required drive power, etc. of each Vout and reset feedback (TGc) line). In other embodiments, the pixel row interconnections to the upper and lower readout circuits may be interleaved across the rows of the pixel array, with connections to the upper and lower readout circuits alternating for each successive block of pixel rows (e.g., every 4th row where the pixel array is populated by the 4x1 quad-pixel blocks shown in fig. 21, every 2nd row where the pixel array is populated by the 2x2 quad-pixel blocks shown in figs. 16 and 17, or every other row where the pixel array is populated by pixels having dedicated Vout interconnections). In the illustrated embodiment, each readout circuit (732.1 and 732.2) includes a sample-and-hold bank 733 (e.g., including capacitive storage elements and switching elements for each column, as described above), a sense amplifier bank 735 (including a sense amplifier circuit (or latch) and reset feedback logic for each column), a bank of per-column ADCs 737, and a digital line memory 739. In embodiments where the per-column ADCs are applied to digitize the partial readout samples, the sense amplifier bank 735 may be omitted and each per-column ADC equipped with a digital comparator to generate the reset feedback signal (i.e., the conditional reset signal TGc). Also, the sample-and-hold banks may include lateral switching elements as described with reference to fig. 22 to support voltage-stitching operations. More generally, the various circuit blocks of the upper and lower readout circuits may be operated and/or configured as described above to support the various decimation modes and readout options. Although not specifically shown, the upper and lower digital line memories 739 may feed a shared physical output driver (PHY) disposed, for example, at the left or right side of the pixel array and coupled to receive data from each line memory in parallel. Alternatively, separate PHYs may be provided for the two line memories, for example at opposite edges of the image sensor IC. Further, while the upper and lower readout circuits may be implemented on the same physical die as pixel array 731 (e.g., at the periphery of the die, sandwiching the pixel array between them, or at the center of the die between respective halves of the pixel array), the readout circuitry may alternatively be located on another die (e.g., coupled to the pixel array die in a stacked die configuration that may additionally include other imaging-related dies).

Fig. 29 illustrates an embodiment of an imaging system 800 having an image sensor 801, an image processor 803, memory 805, and a display 807. Image sensor 801 includes a pixel array 811 of temporally oversampled, conditionally reset pixels in accordance with any of the embodiments disclosed herein, and further includes the pixel control and readout circuitry described above, including row logic 815, column logic 817, line memory 819, and PHY 821. The image processor 803 (which may be implemented as a system-on-chip or the like) includes an image signal processor (ISP) 831 and an application processor 833, coupled to one another via one or more interconnect buses or links 836. As shown, ISP 831 is coupled via PHY 827 (and one or more signaling links 822, which may be implemented, for example, by a Mobile Industry Processor Interface ("MIPI") bus or any other practical signaling interface) to receive imaging data from the pixel array, and the ISP and application processor are coupled via interconnect 836 to a memory control interface 835 and a user interface port 837. Further, as explained below, interconnect 836 may also be coupled via side channel 838 to the image sensor interface of ISP 831 (i.e., the ISP interface to PHY 827) to enable the application processor to deliver data to the ISP in a manner that emulates an image sensor.

Still referring to fig. 29, the imaging system 800 further includes one or more memory components 805 coupled to a memory control interface 835 for the image processor 803. In the illustrated example, and in the discussion that follows, it is assumed that the memory components include Dynamic Random Access Memory (DRAM) that may be used as a buffer for image sub-frame data and/or a frame buffer for other functions. The memory component may additionally include one or more non-volatile memories for long-term storage of processed images.

A user interface port 837 is coupled to the user display 807, which user display 807 may itself include a frame memory (or frame buffer) for storing images (e.g., still image frames or video frames) to be displayed to the user. Although not shown in the figure, the user interface port 837 may also be coupled to a keyboard, touch screen, or other user input circuitry capable of providing information to the image processor 803 corresponding to user input, including operating mode information that may be used to configure the decimation mode within the image sensor 801. Although also not shown in the figure, the image processor 803 may be coupled to the image sensor 801 through a sideband channel or other control interface to allow for transmission of operating modes, configuration information, operating trigger instructions (including image capture instructions, configuration programming instructions, etc.) and the like to the image sensor.

Fig. 30 illustrates an exemplary sequence of operations that may be performed within the imaging system of fig. 29 in conjunction with image capture and processing. Beginning at 851, the application processor configures ISP 831 for DMA (direct memory access) operation with respect to memory control interface 835, and thus with respect to memory IC 805. With this arrangement, the ISP is enabled to operate as a DMA controller between image sensor 801 and memory IC 805, receiving sub-frame data from image sensor 801 (as shown at 853) and transferring the sub-frame data to the memory IC on a row-by-row basis. Thus, in effect, the sub-frame data generated by temporal oversampling within image sensor 801 is pipelined directly through the ISP to the memory IC (e.g., DRAM), where it can be accessed by the application processor. Note that in the illustrated embodiment the sub-frames are loaded into memory one after another until the last sub-frame has been received and stored (the sub-frame-by-sub-frame storage loop and its eventual termination are reflected in decision block 855). In an alternative embodiment, the process may be streamlined by omitting storage of the final sub-frame in memory IC 805 and instead transmitting it directly to application processor 833. That is, as shown at 857, the application processor retrieves and combines (e.g., sums) the stored sub-frames to produce a merged (integrated) image frame, so that, instead of storing the final sub-frame in memory and then reading it back immediately, the final sub-frame can be delivered directly to the application processor to serve as a starting point for the sub-frame data merge. In either case, at 859 the application processor configures ISP 831 for operation in an image processing mode and, at 861, outputs the merged image frame data (i.e., the combination of temporally oversampled image sensor data) to the ISP's image sensor interface (i.e., to the ISP front end via channel 838), thus emulating an image sensor transmitting a full image frame to ISP 831. At 863, the ISP processes the image frame delivered by the application processor to produce a finalized image frame, for example writing the completed (processed) image frame to DRAM or non-volatile memory (i.e., one or both of memory ICs 805) and/or directly to a frame buffer within display 807 to enable the image to be displayed to the system user.
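A minimal sketch of the merge step at 857 is shown below, with synthetic arrays standing in for the DRAM-buffered sub-frames; the array shapes, bit depths, and function name are illustrative assumptions, not part of the fig. 30 flow itself.

```python
# Sketch of combining (summing) temporally oversampled sub-frames into one image frame.
import numpy as np

def merge_subframes(subframes):
    """Sum per-pixel sub-frame data (e.g., uint16) into a single wider-precision frame."""
    merged = np.zeros_like(subframes[0], dtype=np.uint32)
    for sf in subframes:
        merged += sf          # step 857: accumulate each stored sub-frame
    return merged

# Synthetic stand-ins for four sub-frames that the ISP piped into DRAM:
rng = np.random.default_rng(0)
subframes = [rng.integers(0, 1024, size=(1080, 1920), dtype=np.uint16) for _ in range(4)]
frame = merge_subframes(subframes)    # this merged frame is then fed to the ISP front end (861)
```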

Method for reproducing image

Two main methods are applied here to reproduce a linear-light representation from a sequence of conditional exposures. The first is sum-and-lookup; the second is weighted averaging. Fig. 31 contains a log-log plot of exposure value (EV) versus light intensity (in lux) after the initial calculation for both methods. While both exhibit similar dynamic range and are nearly linear over the lower light range, the sum-and-lookup method has a shoulder at high intensities, while the weighted-averaging method remains linear until the top of its dynamic range is reached.

In the first method, the digital conversion values for all frame sub-exposures exceeding the threshold are summed for a given pixel. If a given sub-exposure results in a saturated pixel, the data number used in the summation for that sub-exposure is one greater than the highest data number of an unsaturated pixel. After summing, a lookup operation is performed against a lookup table (LUT) that maps pre-computed responses to light intensities. The lookup table may be computed using an analytical model of the sensor response, statistical estimation using Monte Carlo simulation, or maximum-likelihood estimation, or it may be based on characterization data from the sensor hardware. The main advantages of this mode are runtime simplicity and the ability to create partial sums of the responses (e.g., on-sensor, to reduce off-chip bandwidth). However, because the values for all sub-exposures are fused, the lookup table assumes a constant illumination intensity for the pixel across the entire frame; if there is motion in the scene or of the camera platform, or the illumination changes over the duration of the frame, the lookup-table method may not accurately estimate the average linear-light value. Also, because the LUT is pre-computed, only preset exposure timings with matching LUTs can be processed on-system, and deviations in response, threshold, etc. can likewise cause errors.
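A minimal sketch of the sum-and-lookup reproduction is shown below; the placeholder LUT and data-number constants are assumptions for illustration, while the saturation convention (one more than the highest unsaturated data number) follows the text above.

```python
# Sketch of sum-and-lookup reproduction for a single pixel.
import numpy as np

ADC_MAX = 255                    # assumed highest data number of an unsaturated conversion
SATURATED_CODE = ADC_MAX + 1     # a saturated sub-exposure contributes one more than ADC_MAX
MAX_SUBEXPOSURES = 8             # assumed upper bound on sub-exposures per frame

def sum_and_lookup(over_threshold_codes, lut):
    """over_threshold_codes: data numbers of the sub-exposures that exceeded the threshold,
    with saturated sub-exposures already replaced by SATURATED_CODE."""
    total = sum(over_threshold_codes)     # sum the digital conversion values
    return lut[total]                     # map the summed response to a light intensity

# Placeholder LUT; in practice it would come from an analytical model, Monte Carlo
# simulation, maximum-likelihood estimation, or sensor characterization data.
lut = np.linspace(0.0, 10000.0, num=MAX_SUBEXPOSURES * SATURATED_CODE + 1)
print(sum_and_lookup([120, SATURATED_CODE, 87], lut))
```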

In the second method, the intensity of the light impinging on a pixel over the frame duration Ttotal is estimated directly from the pattern of digital values/saturation events and the lengths of time over which the integration creating those values/events occurred. Each pixel is represented by a set of samples Si, with each sample occupying a corresponding time period Ti. Each sample has a value 0 ≤ Si ≤ Smax. For a conditional-read sub-exposure the value cannot be 0; the value can be 0 only for an unconditional-readout sub-exposure. A value of Smax indicates that the pixel saturated during its time period Ti. In a first calculation, an intensity estimate is computed over all unsaturated samples (no sample is taken when a pixel does not exceed the conditional threshold at the end of a sub-exposure), should such samples exist:

\hat{I} = \frac{\sum_i w_i S_i}{\sum_i w_i T_i} \quad \text{(summed over the unsaturated samples)}

For Poisson-distributed noise, the weights w_i are set to 1. For Gaussian-noise-dominated operation, the weights may be set to T_i. Other weights may be selected if desired. In a second calculation, if one or more saturated samples exist, the intensity estimate is updated to include the saturated sample having the shortest duration T_k, provided that the value predicted for that exposure time (based on the samples already included in the estimate),

\hat{S}_k = \hat{I} \cdot T_k ,

is less than S_{max}. The updated estimate may be calculated in a number of ways, including:

\hat{I} = \frac{\sum_i w_i S_i + w_k S_{max}}{\sum_i w_i T_i + w_k T_k}

If there is more than one saturated sample, the update process may be repeated for the saturated sample with the second-shortest duration, and so on, until the last such sample is reached or the above condition is no longer satisfied. If only saturated samples are present, the best intensity estimate is obtained from the saturated sample having the shortest duration T_k:

\hat{I} = \frac{S_{max}}{T_k}
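The sketch below implements the weighted-average reproduction as reconstructed above; the rule used for folding in saturated samples (admitting each qualifying saturated sample as a reading of S_max) is one plausible reading of "a number of ways" and should be treated as an assumption.

```python
# Sketch of weighted-average reproduction for a single pixel.
def weighted_average_intensity(samples, durations, s_max, weights=None):
    """samples[i]: data number of sub-exposure i; durations[i]: its integration time.
    A value of s_max marks a saturated sample. weights default to 1 (Poisson-noise case);
    use weights = durations for Gaussian-noise-dominated operation."""
    n = len(samples)
    w = weights if weights is not None else [1.0] * n

    unsat = [i for i in range(n) if samples[i] < s_max]
    sat = sorted((i for i in range(n) if samples[i] >= s_max), key=lambda i: durations[i])

    if not unsat:                            # only saturated samples: bound from the shortest one
        return s_max / durations[sat[0]]

    num = sum(w[i] * samples[i] for i in unsat)
    den = sum(w[i] * durations[i] for i in unsat)
    for k in sat:                            # shortest saturated duration first
        if (num / den) * durations[k] >= s_max:
            break                            # predicted value already at saturation: stop updating
        num += w[k] * s_max                  # otherwise admit the saturated sample as s_max
        den += w[k] * durations[k]
    return num / den

# Three 4 ms sub-exposures, the last one saturated at s_max = 255:
print(weighted_average_intensity([120, 130, 255], [4.0, 4.0, 4.0], s_max=255))
```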

data rate limited sensor operation

In some embodiments, data storage on the sensor is limited to far less than a sub-frame, and the sub-frame rate is limited by at least one of the ADC conversion rate and the channel rate available between the sensor and the attached processing/storage system. Figs. 32-39 illustrate a set of conditional/unconditional readout mode timing diagrams that may be supported in such an embodiment. Each diagram plots unconditional reset operations, conditional read/reset operations, and unconditional read operations as a function of row number and time.

Fig. 32 depicts a first HDR mode of operation in the context of 1080P video imaging at a 60 Hz frame rate. The assumption underlying this timing is that the ADC conversion rate and/or channel rate allows up to four full sub-exposure passes over the operating pixels every 1/60 second (for simplicity, each sub-exposure is illustrated as lasting 4 ms, corresponding to a 62.5 Hz rate). For the first frame, frame 1, proceeding from row R1 to row R1080, the sensor unconditionally resets the pixels of each row (or stitched set of rows, as described above) at a rate that completes all rows of the reset processing period within 4 ms. At the end of this 4 ms processing period, the first sub-exposure period SE1 is complete for row R1, and the sensor performs a conditional read/reset operation on row R1 and places the result in the transfer buffer. Proceeding from row R2 to row R1080 at the same rate as the unconditional reset pass, the sensor performs a similar conditional read/reset operation on each row in turn and places the results in the transfer buffer, completing the first sub-exposure period SE1 in row order over a 4 ms period, as before.

The sensor then transitions operation back to row 1, and a second conditional read/reset processing period is performed on the array during sub-exposure period SE2. During this second processing period, pixels that exceeded the threshold and were reset during the first processing period are evaluated after 4 ms of light integration, while pixels that were not reset during the first processing period are evaluated after 8 ms of light integration.

After the second conditional read/reset processing period, the sensor performs a similar third processing period during sub-exposure period SE3. During this third processing period, there are three possibilities for the pixel light integration period: 4 ms for pixels reset at the end of SE2; 8 ms for pixels last reset at the end of SE1; and 12 ms for pixels that have not yet exceeded the threshold during frame 1.

After the third processing period is complete, the array is subjected to a final processing period in which, following sub-exposure period SE4, the ADC value for each pixel in each row is unconditionally read out and transferred. During this fourth processing period, there are four possibilities for the pixel light integration period: 4 ms for pixels reset at the end of SE3; 8 ms for pixels last reset at the end of SE2; 12 ms for pixels last reset at the end of SE1; and 16 ms for pixels that never exceeded the threshold during frame 1. Because the ratio of the shortest to the longest integration time for frame 1 is 1:4, the HDR mode of operation of fig. 32 adds two photographic aperture steps (stops) to the frame dynamic range compared to a sensor that operates with a single exposure every 16 ms (the additional dynamic range may lie above, below, or both above and below the dynamic range of the single-exposure case, depending on the exposure length selected for that case).
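For reference, the two-stop figure follows directly from the integration-time ratio, as the short calculation below shows.

```python
# Added photographic aperture steps (stops) = log2(longest / shortest integration time).
import math

shortest_ms, longest_ms = 4.0, 16.0
print(math.log2(longest_ms / shortest_ms))   # -> 2.0 stops for the Fig. 32 mode
```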

The unconditional readout at the end of period SE4 also serves as the unconditional reset that starts sub-exposure period SE1 of the next frame, frame 2. Frame 2 is collected in the same manner as frame 1.

When using the HDR mode of operation depicted in fig. 32 in low-light scenes, it may be beneficial to operate the sensor at an ISO above the base ISO to capture shadow areas with less readout noise, and to maintain a higher frame rate (for video) or reduce blur due to hand shake (for still image capture). For example, if it is desired to capture four sub-exposures at ISO 400, the first sub-exposure may instead be captured at ISO 100, the second at ISO 200, and the third and fourth at ISO 400 (an ISO sequence of 100-200-400-400). Since the first sub-exposure then has one quarter the sensitivity of the last two sub-exposures, an additional two aperture steps of dynamic range are available.

To implement such a varying-ISO mode, it may be desirable to adjust the thresholds for some or all of the conditional-read sub-exposures. For example, in the 100-200-400-400 sequence above, a pixel that falls just short of the threshold at the end of SE1 continues integrating until the end of SE2, over which interval the analog gain doubles and the total integration time also doubles. To avoid saturating the SE2 readout for such a pixel, the threshold for the first sub-exposure should therefore be less than 1/4 of the full ADC range. Likewise, a pixel not reset at the end of the first or second sub-exposure may have a value slightly below the second sub-exposure threshold and will then integrate until the end of the third sub-exposure; since the analog gain doubles again and the integration time increases by 50%, the threshold for the second sub-exposure should be less than 1/3 of the full ADC range. A similar analysis for the third sub-exposure threshold yields a recommended value of less than about 37% of the full ADC range.
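The 1/4 and 1/3 limits above follow from the product of the gain ratio and the integration-time ratio between one readout and the next; the short check below assumes that simple model (the ~37% figure quoted for the third sub-exposure reflects further considerations not captured here).

```python
# Threshold upper bound (as a fraction of full ADC range) so that a pixel just below the
# threshold at one readout cannot saturate by the next readout.
def max_threshold_fraction(gain_ratio, time_ratio):
    return 1.0 / (gain_ratio * time_ratio)

print(max_threshold_fraction(2.0, 8.0 / 4.0))    # SE1 threshold: 0.25 of full range
print(max_threshold_fraction(2.0, 12.0 / 8.0))   # SE2 threshold: ~0.33 of full range
```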

When the ISO is changed during the exposure, the rendering system must also be aware of the ISO schedule. The weighted-average reproduction for unsaturated pixels should be adjusted to:

\hat{I} = \frac{\sum_i w_i \, S_i / g_i}{\sum_i w_i T_i}, \quad \text{where } g_i \text{ is the relative gain (ISO) applied during sub-exposure } i

The equation accounting for saturated pixels is adjusted similarly. In addition, the weights may be adjusted to account for the readout noise changing with ISO. For the sum-and-lookup reproduction method, the values should be scaled by ISO before summing.

Fig. 33 depicts a second HDR mode of operation in the context of 1080P video imaging at a 60 Hz frame rate. This second mode further expands the dynamic range relative to the first mode, at the cost of losing less than 1/3 of a photographic aperture step of sensitivity at the low-light end of the dynamic range. In this second HDR mode, the several conditional read/reset processing periods and the final unconditional read processing period are executed with the same timing as in the first HDR mode. The difference from fig. 32 is the use of an unconditional reset processing period, timed to reduce the effective length of sub-exposure period SE1. Light capture for frame 2 on a given row does not begin with the unconditional readout of that row that ends frame 1; instead, a dedicated unconditional reset is scheduled for that row 1 ms before the end of the SE1 conditional read/reset processing period. As a result, the charge converted by the pixels of that row during the 3 ms following the end of frame 1 is discarded, and SE1 is shortened so that greater intensities can be sensed without saturation. A particular pixel may thus have possible integration periods of 1, 4, 5, 8, 9, and 13 ms during one frame. The dynamic range is extended by 3-2/3 photographic aperture steps compared to a sensor operating with a single exposure every 16 ms. Note that with the mode of operation of fig. 33, the duration of the first sub-exposure period can be set to almost any selected value up to 4 ms. The variable-ISO technique may also be used in this mode, although ISO should generally remain constant between SE1 and SE2 because the integration period already changes by a factor of 5. By doubling the ISO between SE2 and SE3 and again between SE3 and SE4, an additional 2 photographic aperture steps of dynamic range are available under low-light conditions.

Another consideration when the length of SE1 is shortened is that the conditional-read thresholds may need to be adjusted accordingly. For example, if the threshold used in fig. 32 is 40% of the saturation value, the threshold during SE1 in fig. 33 needs to be lowered to about 20% of the saturation value or less to ensure that pixels that just fail to trip the threshold at the end of SE1 do not saturate, or come close to saturating, by the end of SE2. Implementing a workable threshold can become infeasible when the ratio between any possible integration time and that same integration time extended by one more sub-exposure becomes greater than about 5.

For still brighter conditions, fig. 34 illustrates a mode of operation in which the dynamic range is extended further without pushing the aforementioned ratio beyond 5. Fig. 34 still uses 4 readout periods, but terminates the first of the 4 readout periods of each frame with an unconditional readout. This allows each frame to have two initial integration periods (SE1 and SE2), each of which may be shorter than 4 ms. In the example of fig. 34, the lengths of these periods have been set to 0.25 ms and 1 ms, respectively. A particular pixel may thus have possible integration periods of 0.25, 1, 4, 5, 8, and 9 ms during a frame. The dynamic range is extended by more than 5 aperture steps compared to a sensor that operates a single exposure every 16 ms.

Fig. 35 demonstrates an exemplary "preview mode" that may be appropriate, for example, when the sensor is operated in a reduced power/data-rate mode to allow a user to compose an image, set zoom, and so on. In preview mode, only one reset and one unconditional readout are scheduled for each row of each frame. However, the even and odd frames use different integration periods (a 16:1 ratio is shown), where each integration period may be placed anywhere within its frame period as long as the two unconditional readout processing periods are separated by at least 4 ms. Simply viewing the frames at full rate creates a "pseudo-HDR" effect. Alternatively, each pair of adjacent frames can be fused using conventional HDR frame fusion to create a preview that simulates HDR capture. Preferably, when the user initiates capture of a still frame or recorded video, the capture mode switches to a more capable mode such as those shown in figs. 32-34. For auto-exposure operation, the preview mode may also help select the appropriate full HDR mode for the next capture.

For the same imager example as in figs. 32-35, fig. 36 illustrates a 30 Hz capture mode with up to 8 sub-exposure captures per frame period and additional flexibility in the exposure strategy. In this particular example, 6 sub-exposure periods (3 unconditional, 3 conditional) are used, providing possible exposure times of 0.125, 0.5, 2, 4, 6, 8, 12, 16, 20, and 22 ms within each frame. This mode extends the dynamic range by 7.5 photographic aperture steps compared to an imager that provides only a single exposure period per frame.

This same imager can also be operated in a 120 Hz mode by using 2 sub-exposure periods per frame. Fig. 37 illustrates two different 120 Hz timing possibilities: a 4 ms, 4 ms sequence, in which each frame has equal sub-exposure times and one conditional read/reset; and a 0.75 ms, 4 ms sequence with one conditional read/reset per frame (providing roughly a 3-photographic-aperture-step improvement, as shown for frames 2 and 4). Alternatively, and as shown in fig. 37, the two sequences may be alternated between odd and even frames if desired, with frame blending used to smooth the final output.

For some video capture scenarios, it may be desirable for each row to have a set of very short sub-exposures that are tightly grouped in time and do not extend across the 60 Hz frame. Fig. 38 contains a timing diagram illustrating an interleaved capture mode suited to this case. In the interleaved capture mode, an entire sub-exposure processing period is not completed before the next one begins; instead, the row rate of each processing period is slowed such that the maximum row rate of the system is never exceeded, with four processing periods advancing in parallel.

In fig. 38, all sub-exposure periods of each frame except the last are conditional read/reset periods, with spacings of 0.125, 0.25, 1, and 4 ms, respectively. The longest possible integration time is 5.375 ms and the shortest is 0.125 ms, extending the dynamic range by about 5-1/2 photographic aperture steps compared to a single exposure per frame. The interleaved schedule is further illustrated in the expanded partial view of frame 1 at the bottom of fig. 38: row R256, which has completed all of its conditional reads, receives its final unconditional readout for SE4; next, row R512 receives its SE3 conditional read/reset operation; next, row R576 receives its SE2 conditional read/reset operation; row R592 then receives its SE1 conditional read/reset operation; and finally, row R600 receives the unconditional reset operation that starts frame 1 for that row. The schedule then advances by one row and the same sequence is performed for rows R257, R513, R577, R593, and R601. Note that the upper rows of frame 2 begin their capture sequence before the lower rows of frame 1 complete theirs.
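The row spacing of the interleaved schedule can be checked against the sub-exposure spacings; the per-row service period of about 15.6 microseconds used below is inferred from the rows named in the expanded view rather than stated explicitly in the text.

```python
# Consistency check: row offsets between concurrent processing periods should equal the
# sub-exposure spacing divided by the per-row service period.
spacings_ms = [0.125, 0.25, 1.0, 4.0]                          # SE1..SE4 spacings
row_offsets = [600 - 592, 592 - 576, 576 - 512, 512 - 256]     # from rows R600/R592/R576/R512/R256

row_period_ms = spacings_ms[0] / row_offsets[0]                # inferred: 0.015625 ms per row
print([round(s / row_period_ms) for s in spacings_ms])         # -> [8, 16, 64, 256], matching
```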

The interleaved pattern can be employed with almost any combination of sub-exposure periods, from very short to very long; a fully interleaved capture mode may therefore also be employed for darker scenes. Fig. 39 contains a timing diagram for a mode in which all sub-exposure periods are 4 ms long, similar to the mode of fig. 32, except that the 4 sub-exposure processing periods advance in parallel.

Variable data rate sensor operation

In some embodiments, excess sensor data rate exists in some modes and may be exploited in one or more ways. Fig. 40 shows a timing diagram for one such example, where the maximum row rate is fast enough to support approximately 4.5 array scans per frame. The sensor selectively picks some rows to receive a 5th sub-exposure period SE5 in each frame; in this case SE5 has a total exposure time of 0.25 ms (1/4 of the exposure time of sub-exposure period SE1).

FIG. 41 illustrates a block diagram of a sensor 1210 capable of operating with a variable exposure schedule, and thus of implementing the variable timing of fig. 40. Sensor 1210 includes a conditional-reset pixel array 1212 and a readout chain comprising a set of column logic/ADCs 1214, a row buffer 1216, a compressor 1218, a PHY buffer 1220, and a PHY 1222 that drives sensor data off-chip. A row sequencer 1224 is driven by a variable scheduler 1226 working in conjunction with a set of skip flags 1228.

In operation, row sequencer 1224 provides the reset, transfer-gate, and row-select signals for each row in array 1212, and column logic/ADCs 1214 provide the column transfer-gate select signals for each column, perform the threshold comparisons, and perform the ADC conversion for each column, as explained previously. In one embodiment, row buffer 1216 holds the data for at least the most recent row read operation. In one embodiment, that data includes an n-bit word for storing the ADC value, a flag bit indicating whether an ADC operation was performed, and a flag bit indicating whether the ADC saturated.

The compressor 1218, coupled to the variable scheduler 1226, formats the output of the row buffer for transmission. Fig. 42 illustrates an exemplary packet format that provides flexibility for many different modes of operation. The row header is the primary packet header type used (other informational headers/packet types may be inserted in the data stream if desired, but are not described here). A row header in the data stream indicates that a new row operation has been performed and contains fields indicating the type of operation, the type of compression applied to the row, the frame and row numbers, a timestamp indicating when the operation was performed, and a payload length (the size of the appended row data, if any). Whenever a row is reset, a bare row header containing an unconditional reset (UCRST) flag and no payload is transmitted so that the reproduction system knows from what point to calculate the integration period.
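A hypothetical encoding of such a row header is sketched below; the field names follow the description above, but the field widths, ordering, and flag values are assumptions for illustration only.

```python
# Hypothetical row-header record for the packet format of Fig. 42.
from dataclasses import dataclass
import struct

@dataclass
class RowHeader:
    op_type: int        # e.g., unconditional reset (UCRST), conditional read/reset, unconditional read
    compression: int    # compression mode applied to the appended row data
    frame: int
    row: int
    timestamp: int      # time at which the row operation was performed
    payload_len: int    # size of the appended row data; 0 for a bare UCRST header

    def pack(self) -> bytes:
        return struct.pack("<BBHHIH", self.op_type, self.compression,
                           self.frame, self.row, self.timestamp, self.payload_len)

# A reset emits just the header, marking the start of that row's integration period:
ucrst = RowHeader(op_type=0, compression=0, frame=1, row=600, timestamp=123456, payload_len=0)
stream = ucrst.pack()
```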

In some embodiments, the compression type may vary per row (or per row per color). For example, depending on the distribution of under-threshold, AD-converted, and saturated pixels in a given row, one compression may perform better than another. Some exemplary compression code patterns are shown in the table below, where the ADC generates 8-bit data D[0..7] for converted pixels.

Compression mode | Below threshold | AD-converted | Saturated
Uncompressed | 00000000 | D[0..7] | 11111111
Mode 1 | 0 | 1.0.D[0..7] | 1.1
Mode 2 | 0.0 | 1.D[0..7] | 0.1
Mode 3 | 0.0 | 0.1.D[0..7] | 1
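Reading the dotted entries as prefix codes, Mode 1 maps a below-threshold pixel to '0', an AD-converted pixel to '10' followed by D[0..7], and a saturated pixel to '11'; the sketch below encodes a row under that reading, which is an assumption about the table's notation.

```python
# Sketch of the assumed "Mode 1" per-pixel prefix code from the table above.
BELOW, CONVERTED, SATURATED = range(3)

def encode_mode1(pixels):
    """pixels: list of (state, value); value is the 8-bit ADC code for CONVERTED pixels."""
    bits = []
    for state, value in pixels:
        if state == BELOW:
            bits.append("0")
        elif state == SATURATED:
            bits.append("11")
        else:
            bits.append("10" + format(value, "08b"))   # prefix '10' plus D[0..7]
    return "".join(bits)

row = [(BELOW, None), (CONVERTED, 0x5A), (SATURATED, None), (BELOW, None)]
print(encode_mode1(row))   # pieces: '0', '10'+'01011010', '11', '0'
```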

Optionally, other, more complex compression types may be included, for example using more elaborate Huffman codes, run-length coding of similar values, or coding of delta values from column to column and/or row to row for spatially slowly-varying scenes.

With full row transfer information, the receiving system can straightforwardly reproduce each pixel of each row for each frame from its reset and readout time history. This allows the variable scheduler 1226 to freely set a row schedule that best captures the current scene. In one embodiment, as shown in figs. 40 and 41, the variable scheduler 1226 receives information from the compressor indicating whether sub-exposure SE1 produced any saturated samples for a given row. If the row produced no saturated samples, no shorter exposure need be scheduled for that row, and the skip flag 1228 corresponding to that row is set. If one or more pixels saturated, however, the skip flag remains unset so that a subsequent shorter exposure can be scheduled for the row.

FIG. 40 shows that, as SE4 ends, unconditional resets begin to be scheduled for some rows in preparation for the 5th sub-exposure. To do this, the variable scheduler 1226 triggers scheduling only for the rows that require a shorter exposure. As long as the number of rows requiring such an exposure fits within the time available before the next frame starts, all such rows may be scheduled and the remaining rows skipped. If too many rows require a shorter exposure, some rows may go unserviced before the resets for the next frame begin. Although SE5 is shown in fig. 40 ending with an unconditional readout, SE5 may alternatively end with a conditional readout, so that only high-intensity values receive ADC conversion and output.

In one embodiment, the skip flags may be stored as a linked list, where each entry identifies the next row to be serviced. With such a list, an individual row can be removed from the list when a new exposure produces no saturated pixels for it, and left on the list when a new exposure still produces saturated pixels. If time remains after the list has been traversed once, additional, even shorter exposures may be scheduled for the rows remaining on the list. The list entries may correspond to complete rows, or to partial rows divided by color and/or spatially (left half, right half, etc.). In another alternative embodiment, the list of saturated rows may be used to give some rows a first short exposure in the next frame, rather than an optimally short exposure in the current frame.
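A small sketch of this scheduling policy is given below; a plain Python list stands in for the hardware linked table of skip flags, and the capacity check is an illustrative assumption.

```python
# Sketch of maintaining the saturated-row service list and granting extra short sub-exposures.
def schedule_extra_subexposures(saturated_rows, row_budget):
    """Grant an extra short sub-exposure to as many listed rows as fit before the next frame."""
    return saturated_rows[:row_budget], saturated_rows[row_budget:]   # (serviced, deferred)

def update_list(rows, still_saturated):
    """Keep only rows whose newest exposure still produced saturated pixels."""
    return [r for r in rows if still_saturated.get(r, True)]

rows = [12, 300, 301, 777]                     # rows flagged as containing saturated pixels
serviced, deferred = schedule_extra_subexposures(rows, row_budget=3)
rows = update_list(serviced, {12: False, 300: True, 301: True}) + deferred
print(serviced, deferred, rows)                # row 12 drops off; row 777 waits for the next frame
```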

The variable-rate scheduler may also operate in other, more elaborate modes. Fig. 43 depicts a schedule diagram for a sensor whose ADC bandwidth exceeds the channel rate, such that when the data compresses well, scanning can be sped up and more than the exemplary 4 scans can be completed in one frame. In one mode, the variable scheduler maintains rate statistics over the current frame (the first frame may be a constant-scan-rate learning frame, as shown for frame 1 in fig. 43). The rate statistics may then be used to adjust the schedule for the next frame wherever such adjustment allows one or more additional scans per frame.

In fig. 43, during learning frame 1 the variable scheduler observes that the SE1 scan is highly compressible and that the SE2 scan also compresses well, particularly toward the bottom of the frame. The variable scheduler therefore raises the row rate for SE1 and SE2 in frame 2, moving those scans closer together and allowing another scan to be inserted into the frame. The last two scans are maintained at a constant rate to achieve a predictable, regular end-of-frame time.

One side effect of the scan variability of fig. 43 is that different rows have different sub-exposure period lengths. With time-stamped row headers and weighted reproduction, however, each row can be reproduced accurately despite this variability.

Operation with a strobe

Another potential mode for a conditional-reset imager is a strobe mode synchronized with an integrated or external flash. In strobe mode, one of the sub-exposures (SE6 in this case) is designated as the synchronized sub-exposure. The synchronized sub-exposure includes a strobe window during which all rows are integrating and no row is read out or reset. The imager and the strobe are synchronized so that the strobe fires during the strobe window.

In the example of fig. 44, the strobe window is bracketed by two unconditional readout processing periods, so that the strobe light dominates its sub-exposure and the ambient-light integration time is equal for all pixels. This allows the imager to potentially separate the flash component of the illumination from the ambient component for advanced rendering. Also, the location of the strobe-window sub-exposure (first, last, or elsewhere in the frame) may be a user-selectable parameter for achieving different effects. In particular, when the strobe-window sub-exposure is the first sub-exposure, it may begin with an unconditional reset and end with a conditional read/reset processing period that starts a short sub-exposure period.

Spatial discretization of temporal sampling strategies

Still-capture modes generally follow the corresponding video modes for similar light levels, although some still modes may use much longer exposures, and the saved frames may have a considerably larger number of rows and columns. For rate-limited scanning, another exposure mode processes tiled 'A' and 'B' blocks with different exposure timing to obtain a greater variety of sub-exposure times. During demosaicing, the A and B blocks are used together to create a mixed-dynamic-range image.

Fig. 45 includes an exposure timing chart for the spatial hybrid exposure mode. The option 1 grouping alternates two rows of A-block pixels with two rows of B-block pixels. The option 2 grouping creates a checkerboard arrangement of A and B tiles, where each tile corresponds to a 2x2 pixel block. Other spatial groupings are also possible. In general, the size and placement of the spatial grouping should be matched to the particular CFA kernel used in the system.

During a frame exposure, some operations (an unconditional reset in this example) may be selectively applied to one group type (the B pixels in this case) but not to the other. This selective application results in different sub-exposure sequences for group A and group B, even though both groups follow the same readout schedule.

Exposure optimization

Exposure optimization attempts to optimize one or more parameters such as signal-to-noise ratio, bandwidth, pixels exceeding the threshold per scan, unsaturated pixels per scan, and so forth. It will be appreciated that each sub-exposure period will have a different histogram. In the exemplary histogram sequence of fig. 46, SE1 (a short period) has 90% of pixels below the threshold, 8% in range, and 2% saturated; SE2 (a longer period) has 50% of pixels below the threshold, 20% saturated, and 30% in range; SE3 (another longer period) has 15% of pixels below the threshold, 20% saturated, and 65% in range; SE4 (a long, unconditional readout period) has no pixels below the threshold, 50% saturated, and 50% in range.

Also shown in FIG. 46 is an exposure optimization algorithm: statistics (such as the illustrated histograms) are aggregated from a set of sub-exposures over one or more frames and compared to expected statistics, such as a data-rate profile and/or an over-threshold percentage target; the exposure settings for the upcoming frame are then adjusted. One adjustment technique is to perturb the conditional-readout threshold of one or more sub-exposures to obtain more or fewer in-range values for that sub-exposure. A second technique is to adjust the length of one or more sub-exposures to move pixels between below-threshold and in-range, and between in-range and saturated.
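A minimal sketch of such an optimization step is shown below; the target fractions and step sizes are placeholders, while the two adjustment levers (threshold perturbation and sub-exposure length) follow the text.

```python
# Sketch of one exposure-optimization step applied to a single sub-exposure's histogram.
def adjust_subexposure(stats, threshold, duration_ms,
                       target_in_range=0.30, thresh_step=0.05, dur_step_ms=0.5):
    """stats: fractions 'below', 'in_range', 'saturated' observed for this sub-exposure."""
    if stats["in_range"] < target_in_range:
        threshold = max(0.0, threshold - thresh_step)   # lower threshold -> more in-range values
    elif stats["in_range"] > target_in_range:
        threshold = min(1.0, threshold + thresh_step)   # raise threshold -> fewer conversions
    if stats["saturated"] > 0.20:
        duration_ms = max(dur_step_ms, duration_ms - dur_step_ms)   # too much saturation: shorten
    elif stats["below"] > 0.80:
        duration_ms += dur_step_ms                                  # mostly under threshold: lengthen
    return threshold, duration_ms

se1_stats = {"below": 0.90, "in_range": 0.08, "saturated": 0.02}    # SE1 histogram from the text
print(adjust_subexposure(se1_stats, threshold=0.40, duration_ms=0.5))
```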

Additional considerations

It should be noted that the various circuits disclosed herein, in terms of their behavior, register transfer, logic components, transistors, layout geometries, and/or other characteristics, may be described and expressed (or represented) as data and/or instructions in various computer-readable media using computer-aided design tools. The formats of files or other objects that may be implemented for such circuit representations include, but are not limited to: formats that support behavioral languages, such as C, Verilog and VHDL; formats that support register level description languages, such as RTL; and formats that support geometry description languages, such as GDSII, GDSIII, GDSIV, CIF, MEBES; and any other suitable format and language. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, various forms of computer storage media (e.g., optical, magnetic or semiconductor storage media, whether separately distributed in this manner or stored "in place" in an operating system).

When such data and/or instruction-based expressions of the above-described circuits are received within a computer system via one or more computer-readable media, they may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with one or more other computer programs, including, without limitation, netlist generation and place-and-route programs, to generate a representation or image of a physical manifestation of such circuits. Such a representation or image may then be used in device fabrication, for example, by enabling the generation of one or more masks used to form various components of the circuits during fabrication.

In the preceding description and in the corresponding drawings, specific terminology and drawing symbols have been set forth to provide a thorough understanding of the disclosed embodiments. In some instances, terms and symbols may imply specific details that are not required to practice the embodiments. For example, any of the specific number of bits, signal path widths, signaling or operating frequencies, component circuits or devices, etc. may be different from the specific number of bits, signal path widths, signaling or operating frequencies, component circuits or devices, etc. described above in alternative embodiments. In addition, links or other interconnects between integrated circuit devices or internal circuit elements or blocks may be shown as buses or as single signal lines. Each of the buses may be replaced with a single signal line, and each of the single signal lines may be replaced with a bus. However, the signal and signaling links shown or described may be single ended or differential. A signal driving circuit is said to "output" a signal to a signal receiving circuit when the signal driving circuit asserts (or deasserts, if the context clearly dictates or dictates) a signal on a signal line coupled between the signal driving circuit and the signal receiving circuit. The term "coupled" is used herein to mean directly connected and connected through one or more intervening circuits or structures. Integrated circuit device "programming" may include: for example and without limitation, in response to a host instruction (and thereby controlling an operational aspect of the device and/or establishing a device configuration) or by a one-time programming operation (e.g., blowing fuses within configuration circuitry during device production), control values are loaded into registers or other storage circuitry within the integrated circuit device and/or one or more selected pins or other contact structures (also referred to as shorts) of the device are connected to establish a particular device configuration or operational aspect of the device. The term "light" as used to refer to illumination is not limited to visible light and, when used to describe the sensor function, is intended to refer to the wavelength band or bands to which a particular pixel configuration (including any corresponding filters) is sensitive. The terms "exemplary" and "embodiment" are used to express an example, rather than a preference or requirement. Also, the terms "may" and "can" are used interchangeably to refer to optional (permissible) subject matter. The absence of any term should not be construed to imply that a given feature or technique is required.

Section headings in the foregoing detailed description have been provided for ease of reference only and in no way define, limit, construe, or describe the scope or extent of the corresponding sections or of any of the embodiments presented herein. Also, various modifications and changes may be made to the embodiments set forth herein without departing from the broader spirit and scope of the disclosure. For example, features or aspects of any of the embodiments may be used in combination with any other of the embodiments or in place of their corresponding features or aspects, at least where practical. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
