Image sensor and method for combining the same

Document No.: 452548 | Publication date: 2021-12-28

Note: This technology, "Image sensor and method for combining the same" (图像传感器及其合并方法), was created by 张璨煐, 姜熙, and 崔祐硕 on 2021-06-24. Its main content includes the following: A merging method of an image sensor, comprising: reading out a plurality of pixel signals at a time from at least two rows of respective ones of a plurality of regions of a pixel array, the respective ones of the plurality of regions including a plurality of pixels arranged in a 2n × 2n matrix, where n is an integer greater than or equal to 2; generating first image data by performing analog-to-digital conversion on the plurality of pixel signals; generating, based on the first image data, a first sum value of each of a plurality of merged regions corresponding to the plurality of regions of the pixel array based on two pixel values corresponding to the same color in each of the plurality of merged regions; and generating a second sum value of each of two merged regions based on two first sum values corresponding to the same color in the two merged regions, the two merged regions being adjacent to each other in a column direction among the plurality of merged regions.

1. A merging method of an image sensor, the merging method comprising:

reading out a plurality of pixel signals at a time from at least two rows of respective ones of a plurality of regions of a pixel array, the respective ones of the plurality of regions including a plurality of pixels arranged in a 2n × 2n matrix, where n is an integer greater than or equal to 2;

generating first image data by performing analog-to-digital conversion on the plurality of pixel signals;

generating, based on the first image data, a first sum value of each of a plurality of merged regions corresponding to the plurality of regions of the pixel array based on two pixel values corresponding to the same color in each of the plurality of merged regions; and

generating a second sum value of each of two merged regions, which are adjacent to each other in a column direction among the plurality of merged regions, based on two first sum values corresponding to the same color in the two merged regions.

2. The merging method according to claim 1, wherein reading out the plurality of pixel signals comprises: simultaneously reading out pixel signals from at least two first pixels having a first color, the at least two first pixels being in a first column of the pixel array; and reading out a pixel signal from one of at least two second pixels having a second color, the at least two second pixels being in a second column of the pixel array.

3. The merging method according to claim 2, wherein, of the at least two second pixels, the one second pixel is in an outer region of each of the plurality of regions.

4. The merging method of claim 2, wherein generating the first image data comprises:

receiving, by an analog-to-digital conversion circuit, a sum signal through a first column line, the sum signal corresponding to a sum of pixel signals of the at least two first pixels; and

generating, by the analog-to-digital conversion circuit, a pixel value for a first sampling position corresponding to a midpoint between the at least two first pixels by performing analog-to-digital conversion on the sum signal.

5. The merging method of claim 1, wherein generating the first sum value comprises: applying a weight to each of the two pixel values and summing the weighted values.

6. The merging method of claim 5, wherein the weight applied to each of the two pixel values is set based on a second sampling position at which the first sum value is located.

7. The merging method of claim 1, wherein generating the second sum value comprises:

applying a weight to each of the two first sum values and summing the weighted values; and

generating the second sum value as a pixel value for a third sampling position.

8. The merging method of claim 1, further comprising:

generating a third sum value based on at least two first pixel values of each of the plurality of merged regions of the first image data and first pixel values of an adjacent merged region adjacent to each of the plurality of merged regions; and

combining the second sum value with the third sum value for at least two first pixels.

9. The merging method of claim 8, wherein the combining is performed based on a difference between first pixel values of respective ones of the plurality of merged regions and first pixel values of the neighboring merged region,

wherein the third sum value is included in the output image data when the difference is less than a first threshold,

the second sum value is included in the output image data when the difference exceeds a second threshold, and

when the difference is greater than or equal to the first threshold and less than or equal to the second threshold:

a weight is applied to each of the second sum value and the third sum value based on the difference, and

a sum of the weighted values is included in the output image data.

10. An image sensor, comprising:

a pixel array divided into a plurality of regions having a quadrangular shape, each of the plurality of regions including pixels arranged in a 2n × 2n matrix, where n is an integer greater than or equal to 2;

an analog-to-digital conversion circuit configured to:

reading out a plurality of pixel signals, and

converting the plurality of pixel signals into first image data, the first image data including a plurality of pixel values, and the plurality of pixel signals being received from the pixel array through a plurality of column lines;

a row driver configured to provide control signals through a plurality of row lines connected to the pixel array, the control signals being configured to control pixel signals of at least two rows of the pixel array to be simultaneously output;

a line buffer configured to store the first image data in a specific line unit; and

a processor configured to perform merging on the first image data stored in the line buffer.

11. The image sensor of claim 10, wherein the row driver is further configured to: controlling at least two first pixels to simultaneously output a pixel signal, and controlling one of at least two second pixels to output a pixel signal, the at least two first pixels being in a first column of the at least two rows of the pixel array, and the at least two second pixels being in a second column of the at least two rows of the pixel array.

12. The image sensor of claim 11, wherein the analog-to-digital conversion circuit is further configured to:

receiving a sum signal through a first column line, and

generating a pixel value for a first sampling position by performing an analog-to-digital conversion on the sum signal, the sum signal corresponding to a sum of pixel signals of the at least two first pixels, and the first sampling position corresponding to a midpoint between the at least two first pixels.

13. The image sensor of claim 10, wherein the processor is further configured to: generating a first sum value of each of a plurality of merged regions, which are included in the first image data, based on two pixel values corresponding to the same color in each of the merged regions.

14. The image sensor of claim 13, wherein the processor is further configured to:

applying a weight to each of the two pixel values; and

summing the weighted values, the weights being set based on a second sampling position at which the first sum value is located.

15. The image sensor of claim 13, wherein the processor is further configured to: generating a second sum value of each of two merged regions, which are adjacent to each other in a column direction among the plurality of merged regions, based on two first sum values corresponding to the same color in the two merged regions.

16. The image sensor of claim 15, wherein the processor is further configured to: calculating a third sum value based on at least two first pixel values of each of the plurality of merged regions of the first image data and first pixel values of an adjacent merged region adjacent to each of the plurality of merged regions; and combining the second sum value with the third sum value for at least two first pixels.

17. The image sensor of claim 10, wherein each of the plurality of regions comprises a Bayer pattern in which one red pixel, two green pixels, and one blue pixel are repeatedly arranged.

18. An image processing system comprising:

an image sensor configured to sense a light signal and generate image data; and

a first processor configured to receive and process the image data from the image sensor,

wherein the image sensor comprises:

a pixel array divided into a plurality of regions having a quadrangular shape, each of the plurality of regions including pixels arranged in a 4x4 matrix;

an analog-to-digital conversion circuit configured to:

reading out a plurality of pixel signals, and

converting the plurality of pixel signals into first image data, the first image data comprising a plurality of pixel values, the plurality of pixel signals being received from the pixel array through a plurality of column lines;

a row driver configured to provide control signals through a plurality of row lines connected to the pixel array, the control signals configured to control pixel signals of at least two rows of the pixel array to be simultaneously output;

a line buffer configured to store first image data in a specific line unit; and

a second processor configured to perform merging on the first image data stored in the line buffer.

19. The image processing system of claim 18, wherein the second processor is further configured to:

generating a first sum value of each of a plurality of merged regions based on two pixel values corresponding to the same color in each of the merged regions, the merged regions being included in the first image data; and

generating a second sum value of each of two merged regions, which are adjacent to each other in a column direction among the plurality of merged regions, based on two first sum values corresponding to the same color in the two merged regions.

20. The image processing system of claim 19, wherein the second processor is further configured to:

calculating a third sum value based on at least two first pixel values of each of the plurality of merged regions of the first image data and a first pixel value of another merged region adjacent to each of the plurality of merged regions; and

combining the second sum value with the third sum value for at least two first pixels.

Technical Field

Example embodiments of the present disclosure relate to an image sensor, and more particularly, to a merging method of an image sensor and an image sensor for performing the method.

Background

As the resolution of image sensors increases, the size of the image data they generate also increases. The larger image data size makes it difficult to maintain a high frame rate in a video mode and increases power consumption. For example, the achievable frame rate depends on the circuit interface bandwidth and on the computational bandwidth associated with avoiding artifacts, such as zigzag artifacts or false colors, when the sampling rate is changed. When the computation required to avoid artifacts is high and/or the circuit interface bandwidth is low, the achievable frame rate is limited. Merging (binning) techniques are used to increase the frame rate while maintaining image quality.

Disclosure of Invention

Example embodiments provide a merging method of an image sensor and an image sensor for performing the method, by which the frame rate is improved, the data size is reduced, and image quality is maintained.

According to an aspect of an example embodiment, there is provided a merging method of image sensors. The merging method comprises the following steps: reading out a plurality of pixel signals at a time from at least two rows of respective ones of a plurality of regions of a pixel array, the respective ones of the plurality of regions including a plurality of pixels arranged in a 2n × 2n matrix, where n is an integer greater than or equal to 2; generating first image data by performing analog-to-digital conversion on the plurality of pixel signals; generating, based on the first image data, a first sum value of each of a plurality of merged regions corresponding to the plurality of regions of the pixel array based on two pixel values corresponding to the same color in each of the plurality of merged regions; and generating a second sum value of each of two merged regions based on two first sum values corresponding to the same color in the two merged regions, the two merged regions being adjacent to each other in a column direction in the plurality of merged regions.

According to an aspect of an exemplary embodiment, there is provided an image sensor including: a pixel array divided into a plurality of regions having a quadrangular shape, each of the plurality of regions including pixels arranged in a 2n × 2n matrix, where n is an integer greater than or equal to 2; an analog-to-digital conversion circuit configured to read out a plurality of pixel signals and convert the plurality of pixel signals into first image data, the first image data including a plurality of pixel values, and the plurality of pixel signals being received from the pixel array through a plurality of column lines; a row driver configured to provide control signals through a plurality of row lines connected to the pixel array, the control signals configured to control pixel signals of at least two rows of the pixel array to be simultaneously output; a line buffer configured to store the first image data in a specific line unit; and a processor configured to perform merging on the first image data stored in the line buffer.

According to an aspect of an exemplary embodiment, there is provided an image processing system including: an image sensor configured to sense a light signal and generate image data; and a first processor configured to receive and process the image data from the image sensor, wherein the image sensor comprises: a pixel array divided into a plurality of regions having a quadrangular shape, each of the plurality of regions including pixels arranged in a 4x4 matrix; an analog-to-digital conversion circuit configured to read out a plurality of pixel signals and convert the plurality of pixel signals into first image data, the first image data including a plurality of pixel values, and the plurality of pixel signals being received from the pixel array through a plurality of column lines; a row driver configured to provide control signals through a plurality of row lines connected to the pixel array, the control signals configured to control pixel signals of at least two rows of the pixel array to be simultaneously output; a line buffer configured to store the first image data in a specific line unit; and a second processor configured to perform merging on the first image data stored in the line buffer.

Drawings

Embodiments will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic block diagram of an image sensor according to an example embodiment;

FIG. 2 shows an example of a pattern of the pixel array in FIG. 1;

FIG. 3A is a flow chart illustrating vertical analog summing and interpolation of an image sensor according to an example embodiment;

FIG. 3B is a flow chart of a merging method of image sensors according to an example embodiment;

fig. 4A and 4B are diagrams for describing a readout method according to an example embodiment;

fig. 5A, 5B, and 5C are schematic diagrams of first image data generated by a readout method according to an example embodiment;

fig. 6 illustrates a first merge performed on each of a plurality of merge regions of first image data based on pixel values in each merge region in a merge method according to an example embodiment;

fig. 7 illustrates interpolation performed based on pixel values of two adjacent merge regions in a merge method according to an example embodiment;

FIG. 8 is a flow chart of a merging method of image sensors according to an example embodiment;

fig. 9 is a diagram for describing a second merging applied to green pixels in a merging method according to an example embodiment;

fig. 10 shows an example of a pixel according to an example embodiment;

FIG. 11A shows a pixel array having a quad pattern; fig. 11B illustrates an example of applying a pixel array having a quad pattern to an image sensor according to an example embodiment;

fig. 12 shows an example of a pixel according to an example embodiment;

fig. 13 is a block diagram of an electronic device comprising a multi-camera module using an image sensor according to an example embodiment; and

fig. 14 is a detailed block diagram of the camera module in fig. 13.

Detailed Description

Hereinafter, embodiments will be described in detail with reference to the accompanying drawings.

Fig. 1 is a schematic block diagram of an image sensor according to an example embodiment.

The image sensor 100 may be mounted on an electronic device having an image or optical sensing function. For example, the image sensor 100 may be installed on an electronic device such as a camera, a smart phone, a wearable device, an internet of things (IoT) device, a home appliance, a tablet Personal Computer (PC), a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), a navigation device, a drone, or an Advanced Driver Assistance System (ADAS). The image sensor 100 may also be mounted on an electronic device used as a component of a vehicle, furniture, a manufacturing facility, a door, or various measuring devices.

Referring to fig. 1, the image sensor 100 may include a pixel array 110, a row driver 120, an analog-to-digital converter (ADC) circuit 130, a ramp signal generator 140, a timing controller 150, a line buffer 160, and a processor 170.

The pixel array 110 may include a plurality of pixels PX in a matrix and a plurality of row lines RL and column lines CL connected to the pixels PX.

Each pixel PX may include at least one photoelectric conversion element (or photosensitive device). The photoelectric conversion element may sense light and convert the light into photo-charges. For example, the photoelectric conversion element may include a photosensitive device including an organic or inorganic material, such as an inorganic photodiode, an organic photodiode, a perovskite photodiode, a phototransistor, a photogate, or a pinned photodiode. In an embodiment, each pixel PX may include a plurality of photoelectric conversion elements. The plurality of photosensitive devices may be arranged in the same layer or stacked on each other in a vertical direction.

The microlens for light collection may be disposed over each pixel PX, or over a pixel group including adjacent pixels PX. Each pixel PX may sense light in a specific spectrum from light received through the microlens. For example, the pixel array 110 may include red pixels that convert light in a red spectrum into electrical signals, green pixels that convert light in a green spectrum into electrical signals, and blue pixels that convert light in a blue spectrum into electrical signals. A color filter that transmits light in a specific spectrum may be disposed over each pixel PX. However, embodiments are not limited thereto, and the pixel array 110 may include pixels that convert light in other spectra than red, green, and blue spectra.

In one embodiment, the pixel PX may have a multi-layer structure. Each of the pixels PX having a multi-layered structure may include a stack of photosensitive devices, each of which converts light in a different spectrum into an electrical signal, so that electrical signals corresponding to different colors may be generated from the photosensitive devices. In other words, electrical signals corresponding to different colors may be output from a single pixel PX.

The pixel array 110 may have a Bayer pattern in which first, second, and third pixels that sense signals of different colors are repeatedly arranged in the column direction and the row direction.

Fig. 2 illustrates an example of a pattern of the pixel array 110 in fig. 1.

Referring to fig. 2, the pixel array 110 may include a plurality of pixels PX arranged in a row direction (e.g., an X-axis direction) and a column direction (e.g., a Y-axis direction), and the plurality of pixels PX may include red pixels PX_R, green pixels PX_Gr and PX_Gb, and blue pixels PX_B. In the pixel array 110, a row including a red pixel PX_R and a green pixel (e.g., a first green pixel PX_Gr) is alternated with a row including another green pixel (e.g., a second green pixel PX_Gb) and a blue pixel PX_B; and the green pixels (e.g., the first green pixel PX_Gr and the second green pixel PX_Gb) may be on an oblique line. The green pixels (e.g., the first green pixel PX_Gr or the second green pixel PX_Gb) are closely related to luminance, and thus are arranged in each row; and the red pixels PX_R and the blue pixels PX_B are alternately arranged on different rows.

This pattern may be referred to as an RGB Bayer pattern. Hereinafter, it is assumed that the pixel array 110 has an RGB Bayer pattern. However, the embodiments are not limited thereto. Various patterns in which pixels of at least three colors are repeatedly arranged, and in which second pixels (e.g., pixels related to luminance) are arranged in each row and form an oblique line with a second pixel of an adjacent row, may be applied to the pixel array 110. For example, an RYYB pattern including one red pixel, two yellow pixels, and one blue pixel may be applied to the pixel array 110.
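As a simple illustration of the RGB Bayer layout described above, the following sketch (not part of the original disclosure; 0-based row and column indices and a layout starting with a first green pixel are assumptions) derives the color of a pixel from its position:

```python
# Hypothetical sketch: color of pixel (row, col) in an RGB Bayer pattern,
# assuming 0-based indices and a layout starting with Gr, R / B, Gb.
def bayer_color(row: int, col: int) -> str:
    if row % 2 == 0:                       # rows of first-green and red pixels
        return "Gr" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "Gb"   # rows of blue and second-green pixels

# Green pixels appear in every row and lie on diagonals of adjacent rows.
assert bayer_color(0, 0) == "Gr" and bayer_color(1, 1) == "Gb"
```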

The pixel array 110 may be divided into a plurality of areas AR. Each area AR may include pixels PX arranged in a 2n × 2n matrix, where "n" is an integer of at least 2. For example, each area AR may include pixels PX arranged in a 4 × 4 matrix. According to an example embodiment, each area AR is a basic unit to which the readout method is applied when the image sensor 100 operates in a first mode in which binning is performed. The areas AR may respectively correspond to a plurality of merging areas of the image data generated based on the readout signals. According to the readout method of the example embodiment, a plurality of pixel signals of at least two rows in each area AR may be read out at once. For example, the pixel signals of the pixels of at least two rows may be read out in a single horizontal period. The readout method of an example embodiment will be described with reference to figs. 4A to 5B.

When the image sensor 100 operates in the second mode (e.g., the normal mode in which binning is not performed), the pixel array 110 may sequentially read out pixel signals row by row.

Referring back to fig. 1, each row line RL may extend in the row direction and may be connected to the pixels PX of one row. For example, each row line RL may transmit a control signal from the row driver 120 to a plurality of elements (e.g., transistors) included in each pixel PX.

Each column line CL may extend in a column direction and may be connected to pixels PX of one column. Each column line CL may transmit a pixel signal (e.g., a reset signal and a sensing signal) from the pixels PX of each row of the pixel array 110 to the ADC circuit 130. When the image sensor 100 operates in the first mode, as described above, some of the column lines CL may transmit pixel signals of at least two rows at a time.

The timing controller 150 may control the timing of other elements of the image sensor 100 (e.g., the row driver 120, the ADC circuit 130, the ramp signal generator 140, the row buffer 160, and the processor 170). The timing controller 150 may provide timing signals indicating operation timings to each of the row driver 120, the ADC circuit 130, the ramp signal generator 140, the row buffer 160, and the processor 170.

Under the control of the timing controller 150, the row driver 120 may generate a control signal for driving the pixel array 110 and supply the control signal to the pixels PX of the pixel array 110 through the row lines RL. The row driver 120 may control a plurality of pixels of the pixel array 110 to sense incident light simultaneously or row by row. The row driver 120 may select one row or at least two rows of pixels PX, and may control the selected pixels PX to output pixel signals through the column lines CL.

The RAMP signal generator 140 may generate a RAMP signal RAMP that increases or decreases at a certain slope and provide the RAMP signal RAMP to the ADC circuit 130.

The ADC circuit 130 may receive pixel signals read out from the pixels PX of a row selected from the plurality of pixels PX of the pixel array 110 by the row driver 120 and convert the pixel signals into pixel values corresponding to digital data.

The ADC circuit 130 may generate and output first image data IDT1 (e.g., raw image data) in row units by converting pixel signals received from the pixel array 110 through the column lines CL into digital data based on the RAMP signal RAMP from the RAMP signal generator 140.

The ADC circuit 130 may include a plurality of ADCs corresponding to the column lines CL, respectively. Each ADC may compare a pixel signal received through a corresponding one of the column lines CL with the RAMP signal RAMP, and may generate a pixel value based on the comparison result. For example, the ADC may remove a reset signal from the sensing signal using Correlated Double Sampling (CDS) and generate a pixel value indicating the amount of light sensed by the pixel PX.
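As a rough illustration of the ramp comparison and CDS described above, the following behavioral sketch (an assumption for clarity only; the ramp polarity, step size, and function names are not from the original) counts ramp steps and subtracts the reset component:

```python
# Behavioral sketch of a single-slope (ramp) ADC with correlated double
# sampling (CDS). The ramp direction and step size are illustrative only.
def ramp_adc(level: float, step: float = 0.001) -> int:
    count, ramp = 0, 0.0
    while ramp < level:          # count ramp steps until the ramp crosses the input
        ramp += step
        count += 1
    return count

def cds_pixel_value(reset_signal: float, sense_signal: float) -> int:
    # CDS removes the reset (offset) component from the sensing signal,
    # leaving a value that indicates the amount of light sensed by the pixel.
    return ramp_adc(sense_signal) - ramp_adc(reset_signal)

print(cds_pixel_value(reset_signal=0.10, sense_signal=0.35))  # about 250 counts
```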

The line buffer 160 may include a plurality of line memories and may store a plurality of pixel values output from the ADC circuit 130 in a specific line unit. In other words, the line buffer 160 may store the first image data IDT1 output from the ADC circuit 130 in a specific line unit. For example, the line buffer 160 may include three line memories corresponding to three lines of the pixel array 110, respectively, and store a plurality of pixel values corresponding to three lines of the first image data IDT1 output from the ADC circuit 130 in the three line memories.
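A minimal sketch of such a line buffer follows (an assumed structure for illustration; the class name and interface are not from the original):

```python
# Sketch of a line buffer with a fixed number of line memories: pushing a new
# line of pixel values discards the oldest line once the buffer is full.
from collections import deque

class LineBuffer:
    def __init__(self, num_lines: int = 3):
        self.lines = deque(maxlen=num_lines)    # e.g., LM1, LM2, LM3

    def push(self, pixel_values):
        self.lines.appendleft(list(pixel_values))

    def line(self, index: int):
        return self.lines[index]                # 0 = most recently stored line

buf = LineBuffer()
buf.push([10, 20, 30, 40])   # pixel values from one horizontal period
buf.push([11, 21, 31, 41])   # pixel values from the next horizontal period
```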

The processor 170 may process a plurality of pixel values stored in the line buffer 160 corresponding to a plurality of lines of the first image data IDT1. The processor 170 may perform image quality compensation, merging, downsizing, and the like on the first image data IDT1 stored in the line buffer 160 in a specific line unit. Accordingly, output image data OIDT may be generated by the image processing and output in a specific line unit.

In one embodiment, processor 170 may process first image data IDT1 by color. For example, when the first image data IDT1 includes red, green, and blue pixel values, the processor 170 may process the red, green, and blue pixel values in parallel or in series. In one embodiment, processor 170 may include multiple processing circuits for processing different colors in parallel. However, the embodiment is not limited thereto, and a single processing circuit may be repeatedly used.

The processor 170 may generate the output image data OIDT having a smaller data size by performing a merging method according to an example embodiment described below.

The output image data OIDT may be output to an external processor, such as an application processor. The application processor may store, process, or display the output image data OIDT. This will be described later with reference to figs. 13 and 14.

According to example embodiments, when the image sensor 100 operates in the first operation mode, a plurality of pixel signals of at least two rows may be simultaneously read out and may undergo analog summation in a vertical direction (e.g., a column direction). According to the analog vertical summing, at least two rows are simultaneously read out during a single horizontal period, so that the frame rate can be increased by at least two times.

The first image data IDT1 may be generated from the analog vertical summation, and the processor 170 may perform merging to generate the output image data OIDT. Accordingly, the size of the output image data OIDT can be reduced, and the occurrence of zigzag noise and false colors caused by sampling frequency differences can be reduced, thereby providing good image quality and a high frame rate.

Fig. 3A is a flow chart illustrating vertical analog summing and interpolation of an image sensor according to an example embodiment. The merging method of fig. 3A may be performed by the image sensor 100 of fig. 1.

At operation S010, the pixels generate photo-charges. For additional details, see the discussion of FIG. 12 below.

The photo-charges are summed in an analog manner on the column lines at operation S020. For example, the photo-charges may be summed on the column lines CL of fig. 1. This result is labeled IDT0 in fig. 3A. See also fig. 12.

At operation S030, the voltages resulting from the charge summation are converted into digital values by analog-to-digital conversion (ADC). These digital values, generated by, for example, the ADC circuit 130 of fig. 1, form the first image data IDT1 of fig. 1. See, for example, fig. 4B and the following discussion.

At operation S040, the pixel values of the first image data IDT1 are interpolated using weights. See, for example, fig. 6 and the following discussion.

At operation S050, the interpolated digital values are output, for example, as the output image data OIDT of fig. 1.

Fig. 3B is a flowchart of a merging method of image sensors according to an example embodiment. The merging method of fig. 3B may be performed by the image sensor 100 of fig. 1.

Referring to fig. 1 to 3B, in operation S110, the image sensor 100 may simultaneously read out a plurality of pixel signals of at least two rows in the respective areas AR of the pixel array 110. Accordingly, as described above, at least two pixel signals output from at least two pixels in one column and in at least two rows may be subjected to analog summation.

In operation S120, the image sensor 100 may generate the first image data IDT1 by performing analog-to-digital conversion on the readout pixel signals. For example, the ADC circuit 130 can generate the first image data IDT1 by performing analog-to-digital conversion on pixel signals received through the column lines CL. Then, digital merging may be performed.

The first image data IDT1 may be divided into a plurality of merging areas. In operation S130, the image sensor 100 may perform first merging on the respective merged regions of the first image data IDT1 based on the pixel values of the respective merged regions. The image sensor 100 may perform weighted summation on pixel values having the same color in the respective merged regions. In the present disclosure, a weighted sum means that a preset weight is applied to each pixel value and the weighted values are summed (or summed and averaged).
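A minimal sketch of a weighted summation in this sense (illustrative only; the function name and the averaging option are assumptions):

```python
# Weighted sum of same-color pixel values: apply a preset weight to each value
# and sum the weighted values (optionally normalizing to a weighted average).
def weighted_sum(values, weights, average: bool = False) -> float:
    total = sum(v * w for v, w in zip(values, weights))
    return total / sum(weights) if average else total

# Example: two same-color values merged with weights 0.7 and 0.3.
print(weighted_sum([100.0, 120.0], [0.7, 0.3], average=True))  # 106.0
```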

In operation S140, the image sensor 100 may perform interpolation based on pixel values of two merge regions adjacent to each other in the column direction. The image sensor 100 may perform weighted summation on pixel values having the same color in the two combined regions.

Accordingly, in operation S150, the second image data having a smaller size than the first image data IDT1 may be output. For example, when each merge region includes pixel values arranged in a 4 × 4 matrix, the size of the second image data may correspond to 1/4 of the resolution of the pixel array 110.

The merging method according to an example embodiment will be described in detail below with reference to fig. 4A to 9.

Fig. 4A and 4B are diagrams for describing a readout method according to an example embodiment. Fig. 5A to 5C are schematic diagrams of first image data generated by a readout method according to an example embodiment.

Fig. 4A and 4B illustrate readout of the area AR of the pixel array 110. The area AR may include a plurality of pixels PX arranged in a 4 × 4 matrix.

Referring to fig. 4A, the first Row1 and the third Row3 may be simultaneously read out during the first horizontal period. Pixel signals of the first green pixels Gr1 and Gr3 in the first column C1 may be read out through the first column line CL1, and pixel signals of the red pixels R2 and R4 in the fourth column C4 may be output through the fourth column line CL4.

When the pixel signals of two pixels (e.g., the first green pixels Gr1 and Gr3) are read out through the first column line CL1, the pixel signals may be summed. However, when a pixel signal is output from a pixel PX, the pixel PX operates as a source follower. Because of the source-follower operation and the parasitic resistance of the pixels PX, the pixel signal having the higher value among the pixel signals of the first green pixels Gr1 and Gr3 may be supplied to the ADC circuit 130 through the first column line CL1 as the sum signal corresponding to the first green pixels Gr1 and Gr3.

Of the red pixels R1 and R3 in the second column C2, the pixel signal of the red pixel R1 in the outer region of the area AR may be output through the second column line CL2. Of the first green pixels Gr2 and Gr4 in the third column C3, the pixel signal of the first green pixel Gr2 in the outer region of the area AR may be output through the third column line CL3. In other words, in each of the second column C2 and the third column C3, the pixel signal of the pixel in the outer region among the pixels having the same color in the first Row1 and the third Row3 may be read out. The pixel signals of the pixels in the inner region may not be read out.

The ADC circuit 130 may convert the pixel signals to digital values, for example, pixel values PGr13, PR1, PGr2, and PR24. In one embodiment, the pixel values PGr13, PR1, PGr2 and PR24 generated during the first horizontal period may be stored in one line memory (e.g., the first line memory LM1 of the line buffer 160) and may form a portion of the first image data IDT1. However, the pixel values PGr13, PR1, PGr2, and PR24 may not correspond to the same row of the first image data IDT1.

Referring to fig. 4B, the second Row2 and the fourth Row4 may be simultaneously read out during a second horizontal period subsequent to the first horizontal period. Pixel signals of blue pixels B1 and B3 in the first column C1 may be output through the first column line CL1, and pixel signals of second green pixels Gb2 and Gb4 in the fourth column C4 may be output through the fourth column line CL4. At this time, as described above with reference to fig. 4A, two pixel signals output through the same column line may be summed, and the sum signal may be provided to the ADC circuit 130.

Of the second green pixels Gb1 and Gb3 in the second column C2, the pixel signal of the second green pixel Gb3 in the outer region of the area AR may be output through the second column line CL2. Of the blue pixels B2 and B4 in the third column C3, the pixel signal of the blue pixel B4 in the outer region of the area AR may be output through the third column line CL3.

The ADC circuit 130 may convert the pixel signals into digital values, for example, pixel values PB13, PGb3, PB4, and PGb24. In one embodiment, the pixel values PB13, PGb3, PB4, and PGb24 generated during the second horizontal period may be stored in one line memory (e.g., the second line memory LM2 of the line buffer 160) and may form a part of the first image data IDT1. In one embodiment, the pixel values PGr13, PR1, PGr2 and PR24 generated during the first horizontal period may be moved from the first line memory LM1 to the second line memory LM2, and the pixel values PB13, PGb3, PB4 and PGb24 generated during the second horizontal period may be stored in the first line memory LM1. The line buffer 160 may also include a third line memory LM3, which is used in rotation with LM2 and LM1 in a manner similar to the use of LM1 and LM2 described above.

Referring to fig. 5A, the pixel values PGr13, PR1, PGr2 and PR24 stored in the first line memory LM1 may form a first line Row1 and a second line Row2 in the merge area BA of the first image data IDT1. Each of the pixel values PGr13 and PR24 is the sum (or average) of pixel signals of two pixels (for example, the first green pixels Gr1 and Gr3 or the red pixels R2 and R4 in fig. 4A) in the first Row1 and the third Row3, respectively, of the region AR of the pixel array 110, and thus may represent a pixel value of a sampling position corresponding to a midpoint between the two pixels. The pixel values PR1 and PGr2 may respectively represent pixel values at positions of corresponding pixels (i.e., the red pixel R1 and the first green pixel Gr2 in fig. 4A).

Referring to fig. 5B, the pixel values PB13, PGb3, PB4, and PGb24 stored in the second line memory LM2 may form a third line Row3 and a fourth line Row4 in the merge area BA of the first image data IDT1. Each of the pixel values PB13 and PGb24 is a sum (or average) of pixel signals of two pixels (for example, blue pixels B1 and B3 or second green pixels Gb2 and Gb4 in fig. 4B) in the second Row2 and the fourth Row4, respectively, of the area AR of the pixel array 110, and thus may represent a pixel value of a sampling position corresponding to a midpoint between the two pixels. The pixel values PGb3 and PB4 may respectively represent pixel values at positions of corresponding pixels (i.e., second green and blue pixels Gb3 and B4 in fig. 4B).

When readout is performed on the area AR of the pixel array 110 according to the exemplary embodiment, the pixel values of the merge area BA of the first image data IDT1 may be determined as shown in fig. 5C.
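The readout pattern described with reference to figs. 4A to 5C can be summarized by the following sketch (hypothetical code, not from the original; 0-based indices are used, and the column-line summation is approximated behaviorally by taking the larger of the two levels, as noted above):

```python
# Sketch of the readout of one 4x4 region over two horizontal periods.
# region is a 4x4 list of analog pixel levels (rows 0..3, columns 0..3).
def readout_region(region):
    def combine(a, b):                  # behavioral stand-in for the column-line sum
        return max(a, b)
    values = {}
    # First horizontal period: rows 0 and 2 (first-green / red rows).
    values["PGr13"] = combine(region[0][0], region[2][0])   # column 0: Gr1 + Gr3
    values["PR1"]   = region[0][1]                           # column 1: outer red only
    values["PGr2"]  = region[0][2]                           # column 2: outer green only
    values["PR24"]  = combine(region[0][3], region[2][3])    # column 3: R2 + R4
    # Second horizontal period: rows 1 and 3 (blue / second-green rows).
    values["PB13"]  = combine(region[1][0], region[3][0])    # column 0: B1 + B3
    values["PGb3"]  = region[3][1]                            # column 1: outer green only
    values["PB4"]   = region[3][2]                            # column 2: outer blue only
    values["PGb24"] = combine(region[1][3], region[3][3])     # column 3: Gb2 + Gb4
    return values
```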

Fig. 6 illustrates a first merge performed on each of a plurality of merge regions of first image data based on pixel values in each merge region in a merge method according to an example embodiment.

Referring to fig. 6, the first combination may be performed by summing pixel values corresponding to the same color in the combination area BA.

For example, the pixel value corresponding to the sampling position S11 may be calculated by summing the pixel values PGr13 and PGr2 corresponding to the first green color. At this time, a preset weight may be applied to each of the pixel values PGr13 and PGr2, and the weighted values may be summed. The weights may be preset in consideration of the sampling positions. In other words, the weight may be set such that the sum of the weighted values is located at the sampling position S11. For example, when the distance between the centers of the pixels represented by the pixel values PGr13 and PGr2 is 10 and the sampling position S11 is located at a distance of 3 from the pixel value PGr13, the ratio of the weights applied to the pixel values PGr13 and PGr2, respectively, may be 7:3. In other words, a higher weight may be applied to the pixel value PGr13.

In the same manner, a weight may be applied to each pixel value corresponding to the same color so that the result is located at the sampling position S12, S13, or S14, and the weighted values may be summed. Accordingly, pixel values PGr_b, PR_b, PB_b, and PGb_b corresponding to the sampling positions S11, S12, S13, and S14, respectively, can be calculated.
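Following the 7:3 example above, the weights can be derived from the sampling position so that the weighted result lands exactly on it (a sketch under that assumption; the positions and pixel values below are hypothetical):

```python
# Weights for two same-color values at positions pos_a and pos_b such that the
# weighted average sits at sample_pos: each weight is proportional to the
# distance from the *other* value to the sampling position.
def position_weights(pos_a: float, pos_b: float, sample_pos: float):
    span = pos_b - pos_a
    return (pos_b - sample_pos) / span, (sample_pos - pos_a) / span

w13, w2 = position_weights(pos_a=0.0, pos_b=10.0, sample_pos=3.0)
print(w13, w2)                           # 0.7 0.3, i.e., a 7:3 ratio
pgr_b = w13 * 100.0 + w2 * 120.0         # hypothetical PGr13 = 100, PGr2 = 120
```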

Fig. 7 illustrates interpolation performed based on pixel values of two adjacent merge regions in a merge method according to an example embodiment.

Referring to fig. 7, the first image data IDT1 may include a plurality of merging areas BAn-1, BAn, and BAn+1. Interpolation may be performed on the merging areas BAn-1, BAn, and BAn+1, and thus pixel values PGr_t, PR_t, PB_t, and PGb_t corresponding to the target sampling positions TS1, TS2, TS3, and TS4, respectively, may be generated.

Generation of a pixel value corresponding to a target sampling position in the merge area BAn will be described as an example. As described with reference to fig. 6, the pixel values PGr_b, PR_b, PB_b, and PGb_b corresponding to the sampling positions S11, S12, S13, and S14 are calculated in the merge region BAn, and each of the pixel values PGr_b, PR_b, PB_b, and PGb_b may be added to the pixel value corresponding to the nearest pixel among the pixel values of the same color in the adjacent merge regions BAn-1 and BAn+1. A weight may be applied to each pixel value. The weights may be set in consideration of the position of each pixel value and the target sampling position. The smaller the distance between the position corresponding to the pixel value and the target sampling position, the greater the weight may be. As described above, the pixel values PGr_t, PR_t, PB_t, and PGb_t corresponding to the target sampling positions TS1, TS2, TS3, and TS4 in the merge region BAn, respectively, may be calculated by interpolation.
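A hedged sketch of this interpolation step follows (the distance-based weighting below is one plausible reading of "the smaller the distance, the greater the weight"; the positions and values are hypothetical):

```python
# Combine a first-merging result of region BAn with the nearest same-color
# value from an adjacent merge region, weighting each value more strongly the
# closer it is to the target sampling position.
def interpolate_to_target(value_own, pos_own, value_adj, pos_adj, target_pos):
    d_own = abs(target_pos - pos_own)
    d_adj = abs(target_pos - pos_adj)
    w_own = d_adj / (d_own + d_adj)      # closer value -> larger weight
    w_adj = d_own / (d_own + d_adj)
    return w_own * value_own + w_adj * value_adj

# e.g., PGr_t at target position 2.0 from the own value (pos 3.0) and a neighbor (pos -5.0)
print(interpolate_to_target(106.0, 3.0, 98.0, -5.0, 2.0))  # 105.0
```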

Through the analog vertical summation and merging of readouts from a plurality of pixels PX arranged in a 4 × 4 matrix, second image data IDT2 including pixel values arranged in a 2 × 2 matrix may be generated. See the right part of fig. 7.

Fig. 8 is a flowchart of a merging method of an image sensor according to an example embodiment. The merging method of fig. 8 may be performed by the image sensor 100 of fig. 1. Operations S210, S220, S230, and S240 are the same as operations S110, S120, S130, and S140 in fig. 3B, and thus redundant description will be omitted.

After generating the first image data IDT1 in operation S220, the image sensor 100 may perform a second combination on the green pixels in operation S250. The image sensor 100 may sum at least two green pixel values of each of the plurality of merging areas of the first image data IDT1 and green pixel values in the neighboring merging areas. For example, adjacent merge regions may be adjacent to respective merge regions in the column direction. A weight may be applied to each pixel value in consideration of the sampling position, and the weighted values may be summed. The second merging will be described in detail with reference to fig. 9.

Fig. 9 is a diagram for describing a second merging applied to green pixels in a merging method according to an example embodiment.

Referring to fig. 9, the merge region BAn and the merge region BAn-1 may be nearest to each other in the column direction.

The pixel values PGr13 and PGr2 corresponding to the first green pixels in the merge region BAn and the pixel value PGb3 in the merge region BAn-1 may be summed, where the pixel value PGb3 corresponds to the second green pixel nearest to the first green pixels in the merge region BAn. At this time, a weight may be applied to each of the pixel values PGr13, PGr2, and PGb3 so that the sum value is located at the first target sampling position TS1, and the weighted values may be summed. Accordingly, the pixel value PGr_t' of the green pixel corresponding to the first target sampling position TS1 may be generated. In a similar manner, a pixel value PGb_t' of the green pixel corresponding to the second target sampling position TS2 may be generated. In this way, the pixel values of the green pixels at the target sampling positions corresponding to the green pixels of the merge region BAn are determined, and thus the third image data IDT3 including the pixel values of the green pixels can be generated.
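A minimal sketch of this second merging (the weights and pixel values are illustrative assumptions; in practice the weights are set so that the result sits at the target sampling position):

```python
# Weighted combination of the two first-green values of region BAn and the
# nearest second-green value of the adjacent region BAn-1.
def second_merge_green(values, weights):
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# Hypothetical PGr13, PGr2 (region BAn) and PGb3 (region BAn-1)
pgr_t_prime = second_merge_green([100.0, 120.0, 110.0], [0.5, 0.3, 0.2])
print(pgr_t_prime)  # 108.0
```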

During the interpolation performed in operation S240, the pixel values to be summed are distant from each other, as shown in fig. 7. However, according to the second merging, the pixel value of the green pixel closest to the merge region BAn and the pixel values of the green pixels in the merge region BAn may be summed. Therefore, the merging can be performed based on pixel values corresponding to adjacent pixels.

Referring back to fig. 8, in operation S270, the second image data and the third image data may be combined with each other based on a pixel value difference between the first green pixel and the second green pixel. As described above, the third image data may include the pixel value of the green pixel. Accordingly, the pixel value of the green pixel of the second image data may be combined with the pixel value of the green pixel of the third image data. At operation S280, the combined image data is output.

At this time, the second image data and the third image data may be combined with each other based on a difference between pixel values corresponding to two nearest neighboring pixels among pixel values of the first green pixel and the second green pixel used during the second combination.

For example, when the difference between the pixel values is smaller than the first reference value, that is, when the difference between the pixel values is small, the pixel value of the green pixel of the third image data may be applied to the output image data. In other words, pixel values of red and blue pixels of the second image data and pixel values of green pixels of the third image data may be included in the output image data. When the difference between the pixel values exceeds the second reference value, that is, when the difference between the pixel values is large, the second image data may be selected as the output image data. In other words, the green pixel value of the third image data may not be reflected in the output image data.

The comparison of the difference with the thresholds provides a non-linear selection between the two results, which is useful for reducing artifacts (e.g., zigzag artifacts and false colors).

Otherwise, when the difference between the pixel values is greater than or equal to the first reference value and less than the second reference value, the difference may be converted into a value less than or equal to 1 based on the first reference value and the second reference value, a weight may be applied to each of the second image data and the third image data based on a value resulting from the conversion, and the weighted values may be summed. For example, when the difference between the pixel values is converted into a value of 0.6, a weight of 0.4 may be applied to the pixel value of the green pixel of the second image data and a weight of 0.6 may be applied to the pixel value of the green pixel of the third image data, and the weighted values may be summed. Based on the summed pixel values for the green pixels and the pixel values of the red and blue pixels of the second image data, output image data may be generated.
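The following sketch illustrates this blending rule. The exact normalization of the difference is not specified above, so the mapping below (1 at the first reference value, 0 at the second) is an assumption chosen to reproduce the 0.6/0.4 example:

```python
# Blend the green value of the second image data (g2) and of the third image
# data (g3) depending on the local difference between the nearest Gr and Gb values.
def blend_green(g2: float, g3: float, diff: float, ref1: float, ref2: float) -> float:
    if diff < ref1:                      # small difference: use the second-merging result
        return g3
    if diff > ref2:                      # large difference: use the interpolated result
        return g2
    t = (ref2 - diff) / (ref2 - ref1)    # assumed mapping of the difference to [0, 1]
    return (1.0 - t) * g2 + t * g3

# diff = 14 with ref1 = 10, ref2 = 20 gives t = 0.6, i.e., weights 0.4 and 0.6.
print(blend_green(g2=105.0, g3=108.0, diff=14.0, ref1=10.0, ref2=20.0))  # 106.8
```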

Fig. 10 illustrates an example of a pixel according to an example embodiment.

Referring to fig. 10, the pixel PX may include a photoelectric conversion unit 11 and a pixel circuit 12. The pixel circuit 12 may include a plurality of transistors, for example, a transfer transistor TX controlled by a transfer control signal TS, a reset transistor RX, a driving transistor DX, and a selection transistor SX.

The photoelectric conversion unit 11 may include a photodiode. The photodiode can generate optical charges that vary with the intensity of incident light. The transfer transistor TX may transfer photo-charges to the floating diffusion node FD according to a transfer control signal TS provided from the row driver 120 (of fig. 1). The driving transistor DX may amplify a voltage corresponding to the photo-charges accumulated in the floating diffusion FD. The driving transistor DX may operate as a source follower. When the drain node of the selection transistor SX is connected to the source node of the driving transistor DX and the selection transistor SX is turned on in response to a selection signal SEL output from the row driver 120, a pixel signal APS corresponding to the voltage level of the floating diffusion node FD may be output to the column line CL connected to the pixel PX. The reset transistor RX may reset the floating diffusion FD based on the power supply voltage VDD in response to a reset signal RS provided from the row driver 120.

As described above with reference to fig. 4A and 4B, at least two rows of the pixel array 110 may be read out. At this time, the pixels in the central portion of the area AR of the pixel array 110 are not read out. Therefore, when two rows are read out simultaneously, the pixels in the central portion may not be selected. Pixels to be read out may be connected to the column line CL in response to the first selection signal SEL1 at an active level, and pixels not to be read out may be disconnected from the column line CL in response to the second selection signal SEL2 at an inactive level. Therefore, even if the pixels are in the same row, the pixel signal of each pixel can be selectively output or not output.

Fig. 11A shows a pixel array having a quad pattern. Fig. 11B illustrates an example of applying a pixel array having a quad pattern to an image sensor according to an example embodiment.

Referring to fig. 11A, the pixel array 110a has a quad pattern. The red pixels PX_R may be arranged in a 2 × 2 matrix, the first green pixels PX_Gr may be arranged in a 2 × 2 matrix, the second green pixels PX_Gb may be arranged in a 2 × 2 matrix, and the blue pixels PX_B may be arranged in a 2 × 2 matrix. This pattern may be repeated in a matrix form. Such a pattern may be referred to as a quad Bayer pattern.

In an example, each pixel in a 2 × 2 matrix may include a photoelectric conversion element and may share a floating diffusion node and a pixel circuit with each other, as shown in fig. 12. Thus, pixels in a 2 × 2 matrix may operate as a single large pixel, e.g., PX_R1, PX_Gr1, PX_Gb1, or PX_B1 as shown in fig. 11B. Large pixels may form a Bayer pattern. Therefore, as described above, the merging method according to the example embodiment may be applied. The first and third rows Row1 and Row3 may be simultaneously read out during the first horizontal period, and the second and fourth rows Row2 and Row4 may be simultaneously read out during the second horizontal period.

Fig. 12 illustrates an example of a pixel according to an example embodiment.

Referring to fig. 12, the pixel PXa may include a plurality of photoelectric conversion elements 22a, 22b, 22c, and 22d and a pixel circuit 12. For example, the pixel PXa may include four photoelectric conversion elements 22a, 22b, 22c, and 22d. In some embodiments, the photoelectric conversion elements 22a, 22b, 22c, and 22d may include photodiodes PD1A, PD1B, PD1C, and PD1D, respectively. A microlens may be disposed above each of the photoelectric conversion elements 22a, 22b, 22c, and 22d. Therefore, the combination of the microlens and the photoelectric conversion element may be referred to as a single pixel, and thus, the pixel PXa of fig. 12 may be regarded as four pixels.

The pixel circuit 12 may include: four transfer transistors TX1 to TX4 connected to the photoelectric conversion elements 22a, 22b, 22c, and 22d, respectively; a reset transistor RX1; a driving transistor DX1; and a selection transistor SX1.

The floating diffusion FD1 may be shared by the four photoelectric conversion elements 22a, 22b, 22c, and 22d and the four transfer transistors TX1 to TX4. The reset transistor RX1 may be turned on in response to a reset signal RS1 to reset the floating diffusion node FD1 with the power supply voltage VDD. Each of the transfer transistors TX1 to TX4 may connect or disconnect a corresponding one of the photodiodes PD1A, PD1B, PD1C, and PD1D to or from the floating diffusion FD1 according to a voltage of a corresponding one of the transfer gates TG1, TG2, TG3, and TG4.

Light incident on each of the photodiodes PD1A, PD1B, PD1C, and PD1D may be converted into electric charges by photoelectric conversion and accumulated therein. When the electric charges accumulated in each of the photodiodes PD1A, PD1B, PD1C, and PD1D are transferred to the floating diffusion node FD1, the electric charges may be output as the first analog voltage V1out via the driving transistor DX1 and the selection transistor SX1. The first analog voltage V1out, corresponding to the voltage change in the floating diffusion FD1, may be sent to an external readout circuit (not shown).

The pixel PXa may be applied to the pixel array 110a of fig. 11A. For example, the four photoelectric conversion elements 22a, 22b, 22c, and 22d of the pixel PXa may respectively correspond to pixels in a 2 × 2 matrix. In other words, pixels in a 2 × 2 matrix may share floating diffusion node FD1, similar to pixel PXa of fig. 12. When the transfer transistors TX1 to TX4 are simultaneously turned on or off, the pixels in the 2 × 2 matrix can operate as a single large pixel, as shown in fig. 11B. In one embodiment, when the pixel PXa operates as a large pixel, only some of the transfer transistors TX1 through TX4 may be turned on or off while others of the transfer transistors TX1 through TX4 remain turned off.
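A rough behavioral sketch of this charge-domain summation follows (an assumption for illustration, not a circuit model):

```python
# When the transfer gates of a shared-floating-diffusion pixel group are pulsed
# together, the photodiode charges are summed on the floating diffusion, so the
# 2x2 group reads out as a single large pixel.
def large_pixel_readout(pd_charges, conversion_gain: float = 1.0) -> float:
    return conversion_gain * sum(pd_charges)

print(large_pixel_readout([120.0, 118.0, 122.0, 121.0]))  # one combined value: 481.0
```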

Fig. 13 is a block diagram of an electronic device including a multi-camera module using an image sensor according to an example embodiment. Fig. 14 is a detailed block diagram of the camera module in fig. 13.

Referring to fig. 13, an electronic device 1000 may include a camera module group 1100, an application processor 1200, a Power Management Integrated Circuit (PMIC)1300, and an external memory 1400.

The camera module group 1100 may include a plurality of camera modules 1100a, 1100b, and 1100 c. Although three camera modules 1100a, 1100b, and 1100c are illustrated in fig. 13, example embodiments are not limited thereto. In some embodiments, the camera module group 1100 may be modified to include only two camera modules. In some embodiments, the camera module group 1100 may be modified to include "n" camera modules, where "n" is a natural number of at least 4.

A detailed configuration of the camera module 1100b will be described below with reference to fig. 14. The following description may also be applied to the other camera modules 1100a and 1100 c.

Referring to fig. 14, a camera module 1100b may include a prism 1105, an Optical Path Folding Element (OPFE)1110, an actuator 1130, an image sensing device 1140, and a memory 1150.

The prism 1105 may include a reflective surface 1107 of a light reflective material and may change the path of the light L incident from the outside.

In some embodiments, the prism 1105 may change the path of the light L incident in the first direction X to a second direction Y perpendicular to the first direction X. The prism 1105 may rotate a reflective surface 1107 of light reflective material in a direction A about a central axis 1106 or in a direction B about the central axis 1106 such that the path of light L incident in the first direction X is changed to the second direction Y perpendicular to the first direction X. At this time, the OPFE 1110 may move in a third direction Z perpendicular to the first direction X and the second direction Y.

In some embodiments, the maximum rotation angle of the prism 1105 in the direction A may be less than or equal to 15 degrees in the positive (+) A direction and greater than 15 degrees in the negative (-) A direction, but example embodiments are not limited thereto.

In a certain example, the prism 1105 may be moved in the positive B direction or the negative B direction by an angle of about 20 degrees or in a range from about 10 degrees to about 20 degrees or from about 15 degrees to about 20 degrees. At this time, the angle by which the prism 1105 moves in the positive B direction may be the same as or similar to the angle by which the prism 1105 moves in the negative B direction within a difference of about 1 degree.

In some embodiments, the prism 1105 may move the reflective surface 1107 of light reflective material in a third direction Z parallel to the direction of extension of the central axis 1106.

OPFE 1110 may include, for example, "m" optical lenses, where "m" is a natural number. The "m" lenses may move in the second direction Y and change the optical zoom ratio of the camera module 1100 b. For example, when the default optical zoom ratio of the camera module 1100b is Z, the optical zoom ratio of the camera module 1100b may be changed to 3Z, 5Z, or more by moving "m" optical lenses included in the OPFE 1110.

Actuator 1130 may move OPFE 1110 or the optical lens to a particular position. For example, the actuator 1130 may adjust the position of the optical lens such that the image sensor 1142 is positioned at the focal length of the optical lens for accurate sensing.

The image sensing device 1140 may include an image sensor 1142, control logic 1144, and a memory 1146. The image sensor 1142 may sense an image of an object using the light L provided through an optical lens. The image sensor 100 of fig. 1, which performs the merging method according to an example embodiment, may be used as the image sensor 1142. Accordingly, when the image sensing device 1140 operates in the first mode, the frame rate and image quality may be improved and the size of the image data generated by the image sensing device 1140 may be reduced. For example, the achievable frame rate is bounded by the circuit interface bandwidth and the computational bandwidth; because the image sensor 100 (which provides example details of the image sensor 1142) reduces the amount of image data to be transferred and processed, a high frame rate may be achieved within those bounds.

The control logic 1144 may generally control the operation of the camera module 1100 b. For example, the control logic 1144 may control the operation of the camera module 1100b according to a control signal supplied through the control signal line CSLb.

The memory 1146 may store information, such as calibration data 1147, required for operation of the camera module 1100 b. The calibration data 1147 may include information required by the camera module 1100b to generate image data using the light L provided from the outside. For example, the calibration data 1147 may include information about the angle of rotation described above, information about the focal length, information about the optical axis, and the like. When the camera module 1100b is implemented as a multi-state camera having a focal length that varies with the position of the optical lens, the calibration data 1147 may include a value of the focal length for each position (or state) of the optical lens and information related to auto-focus.
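
For illustration only, the following C++ sketch shows one way the calibration data 1147 described above could be organized. The type and field names (CalibrationData, focalLengthMmByLensState, and so on) are assumptions made for this example and are not defined by the embodiment.

```cpp
#include <map>

// Hypothetical layout of calibration data 1147 (illustrative only).
struct CalibrationData {
    double rotationAngleDeg;    // information about the rotation angle of the reflective surface
    double opticalAxisTiltDeg;  // information about the optical axis
    double defaultFocalLengthMm;

    // For a multi-state camera: a focal length value for each optical-lens
    // position (state), together with a value used for auto-focus at that state.
    std::map<int, double> focalLengthMmByLensState;
    std::map<int, double> autoFocusCodeByLensState;
};
```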

The memory 1150 may store image data sensed by the image sensor 1142. The memory 1150 may be disposed outside the image sensing device 1140 and may form a stack with a sensor chip of the image sensing device 1140. In some embodiments, the memory 1150 may include an Electrically Erasable Programmable Read-Only Memory (EEPROM), although embodiments are not limited thereto.

Referring to fig. 13 and 14, in some embodiments, each of the camera modules 1100a, 1100b, and 1100c may include an actuator 1130. Accordingly, each of the camera modules 1100a, 1100b, and 1100c may include calibration data 1147, and the calibration data 1147 may be the same as or different from one another depending on the operation of the actuator 1130 included in each camera module 1100a, 1100b, and 1100c.

In some embodiments, one of the camera modules 1100a, 1100b, and 1100c (e.g., camera module 1100b) may be a folded lens type including the prism 1105 and the OPFE 1110, while the other camera modules (e.g., camera modules 1100a and 1100c) may be a vertical type not including the prism 1105 and the OPFE 1110. However, the embodiments are not limited thereto.

In some embodiments, one of the camera modules 1100a, 1100b, and 1100c (e.g., camera module 1100c) may include a vertical depth camera that extracts depth information using Infrared (IR). In this case, the application processor 1200 may generate a three-dimensional (3D) depth image by combining image data provided from the depth camera with image data provided from another camera module (e.g., camera module 1100a or 1100 b).

In some embodiments, at least two of the camera modules 1100a, 1100b, and 1100c (e.g., 1100a and 1100b) may have different fields of view. In this case, two camera modules (e.g., 1100a and 1100b) among the camera modules 1100a, 1100b, and 1100c may respectively have different optical lenses, but the embodiment is not limited thereto.

In some embodiments, the camera modules 1100a, 1100b, and 1100c may have different fields of view from each other. In this case, the camera modules 1100a, 1100b, and 1100c may respectively have different optical lenses, but the embodiment is not limited thereto.

In some embodiments, the camera modules 1100a, 1100b, and 1100c may be physically separated from each other. In other words, the camera modules 1100a, 1100b, and 1100c do not divide and share the sensing area of a single image sensor 1142; rather, an independent image sensor 1142 may be included in each of the camera modules 1100a, 1100b, and 1100c.

Referring back to fig. 13, the application processor 1200 may include an image processing unit 1210, a memory controller 1220, and an internal memory 1230. The application processor 1200 may be implemented separately from the camera modules 1100a, 1100b, and 1100 c. For example, the application processor 1200 and the camera modules 1100a, 1100b, and 1100c may be implemented in different semiconductor chips.

The image processing unit 1210 may include: a plurality of sub-image processors 1212a, 1212b, and 1212 c; an image generator 1214; and a camera module controller 1216.

The image processing unit 1210 may include as many sub-image processors 1212a, 1212b, and 1212c as the camera modules 1100a, 1100b, and 1100 c.

The image data generated from each camera module 1100a, 1100b, and 1100c may be supplied to a corresponding one of the sub-image processors 1212a, 1212b, and 1212c through a corresponding one of the image signal lines ISLa, ISLb, and ISLc, which are separated from each other. For example, image data generated from the camera module 1100a may be supplied to the sub-image processor 1212a through an image signal line ISLa, image data generated from the camera module 1100b may be supplied to the sub-image processor 1212b through an image signal line ISLb, and image data generated from the camera module 1100c may be supplied to the sub-image processor 1212c through an image signal line ISLc. Such image data transmission may be performed using, for example, a Camera Serial Interface (CSI) based on a Mobile Industry Processor Interface (MIPI), but the embodiment is not limited thereto.

In some embodiments, a single sub-image processor may be provided for multiple camera modules. For example, unlike the illustration in fig. 13, the sub-image processors 1212a and 1212c may not be separated but may be integrated into a single sub-image processor, and the image data provided from the camera module 1100a or the camera module 1100c may be selected by a selection element (e.g., a multiplexer) and then provided to the integrated sub-image processor.
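
As a rough sketch of the selection element mentioned above, the following C++ fragment models a multiplexer-like choice between the streams from the camera modules 1100a and 1100c before a shared (integrated) sub-image processor. The Frame type, the selection flag, and the function name are illustrative assumptions, not part of the embodiment.

```cpp
#include <cstdint>
#include <vector>

// One frame of image data as received over an image signal line (illustrative).
struct Frame {
    int sourceModule;             // e.g., 0 for camera module 1100a, 2 for camera module 1100c
    std::vector<uint8_t> payload; // raw image data carried over the CSI link
};

// Multiplexer-like selection: only one of the two streams reaches the
// integrated sub-image processor, e.g., depending on mode or zoom information.
const Frame& selectForIntegratedProcessor(const Frame& fromModuleA,
                                          const Frame& fromModuleC,
                                          bool useModuleA) {
    return useModuleA ? fromModuleA : fromModuleC;
}
```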

The image data provided to each of the sub-image processors 1212a, 1212b, and 1212c may be provided to an image generator 1214. The image generator 1214 may generate an output image using image data supplied from each of the sub-image processors 1212a, 1212b, and 1212c according to the image generation information or the mode signal.

Specifically, the image generator 1214 may generate an output image by combining at least a portion of the respective image data respectively generated from the camera modules 1100a, 1100b, and 1100c having different fields of view according to the image generation information or the mode signal. Alternatively, the image generator 1214 may generate an output image by selecting one of the respective image data respectively generated from the camera modules 1100a, 1100b, and 1100c having different fields of view according to the image generation information or the mode signal.

In some embodiments, the image generation information may include a zoom signal or a zoom factor. In some embodiments, the mode signal may be based on a mode selected by the user.

When the image generation information includes a zoom signal or a zoom factor and the camera modules 1100a, 1100b, and 1100c have different fields of view, the image generator 1214 may perform different operations according to different kinds of zoom signals. For example, when the zoom signal is the first signal, the image generator 1214 may combine the image data output from the camera module 1100a with the image data output from the camera module 1100c, and may generate an output image using the combined image signal and the image data output from the camera module 1100b that is not used during the combination. When the zoom signal is a second signal different from the first signal, the image generator 1214 may generate an output image by selecting one of the respective image data respectively output from the camera modules 1100a, 1100b, and 1100c (instead of performing the combining). However, the embodiment is not limited thereto, and the method of processing the image data may be changed as needed.
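
The branching just described can be sketched as follows. The zoom-signal enumeration, the simple blend used as a stand-in for the actual combination, and the choice of which module's data is selected for the second signal are all assumptions made for illustration; the embodiment does not fix such an API.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Image = std::vector<uint16_t>;

enum class ZoomSignal { First, Second };

// Stand-in for the actual combination of image data from modules 1100a and 1100c
// (here a simple average; the real combination is not specified at this level).
static Image combineImages(const Image& fromA, const Image& fromC) {
    Image out(fromA.size());
    for (std::size_t i = 0; i < out.size(); ++i) {
        out[i] = static_cast<uint16_t>((fromA[i] + fromC[i]) / 2);
    }
    return out;
}

Image generateOutputImage(ZoomSignal zoom, const Image& fromA,
                          const Image& fromB, const Image& fromC) {
    if (zoom == ZoomSignal::First) {
        // First signal: combine 1100a and 1100c; the combined signal and the
        // (uncombined) data from 1100b are then used to form the output image.
        return combineImages(fromA, fromC);
    }
    // Second signal: select one module's image data without combining.
    return fromB;
}
```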

In some embodiments, the image generator 1214 may receive a plurality of image data having different exposure times from at least one of the sub-image processors 1212a, 1212b, and 1212c, and perform High Dynamic Range (HDR) processing on the respective image data, thereby generating combined image data having a higher dynamic range.
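
As a generic illustration of HDR processing on frames with different exposure times (not the specific algorithm of the embodiment), the sketch below normalizes each frame by its exposure time and averages the results, giving a reduced weight to nearly saturated samples. The threshold and weights are illustrative assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct ExposedFrame {
    std::vector<uint16_t> pixels; // image data from a sub-image processor
    double exposureMs;            // exposure time used for this frame
};

// Merge frames of identical size into a single higher-dynamic-range image.
std::vector<double> mergeHdr(const std::vector<ExposedFrame>& frames) {
    const std::size_t n = frames.front().pixels.size();
    std::vector<double> merged(n, 0.0);
    std::vector<double> weightSum(n, 0.0);
    for (const ExposedFrame& f : frames) {
        for (std::size_t i = 0; i < n; ++i) {
            const double value = f.pixels[i];
            const double weight = (value > 60000.0) ? 0.05 : 1.0; // distrust near-saturated pixels
            merged[i] += weight * (value / f.exposureMs);         // normalize to a common scale
            weightSum[i] += weight;
        }
    }
    for (std::size_t i = 0; i < n; ++i) {
        merged[i] /= weightSum[i];
    }
    return merged;
}
```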

The camera module controller 1216 may provide a control signal to each of the camera modules 1100a, 1100b, and 1100 c. The control signal generated by the camera module controller 1216 may be provided to a corresponding one of the camera modules 1100a, 1100b, and 1100c through a corresponding one of control signal lines CSLa, CSLb, and CSLc, which are separated from each other.

One of the camera modules 1100a, 1100b, and 1100c (e.g., the camera module 1100b) may be designated as a master camera according to a mode signal or an image generation signal including a zoom signal, and the other camera modules (e.g., the camera modules 1100a and 1100c) may be designated as slave cameras. Such designation information may be included in the control signal and provided to each of the camera modules 1100a, 1100b, and 1100c through a corresponding one of the control signal lines CSLa, CSLb, and CSLc that are separated from each other.

The camera module operating as the master camera module or the slave camera module may be changed according to the zoom factor or the operation mode signal. For example, when the field of view of the camera module 1100a is greater than the field of view of the camera module 1100b and the zoom factor indicates a low zoom ratio, the camera module 1100b may operate as a master camera module and the camera module 1100a may operate as a slave camera module. In contrast, when the zoom factor indicates a high zoom ratio, the camera module 1100a may operate as a master camera module and the camera module 1100b may operate as a slave camera module.
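
The role switch described above can be summarized in a small sketch. The threshold separating a "low" from a "high" zoom ratio and the names used here are illustrative assumptions.

```cpp
enum class Role { Master, Slave };

struct Roles {
    Role module1100a;
    Role module1100b;
};

// The field of view of 1100a is assumed to be wider than that of 1100b,
// as in the example described above.
Roles designateByZoom(double zoomFactor, double lowZoomThreshold) {
    if (zoomFactor <= lowZoomThreshold) {
        // Low zoom ratio: 1100b operates as the master, 1100a as the slave.
        return {Role::Slave, Role::Master};
    }
    // High zoom ratio: 1100a operates as the master, 1100b as the slave.
    return {Role::Master, Role::Slave};
}
```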

In some embodiments, the control signals provided from the camera module controller 1216 to each camera module 1100a, 1100b, and 1100c may include a synchronization enable signal. For example, when the camera module 1100b is a master camera module and the camera module 1100a is a slave camera module, the camera module controller 1216 may transmit a synchronization enable signal to the camera module 1100 b. The camera module 1100b supplied with the synchronization enable signal may generate a synchronization signal based on the synchronization enable signal, and may supply the synchronization signal to the camera modules 1100a and 1100c through the synchronization signal line SSL. The camera modules 1100a, 1100b, and 1100c may be synchronized with a synchronization signal and may transmit image data to the application processor 1200.
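
The synchronization flow can be pictured with the simplified model below; the structure and function names are assumptions, and the signal lines are reduced to boolean values purely for illustration.

```cpp
#include <array>

// Minimal model of the sync flow: the master (here 1100b) receives the
// synchronization enable signal, generates a sync signal on the line SSL,
// and every module transmits its image data aligned to that signal.
struct SyncModel {
    bool syncEnableToMaster = false; // driven by the camera module controller 1216
    bool ssl = false;                // synchronization signal line shared by the modules

    void step() {
        ssl = syncEnableToMaster;    // master derives the sync signal from the enable signal
    }

    // Returns, per module (1100a, 1100b, 1100c), whether a frame is sent this cycle.
    std::array<bool, 3> transmitToApplicationProcessor() const {
        return {ssl, ssl, ssl};      // all modules transmit in step with the sync signal
    }
};
```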

In some embodiments, the control signals provided from the camera module controller 1216 to each of the camera modules 1100a, 1100b, and 1100c may include mode information according to the mode signal. The camera modules 1100a, 1100b, and 1100c may operate in a first operation mode or a second operation mode related to the sensing speed based on the mode information.

In the first operation mode, the camera modules 1100a, 1100b, and 1100c may generate image signals at a first speed (e.g., at a first frame rate), encode the image signals at a second speed higher than the first speed (e.g., at a second frame rate higher than the first frame rate), and transmit the encoded image signals to the application processor 1200. At this time, the second speed may be at most 30 times the first speed.

The application processor 1200 may store the received image signal (i.e., the encoded image signal) in the internal memory 1230 thereof or the external memory 1400 outside the application processor 1200. Thereafter, the application processor 1200 may read the encoded image signal from the internal memory 1230 or the external memory 1400, decode the encoded image signal, and display image data generated based on the decoded image signal. For example, a corresponding one of the sub-image processors 1212a, 1212b, and 1212c of the image processing unit 1210 may perform the decoding and may also perform image processing on the decoded image signal.

In the second operation mode, the camera modules 1100a, 1100b, and 1100c may generate image signals at a third speed lower than the first speed (e.g., at a third frame rate lower than the first frame rate) and transmit the image signals to the application processor 1200. The image signals provided to the application processor 1200 may be signals that have not been encoded. The application processor 1200 may perform image processing on the image signals or store the image signals in the internal memory 1230 or the external memory 1400.
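
The two operation modes can be contrasted in code form as follows. The encoder is a placeholder, since no codec is specified for the embodiment, and the rate parameters are illustrative.

```cpp
#include <cstdint>
#include <vector>

enum class OperationMode { First, Second };

struct SensorOutput {
    double generationFps;      // rate at which the image signal is generated
    bool encoded;              // whether the signal is encoded before transmission
    std::vector<uint8_t> data; // signal handed to the application processor
};

// Placeholder for encoding at the (higher) second speed; the actual codec is not specified.
static std::vector<uint8_t> encodePlaceholder(const std::vector<uint8_t>& raw) {
    return raw;
}

SensorOutput produceForMode(OperationMode mode, const std::vector<uint8_t>& raw,
                            double firstFps, double thirdFps) {
    if (mode == OperationMode::First) {
        // First mode: generate at the first speed, encode, and transmit the encoded
        // signal; the application processor stores it, later decodes and displays it.
        return {firstFps, true, encodePlaceholder(raw)};
    }
    // Second mode: generate at the lower third speed and transmit without encoding.
    return {thirdFps, false, raw};
}
```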

PMIC 1300 may provide power (e.g., a supply voltage) to each camera module 1100a, 1100b, and 1100 c. For example, under control of the application processor 1200, the PMIC 1300 may provide the first power to the camera module 1100a through the power signal line PSLa, the second power to the camera module 1100b through the power signal line PSLb, and the third power to the camera module 1100c through the power signal line PSLc.

The PMIC 1300 may generate power corresponding to each of the camera modules 1100a, 1100b, and 1100c and adjust the level of the power in response to a power control signal PCON from the application processor 1200. The power control signal PCON may include a power adjustment signal for each operation mode of the camera modules 1100a, 1100b, and 1100c. For example, the operation mode may include a low-power mode, in which case the power control signal PCON may include information about which camera module is to operate in the low-power mode and the power level to be set. The same or different levels of power may be respectively provided to the camera modules 1100a, 1100b, and 1100c, and the level of power may be dynamically changed.
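
One possible shape of the information carried by the power control signal PCON is sketched below; the field names and the millivolt encoding are assumptions for illustration only.

```cpp
#include <cstdint>

enum class CameraId : uint8_t { Module1100a, Module1100b, Module1100c };

// Illustrative content of a PCON request: which module the adjustment targets,
// whether it should enter the low-power mode, and the power level to be set.
struct PowerControl {
    CameraId target;
    bool lowPowerMode;
    uint16_t levelMilliVolt;
};

// The PMIC would translate such a request into the level driven on the
// corresponding power signal line (PSLa, PSLb, or PSLc).
uint16_t resolveSupplyLevel(const PowerControl& pcon, uint16_t normalLevelMilliVolt) {
    return pcon.lowPowerMode ? pcon.levelMilliVolt : normalLevelMilliVolt;
}
```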

While example embodiments have been particularly shown and described, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the appended claims.
