Image sensor, electronic device including the same, and image scaling processing method

Document No.: 1415807 | Publication date: 2020-03-10

Note: This invention, "Image sensor, electronic device including the same, and image scaling processing method," was created by 郑溢允 and 李济硕 on 2019-06-05. Abstract: The image sensor may include: a pixel array having an N × M binning pixel array arranged in a Bayer pattern, each binning pixel comprising a k × l matrix of unit pixels of the same color, where k and l are integers greater than 2; and an image signal processor for processing the signals output by the pixel array according to a normal mode or an enlargement mode. In the enlargement mode, signals from the pixel array may be remosaiced such that signals corresponding to the unit pixels are arranged in a p × q matrix of unit pixels of the same color, where p is a non-negative integer less than k and q is a non-negative integer less than l, the p × q matrix being arranged in a Bayer pattern.

1. An image sensor, comprising:

a pixel array having an N × M binning pixel array arranged in a Bayer pattern, each binning pixel comprising a k × l matrix of unit pixels of the same color, where k and l are integers greater than 2; and

an image signal processor for processing signals output by the pixel array according to a normal mode or an enlargement mode, wherein, in the enlargement mode, signals from the pixel array are remosaiced so that signals corresponding to the unit pixels are arranged in a p × q matrix of unit pixels of the same color, where p is a non-negative integer less than k and q is a non-negative integer less than l, the p × q matrix being arranged in a Bayer pattern.

2. The image sensor of claim 1, wherein, in the normal mode, the image signal processor crops an image output by the pixel array.

3. The image sensor of claim 1, wherein the image in the normal mode and the image in the enlargement mode have the same resolution, the same resolution being less than N × M.

4. The image sensor of claim 3, wherein p and q are determined such that the image in the normal mode and the image in the enlargement mode have the same resolution.

5. The image sensor of claim 1, wherein the number of Bayer patterns in the enlargement mode corresponding to a single Bayer pattern of the N × M binning pixels is equal to (k × l)/(p × q).

6. The image sensor of claim 1, further comprising a further enlargement mode, wherein, in the further enlargement mode, signals from the pixel array are remosaiced such that signals corresponding to the unit pixels are arranged in an r × s matrix of unit pixels of the same color, where r is a non-negative integer less than p and s is a non-negative integer less than q, the r × s matrix being arranged in a Bayer pattern.

7. The image sensor of claim 6, wherein the image in the normal mode, the image in the enlargement mode, and the image in the further enlargement mode have the same resolution.

8. The image sensor of claim 1, wherein the image signal processor further processes the signal according to a reduction mode.

9. The image sensor of claim 8, wherein the image in the normal mode, the image in the enlargement mode, and the image in the reduction mode have the same resolution, the same resolution being less than or equal to N × M.

10. The image sensor of claim 8, wherein, in the reduction mode, the image signal processor outputs a signal having a resolution of N/2 × M/2.

11. The image sensor of claim 10, wherein, in the reduction mode, the image signal processor remosaics signals from the pixel array such that signals corresponding to the unit pixels are arranged in a k/2 × l/2 matrix of unit pixels of the same color.

12. The image sensor of claim 10, wherein, in the reduction mode, the image signal processor bins signals from the pixel array such that the signals are arranged in an N/2 × M/2 matrix.

13. The image sensor of claim 1, wherein, in the normal mode, each binning pixel comprises a k × l matrix of unit pixels of the same color.

14. The image sensor of claim 1, wherein, in the normal mode or the enlargement mode, an image is cropped from an N × M array.

15. The image sensor of claim 1 wherein p and q are both 1.

16. An electronic device, comprising:

a pixel array having an N × M binning pixel array arranged in a Bayer pattern, each binning pixel comprising a k × l matrix of unit pixels of the same color, where k and l are integers greater than 2;

a signal processor for processing signals output by the binning pixel array according to a normal mode or an enlargement mode; and

a remosaic processor, wherein, in the enlargement mode, the remosaic processor remosaics signals from the pixel array such that signals corresponding to the unit pixels are arranged in a p × q matrix of unit pixels of the same color, where p is a non-negative integer less than k and q is a non-negative integer less than l, the p × q matrix being arranged in a Bayer pattern.

17. The electronic device of claim 16, wherein the remosaic processor is external to the signal processor.

18. The electronic device of claim 16, wherein the remosaic processor is internal to the signal processor.

19. A method of image scaling processing for an image sensor comprising an array of pixels, the method comprising:

driving a plurality of binning pixels in the pixel array to generate a full-resolution image;

processing, according to a normal mode or an enlargement mode, signals output by a pixel array having an N × M binning pixel array arranged in a Bayer pattern, each binning pixel comprising a k × l matrix of unit pixels of the same color, where k and l are integers greater than 2; and

in the enlargement mode, remosaicing signals from the pixel array such that signals corresponding to the unit pixels are arranged in a p × q matrix of unit pixels of the same color, where p is a non-negative integer less than k and q is a non-negative integer less than l, the p × q matrix being arranged in a Bayer pattern.

20. The method of claim 19, further comprising: processing signals according to a reduction mode, wherein the image in the normal mode, the image in the enlargement mode, and the image in the reduction mode have the same resolution, the same resolution being less than or equal to N × M.

21. The method of claim 20, further comprising: in the reduction mode, remosaicing signals from the pixel array such that signals corresponding to the unit pixels are arranged in a k/2 × l/2 matrix of unit pixels of the same color.

22. The method of claim 20, further comprising: in the reduction mode, binning signals from the pixel array so that the signals are arranged in an N/2 × M/2 matrix.

23. A method of image scaling processing for an image sensor comprising an array of pixels, the method comprising:

driving a plurality of binning pixels in the pixel array to generate a full-resolution image;

cropping a portion of the full resolution image according to a normal mode or an enlargement mode;

binning the full-resolution image according to a reduction mode; and

generating an image signal according to the normal mode, the enlargement mode, or the reduction mode, wherein the image signal has the same resolution in the normal mode, the enlargement mode, and the reduction mode.

Technical Field

Apparatuses and methods consistent with example embodiments relate to an image sensor capable of electronically implementing enlargement and reduction, an electronic device including the image sensor, and an image scaling processing method.

Background

Electronic devices including image sensors (e.g., digital cameras, smartphones, and camcorders) provide zoom-in and zoom-out functions. To implement the zoom function, an image signal processor (ISP), a lens, or a separate zoom image sensor may be used. However, an ISP alone achieves the zoom function only at reduced image quality, while a zoom lens or a separate image sensor is expensive and not compact.

Disclosure of Invention

According to an example embodiment, an image sensor may include a pixel array having an N × M binning pixel array arranged in a Bayer pattern, each binning pixel including a k × l matrix of unit pixels of the same color, where k and l are integers greater than 2; and an image signal processor for processing the signals output by the pixel array according to a normal mode or an enlargement mode. In the enlargement mode, signals from the pixel array may be remosaiced such that signals corresponding to the unit pixels are arranged in a p × q matrix of unit pixels of the same color, where p is a non-negative integer less than k and q is a non-negative integer less than l, the p × q matrix being arranged in a Bayer pattern.
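At its core, the remosaic described above is a rearrangement of unit-pixel signals so that same-color blocks shrink from k × l to p × q within each Bayer super-cell. The sketch below illustrates the idea as a pure position permutation; the helper names `cfa_map` and `remosaic` are my own, and a real sensor pipeline would also interpolate signal values rather than only relocate them.

```python
import numpy as np

def cfa_map(rows, cols, bk, bl):
    # Channel index (0=Gr, 1=R, 2=B, 3=Gb) for a Bayer pattern whose
    # same-color blocks span bk x bl unit pixels.
    r = (np.arange(rows)[:, None] // bk) % 2
    c = (np.arange(cols)[None, :] // bl) % 2
    return 2 * r + c

def remosaic(raw, k, l, p, q):
    # Rearrange unit-pixel signals so same-color blocks shrink from
    # k x l to p x q (assumes k % p == 0 and l % q == 0). Within each
    # 2k x 2l Bayer super-cell, the signals of each channel are gathered
    # in raster order and rewritten at that channel's target positions.
    rows, cols = raw.shape
    src = cfa_map(rows, cols, k, l)
    dst = cfa_map(rows, cols, p, q)
    out = np.empty_like(raw)
    for i in range(0, rows, 2 * k):
        for j in range(0, cols, 2 * l):
            s = src[i:i + 2 * k, j:j + 2 * l]
            d = dst[i:i + 2 * k, j:j + 2 * l]
            tile = raw[i:i + 2 * k, j:j + 2 * l]
            view = out[i:i + 2 * k, j:j + 2 * l]
            for ch in range(4):
                view[d == ch] = tile[s == ch]
    return out
```

For a 16-binning array (k = l = 4), `remosaic(raw, 4, 4, 1, 1)` rearranges the signals into an ordinary Bayer mosaic (p = q = 1, as in claim 15), while `remosaic(raw, 4, 4, 2, 2)` yields a 4-binning pattern.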

According to an example embodiment, an electronic device may include: a pixel array having an N × M binning pixel array arranged in a Bayer pattern, each binning pixel including a k × l matrix of unit pixels of the same color, where k and l are integers greater than 2; a signal processor for processing signals output by the binning pixel array according to a normal mode or an enlargement mode; and a remosaic processor. In the enlargement mode, the remosaic processor remosaics signals from the pixel array such that signals corresponding to the unit pixels are arranged in a p × q matrix of unit pixels of the same color, where p is a non-negative integer less than k and q is a non-negative integer less than l, the p × q matrix being arranged in a Bayer pattern.

According to an example embodiment, a method of image scaling processing of an image sensor including a pixel array may include: driving a plurality of binning pixels in the pixel array to generate a full-resolution image; processing, according to a normal mode or an enlargement mode, signals output by a pixel array having an N × M binning pixel array arranged in a Bayer pattern, each binning pixel comprising a k × l matrix of unit pixels of the same color, where k and l are integers greater than 2; and, in the enlargement mode, remosaicing signals from the pixel array such that signals corresponding to the unit pixels are arranged in a p × q matrix of unit pixels of the same color, where p is a non-negative integer less than k and q is a non-negative integer less than l, the p × q matrix being arranged in a Bayer pattern.

According to an example embodiment, there is provided an image scaling processing method of an image sensor including a pixel array, a signal processor, and a signal output unit. A plurality of binning pixels disposed in the pixel array are driven to generate a full-resolution image. A portion of the full-resolution image is cropped to generate a normal mode image or an enlargement mode image. The signal processor remosaics the normal mode image or the enlargement mode image. The signal output unit outputs the remosaiced normal mode image or the remosaiced enlargement mode image to a host chip of the electronic device.

According to an example embodiment, there is provided an image scaling processing method of an image sensor including a pixel array, a signal processor, and a signal output unit. A plurality of binning pixels disposed in the pixel array are driven to generate a full-resolution image. A portion of the full-resolution image is cropped to generate a reduction mode image. The signal processor bins the reduction mode image. The signal output unit outputs the binned reduction mode image to a host chip of the electronic device.

According to an example embodiment, there is provided an image scaling processing method of an electronic device including an image sensor. A plurality of binning pixels disposed in a pixel array are driven to generate a full-resolution image. A portion of the full-resolution image is cropped to generate a normal mode image or an enlargement mode image. The normal mode image or the enlargement mode image is output to a host chip of the electronic device. The host chip remosaics the normal mode image or the enlargement mode image.

According to an example embodiment, a method of image scaling processing of an image sensor including a pixel array may include: driving a plurality of binning pixels in the pixel array to generate a full-resolution image; cropping a portion of the full-resolution image according to a normal mode or an enlargement mode; binning the full-resolution image according to a reduction mode; and generating an image signal according to the normal mode, the enlargement mode, or the reduction mode, wherein the image signal has the same resolution in the normal mode, the enlargement mode, and the reduction mode.

According to an example embodiment, there is provided an image sensor including a pixel array, a timing generator, a signal processor, and a signal output unit. A plurality of binning pixels are disposed in the pixel array, and all or some of the plurality of binning pixels are driven to generate a normal mode image, an enlargement mode image, or a reduction mode image. The timing generator drives each of the plurality of binning pixels based on a zoom mode input from a user interface. The signal processor remosaics the normal mode image or the enlargement mode image, or bins the reduction mode image. The signal output unit outputs the remosaiced or binned image to a host chip of the electronic device.

Drawings

Features will become apparent to those skilled in the art by describing in detail exemplary embodiments with reference to the attached drawings, wherein:

fig. 1A is a diagram of an electronic device including an image sensor, according to an example embodiment.

FIG. 1B illustrates a diagram of an electronic device including an image sensor, according to an example embodiment.

Fig. 2A illustrates a diagram of a signal processor of an image sensor according to an example embodiment.

Fig. 2B illustrates a diagram of a signal processor according to an example embodiment.

FIG. 2C illustrates a diagram of a host chip according to an example embodiment.

FIG. 2D illustrates a diagram of an image processor of the host chip of FIG. 2C according to an example embodiment.

Fig. 3A illustrates a diagram showing a pixel array of an image sensor.

Fig. 3B illustrates a circuit diagram of one unit pixel.

Fig. 4 illustrates a full resolution image acquired by an image sensor.

Fig. 5A illustrates a normal mode image generated by cropping a portion of a full-resolution image based on the center.

Fig. 5B illustrates an operation of generating a normal mode image by cropping a part of a full-resolution image based on corners.

Fig. 6A illustrates an enlargement mode image generated by cropping a portion of a full-resolution image based on the center.

Fig. 6B illustrates an operation of generating an enlargement mode image by cropping a part of a full-resolution image based on a corner.

Fig. 7 illustrates a reduction mode image.

Fig. 8A illustrates an example of a 16-binning pixel pattern.

Fig. 8B illustrates an example of a 4-binning pixel pattern.

Fig. 8C illustrates an example of a bayer pixel pattern.

Fig. 8D illustrates an example of a 9-binning pixel pattern.

Fig. 8E illustrates an example of an N × M binning pixel pattern.

Fig. 9A illustrates an example of remosaicing a 16-binning pixel pattern into a 4-binning pixel pattern.

Fig. 9B illustrates an example of remosaicing a 4-binning pixel pattern into a Bayer pixel pattern.

Fig. 9C illustrates an example of remosaicing a 16-binning pixel pattern into a Bayer pixel pattern.

Fig. 10 illustrates an example of enlarging an image by remosaicing a 4-binning pixel pattern into a Bayer pixel pattern.

Figs. 11A and 11B illustrate an example of reducing an image by binning a 4-binning pixel pattern into an N/2 × M/2 Bayer pixel pattern.

Fig. 12 illustrates an example in which an image sensor having an N × M binning pixel structure generates normal mode, enlargement mode, and reduction mode images.

Fig. 13 illustrates an example in which an image sensor having an N × M binning pixel structure enlarges an image 1×, 2×, and 4× without increasing the size of the image file.

Detailed Description

Hereinafter, an image sensor, an electronic apparatus including the image sensor, and an image scaling processing method according to the present exemplary embodiment will be described with reference to the drawings.

Fig. 1A is a diagram of an electronic device including an image sensor, according to an example embodiment. FIG. 1B is a diagram of an electronic device including an image sensor, according to an example embodiment. Fig. 2A is a diagram of a signal processor of the image sensor. Fig. 3A is a diagram illustrating a pixel array of an image sensor. Fig. 3B is a circuit diagram of one unit pixel.

Referring to fig. 1A, 2A, 3A, and 3B, an electronic device 10 according to an example embodiment may include an image sensor 100, a user interface 210, and a host chip 220. The image sensor 100 may include a timing generator 110, a pixel array 120, a signal processor 130, a memory 140, and a signal output unit 150. The signal processor 130 may include a first image signal processor (ISP) 131, a remosaic processor 132, a second ISP 133, a downscaler 134, a third ISP 135, and an output interface 136.

The electronic device 10 according to an example embodiment may be a device that includes the user interface 210 and the host chip 220 and has display and communication functions. For example, the electronic device 10 may be any one of a smartphone, a tablet personal computer (PC), a mobile phone, a wearable device (e.g., a smart watch), an electronic book reader, a notebook computer, a netbook, a personal digital assistant (PDA), a portable multimedia player (PMP), a mobile medical instrument, a digital camera, and the like.

As shown in fig. 3A, the pixel array 120 may include a plurality of unit pixels 121. The plurality of unit pixels 121 may be arranged in a two-dimensional (2D) array. As an example, the pixel array 120 may be arranged such that N (N is an integer greater than or equal to 1) unit pixels 121 are arranged in the vertical direction and M (M is an integer greater than or equal to 1) unit pixels 121 are arranged in the horizontal direction.

The pixel array 120 may be formed in a chip form and may include a plurality of interconnections (see fig. 3B) for signal input and output of the respective unit pixels 121 and a readout circuit (see fig. 3B). Each of the plurality of unit pixels 121 may include a color filter (e.g., a red, blue, or green color filter). Reflecting the characteristics of human vision, 25% of all unit pixels may include a red color filter, 25% may include a blue color filter, and 50% may include a green color filter. The pixel array 120 may be implemented as a complementary metal-oxide-semiconductor (CMOS) image sensor (CIS). Unit pixels 121 including the same color filter may be adjacent to each other, thereby constituting the pixel array 120.

As an example, the pixel array 120 may include 16-binning pixel patterns, in each of which 16 unit pixels 121 having the same color filter are arranged in a 4 × 4 matrix. In other words, 16 unit pixels including the same color filter may constitute one 16-binning pixel. Different 16-binning pixels may be vertically and horizontally adjacent to each other to form the pixel array 120.

As an example, the pixel array 120 may include 4-binning pixel patterns, in each of which 4 unit pixels 121 having the same color filter are arranged in a 2 × 2 matrix. In other words, 4 unit pixels including the same color filter may constitute one 4-binning pixel. Different 4-binning pixels may be vertically and horizontally adjacent to each other, thereby forming the pixel array 120.
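The binned Bayer layouts described above can be constructed programmatically. The sketch below is illustrative only: the helper name `binned_bayer` and the specific G-R/B-G Bayer variant are assumptions, since the text does not fix a particular variant.

```python
import numpy as np

def binned_bayer(n, m, k, l):
    # CFA color letters for an n x m grid of binning pixels in a Bayer
    # pattern, each binning pixel expanded to k x l unit pixels of one color.
    bayer = np.array([["G", "R"], ["B", "G"]])
    colors = np.tile(bayer, (n // 2, m // 2))          # n x m binning-pixel colors
    return colors.repeat(k, axis=0).repeat(l, axis=1)  # expand to unit pixels

# 16-binning pattern: every color block spans 4 x 4 unit pixels.
cfa = binned_bayer(4, 4, 4, 4)
```

Whatever the binning factor, the filter proportions stay at the 50% green / 25% red / 25% blue split noted earlier, because binning only expands each Bayer cell.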

The resolution of the image generated by the electronic device 10 may vary depending on the number of unit pixels 121. As an example, the pixel array 120 may include, for example, 4,000 unit pixels 121 arranged horizontally in a row direction, and, for example, 3,000 unit pixels 121 arranged vertically in a column direction. In this case, the pixel array 120 may generate an image having a resolution of 12 Million Pixels (MP) (4,000 × 3,000). As an example, the pixel array 120 may include 8,000 unit pixels 121 arranged horizontally and 6,000 unit pixels 121 arranged vertically. In this case, the pixel array 120 can generate an image having a resolution of 48MP (8,000 × 6,000). As an example, the pixel array 120 may include 12,000 unit pixels 121 arranged horizontally and 9,000 unit pixels 121 arranged vertically. In this case, the pixel array 120 can generate an image having a resolution of 108MP (12,000 × 9,000).
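The resolution figures quoted above are simple products of the horizontal and vertical unit-pixel counts. The binned-output line at the end is my own illustration (the text does not pair a binning factor with each resolution):

```python
# Full-resolution sizes quoted above.
assert 4_000 * 3_000 == 12_000_000      # 12 MP
assert 8_000 * 6_000 == 48_000_000      # 48 MP
assert 12_000 * 9_000 == 108_000_000    # 108 MP

# Illustrative assumption: with k x l binning, the binned output has
# (width // l) x (height // k) binning pixels. For example, a 108 MP
# array read out with 9-binning (3 x 3) yields a 12 MP binned image.
assert (12_000 // 3) * (9_000 // 3) == 12_000_000
```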

As shown in fig. 3B, each of the plurality of unit pixels 121 may include a photodiode PD (a photosensitive element) and a readout circuit including a plurality of transistors TX, RX, DX, and SX and a plurality of interconnections. The readout circuit may drive the photodiode PD and read out an image signal generated by the photodiode PD. The readout circuit may include a transfer transistor TX, a driving transistor DX, a selection transistor SX, and a reset transistor RX.

The photo-charges generated by the photodiode PD may be output to a first node N1 (e.g., a floating diffusion node) through the transfer transistor TX. For example, when the transmission control signal TG is at a first level (e.g., a high level), the transmission transistor TX may be turned on. When the transfer transistor TX is turned on, the photo charges generated by the photodiode PD may be output to the first node N1 through the transfer transistor TX.

For example, the driving transistor DX may function as a source follower buffer amplifier. The driving transistor DX may amplify a signal corresponding to the charge stored in the first node N1.

For example, the selection transistor SX may be turned on in response to a selection signal SEL. When the selection transistor SX is turned on, a signal amplified by the driving transistor DX may be transmitted to the column line COL.

For example, the reset transistor RX may be turned on in response to the reset signal RS. When the reset transistor RX is turned on, the charge stored in the first node N1 may be discharged. Fig. 3B illustrates a unit pixel 121 including one photodiode PD and four MOS transistors TX, RX, DX, and SX. Alternatively, each unit pixel 121 may include one photodiode PD and three or fewer MOS transistors, or one photodiode PD and five or more MOS transistors.

Referring to fig. 2B through 2D, the components of the signal processor 130 shown in fig. 2A may be variously configured among other components of the device 10.

As shown in fig. 2B, the signal processor 130a may include the first ISP 131 and the output interface 136, in contrast to the signal processor 130 of fig. 2A. In that case, as shown in fig. 2C, the host chip 220a may provide the additional processing; for example, it may include a signal input unit 222 and an image processor 230.

As shown in fig. 2D, the image processor 230 in the host chip 220a may include an input interface 231, the second ISP 133, the remosaic processor 132, and the downscaler 134. The signal processor 130a may convert the first image output from the N × M 16-binning pixels into a data signal and transmit the data signal to the host chip 220a via the signal output unit 150 of fig. 1A. The image processor 230 may receive the data signal via the input interface 231, which converts the input data signal back into the first image output from the N × M 16-binning pixels. The input interface 231 may transmit the first image to the second ISP 133.

Referring to fig. 1B, an electronic device 10' according to an example embodiment may include an image sensor 100, a user interface 210, and a host chip 220. The electronic device 10' may additionally include an illumination sensor 160. The illuminance sensor 160 may output illuminance values to the signal processor 130 and the user interface 210. The illuminance sensor 160 may be separate from the image sensor 100 or part of the image sensor 100.

Fig. 4 shows a full resolution image acquired by an image sensor.

Referring to fig. 1A and 4, a user may select a zoom mode via the user interface 210 of the electronic device 10. The user interface 210 may transmit a zoom mode signal to the image sensor 100 according to a zoom mode selection of the user. The image sensor 100 may generate an image according to a normal mode, an enlargement mode, or a reduction mode based on an input zoom mode.

Referring to fig. 1B and 4, the illuminance sensor 160 is a sensor for measuring the amount of light; it may sense the ambient illuminance of the image sensor 100 through a resistance value that changes according to the amount of incident light. The illuminance sensor 160 may generate an illuminance value according to the sensed illuminance and transmit the generated illuminance value to the user interface 210 and the signal processor 130.

The user interface 210 may automatically select the normal mode, the enlargement mode, or the reduction mode using the illuminance value. That is, the user interface 210 may generate a normal mode signal, an enlargement mode signal, or a reduction mode signal based on the illuminance value input from the illuminance sensor 160, and may transmit the generated signal to the image sensor 100. The signal processor 130 may then generate a normal mode image, an enlargement mode image, or a reduction mode image based on the input mode signal.
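The automatic selection described above can be sketched as a simple threshold policy. Note that the text gives neither thresholds nor the illuminance-to-mode mapping; the function name, lux thresholds, and the low-light-favors-binning rationale below are all assumptions for illustration.

```python
def select_zoom_mode(lux, low=100.0, high=5_000.0):
    # Hypothetical policy: bin (reduction mode) in low light, since
    # combining unit pixels improves sensitivity; allow remosaic-based
    # enlargement only under bright light; otherwise stay in normal mode.
    if lux < low:
        return "reduction"
    if lux > high:
        return "enlargement"
    return "normal"
```

A selection made explicitly through the user interface would take precedence over this automatic choice, as described for the electronic device 10'.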

As shown in fig. 1A, a zoom mode signal according to a user's selection may be transmitted to the image sensor 100 through the user interface 210. As shown in fig. 1B, a zoom mode signal based on the illuminance value of the illuminance sensor 160 may be transmitted to the image sensor 100. Both mechanisms may be employed together; for example, in the electronic device 10', a selection made through the user interface 210 may override the mode automatically set from the illuminance sensor 160.

Generation of normal mode image

Fig. 5A shows a normal mode image generated by cropping a part of a full-resolution image based on the center. Fig. 5B illustrates an operation of generating a normal mode image by cropping a part of a full-resolution image based on corners.

Referring to fig. 1A to 5B, the timing generator 110 may generate driving signals (e.g., a horizontal reference signal, a vertical reference signal, a horizontal scanning reference signal, a vertical scanning reference signal, and a field signal) for driving the pixel array 120. The timing generator 110 may supply the generated driving signal to each unit pixel 121 of the pixel array 120. The normal mode signal may be input to the image sensor 100 from the user interface 210. The timing generator 110 may drive all the unit pixels of the pixel array 120 based on the input normal mode signal.

As shown in fig. 4, the image sensor 100 may crop a portion of the full-resolution image generated by driving all the unit pixels of the pixel array 120, or may output that full-resolution image as-is. The image sensor 100 may generate a normal mode image by cropping all or a portion of the full-resolution image.

As an example, the timing generator 110 may generate a normal mode image by causing the first unit pixels, i.e., the unit pixels of the pixel array 120 corresponding to the normal mode, to output signals. The timing generator 110 may stop the signal output of the second unit pixels, i.e., all the unit pixels of the pixel array 120 other than the first unit pixels. In other words, before a zoom mode signal for the image is input, the image sensor 100 may drive all the unit pixels, and the electronic device 10 may display the full-resolution image on the screen. When the normal mode signal is input, the image sensor 100 may generate the normal mode image by causing the first unit pixels to output signals and stopping the signal output of the second unit pixels other than the first unit pixels corresponding to the normal mode signal.

As an example, the timing generator 110 may drive all the unit pixels of the pixel array 120 based on the input normal mode signal. When all the unit pixels of the pixel array 120 output signals, a full-resolution image can be generated. The full resolution image generated by the pixel array 120 may then be sent to the signal processor 130. The signal processor 130 may generate a normal mode image by cropping a portion of the full resolution image.

Since the image sensor 100 generates the normal mode image by cropping a portion of the full-resolution image, the volume (i.e., data size) of the normal mode image can be reduced compared to the full-resolution image. The image sensor 100 may generate the normal mode image by cropping a region corresponding to 1/2 to 1/16 of the full-resolution image, based on a specific reference point. In other words, the normal mode image may be generated from the image signals output by 1/2 to 1/16 of all the unit pixels.

For example, as shown in fig. 5A, the image sensor 100 may generate the normal mode image by cropping a region corresponding to 1/2 to 1/16 of the full-resolution image based on the center of the full-resolution image. In other words, the image sensor 100 may generate the normal mode image by causing 1/2 to 1/16 of all the unit pixels, taken about the center of the pixel array 120, to output signals.

For example, as shown in fig. 5B, the image sensor 100 may generate the normal mode image by cropping a region corresponding to 1/2 to 1/16 of the full-resolution image based on the upper left corner a of the full-resolution image. In other words, the image sensor 100 may generate the normal mode image by causing 1/2 to 1/16 of all the unit pixels, taken from the upper left corner a of the pixel array 120, to output signals.

Likewise, the image sensor 100 may generate the normal mode image by cropping a region corresponding to 1/2 to 1/16 of the full-resolution image based on the upper right corner b, the lower left corner c, or the lower right corner d of the full-resolution image.

For example, the image sensor 100 may also generate the normal mode image by cropping a region corresponding to 1/2 to 1/16 of the full-resolution image based on a specific point other than the center of the full-resolution image (i.e., the center of the pixel array 120) and the four corners a, b, c, and d.
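The center- and corner-anchored cropping described in this section can be sketched as follows. The helper `crop` and its anchor names are hypothetical; `frac` is the area fraction of the crop (1/2 to 1/16 for the normal mode, 1/4 to 1/64 for the enlargement mode).

```python
import numpy as np

def crop(img, frac, anchor="center"):
    # Crop a region whose area is `frac` of the input image, anchored
    # at the center or one of the four corners. The linear scale per
    # axis is sqrt(frac), so the cropped area is frac of the original.
    h, w = img.shape[:2]
    s = frac ** 0.5
    ch, cw = int(round(h * s)), int(round(w * s))
    anchors = {
        "center":       ((h - ch) // 2, (w - cw) // 2),
        "top_left":     (0, 0),
        "top_right":    (0, w - cw),
        "bottom_left":  (h - ch, 0),
        "bottom_right": (h - ch, w - cw),
    }
    top, left = anchors[anchor]
    return img[top:top + ch, left:left + cw]
```

Because only the cropped region is kept (or only those unit pixels are read out), the output data size shrinks by the same factor `frac` relative to the full-resolution image.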

Generation of magnified mode images

Fig. 6A shows an enlargement mode image generated by cropping a portion of a full-resolution image based on the center. Fig. 6B illustrates an operation of generating an enlargement mode image by cropping a part of a full-resolution image based on a corner.

Referring to fig. 1A to 4, 6A, and 6B, an enlargement mode signal may be input from the user interface 210 to the image sensor 100. The timing generator 110 may drive all the unit pixels of the pixel array 120 based on the input enlargement mode signal. As shown in fig. 4, the image sensor 100 may crop a portion of the full-resolution image generated by driving all the unit pixels of the pixel array 120. The image sensor 100 may generate an enlargement mode image by cropping a portion of the full-resolution image, namely a smaller region than that of the normal mode image (e.g., 1/2 to 1/4 of the area of the normal mode image).

As an example, the timing generator 110 may generate the enlargement mode image by causing the first unit pixels corresponding to the enlargement mode, among all the unit pixels of the pixel array 120, to output signals. The timing generator 110 may stop the signal output of the second unit pixels, i.e., the unit pixels of the pixel array 120 other than the first unit pixels corresponding to the enlargement mode. In other words, before an enlargement mode signal is input, the image sensor 100 may drive all unit pixels, and the electronic device 10 may display a full-resolution image on a screen. When the enlargement mode signal is input, the image sensor 100 may generate an enlargement mode image by causing the first unit pixels to output signals. The image sensor 100 may stop the signal output of the second unit pixels other than the first unit pixels corresponding to the enlargement mode signal.

As an example, the timing generator 110 may drive all the unit pixels of the pixel array 120 based on the input enlargement mode signal. All the unit pixels of the pixel array 120 output signals so that a full-resolution image can be generated. The full-resolution image generated by the pixel array 120 may then be sent to the signal processor 130. The signal processor 130 may generate the enlargement mode image by cropping a portion of the full-resolution image.

Since the image sensor 100 generates the enlargement mode image by cropping a part of the full-resolution image, the volume (i.e., data size) of the enlargement mode image can be reduced compared to the full-resolution image. The image sensor 100 may generate the enlargement mode image by cropping a region of 1/4 to 1/64 of the full-resolution image based on a specific point. In other words, the enlargement mode image can be generated based on the image signals output from 1/4 to 1/64 of all the unit pixels.

For example, as shown in fig. 6A, the image sensor 100 may generate an enlargement mode image by cropping a region of 1/4 to 1/64 of the full-resolution image based on the center of the full-resolution image. In other words, the image sensor 100 may generate the enlargement mode image by causing 1/4 to 1/64 of all unit pixels, centered on the center of the pixel array 120, to output signals.

For example, as shown in fig. 6B, the image sensor 100 may generate the enlargement mode image by cropping a region of 1/4 to 1/64 of the full-resolution image based on the upper left corner a of the full-resolution image. In other words, the image sensor 100 may generate the enlargement mode image by causing 1/4 to 1/64 of all unit pixels, counted from the upper left corner a of the pixel array 120, to output signals.

Likewise, the image sensor 100 may generate the enlargement mode image by cropping a region of 1/4 to 1/64 of the full-resolution image based on the upper right corner b, the lower left corner c, or the lower right corner d of the full-resolution image.

For example, the image sensor 100 may generate the enlargement mode image by cropping a region of 1/4 to 1/64 of the full-resolution image based on a specific point other than the center of the full-resolution image (i.e., the center of the pixel array 120) or the four corners a, b, c, and d.
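The 1/4-to-1/64 range follows directly from the zoom factor: cropping for z-times digital zoom keeps 1/z² of the frame area. A one-line sketch (illustrative name):

```python
def crop_area_fraction(zoom):
    """Area fraction kept for a digital zoom factor: each side shrinks by
    1/zoom, so the cropped area is 1/zoom**2 of the full frame."""
    return 1.0 / (zoom * zoom)
```

Thus 2x zoom corresponds to the 1/4 region and 8x zoom to the 1/64 region.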

Generation of reduced mode image

Fig. 7 shows a reduced mode image.

Referring to fig. 1A to 4 and 7, a reduction mode signal may be input from the user interface 210 to the image sensor 100. The timing generator 110 may drive all the unit pixels of the pixel array 120 based on the input reduction mode signal. As shown in fig. 4, the image sensor 100 may generate a reduced mode image based on a full-resolution image generated by driving all unit pixels of the pixel array 120.

As an example, the timing generator 110 may drive all the unit pixels of the pixel array 120 based on the input reduction mode signal. The timing generator 110 may generate a reduced mode image by causing all unit pixels of the pixel array 120 to output signals.

When all the unit pixels of the pixel array 120 output signals, a full-resolution image can be generated. The full-resolution image generated by the pixel array 120 may then be sent to the signal processor 130. The signal processor 130 may generate a reduced mode image through signal processing of the full-resolution image. If the full-resolution image were used as the reduced mode image as it is, the data amount would be unnecessarily large. The signal processor 130 may therefore reduce the data size of the reduced mode image through signal processing. The signal processor 130 may transmit the reduced mode image, whose data size has been reduced, to the host chip 220 through the signal output unit 150.

Alternatively, the full resolution image data may be output from the signal processor 130a to the host chip 220a after being processed by the first ISP 131.

Fig. 4 shows that the image sensor 100 generates a reduced mode image at the same zoom level as the full-resolution image. In addition, the image sensor 100 may generate a reduced mode image at a zoom level of 1/2 to 1 times that of the full-resolution image.

When the image sensor 100 generates a reduced mode image at a zoom level of 1/2 to 1 times that of the full-resolution image, the timing generator 110 may drive all unit pixels of the pixel array 120 based on an input reduction mode signal. The image sensor 100 may crop a portion of the full-resolution image generated by driving all unit pixels of the pixel array 120. The image sensor 100 may generate a reduced mode image by using the whole full-resolution image or by cropping a part of it.

As an example, the timing generator 110 may generate a reduced mode image by causing the first unit pixels corresponding to the reduction mode, among all the unit pixels of the pixel array 120, to output signals. The timing generator 110 may stop the signal output of the second unit pixels, i.e., the unit pixels of the pixel array 120 other than the first unit pixels corresponding to the reduction mode.

As an example, the timing generator 110 may drive all the unit pixels of the pixel array 120 based on the input reduction mode signal. All the unit pixels of the pixel array 120 are made to output signals so that a full-resolution image can be generated. The full-resolution image generated by the pixel array 120 may then be sent to the signal processor 130. The signal processor 130 may generate a reduced mode image by cropping a region of 1/2 to 1 of the full-resolution image. The image sensor 100 may generate a reduced mode image by cropping a region of 1/2 to 1 of the full-resolution image based on a specific point. In other words, the reduced mode image can be generated based on, at most, the image signals output from all the unit pixels, and based on, at least, the image signals output from half of all the unit pixels.

The image sensor 100 may generate a reduced mode image based on the center of the pixel array 120. In addition, the image sensor 100 may generate a reduced mode image based on the upper left, upper right, lower left, or lower right corner of the pixel array 120. The image sensor 100 may also generate a reduced mode image based on a specific point other than the center and the four corners a, b, c, and d of the pixel array 120.

Fig. 8A illustrates an example of a 16-binning pixel pattern.

Referring to fig. 3A and 8A, each of a plurality of unit pixels 121 in the pixel array 120 may include a red color filter, a blue color filter, or a green color filter. 16 unit pixels including the same color filter may be arranged in an N × M matrix (e.g., a 4 × 4 matrix) to constitute one N × M merged pixel (e.g., a 16-merged pixel). In the pixel array 120, the ratio of the first 16-merged pixels 16R including the red color filter, the second 16-merged pixels 16B including the blue color filter, and the third 16-merged pixels 16G including the green color filter is 1:1:2. The second 16-merged pixel 16B including the blue color filter may be diagonal to the first 16-merged pixel 16R including the red color filter. The third 16-merged pixel 16G including the green color filter may be on the upper, lower, left, and right sides of the first 16-merged pixel 16R including the red color filter. The first 16-merged pixel 16R including the red color filter may be diagonal to the second 16-merged pixel 16B including the blue color filter. The third 16-merged pixel 16G including the green color filter may be on the upper, lower, left, and right sides of the second 16-merged pixel 16B including the blue color filter. The first 16-merged pixel 16R including the red color filter may be on the upper and lower sides of the third 16-merged pixel 16G including the green color filter. The second 16-merged pixel 16B including the blue color filter may be on the left and right sides of the third 16-merged pixel 16G including the green color filter.

Fig. 8B illustrates an example of a 4-binning pixel pattern.

Referring to fig. 3A and 8B, each of a plurality of unit pixels 121 in the pixel array 120 may include a red color filter, a blue color filter, or a green color filter. Pixels including the same color filter may be arranged in an N × M matrix (e.g., a 2 × 2 matrix of 4 unit pixels) to constitute one N × M merged pixel (e.g., a 4-merged pixel). In the pixel array 120, the ratio of the first 4-merged pixels 4R including the red color filter, the second 4-merged pixels 4B including the blue color filter, and the third 4-merged pixels 4G including the green color filter is 1:1:2. The second 4-merged pixel 4B including the blue color filter may be diagonal to the first 4-merged pixel 4R including the red color filter. The third 4-merged pixel 4G including the green color filter may be on the upper, lower, left, and right sides of the first 4-merged pixel 4R including the red color filter. The first 4-merged pixel 4R including the red color filter may be diagonal to the second 4-merged pixel 4B including the blue color filter. The third 4-merged pixel 4G including the green color filter may be on the upper, lower, left, and right sides of the second 4-merged pixel 4B including the blue color filter. The first 4-merged pixel 4R including the red color filter may be on the upper and lower sides of the third 4-merged pixel 4G including the green color filter. The second 4-merged pixel 4B including the blue color filter may be on the left and right sides of the third 4-merged pixel 4G including the green color filter.

Fig. 8C illustrates an example of a bayer pixel pattern.

Referring to fig. 3A and 8C, each of the plurality of unit pixels 121 in the pixel array 120 may include a red color filter, a blue color filter, or a green color filter. The ratio of the first unit pixels including the red color filter, the second unit pixels including the blue color filter, and the third unit pixels including the green color filter may be 1:1:2. The second unit pixel including the blue color filter may be diagonal to the first unit pixel including the red color filter. The third unit pixel including the green color filter may be on the upper, lower, left, and right sides of the first unit pixel including the red color filter. The first unit pixel including the red color filter may be diagonal to the second unit pixel including the blue color filter. The third unit pixel including the green color filter may be on the upper, lower, left, and right sides of the second unit pixel including the blue color filter. The first unit pixel including the red color filter may be on the left and right sides of the third unit pixel including the green color filter. The second unit pixel including the blue color filter may be on the upper and lower sides of the third unit pixel including the green color filter.

The image sensor 100 may transmit an image to the host chip 220 or 220a, or the electronic device 10 may transmit an image to another electronic device. As the resolution of an image to be transmitted increases, the amount of data increases, and the number of frames per second (FPS) decreases. Assuming that 1,024 × 1,024 full-color images (three bytes per pixel) are transmitted, 3 megabytes (MB) of data per image is transmitted. When 1,024 × 1,024 image data based on the Bayer pixel pattern shown in fig. 8C (one byte per pixel) is transmitted, 1 MB of data per image is transmitted. In other words, the FPS when image data based on the Bayer pixel pattern is transmitted may be three times that when the Bayer pixel pattern is not used.
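The bandwidth arithmetic above can be checked directly; the sketch assumes 8 bits per sample, with three samples per pixel for full color and one per pixel for raw Bayer data (the variable names are illustrative):

```python
WIDTH, HEIGHT = 1024, 1024

bytes_full_color = WIDTH * HEIGHT * 3  # 3 bytes/pixel (R, G, B samples)
bytes_bayer = WIDTH * HEIGHT * 1       # 1 byte/pixel (one color sample)

mb_full_color = bytes_full_color / 2**20  # 3.0 MB per image
mb_bayer = bytes_bayer / 2**20            # 1.0 MB per image
fps_ratio = bytes_full_color / bytes_bayer  # fixed link bandwidth -> 3x FPS
```

With the link bandwidth fixed, the frame rate scales inversely with the per-image data size, hence the factor of three.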

Fig. 8D shows an example of a 9-binning pixel pattern.

Referring to fig. 3A and 8D, each of the plurality of unit pixels 121 in the pixel array 120 may include a red color filter, a blue color filter, or a green color filter. Pixels including the same color filter may be arranged in an N × M matrix (e.g., a 3 × 3 matrix of 9 unit pixels) to constitute one N × M merged pixel (e.g., a 9-merged pixel). In the pixel array 120, the ratio of the first 9-merged pixels 9R including the red color filter, the second 9-merged pixels 9B including the blue color filter, and the third 9-merged pixels 9G including the green color filter may be 1:1:2.

Fig. 8E shows an example of an N × M binning pixel pattern.

Referring to fig. 3A and 8E, each of the plurality of unit pixels 121 in the pixel array 120 may include a red color filter, a blue color filter, or a green color filter. Pixels including the same color filter may be arranged in an N × M matrix (e.g., a 3 × 4 matrix of 12 unit pixels) to constitute one N × M merged pixel (e.g., a 12-merged pixel). Fig. 8A, 8B, and 8D show merged pixels in which unit pixels are arranged in 4 × 4, 2 × 2, and 3 × 3 matrices. However, the merged pixels are not limited thereto, and as shown in fig. 8E, the number of pixels arranged in the horizontal direction may be different from the number of pixels arranged in the vertical direction. In the pixel array 120, the ratio of the first 12-merged pixels 12R including the red color filter, the second 12-merged pixels 12B including the blue color filter, and the third 12-merged pixels 12G including the green color filter may be 1:1:2.
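The binned Bayer layouts of figs. 8A to 8E can be generated programmatically; in the sketch below (illustrative names; 0/1/2 encode R/G/B), the 1:1:2 red:blue:green ratio holds for any k × l block size:

```python
import numpy as np

def binned_bayer_cfa(height, width, k, l):
    """Color-filter map (0=R, 1=G, 2=B) for a Bayer arrangement of
    k x l same-color merged blocks (k rows by l columns per block)."""
    base = np.array([[0, 1],   # R G  -- one macro-cell of merged blocks
                     [1, 2]])  # G B
    rows = (np.arange(height) // k) % 2
    cols = (np.arange(width) // l) % 2
    return base[np.ix_(rows, cols)]
```

For the 3 × 4 (12-merged) pattern, one 6 × 8 tile contains 12 red, 12 blue, and 24 green unit pixels.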

Fig. 9A shows an example of re-stitching a 16-merged pixel pattern into a 4-merged pixel pattern.

Referring to fig. 1A to 2D and 9A, the signal processor 130 may include a first ISP 131, a re-stitching processor 132, a second ISP 133, a reducer 134, a third ISP 135, and an output interface 136. Alternatively, the signal processor 130a may include the first ISP 131 and the output interface 136, while the image processor 230 in the host chip 220a includes the re-stitching processor 132, the second ISP 133, the reducer 134, and the third ISP 135.

The signal processor 130 or 130a may receive the first image output from the N × M 16-merged pixels. The first ISP 131 may perform auto dark level compensation (ADLC) on the input first image. The first ISP 131 may perform bad pixel correction on the input first image. The first ISP 131 may perform lens shading correction on the input first image. The first ISP 131 may send the first image, which has undergone ADLC, bad pixel correction, and lens shading correction, to the re-stitching processor 132, or to the host chip 220a, which passes the first image through the second ISP 133 and provides it to the re-stitching processor 132 in the image processor 230.

The re-stitching processor 132 may convert the first image based on the N × M 16-merged pixels into a second image (e.g., a 4-merged pixel image) output from 2N × 2M 4-merged pixels by re-stitching the first image. In other words, the re-stitching processor 132 may re-stitch the first image output from the 16-merged pixels so that the first image may be converted into a second image (e.g., a 4-merged pixel image) output from the 4-merged pixels. Although the pixel array 120 is physically composed of a 16-merged pixel pattern, the re-stitching processor 132 may convert an image of the 16-merged pixel pattern into an image of a 4-merged pixel pattern through the re-stitching process.

As an example, the re-stitching processor 132 may convert the first image output from the N × M 16-merged pixels into the second image (e.g., a 4-merged pixel image) output from the 2N × 2M 4-merged pixels by re-stitching the first image once. The re-stitching processor 132 may transmit the second image (e.g., a 4-merged pixel image), which has been converted by the re-stitching process into an image output from 2N × 2M 4-merged pixels, to the second ISP 133. Since the re-stitching processor 132 converts the 16-merged pixel first image into the 4-merged pixel second image by re-stitching the first image once, the image can be enlarged by a factor of two without reducing resolution.

As an example, the re-stitching processor 132 may convert the first image output from the N × M 16-merged pixels into a third image (e.g., a single-pixel image) output from 4N × 4M Bayer pixels by re-stitching the first image twice. The re-stitching processor 132 may transmit the third image (e.g., a single-pixel image), which has been converted by the re-stitching process into an image output from 4N × 4M Bayer pixels, to the second ISP 133. Since the re-stitching processor 132 converts the 16-merged pixel first image into the single-pixel third image by re-stitching the first image twice, the image can be enlarged by a factor of four without reducing resolution.
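Re-stitching is a color-preserving permutation of unit-pixel signals: every signal captured under the 16-merged layout is reassigned to a position where the smaller-binned Bayer layout expects the same color. The patent does not spell out the exact pixel-to-pixel mapping, so the sketch below (hypothetical names) assumes a simple per-color raster-order assignment:

```python
import numpy as np

def binned_bayer_cfa(h, w, k):
    """Color map (0=R, 1=G, 2=B) of a Bayer layout of k x k merged blocks."""
    base = np.array([[0, 1], [1, 2]])
    return base[np.ix_((np.arange(h) // k) % 2, (np.arange(w) // k) % 2)]

def remosaic(img, k_src, k_dst):
    """Rearrange unit-pixel signals captured under a k_src-binned CFA so that
    they form a k_dst-binned Bayer CFA (e.g., 16-merged -> 4-merged -> single).
    Per color, source pixels in raster order fill destination slots in raster
    order; the real mapping is unspecified in the text, so this is illustrative."""
    h, w = img.shape
    src = binned_bayer_cfa(h, w, k_src)
    dst = binned_bayer_cfa(h, w, k_dst)
    out = np.empty_like(img)
    for color in range(3):
        # Same-color pixel counts always match (the 1:1:2 ratio is preserved).
        out[dst == color] = img[src == color]
    return out
```

Applying `remosaic(img, 4, 2)` converts a 16-merged arrangement to a 4-merged one; applying it again with `(2, 1)` yields the single-pixel Bayer arrangement.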

Fig. 9B shows an example of re-stitching a 4-merged pixel pattern into a Bayer pixel pattern.

Referring to fig. 1A, 1B, 2A, 2D, and 9B, the re-stitching processor 132 may convert the first image based on the N × M 4-merged pixels into a second image (e.g., a single-pixel image) output from 2N × 2M Bayer pixels by re-stitching the first image. In other words, the re-stitching processor 132 may re-stitch the first image output from the 4-merged pixels so that the first image may be converted into a second image (e.g., a single-pixel image) output from single pixels. Although the pixel array 120 is physically composed of a 4-merged pixel pattern, the re-stitching processor 132 may convert the image of the 4-merged pixel pattern into an image of the Bayer pixel pattern through the re-stitching process.

The re-stitching processor 132 may transmit the second image (e.g., a single-pixel image), which has been converted by the re-stitching process into an image output from 2N × 2M Bayer pixels, to the second ISP 133. Alternatively, the second ISP 133 may receive the first image, which has undergone ADLC, bad pixel correction, and lens shading correction, from the first ISP 131, further correct it, and provide it to the re-stitching processor 132, which then sends the second image to the third ISP 135. Since the re-stitching processor 132 converts the first image (e.g., a 4-merged pixel image) into the second image of Bayer pixels by re-stitching the first image once, the image may be enlarged by a factor of two without reducing resolution.

Fig. 9C shows an example of re-stitching a 4-merged pixel pattern into a 16-merged pixel pattern.

Referring to fig. 1A, 1B, 2A, 2D, and 9C, the re-stitching processor 132 may convert the first image based on the N × M 4-merged pixels into a second image (e.g., a 16-merged pixel image) output from 1/2N × 1/2M 16-merged pixels by re-stitching the first image. In other words, the re-stitching processor 132 may re-stitch the first image output from the 4-merged pixels (e.g., each merged pixel comprising a k × l matrix of the same color) so that the first image may be converted into the second image (e.g., a 16-merged pixel image) output from the 16-merged pixels (e.g., each merged pixel comprising a 2k × 2l matrix of the same color). Although the pixel array 120 is physically composed of a 4-merged pixel pattern, the re-stitching processor 132 may convert the image of the 4-merged pixel pattern into an image of a 16-merged pixel pattern through the re-stitching process.

The re-stitching processor 132 may send the second image (e.g., a 16-merged pixel image), which has been converted by the re-stitching process into an image output from 1/2N × 1/2M 16-merged pixels, to the second ISP 133. Alternatively, the second ISP 133 may receive the first image, which has undergone ADLC, bad pixel correction, and lens shading correction, from the first ISP 131, further correct it, and provide it to the re-stitching processor 132, which then sends the second image to the third ISP 135. Since the re-stitching processor 132 converts the first image (e.g., a 4-merged pixel image) into the 16-merged pixel second image by re-stitching the first image once, the image can be zoomed out to 1/2 scale without reducing resolution.

Referring back to fig. 1A, 1B, 2A, and 2D, the second ISP 133 or the third ISP 135 may perform bad pixel correction, lens shading correction, and noise cancellation on the input second image (e.g., a 4-merged pixel image). The second ISP 133 or the third ISP 135 may perform bad pixel correction, lens shading correction, and noise cancellation on the input third image (e.g., a single-pixel image). The second ISP 133 or the third ISP 135 may perform at least one of bad pixel correction, lens shading correction, and noise cancellation. The second ISP 133 or the third ISP 135 may transmit the second image or the third image, which has undergone at least one of bad pixel correction, lens shading correction, and noise cancellation, to the third ISP 135, or to the display device and/or the communication module, directly or through the reducer 134.

When the normal mode image, or the enlargement mode image obtained through the re-stitching process, is output to the host chip 220, the image output from the second ISP 133 may be input to the third ISP 135 without passing through the reducer 134, or the image output from the third ISP 135 may be output to the display device and/or the communication module directly, i.e., not through the reducer 134. Otherwise, the image may be input to the reducer 134 to be reduced by the operation of the reducer 134. The reducer 134 may reduce the data amount of the input image by decimating the image. When the host chip 220 or 220a sends an image to another electronic device, the decimation by the reducer 134 may increase the rate at which image data is sent to the host chip 220 and/or increase the FPS. The reducer 134 may transmit the decimated image to the third ISP 135, or a signal from the third ISP 135 may be output through the reducer 134.
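The reducer's decimation can be sketched as plain subsampling; the factor below is illustrative, since the text does not fix one:

```python
import numpy as np

def decimate(img, step=2):
    """Keep every `step`-th pixel along each axis; step=2 cuts the
    data amount to 1/4, raising the achievable FPS accordingly."""
    return img[::step, ::step]
```

Note that on raw color-filter data the step would need to be a multiple of the pattern period to preserve the color arrangement; on a processed (demosaiced) image any step works.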

The third ISP 135 may perform image processing that has not been performed by the second ISP 133. As an example, when bad pixel correction has been performed by the second ISP 133, the third ISP 135 may perform lens shading correction and noise cancellation. As an example, when bad pixel correction and lens shading correction have been performed by the second ISP 133, the third ISP 135 may perform noise cancellation. As an example, when lens shading correction and noise cancellation have been performed by the second ISP 133, the third ISP 135 may perform bad pixel correction. As an example, when lens shading correction has been performed by the second ISP 133, the third ISP 135 may perform bad pixel correction and noise cancellation. Alternatively, the third ISP 135 and the second ISP 133 may perform the same image processing.

The third ISP 135 may send the image that has undergone at least one of bad pixel correction, lens shading correction, and noise cancellation to the output interface 136. The output interface 136 may convert an input image into a data signal suitable for transmission and transmit the converted data signal to the signal output unit 150. The signal output unit 150 may transmit the data signal input from the output interface 136 to the host chip 220. The converted data signal may be transmitted to the host chip 220 and may also be stored in the memory 140 by the signal processor 130. Alternatively, the third ISP 135 may output to the display device and/or the communication module directly or through the reducer 134.

The host chips 220, 220a may convert a data signal input from the image sensor 100 into an image and display the image through a display. The host chip 220, 220a may transmit a data signal input from the image sensor 100 to another electronic device through the communication module.

The host chips 220, 220a may store the data signals input from the image sensor 100 in a separate memory. The host chip 220, 220a may load the data signal stored in the memory 140 and display the data signal through a display or transmit the data signal to another electronic device through a communication module.

Fig. 10 shows an example of enlarging an image by re-stitching a 4-merged pixel pattern into a Bayer pixel pattern.

Referring to fig. 1A, 1B, 2A, 2D, and 10, the pixel array 120 may be composed of 4-merged pixels in each of which 4 unit pixels including the same color filter are disposed adjacent to each other. The pixel array 120 may generate a first image (e.g., a 4-merged pixel image) having a resolution of N/2 × M/2 by cropping a portion of the full-resolution image. The first image (e.g., a 4-merged pixel image) may be a normal mode image or an enlargement mode image. The first image (e.g., a 4-merged pixel image) generated by the pixel array 120 may be sent to the signal processor 130 or 130a.

The first ISP 131 may perform at least one of ADLC, bad pixel correction, and lens shading correction on the input first image (e.g., a 4-merged pixel image). The first ISP 131 may send the first image that has been image-processed (e.g., a 4-merged pixel image) to the re-stitching processor 132.

The re-stitching processor 132 may convert the first image having a resolution of N/2 × M/2 (e.g., the 4-merged pixel first image) into a second image (e.g., a single-pixel image) of Bayer pixels having a resolution of N × M by re-stitching the first image. Since the re-stitching processor 132 converts the first image (e.g., a 4-merged pixel image) into the second image (e.g., a single-pixel image) of Bayer pixels by re-stitching the first image once, the image can be enlarged by a factor of two without reducing resolution. The image that has been enlarged by a factor of two by the re-stitching processor 132 may be processed through the second ISP 133, the third ISP 135, the output interface 136, and the signal output unit 150, and transmitted to the host chip 220. Alternatively, the image that has been processed by the first and second ISPs 131 and 133 may be enlarged by a factor of two by the re-stitching processor 132, processed by the third ISP 135, and output to a display device or a communication module.

Fig. 11A and 11B show an example of reducing an image by synthesizing a 4-merged pixel pattern into a 1/2N × 1/2M Bayer pixel pattern.

Referring to fig. 1A, 1B, 11A, and 11B, the pixel array 120 may be composed of N × M merged pixels (e.g., 4-merged pixels) in each of which unit pixels including the same color filter (e.g., 4 unit pixels) are adjacent to each other. The pixel array 120 may generate a first image (e.g., a 4-merged pixel image) having an N × M resolution by cropping a portion of a full-resolution image. The first image (e.g., a 4-merged pixel image) may be a normal mode image or a reduced mode image. The first image (e.g., a 4-merged pixel image) generated by the pixel array 120 may be sent to the signal processor 130 or 130a.

The first ISP 131 may perform at least one of ADLC, bad pixel correction, and lens shading correction on the input first image (e.g., a 4-merged pixel image). The first ISP 131 may send the image-processed first image (e.g., a 4-merged pixel image) to the re-stitching processor 132.

The re-stitching processor 132 may convert the first image having the N × M resolution (e.g., the 4-merged pixel first image) into a second image (e.g., a single-pixel image) of Bayer pixels having a 1/2N × 1/2M resolution by synthesizing the first image. In the arrangement structure of the N × M merged pixels (e.g., 4-merged pixels), the ratio of the red pixels including the red color filter, the blue pixels including the blue color filter, and the green pixels including the green color filter may be 1:1:2.

As an example, the re-stitching processor 132 may extract red image data from four adjacent red merged pixels (4-merged pixels). The re-stitching processor 132 may merge the extracted four pieces of red image data into a single red image. The re-stitching processor 132 may extract blue image data from four adjacent blue merged pixels (4-merged pixels). The re-stitching processor 132 may merge the extracted four pieces of blue image data into a single blue image. The re-stitching processor 132 may extract green image data from four adjacent green merged pixels (4-merged pixels). The re-stitching processor 132 may merge the extracted four pieces of green image data into a single green image.

The re-stitching processor 132 may extract red image data from each of four adjacent 4-merged pixels and merge the extracted four pieces of red image data into one piece of red data. In the same manner, the re-stitching processor 132 may extract blue image data from each of four adjacent 4-merged pixels and merge the extracted four pieces of blue image data into one piece of blue data. In the same manner, the re-stitching processor 132 may extract green image data from each of four adjacent 4-merged pixels and merge the extracted four pieces of green image data into one piece of green data.

Since the re-stitching processor 132 converts the first image (e.g., a 4-merged pixel image) into the second image of Bayer pixels (e.g., a single-pixel image) by synthesizing the first image once, the image may be reduced to 1/2 scale without reducing resolution. The image that has been reduced to 1/2 scale by the re-stitching processor 132 may be processed by the second ISP 133, the reducer 134, the third ISP 135, the output interface 136, and the signal output unit 150, and transmitted to the host chip 220.
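The synthesis above amounts to collapsing each same-color block into one sample. How the four signals are merged is not specified in the text, so the sketch below (illustrative name) assumes averaging:

```python
import numpy as np

def synthesize_bayer(img, k=2):
    """Merge each k x k same-color block of a k-binned image into one output
    sample (here by averaging), giving a Bayer image at 1/k scale per axis."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

Each 2 × 2 block of the 4-merged input collapses to one Bayer sample, halving the resolution along each axis as in Figs. 11A and 11B.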

Alternatively, the image that has been processed by the first ISP 131 may be provided to the host chip 220a. The image may then be processed by the second ISP 133, reduced to 1/2 scale by the re-stitching processor 132 in the image processor 230, processed by the third ISP 135, and output to a display device, or output to a communication module through the reducer 134.

Fig. 12 shows an example in which an image sensor having a 4-merged pixel structure generates a normal mode image, an enlargement mode image, and a reduction mode image. Referring to fig. 1A, 1B and 12, the pixel array 120 may be composed of N × M combined pixels in each of which N × M unit pixels including the same color filter are disposed adjacent to each other.

As an example, the pixel array 120 may include N × M binning pixels corresponding to a resolution of 12MP (4,000 × 3,000). When all the merged pixels of the pixel array 120 are driven to generate a full resolution image, a 12MP image may be generated.

As an example, pixel array 120 may include N × M binning pixels corresponding to a resolution of 48 MP. When all of the merged pixels of pixel array 120 are driven to generate a full resolution image, a 48MP image may be generated.

The merged pixels may have any desired resolution. For example, the pixel array 120 may include N × M binning pixels corresponding to resolutions of 3MP, 6MP, 24MP, 48MP, 96MP, 108MP, 1200MP, 2400MP, 4800MP, or 9600MP. When all the merged pixels of the pixel array 120 are driven to generate a full-resolution image, an image of 3MP, 6MP, 24MP, 48MP, 96MP, 108MP, 1200MP, 2400MP, 4800MP, 9600MP, or the like may be generated.

Generation of normal mode image

For example, the pixel array 120 may generate a 12MP normal mode image by cropping 1/4 of the 48MP full resolution image based on the center of the full resolution image. Among all the 4-binning pixels of the pixel array 120, the center-based 1/4 of the 4-binning pixels may be driven to generate the 12MP normal mode image.

For example, the pixel array 120 may send the 48MP full resolution image to the signal processor 130 or 130a. The signal processor 130 may generate a 12MP normal mode image by center-cropping 1/4 of the 48MP full resolution image.

For example, the pixel array 120 may send the 48MP full resolution image to the signal processor 130 or 130a. The signal processor 130 may center-crop 1/4 of the 48MP full resolution image. The signal processor 130 or 130a may generate a 48MP normal mode image by rearranging the cropped image to correspond to all pixels.
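The center crop described above can be sketched as follows. This is a minimal illustration: the function name and the even-offset alignment (to preserve the Bayer color-filter phase) are choices of this sketch rather than details taken from the disclosure.

```python
import numpy as np

def center_crop(img, area_fraction):
    """Crop the central region whose area is `area_fraction` of the
    input; e.g., 1/4 keeps half the width and half the height."""
    h, w = img.shape[:2]
    scale = area_fraction ** 0.5
    ch, cw = int(round(h * scale)), int(round(w * scale))
    # Snap the top-left corner to even coordinates so a Bayer mosaic
    # keeps its color-filter phase after cropping.
    top, left = (h - ch) // 2, (w - cw) // 2
    top -= top % 2
    left -= left % 2
    return img[top:top + ch, left:left + cw]
```

Cropping 1/4 of a 48MP (8,000 × 6,000 unit-pixel) mosaic this way yields a 12MP (4,000 × 3,000) region, matching the normal mode example above.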

Generation of magnified mode images

For example, the pixel array 120 may generate an enlargement mode image by cropping 1/16 of the 48MP full resolution image based on the center of the full resolution image. Among all the N × M binning pixels of the pixel array 120, the center-based 1/16 of the binning pixels may be driven to generate the enlargement mode image.

For example, the pixel array 120 may send the 48MP full resolution image to the signal processor 130. The signal processor 130 may generate a 3MP (2,000 × 1,500) enlargement mode image by center-cropping 1/16 of the 48MP full resolution image.

The re-stitching processor 132 may convert the 3MP enlargement mode image into a 12MP Bayer pixel image (e.g., a single-pixel image) by re-stitching it. For example, an image whose binning pixels each include a k × l matrix of the same color may be re-stitched into an image whose pixels each include a p × q matrix of the same color, where p is a non-negative integer less than k and q is a non-negative integer less than l, the p × q matrices being arranged in a Bayer pattern. Since the re-stitching processor 132 converts the 3MP enlargement mode image into the 12MP Bayer pixel image (e.g., a single-pixel image) by re-stitching it once, the image can be enlarged twice without reducing resolution. The image that has been enlarged twice by the re-stitching processor 132 may be processed by the second ISP133, the third ISP135, the output interface 136, and the signal output unit 150, and transmitted to the host chip 220.

For example, the pixel array 120 may send the 48MP full resolution image to the signal processor 130 or 130a. The signal processor 130 or 130a may center-crop 1/16 of the 48MP full resolution image. The signal processor 130 or 130a may generate a 48MP enlargement mode image by rearranging the cropped image to correspond to all pixels. First, the re-stitching processor 132 may convert the 3MP enlargement mode image into a 12MP Bayer pixel image (e.g., a single-pixel image) by re-stitching it. Second, the re-stitching processor 132 may convert the 12MP Bayer pixel image into a 48MP Bayer pixel image (e.g., a single-pixel image) by re-stitching the 12MP image. For example, the 48MP image may have pixels that include an r × s matrix of the same color, where r is a non-negative integer less than p and s is a non-negative integer less than q, the r × s matrices being arranged in a Bayer pattern. In this way, the image can be enlarged two to four times without reducing resolution. The image enlarged two to four times by the re-stitching processor 132 may be processed by the second ISP133, the third ISP135, the output interface 136, and the signal output unit 150, and transmitted to the host chip 220.
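The re-stitching itself amounts to a remosaic: within each 4 × 4 tile of the cropped quad-Bayer (4-binning) region, the sixteen samples are permuted so the colors land on standard Bayer sites, giving a full-resolution Bayer image without interpolation. The sketch below assumes a GRBG layout, and the particular sample-to-site mapping is one valid choice made for this sketch, not the mapping disclosed here.

```python
import numpy as np

# Map each sample position of a GRBG quad-Bayer 4x4 tile to a position
# in a standard GRBG Bayer 4x4 tile. Pure reshuffle: no values are
# interpolated, so resolution is preserved ("re-stitching").
_QUAD_TO_BAYER = {
    # G samples: top-left 2x2 block -> G sites in the upper rows
    (0, 0): (0, 0), (0, 1): (0, 2), (1, 0): (1, 1), (1, 1): (1, 3),
    # R samples: top-right 2x2 block -> the four R sites
    (0, 2): (0, 1), (0, 3): (0, 3), (1, 2): (2, 1), (1, 3): (2, 3),
    # B samples: bottom-left 2x2 block -> the four B sites
    (2, 0): (1, 0), (2, 1): (1, 2), (3, 0): (3, 0), (3, 1): (3, 2),
    # G samples: bottom-right 2x2 block -> G sites in the lower rows
    (2, 2): (2, 0), (2, 3): (2, 2), (3, 2): (3, 1), (3, 3): (3, 3),
}

def remosaic_quad_to_bayer(raw):
    """Re-stitch a GRBG quad-Bayer mosaic into a standard GRBG Bayer
    mosaic of the same size by permuting samples within 4x4 tiles."""
    h, w = raw.shape
    assert h % 4 == 0 and w % 4 == 0
    out = np.empty_like(raw)
    for (sy, sx), (dy, dx) in _QUAD_TO_BAYER.items():
        out[dy::4, dx::4] = raw[sy::4, sx::4]
    return out
```

Because the operation is a pure permutation, every sensor sample survives unchanged, which is why the enlargement does not reduce resolution.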

Alternatively, the image enlarged two to four times by the re-stitching processor 132 in the image processor 230 may be output to a display or a communication module.

Generation of reduced mode image

For example, the pixel array 120 may generate a 48MP reduced mode image by driving all N × M binning pixels of the pixel array 120. The reduced mode image may be the same as the full resolution image of the pixel array 120. Since the reduced mode image of 48MP has a large data size, it may be difficult to transmit data.

For example, the re-stitching processor 132 may scale the 48MP reduced mode image down by 1/2 by synthesizing it. The synthesis by the re-stitching processor 132 can reduce the data size of the reduced mode image to the same level (12MP) as that of the normal mode image.

For example, the reducer 134 may extract the 48MP reduced mode image. The reducer 134 may reduce the data size of the reduced mode image to the same level (12MP) as that of the normal mode image. The data size is not limited thereto, and the signal processor 130 or the image processor 230 may output the 48MP reduced mode image without reducing its size.

The signal processor 130 may scale the reduced mode image down by 1/2 without reducing resolution. The image that has been reduced by 1/2 by the re-stitching processor 132 may be processed by the second ISP133, the third ISP135, the output interface 136, and the signal output unit 150 and transmitted to the host chip 220.
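The 1/2 reduction of a full-resolution Bayer image can likewise be sketched as same-color averaging: for each color phase, the four samples inside every 4 × 4 region are combined into one, halving both dimensions while keeping the Bayer pattern. This is a minimal illustration; the function name and plain averaging (rather than, say, a weighted synthesis) are assumptions of this sketch.

```python
import numpy as np

def bayer_downscale_2x(raw):
    """Halve a Bayer mosaic while preserving the Bayer pattern by
    averaging, for each of the four color phases, the four same-color
    samples inside every 4x4 input region."""
    h, w = raw.shape
    assert h % 4 == 0 and w % 4 == 0
    out = np.empty((h // 2, w // 2), dtype=np.float64)
    for p in range(2):
        for q in range(2):
            # One color plane of the mosaic (every other row/column).
            phase = raw[p::2, q::2].astype(np.float64)
            ph, pw = phase.shape
            blocks = phase.reshape(ph // 2, 2, pw // 2, 2)
            out[p::2, q::2] = blocks.mean(axis=(1, 3))
    return out
```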

Alternatively, images that have been processed by the first and second ISPs 131 and 133 may be scaled down by 1/2 by the re-stitching processor 132 in the image processor 230, then processed by the third ISP135 and output to a display device or a communication module, either directly or through the reducer 134.

As shown in fig. 12, the image sensor 100 and the electronic device 10 may generate a normal mode image, an enlargement mode image, or a reduction mode image according to a zoom mode signal input through the user interface 210. The image sensor 100 and the electronic device 10 may generate a normal mode image, an enlargement mode image, or a reduction mode image without a normal mode lens, an enlargement mode lens, and a reduction mode lens. The image sensor 100 and the electronic device 10 can generate a normal mode image, an enlargement mode image, or a reduction mode image without reducing the resolution.

As an example, the image sensor 100 and the electronic device 10 may generate a normal mode image, an enlargement mode image, or a reduction mode image, each having the same 12MP data size, from the 48MP full resolution image. As an example, the image sensor 100 and the electronic device 10 may generate a normal mode image, an enlargement mode image, or a reduction mode image, each having the same 48MP data size, from the 48MP full resolution image. Thus, all displayed images may have the same resolution regardless of the mode.
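The constant-output-size property can be checked with one line of arithmetic per mode: each mode pairs a center-crop fraction with a pixel-count factor from re-stitching (which splits binned pixels) or binning (which merges them) so the product stays at 12MP. The numbers follow the 48MP example above; `output_mp` is an illustrative helper, not a disclosed API.

```python
# Arithmetic check of the constant 12MP output size in the 48MP example.
FULL_MP = 48

def output_mp(crop_fraction, pixel_factor):
    """Pixels out = pixels kept by the crop, times the change in pixel
    count from re-stitching (>1) or binning/synthesis (<1)."""
    return FULL_MP * crop_fraction * pixel_factor

normal  = output_mp(1 / 4,  1)      # crop 1/4 of the binned image
enlarge = output_mp(1 / 16, 4)      # crop 1/16, re-stitch 4-binning to Bayer
reduce_ = output_mp(1,      1 / 4)  # full frame, synthesize 4 samples to 1
```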

Fig. 13 shows an example in which an image sensor having an N × M binning pixel structure magnifies an image by one, two, and four times without increasing the size of the image file. Referring to Figs. 1A, 1B, 2, and 13, the pixel array 120 may be composed of N × M binning pixels (e.g., 16-binning pixels), in each of which unit pixels including the same color filter (e.g., a 4 × 4 matrix for the 16-binning structure) are adjacent to each other.

As an example, the pixel array 120 may include 16-binning pixels (each combining 16 unit pixels) corresponding to a resolution of 108MP. When all the 16-binning pixels of the pixel array 120 are driven to generate a full resolution image, a 108MP image may be generated.

Generation of normal mode image

The pixel array 120 may send the 108MP full resolution image to the signal processor 130. The signal processor 130 may generate a 6.75MP normal mode image by binning the pixels of the 108MP full resolution image to 1/16 of their number. The data size is not limited thereto, and the signal processor 130 may generate a 108MP normal mode image without changing the size of the 108MP full resolution image.

The signal processor 130 may perform image processing on the full resolution image using the first ISP131 and then generate a normal mode image by binning the pixels of the full resolution image to 1/16 of their number. The signal processor 130 may perform image processing on the binned normal mode image using the second ISP133 and the third ISP135 and transmit the processed normal mode image to the output interface 136. The output interface 136 may convert the normal mode image into a data signal suitable for transmission and transmit the converted data signal to the signal output unit 150. The signal output unit 150 may transmit the data signal input from the output interface 136 to the host chip 220. The converted data signal may be transmitted to the host chip 220 and may also be stored in the memory 140 by the signal processor 130.

Alternatively, the signal processor 130a may perform image processing on the full resolution image using the first ISP131, generate a normal mode image by binning the pixels of the full resolution image to 1/16 of their number, and output it to the host chip 220a. The image processor 230 may perform image processing on the binned normal mode image using the second ISP133 and the third ISP135, and may transmit the processed normal mode data signal to a display device or a communication module.

Generation of 2 × magnification mode image

The pixel array 120 may generate an enlargement mode image by cropping 1/4 of the 108MP full resolution image based on the center of the full resolution image. Among all the 16-binning pixels of the pixel array 120, the center-based 1/4 of the 16-binning pixels may be driven to generate the enlargement mode image. Subsequently, the signal processor 130 may generate a 2 × magnification mode image. The re-stitching processor 132 may convert the enlargement mode image into a 4-binning image by re-stitching it. As a result, a 16-binning enlargement mode image may be generated from the 108MP full resolution image, and a 6.75MP 2 × magnification mode image may then be generated by converting the 16-binning enlargement mode image into a 4-binning image. The data size is not limited thereto, and the signal processor 130 may generate a 108MP 2 × magnification mode image without changing the size of the 108MP full resolution image.

The enlargement mode image may be generated by cropping 1/4 of the 108MP full resolution image based on the center of the full resolution image and may then be image-processed by the first ISP131. The re-stitching processor 132 may then re-stitch the enlargement mode image. Subsequently, the re-stitched 2 × magnification mode image may be image-processed by the second ISP133 and the third ISP135 and transmitted to the output interface 136. The output interface 136 may convert the processed 2 × magnification mode image into a data signal suitable for transmission and transmit the converted data signal to the signal output unit 150. The signal output unit 150 may transmit the data signal input from the output interface 136 to the host chip 220. The converted data signal may be transmitted to the host chip 220 or may be stored in the memory 140 by the signal processor 130.

Alternatively, the output of the first ISP131 of the signal processor 130a may be provided to the host chip 220 a. The image processor 230 may generate a 2 × magnification mode image using the second ISP133, the re-stitching processor 132, and the third ISP135, and transmit a 2 × magnification mode data signal to the display device or the communication module.

Generation of 4 × magnification mode image

The pixel array 120 may generate an enlargement mode image by cropping 1/16 of the 108MP full resolution image based on the center of the full resolution image. Among all the 16-binning pixels of the pixel array 120, the center-based 1/16 of the 16-binning pixels may be driven to generate the enlargement mode image. Subsequently, the signal processor 130 may generate a 4 × magnification mode image. The re-stitching processor 132 may convert the enlargement mode image into a Bayer pixel image (single-pixel image) by re-stitching it. As a result, a 16-binning enlargement mode image may be generated from the 108MP full resolution image, and a 6.75MP 4 × magnification mode image may then be generated by converting the 16-binning enlargement mode image into a Bayer pixel image (single-pixel image). The data size is not limited thereto, and the signal processor 130 may generate a 108MP 4 × magnification mode image without changing the size of the 108MP full resolution image.

The enlargement mode image may be generated by cropping 1/16 of the 108MP full resolution image based on the center of the full resolution image and may then be image-processed by the first ISP131. The re-stitching processor 132 may then re-stitch the enlargement mode image. The re-stitched 4 × magnification mode image may then be image-processed by the second ISP133 and the third ISP135 and sent to the output interface 136. The output interface 136 may convert the processed 4 × magnification mode image into a data signal suitable for transmission and transmit the converted data signal to the signal output unit 150. The signal output unit 150 may transmit the data signal input from the output interface 136 to the host chip 220. The converted data signal may be transmitted to the host chip 220 or may be stored in the memory 140 by the signal processor 130.

Alternatively, the output of the first ISP131 of the signal processor 130a may be provided to the host chip 220a. The image processor 230 may generate a 4 × magnification mode image using the second ISP133, the re-stitching processor 132, and the third ISP135, and transmit a 4 × magnification mode data signal to the display device or the communication module.

As shown in fig. 13, the image sensor 100 and the electronic device 10 or 10' may generate a normal mode image or a magnification mode image according to a zoom mode signal input through the user interface 210 or the illuminance sensor 160. The image sensor 100 and the electronic device 10 or 10' may generate a normal mode image or a magnification mode image without a normal mode lens and a magnification mode lens. The image sensor 100 and the electronic device 10 or 10' may generate a normal mode image or a magnification mode image without reducing resolution. The image sensor 100 and the electronic device 10 or 10' may generate a normal mode image, a 2 × magnification mode image, or a 4 × magnification mode image, each having the same 6.75MP data size, from the 108MP full resolution image.
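The same constant-size arithmetic holds for the 108MP example of fig. 13: each mode pairs a center-crop fraction with a change in binning level so the output stays at 6.75MP. A minimal self-check follows; the helper name and factoring are illustrative, not a disclosed API.

```python
# Arithmetic check of the constant 6.75MP output size in the 108MP example.
FULL_MP = 108

def output_mp(crop_fraction, pixel_factor):
    """Pixels out = pixels kept by the crop, times the change in pixel
    count: 1/16 = bin 16 samples to 1; 1/4 = re-stitch 16-binning to
    4-binning and bin each 2x2 group; 1 = full Bayer re-stitch."""
    return FULL_MP * crop_fraction * pixel_factor

normal = output_mp(1,      1 / 16)  # no crop, 16-to-1 binning
zoom2x = output_mp(1 / 4,  1 / 4)   # crop 1/4, output on the 4-binning grid
zoom4x = output_mp(1 / 16, 1)       # crop 1/16, full single-pixel Bayer
```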

Embodiments may be described in terms of functional blocks, units, modules, and/or methods and are illustrated in the accompanying drawings. Those skilled in the art will appreciate that the blocks, units, modules, and/or methods are physically implemented by electronic (or optical) circuits, such as logic circuits, discrete components, microprocessors, hardwired circuits, memory elements, wired connections, and so forth, which may be formed using semiconductor-based or other manufacturing techniques. In the case of blocks, units, modules and/or methods implemented by a microprocessor or the like, they may be programmed using software (e.g., microcode) to perform the various functions discussed herein, and may optionally be driven by firmware and/or software. Alternatively, each block, unit, module and/or method may be implemented by dedicated hardware, or as a combination of dedicated hardware and a processor (e.g., one or more programmed microprocessors and associated circuitry) that performs certain functions. Moreover, each block, unit, and/or module of an embodiment may be physically separated into two or more interactive and discrete blocks, units, and/or modules without departing from the scope of the present disclosure. Furthermore, the blocks, units and/or modules of an embodiment may be physically combined into more complex blocks, units and/or modules without departing from the scope of the present disclosure.

According to an example embodiment, a normal mode image or a zoom-in mode image according to a zoom mode signal input through a user interface may be generated. According to example embodiments, a normal mode image or a magnified mode image may be generated without a normal mode lens and a magnified mode lens. According to example embodiments, a normal mode image or an enlarged mode image may be generated without reducing resolution.

According to example embodiments, a normal mode image, a double magnification mode image, or a quadruple magnification mode image having the same data size may be generated.

According to example embodiments, a normal mode image, an enlargement mode image, or a reduction mode image having the same data size may be generated.

Example embodiments relate to providing an image sensor capable of realizing enlargement and reduction without using a lens (e.g., electronically realized), a driving method of the image sensor, and an electronic device including the image sensor. In addition, example embodiments are directed to providing an image sensor (e.g., using a single image sensor) capable of implementing enlargement and reduction without employing a plurality of image sensors, a method of driving the image sensor, and an electronic device including the image sensor. Further, example embodiments relate to providing an image sensor capable of realizing enlargement and reduction without reducing resolution, a method of driving the image sensor, and an electronic device including the image sensor.

Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, features, characteristics and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics and/or elements described in connection with other embodiments unless specifically stated otherwise, as will be apparent to one of ordinary skill in the art at the time of filing the present application. Accordingly, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as set forth in the appended claims.
