Image generation method and device, electronic equipment and computer-readable storage medium

Document No.: 1864967  Publication date: 2021-11-19

Note: This technique, "Image generation method and device, electronic equipment and computer-readable storage medium", was created by Liu Congyue on 2021-07-01. Its main content is as follows: The application relates to an image generation method, an image generation apparatus, a computer device, and a storage medium. The method comprises: in a first sharpness mode, obtaining a first merged image from a first pixel value read out by merging a plurality of panchromatic pixels corresponding to the panchromatic filter in the filter set and a second pixel value read out by merging a plurality of color pixels corresponding to the color filter; and merging a plurality of panchromatic pixels in a first diagonal direction in the first merged image, and merging a plurality of color pixels in a second diagonal direction, to obtain a first target image, the first diagonal direction being different from the second diagonal direction. The method reduces the power consumption of image generation.

1. An image generation method applied to an image sensor, wherein the image sensor comprises a filter array and a pixel array, the filter array comprises a minimal repeating unit, the minimal repeating unit comprises a plurality of filter sets, each filter set comprises a color filter and a panchromatic filter, each color filter has a narrower spectral response than the panchromatic filter, and each color filter and each panchromatic filter comprise 4 sub-filters; the pixel array comprises a plurality of panchromatic pixels and a plurality of color pixels, each panchromatic pixel corresponds to one sub-filter of the panchromatic filter, and each color pixel corresponds to one sub-filter of the color filter;

the method comprises the following steps:

in a first sharpness mode, obtaining a first merged image from a first pixel value read out by merging a plurality of panchromatic pixels corresponding to the panchromatic filter in the filter set and a second pixel value read out by merging a plurality of color pixels corresponding to the color filter;

merging a plurality of panchromatic pixels in a first diagonal direction in the first merged image, and merging a plurality of color pixels in a second diagonal direction to obtain a first target image; the first diagonal direction is different from the second diagonal direction.

2. The method of claim 1, wherein merging the plurality of panchromatic pixels in a first diagonal direction and the plurality of color pixels in a second diagonal direction in the first merged image to obtain a first target image comprises:

merging a plurality of panchromatic pixels in a first diagonal direction in the first merged image to obtain a panchromatic image;

merging the plurality of color pixels in the second diagonal direction to obtain a color image;

a first target image is generated from the panchromatic image and the color image.

3. The method of claim 2, wherein generating a first target image from the panchromatic image and the color image comprises:

traversing the pixel positions in a first target image to be generated; obtaining the pixel at each pixel position from the panchromatic pixel corresponding to that position in the panchromatic image and the color pixel corresponding to that position in the color image; and obtaining the first target image once the pixels at all pixel positions have been obtained.

4. The method of claim 1, further comprising:

in a second sharpness mode, interpolating the color pixels in an original image into panchromatic pixels using texture information of the color pixels in the original image, to obtain a full-size panchromatic channel image; the pixels in the full-size panchromatic channel image are all panchromatic pixels;

generating a second target image based on the full-size panchromatic channel image and the original image; the sharpness corresponding to the second sharpness mode is higher than the sharpness corresponding to the first sharpness mode.

5. The method of claim 4, wherein interpolating the color pixels in the original image into panchromatic pixels using texture information of the color pixels in the original image to obtain a full-size panchromatic channel image comprises:

traversing each pixel in the original image that corresponds to a color pixel;

when the current pixel of the original image is determined to be a color pixel, determining texture information of the color pixel based on the pixels within a preset range containing the color pixel;

and obtaining an interpolation weight corresponding to the color pixel based on the texture information of the color pixel, and interpolating the color pixel into a panchromatic pixel according to the interpolation weight, the full-size panchromatic channel image being obtained when the traversal is completed.

6. The method of claim 5, wherein determining the texture information of the color pixel based on the pixels within a preset range containing the color pixel comprises:

determining the variance of the pixels within a preset range containing the color pixel;

determining that the color pixel is in a flat region if the variance is smaller than a preset threshold;

and determining that the color pixel is in a texture region if the variance is greater than or equal to the preset threshold.
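The flat/texture decision of claim 6 can be sketched as follows. This is only an illustration: the window size and the variance threshold are assumptions, since the claims specify neither the extent of the "preset range" nor the value of the "preset threshold".

```python
import numpy as np

def classify_region(window: np.ndarray, threshold: float = 25.0) -> str:
    """Classify a color pixel's neighborhood as flat or textured.

    `window` stands in for the preset range of pixel values containing
    the color pixel (e.g. a 3x3 neighborhood), and `threshold` for the
    preset variance threshold; both are illustrative assumptions.
    """
    return "flat" if np.var(window) < threshold else "texture"

# A nearly uniform patch has a tiny variance; a patch crossing an edge
# has a large one.
flat = np.array([[100, 101, 99], [100, 100, 101], [99, 100, 100]])
edge = np.array([[10, 10, 200], [10, 10, 200], [10, 10, 200]])
print(classify_region(flat))  # flat
print(classify_region(edge))  # texture
```

The choice of statistic matters only up to monotonicity; standard deviation with a correspondingly scaled threshold would classify identically.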

7. The method according to claim 6, wherein obtaining the interpolation weight corresponding to the color pixel based on the texture information of the color pixel comprises:

when the color pixel is in a flat region, determining a first pixel mean of the panchromatic pixels within a preset range containing the color pixel and a second pixel mean of the color pixels within the preset range;

and obtaining the interpolation weight corresponding to the color pixel based on the ratio between the first pixel mean and the second pixel mean.
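A minimal sketch of the flat-region weight in claim 7, assuming the "proportional relation" is simply the ratio of the panchromatic mean to the color mean and that the interpolated panchromatic value is that weight times the color pixel value; the claims do not fix either choice.

```python
import numpy as np

def flat_region_weight(pan_values, color_values) -> float:
    """Interpolation weight for a color pixel in a flat region.

    Assumes weight = (mean of panchromatic pixels in the preset range)
    / (mean of color pixels in the same range); this is one reading of
    the claimed 'proportional relation', not the only one.
    """
    return float(np.mean(pan_values)) / float(np.mean(color_values))

pan = [200.0, 204.0, 196.0, 200.0]   # panchromatic neighbors (illustrative)
color = [98.0, 100.0, 102.0, 100.0]  # color neighbors (illustrative)
w = flat_region_weight(pan, color)
print(w)          # 2.0
print(w * 100.0)  # 200.0: interpolated panchromatic value for a color pixel of 100
```

In a flat region this ratio captures the local brightness gain of the wide-band channel over the narrow-band channel, which is why a single scalar weight suffices there.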

8. The method according to claim 6, wherein obtaining the interpolation weight corresponding to the color pixel based on the texture information of the color pixel comprises:

determining a target texture direction of the color pixel if the color pixel is in a texture region;

and obtaining the interpolation weight corresponding to the color pixel based on the associated pixels of the color pixel in the target texture direction.

9. The method of claim 8, wherein determining the target texture direction of the color pixel if the color pixel is in a texture region comprises:

when the color pixel is in a texture region, determining the panchromatic associated pixels of the color pixel in each texture direction;

determining a first correlation value for the color pixel in each texture direction based on the panchromatic associated pixels in that texture direction;

and taking the texture direction whose first correlation value meets a first correlation condition as the target texture direction of the color pixel.

10. The method of claim 9, further comprising:

when none of the first correlation values of the color pixel in the texture directions meets the first correlation condition, determining the panchromatic associated pixels and the color associated pixels of the color pixel in each texture direction;

determining a second correlation value for the color pixel in each texture direction based on the panchromatic associated pixels and the color associated pixels in that texture direction;

and taking the texture direction whose second correlation value meets a second correlation condition as the target texture direction of the color pixel.

11. The method according to claim 9 or 10, wherein obtaining the interpolation weight corresponding to the color pixel based on the associated pixels of the color pixel in the target texture direction comprises:

and obtaining the interpolation weight corresponding to the color pixel according to the proportion of the color pixel among the panchromatic associated pixels in the target texture direction.

12. The method of claim 1, further comprising:

in a third sharpness mode, obtaining a first merged image from a first pixel value read out by merging a plurality of panchromatic pixels corresponding to the same panchromatic filter in the filter set and a second pixel value read out by merging a plurality of color pixels corresponding to the same color filter; the color pixels comprise first color-sensitive pixels, second color-sensitive pixels, and third color-sensitive pixels;

interpolating the panchromatic pixels, the second color-sensitive pixels, and the third color-sensitive pixels in the first merged image into first color-sensitive pixels using their texture information, to obtain a fully-arranged first channel image; all pixels in the fully-arranged first channel image are first color-sensitive pixels;

interpolating the first merged image according to the texture information of the fully-arranged first channel image and the second and third color-sensitive pixels in the first merged image, to obtain a partially-arranged second channel image and a partially-arranged third channel image; the partially-arranged second channel image corresponds to the second color-sensitive pixels, and the partially-arranged third channel image corresponds to the third color-sensitive pixels;

generating a third target image based on the fully-arranged first channel image, the partially-arranged second channel image, and the partially-arranged third channel image; the sharpness corresponding to the third sharpness mode is higher than the sharpness corresponding to the first sharpness mode.

13. An image generation device applied to an image sensor, wherein the image sensor comprises a filter array and a pixel array, the filter array comprises a minimal repeating unit, the minimal repeating unit comprises a plurality of filter sets, each filter set comprises a color filter and a panchromatic filter, each color filter has a narrower spectral response than the panchromatic filter, and each color filter and each panchromatic filter comprise 4 sub-filters; the pixel array comprises a plurality of panchromatic pixels and a plurality of color pixels, each panchromatic pixel corresponds to one sub-filter of the panchromatic filter, and each color pixel corresponds to one sub-filter of the color filter;

the device comprises:

a first merging module, configured to, in a first sharpness mode, obtain a first merged image from a first pixel value read out by merging a plurality of panchromatic pixels corresponding to the panchromatic filter in the filter set and a second pixel value read out by merging a plurality of color pixels corresponding to the color filter;

a generating module, configured to merge a plurality of panchromatic pixels in a first diagonal direction in the first merged image and merge a plurality of color pixels in a second diagonal direction, to obtain a first target image; the first diagonal direction is different from the second diagonal direction.

14. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the method according to any of claims 1 to 12.

15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.

Technical Field

The present application relates to the field of computer technologies, and in particular, to an image generation method and apparatus, an electronic device, and a computer-readable storage medium.

Background

With the development of computer technology, most electronic devices, such as mobile phones, are equipped with cameras to provide a photographing function. At present, when a terminal shoots through a camera, the output mode of the image is generally fixed and cannot flexibly adapt to different scenes, so the power consumption of image processing is high.

Disclosure of Invention

The embodiments of the present application provide an image generation method, an image generation apparatus, an electronic device, and a computer-readable storage medium, which can reduce the power consumption of image processing.

An image generation method applied to an image sensor, the image sensor comprising a filter array and a pixel array, the filter array comprising a minimal repeating unit, the minimal repeating unit comprising a plurality of filter sets, each filter set comprising a color filter and a panchromatic filter, the color filter having a narrower spectral response than the panchromatic filter, and the color filter and the panchromatic filter each comprising 4 sub-filters; the pixel array comprises a plurality of panchromatic pixels and a plurality of color pixels, each panchromatic pixel corresponds to one sub-filter of the panchromatic filter, and each color pixel corresponds to one sub-filter of the color filter;

the method comprises the following steps:

in a first sharpness mode, obtaining a first merged image from a first pixel value read out by merging a plurality of panchromatic pixels corresponding to the panchromatic filter in the filter set and a second pixel value read out by merging a plurality of color pixels corresponding to the color filter;

merging a plurality of panchromatic pixels in a first diagonal direction in the first merged image, and merging a plurality of color pixels in a second diagonal direction to obtain a first target image; the first diagonal direction is different from the second diagonal direction.

An image generation device applied to an image sensor, wherein the image sensor comprises a filter array and a pixel array, the filter array comprises a minimal repeating unit, the minimal repeating unit comprises a plurality of filter sets, each filter set comprises a color filter and a panchromatic filter, each color filter has a narrower spectral response than the panchromatic filter, and each color filter and each panchromatic filter comprise 4 sub-filters; the pixel array comprises a plurality of panchromatic pixels and a plurality of color pixels, each panchromatic pixel corresponds to one sub-filter of the panchromatic filter, and each color pixel corresponds to one sub-filter of the color filter;

the device comprises:

a first merging module, configured to, in a first sharpness mode, obtain a first merged image from a first pixel value read out by merging a plurality of panchromatic pixels corresponding to the panchromatic filter in the filter set and a second pixel value read out by merging a plurality of color pixels corresponding to the color filter;

a generating module, configured to merge a plurality of panchromatic pixels in a first diagonal direction in the first merged image and merge a plurality of color pixels in a second diagonal direction, to obtain a first target image; the first diagonal direction is different from the second diagonal direction.

An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the method as described above.

A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.

In the image generation method, the image generation apparatus, the electronic device, and the computer-readable storage medium described above, the image sensor includes a filter array and a pixel array. The filter array includes a minimal repeating unit, the minimal repeating unit includes a plurality of filter sets, each filter set includes a color filter and a panchromatic filter, the color filter has a narrower spectral response than the panchromatic filter, and the color filter and the panchromatic filter each include 4 sub-filters. The pixel array includes a plurality of panchromatic pixels and a plurality of color pixels; each panchromatic pixel corresponds to one sub-filter of the panchromatic filter, and each color pixel corresponds to one sub-filter of the color filter. The first sharpness mode is used in scenes with a lower resolution requirement: a first pixel value is read out by merging the panchromatic pixels corresponding to a panchromatic filter in a filter set, and a second pixel value is read out by merging the color pixels corresponding to a color filter, so the generated first merged image is reduced in size and the power consumed by image generation is low. The panchromatic pixels in a first diagonal direction of the first merged image are then merged, and the color pixels in a second diagonal direction different from the first are merged, so the resulting first target image is further reduced in size. Because the panchromatic pixels have a higher signal-to-noise ratio and the frame rate is high, the two-stage pixel-merging readout achieves both lower power consumption and a better signal-to-noise ratio.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of an electronic device in one embodiment;

FIG. 2 is an exploded view of an image sensor in one embodiment;

FIG. 3 is a schematic diagram of the connection of a pixel spot array and readout circuitry in one embodiment;

FIG. 4 is a flow diagram of a method of image generation in one embodiment;

FIG. 5A is a schematic view of a first diagonal direction and a second diagonal direction in one embodiment;

FIG. 5B is a diagram illustrating generation of a first target image in one embodiment;

FIG. 6 is a schematic diagram of generating a first target image from a panchromatic image and a color image in one embodiment;

FIG. 7 is a diagram illustrating integration of the output images of three channels into a Bayer pattern after bilateral filtering computation in one embodiment;

FIG. 8 is a diagram illustrating the calculation of pixel values for R pixels in a second target image according to one embodiment;

FIG. 9 is a diagram illustrating associated pixels for each texture direction, in one embodiment;

FIG. 10 is a diagram illustrating associated pixels for each texture direction in another embodiment;

FIG. 11 is a diagram illustrating the calculation of interpolation weights for color pixels, according to one embodiment;

FIG. 12 is a schematic diagram of generating a full-scale panchromatic channel image in one embodiment;

FIG. 13 is a diagram illustrating an embodiment of generating a second target image using a full resolution output mode in a second sharpness mode;

FIG. 14 is a schematic flow chart illustrating generation of a second target image in a second sharpness mode in one embodiment;

FIG. 15 is a flow diagram illustrating the generation of a third target image in a third sharpness mode in one embodiment;

FIG. 16 is a block diagram showing the configuration of an image generating apparatus according to an embodiment;

FIG. 17 is a block diagram showing an internal configuration of an electronic device in one embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

It will be understood that the terms "first," "second," and the like used herein may describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first pixel value may be referred to as a second pixel value, and similarly, a second pixel value may be referred to as a first pixel value, without departing from the scope of the present application. The first pixel value and the second pixel value are both pixel values, but they are not the same pixel value.

In an embodiment, an image generation method is provided. This embodiment is exemplified by applying the method to an electronic device. It is to be understood that the electronic device may be a terminal, a server, or a system including a terminal and a server, implemented through interaction between the terminal and the server. The terminal may be a mobile phone, a tablet computer, a notebook computer, an automated teller machine, a gate machine, a smart watch, a head-mounted display device, or the like.

The electronic device is provided with a camera, and the camera includes a lens and an image sensor. The image sensor includes a filter array and a pixel array. The filter array includes a minimal repeating unit, the minimal repeating unit includes a plurality of filter sets, each filter set includes a color filter and a panchromatic filter, the color filter has a narrower spectral response than the panchromatic filter, and the color filter and the panchromatic filter each include 4 sub-filters. The pixel array includes a plurality of panchromatic pixels and a plurality of color pixels; each panchromatic pixel corresponds to one sub-filter of the panchromatic filter, and each color pixel corresponds to one sub-filter of the color filter. The image sensor receives the light passing through the lens.

Filters are optical devices used to select a desired wavelength band of radiation. The color filter is a filter that allows only a specific color light to pass through. For example, the color filter may be a green filter, a red filter, or a blue filter, and the wavelength band of the light transmitted by the color filter may correspond to the wavelength band of red light, the wavelength band of green light, or the wavelength band of blue light. Of course, the wavelength band of the light transmitted by the color filter may also correspond to the wavelength band of other color lights, such as magenta light, purple light, cyan light, yellow light, etc., and is not limited herein.

A panchromatic filter refers to a filter that allows light of a plurality of colors to pass through it, such as a fully transparent filter, or a filter whose amount of incoming light is larger than a preset threshold. For example, a fully transparent panchromatic filter transmits light of all colors. If the panchromatic filter is a visible-and-infrared filter, it transmits both visible light and infrared light.

The amount of light transmitted by the panchromatic filter is larger than that transmitted by the color filter; that is, the wavelength band of the light transmitted by the color filter is narrower than that of the panchromatic filter. Because the panchromatic filter transmits more light, the corresponding panchromatic pixel has a higher signal-to-noise ratio, contains more information, and allows more texture detail to be resolved. The signal-to-noise ratio is the ratio between the normal signal and the noise signal: the higher a pixel's signal-to-noise ratio, the higher the proportion of normal signal it contains, and the more information can be resolved from it.
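The signal-to-noise ratio relation above can be made concrete with the usual decibel formula. The numbers below are purely illustrative, not sensor measurements from this application:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10.0 * math.log10(signal_power / noise_power)

# A panchromatic pixel collects light over a wider band, so for the same
# noise floor its signal power is higher and its SNR is better.
print(snr_db(400.0, 4.0))  # 20.0 dB for the panchromatic pixel
print(snr_db(100.0, 4.0))  # about 14.0 dB for a color pixel
```

The 4x difference in collected signal power assumed here corresponds to roughly a 6 dB SNR advantage, which is the kind of gain that motivates using the panchromatic channel for texture analysis.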

The image sensor further includes a pixel array. The pixel array includes a plurality of pixel points, each pixel point corresponds to one sub-filter of the filter array, and each pixel point receives the light passing through its corresponding sub-filter to generate an electrical signal.

As shown in FIG. 1, the electronic device includes a camera 102, and the camera 102 includes an image sensor. The image sensor includes a microlens array, a filter array, and a pixel array.

The electronic device is described below using a mobile phone as an example, but the electronic device is not limited to a mobile phone. The terminal includes a camera, a processor, and a housing. The camera and the processor are arranged in the housing. The housing can also be used to mount functional modules of the terminal, such as a power supply device and a communication device, and provides these modules with protection against dust, drops, and water.

The camera may be a front camera, a rear camera, a side camera, an under-screen camera, or the like, without limitation. The camera includes a lens and an image sensor. When the camera captures an image, light passes through the lens and reaches the image sensor, which converts the optical signal irradiated onto it into an electrical signal.

As shown in fig. 2, the image sensor includes a microlens array 21, a filter array 22, and a pixel array 23.

The microlens array 21 includes a plurality of microlenses 211. The microlenses 211, the sub-filters in the filter array 22, and the pixel points in the pixel array 23 are arranged in one-to-one correspondence. Each microlens 211 gathers incident light; the gathered light passes through the corresponding sub-filter and is then projected onto the corresponding pixel point, which receives the light and converts it into an electrical signal.

The filter array 22 includes a plurality of minimal repeating units 221. A minimal repeating unit 221 includes a plurality of filter sets 222. In the present embodiment, a minimal repeating unit 221 includes 4 filter sets 222, and the 4 filter sets 222 are arranged in a matrix. Each filter set 222 includes panchromatic filters 223 and color filters 224, and each color filter or panchromatic filter has 4 sub-filters, so a filter set 222 includes 16 sub-filters in total. Different filter sets may also include color filters 224 of different colors.

Similarly, the pixel array 23 includes a plurality of minimal repeating units 231, and a minimal repeating unit 231 includes a plurality of pixel point groups 232 corresponding to the filter sets 222 in the minimal repeating unit 221. In the present embodiment, a minimal repeating unit 231 includes 4 pixel point groups 232 arranged in a matrix, and each pixel point group 232 corresponds to one filter set 222. The light transmitted by a panchromatic filter 223 is projected onto a panchromatic pixel point 233 to obtain a panchromatic pixel; the light passing through a color filter 224 is projected onto a color pixel point 234 to obtain a color pixel.

As shown in FIG. 3, the readout circuit 24 is electrically connected to the pixel array 23 and controls the exposure of the pixel array 23 and the readout and output of the pixel values of the pixel points. The readout circuit 24 includes a vertical driving unit 241, a control unit 242, a column processing unit 243, and a horizontal driving unit 244. The vertical driving unit 241 includes a shift register and an address decoder and provides readout scanning and reset scanning functions. The control unit 242 configures timing signals according to the operation mode and uses the timing signals to control the vertical driving unit 241, the column processing unit 243, and the horizontal driving unit 244 to operate cooperatively. The column processing unit 243 may have an analog-to-digital (A/D) conversion function for converting analog pixel signals into a digital format. The horizontal driving unit 244 includes a shift register and an address decoder and sequentially scans the pixel array 23 column by column.

In this embodiment, as shown in fig. 4, the method includes the following steps:

Step 402: in the first sharpness mode, a first merged image is obtained from a first pixel value read out by merging a plurality of panchromatic pixels corresponding to a panchromatic filter in the filter set and a second pixel value read out by merging a plurality of color pixels corresponding to the color filter.

The first sharpness mode is a mode used in scenes with a relatively low resolution requirement: a two-stage pixel-merging readout mode with low sharpness, high signal-to-noise ratio, low power consumption, and a high frame rate. The first sharpness mode may specifically be a preview mode of image capture, a preview mode of video capture, or a night mode of image or video capture at night, all of which require a low resolution, but it is not limited to these. Preview modes of video capture include, for example, 1080p video preview and WeChat video preview.

A panchromatic pixel is a pixel generated from light transmitted through the panchromatic filter 223, such as a W (White) pixel. The color pixels are pixels of other colors, such as first color-sensitive pixels, second color-sensitive pixels, and third color-sensitive pixels, generated from light transmitted through the different color filters 224. For example, the color filters 224 may be a first filter, a second filter, and a third filter: a first color-sensitive pixel is a pixel generated from light transmitted through the first filter, such as a G (Green) pixel; a second color-sensitive pixel is a pixel generated from light transmitted through the second filter, such as an R (Red) pixel; and a third color-sensitive pixel is a pixel generated from light transmitted through the third filter, such as a B (Blue) pixel.

When a shooting instruction is received, it is determined whether the shooting instruction is for preview shooting. If it is, the first sharpness mode is triggered. Alternatively, the electronic device detects whether the current environment is a night scene and triggers the first sharpness mode if it is.

In the first sharpness mode, the light transmitted through the filter array 22 is projected onto the pixel array 23, and the pixel array 23 receives the light passing through the corresponding filters to generate electrical signals. The pixel array 23 includes a plurality of panchromatic pixels and a plurality of color pixels; each panchromatic pixel corresponds to one sub-filter of a panchromatic filter 223, and each color pixel corresponds to one sub-filter of a color filter 224. The electronic device obtains a first merged image from the first pixel values read out by merging the panchromatic pixels corresponding to each panchromatic filter 223 in the filter set 222 and the second pixel values read out by merging the color pixels corresponding to each color filter 224.

Further, the electronic device merges the panchromatic pixels corresponding to the same panchromatic filter 223 in the filter set 222 to read out a first pixel value, merges the first color-sensitive pixels corresponding to the same first filter to read out a corresponding second pixel value, merges the second color-sensitive pixels corresponding to the same second filter to read out a corresponding second pixel value, merges the third color-sensitive pixels corresponding to the same third filter to read out a corresponding second pixel value, and generates the first merged image based on the first pixel values and the respective second pixel values.

In one embodiment, for each panchromatic filter 223, the electronic device combines the 4 panchromatic pixels corresponding to the 4 sub-filters included in that panchromatic filter 223 and reads them out to obtain the corresponding first pixel value. For each color filter 224, the 4 color pixels corresponding to its 4 sub-filters are likewise combined and read out to obtain the corresponding second pixel value.
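As an illustration of the 4-in-1 read-out described above, the following sketch averages each 2 × 2 block of sub-pixels into a single value. Real sensors typically bin charges or voltages in the analog domain, so software averaging and the regular 2 × 2 block layout are assumptions made here for clarity.

```python
import numpy as np

def bin_quad(raw):
    """Merge each non-overlapping 2x2 block of sub-pixels into one pixel.

    Every 2x2 block of `raw` is assumed to sit under a single panchromatic
    or color filter, so its four sub-pixels share a color and can be
    combined directly; averaging stands in for the sensor's binned read-out.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Applied to the whole read-out, this yields the down-sampled first combined image at a quarter of the original pixel count.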

Step 404, combining a plurality of panchromatic pixels in a first diagonal direction in the first combined image, and combining a plurality of color pixels in a second diagonal direction to obtain a first target image; the first diagonal direction is different from the second diagonal direction.

The electronics merge a plurality of panchromatic pixels in a first diagonal direction in the first merged image and a plurality of color pixels in a second diagonal direction in the first merged image to obtain a first target image. The first diagonal direction is different from the second diagonal direction, as shown in fig. 5A.

In one embodiment, the first diagonal direction is perpendicular to the second diagonal direction. The plurality of panchromatic pixels and the plurality of color pixels may each be at least two. For example, 2 panchromatic pixels in a first diagonal direction are merged, and 2 color pixels in a second diagonal direction in the first merged image are merged to obtain a first target image.
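The diagonal merge of step 404 can be sketched as follows, assuming the first combined image is a checkerboard in which panchromatic and color pixels alternate, with the two panchromatic pixels of each 2 × 2 cell on one diagonal and the two color pixels on the other. The checkerboard layout and the use of averaging are assumptions for illustration.

```python
import numpy as np

def diagonal_bin(merged, pan_on_main_diagonal=True):
    """Merge the two same-type pixels on each diagonal of every 2x2 cell.

    Averages the two pixels on the first (main) diagonal and the two on
    the second (anti) diagonal of each cell, halving each dimension and
    yielding one panchromatic plane and one color plane.
    """
    h, w = merged.shape
    cells = merged.reshape(h // 2, 2, w // 2, 2)
    main = (cells[:, 0, :, 0] + cells[:, 1, :, 1]) / 2  # first diagonal
    anti = (cells[:, 0, :, 1] + cells[:, 1, :, 0]) / 2  # second diagonal
    return (main, anti) if pan_on_main_diagonal else (anti, main)
```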

In this embodiment, the image sensor includes a filter array 22 and a pixel point array 23. The filter array 22 includes a minimum repeating unit 231, which includes a plurality of filter sets 222; each filter set 222 includes a color filter 224 and a panchromatic filter 223, the color filter 224 has a narrower spectral response than the panchromatic filter 223, and each color filter 224 and panchromatic filter 223 includes 4 sub-filters. The pixel point array 23 includes a plurality of panchromatic pixels and a plurality of color pixels, each panchromatic pixel corresponding to one sub-filter of a panchromatic filter 223 and each color pixel corresponding to one sub-filter of a color filter 224. The first definition mode is used in scenes with a lower requirement on resolution: because the first combined image is generated from the first pixel values read out by combining the plurality of panchromatic pixels corresponding to the panchromatic filters 223 in the filter sets 222 and the second pixel values read out by combining the plurality of color pixels corresponding to the color filters 224, the first combined image is reduced in size and the power consumption of image generation is low. Combining a plurality of panchromatic pixels in the first diagonal direction in the first combined image, and a plurality of color pixels in a second diagonal direction different from the first, further reduces the obtained first target image; the panchromatic pixels have a higher signal-to-noise ratio and the frame rate of the image is high, so the two-stage combined pixel output achieves the image processing effect of lower power consumption and a better signal-to-noise ratio.

In one embodiment, merging a plurality of panchromatic pixels in a first diagonal direction and merging a plurality of color pixels in a second diagonal direction in a first merged image to obtain a first target image comprises:

merging a plurality of panchromatic pixels in a first diagonal direction in the first merged image to obtain a panchromatic image; combining the plurality of color pixels in the second diagonal direction to obtain a color image; a first target image is generated from the panchromatic image and the color image.

The electronic device determines a first diagonal direction and a second diagonal direction in the first merged image, merges a plurality of panchromatic pixels in the first diagonal direction in the first merged image, and generates a panchromatic image based on the respective panchromatic pixels resulting from the merging. A plurality of color pixels of the same color in the second diagonal direction are combined to obtain each color pixel, and a color image is generated based on each color pixel.

In one embodiment, the electronics merge a plurality of panchromatic pixels in a first diagonal direction in a first merged image and a plurality of color pixels in a second diagonal direction to produce a second merged image. The electronic device separates the panchromatic image and the color image from the second combined image and generates a first target image from the panchromatic image and the color image.

In one embodiment, the first target image may be a bayer array image, as illustrated in fig. 5B, which is a schematic diagram of the generation of a bayer array image in one embodiment. In the first definition mode, the electronic device obtains an original image 502 through the filter array 22 in the image sensor, and obtains a down-sampled first combined image 504 according to the combination of the read first pixel values of the 4 panchromatic pixels corresponding to the same panchromatic filter 223 in the filter set 222 and the combination of the read second pixel values of the 4 color pixels corresponding to the same color filter 224.

The 2 panchromatic pixels in the first diagonal direction in the first merged image 504 are merged and the 2 color pixels in the second diagonal direction are merged resulting in a second merged image 506. A downsampled panchromatic image 508 and a downsampled color image 510 are separated from the second combined image 506, and a bayer array image is generated from the panchromatic image 508 and the color image 510.

In one embodiment, in the first definition mode, the electronics obtain the original image 502 via the filter array 22 in the image sensor, and obtain the down-sampled first combined image 504 based on the combined read first pixel values of the 4 panchromatic pixels corresponding to the same panchromatic filter 223 in the filter set 222 and the combined read second pixel values of the 4 color pixels corresponding to the same color filter 224. The 2 panchromatic pixels in the first diagonal direction in the first combined image 504 are combined and the 2 color pixels in the second diagonal direction are combined to obtain a panchromatic image 508 and a color image 510, respectively, and the first target image is generated from the panchromatic image 508 and the color image 510.

FIG. 6 is a schematic diagram illustrating the generation of a first target image from a panchromatic image and a color image in one embodiment. Taking the R channel as an example, for the R pixel (5,5) to be obtained, each R pixel within a certain range of the position (5,5) in the color image 602 is selected to obtain a weighted average R_mean, and each W pixel within a certain range of the corresponding position (5,5) in the panchromatic image 604 is selected to obtain a weighted average W_mean. The pixel at that position in the image 606 can then be obtained as R' = W × (R_mean / W_mean), or alternatively R' = W − (W_mean − R_mean). In the same manner, the pixel R' in image 606 corresponding to each R pixel in color image 602 may be calculated.

Similarly, for the G channel, for the G pixel (4,5) to be obtained, each G pixel within a certain range of the position (4,5) in the color image 602 is selected to obtain a weighted average G_mean, and each W pixel within a certain range of the corresponding position (4,5) in the panchromatic image 604 is selected to obtain a weighted average W_mean. The pixel at that position can then be obtained as G' = W × (G_mean / W_mean), or alternatively G' = W − (W_mean − G_mean). In the same manner, the pixel G' in image 608 corresponding to each G pixel in color image 602 may be calculated. The image 606 and the image 608 are fused to obtain a bayer pattern first target image 610.

The other channels are processed in the same way, and the resulting images are finally fused to obtain the first target image, completing the conversion of the two-level binning output into the Bayer form.
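The per-channel fusion just described can be sketched as below, using the ratio form C' = W × (C_mean / W_mean). The window radius `k`, uniform weights within the window, and edge padding are assumptions; the text only requires weighted averages over "a certain range".

```python
import numpy as np

def fuse_channel(W, C, k=1):
    """Fuse a color channel C with the panchromatic channel W.

    For every pixel, local means of C (C_mean) and W (W_mean) are taken
    over a (2k+1) x (2k+1) window, and the fused value is
    W * C_mean / W_mean (the ratio form described in the text).
    """
    Wp = np.pad(W, k, mode='edge')  # replicate edges so windows fit
    Cp = np.pad(C, k, mode='edge')
    out = np.empty_like(W, dtype=float)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            w_mean = Wp[i:i + 2 * k + 1, j:j + 2 * k + 1].mean()
            c_mean = Cp[i:i + 2 * k + 1, j:j + 2 * k + 1].mean()
            out[i, j] = W[i, j] * c_mean / w_mean
    return out
```

The difference form W − (W_mean − C_mean) could be substituted in the last assignment.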

In this embodiment, a plurality of panchromatic pixels in the first diagonal direction in the first combined image are combined to obtain a panchromatic image, and a plurality of color pixels in the second diagonal direction are combined to obtain a color image. Because of the combined pixel readout mode, the noise of the generated image is low; moreover, the light intake of the panchromatic channel is larger, so the panchromatic pixels have a higher signal-to-noise ratio. Generating the first target image from the panchromatic image and the color image therefore allows the color image to be fused with information of higher signal-to-noise ratio, and the imaging quality is higher.

In one embodiment, generating a first target image from the panchromatic image and the color image includes:

traversing pixel positions in the first target image to be generated, determining pixels at the pixel positions in the first target image to be generated according to panchromatic pixels corresponding to the pixel positions in the panchromatic image and color pixels corresponding to the pixel positions in the color image, and obtaining the first target image after obtaining the pixels at all the pixel positions in the first target image to be generated.

The electronic device traverses pixel locations in the first target image to be generated, and in each traversal, the electronic device determines pixel locations in the current traversal of the first target image to be generated and determines panchromatic pixels corresponding to the pixel locations in the panchromatic image and color pixels corresponding to the pixel locations in the color image. And calculating the pixel corresponding to the pixel position according to the panchromatic pixel and the color pixel corresponding to the pixel position in the current traversal. And after the pixels corresponding to the pixel positions of the current traversal are calculated, executing the next traversal until the pixels of all the pixel positions in the first target image to be generated are obtained, and then stopping to obtain the first target image.

In one embodiment, the electronics determine panchromatic pixels corresponding to the pixel location in the panchromatic image, determine panchromatic pixels from the panchromatic image within a first predetermined range including the panchromatic pixel, and perform a weighted average of the pixels of the panchromatic pixels. Color pixels corresponding to the pixel positions in the color image are determined, color pixels in a first preset range including the color pixels are determined from the color image, and the pixel values of the color pixels are weighted and averaged. And calculating the pixel corresponding to the pixel position according to the weighted average of the pixel values of the panchromatic pixels and the weighted average of the pixel values of the color pixels.

Further, a ratio of the pixel-weighted average of the color pixels to the pixel-weighted average of the panchromatic pixels is calculated, and a product of the pixel value of the panchromatic pixel corresponding to the pixel position and the ratio is taken as the pixel of the pixel position.

Alternatively, the difference between the pixel value weighted average of the panchromatic pixels and the pixel value weighted average of the color pixels is calculated, and the difference between the pixel value of the panchromatic pixel corresponding to the pixel position and the difference is taken as the pixel of the pixel position.

In one embodiment, the first target image may be a bayer array image.

In this embodiment, the pixel position in the first target image to be generated is traversed, the pixel at the pixel position in the first target image to be generated is determined according to the panchromatic pixel corresponding to the pixel position in the panchromatic image and the color pixel corresponding to the pixel position in the color image, until the pixels at all the pixel positions in the first target image to be generated are obtained, the information content of the panchromatic channel with the high signal-to-noise ratio can be brought into the first target image, and thus the first target image is accurately generated.

In one embodiment, the method further comprises:

under a second definition mode, interpolating all color pixels in the original image into panchromatic pixels by utilizing texture information of the color pixels in the original image to obtain a full-size panchromatic channel image; the pixels in the full-size panchromatic channel map are all panchromatic pixels; generating a second target image based on the full-size panchromatic channel map and the original image; the definition corresponding to the second definition mode is greater than the definition corresponding to the first definition mode.

The second definition mode refers to a full-resolution output mode with high definition, high power consumption and a low frame rate. The definition corresponding to the second definition mode is greater than that corresponding to the first definition mode; that is, the resolution corresponding to the second definition mode is greater than the resolution corresponding to the first definition mode. For example, the second definition mode may be a Blu-ray 1080P or ultra-clear 720P mode, but is not limited thereto.

In the case where a shooting instruction is received, the electronic device detects which definition mode the user has selected. If the user selects the second definition mode, the light transmitted through the filter array is projected onto the pixel point array, which receives the light passing through the corresponding filters and generates electrical signals, yielding an original image.

The texture information comprises at least one of texture direction, texture position and texture intensity.

The electronic device determines pixel positions of color pixels in the original image, interpolates all the color pixels in the original image into corresponding panchromatic pixels by utilizing texture information of the color pixels in the original image to obtain a full-size panchromatic channel image, wherein the pixels in the full-size panchromatic channel image are all the panchromatic pixels, and generates a full-size second target image based on the full-size panchromatic channel image and the original image.

In one embodiment, the method for interpolating color pixels in an original image into panchromatic pixels by using texture information of the color pixels in the original image to obtain a full-size panchromatic channel image includes:

performing weight calculation processing on color pixels at the positions of the color pixels in the original image by using texture information of the color pixels in the original image to obtain an interpolation weight map; and carrying out fusion processing on the interpolation weight graph and the original image to obtain a full-size panchromatic channel image.

Specifically, the color pixels at each pixel position in the original image are subjected to weight calculation processing to determine the interpolation weight corresponding to each color pixel. And obtaining an interpolation weight map based on the pixel position corresponding to each color pixel and the interpolation weight. And the electronic equipment performs fusion processing on the interpolation weight graph and the original image to obtain a full-size panchromatic channel image.

In one embodiment, generating a second target image based on the full-size panchromatic channel map and the original image includes: based on the full-size panchromatic channel image, respectively interpolating the original image by adopting bilateral filtering to obtain a first channel image of a first color photosensitive pixel, a second channel image of a second color photosensitive pixel and a third channel image of a third color photosensitive pixel; and generating a second target image according to the first channel image, the second channel image and the third channel image. The first channel image, the second channel image and the third channel image can be fused to obtain a second target image.

In this embodiment, the definition corresponding to the second definition mode is greater than the definition corresponding to the first definition mode, and in the second definition mode, the texture information of the color pixels in the original image is used to interpolate all the color pixels in the original image into panchromatic pixels, so as to obtain a full-size panchromatic channel image with the same size as the original image. The pixels in the full-size panchromatic channel image are panchromatic pixels, the second target image is generated based on the full-size panchromatic channel image and the original image, panchromatic channel information can be fused into the original image, the second target image with more information and clearer detail analysis can be generated, the image processing effect of full-size full-resolution output with high definition, high power consumption and low frame rate is achieved, and the requirement of a user on the high quality of the image can be met.

In one embodiment, after the full-size panchromatic channel image is obtained, pixel values corresponding to pixel positions in the second target image to be generated are calculated through bilateral filtering. The bilateral filtering mainly performs a smoothing process on the flat region, and can be written as:

Jp = (1/kp) · Σ_{q∈Ω} f(p, q) · g(Iq − Ip) · Iq

where Ω represents a local window, which may be 7 × 7 or another size; q denotes the coordinate position of a pixel, and Iq denotes the pixel value within the window before filtering. f represents the weight of each coordinate point of the window, which is fixed, with larger weight closer to the center. g represents the weight of the difference between the pixel at another position and the central pixel: the larger the difference, the smaller the weight. p is the position to be solved, and Jp is the pixel value to be solved for a certain channel.

In the local window Ω centered on p, the coordinates q of the original values of the channel to be solved are searched; Iq is the intensity value of the channel to be solved, kp is the number of original values of the channel to be solved in the window, and Jp equals the weighted average of all Iq in the local window Ω. The distance weight corresponding to each Iq is calculated through the f function, and the intensity-difference weight corresponding to each Iq is calculated through the g function: f is a distance function whose weight is larger closer to the center, and g is an intensity-difference function whose weight is smaller as the intensity difference grows.
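A minimal sketch of the joint bilateral interpolation described above follows. Gaussian forms for f and g, the window radius, and normalizing by the sum of weights (rather than by the count kp) are assumptions; the guide image plays the role of the full-size panchromatic channel map.

```python
import numpy as np

def joint_bilateral(sparse, guide, mask, sigma_d=2.0, sigma_r=10.0, k=3):
    """Interpolate a sparse channel using a full-size guide image.

    `sparse` holds the channel's known values where `mask` is True.
    For each position p, Jp is the weighted mean of the known values Iq
    inside the local window: f is a Gaussian on spatial distance and
    g a Gaussian on the intensity difference in the guide.
    """
    h, w = guide.shape
    out = np.zeros_like(guide, dtype=float)
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for di in range(-k, k + 1):
                for dj in range(-k, k + 1):
                    qi, qj = i + di, j + dj
                    if 0 <= qi < h and 0 <= qj < w and mask[qi, qj]:
                        f = np.exp(-(di * di + dj * dj) / (2 * sigma_d ** 2))
                        g = np.exp(-((guide[qi, qj] - guide[i, j]) ** 2)
                                   / (2 * sigma_r ** 2))
                        num += f * g * sparse[qi, qj]
                        den += f * g
            out[i, j] = num / den if den > 0 else 0.0
    return out
```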

Fig. 7 is a schematic diagram illustrating an embodiment of integrating and outputting an output image in a bayer format after bilateral filtering computation of three channels. Based on the full-size panchromatic channel map 702, the original image 704 is interpolated using bilateral filtering to obtain a first channel map 706 of first color sensitive pixels, a second channel map 708 of second color sensitive pixels, and a third channel map 710 of third color sensitive pixels, respectively. And performing fusion processing on the first channel map 706, the second channel map 708 and the third channel map 710 to obtain a second target image 712.

As shown in fig. 8, taking an R pixel as an example, the pixel position of the R pixel to be obtained is (i, j). In an n × n window 802 centered on the pixel position (i, j) in the original image, a distance weight F of the R pixels is calculated based on the above distance function f, where F is an n × n matrix. In an n × n window 804 centered on the pixel position (i, j) in the full-size panchromatic channel map, an intensity difference weight G of the W pixels is calculated based on the above intensity-difference function g. J is the n × n W-pixel window, I is the n × n R-pixel window, and the matrix value is 0 wherever there is no R pixel.

For each R pixel in the n × n window 802, a distance weight F between that pixel and the position (i, j) is calculated (F may be regarded as a fixed weight template); the W pixel corresponding to each R pixel in the window 804 is determined, and an intensity difference weight G between each pixel position and the W pixel at (i, j) is calculated. Letting HF be the elementwise product weight of G and F at the R pixel positions, and mosaicR the position matrix of the original R pixels, the pixel value R(i, j) of the R pixel at position (i, j) in the second target image 806 can be calculated according to the following formulas:

HF = G .* F .* mosaicR

meanW = sum(sum(HF .* J))

meanR = sum(sum(HF .* I))

R(i,j) = W(i,j) * meanR / meanW
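The four formulas above translate almost directly into elementwise array operations (MATLAB's `.*` corresponds to NumPy's `*`). The sketch below assumes (i, j) lies at least n // 2 pixels from the border and that the weight windows F and G have already been computed.

```python
import numpy as np

def interp_r(W, R, mosaicR, F, G, i, j, n=5):
    """Compute R(i, j) using the elementwise formulas above.

    W is the full-size panchromatic map, R holds the original R pixel
    values (zero elsewhere), mosaicR is the 0/1 position matrix of the
    R pixels, and F, G are the n x n distance and intensity-difference
    weight windows centred on (i, j).
    """
    k = n // 2
    J = W[i - k:i + k + 1, j - k:j + k + 1]        # n x n W-pixel window
    I = R[i - k:i + k + 1, j - k:j + k + 1]        # n x n R-pixel window
    m = mosaicR[i - k:i + k + 1, j - k:j + k + 1]
    HF = G * F * m                   # HF = G .* F .* mosaicR
    meanW = (HF * J).sum()           # meanW = sum(sum(HF .* J))
    meanR = (HF * I).sum()           # meanR = sum(sum(HF .* I))
    return W[i, j] * meanR / meanW   # R(i,j) = W(i,j) * meanR / meanW
```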

in one embodiment, interpolating color pixels in the original image into panchromatic pixels using texture information of the color pixels in the original image to obtain a full-size panchromatic channel image, includes:

traversing each pixel in the original image corresponding to the color pixel; determining texture information of the color pixels based on each pixel in a preset range containing the color pixels under the condition that the current pixels of the original image are determined to be the color pixels; and obtaining interpolation weights corresponding to the color pixels based on the texture information of the color pixels, and interpolating the color pixels into panchromatic pixels according to the interpolation weights of the color pixels until the full-size panchromatic channel image is obtained when traversal is completed.

The preset range containing the color pixels may be set as desired. For example, the predetermined range may be a range of 10 × 10 rectangular windows centered on the color pixels. As another example, the predetermined range may be a range of 8 × 8 rectangular windows centered on the color pixels. Of course, the preset range may not be centered on the color pixel, for example, the color pixel may be in at least one of an upper region, a lower region, a left region and a right region of the preset range.

In the color channel map, a sliding window is used to traverse each pixel to determine whether the pixel is a color pixel, and since the position of the color filter 224 in the filter array is periodically changed, it can be determined whether the current pixel is a color pixel obtained by the color filter 224 according to the rule of the periodic change.

In each traversal, the electronic device may acquire, based on each pixel in a preset range including the color pixel, not only information of the color pixel itself but also information of a pixel in a region adjacent to the color pixel, and may determine texture information of the color pixel more accurately, when determining that the current pixel of the original image is the color pixel. Based on the texture information of the color pixels, the interpolation weight corresponding to the color pixels at the positions of the color pixels in the original image is calculated, the color pixels can be accurately interpolated into corresponding panchromatic pixels according to the interpolation weight of the color pixels, and traversal is completed until each color pixel in the original image is interpolated into corresponding panchromatic pixels, so that a full-size panchromatic channel image can be accurately obtained.

In one embodiment, determining texture information for a color pixel based on pixels within a preset range including the color pixel comprises: determining the discrete degree of each pixel in a preset range containing color pixels; if the discrete degree is smaller than the discrete threshold value, the color pixel is in a flat area; if the discrete degree is larger than or equal to the discrete threshold value, the color pixel is in the texture area. Wherein the discrete threshold value can be set according to the requirement.

The greater the degree of dispersion between pixels in a preset range including color pixels, the greater the difference between pixels, and it can be considered that a strong texture exists in the preset range, and the color pixels are in a texture region.

Alternatively, the electronic device may represent the degree of dispersion by the variance of each pixel within a preset range containing the color pixel, or by the standard deviation of those pixels; the degree of dispersion may also be expressed in other ways, which is not limited herein. In probability theory and statistics, the variance (var) measures the degree of dispersion of a random variable or a set of data, and the standard deviation likewise reflects the degree of dispersion of a data set.

In one embodiment, determining texture information for a color pixel based on pixels within a preset range including the color pixel comprises:

determining the variance of each pixel in a preset range containing color pixels; if the variance is smaller than a preset threshold value, the color pixel is in a flat area; if the variance is greater than or equal to the preset threshold value, the color pixel is in the texture area.

The preset threshold value may be set as desired. A flat region is a region where weak texture or no texture is present. A texture region is a region where strong texture exists.

If the variance is smaller than the preset threshold, it indicates that the discrete degree of each pixel in the preset range is small, and it can be considered that the texture of the preset range where the color pixel is located is weak or no texture, then the color pixel is located in the flat area. If the variance is greater than or equal to the preset threshold, it indicates that the discrete degree of each pixel in the preset range is large, and it can be considered that the texture of the preset range where the color pixel is located is strong, and the color pixel is located in the texture area.

In one embodiment, the variance of panchromatic pixels within a preset range including color pixels may be determined. The method may include determining panchromatic pixels within a predetermined range including the color pixel, averaging the color pixel and the panchromatic pixels, calculating a square value of a difference value of a pixel value of the color pixel and the pixel average, and calculating a square value of a difference value of a pixel value of each panchromatic pixel and the pixel average, respectively. And determining a first pixel number corresponding to the color pixel and each panchromatic pixel, and taking the ratio of the sum of the square values to the first pixel number as a variance. The first number of pixels is the sum of the number of color pixels and panchromatic pixels within a preset range.

In one embodiment, the variances of each panchromatic pixel and each color pixel within a preset range including color pixels may be determined.

The method may include determining panchromatic pixels and color pixels within a predetermined range including the color pixel, averaging the color pixels and panchromatic pixels, calculating a square value of a difference value of a pixel value of each color pixel and the pixel average value, respectively, and calculating a square value of a difference value of a pixel value of each panchromatic pixel and the pixel average value, respectively. And determining the second pixel number corresponding to each color pixel and each panchromatic pixel, and taking the ratio of the sum of the square values to the second pixel number as the variance. The second number of pixels is the sum of the numbers of the color pixels and the panchromatic pixels within a preset range.

For example, the variance can be calculated according to the following formula:

s² = [(x1 − M)² + (x2 − M)² + … + (xn − M)²] / n

where x1, x2, …, xn are the pixel values (pixel values of panchromatic pixels or of color pixels), M is the pixel average value, n is the number of pixels, and s² is the variance.
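A small sketch of the variance-based classification, using the population form of the variance formula above; the threshold value is application-specific, and the flat/texture labels follow the rule stated earlier.

```python
import numpy as np

def classify_region(window, threshold):
    """Classify a color pixel's neighbourhood as flat or textured.

    Computes s^2 = (1/n) * sum((x_i - M)^2) over the pixels in the preset
    window; below the threshold the pixel is in a flat region, otherwise
    it is in a texture region.
    """
    x = np.asarray(window, dtype=float).ravel()
    var = ((x - x.mean()) ** 2).mean()
    return ('flat' if var < threshold else 'texture'), var
```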

In the present embodiment, by determining the variance of each pixel within a preset range including color pixels, it is possible to accurately determine texture information of the color pixels.

In one embodiment, obtaining the interpolation weight corresponding to the color pixel based on the texture information of the color pixel includes:

under the condition that the color pixels are in the flat area, determining a first pixel mean value of each panchromatic pixel in a preset range containing the color pixels and a second pixel mean value of each color pixel in the preset range; and obtaining the interpolation weight corresponding to the color pixel based on the proportional relation between the first pixel mean value and the second pixel mean value.

The first pixel average value is a pixel average value of panchromatic pixels in a preset range including color pixels. The second pixel mean value is a pixel mean value of each color pixel in a predetermined range including the color pixels.

Specifically, in the case where the color pixel is in the flat area, the electronic device multiplies the pixel value of the color pixel by the proportional value between the first pixel mean value and the second pixel mean value to obtain the interpolation weight corresponding to the color pixel.

In this embodiment, in the case that the color pixels are in the flat region, the first pixel mean value of each panchromatic pixel in the preset range including the color pixels and the second pixel mean value of each color pixel in the preset range are determined, and based on the proportional relationship between the first pixel mean value and the second pixel mean value, the interpolation weight corresponding to the color pixel at the color pixel position in the original image can be accurately calculated.
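The flat-region computation above reduces to a few lines. Taking plain (unweighted) means of the panchromatic and color neighbours is an assumption, since the text only specifies a proportional relation between the two pixel means.

```python
import numpy as np

def interpolate_flat(color_value, pan_neighbours, color_neighbours):
    """Interpolate a flat-region color pixel toward a panchromatic value.

    Scales the color pixel's own value by the ratio of the first pixel
    mean (panchromatic pixels in the preset range) to the second pixel
    mean (color pixels in the preset range).
    """
    w_mean = float(np.mean(pan_neighbours))    # first pixel mean
    c_mean = float(np.mean(color_neighbours))  # second pixel mean
    return color_value * (w_mean / c_mean)
```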

In one embodiment, obtaining the interpolation weight corresponding to the color pixel based on the texture information of the color pixel includes:

determining a target texture direction of the color pixel under the condition that the color pixel is in the texture area; and obtaining the interpolation weight corresponding to the color pixel based on each related pixel of the color pixel in the target texture direction.

The associated pixels can include panchromatic associated pixels and color associated pixels. A panchromatic associated pixel is a panchromatic pixel that has an association relationship with the color pixel; a color associated pixel is a color pixel that has an association relationship with the color pixel.

The electronic device may set a plurality of texture directions in advance, and select a target texture direction of the color pixel from the plurality of texture directions in the case where the color pixel is in the texture region. The texture directions may be symmetric or asymmetric, and their number can be set as required. For example, the number of texture directions may be 4, 8, 12, or the like; the texture directions may be a horizontal direction, a vertical direction, a diagonal direction and an anti-diagonal direction.

For example, by setting one texture direction every 45 degrees in the two-dimensional plane, 4 texture directions are obtained; by setting one texture direction every 22.5 degrees, 8 texture directions are obtained; and by setting one texture direction every 15 degrees, 12 texture directions are obtained.

Determining a target texture direction for the color pixel comprises: determining gradient values of the color pixel in each texture direction, and determining the texture direction of the color pixel based on the gradient values in each texture direction. In one embodiment, the electronic device may take the texture direction having the smallest gradient value as the texture direction of the color pixel. In another embodiment, the electronic device may take the texture direction having the second smallest gradient value as the texture direction of the color pixel. In other embodiments, the electronic device may determine the texture direction of the color pixel in other manners.
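The smallest-gradient selection in the first embodiment can be sketched as follows (direction names are illustrative):

```python
def pick_texture_direction(gradients):
    """gradients: mapping of texture-direction name to gradient value.
    The direction with the smallest gradient (least variation along it)
    is taken as the texture direction of the color pixel."""
    return min(gradients, key=gradients.get)
```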

The associated pixel is a pixel having an association relationship with the color pixel. For example, the association relationship may be that the associated pixel is in the texture direction of the color pixel, the association relationship may also be that the associated pixel is in a preset area of the color pixel, and so on. For example, the associated pixel is located in at least one of an upper region, a lower region, a left region, and a right region of the color pixel.

In the case where the color pixel is in the texture region, the electronic device determines a pixel associated with the color pixel in each texture direction, and determines a target texture direction of the color pixel based on the pixel associated with the color pixel in each texture direction. And calculating the interpolation weight corresponding to the color pixel based on each related pixel of the color pixel in the target texture direction. In the same way, the interpolation weight corresponding to each color pixel in the original image can be calculated.

In this embodiment, when a color pixel is in a texture region, a target texture direction of the color pixel is determined, and an interpolation weight corresponding to each color pixel is accurately calculated based on each associated pixel of the color pixel in the target texture direction.

In one embodiment, determining the target texture direction for a color pixel in the case that the color pixel is in a texture region comprises:

under the condition that the color pixels are in the texture area, determining panchromatic associated pixels respectively associated with the color pixels in all texture directions; determining first associated values corresponding to the color pixels in all texture directions respectively based on panchromatic associated pixels associated with all texture directions respectively; and taking the texture direction corresponding to the first correlation value meeting the first correlation condition in the first correlation values as the target texture direction of the color pixel.

In the case where the color pixel is in the texture region, the electronic device determines the panchromatic associated pixels with which the color pixel is associated in each texture direction. For example, the electronic device determines the panchromatic associated pixels of the color pixel in the horizontal direction, in the vertical direction, in the diagonal direction, and in the anti-diagonal direction.

And calculating the sum of absolute values of the difference values of all the panchromatic related pixels for the panchromatic related pixels in each texture direction to obtain a first related value corresponding to each texture direction. When there is a first correlation value satisfying the first correlation condition among the first correlation values, the texture direction corresponding to the first correlation value satisfying the first correlation condition is set as the target texture direction of the color pixel.

The first correlation condition may be that a difference between the respective first correlation values is greater than a preset difference, or that a difference between the smallest first correlation value and the next smallest first correlation value is greater than a preset difference.

And the electronic equipment determines whether the difference value between the first correlation values is larger than a preset difference value or not, and takes the texture direction corresponding to the minimum first correlation value as the target texture direction of the color pixel under the condition that the difference value between the first correlation values is larger than the preset difference value.

Or the electronic device determines the smallest first correlation value and the second smallest first correlation value in the first correlation values, determines whether a difference value between the smallest first correlation value and the second smallest first correlation value is greater than a preset difference value, and takes the texture direction corresponding to the smallest first correlation value as the target texture direction of the color pixel when the difference value is greater than the preset difference value.
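The first correlation value and the gap test described above can be sketched as follows (a sum of absolute differences over panchromatic pairs, with the smallest value accepted only when it clearly beats the runner-up; names and the pair representation are assumptions of this sketch):

```python
def first_correlation_value(pan_pairs):
    """Sum of absolute differences over the panchromatic associated-pixel
    pairs of one texture direction."""
    return sum(abs(a - b) for a, b in pan_pairs)

def target_direction(corr_values, min_gap):
    """Return the direction with the smallest first correlation value when
    it beats the second smallest by more than min_gap; otherwise None,
    signalling a fall-back to the second correlation values."""
    ordered = sorted(corr_values.items(), key=lambda kv: kv[1])
    (best_dir, best), (_, runner_up) = ordered[0], ordered[1]
    return best_dir if runner_up - best > min_gap else None
```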

In the present embodiment, in the case where a color pixel is in the texture region, panchromatic associated pixels with which the color pixel is associated in each texture direction, respectively, are determined to determine the target texture direction of the color pixel by the panchromatic associated pixel associated with the color pixel. The first correlation value corresponding to the color pixel in each texture direction is determined based on the panchromatic related pixel associated with each texture direction, and the degree of correlation between each panchromatic related pixel and the color pixel can be determined, so that the target texture direction of the color pixel can be accurately determined based on the degree of correlation between the panchromatic related pixel and the color pixel.

In one embodiment, the method further comprises: under the condition that first correlation values corresponding to the color pixels in the texture directions do not meet first correlation conditions, determining panchromatic correlation pixels and color correlation pixels which are correlated to the color pixels in the texture directions respectively; determining second correlation values corresponding to the color pixels in the texture directions respectively based on the panchromatic correlation pixels and the color correlation pixels which are correlated with the texture directions respectively; and taking the texture direction corresponding to the second correlation value meeting the second correlation condition in the second correlation values as the target texture direction of the color pixel.

In the case where the first correlation value corresponding to each color pixel in each texture direction does not satisfy the first correlation condition, the electronic device determines a full-color correlation pixel and a color correlation pixel associated with the color pixel in each texture direction.

For the panchromatic associated pixels and the color associated pixels in each texture direction, the electronics calculate the absolute value of the difference values for the panchromatic associated pixels and the absolute value of the difference values for the color associated pixels, and sum the absolute values for the same texture direction. And determining the sum of the pixel quantity of each panchromatic associated pixel and each color associated pixel, and dividing the sum of the absolute values by the sum of the pixel quantity to obtain a second associated value corresponding to the texture direction, so as to obtain a second associated value corresponding to each texture direction. When there is a second correlation value satisfying the second correlation condition among the second correlation values, the texture direction corresponding to the second correlation value satisfying the second correlation condition is set as the target texture direction of the color pixel.

The second correlation condition may be that a difference between the second correlation values is greater than a preset difference, or that a difference between the smallest second correlation value and the second smallest second correlation value is greater than a preset difference. It is understood that the preset difference in the first correlation condition and the preset difference in the second correlation condition may be the same or different.

And the electronic equipment determines whether the difference value between the second correlation values is larger than a preset difference value or not, and takes the texture direction corresponding to the minimum second correlation value as the target texture direction of the color pixel under the condition that the difference value between the second correlation values is larger than the preset difference value.

Or the electronic device determines the smallest second correlation value and the second smallest second correlation value in the second correlation values, determines whether a difference between the smallest second correlation value and the second smallest second correlation value is greater than a preset difference, and takes the texture direction corresponding to the smallest second correlation value as the target texture direction of the color pixel when the difference is greater than the preset difference.
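The second correlation value described above pools the panchromatic and color associated pixels and normalizes by the number of pixels involved; a minimal sketch (pair representation assumed, two pixels per pair):

```python
def second_correlation_value(pan_pairs, color_pairs):
    """Second correlation value of one texture direction: the absolute
    differences over panchromatic and color associated-pixel pairs,
    summed and divided by the total number of pixels involved."""
    total = sum(abs(a - b) for a, b in pan_pairs)
    total += sum(abs(a - b) for a, b in color_pairs)
    n_pixels = 2 * (len(pan_pairs) + len(color_pairs))
    return total / n_pixels
```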

In this embodiment, when the first correlation value corresponding to each color pixel in each texture direction does not satisfy the first correlation condition, which means that the target texture direction of the color pixel cannot be accurately determined only by using the panchromatic associated pixel, the panchromatic associated pixel and the color associated pixel associated with each color pixel in each texture direction are determined, so that the target texture direction of the color pixel is determined by the panchromatic associated pixel and the color associated pixel associated with the color pixel together. And determining second associated values corresponding to the color pixels in the texture directions respectively based on the panchromatic associated pixels and the color associated pixels associated with the texture directions respectively, wherein the used information amount is large, the calculated associated values cover more information amount, and the association degree between the panchromatic associated pixels, the color associated pixels and the color pixels can be determined more accurately, so that the target texture direction of the color pixels can be determined accurately based on the association degree between the panchromatic associated pixels and the color pixels which are determined jointly.

FIG. 9 is a diagram illustrating the associated pixels for each texture direction in one embodiment. As shown in fig. 9, taking a 10 × 10 pixel window as an example, the color pixel (the pixel marked with a black dot in the figure) has associated pixels in the horizontal, vertical, diagonal, and anti-diagonal directions. The associated pixels are the panchromatic associated pixels indicated by the arrows in fig. 9.

For the panchromatic related pixels in the horizontal direction, the absolute value of the difference between the two panchromatic related pixels pointed by the same arrow is calculated, and two absolute values can be obtained. And summing the two absolute values in the horizontal direction to obtain a first correlation value corresponding to the horizontal direction. In the same manner, first correlation values corresponding to the vertical direction, the diagonal direction, and the anti-diagonal direction are obtained.

And taking the texture direction corresponding to the minimum first correlation value as the target texture direction of the color pixel under the condition that the difference value between the minimum first correlation value and the second minimum first correlation value is larger than a preset difference value.

In the case where the difference between the smallest first correlation value and the second smallest first correlation value is not greater than the preset difference, the target texture direction of the color pixel is determined using the correlation pixel as shown in fig. 10. The associated pixels of the color pixels in fig. 10 include a full-color associated pixel and a color associated pixel.

For the panchromatic associated pixels and the color associated pixels in the horizontal direction, the absolute value of the difference between the two panchromatic associated pixels pointed to by the same arrow and the absolute value of the difference between the two color associated pixels pointed to by the same arrow are calculated, yielding a plurality of absolute values. The absolute values in the horizontal direction are summed and divided by the total number of panchromatic and color associated pixels to obtain the second correlation value corresponding to the horizontal direction. In the same manner, the second correlation values corresponding to the vertical direction, the diagonal direction, and the anti-diagonal direction are obtained.

And taking the texture direction corresponding to the minimum second correlation value as the target texture direction of the color pixel under the condition that the difference value between the minimum second correlation value and the second smallest second correlation value is larger than the preset difference value.

In the case where the color pixel is in the flat region, or after the target texture direction of the color pixel has been determined, the interpolation weight W_C1 corresponding to the color pixel C1 is calculated using the pixels shown in fig. 11.

A flat area: w _ C1 is 0.5 × C1 (W1+ W2+ W3+ W4+ W5+ W6+ W7+ W8)/(C1+ C2+ C3+ C4), and when the color pixel C1 is in the flat region, the average values of W1 to W8 and the average values of C1 to C4 in fig. 9 are obtained, and the ratio of the two average values is multiplied by C1 to obtain the interpolation weight W _ C1.

After determining the target texture direction of a color pixel, the interpolation weight W_C1 for the color pixel C1 may be calculated as follows:

Horizontal direction DirH: W_C1 = (2 × W8 + W3)/3

Vertical direction DirV: W_C1 = (2 × W1 + W6)/3

Anti-diagonal direction DirA: W_C1 = 0.5 × W2 + 0.5 × W7

Diagonal direction DirD: W_C1 = (3 × W1 + 3 × W8 + W4 + W5)/8
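The directional formulas above can be collected into one function (the neighbour indexing W1..W8 follows the layout of fig. 11, which is assumed here):

```python
def directional_weight(direction, W):
    """Directional interpolation weight W_C1; W maps the neighbour
    index (1..8, layout of fig. 11 assumed) to its pixel value."""
    if direction == 'DirH':
        return (2 * W[8] + W[3]) / 3
    if direction == 'DirV':
        return (2 * W[1] + W[6]) / 3
    if direction == 'DirA':
        return 0.5 * W[2] + 0.5 * W[7]
    if direction == 'DirD':
        return (3 * W[1] + 3 * W[8] + W[4] + W[5]) / 8
    raise ValueError(direction)
```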

After traversing each pixel in the original image corresponding to the color pixel, the interpolation weight corresponding to each color pixel can be obtained, thereby obtaining an interpolation weight map. And carrying out fusion processing on the interpolation weight graph and the original image to obtain a full-size panchromatic channel image.

As shown in fig. 12, after the interpolation weights w1, w2, w3, and w4 corresponding to the color pixels C1, C2, C3, and C4 are calculated, they may be reassigned according to the intensity ratio or intensity difference between the interpolation weights and the corresponding pixel values in the original image, resulting in a full-size panchromatic channel image. Taking w1′ as an example: when the pixel sum (C1+C2+C3+C4) is less than a certain threshold, e.g. (C1+C2+C3+C4) < 100, the calculation is considered low-intensity and the low-intensity calculation mode is triggered; when the pixel sum (C1+C2+C3+C4) is greater than or equal to the threshold, it is considered normal intensity and the normal-intensity calculation mode is triggered.

Normal intensity: w1′ = C1 × (w1+w2+w3+w4)/(C1+C2+C3+C4)

Low intensity: w1′ = C1 + 0.25 × (w1+w2+w3+w4) − 0.25 × (C1+C2+C3+C4)

In the same manner, w2′, w3′, and w4′ can be calculated, yielding a partial panchromatic image, as shown in fig. 12, in which the color pixels C1, C2, C3, and C4 have been interpolated into panchromatic pixels. By the same process, every color pixel in the original image may be interpolated into a corresponding panchromatic pixel, resulting in a full-size panchromatic channel image, i.e., a W-channel image, the same size as the original image.
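The normal/low-intensity reassignment above can be sketched as follows (the threshold of 100 follows the example in the text; names are illustrative):

```python
def reassign_weight(w, c, threshold=100):
    """Reassign the interpolation weight w1' from the weights
    w = (w1..w4) and the original pixel values c = (C1..C4)."""
    sw, sc = sum(w), sum(c)
    if sc < threshold:                    # low-intensity mode
        return c[0] + 0.25 * sw - 0.25 * sc
    return c[0] * sw / sc                 # normal-intensity mode
```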

In one embodiment, obtaining the interpolation weight corresponding to the color pixel based on each associated pixel of the color pixel in the target texture direction includes:

and obtaining the interpolation weight corresponding to the position of the color pixel according to the proportional relation of the color pixel in the panchromatic associated pixel associated in the target texture direction.

After the electronic equipment determines the target texture direction of the color pixel, the electronic equipment calculates the interpolation weight corresponding to the color pixel according to the proportional relation between the panchromatic associated pixels associated with the color pixel in the target texture direction and the panchromatic associated pixels. And traversing each pixel in the original image corresponding to the color pixel according to the same processing mode, and obtaining the interpolation weight corresponding to each color pixel in the original image when the traversal is finished.

In this embodiment, the interpolation weight corresponding to the color pixel is calculated according to the proportional relationship between the panchromatic associated pixels associated with the color pixel in the target texture direction, and the interpolation weight corresponding to the color pixel can be calculated more accurately by using not only the information of the color pixel itself but also the information of the associated panchromatic pixel in the vicinity of the color pixel.

As shown in fig. 13, a schematic diagram of generating the second target image in the second definition mode using the full-resolution output mode (Fullsize mode) is provided. Fig. 13 is a flowchart of the interpolation algorithm that outputs the RGGB format in Fullsize mode, i.e., a remosaic algorithm flowchart. The algorithm flow is as follows. First, at each R\G\B pixel position, the W value at that position is interpolated with reference to the characteristics of the surrounding pixels. Second, the interpolated pixel values are iteratively optimized with reference to the original R\G\B channel information under the same Quad, obtaining a full-size W-channel image. Third, based on the full-size W-channel image, the R, G, and B channels at the specific positions are interpolated using bilateral filtering, and a full-size Bayer-format image is output. In other embodiments, the remosaic recovery format is not limited to RGGB\GRBG\BGGR\GBRG, and an RGB image can be output directly after the image sensor is modified.

In one embodiment, as shown in fig. 14, a flow diagram for generating a second target image in a second sharpness mode is provided.

Step 1402, a pixel in the original image is input. Step 1404, determine whether the pixel at the center position is a W pixel; if so, skip to the next pixel; if not, execute step 1406 and enter the flat-area determination.

If the pixel is determined to be in a flat area, step 1408 interpolates the output value using the flat-area weight. Step 1410, if the pixel is not in a flat area, determine the texture direction of the region, and execute step 1412 to perform the interpolation calculation using the associated pixels in the determined texture direction, obtaining the interpolation weight corresponding to the pixel. Step 1414, based on the interpolation weights and the original image, output the full-size W-channel map. In this embodiment, based on the surrounding channel information of each R\G\B pixel in the original image, the corresponding W value is interpolated at the R\G\B pixel position, obtaining a full-size W-channel image.
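The per-pixel flow of fig. 14 can be sketched as below; the dict fields are illustrative stand-ins for the real per-pixel data, and the candidate weights are assumed to be precomputed by the routines described earlier:

```python
def w_value(pixel):
    """Per-pixel flow of fig. 14 (field names are illustrative)."""
    if pixel['is_w']:
        return pixel['value']               # step 1404: W pixels pass through
    if pixel['is_flat']:
        return pixel['flat_weight']         # step 1408: flat-area weight
    d = pixel['texture_direction']          # step 1410: judged texture direction
    return pixel['dir_weights'][d]          # step 1412: directional weight

def fill_w_channel(pixels):
    """Step 1414: traverse the original image's pixels to output the
    full-size W-channel values."""
    return [w_value(p) for p in pixels]
```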

In one embodiment, as shown in fig. 15, the method further comprises:

step 1502, in a third definition mode, obtaining a first combined image according to a first pixel value read by combining a plurality of panchromatic pixels corresponding to the same panchromatic filter in the filter set and a second pixel value read by combining a plurality of color pixels corresponding to the same color filter; the color pixels include a first color-sensitive pixel, a second color-sensitive pixel, and a third color-sensitive pixel.

The third definition mode refers to a first-level pixel-binning readout mode with medium definition, medium power consumption, and a medium frame rate. The resolution and power consumption corresponding to the third definition mode are greater than those corresponding to the first definition mode, and its frame rate is less than the frame rate corresponding to the first definition mode. The resolution and power consumption corresponding to the third definition mode are smaller than those corresponding to the second definition mode, and its frame rate is greater than the frame rate corresponding to the second definition mode. The third definition mode may specifically be the default mode for image and video capture.

In the case where a photographing instruction is received, it is detected whether a user selects a desired definition mode to use, whether preview photographing is used, and a current environment. And under the condition that the definition mode required to be used is not selected by the user, the preview shooting is not used, and the current environment is not in the night scene mode, responding to the shooting instruction by using the third definition mode.

In the third definition mode, the light transmitted by the electronic device through the filter array 22 is projected onto the pixel array 23, and the pixel array 23 is used for receiving the light passing through the corresponding filter array 22 to generate an electrical signal. The pixel array 23 includes a plurality of panchromatic pixels and a plurality of color pixels, each panchromatic pixel corresponds to one sub-filter of the panchromatic filter 223, and each color pixel corresponds to one sub-filter of the color filter 224. The electronic device combines the read first pixel values according to the plurality of panchromatic pixels corresponding to the panchromatic filter 223 in the filter set 222 and the read second pixel values according to the plurality of color pixels corresponding to the color filter 224 to obtain a first combined image.

Further, the electronic device merges the panchromatic pixels corresponding to the same panchromatic filter 223 in the filter set 222 to read out the first pixel values, merges the first color-sensitive pixels corresponding to the same first filter, the second color-sensitive pixels corresponding to the same second filter, and the third color-sensitive pixels corresponding to the same third filter to read out the corresponding second pixel values, and generates the first merged image based on the first pixel values and the second pixel values.

1504, interpolating the panchromatic pixel, the second color photosensitive pixel and the third color photosensitive pixel in the first combined image into a first color photosensitive pixel by utilizing texture information of the panchromatic pixel, the second color photosensitive pixel and the third color photosensitive pixel in the first combined image to obtain a full-arrangement first channel map; all pixels in the full-arrangement first channel map are first color sensitive pixels.

And interpolating all panchromatic pixels in the first combined image into first color photosensitive pixels by using texture information of all the panchromatic pixels in the first combined image, interpolating all the second color photosensitive pixels in the first combined image into first color photosensitive pixels by using texture information of all the second color photosensitive pixels in the first combined image, and interpolating all the third color photosensitive pixels in the first combined image into first color photosensitive pixels by using texture information of all the third color photosensitive pixels in the first combined image to obtain a fully-arranged first channel map. All pixels in the full-arrangement first channel map are first color sensitive pixels.

In one embodiment, interpolating the panchromatic pixels, the second color-sensitive pixels, and the third color-sensitive pixels in the first merged image into first color-sensitive pixels using texture information of the panchromatic pixels, the second color-sensitive pixels, and the third color-sensitive pixels in the first merged image to obtain a fully-aligned first channel map, includes:

interpolating pixels corresponding to panchromatic pixel positions in a first combined channel image of the first color photosensitive pixels into first color photosensitive pixels by using texture information of the panchromatic pixels in the first combined image to obtain a first intermediate channel image; and respectively utilizing texture information provided by the second color photosensitive pixels and the third color photosensitive pixels in the first combined image to interpolate the first middle channel image into a fully-arranged first channel image.

The electronic device decomposes the first merged image into a first merged channel map, a second merged channel map, and a third merged channel map according to pixel type. The first merged channel map includes first color-sensitive pixels and empty pixels. An empty pixel is a pixel without any information. Similarly, the second merged channel map includes second color-sensitive pixels and empty pixels, and the third merged channel map includes third color-sensitive pixels and empty pixels.

The first intermediate channel map is a channel map resulting from interpolating pixels at panchromatic pixel locations to first color-sensitive pixels in the first merged channel map.

The electronic device determines a pixel at a panchromatic pixel position of the first combined image in a first combined channel map of the first color photosensitive pixel by using texture information of the panchromatic pixel in the first combined image, interpolates the pixel into the first color photosensitive pixel until the pixel at each panchromatic pixel position in the first combined channel map is interpolated into the first color photosensitive pixel, and obtains a first intermediate channel map.

The electronic device determines a pixel at a second color sensitive pixel position in the first combined image in the first intermediate channel image by using texture information provided by each second color sensitive pixel in the first combined image, interpolates the pixel into a first color sensitive pixel, determines a pixel at a third color sensitive pixel position in the first combined image in the first intermediate channel image, and interpolates the pixel into a first color sensitive pixel until the pixel at each second color sensitive pixel position and the pixel at each third color sensitive pixel position in the first intermediate channel image are interpolated into the first color sensitive pixel, thereby obtaining a full-array first channel image.

Step 1506, interpolating the first merged image according to the texture information of the fully-arranged first channel map and the second color photosensitive pixels and the third color photosensitive pixels in the first merged image to obtain a locally-arranged second channel map and a locally-arranged third channel map; the partially arranged second channel patterns correspond to the second color photosensitive pixels, and the partially arranged third channel patterns correspond to the third color photosensitive pixels.

And interpolating the second color photosensitive pixels in the first combined image through the fully arranged first channel images and the texture information of the second color photosensitive pixels in the first combined image to obtain a locally arranged second channel image. And interpolating the third color photosensitive pixels in the first combined image through the fully arranged first channel images and the texture information of the third color photosensitive pixels in the first combined image to obtain a locally arranged third channel image. The local arrangement second channel diagram corresponding to the second color photosensitive pixels is arranged at intervals, and the local arrangement third channel diagram corresponding to the third color photosensitive pixels is arranged at intervals.

In one embodiment, based on the fully-arranged first channel map and the texture information of the second color-sensitive pixels and the third color-sensitive pixels in the first merged image, joint bilateral filtering is used to interpolate the first merged image to obtain the partially-arranged second channel map and the partially-arranged third channel map. The basic principle of joint bilateral filtering is to weight each pixel according to the relation between its position and the position of the central pixel, obtain a ratio relation by dividing the pixel value by that of the central pixel, and finally convert the pixel value of the corresponding pixel according to that ratio relation.
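A minimal sketch of joint bilateral filtering in this guided-interpolation setting, estimating a missing color value from sparse color samples while using the full-size W channel as the guide (the Gaussian kernels, sigma values, and `None`-marked empty pixels are assumptions of this sketch, not the patent's exact weighting):

```python
import math

def joint_bilateral(guide, sparse, y, x, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Estimate the missing color value at (y, x) from sparse color
    samples, guided by the full-size guide (W) channel."""
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if not (0 <= yy < len(guide) and 0 <= xx < len(guide[0])):
                continue
            if sparse[yy][xx] is None:   # no color sample at this position
                continue
            w_s = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
            w_r = math.exp(-((guide[y][x] - guide[yy][xx]) ** 2)
                           / (2 * sigma_r ** 2))
            num += w_s * w_r * sparse[yy][xx]
            den += w_s * w_r
    return num / den if den else 0.0
```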

In one embodiment, interpolating the first merged image by texture information of the fully arranged first channel map and the second color sensitive pixels and the third color sensitive pixels in the first merged image to obtain the partially arranged second channel map and the partially arranged third channel map includes: interpolating a second combined channel map of the second color photosensitive pixels through the fully arranged first channel map and texture information of the second color photosensitive pixels in the first combined image to obtain a partially arranged second channel map; and interpolating a third combined channel image of the third color photosensitive pixels through the fully arranged first channel image and texture information of the third color photosensitive pixels in the first combined image to obtain a locally arranged third channel image.

And interpolating the second combined channel image of the second color photosensitive pixels through the fully arranged first channel image and the texture information of the second color photosensitive pixels in the first combined image to obtain a locally arranged second channel image. For example, if the second color sensitive pixel is a red pixel, the second merged channel map of the red pixel is interpolated according to texture information of the fully-arranged first channel map and the red pixel in the first merged image, so as to obtain a partially-arranged second channel map of the red pixel. The partial arrangement of the red pixels is arranged at intervals in each red pixel in the second channel diagram.

And interpolating a third combined channel image of the third color photosensitive pixels through the fully arranged first channel image and texture information of the third color photosensitive pixels in the first combined image to obtain a locally arranged third channel image. For example, if the third color photosensitive pixel is a blue pixel, the third merged channel map of the blue pixel is interpolated according to texture information of the fully-arranged first channel map and the blue pixel in the first merged image, so as to obtain a locally-arranged third channel map of the blue pixel. The blue pixels in the local arrangement third channel diagram are arranged at intervals.

Step 1508, generating a third target image based on the fully arranged first channel map, the partially arranged second channel map and the partially arranged third channel map; the definition corresponding to the third definition mode is greater than the definition corresponding to the first definition mode.

The third target image is generated based on the fully-arranged first channel map, the partially-arranged second channel map, and the partially-arranged third channel map, that is, the third target image includes first color-sensitive pixels, second color-sensitive pixels, and third color-sensitive pixels. For example, if the full-arrangement first channel map is a full-arrangement G (Green) channel map, the partial-arrangement second channel map is a partial-arrangement R (Red) channel map, and the partial-arrangement third channel map is a partial-arrangement B (Blue) channel map, an RGB target image may be generated based on the full-arrangement G channel map, the partial-arrangement R channel map, and the partial-arrangement B channel map.

In one embodiment, the electronic device may combine the fully arranged first channel map, the partially arranged second channel map, and the partially arranged third channel map to generate a third target image.

In another embodiment, the third target image may be a Bayer array image. The electronic device determines, in sequence, the pixel required at the current position in the Bayer array image to be generated, and extracts a pixel from the corresponding position of the fully arranged first channel map, the partially arranged second channel map, or the partially arranged third channel map as the pixel at the current position. When pixels at all positions in the Bayer array image to be generated have been extracted, the third target image is obtained.

Extracting a pixel from the corresponding position of the fully arranged first channel map, the partially arranged second channel map, or the partially arranged third channel map as the pixel at the current position in the Bayer array image to be generated includes: determining the required channel map, from among the fully arranged first channel map, the partially arranged second channel map, and the partially arranged third channel map, according to the pixel required at the current position in the Bayer array image to be generated; and extracting the pixel from the corresponding position of the required channel map as the pixel at the current position in the Bayer array image to be generated.
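The per-position extraction can be sketched as follows, assuming an RGGB Bayer layout (the concrete layout is an assumption for illustration; the patent only requires that each position pulls its pixel from the channel map that the Bayer pattern demands):

```python
import numpy as np

def assemble_bayer(g_full, r_map, b_map):
    """Build an RGGB Bayer mosaic by extracting, for each position,
    the pixel required there from the corresponding channel map.
    Assumes R at even-row/even-col sites, B at odd/odd, G elsewhere."""
    h, w = g_full.shape
    bayer = np.empty((h, w), dtype=g_full.dtype)
    for y in range(h):
        for x in range(w):
            if y % 2 == 0 and x % 2 == 0:      # R site
                bayer[y, x] = r_map[y, x]
            elif y % 2 == 1 and x % 2 == 1:    # B site
                bayer[y, x] = b_map[y, x]
            else:                              # G site
                bayer[y, x] = g_full[y, x]
    return bayer
```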

In this image generation method, in the third definition mode, the first combined image is generated from the first pixel values read out by combining the plurality of panchromatic pixels corresponding to the panchromatic filter in the filter set and the second pixel values read out by combining the plurality of color pixels corresponding to the color filter, so the first combined image is reduced in size and the power consumption required for image generation is low. The panchromatic pixels have a higher signal-to-noise ratio, and using the texture information of the panchromatic pixels in the first combined image makes the interpolation of the fully arranged first channel map more accurate and gives it a higher signal-to-noise ratio. Finally, a third target image with more information and clearer detail resolution can be generated based on the fully arranged first channel map with the higher signal-to-noise ratio, the partially arranged second channel map, and the partially arranged third channel map.

In one embodiment, an image generation method is provided, which is applied to an image sensor, the image sensor includes a filter array 22 and a pixel array 23, the filter array 22 includes a minimum repeating unit 231, the minimum repeating unit 231 includes a plurality of filter sets 222, the filter sets 222 include color filters 224 and panchromatic filters 223, the color filters 224 have narrower spectral responses than the panchromatic filters 223, and the color filters 224 and the panchromatic filters 223 include 4 sub-filters; the pixel array 23 includes a plurality of panchromatic pixels and a plurality of color pixels, each panchromatic pixel corresponds to one sub-filter of the panchromatic filter 223, and each color pixel corresponds to one sub-filter of the color filter 224;

the method comprises the following steps:

in the first definition mode, the electronic device obtains a first combined image from the first pixel values read out by combining the plurality of panchromatic pixels corresponding to the panchromatic filter 223 in the filter set 222 and the second pixel values read out by combining the plurality of color pixels corresponding to the color filter 224.

The electronic device merges a plurality of panchromatic pixels in a first diagonal direction in the first merged image to obtain a panchromatic image, and merges a plurality of color pixels in a second diagonal direction to obtain a color image; the first diagonal direction is different from the second diagonal direction.

The electronic device traverses the pixel positions in the Bayer array image to be generated, determines the pixel at each position according to the panchromatic pixel at the corresponding position in the panchromatic image and the color pixel at the corresponding position in the color image, and obtains the first target image once pixels at all positions in the Bayer array image to be generated have been determined.
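The two-stage readout of the first definition mode can be sketched as below. This is a minimal illustration assuming averaging as the combining operation and assuming that, after the first-stage binning, the panchromatic cells of each 2×2 cell group lie on the main diagonal and the color cells on the anti-diagonal; the actual combining circuitry follows the filter array described earlier.

```python
import numpy as np

def bin_2x2(raw):
    """First-stage binning: each 2x2 group of sub-pixels under one
    (pan)chromatic filter is read out as a single averaged value."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def diagonal_merge(binned):
    """Second-stage merge on the first combined image: within each
    2x2 cell group, average the two panchromatic pixels on the first
    diagonal and the two colour pixels on the second diagonal."""
    h, w = binned.shape
    pan = np.empty((h // 2, w // 2))
    col = np.empty((h // 2, w // 2))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            pan[y // 2, x // 2] = (binned[y, x] + binned[y + 1, x + 1]) / 2
            col[y // 2, x // 2] = (binned[y, x + 1] + binned[y + 1, x]) / 2
    return pan, col
```

Each stage quarters the pixel count, which is the source of the reduced readout power consumption and higher frame rate described above.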

In the second definition mode, the electronic device traverses each pixel in the original image corresponding to the color pixel.

In the event that the current pixel of the original image is determined to be a color pixel, the electronic device determines the variance of the pixels within a preset range that includes the color pixel.

If the variance is smaller than a preset threshold value, the color pixel is in a flat area; if the variance is greater than or equal to the preset threshold value, the color pixel is in the texture area.
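A minimal sketch of this flat/texture decision, assuming the variance is computed over all pixels in the preset range and compared against a tunable threshold (window size and threshold are tuning assumptions, not values fixed by the text):

```python
import numpy as np

def classify_region(window, threshold):
    """Classify the neighbourhood of a colour pixel as flat or
    textured by the variance of the pixels in the preset range:
    variance below the threshold means a flat area, otherwise a
    texture area."""
    return "flat" if np.var(window) < threshold else "texture"
```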

In the case that the color pixel is in the flat region, the electronic device determines a first pixel mean value of the panchromatic pixels within a preset range including the color pixel and a second pixel mean value of the color pixels within the preset range, and determines the interpolation weight corresponding to the color pixel based on the proportional relation between the first pixel mean value and the second pixel mean value.
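One plausible reading of the "proportional relation" is the ratio of the two means, with the panchromatic value at the color-pixel position then estimated by scaling the color value by that ratio. The text does not fix the formula, so the sketch below is an assumption for illustration:

```python
import numpy as np

def flat_region_weight(pan_values, color_values):
    """Interpolation weight for a colour pixel in a flat region:
    ratio of the first pixel mean (neighbouring panchromatic pixels)
    to the second pixel mean (neighbouring colour pixels).  One
    plausible reading of the 'proportional relation' in the text."""
    return np.mean(pan_values) / np.mean(color_values)

def interpolate_flat(color_value, weight):
    """Estimated panchromatic value at the colour-pixel position."""
    return color_value * weight
```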

In the case that the color pixel is in the texture region, the electronic device determines the panchromatic associated pixels with which the color pixel is associated in each texture direction; determines, based on the panchromatic associated pixels associated with each texture direction, the first associated value corresponding to the color pixel in that texture direction; and takes the texture direction whose first associated value satisfies the first association condition as the target texture direction of the color pixel.

In the case that none of the first associated values corresponding to the color pixel in the texture directions satisfies the first association condition, the electronic device determines the panchromatic associated pixels and color associated pixels with which the color pixel is associated in each texture direction; determines, based on the panchromatic associated pixels and color associated pixels associated with each texture direction, the second associated value corresponding to the color pixel in that texture direction; and takes the texture direction whose second associated value satisfies the second association condition as the target texture direction of the color pixel.
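The direction selection can be sketched as follows. The concrete formulas are illustrative assumptions: the first associated value is taken as the mean absolute difference of the panchromatic associated pixels along a direction (smaller means the texture runs along that direction), and the first association condition as "minimal and below a threshold"; the patent does not fix either.

```python
import numpy as np

def target_texture_direction(assoc, first_threshold):
    """Pick the target texture direction of a colour pixel.

    `assoc` maps each texture direction to the panchromatic pixels
    associated with the colour pixel along it.  A direction's first
    associated value is sketched as the mean absolute difference of
    those pixels; the direction is accepted (first association
    condition) if its value is the minimum and below the threshold.
    Returns None when no direction qualifies, so the caller can fall
    back to the second associated values that also use colour pixels.
    """
    scores = {d: np.mean(np.abs(np.diff(v))) for d, v in assoc.items()}
    best = min(scores, key=scores.get)
    if scores[best] < first_threshold:
        return best
    return None
```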

And the electronic equipment performs interpolation processing on the color pixels in the original image according to the panchromatic associated pixels associated with the color pixels in the target texture direction to obtain interpolation weights corresponding to the color pixels.

Interpolating the color pixels into panchromatic pixels according to the interpolation weights of the color pixels until the traversal is completed to obtain a full-size panchromatic channel image; the pixels in the full-size panchromatic channel map are panchromatic pixels.

The electronic device generates a second target image based on the full-size panchromatic channel map and the original image; the definition corresponding to the second definition mode is greater than the definition corresponding to the first definition mode.

In the third definition mode, the electronic device obtains a first combined image from the first pixel values read out by combining the plurality of panchromatic pixels corresponding to the same panchromatic filter 223 in the filter set 222 and the second pixel values read out by combining the plurality of color pixels corresponding to the same color filter 224; the color pixels include a first color photosensitive pixel, a second color photosensitive pixel, and a third color photosensitive pixel.

Using the texture information of each panchromatic pixel in the first combined image, each panchromatic pixel is interpolated into a first color photosensitive pixel; using the texture information of each second color photosensitive pixel, each second color photosensitive pixel is interpolated into a first color photosensitive pixel; and using the texture information of each third color photosensitive pixel, each third color photosensitive pixel is interpolated into a first color photosensitive pixel, thereby obtaining the fully arranged first channel map. All pixels in the fully arranged first channel map are first color photosensitive pixels.

Interpolating the first combined image through texture information of the fully-arranged first channel images and the second color photosensitive pixels and the third color photosensitive pixels in the first combined image to obtain a locally-arranged second channel image and a locally-arranged third channel image; the partially arranged second channel patterns correspond to the second color photosensitive pixels, and the partially arranged third channel patterns correspond to the third color photosensitive pixels.

The electronic equipment generates a third target image based on the fully arranged first channel images, the partially arranged second channel images and the partially arranged third channel images, wherein the definition corresponding to the third definition mode is larger than that corresponding to the first definition mode, and the definition corresponding to the third definition mode is smaller than that corresponding to the second definition mode.

In this embodiment, three definition modes are provided, which can adapt to different scenes. The first definition mode is used in scenes with low resolution requirements, such as preview and night-scene shooting. The first combined image, generated from the first pixel values read out by combining the plurality of panchromatic pixels corresponding to the panchromatic filter 223 in the filter set 222 and the second pixel values read out by combining the plurality of color pixels corresponding to the color filter 224, is reduced in size, so the power consumption for generating the image is low. The plurality of panchromatic pixels in a first diagonal direction in the first combined image are combined, and the plurality of color pixels in a second diagonal direction different from the first diagonal direction are combined, so the resulting first target image is further reduced in size. The panchromatic pixels have a higher signal-to-noise ratio and the frame rate of the image is high, achieving the image processing effect of two-stage combined pixel output with lower power consumption and a better signal-to-noise ratio.

In scenes with higher resolution requirements, the second definition mode is used. The texture information of the color pixels in the original image is used to calculate the interpolation weight corresponding to each color pixel, each color pixel is interpolated into a panchromatic pixel according to its interpolation weight, and a full-size panchromatic channel map of the same size as the original image is obtained by applying the same processing to every color pixel. The pixels in the full-size panchromatic channel map are all panchromatic pixels. The second target image is generated based on the full-size panchromatic channel map and the original image, so panchromatic channel information can be fused into the original image and a second target image with more information and clearer detail resolution can be generated. This achieves the image processing effect of full-size, full-resolution output with high definition, high power consumption, and a low frame rate, and can meet the user's demand for high image quality.

In general scenes, the third definition mode is used. The first combined image, generated from the first pixel values read out by combining the plurality of panchromatic pixels corresponding to the panchromatic filter 223 in the filter set 222 and the second pixel values read out by combining the plurality of color pixels corresponding to the color filter 224, is reduced in size, so the power consumption required for generating the image is low. The panchromatic pixels have a higher signal-to-noise ratio, and using the texture information of the panchromatic pixels in the first combined image makes the interpolation of the fully arranged first channel map more accurate and gives it a higher signal-to-noise ratio. The fully arranged first channel map is then used to interpolate the partially arranged second channel map and the partially arranged third channel map. Finally, based on the fully arranged first channel map with the higher signal-to-noise ratio, the partially arranged second channel map, and the partially arranged third channel map, a third target image with medium definition, medium power consumption, and a medium frame rate can be achieved.

It should be understood that although the various steps in the flowcharts of fig. 2-15 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict restriction on the order in which the steps are performed, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-15 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.

Fig. 16 is a block diagram showing the configuration of an image generating apparatus according to an embodiment. As shown in fig. 16, the image generating apparatus is applied to an image sensor, the image sensor includes a filter array and a pixel array 23, the filter array includes a minimum repeating unit 231, the minimum repeating unit 231 includes a plurality of filter sets 222, the filter sets 222 include color filters 224 and panchromatic filters 223, the color filters 224 have narrower spectral responses than the panchromatic filters 223, and the color filters 224 and the panchromatic filters 223 include 4 sub-filters; the pixel array 23 includes a plurality of panchromatic pixels and a plurality of color pixels, each panchromatic pixel corresponds to one sub-filter of the panchromatic filter 223, and each color pixel corresponds to one sub-filter of the color filter 224;

the image generation apparatus 1600 includes:

a first merging module 1602, configured to, in the first definition mode, obtain a first combined image from the first pixel values read out by combining a plurality of panchromatic pixels corresponding to the panchromatic filter 223 in the filter set 222 and the second pixel values read out by combining a plurality of color pixels corresponding to the color filter 224;

a generating module 1604 for combining panchromatic pixels in a first diagonal direction in the first combined image and combining color pixels in a second diagonal direction to obtain a first target image; the first diagonal direction is different from the second diagonal direction.

In this embodiment, the image sensor includes a filter array 22 and a pixel array 23. The filter array 22 includes a minimum repeating unit 231, the minimum repeating unit 231 includes a plurality of filter sets 222, each filter set 222 includes a color filter 224 and a panchromatic filter 223, the color filter 224 has a narrower spectral response than the panchromatic filter 223, and each color filter 224 and panchromatic filter 223 includes 4 sub-filters. The pixel array 23 includes a plurality of panchromatic pixels and a plurality of color pixels; each panchromatic pixel corresponds to one sub-filter of the panchromatic filter 223, and each color pixel corresponds to one sub-filter of the color filter 224. The first definition mode is used in scenes with a lower resolution requirement. The first combined image, generated from the first pixel values read out by combining a plurality of panchromatic pixels corresponding to the panchromatic filter 223 in the filter set 222 and the second pixel values read out by combining a plurality of color pixels corresponding to the color filter 224, is reduced in size, so the power consumption consumed by image generation is low. The plurality of panchromatic pixels in a first diagonal direction in the first combined image are combined, and the plurality of color pixels in a second diagonal direction different from the first diagonal direction are combined, so the resulting first target image is further reduced in size; the panchromatic pixels have a higher signal-to-noise ratio and the frame rate of the image is high, achieving the image processing effect of two-stage combined pixel output with lower power consumption and a better signal-to-noise ratio.

In one embodiment, the generating module 1604 is further configured to combine a plurality of panchromatic pixels in the first combined image in a first diagonal direction to obtain a panchromatic image; combining the plurality of color pixels in the second diagonal direction to obtain a color image; a first target image is generated from the panchromatic image and the color image.

In this embodiment, a plurality of panchromatic pixels in the first combined image in the first diagonal direction are combined to obtain a panchromatic image, a plurality of color pixels in the second diagonal direction are combined to obtain a color image, the generated image noise is low due to the integrated pixel reading mode, the light entering amount of a panchromatic channel is larger, the panchromatic pixels have a higher signal-to-noise ratio, the first target image is generated according to the panchromatic image and the color image, the color image can be fused by using a region with the higher signal-to-noise ratio, and the imaging quality is higher.

In an embodiment, the generating module 1604 is further configured to traverse the pixel positions in the first target image to be generated, and determine the pixel at each pixel position according to the panchromatic pixel at the corresponding position in the panchromatic image and the color pixel at the corresponding position in the color image, until pixels at all pixel positions in the first target image to be generated have been obtained, yielding the first target image.

In this embodiment, the pixel position in the first target image to be generated is traversed, the pixel at the pixel position in the first target image to be generated is determined according to the panchromatic pixel corresponding to the pixel position in the panchromatic image and the color pixel corresponding to the pixel position in the color image, until the pixels at all the pixel positions in the first target image to be generated are obtained, the information content of the panchromatic channel with the high signal-to-noise ratio can be brought into the first target image, and thus the first target image is accurately generated.

In one embodiment, the apparatus further comprises: an interpolation module; the interpolation module is used for interpolating all color pixels in the original image into panchromatic pixels in a second definition mode to obtain a full-size panchromatic channel image; the pixels in the full-size panchromatic channel map are panchromatic pixels.

A generation module 1604 for generating a second target image based on the full-size panchromatic channel map and the original image; the definition corresponding to the second definition mode is greater than the definition corresponding to the first definition mode.

In this embodiment, the definition corresponding to the second definition mode is greater than the definition corresponding to the first definition mode, and in the second definition mode, the texture information of the color pixels in the original image is used to interpolate all the color pixels in the original image into panchromatic pixels, so as to obtain a full-size panchromatic channel image with the same size as the original image. The pixels in the full-size panchromatic channel image are panchromatic pixels, the second target image is generated based on the full-size panchromatic channel image and the original image, panchromatic channel information can be fused into the original image, the second target image with more information and clearer detail analysis can be generated, the image processing effect of full-size full-resolution output with high definition, high power consumption and low frame rate is achieved, and the requirement of a user on the high quality of the image can be met.

In an embodiment, the generating module 1604 is further configured to interpolate the original image respectively by using bilateral filtering to obtain a first channel map of the first color photosensitive pixel, a second channel map of the second color photosensitive pixel, and a third channel map of the third color photosensitive pixel based on the full-size panchromatic channel map; and generating a second target image according to the first channel image, the second channel image and the third channel image.
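The bilateral-filtering step can be sketched as a joint (cross) bilateral filter in which the full-size panchromatic channel map serves as the guide: spatial weights fall off with distance and range weights with the difference in guide values, so edges in the panchromatic channel are preserved in the interpolated color channel. This is a simplified sketch; the window radius and sigma parameters are assumptions, not values from the text.

```python
import numpy as np

def joint_bilateral(sparse, mask, guide, sigma_s=1.0, sigma_r=10.0, radius=2):
    """Fill missing colour samples with a joint bilateral filter
    guided by the full-size panchromatic channel map.  `mask` marks
    positions where `sparse` holds a real sample."""
    h, w = sparse.shape
    out = sparse.astype(float)
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                continue  # known sample, keep as-is
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                        # spatial weight: distance in the image plane
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        # range weight: similarity in the guide channel
                        wr = np.exp(-((guide[y, x] - guide[ny, nx]) ** 2)
                                    / (2 * sigma_r ** 2))
                        num += ws * wr * sparse[ny, nx]
                        den += ws * wr
            if den > 0:
                out[y, x] = num / den
    return out
```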

In one embodiment, the interpolation module is further configured to traverse each pixel in the original image corresponding to the color pixel; determining texture information of the color pixels based on each pixel in a preset range containing the color pixels under the condition that the current pixels of the original image are determined to be the color pixels; and obtaining interpolation weights corresponding to the color pixels based on the texture information of the color pixels, and interpolating the color pixels into panchromatic pixels according to the interpolation weights of the color pixels until the full-size panchromatic channel image is obtained when traversal is completed.

In each traversal, when the current pixel of the original image is determined to be a color pixel, the electronic device uses each pixel in a preset range containing that color pixel, so it draws on not only the information of the color pixel itself but also the information of the pixels in the adjacent region, and can determine the texture information of the color pixel more accurately. Based on the texture information of the color pixel, the interpolation weight corresponding to the color pixel is obtained, and the color pixel is interpolated into a panchromatic pixel according to its interpolation weight; when the traversal is completed, the full-size panchromatic channel map is obtained, and obtained more accurately.

In one embodiment, the interpolation module is further configured to determine a variance of each pixel within a predetermined range including color pixels; if the variance is smaller than a preset threshold value, the color pixel is in a flat area; if the variance is greater than or equal to the preset threshold value, the color pixel is in the texture area.

In the present embodiment, by determining the variance of each pixel within a preset range including color pixels, it is possible to accurately determine texture information of the color pixels.

In one embodiment, the interpolation module is further configured to determine a first pixel mean value of each panchromatic pixel in a preset range including the color pixels and a second pixel mean value of each color pixel in the preset range if the color pixels are in the flat area; and obtaining the interpolation weight corresponding to the color pixel based on the proportional relation between the first pixel mean value and the second pixel mean value.

In this embodiment, in the case that the color pixels are in the flat region, the first pixel mean value of each panchromatic pixel in the preset range including the color pixels and the second pixel mean value of each color pixel in the preset range are determined, and based on the proportional relationship between the first pixel mean value and the second pixel mean value, the interpolation weight corresponding to the color pixel at the color pixel position in the original image can be accurately calculated.

In one embodiment, the interpolation module is further configured to determine a target texture direction of the color pixel if the color pixel is in the texture region; and obtaining the interpolation weight corresponding to the color pixel based on each related pixel of the color pixel in the target texture direction.

In this embodiment, when a color pixel is in a texture region, a target texture direction of the color pixel is determined, and an interpolation weight corresponding to each color pixel is accurately calculated based on each associated pixel of the color pixel in the target texture direction.

In one embodiment, the interpolation module is further configured to determine, if the color pixel is in the texture region, a panchromatic associated pixel with which the color pixel is associated in each texture direction; determining first associated values corresponding to the color pixels in all texture directions respectively based on panchromatic associated pixels associated with all texture directions respectively; and taking the texture direction corresponding to the first correlation value meeting the first correlation condition in the first correlation values as the target texture direction of the color pixel.

In the present embodiment, in the case where a color pixel is in the texture region, panchromatic associated pixels with which the color pixel is associated in each texture direction, respectively, are determined to determine the target texture direction of the color pixel by the panchromatic associated pixel associated with the color pixel. The first correlation value corresponding to the color pixel in each texture direction is determined based on the panchromatic related pixel associated with each texture direction, and the degree of correlation between each panchromatic related pixel and the color pixel can be determined, so that the target texture direction of the color pixel can be accurately determined based on the degree of correlation between the panchromatic related pixel and the color pixel.

In one embodiment, the interpolation module is further configured to determine a panchromatic associated pixel and a color associated pixel associated with the color pixel in each texture direction respectively, if the first associated value corresponding to the color pixel in each texture direction respectively does not satisfy the first associated condition; determining second correlation values corresponding to the color pixels in the texture directions respectively based on the panchromatic correlation pixels and the color correlation pixels which are correlated with the texture directions respectively; and taking the texture direction corresponding to the second correlation value meeting the second correlation condition in the second correlation values as the target texture direction of the color pixel.

In this embodiment, in a case where the first correlation values corresponding to the color pixels in the respective texture directions do not satisfy the first correlation condition, the panchromatic correlation pixel and the color correlation pixel associated with the color pixels in the respective texture directions are determined so as to determine the target texture direction of the color pixel by the panchromatic correlation pixel and the color correlation pixel associated with the color pixel.

In one embodiment, the interpolation module is further configured to obtain an interpolation weight corresponding to a color pixel position according to a proportional relationship between panchromatic associated pixels associated with the color pixel in the target texture direction.

In this embodiment, according to the proportional relationship between the panchromatic associated pixels associated with the color pixels in the target texture direction, not only the information of the color pixels itself but also the information of the associated panchromatic pixels in the vicinity of the color pixels are used, so that the interpolation weights corresponding to the positions of the color pixels can be determined and calculated more accurately.

In one embodiment, the apparatus further comprises: an interpolation module; the first merging module 1602 is configured to, in the third definition mode, obtain a first combined image from the first pixel values read out by combining the plurality of panchromatic pixels corresponding to the same panchromatic filter 223 in the filter set 222 and the second pixel values read out by combining the plurality of color pixels corresponding to the same color filter 224; the color pixels include a first color photosensitive pixel, a second color photosensitive pixel, and a third color photosensitive pixel;

the interpolation module is further configured to interpolate a panchromatic pixel, a second color photosensitive pixel and a third color photosensitive pixel in the first combined image into a first color photosensitive pixel by using texture information of the panchromatic pixel, the second color photosensitive pixel and the third color photosensitive pixel in the first combined image, so as to obtain a fully-arranged first channel map; all pixels in the full-arrangement first channel image are first color sensitive pixels; interpolating the first combined image through texture information of the fully-arranged first channel images and the second color photosensitive pixels and the third color photosensitive pixels in the first combined image to obtain a locally-arranged second channel image and a locally-arranged third channel image; the partially arranged second channel patterns correspond to the second color photosensitive pixels, and the partially arranged third channel patterns correspond to the third color photosensitive pixels;

the generating module 1604 is further configured to generate a third target image based on the fully arranged first channel map, the locally arranged second channel map, and the locally arranged third channel map; the definition corresponding to the third definition mode is greater than the definition corresponding to the first definition mode.

In the third definition mode, because the first pixel values are read out by combining the plurality of panchromatic pixels corresponding to the panchromatic filter 223 in the filter set 222, and the second pixel values are read out by combining the plurality of color pixels corresponding to the color filter 224, the generated first combined image is reduced in size and the power consumption required for generating an image is low. The panchromatic pixels have a higher signal-to-noise ratio, so by using the texture information of the panchromatic pixels in the first combined image, the fully arranged first channel map is interpolated more accurately and has a higher signal-to-noise ratio; finally, a third target image carrying more information and resolving detail more clearly can be generated based on the fully arranged first channel map with its higher signal-to-noise ratio, the partially arranged second channel map, and the partially arranged third channel map.

The division of the image generating apparatus into the above modules is merely illustrative; in other embodiments, the image generating apparatus may be divided into different modules as needed to implement all or part of its functions.

For specific limitations of the image generation apparatus, reference may be made to the limitations of the image generation method above, which are not repeated here. Each module in the image generating apparatus may be implemented wholly or partially by software, by hardware, or by a combination of the two. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call them and execute the operations corresponding to each module.

Fig. 17 is a schematic diagram of an internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, or a wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may include one or more processing units, and may be a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or the like. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image generation method provided in the embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium.

Each module in the image generation apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the electronic device. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are performed.

The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image generation method.

Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform an image generation method.

Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. The non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), or flash memory. Volatile memory may include RAM (Random Access Memory), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), ESDRAM (Enhanced Synchronous Dynamic Random Access Memory), SLDRAM (Synchronous Link Dynamic Random Access Memory), RDRAM (Rambus Dynamic Random Access Memory), and DRDRAM (Direct Rambus Dynamic Random Access Memory).

The above-mentioned embodiments express only several implementations of the present application, and although their description is specific and detailed, it should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
