Image processing method based on sensor characteristics

Document No.: 569651 | Published: 2021-05-18

Note: this technique, "Image processing method based on sensor characteristics" (基于感测器特性的影像处理方法), was created by 周旸庭, 李宗轩, 唐婉儒, and 陈世泽 on 2019-11-15. Abstract: The invention discloses an image processing method based on sensor characteristics, comprising the following steps: obtaining a constant c of the relationship between a plurality of output signals x and the noise standard deviations σ(x) of the plurality of output signals (namely σ(x) ≈ c·√x), wherein the plurality of output signals are a plurality of signals output by a sensor under the same sensor gain; and calculating an output value of a target pixel according to a plurality of pixel values within a sampling range, the constant, and a front-end gain, wherein an input value of the target pixel is one of the plurality of pixel values.

1. An image processing method based on sensor characteristics, comprising:

obtaining a constant of a relationship between a plurality of output signals and noise standard deviations of the plurality of output signals, wherein the plurality of output signals are a plurality of signals output by a sensor under the same sensor gain;

calculating a gradient value according to a plurality of pixel values in a sampling range, wherein the plurality of pixel values comprise an input value of a target pixel;

calculating a flat value and a texture value according to the plurality of pixel values;

performing a weight value calculation according to a weight change rate, the gradient value, a front end gain and the constant to obtain a weight value; and

performing a fusion calculation on the flat value and the texture value according to the weight value to obtain an output value of the target pixel.

2. The image processing method of claim 1, wherein the relationship is a relationship between the noise standard deviation of each of the plurality of output signals and the square root of each of the plurality of output signals.

3. The image processing method of claim 1, wherein the constant is an approximation of the noise standard deviation of any of the plurality of output signals divided by the square root of that output signal.

4. The image processing method as claimed in claim 1, wherein the step of calculating the gradient value comprises:

calculating a square root of each of the plurality of pixel values; and

calculating the gradient value according to the square roots of the plurality of pixel values.

5. The image processing method as claimed in claim 1, wherein the step of calculating the gradient value comprises:

calculating a horizontal gradient value according to the plurality of pixel values;

calculating a vertical gradient value according to the plurality of pixel values; and

calculating the gradient value according to the horizontal gradient value and the vertical gradient value.

6. The image processing method of claim 1, wherein the step of performing the weight value calculation comprises:

calculating a flat region threshold value according to a flat region threshold setting value, the front end gain and the constant;

calculating a texture region threshold value according to a texture region threshold setting value, the front end gain and the constant; and

calculating the weight value according to the flat region threshold value, the texture region threshold value, the gradient value and the constant.

7. The image processing method as claimed in claim 1, wherein the step of performing the fusion calculation comprises:

multiplying the texture value by the weight value to obtain a weighted texture value;

multiplying the flat value by a difference value between a maximum weight value and the weight value to obtain a weighted flat value;

summing the weighted texture value and the weighted flat value to obtain a sum value; and

dividing the sum value by the maximum weight value to obtain the output value.

8. The image processing method as claimed in claim 1, wherein the sensor is a photo sensor.

9. An image processing method based on sensor characteristics, comprising:

obtaining a constant of a relationship between a plurality of output signals and noise standard deviations of the plurality of output signals, wherein the plurality of output signals are a plurality of signals output by a sensor under the same sensor gain; and

an output value of a target pixel is calculated according to a plurality of pixel values within a sampling range, the constant and a front end gain, wherein an input value of the target pixel is one of the plurality of pixel values.

10. The image processing method as claimed in claim 9, wherein the sensor is a photo sensor.

Technical Field

The present invention relates to image processing methods, and more particularly, to an image processing method based on sensor characteristics.

Background

Conventional Alpha Blending determines the texture/flatness level of each pixel in an image by using the noise level (Noise Level), and gives each pixel a different blending rate (Blending Rate) according to that texture/flatness level for subsequent processing. In a current image processing apparatus, many modules need to remove noise while retaining texture; however, a back-end module often lacks reliable noise-level information for determining the texture/flatness attribute of each pixel, because one or more front-end processes (e.g., auto white balance, auto-exposure gain, Lens Shading Correction (LSC), Black Level Correction (BLC)) have altered the signal, so that the back-end module cannot effectively distinguish texture regions from flat regions of an image.

Some related prior works are listed below:

(I) Weisheng Dong, Ming Yuan, Xin Li, and Guangming Shi, "Joint Demosaicing and Denoising with Perceptual Optimization on a Generative Adversarial Network".

(II) Michaël Gharbi, Gaurav Chaurasia, Sylvain Paris, and Frédo Durand, "Deep Joint Demosaicking and Denoising".

(III) Keigo Hirakawa and Thomas W. Parks, "Joint Demosaicing and Denoising".

(IV) Marc Levoy, "Image formation", pages 60-62, Computer Science Department, Stanford University, CS 178, Spring 2014.

(V) Wojciech Jarosz, "Computational Aspects of Digital Photography: Noise & Denoising", Dartmouth College, CS 89.15/189.5, Fall 2015.

(VI) Marc Levoy, "Noise and ISO", pages 7-11, Computer Science Department, Stanford University, CS 178, Spring 2014.

Disclosure of Invention

An object of the present invention is to provide an image processing method based on sensor characteristics to avoid the problems of the prior art.

An embodiment of the image processing method of the present invention includes the following steps: obtaining a constant c of the relationship between a plurality of output signals x and the noise standard deviations σ(x) of the plurality of output signals (namely σ(x) ≈ c·√x), wherein the output signals are output by a sensor under the same sensor gain (e.g., light sensitivity, such as the ISO value of a camera); calculating a gradient value according to a plurality of pixel values in a sampling range, wherein the plurality of pixel values include an input value of a target pixel; calculating a flat value and a texture value according to the plurality of pixel values; performing a weight value calculation according to a weight change rate, the gradient value, a front-end gain (e.g., an auto-exposure gain), and the constant to obtain a weight value; and performing a fusion calculation on the flat value and the texture value according to the weight value to obtain an output value of the target pixel.

Another embodiment of the image processing method of the present invention comprises the following steps: obtaining a constant of a relationship between a plurality of output signals and noise standard deviations of the plurality of output signals, wherein the plurality of output signals are a plurality of signals output by a sensor under the same sensor gain; and calculating an output value of a target pixel according to a plurality of pixel values within a sampling range, the constant and a front-end gain, wherein an input value of the target pixel is one of the plurality of pixel values.

The features, operation and function of the present invention will be described in detail with reference to the drawings.

Drawings

FIG. 1 shows a sensor output image;

FIG. 2 shows texture of the image of FIG. 1 based on a low threshold;

FIG. 3 shows the texture of the image of FIG. 1 based on a high threshold;

FIG. 4 shows the texture of the image of FIG. 1 obtained in accordance with the present invention;

FIG. 5 shows an embodiment of an image processing method based on sensor characteristics according to the present invention;

FIG. 6 is a graph showing the relationship between the gray level of the output signal of a sensor and the standard deviation of the noise of the output signal of the sensor;

FIG. 7 shows the relationship between gradient values and weight values; and

FIG. 8 shows another embodiment of the image processing method based on sensor characteristics according to the present invention.

Description of the symbols

100, 200, 300, 400: images

S510 to S550: steps

x: gray scale value

σ(x): noise standard deviation of the output signal

thd0: flat region threshold value

thd1: texture region threshold value

GTOTAL: gradient value

WMAX: maximum weight value

WMIN: minimum weight value

W: weight value

WSLOPE: weight change rate

S810 to S820: steps

Detailed Description

One objective of the present invention is to accurately determine the texture/flatness of each pixel in an image according to the characteristics of a sensor (e.g., a photo sensor). One reason that conventional techniques (e.g., gradient eigenvalue calculation) cannot accurately distinguish texture regions from flat regions in an image is that the noise intensity of the dark portions and the noise intensity of the bright portions of the image may differ. For example, FIG. 1 shows a sensor output image 100. If the texture regions and flat regions of the image 100 are distinguished by a fixed threshold value, ignoring the noise difference between the dark and bright portions of the image 100, then when the fixed threshold value is set small, portions of the image 100 that should be judged as flat regions are easily misjudged as texture regions, as shown in the image 200 of FIG. 2; when the fixed threshold value is set large, portions of the image 100 that should be judged as texture regions are easily misjudged as flat regions, as shown in the image 300 of FIG. 3. The present invention can relatively accurately distinguish the texture regions from the flat regions of the image 100, as shown in the image 400 of FIG. 4. In the images of FIGS. 2 to 4, black represents texture regions and white represents flat regions.

FIG. 5 shows an embodiment of the image processing method based on sensor characteristics according to the present invention, which can be implemented by a conventional or self-developed image processing apparatus, comprising the following steps:

step S510: obtaining a constant c of the relationship between a plurality of output signals x and the noise standard deviations σ(x) of the plurality of output signals, wherein the output signals are signals output by a sensor under the same sensor gain (e.g., sensitivity, such as the ISO value of a camera). For example, based on the assumptions that the sensor gain is sixteen, that the pixel value of the sensor output signal is represented by a twelve-bit value (i.e., a value between 0 and 4095), and that the noise of the sensor is shot noise, the sensor captures K (e.g., 100) images of the same object at each of Q exposure times (e.g., 1/30 second, 1 second, and 10 seconds), such that all captured images (Q × K images) collectively cover all possible pixel values (i.e., every value between 0 and 4095). An average pixel value can then be obtained by averaging the values at the same pixel position across the K images that share an exposure time (i.e., Q average pixel values can be obtained for the same pixel position across the Q exposure times), and the average pixel values of all captured images cover all possible pixel values. Next, a standard deviation can be calculated from the K values at the same pixel position of the K images sharing an exposure time and the average pixel value at that pixel position. A regression line can then be found from the distribution of all the standard deviations against all the average pixel values, as shown in FIG. 6. According to the above method, the square root √x of each pixel average (or gray scale value) and the standard deviation σ(x) associated with that pixel average satisfy

σ(x) ≈ c·√x, for x = 0, 1, ..., n

where n is the maximum gray scale value and c is the constant determined by the regression line of FIG. 6.
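The calibration above can be sketched roughly as follows. This is a minimal illustration, not the patent's exact procedure: the `estimate_noise_constant` helper, the stack layout, and the least-squares fit through the origin are all assumptions.

```python
import numpy as np

def estimate_noise_constant(stacks):
    """Estimate c in sigma(x) ~ c * sqrt(x) from repeated captures.

    `stacks` is a list of arrays shaped (K, H, W): K captures of the same
    scene at one exposure time, all taken at the same sensor gain.
    """
    means, stds = [], []
    for stack in stacks:
        means.append(stack.mean(axis=0).ravel())  # per-pixel average value
        stds.append(stack.std(axis=0).ravel())    # per-pixel noise std dev
    x = np.sqrt(np.clip(np.concatenate(means), 0, None))
    y = np.concatenate(stds)
    # least-squares line through the origin: c = sum(x*y) / sum(x*x)
    return float((x * y).sum() / (x * x).sum())

# Synthetic sanity check: shot-noise-like data generated with c = 0.75.
rng = np.random.default_rng(0)
truth = rng.uniform(100, 4000, size=(8, 8))          # "true" pixel values
stack = truth + 0.75 * np.sqrt(truth) * rng.standard_normal((500, 8, 8))
c = estimate_noise_constant([stack])                 # should be near 0.75
```

With 500 captures per pixel the pooled estimate lands close to the generating constant; in practice many exposure times would be pooled so the regression covers the full gray scale range.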

step S520: a gradient value is calculated according to a plurality of pixel values in a sampling range, wherein the plurality of pixel values comprise an input value of a target pixel. For example, the pixels in the sampling range are 3 × 3 pixel matrixThe central pixel of the pixel matrix is the target pixel; by root-marking each pixel of the pixel matrix, a square root matrix of pixels can be obtainedBy means of a predetermined horizontal gradient maskAnd vertical gradient maskMultiplying the square root matrix of the pixel to obtain the horizontal gradient GH2 with vertical gradient GV0, thereby obtaining the gradient value GTOTAL=GH+GV=2。

Step S530: a flat value and a texture value are calculated according to the plurality of pixel values. Continuing the example, for the 3 × 3 pixel matrix, the flat value is equal to the sum of all pixel values, "24", divided by the total number of pixels, "9"; the texture value is equal to the sum, "12", of the pixel values "4 + 4 + 4" along the texture direction of the target pixel (in this example, because GH > GV, the texture direction is determined to be the vertical direction) divided by the number of those pixels, "3".
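The flat/texture split can be sketched as below. The centre-row/centre-column choice for the texture direction is an assumption consistent with the example (GH > GV implies a vertical texture direction), and the sample patch is a stand-in whose sums match the example's "24" and "4 + 4 + 4":

```python
import numpy as np

def flat_and_texture_values(patch, g_h, g_v):
    """Flat value: mean over the whole patch. Texture value: mean along
    the texture direction through the target (centre) pixel."""
    flat = patch.sum() / patch.size
    if g_h > g_v:                  # change is mostly horizontal, so the
        line = patch[:, 1]         # texture runs vertically: centre column
    else:
        line = patch[1, :]         # texture runs horizontally: centre row
    texture = line.sum() / line.size
    return flat, texture

# A patch consistent with the example: total 24, centre column 4+4+4.
patch = np.array([[2, 4, 2], [2, 4, 2], [2, 4, 2]])
flat, texture = flat_and_texture_values(patch, g_h=2, g_v=0)  # 24/9 and 4.0
```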

Step S540: a weight value W is obtained by performing a weight value calculation according to a weight change rate (e.g., a slope determined by thd0 and thd1), the gradient value, a front-end gain (e.g., an auto-exposure gain), and the constant. For example, if the front-end gain DG is four, the constant c is 0.75, and the user-defined/predetermined flat region threshold setting value thd0_manual and texture region threshold setting value thd1_manual are 0.4 and 4 respectively, then the flat region threshold value is calculated according to the flat region threshold setting value, the front-end gain, and the constant: thd0 = thd0_manual × (c/2) × √DG = 0.4 × 0.375 × 2 = 0.3; and the texture region threshold value is calculated according to the texture region threshold setting value, the front-end gain, and the constant: thd1 = thd1_manual × (c/2) × √DG = 4 × 0.375 × 2 = 3. The flat region threshold value and the texture region threshold value determine the weight change rate WSLOPE (i.e., the slope of the straight line between the minimum weight value WMIN corresponding to thd0 and the maximum weight value WMAX corresponding to thd1: WSLOPE = (WMAX - WMIN) / (thd1 - thd0) = (8192 - 0) / (3 - 0.3) ≈ 3034), as shown in FIG. 7. The weight value W is equal to the difference between the gradient value and the flat region threshold value multiplied by the weight change rate: W = (GTOTAL - thd0) × WSLOPE + WMIN = (2 - 0.3) × 3034 + 0 ≈ 5158. It is noted that any gradient value smaller than the flat region threshold value corresponds to the minimum weight value (e.g., the minimum weight value 0 in FIG. 7), and any gradient value larger than the texture region threshold value corresponds to the maximum weight value (e.g., the maximum weight value 8192 in FIG. 7).
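A sketch of the weight calculation, assuming the thresholds scale as thd_manual × (c/2) × √DG; that scaling is an assumption, but it is consistent with the example's numbers (thresholds 0.3 and 3, slope about 3034, weight about 5158):

```python
import math

def weight_value(g_total, dg, c, thd0_manual, thd1_manual,
                 w_min=0.0, w_max=8192.0):
    """Map a gradient value to a blending weight between w_min and w_max."""
    scale = (c / 2) * math.sqrt(dg)     # assumed threshold scaling
    thd0 = thd0_manual * scale          # flat-region threshold
    thd1 = thd1_manual * scale          # texture-region threshold
    if g_total <= thd0:                 # below thd0: minimum weight
        return w_min
    if g_total >= thd1:                 # above thd1: maximum weight
        return w_max
    w_slope = (w_max - w_min) / (thd1 - thd0)
    return (g_total - thd0) * w_slope + w_min

# Numbers from the example: DG = 4, c = 0.75, settings 0.4 and 4.
w = weight_value(g_total=2, dg=4, c=0.75, thd0_manual=0.4, thd1_manual=4)
# w is approximately 5158
```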

Step S550: a fusion calculation is performed on the flat value and the texture value according to the weight value to obtain an output value of the target pixel. For example, step S550 includes the following steps: multiplying the texture value by the weight value to obtain a weighted texture value; multiplying the flat value by a difference value between a maximum weight value and the weight value to obtain a weighted flat value; summing the weighted texture value and the weighted flat value to obtain a sum value; and dividing the sum value by the maximum weight value to obtain an output value IOUTPUT of the target pixel. The above steps can be expressed by the following equation:

IOUTPUT = (texture value × W + flat value × (WMAX - W)) / WMAX

Applying the numbers of the above example gives the output value of the target pixel:

IOUTPUT = (4 × 5158 + (24/9) × (8192 - 5158)) / 8192 ≈ 3.51
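The fusion of step S550 is a straightforward alpha blend; a minimal sketch using the example's numbers:

```python
def fuse(flat_value, texture_value, w, w_max=8192.0):
    """Blend the texture and flat values according to the weight W."""
    weighted_texture = texture_value * w         # texture value times W
    weighted_flat = flat_value * (w_max - w)     # flat value times (WMAX - W)
    return (weighted_texture + weighted_flat) / w_max

# Example values: flat value 24/9, texture value 4, weight 5158.
out = fuse(flat_value=24 / 9, texture_value=4, w=5158)  # about 3.51
```

Dividing by WMAX keeps the output in the same range as the inputs, since the two weights always sum to WMAX.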

It is noted that, in the example of the aforementioned step S540, the manner of determining the flat region threshold value and the texture region threshold value is based on the following. The front-end gain DG and the noise standard deviation σ(x) = c·√x of the sensor output signal are known, and T(·) denotes a conversion performed on the sensor output signal. Accordingly, the gradient-value criterion for determining the texture/flatness attribute can be expressed as T(DG × x + DG × σ(x)) - T(DG × x) ≈ T'(DG × x) × DG × σ(x). When T(x) = √x, T'(DG × x) = 1 / (2√(DG × x)), and therefore the above expression can be rewritten as (DG × c × √x) / (2√(DG × x)) = (c/2) × √DG. In other words, the gradient value is related to the front-end gain DG and the constant c.

In an implementation example, the sensor is a photo sensor. In an implementation example, the front-end gain is, or is associated with, one of a Lens Shading Correction (LSC) gain, a color-specific pixel value gain, an Auto Exposure (AE) gain, a Black Level Correction (BLC) gain, and a Color Correction Matrix (CCM). In an implementation example, the noise of the plurality of output signals is dominated by shot noise. In an implementation example, the plurality of output signals include all signals between a minimum gray scale signal and a maximum gray scale signal.

FIG. 8 shows another embodiment of the image processing method based on sensor characteristics according to the present invention, comprising the following steps:

step S810: obtaining a constant of a relation between a plurality of output signals and noise standard deviations of the plurality of output signals, wherein the plurality of output signals are a plurality of signals output by a sensor under the same sensor gain.

Step S820: an output value of a target pixel is calculated according to a plurality of pixel values within a sampling range, the constant and a front end gain, wherein an input value of the target pixel is one of the plurality of pixel values.

Since those skilled in the art can refer to the disclosure in fig. 1 to 7 to understand the details and variations of the embodiment in fig. 8, that is, the technical features of fig. 1 to 7 can be reasonably applied to the embodiment in fig. 8, the repeated and redundant descriptions are omitted here.

It should be noted that, when the implementation is possible, a person skilled in the art can selectively implement some or all of the technical features of any one of the above embodiments, or selectively implement a combination of some or all of the technical features of the above embodiments, thereby increasing the flexibility in implementing the invention.

In summary, the present invention can accurately determine the texture/flatness of each pixel in an image according to the characteristics of a sensor (e.g., a photo sensor).

Although the embodiments of the present invention have been described above, these embodiments are not intended to limit the present invention, and those skilled in the art can make variations on the technical features of the present invention according to the explicit or implicit contents of the present invention, and all such variations may fall within the scope of the patent protection sought by the present invention.
