Image processing method and device, electronic equipment and computer readable storage medium

Document No.: 245198    Publication date: 2021-11-12

Note: This technology, "Image processing method and device, electronic equipment and computer readable storage medium", was created by 张祎, 秦红伟, 王晓刚 and 李鸿升 on 2021-08-06. Its main content is as follows: The application discloses an image processing method and device, electronic equipment and a computer readable storage medium. The method comprises the following steps: acquiring a first image, wherein the first image is an image acquired by a target imaging device in a dark environment; and sampling pixel values in the first image to obtain hardware noise generated when the target imaging device generates an image under the condition of an optical signal.

1. An image processing method, characterized in that the method comprises:

acquiring a first image, wherein the first image is an image acquired by a target imaging device in a dark environment;

and sampling pixel values in the first image to obtain hardware noise generated when the target imaging device generates an image under the condition of an optical signal.

2. The method of claim 1, further comprising:

acquiring a second image, wherein the second image comprises shot noise;

and adding the hardware noise to the second image to obtain a noise image.

3. The method of claim 2, wherein said acquiring a second image comprises:

acquiring a third image, wherein the third image is a clean image;

and obtaining the second image according to the third image.

4. The method of any of claims 1-3, wherein before the sampling of pixel values in the first image to obtain the hardware noise generated by the target imaging device in generating an image in the presence of a light signal, the method further comprises:

acquiring a first continuous distribution of the first image, wherein the first continuous distribution is obtained by fitting pixel values in the first image;

the sampling of the pixel values in the first image to obtain the hardware noise generated by the target imaging device generating an image in the presence of a light signal includes:

replacing a second pixel value in the first image by using a first pixel value in the first continuous distribution to obtain a reconstructed first image;

and sampling the pixel value in the reconstructed first image to obtain the hardware noise.

5. The method of claim 4, wherein replacing the second pixel values in the first image with the first pixel values in the first continuous distribution to obtain a reconstructed first image comprises:

determining a second continuous distribution comprising the second pixel values from the first continuous distribution;

in the case where the second pixel value is the largest pixel value in the first image, the smallest pixel value in the second continuous distribution is greater than or equal to the second largest pixel value in the first image; in the case where the second pixel value is the smallest pixel value in the first image, the largest pixel value in the second continuous distribution is less than or equal to the second smallest pixel value in the first image; in the case where the second pixel value is neither the largest pixel value nor the smallest pixel value in the first image, the largest pixel value in the second continuous distribution is less than or equal to a third pixel value and the smallest pixel value in the second continuous distribution is greater than or equal to a fourth pixel value, where the third pixel value is the ith largest pixel value in the first image, the second pixel value is the (i+1)th largest pixel value in the first image, the fourth pixel value is the (i+2)th largest pixel value in the first image, and i is a positive integer;

and replacing the second pixel value in the first image by the first pixel value in the second continuous distribution to obtain the reconstructed first image.

6. The method of claim 4 or 5, wherein the first continuous profile comprises a bell-shaped continuous profile.

7. A method according to claim 2 or 3, wherein the first image comprises a first pixel region comprising four or more pixels;

the sampling of the pixel values in the first image to obtain the hardware noise generated by the target imaging device generating an image in the presence of a light signal includes:

sampling the first image to obtain pixel values of the first pixel area;

and obtaining the hardware noise according to the pixel value of the first pixel area.

8. The method of claim 7, wherein the size of the first pixel region is the same as the size of the second image.

9. The method of claim 8, wherein the first pixel region has the same arrangement of pixels as the second image.

10. The method of claim 9, wherein the deriving the hardware noise according to the pixel value of the first pixel region comprises:

acquiring a third continuous distribution of the first pixel region, wherein the third continuous distribution is obtained by fitting pixel values in the first pixel region;

replacing the pixel values in the first pixel region with the pixel values in the third continuous distribution to obtain a reconstructed first pixel region;

and sampling the pixel value in the reconstructed first pixel region to obtain the hardware noise.

11. A method according to claim 2 or 3, wherein the first image comprises a second region of pixels arranged in the same manner as the second image;

the sampling of the pixel values in the first image to obtain the hardware noise generated by the target imaging device generating an image in the presence of a light signal includes:

sampling the first image to obtain pixel values of the second pixel region;

and obtaining the hardware noise according to the pixel value in the second pixel region.

12. The method of any one of claims 1 to 11, wherein the hardware noise comprises one or more of: noise generated by analog gain, noise generated by digital gain, and quantization noise.

13. An image processing apparatus, characterized in that the apparatus comprises:

an acquisition unit configured to acquire a first image, which is an image acquired by a target imaging apparatus in a dark environment;

and the processing unit is used for sampling the pixel values in the first image to obtain hardware noise generated when the target imaging device generates an image under the condition of an optical signal.

14. An electronic device, comprising: a processor and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1 to 12.

15. A computer-readable storage medium, in which a computer program is stored, which computer program comprises program instructions which, if executed by a processor, cause the processor to carry out the method of any one of claims 1 to 12.

Technical Field

The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.

Background

The process of acquiring images by an imaging device typically includes: collecting light signals and generating an image based on the light signals. In generating an image based on the optical signal, the hardware of the imaging device generates noise (hereinafter referred to as hardware noise). How to calculate the hardware noise of an imaging device is therefore of great significance.

Disclosure of Invention

The application provides an image processing method and device, an electronic device and a computer readable storage medium.

In a first aspect, an image processing method is provided, the method comprising:

acquiring a first image, wherein the first image is an image acquired by a target imaging device in a dark environment;

and sampling pixel values in the first image to obtain hardware noise generated when the target imaging device generates an image under the condition of an optical signal.

In this aspect, since the first image does not include the optical signal and the first image includes the hardware noise of the target imaging device, the image processing apparatus obtains the hardware noise of the target imaging device by sampling the pixel values in the first image, and can improve the accuracy of the hardware noise. And the complexity of calculating the hardware noise of the target imaging equipment can be reduced, and the efficiency of calculating the hardware noise of the target imaging equipment is improved.

In combination with any embodiment of the present application, the method further comprises:

acquiring a second image, wherein the second image comprises shot noise;

and adding the hardware noise to the second image to obtain a noise image.

In this embodiment, by adding the hardware noise to the second image, the image processing apparatus can obtain an unprocessed RAW image as would be acquired by the target imaging device, i.e. a noise image.

In combination with any embodiment of the present application, the acquiring the second image includes:

acquiring a third image, wherein the third image is a clean image;

and obtaining the second image according to the third image.

In this embodiment, the clean image is an image without hardware noise and shot noise, so the image processing apparatus can determine, from the clean image, the number of photons incident on the target imaging device. Since an image including shot noise follows a Poisson distribution parameterized by the number of photons, the image processing apparatus can obtain the image including shot noise, that is, the second image, by using the number of photons as the parameter of the Poisson distribution.

With reference to any embodiment of the present application, before sampling pixel values in the first image to obtain hardware noise generated by the target imaging device generating an image in the presence of a light signal, the method further includes:

acquiring a first continuous distribution of the first image, wherein the first continuous distribution is obtained by fitting pixel values in the first image;

the sampling of the pixel values in the first image to obtain the hardware noise generated by the target imaging device generating an image in the presence of a light signal includes:

replacing a second pixel value in the first image by using a first pixel value in the first continuous distribution to obtain a reconstructed first image;

and sampling the pixel value in the reconstructed first image to obtain the hardware noise.

In this embodiment, the first continuous distribution is obtained by fitting pixel values in the first image, that is, the pixel values in the first continuous distribution are continuous data, and the image processing apparatus replaces the pixel values in the first image with the pixel values sampled from the first continuous distribution, so that the accuracy of the pixel values in the first image can be improved. Therefore, the hardware noise is obtained by sampling the pixel value in the reconstructed first image, and the precision of the hardware noise can be improved.

With reference to any embodiment of the present application, the replacing the second pixel values in the first image with the first pixel values in the first continuous distribution to obtain a reconstructed first image includes:

determining a second continuous distribution comprising the second pixel values from the first continuous distribution;

in the case where the second pixel value is the largest pixel value in the first image, the smallest pixel value in the second continuous distribution is greater than or equal to the second largest pixel value in the first image; in the case where the second pixel value is the smallest pixel value in the first image, the largest pixel value in the second continuous distribution is less than or equal to the second smallest pixel value in the first image; in the case where the second pixel value is neither the largest pixel value nor the smallest pixel value in the first image, the largest pixel value in the second continuous distribution is less than or equal to a third pixel value and the smallest pixel value in the second continuous distribution is greater than or equal to a fourth pixel value, where the third pixel value is the ith largest pixel value in the first image, the second pixel value is the (i+1)th largest pixel value in the first image, the fourth pixel value is the (i+2)th largest pixel value in the first image, and i is a positive integer;

and replacing the second pixel value in the first image by the first pixel value in the second continuous distribution to obtain the reconstructed first image.

In this embodiment, the value interval of the pixel values in the first continuous distribution is referred to as a first interval, the value interval of the pixel values in the second continuous distribution is referred to as a second interval, and the quantization interval corresponding to the second pixel value is referred to as a third interval. Because the second interval overlaps the third interval more than the first interval does, the image processing apparatus replaces the second pixel value with a first pixel value from the second continuous distribution to obtain the reconstructed first image, which improves the precision of the pixel values in the first image while keeping each replacement value consistent with the quantization interval of the pixel value it replaces.

In combination with any embodiment of the present application, the first continuous profile comprises a bell-shaped continuous profile.

Since the distribution satisfied by the pixel values of the image is closer to the bell-shaped continuous distribution, in this embodiment, the first continuous distribution includes the bell-shaped continuous distribution, and the accuracy of the first continuous distribution can be made higher.

In combination with any embodiment of the present application, the first image includes a first pixel region, and the first pixel region includes four or more pixels;

the sampling of the pixel values in the first image to obtain the hardware noise generated by the target imaging device generating an image in the presence of a light signal includes:

sampling the first image to obtain pixel values of the first pixel area;

and obtaining the hardware noise according to the pixel value of the first pixel area.

In this embodiment, since the adjacent pixels in the image have spatial position information therebetween, structural noise related to the spatial position information exists between the adjacent pixels. And the first pixel region includes four or more pixels, i.e., the first pixel region includes adjacent pixels. Therefore, the first pixel region includes not only noise information carried by the pixels but also structural noise.

Therefore, the image processing device obtains the hardware noise generated by the target imaging equipment generating the image under the condition of the optical signal according to the first pixel area, and the hardware noise can contain structural noise.

In combination with any embodiment of the present application, a size of the first pixel region is the same as a size of the second image.

In this embodiment, the size of the first pixel region is the same as the size of the second image, and the structure of the first pixel region is the same as the structure of the second image. Since the structural noise is related to the structure of the neighboring pixels, the structural noise carried by the first pixel region more closely matches the structure of the second image.

With reference to any one of the embodiments of the present application, the arrangement of the pixels in the first pixel region is the same as the arrangement of the pixels in the second image.

In this embodiment, since the pixel arrangement of the image affects the distribution of the noise, and the pixel arrangement of the first pixel region is the same as the pixel arrangement of the second image, the distribution of the noise in the first pixel region is more matched with the distribution of the noise in the second image. In combination with the condition that the size of the first pixel region is the same as the size of the second image in the previous embodiment, the structural noise carried by the first pixel region is more matched with the structure of the second image, and the degree of matching between the distribution of the noise in the first pixel region and the distribution of the noise in the second image is higher.

With reference to any embodiment of the present application, the obtaining the hardware noise according to the pixel value of the first pixel region includes:

acquiring a third continuous distribution of the first pixel region, wherein the third continuous distribution is obtained by fitting pixel values in the first pixel region;

replacing the pixel values in the first pixel region with the pixel values in the third continuous distribution to obtain a reconstructed first pixel region;

and sampling the pixel value in the reconstructed first pixel region to obtain the hardware noise.

In this embodiment, the third continuous distribution is obtained by fitting the pixel values in the first pixel region, that is, the pixel values in the third continuous distribution are continuous data, and the image processing apparatus replaces the pixel values in the first pixel region with pixel values sampled from the third continuous distribution, so that the precision of the pixel values in the first pixel region can be improved. Therefore, obtaining the hardware noise by sampling the pixel values in the reconstructed first pixel region can improve the precision of the hardware noise.

In addition, as stated in the previous two embodiments, the structural noise carried by the first pixel region better matches the structure of the second image, and the distribution of the noise in the first pixel region better matches the distribution of the noise in the second image. The hardware noise obtained by combining this embodiment with the previous two embodiments can therefore improve the simulation effect when simulating the target imaging device acquiring an image whose size is a target size and whose pixel arrangement is a target arrangement, where the target size is the size of the second image and the target arrangement is the pixel arrangement of the second image.

With reference to any one of the embodiments of the present application, the first image includes a second pixel region, and the arrangement of pixels of the second pixel region is the same as the arrangement of pixels of the second image;

the sampling of the pixel values in the first image to obtain the hardware noise generated by the target imaging device generating an image in the presence of a light signal includes:

sampling the first image to obtain pixel values of the second pixel region;

and obtaining the hardware noise according to the pixel value in the second pixel region.

In this embodiment, since the pixel arrangement of the image affects the distribution of the noise, and the pixel arrangement of the second pixel region is the same as the pixel arrangement of the second image, the distribution of the noise in the second pixel region has a higher degree of matching with the distribution of the noise in the second image.

In combination with any embodiment of the present application, the hardware noise includes one or more of: noise generated by analog gain, noise generated by digital gain, and quantization noise.

In a second aspect, there is provided an image processing apparatus, the apparatus comprising:

an acquisition unit configured to acquire a first image, which is an image acquired by a target imaging apparatus in a dark environment;

and the processing unit is used for sampling the pixel values in the first image to obtain hardware noise generated when the target imaging device generates an image under the condition of an optical signal.

With reference to any one of the embodiments of the present application, the obtaining unit is further configured to obtain a second image, where the second image includes shot noise;

the image processing apparatus further includes: and the adding unit is used for adding the hardware noise to the second image to obtain a noise image.

With reference to any embodiment of the present application, the obtaining unit is configured to:

acquiring a third image, wherein the third image is a clean image;

and obtaining the second image according to the third image.

With reference to any embodiment of the present disclosure, the obtaining unit is further configured to obtain a first continuous distribution of the first image before sampling pixel values in the first image to obtain hardware noise generated by the target imaging device generating an image in the presence of an optical signal, where the first continuous distribution is obtained by fitting the pixel values in the first image;

the processing unit is configured to:

replacing a second pixel value in the first image by using a first pixel value in the first continuous distribution to obtain a reconstructed first image;

and sampling the pixel value in the reconstructed first image to obtain the hardware noise.

With reference to any embodiment of the present application, the obtaining unit is configured to:

determining a second continuous distribution comprising the second pixel values from the first continuous distribution;

in the case where the second pixel value is the largest pixel value in the first image, the smallest pixel value in the second continuous distribution is greater than or equal to the second largest pixel value in the first image; in the case where the second pixel value is the smallest pixel value in the first image, the largest pixel value in the second continuous distribution is less than or equal to the second smallest pixel value in the first image; in the case where the second pixel value is neither the largest pixel value nor the smallest pixel value in the first image, the largest pixel value in the second continuous distribution is less than or equal to a third pixel value and the smallest pixel value in the second continuous distribution is greater than or equal to a fourth pixel value, where the third pixel value is the ith largest pixel value in the first image, the second pixel value is the (i+1)th largest pixel value in the first image, the fourth pixel value is the (i+2)th largest pixel value in the first image, and i is a positive integer;

and replacing the second pixel value in the first image by the first pixel value in the second continuous distribution to obtain the reconstructed first image.

In combination with any embodiment of the present application, the first continuous profile comprises a bell-shaped continuous profile.

In combination with any embodiment of the present application, the first image includes a first pixel region, and the first pixel region includes four or more pixels;

the processing unit is configured to:

sampling the first image to obtain pixel values of the first pixel area;

and obtaining the hardware noise according to the pixel value of the first pixel area.

In combination with any embodiment of the present application, a size of the first pixel region is the same as a size of the second image.

With reference to any one of the embodiments of the present application, the arrangement of the pixels in the first pixel region is the same as the arrangement of the pixels in the second image.

In combination with any embodiment of the present application, the processing unit is configured to:

acquiring a third continuous distribution of the first pixel region, wherein the third continuous distribution is obtained by fitting pixel values in the first pixel region;

replacing the pixel values in the first pixel region with the pixel values in the third continuous distribution to obtain a reconstructed first pixel region;

and sampling the pixel value in the reconstructed first pixel region to obtain the hardware noise.

With reference to any one of the embodiments of the present application, the first image includes a second pixel region, and the arrangement of pixels of the second pixel region is the same as the arrangement of pixels of the second image;

the processing unit is configured to:

sampling the first image to obtain pixel values of the second pixel region;

and obtaining the hardware noise according to the pixel value in the second pixel region.

In combination with any embodiment of the present application, the hardware noise includes one or more of: noise generated by analog gain, noise generated by digital gain, and quantization noise.

In a third aspect, an electronic device is provided, which includes: a processor and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of the first aspect and any one of its possible implementations as described above.

In a fourth aspect, another electronic device is provided, including: a processor, transmitting means, input means, output means, and a memory for storing computer program code comprising computer instructions, which, when executed by the processor, cause the electronic device to perform the method of the first aspect and any one of its possible implementations.

In a fifth aspect, there is provided a computer-readable storage medium having stored therein a computer program comprising program instructions which, if executed by a processor, cause the processor to perform the method of the first aspect and any one of its possible implementations.

A sixth aspect provides a computer program product comprising a computer program or instructions which, when run on a computer, causes the computer to perform the method of the first aspect and any of its possible implementations.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.

Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;

FIG. 2 is a schematic diagram of a quantization process provided by an embodiment of the present application;

FIG. 3 is a schematic illustration of a first continuous distribution and a second continuous distribution provided by an embodiment of the present application;

fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;

fig. 5 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present disclosure.

Detailed Description

In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.

It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more, and "at least two" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" may indicate an "or" relationship between the associated objects, meaning any combination of the items, including a single item or a plurality of items. For example, "at least one of a, b, or c" may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be single or plural. The character "/" may also represent division in a mathematical operation; for example, a/b means a divided by b, e.g. 6/3 = 2.

The process of acquiring images by an imaging device typically includes: collecting light signals and generating an image based on the light signals. In generating an image based on the optical signal, the hardware of the imaging device generates noise (hereinafter referred to as hardware noise). How to calculate the hardware noise of an imaging device is therefore of great significance.

In the related art, the process of acquiring an image by an imaging device is generally analyzed to obtain the composition of the hardware noise of the imaging device. After the composition of the hardware noise is obtained, each noise that makes up the hardware noise is mathematically modeled to obtain a noise model. The noises that make up the hardware noise are then simulated through the noise models, and the hardware noise is calculated from them.

For example, by analyzing the process of acquiring images by the imaging device a, it is determined that the hardware noise of the imaging device a includes noise b and noise c. And performing mathematical modeling on the noise b to obtain a noise model d, and performing mathematical modeling on the noise c to obtain a noise model e. The noise b generated in the process of acquiring the image by the imaging device a is simulated by the noise model d, and the noise c generated in the process of acquiring the image by the imaging device a is simulated by the noise model e. And calculating the sum of the noise b and the noise c to obtain the hardware noise of the imaging device a.

During the process of acquiring an image by the imaging device, the hardware generates many kinds of noise when generating the image based on the collected light signal, whereas the types of models available for mathematical modeling are limited. It is therefore difficult to accurately determine the composition of the hardware noise by analyzing the image acquisition process, and simulating the noise through mathematical models introduces a larger error. As a result, the error of the hardware noise calculated by the related art is large.

Based on this, the embodiment of the application discloses a technical scheme for calculating hardware noise of an imaging device, so as to improve the accuracy of the hardware noise.

Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.

The execution subject of the embodiment of the present application is an image processing apparatus, where the image processing apparatus may be any electronic device that can execute the technical solutions disclosed in the embodiments of the present application. Optionally, the image processing apparatus may be one of the following: a mobile phone, a computer, a tablet computer, or a wearable smart device.

It should be understood that the method embodiments of the present application may also be implemented by means of a processor executing computer program code. The embodiments of the present application will be described below with reference to the drawings. Referring to fig. 1, fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.

101. A first image is acquired, wherein the first image is an image acquired by target imaging equipment in a dark environment.

In the embodiment of the present application, the target imaging apparatus may be, for example, a camera or a video camera. A dark environment includes an environment without light signals. Optionally, the dark environment is a closed darkroom environment. For example, a dark environment is the internal environment of a closed, lightless box.

In one implementation of acquiring the first image, the image processing apparatus receives the first image input by the user through an input component. Optionally, the input component includes: a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.

In another implementation manner of acquiring the first image, the image processing apparatus receives the first image sent by a terminal. Optionally, the terminal may be any one of the following: a mobile phone, a computer, a tablet computer, a server, or a wearable device.

In yet another implementation of acquiring the first image, the image processing apparatus includes the target imaging device. The image processing apparatus is placed in a darkroom to acquire an image and obtain the first image, wherein the darkroom is a sealed room without light.

102. And sampling pixel values in the first image to obtain hardware noise generated when the target imaging device generates an image under the condition of an optical signal.

In the embodiment of the present application, the hardware noise is noise generated when the target imaging device generates an image based on an optical signal in the presence of the optical signal.

Since hardware noise is generated during the process of acquiring an image by the imaging device, the image contains not only the optical signal acquired by the imaging device but also the hardware noise. By mathematically modeling the process of acquiring images by the imaging device, the following can be derived:

D = KdKa(I + Np + N1) + KdN2 + KdNq    (1)

wherein D is the digital signal in the image acquired by the imaging device, Nq represents quantization noise, I is the number of photons (incident photon number) incident on the imaging device, Np is shot noise, N1 is the noise generated by the analog gain, N2 is the noise generated by the digital gain, Kd is the digital gain of the imaging device when capturing the image, and Ka is the analog gain of the imaging device when capturing the image.

As can be seen from formula (1), when the optical signal is 0, D = KdKaN1 + KdN2 + KdNq, i.e., D consists entirely of hardware noise. The first image is an image acquired in a dark environment, that is, the target imaging device does not collect a light signal in the process of generating the first image, so the first image does not contain a light signal and the digital signal in the first image is hardware noise. Therefore, the image processing device can obtain the hardware noise of the target imaging device that acquired the first image by sampling the pixel values in the first image.

In one possible implementation manner, the image processing apparatus obtains any one pixel value in the first image by sampling the pixel value in the first image, and takes the pixel value as the hardware noise of the target imaging device. For example, the first image includes a pixel value a and a pixel value b, and the image processing apparatus obtains the pixel value a by sampling the pixel values in the first image and takes the pixel value a as hardware noise of the target imaging device.

In another possible implementation manner, the image processing apparatus obtains any one pixel value in the first image by sampling the pixel value in the first image, and takes the sum of the pixel value and a first constant as the hardware noise of the target imaging device, where the first constant is a real number. For example, the first image includes a pixel value a and a pixel value b, the image processing apparatus obtains the pixel value a by sampling the pixel value in the first image, and takes the sum of the pixel value a and the first constant as the hardware noise of the target imaging device.

In yet another possible implementation manner, the image processing apparatus obtains any one of pixel values in the first image by sampling the pixel values in the first image, and takes a product of the any pixel value and a second constant as hardware noise of the target imaging device, where the second constant is a real number. For example, the first image includes a pixel value a and a pixel value b, and the image processing apparatus obtains the pixel value a by sampling the pixel value in the first image, and takes the product of the pixel value a and the second constant as the hardware noise of the target imaging device.
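As an illustration of the three sampling variants above, the following is a minimal Python/NumPy sketch; the function name sample_hardware_noise and the parameters first_constant and second_constant are hypothetical and not part of the original disclosure:

```python
import numpy as np

def sample_hardware_noise(dark_frame: np.ndarray,
                          first_constant: float = 0.0,
                          second_constant: float = 1.0) -> float:
    """Sample one pixel value from the dark first image as hardware noise.

    With the default constants the sampled pixel value itself is used;
    a non-zero first_constant reproduces the "sum with a first constant"
    variant, and a second_constant other than 1 reproduces the "product
    with a second constant" variant.
    """
    flat = dark_frame.reshape(-1)
    pixel_value = float(flat[np.random.randint(flat.size)])
    return pixel_value * second_constant + first_constant
```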

In the embodiment of the application, because the first image does not contain the optical signal and contains the hardware noise of the target imaging device, the image processing device obtains the hardware noise of the target imaging device by sampling the pixel value in the first image, and the accuracy of the hardware noise can be improved. And the complexity of calculating the hardware noise of the target imaging equipment can be reduced, and the efficiency of calculating the hardware noise of the target imaging equipment is improved.

As an alternative embodiment, the image processing apparatus further performs the steps of:

1. a second image is acquired, the second image including shot noise.

In the embodiment of the present application, shot noise is noise generated when photons travel from a light source to a photosensitive element of an imaging device. For example, the light source of the imaging device when acquiring the image is the sun, and in this case, the light signal is sunlight. During the process of collecting images by the imaging device, the photons emitted by the sun need to be collected by the photosensitive element to obtain a light signal. At this time, the noise generated by the process of the photons propagating from the sun to the photosensitive element is shot noise.

For another example, the light source of the imaging device when acquiring the image is an incandescent lamp, and in this case, the light signal is light emitted by the incandescent lamp. During the process of image acquisition by the imaging device, the light signal is acquired by acquiring photons emitted by the incandescent lamp through the photosensitive element. At this time, the noise generated by the process of the photons propagating from the incandescent lamp to the photosensitive element is shot noise.

In an embodiment of the present application, the second image is an image acquired in a light environment, and the second image includes shot noise.

In one implementation of acquiring the second image, the image processing apparatus receives the second image input by the user through the input component.

In another implementation manner of acquiring the second image, the image processing apparatus receives the second image transmitted by the terminal.

2. And adding the hardware noise to the second image to obtain a noise image.

As can be seen from formula (1), the image acquired by the imaging device contains not only the optical signal but also hardware noise, where the optical signal includes the number of photons incident into the imaging device and shot noise. The second image is an image acquired in a light environment, that is, the second image contains the number of photons and also includes shot noise, so a RAW image as acquired by the target imaging device, namely a noise image, can be obtained by adding the hardware noise to the second image.

Optionally, the second image and the noise image are used as training data to train the neural network, so that the trained neural network has the capability of removing hardware noise in the image acquired by the target imaging device.

In one possible implementation, the image processing apparatus adds hardware noise to each pixel in the second image separately, resulting in a noisy image. For example, the second image includes a pixel a and a pixel b. The image processing apparatus adds hardware noise to the pixel a and adds hardware noise to the pixel b, resulting in a noise image.

In another possible implementation, the image processing apparatus adds hardware noise to any one pixel in the second image, resulting in a noisy image. For example, the second image includes a pixel a and a pixel b. The image processing apparatus adds hardware noise to the pixel a, resulting in a noise image.
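The two ways of adding the hardware noise to the second image described above can be sketched as follows (Python/NumPy; the helper names are illustrative only):

```python
import numpy as np

def add_noise_to_every_pixel(second_image: np.ndarray, hw_noise: float) -> np.ndarray:
    """Add the hardware noise to each pixel of the second image."""
    return second_image + hw_noise

def add_noise_to_one_pixel(second_image: np.ndarray, hw_noise: float,
                           row: int, col: int) -> np.ndarray:
    """Add the hardware noise to a single chosen pixel of the second image."""
    noisy = second_image.astype(np.float64).copy()
    noisy[row, col] += hw_noise
    return noisy
```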

As an alternative embodiment, the image processing apparatus performs the following steps in the process of performing step 1:

3. a third image is acquired.

In this embodiment of the application, the third image includes a clean image, where the clean image is an image from which hardware noise and shot noise are removed.

Optionally, the third image is an image obtained by removing hardware noise and shot noise in the digital negative film, where the digital negative film includes an unprocessed image (RAW) acquired by the imaging device. Alternatively, the imaging device that captures the digital negative film may be the same as the target imaging device, or may be different.

In one implementation of acquiring the third image, the image processing apparatus receives the third image input by the user through the input component.

In another implementation of acquiring the third image, the image processing apparatus receives the third image transmitted by the terminal.

In yet another implementation of obtaining the third image, the image processing device obtains a digital negative. The image processing device removes hardware noise and shot noise in the digital negative film to obtain a third image.

4. And obtaining the second image according to the third image.

As described in step 1, the photons propagating from the light source to the photosensitive element of the target imaging device generate shot noise, so that the light signal includes not only the photons collected by the imaging device but also shot noise, that is, the image includes photons, shot noise and hardware noise.

Since the third image includes a clean image, the number of photons emitted from the light source to the imaging device (hereinafter referred to as the number of photons) when the imaging device captures an image can be obtained from the third image.

Since an image including shot noise follows a Poisson distribution parameterized by the number of photons, an image including shot noise can be obtained according to the Poisson distribution of the number of photons. Therefore, the Poisson distribution of the number of photons can be obtained from the third image, and an image including shot noise, namely the second image, can be obtained.

In a possible implementation manner, the image processing device obtains the number of photons in the third image according to the third image; determining parameters of Poisson distribution according to the number of photons, and obtaining the Poisson distribution of the number of photons; and obtaining the second image according to the Poisson distribution of the number of photons.

For example, suppose the analog gain when the imaging device acquires the third image is Ka and the digital gain is Kd. Denoting the third image by Y, the number of photons I satisfies the following equation:

I = Y / (KdKa)    (2)

Denoting the second image by I*, I* and I satisfy the following formula:

I* ~ P(I)    (3)

wherein P(I) denotes the Poisson distribution with parameter I.

Alternatively, in the case where the image processing apparatus obtains the second image in accordance with steps 3 and 4, the image processing apparatus obtains a noise image by adding hardware noise to the second image by the following expression:

D* = KdKaI* + Ni    (4)

wherein D* is the noise image, Ni is the hardware noise, I* represents the second image, Ka is the analog gain when the imaging device acquired the third image, and Kd is the digital gain when the imaging device acquired the third image.
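Putting formulas (2) to (4) together, the following Python/NumPy sketch synthesizes a noise image D* from a clean third image Y; sampling the hardware noise Ni per pixel from the dark first image is one possible choice assumed here, and the function names are illustrative:

```python
import numpy as np

def synthesize_noise_image(clean_image: np.ndarray,   # third image Y
                           dark_frame: np.ndarray,    # first image
                           k_a: float,                # analog gain Ka
                           k_d: float) -> np.ndarray: # digital gain Kd
    """Simulate a RAW noise image of the target imaging device."""
    # Formula (2): recover the photon numbers from the clean image.
    photons = clean_image / (k_d * k_a)
    # Formula (3): the second image I*, which contains shot noise, follows
    # a Poisson distribution parameterized by the photon numbers.
    second_image = np.random.poisson(photons).astype(np.float64)
    # Sample hardware noise per pixel from the dark frame.
    idx = np.random.randint(dark_frame.size, size=clean_image.shape)
    hw_noise = dark_frame.reshape(-1)[idx]
    # Formula (4): scale the shot-noise image back and add the hardware noise.
    return k_d * k_a * second_image + hw_noise
```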

The image processing apparatus executes step 1 to step 4 to obtain a noise image of the third image, that is, a noise image corresponding to the clean image. Based on the technical solutions disclosed in step 101, step 102, and steps 1 to 4, the embodiment of the present application provides a possible application scenario.

Thanks to their powerful capabilities, neural networks have been widely used in the field of image processing in recent years to perform various tasks, for example, performing noise reduction processing on images.

The noise reduction effect of a neural network greatly depends on its training effect, and the training effect mainly depends on the amount of training data; specifically, the more training data there is, the better the training effect. Therefore, how to quickly obtain noise reduction training data is of great significance for improving the noise reduction effect of the neural network, where the noise reduction training data comprises one or more training image pairs, each training image pair comprises a noise image and a noise-reduced image, and the noise-reduced image is an image obtained by reducing the noise in the noise image.

In the conventional method, a noise-reduced image is obtained by reducing the noise of a noise image, thereby obtaining a training image pair. However, noise reduction of an image is slow, so the efficiency of obtaining noise reduction training data by the conventional method is low.

In the embodiment of the present application, the image processing apparatus may obtain one or more hardware noises according to step 101 and step 102. The image processing device may, in the case of acquiring the third image, use the third image as a noise reduction image, obtain the second image according to step 4, and then add one or more hardware noises to the second image respectively through the implementation provided in step 2, obtain one or more noise images, thereby obtaining one or more training image pairs.

For example, the image processing apparatus obtains any one of pixel values in the first image by sampling the pixel values in the first image, and takes the pixel value as the hardware noise a of the target imaging device. The image processing device obtains any one pixel value in the first image by sampling the pixel value in the first image, and takes the sum of the pixel value and the first constant as the hardware noise b of the target imaging device. The image processing device obtains any pixel value in the first image by sampling the pixel value in the first image, and takes the product of the any pixel value and the second constant as the hardware noise c of the target imaging device.

The image processing apparatus obtains a noise image a by adding hardware noise a to each pixel in the second image after obtaining the second image from the third image. The noise image B is obtained by adding hardware noise B to each pixel in the second image, respectively. The noise image C is obtained by adding hardware noise C to each pixel in the second image, respectively. The noise image D is obtained by adding hardware noise a to any one of the pixels in the second image, respectively. The noise image E is obtained by adding hardware noise b to any one pixel in the second image, respectively. The noise image F is obtained by adding hardware noise c to any one pixel in the second image, respectively.

Thus, the third image and the noise image a can be taken as one training image pair, the third image and the noise image B can be taken as one training image pair, the third image and the noise image C can be taken as one training image pair, the third image and the noise image D can be taken as one training image pair, the third image and the noise image E can be taken as one training image pair, and the third image and the noise image F can be taken as one training image pair.
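As a brief sketch of assembling the training image pairs described above (the helper name build_training_pairs is hypothetical):

```python
def build_training_pairs(clean_image, noise_images):
    """Pair each synthesized noise image with the clean noise-reduced image."""
    return [(noise_image, clean_image) for noise_image in noise_images]

# For the example above, passing the third image together with noise images
# A to F yields six training image pairs.
```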

In the embodiment of the present application, since one or more hardware noises are obtained based on the calculation in step 101 and step 102, the calculation efficiency can be improved, and the accuracy of the hardware noises can be improved, and the noise reduction training data is obtained based on the technical scheme disclosed in the embodiment of the present application, so that the efficiency and the accuracy can be improved.

As an alternative embodiment, before executing step 102, the image processing apparatus further executes the following steps:

5. and acquiring a first continuous distribution of the first image.

In an embodiment of the present application, the first continuous distribution is obtained by fitting pixel values in the first image.

In one way of obtaining the first continuous distribution, the image processing apparatus takes the pixel values in the first image as observed values, and selects, from preset distributions, a distribution whose goodness of fit satisfies a distribution requirement as the first continuous distribution, where the distribution requirement includes that the goodness-of-fit coefficient is greater than a fitting threshold, and the fitting threshold is greater than 0 and less than 1.

Optionally, the preset distributions include one or more of the following: Student's t-distribution, the Weibull distribution, the Tukey lambda distribution, the Gaussian distribution, and the Gamma distribution.
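A sketch of this fitting step using SciPy is shown below; using the Kolmogorov-Smirnov p-value as the goodness-of-fit coefficient and 0.95 as the fitting threshold are assumptions of this sketch, not values from the original disclosure:

```python
import numpy as np
from scipy import stats

# Candidate preset distributions, keyed by their scipy.stats names.
CANDIDATES = ("t", "weibull_min", "tukeylambda", "norm", "gamma")

def fit_first_distribution(dark_frame: np.ndarray, fit_threshold: float = 0.95):
    """Fit each candidate to the dark-frame pixel values and return the best
    fit, provided its goodness of fit exceeds the fitting threshold."""
    samples = dark_frame.reshape(-1).astype(np.float64)
    results = []
    for name in CANDIDATES:
        dist = getattr(stats, name)
        params = dist.fit(samples)                        # maximum-likelihood fit
        _, p_value = stats.kstest(samples, name, args=params)
        results.append((p_value, name, params))
    p_value, name, params = max(results)                  # best goodness of fit
    if p_value <= fit_threshold:
        raise ValueError("no candidate distribution meets the fitting threshold")
    return name, params
```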

In one implementation of obtaining the first continuous distribution, the image processing apparatus receives the first continuous distribution input by the user through the input component to obtain the first continuous distribution.

In another implementation manner of acquiring the first continuous distribution, the image processing apparatus receives the first continuous distribution transmitted by the terminal to acquire the first continuous distribution.

After executing step 5, the image processing apparatus executes the following steps in executing step 102:

6. and replacing the second pixel value in the first image by the first pixel value in the first continuous distribution to obtain a reconstructed first image.

When the target imaging device acquires the first image, the pixel values of the first image are subjected to low-bit quantization, so that the pixel values in the first image are discrete data. Specifically, pixel values in the same quantization interval are quantized to the same pixel value in the quantization interval by low bit quantization, pixel values in different quantization intervals are quantized to different pixel values, and there is no intersection between different quantization intervals.

For convenience, the following description will use [a, b] to denote a value range greater than or equal to a and less than or equal to b, use (c, d] to denote a value range greater than c and less than or equal to d, and use [e, f) to denote a value range greater than or equal to e and less than f.

For example, before low-bit quantization of the pixel values of the first image, the pixel values of the first image include 2.443, 5.6478, 76.321245, 155.32, and 220.4321. The quantization intervals include [0, 50), (50, 100], [100, 160), and [160, 250); the pixel values within [0, 50) are each quantized to 25, the pixel values within (50, 100] are each quantized to 80, the pixel values within [100, 160) are each quantized to 135, and the pixel values within [160, 250) are each quantized to 215. Then, by low-bit quantization of the pixel values in the first image, 2.443 and 5.6478 are each quantized to 25, 76.321245 is quantized to 80, 155.32 to 135, and 220.4321 to 215.

As another example, FIG. 2 illustrates a process in which a continuous distribution is quantized. The horizontal axis shown in fig. 2 is a pixel value, the vertical axis is a ratio of a pixel corresponding to the pixel value to the total number of pixels in the image, and the curve in the graph is a continuous distribution obtained by fitting the pixel values in the image. The black line segment parallel to the horizontal axis on the curve represents a quantization interval, and the pixel values in the quantization interval in the image are quantized to the intersection point of the black line segment corresponding to the quantization interval and the curve. If the quantization interval corresponding to the segment AC is [ -2, -1], and the intersection point of the segment AC and the curve is the point B, where the abscissa of the point B is-1.3, then the pixel values between [ -2, -1] are quantized to-1.3 by quantizing the image.
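For illustration, a small Python sketch of this low-bit quantization, using the intervals and quantized values from the numeric example above (the intervals are treated uniformly as half-open [low, high) here for simplicity):

```python
import numpy as np

# (lower bound, upper bound, quantized value) for each quantization interval.
INTERVALS = [(0, 50, 25), (50, 100, 80), (100, 160, 135), (160, 250, 215)]

def low_bit_quantize(pixel_values):
    """Map every pixel value to the single quantized value of its interval."""
    quantized = []
    for v in pixel_values:
        for low, high, q in INTERVALS:
            if low <= v < high:
                quantized.append(q)
                break
    return np.array(quantized)

# low_bit_quantize([2.443, 5.6478, 76.321245, 155.32, 220.4321])
# returns [25, 25, 80, 135, 215], matching the example above.
```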

Because the pixel values in the first continuous distribution are continuous data, the pixel values in the first image are replaced by the pixel values obtained by sampling from the first continuous distribution, so that the precision of the pixel values in the first image can be improved, and the reconstructed first image is obtained.

In this step, the first pixel value is an arbitrary pixel value in the first continuous distribution, and the second pixel value is an arbitrary pixel value in the first image. It should be understood that the first pixel value and the second pixel value in this step are examples, and it should not be understood that only one pixel value in the first continuous distribution is used to replace one pixel value in the first image, and in practical applications, the image processing apparatus may respectively sample one pixel value from the first continuous distribution for each pixel value in the first image, and respectively replace the pixel value in the first image with the sampled pixel value, so as to obtain the reconstructed first image.

For example, the first image includes a pixel value a and a pixel value b. The image processing apparatus samples pixel values c from the first continuous distribution and replaces pixel value a with pixel value c and replaces pixel value b with pixel value c.

For another example, the first image includes a pixel value a and a pixel value b. The image processing apparatus samples pixel values c and pixel values d from the first continuous distribution, and replaces the pixel value a with the pixel value c, and replaces the pixel value b with the pixel value d.
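A minimal SciPy sketch of this replacement step is given below; it draws one sample per pixel from the fitted first continuous distribution (without the order constraints introduced in the following embodiment), and the function name is illustrative:

```python
import numpy as np
from scipy import stats

def reconstruct_first_image(dark_frame: np.ndarray, dist_name: str, params) -> np.ndarray:
    """Replace every pixel value of the dark frame with a value drawn from the
    fitted first continuous distribution, yielding the reconstructed first image."""
    dist = getattr(stats, dist_name)
    # One sample per pixel; a single sample could also replace several pixels,
    # as in the pixel-value-c example above.
    return dist.rvs(*params, size=dark_frame.shape)
```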

7. And sampling the pixel value in the reconstructed first image to obtain the hardware noise.

Since the accuracy of the pixel values in the reconstructed first image is higher than that of the pixel values in the first image, the image processing device samples the pixel values in the reconstructed first image to obtain the hardware noise generated when the target imaging device generates an image under the condition of an optical signal, and the accuracy of the hardware noise can be improved.

As an alternative embodiment, the image processing apparatus performs the following steps in the process of performing step 6:

8. A second continuous distribution including the second pixel value is determined from the first continuous distribution.

In this step, the second continuous distribution is part of the first continuous distribution and includes the second pixel value. When the second pixel value is the largest pixel value in the first image, the smallest pixel value in the second continuous distribution is greater than or equal to the second-largest pixel value in the first image.

For example, the first image contains 4 pixel values: 3, 40, 60 and 178. If the second pixel value is 178, the minimum pixel value in the second continuous distribution is greater than or equal to 60.

In the case where the second pixel value is the smallest pixel value in the first image, the largest pixel value in the second continuous distribution is less than or equal to the second-smallest pixel value in the first image.

For example, the first image contains 4 pixel values: 3, 40, 60 and 178. If the second pixel value is 3, the maximum pixel value in the second continuous distribution is less than or equal to 40.

In the case where the second pixel value is neither the maximum pixel value nor the minimum pixel value in the first image, the maximum pixel value in the second continuous distribution is less than or equal to a third pixel value and the minimum pixel value in the second continuous distribution is greater than or equal to a fourth pixel value, where the third pixel value is the i-th largest pixel value in the first image, the second pixel value is the (i+1)-th largest pixel value in the first image, the fourth pixel value is the (i+2)-th largest pixel value in the first image, and i is a positive integer.

For example, the first image contains 4 pixel values: 3, 40, 60 and 178. If the second pixel value is 60, then i is 1, the minimum pixel value in the second continuous distribution is greater than or equal to 40, and the maximum pixel value in the second continuous distribution is less than or equal to 178. If the second pixel value is 40, then i is 2, the minimum pixel value in the second continuous distribution is greater than or equal to 3, and the maximum pixel value in the second continuous distribution is less than or equal to 60.
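The three cases above amount to bounding each pixel's replacement interval by its neighbouring quantization levels. A minimal sketch, assuming numpy and the toy pixel values from the examples; treating the ends as unbounded for the largest and smallest values is one possible reading, and all names are illustrative:

```python
import numpy as np

def replacement_bounds(value: float, image: np.ndarray):
    """Bounds of the second continuous distribution allowed for one quantized pixel
    value, following the three cases above (largest, smallest, in-between)."""
    levels = np.unique(image)                        # distinct pixel values, sorted ascending
    k = int(np.searchsorted(levels, value))          # position of this value among the levels
    lower = levels[k - 1] if k > 0 else -np.inf      # neighbouring smaller level, if any
    upper = levels[k + 1] if k + 1 < len(levels) else np.inf  # neighbouring larger level, if any
    return lower, upper

first_image = np.array([3.0, 40.0, 60.0, 178.0])
print(replacement_bounds(178.0, first_image))        # largest value case:  (60.0, inf)
print(replacement_bounds(3.0, first_image))          # smallest value case: (-inf, 40.0)
print(replacement_bounds(60.0, first_image))         # middle value case:   (40.0, 178.0)
```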

Alternatively, the process of determining the second continuous distribution including the second pixel value from the first continuous distribution can be seen in FIG. 3. In the coordinate system shown in FIG. 3, the horizontal axis is the pixel value, the vertical axis is the proportion, and the curve is the first continuous distribution. If the abscissa of point D is X - 1/q and the ordinate of point D is p, then the proportion of pixels with pixel value X - 1/q in the total number of pixels in the first image is p.

If the abscissa of point C is X, the second pixel value is X, the abscissa of point E is the i-th largest pixel value in the first image, and the abscissa of point A is the (i+2)-th largest pixel value in the first image. The image processing apparatus may select, as the second continuous distribution, the distribution covered by any sub-curve of curve AE that includes point C. As shown in FIG. 3, the second continuous distribution including the second pixel value determined by the image processing apparatus from the first continuous distribution is the distribution covered by curve BD.

9. Replace the second pixel value in the first image with the first pixel value in the second continuous distribution to obtain the reconstructed first image.

Call the value interval of the pixel values in the first continuous distribution the first interval, the value interval of the pixel values in the second continuous distribution the second interval, and the quantization interval corresponding to the second pixel value the third interval. The overlap between the second interval and the third interval is then higher than the overlap between the first interval and the third interval.

Because the overlap between the second interval and the third interval is higher than the overlap between the first interval and the third interval, replacing the second pixel value with a first pixel value drawn from the second continuous distribution keeps the replacement close to the original quantized value. The image processing apparatus thus improves the precision of the pixel values in the first image while limiting the deviation introduced by the replacement, and obtains the reconstructed first image.

Alternatively, the determination of the first pixel value from the second continuous distribution can be seen in FIG. 3. As described for FIG. 3 in step 8, the second continuous distribution is the distribution covered by curve BD. In FIG. 3 the abscissa of point B is X + 1/q, so the first pixel value may be any value in the interval [X - 1/q, X + 1/q].

It should be understood that the second continuous distribution in step 8 and step 9 is only an example; this does not mean that only one continuous distribution can be determined from the first continuous distribution, nor that only one pixel value in the first image can be replaced based on the scheme disclosed in step 8 and step 9. In practice, for every pixel value in the first image a corresponding second continuous distribution can be determined from the first continuous distribution, and the pixel values sampled from these second continuous distributions are used to replace the corresponding pixel values in the first image, so as to obtain the reconstructed first image.

For example, the first image includes pixel a, pixel b, pixel c and pixel d, where the pixel values of pixel a and pixel b are both a first value, the pixel value of pixel c is a second value, and the pixel value of pixel d is a third value; the first value, the second value and the third value are three different values. The image processing apparatus determines, from the first continuous distribution, a continuous distribution A including the first value, a continuous distribution B including the second value, and a continuous distribution C including the third value.

The image processing apparatus may replace the pixel value of pixel c with a pixel value e sampled from the continuous distribution B, and replace the pixel value of pixel d with a pixel value f sampled from the continuous distribution C.

For pixel a and pixel b, the image processing apparatus may replace the pixel value of pixel a with a pixel value g sampled from the continuous distribution A and replace the pixel value of pixel b with that same pixel value g; alternatively, it may replace the pixel value of pixel a with pixel value g and replace the pixel value of pixel b with another pixel value h sampled from the continuous distribution A.

As an alternative embodiment, the image processing apparatus performs the following steps in the process of performing step 5:

10. Determine a first number of first pixels in the first image and a second number of second pixels in the first image, where the pixel value of the first pixels is the second pixel value and the pixel value of the second pixels is different from the second pixel value.

11. A first ratio of the first number to a third number is determined, the third number being the number of pixels in the first image.

12. A second ratio of the second number to the third number is determined.

13. Obtain the first continuous distribution of the pixel values in the first image according to the second pixel value, the pixel value of the second pixels, the first ratio and the second ratio.

In one possible implementation, the image processing apparatus determines a first point in a distribution coordinate system with the second pixel value as the abscissa and the first ratio as the ordinate, and determines a second point with the pixel value of the second pixel as the abscissa and the second ratio as the ordinate. The distribution coordinate system is a two-dimensional coordinate system whose horizontal axis is the pixel value and whose vertical axis is the ratio. The image processing apparatus obtains the first continuous distribution by curve-fitting the first point and the second point.

Optionally, the image processing apparatus may determine, for each pixel value in the first image, the proportion of pixels with that value, and determine the corresponding point of each pixel value in the distribution coordinate system. The image processing apparatus then obtains the first continuous distribution by curve-fitting all the points.

For example, the first image includes pixel values a, b, c and d, where the number of pixels with pixel value a is 8, the number with pixel value b is 3, the number with pixel value c is 8, and the number with pixel value d is 1. The proportion of pixels with pixel value a in the first image is then 8/(8+3+8+1) = 2/5, the proportion with pixel value b is 3/(8+3+8+1) = 3/20, the proportion with pixel value c is 8/(8+3+8+1) = 2/5, and the proportion with pixel value d is 1/(8+3+8+1) = 1/20.

The image processing apparatus obtains point A in the distribution coordinate system by taking pixel value a as the abscissa and 2/5 as the ordinate; point B by taking pixel value b as the abscissa and 3/20 as the ordinate; point C by taking pixel value c as the abscissa and 2/5 as the ordinate; and point D by taking pixel value d as the abscissa and 1/20 as the ordinate. The image processing apparatus obtains the first continuous distribution by curve-fitting points A, B, C and D.
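Steps 10 to 13 thus reduce the fit to a set of (pixel value, proportion) points. A minimal sketch, assuming numpy and scipy and a Gaussian-like bell curve as the fitting family (the application only requires some bell-shaped continuous distribution); the synthetic dark frame and all names are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def bell(x, mu, sigma, a):
    """Bell-shaped fitting family; a Gaussian-like curve is used here for illustration."""
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_first_continuous_distribution(dark_frame: np.ndarray):
    """Curve-fit the (pixel value, proportion) points of a dark frame."""
    values, counts = np.unique(dark_frame, return_counts=True)
    ratios = counts / dark_frame.size                 # proportion of pixels per pixel value
    p0 = [float(dark_frame.mean()), float(dark_frame.std()) + 1e-6, float(ratios.max())]
    params, _ = curve_fit(bell, values, ratios, p0=p0, maxfev=10000)
    return params                                     # fitted (mu, sigma, a)

rng = np.random.default_rng(0)
dark_frame = np.round(rng.normal(0.0, 2.0, size=(64, 64)))   # toy quantized dark frame
mu, sigma, a = fit_first_continuous_distribution(dark_frame)
print(mu, sigma, a)
```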

In another possible implementation, the image processing apparatus determines the first point in the distribution coordinate system with the second pixel value as the ordinate and the first ratio as the abscissa, and determines the second point with the pixel value of the second pixel as the ordinate and the second ratio as the abscissa. In this case the distribution coordinate system is a two-dimensional coordinate system whose horizontal axis is the ratio and whose vertical axis is the pixel value. The image processing apparatus obtains the first continuous distribution by curve-fitting the first point and the second point.

Optionally, the image processing apparatus may determine, for each pixel value in the first image, the proportion of pixels with that value, and determine the corresponding point of each pixel value in the distribution coordinate system. The image processing apparatus then obtains the first continuous distribution by curve-fitting all the points.

For example, the first image includes pixel values a, b, c and d, where the number of pixels with pixel value a is 8, the number with pixel value b is 3, the number with pixel value c is 8, and the number with pixel value d is 1. The proportion of pixels with pixel value a in the first image is then 8/(8+3+8+1) = 2/5, the proportion with pixel value b is 3/(8+3+8+1) = 3/20, the proportion with pixel value c is 8/(8+3+8+1) = 2/5, and the proportion with pixel value d is 1/(8+3+8+1) = 1/20.

The image processing apparatus obtains point A in the distribution coordinate system by taking pixel value a as the ordinate and 2/5 as the abscissa; point B by taking pixel value b as the ordinate and 3/20 as the abscissa; point C by taking pixel value c as the ordinate and 2/5 as the abscissa; and point D by taking pixel value d as the ordinate and 1/20 as the abscissa. The image processing apparatus obtains the first continuous distribution by curve-fitting points A, B, C and D.

As an alternative, the first continuous distribution is a bell-shaped continuous distribution. Optionally, the first continuous distribution is one of: Student's t-distribution, Weibull distribution, Tukey lambda distribution, Gaussian distribution, and Gamma distribution.

As an alternative implementation, the first image includes a first pixel region, and the first pixel region includes four or more pixels. In this embodiment, the first pixel region is a pixel region including any four or more pixels in the first image.

In this embodiment, the image processing apparatus performs the following steps in executing step 102:

14. Sample the pixel values of the first pixel region from the first image.

15. Obtain the hardware noise according to the pixel values in the first pixel region.

Because adjacent pixels in an image carry spatial position information, structural noise related to this spatial position information exists between adjacent pixels. Since the first pixel region includes four or more pixels, it necessarily contains adjacent pixels, so the first pixel region carries not only the noise information of individual pixels but also structural noise.

Therefore, by deriving the hardware noise generated by the target imaging device in the presence of a light signal from the first pixel region, the image processing apparatus enriches the information carried by the hardware noise. Further, since the second image also includes four or more pixels, the hardware noise obtained by performing step 14 and step 15 includes structural noise and therefore better matches the structure of the second image; when the image processing apparatus obtains the noise image by performing step 2, the noise image is closer to a RAW image acquired by the target imaging device.

In one possible implementation, the image processing apparatus treats the pixel values of the first pixel region as hardware noise.

It should be appreciated that, in this implementation, if the hardware noise is added to the second image, a pixel region matching the first pixel region may be determined from the second image as a first matching pixel region, where the size of the first matching pixel region is the same as the size of the first pixel region. The corresponding pixel values in the first pixel region and the first matching pixel region are added to obtain the noise image.

In another possible implementation manner, the image processing apparatus takes an average value of pixel values in the first pixel region as hardware noise of the target imaging device.

In yet another possible implementation, the image processing apparatus takes the sum of the pixel values in the first pixel region as the hardware noise of the target imaging device.

In this embodiment, since the first pixel region includes structural noise, the image processing apparatus obtains hardware noise according to the first pixel region, and can enrich information carried by the hardware noise.
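A minimal sketch of this block extraction idea, assuming numpy; the dark frame, patch size, shot-noise stand-in and the use of the whole patch (rather than its mean or sum) are illustrative choices:

```python
import numpy as np

def sample_noise_patch(dark_frame: np.ndarray, patch_h: int, patch_w: int,
                       rng: np.random.Generator) -> np.ndarray:
    """Crop a random pixel region from the dark frame and use it as hardware noise.
    Keeping the whole patch preserves structural noise; patch.mean() or patch.sum()
    would give the scalar alternatives mentioned above."""
    top = int(rng.integers(0, dark_frame.shape[0] - patch_h + 1))
    left = int(rng.integers(0, dark_frame.shape[1] - patch_w + 1))
    return dark_frame[top:top + patch_h, left:left + patch_w]

rng = np.random.default_rng(0)
dark_frame = rng.normal(0.0, 2.0, size=(256, 256))                   # stand-in for the first image
second_image = rng.poisson(100.0, size=(64, 64)).astype(np.float64)  # image containing shot noise
hardware_noise = sample_noise_patch(dark_frame, 64, 64, rng)
noise_image = second_image + hardware_noise                          # add noise to the matching region
```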

In an alternative embodiment, the size of the first pixel region is the same as the size of the second image, so the structure of the first pixel region matches the structure of the second image. Since structural noise is related to the structure of neighboring pixels, the structural noise carried by the first pixel region matches the structure of the second image more closely.

Therefore, by obtaining the hardware noise from the first pixel region, the image processing apparatus improves the degree to which the hardware noise matches the structure of the second image; that is, the hardware noise obtained from the first pixel region is closer to the hardware noise that the target imaging device would generate when acquiring a first reference image, where the first reference image has the same size as the second image. This embodiment therefore better simulates the hardware noise generated when the imaging device acquires the first reference image.

As a possible implementation manner, the first image includes a second pixel region, and the pixel arrangement manner of the second pixel region is the same as the pixel arrangement manner of the second image. The second pixel region and the first pixel region may be the same or different, and this is not limited in this application.

In the embodiment of the present application, the pixel arrangement includes the arrangement of pixels of different channels in an image. For example, the pixel arrangement in a RAW image may be a Bayer pattern.

In this embodiment, the image processing apparatus, in executing step 102, executes the steps of:

16. and sampling the first image to obtain the pixel value of the second pixel area.

17. Obtain the hardware noise according to the second pixel region.

Since the pixel arrangement of an image affects the distribution of its noise, and the pixel arrangement of the second pixel region is the same as that of the second image, the distribution of noise in the second pixel region matches the distribution of noise in the second image more closely.

Therefore, by obtaining the hardware noise generated by the target imaging device in the presence of a light signal from the second pixel region, the image processing apparatus makes the distribution of the obtained hardware noise closer to the distribution of hardware noise that the imaging device would generate when acquiring a second reference image, where the pixel arrangement of the second reference image is the same as the pixel arrangement of the second pixel region. This embodiment therefore better simulates the hardware noise generated when the imaging device acquires the second reference image.

Optionally, if the hardware noise is obtained based on step 17, adding the hardware noise to the second image includes the following process: determine, from the second image, a pixel region matching the second pixel region as a second matching pixel region, where the size of the second matching pixel region is the same as the size of the second pixel region and the pixel arrangement of the second matching pixel region is the same as the pixel arrangement of the second pixel region; then add the corresponding pixel values in the second pixel region and the second matching pixel region to obtain the noise image.
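A minimal sketch of sampling a region whose pixel arrangement matches the second image, assuming numpy, a single-channel RAW dark frame and a 2x2 Bayer period; restricting the crop offsets to even rows and columns (with the second image assumed to start at Bayer phase (0, 0)) is one simple way to preserve the colour-filter alignment:

```python
import numpy as np

def sample_bayer_aligned_patch(dark_frame: np.ndarray, patch_h: int, patch_w: int,
                               rng: np.random.Generator) -> np.ndarray:
    """Crop a region whose Bayer phase matches a second image assumed to start at
    phase (0, 0): even crop offsets keep the 2x2 colour-filter arrangement aligned."""
    max_top = (dark_frame.shape[0] - patch_h) // 2
    max_left = (dark_frame.shape[1] - patch_w) // 2
    top = 2 * int(rng.integers(0, max_top + 1))
    left = 2 * int(rng.integers(0, max_left + 1))
    return dark_frame[top:top + patch_h, left:left + patch_w]

rng = np.random.default_rng(1)
dark_frame = rng.normal(0.0, 2.0, size=(256, 256))   # stand-in for the first image (RAW dark frame)
hardware_noise = sample_bayer_aligned_patch(dark_frame, 64, 64, rng)
```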

As an alternative embodiment, if the implementation of steps 5 to 7 is called the high-bit reconstruction method, the implementation of steps 14 and 15 is called the block extraction method, and the implementation of steps 16 and 17 is called the pixel arrangement method, then the image processing apparatus may combine the high-bit reconstruction method, the block extraction method and the pixel arrangement method in practical applications.

In one possible implementation, the image processing apparatus may use both the high-bit reconstruction method and the block extraction method in performing step 102. Specifically, the first image includes a first pixel region, and the image processing apparatus determines the first pixel region from the first image, obtains a first intermediate continuous distribution of the first pixel region by fitting the pixel values in the first pixel region, replaces a fifth pixel value in the first pixel region with a fourth pixel value in the first intermediate continuous distribution to obtain a first reconstructed pixel region, and samples the pixel values in the first reconstructed pixel region to obtain the hardware noise.

Optionally, in replacing the fifth pixel value in the first pixel region with the fourth pixel value in the first intermediate continuous distribution to obtain the first reconstructed pixel region, the image processing apparatus performs the following steps: determine a second intermediate continuous distribution containing the fifth pixel value from the first intermediate continuous distribution, and replace the fifth pixel value in the first pixel region with the fourth pixel value in the second intermediate continuous distribution to obtain the first reconstructed pixel region.

In another possible implementation, the image processing apparatus may use both the high-bit reconstruction method and the pixel arrangement method in performing step 102. Specifically, the first image includes a second pixel region whose pixel arrangement is the same as the pixel arrangement of the second image. The image processing apparatus determines the second pixel region from the first image, obtains a third intermediate continuous distribution of the second pixel region by fitting the pixel values in the second pixel region, replaces a seventh pixel value in the second pixel region with a sixth pixel value in the third intermediate continuous distribution to obtain a second reconstructed pixel region, and samples the pixel values in the second reconstructed pixel region to obtain the hardware noise.

Optionally, in replacing the seventh pixel value in the second pixel region with the sixth pixel value in the third intermediate continuous distribution to obtain the second reconstructed pixel region, the image processing apparatus performs the following steps: determine a fourth intermediate continuous distribution containing the seventh pixel value from the third intermediate continuous distribution, and replace the seventh pixel value in the second pixel region with the sixth pixel value in the fourth intermediate continuous distribution to obtain the second reconstructed pixel region.

In yet another possible implementation, the image processing apparatus may combine the block extraction method and the pixel arrangement method in performing step 102. Specifically, the first image includes a first pixel region whose size is the same as the size of the second image and whose pixel arrangement is the same as the pixel arrangement of the second image. The image processing apparatus samples the pixel values of the first pixel region from the first image and obtains the hardware noise according to the pixel values of the first pixel region.

In yet another possible implementation, the image processing apparatus may combine the high-bit reconstruction method, the block extraction method and the pixel arrangement method in performing step 102. Specifically, the first image includes a first pixel region whose size is the same as the size of the second image and whose pixel arrangement is the same as the pixel arrangement of the second image.

The image processing apparatus determines the first pixel region from the first image, obtains a third continuous distribution of the first pixel region by fitting the pixel values in the first pixel region, replaces the pixel values in the first pixel region with pixel values in the third continuous distribution to obtain a reconstructed first pixel region, and samples the pixel values in the reconstructed first pixel region to obtain the hardware noise.
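A minimal sketch of the high-bit reconstruction step applied to such a cropped patch, assuming numpy; for simplicity the replacement value is drawn uniformly inside the allowed interval rather than from a fitted continuous distribution, and the handling of the largest and smallest quantization levels is an illustrative choice:

```python
import numpy as np

def reconstruct_high_bit(patch: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Replace each quantized value in a dark-frame patch by a sample drawn between
    its neighbouring quantization levels. A uniform draw inside the allowed interval
    stands in for sampling the fitted continuous distribution over that interval."""
    levels = np.unique(patch)                            # distinct quantization levels, ascending
    idx = np.searchsorted(levels, patch)                 # level index of every pixel
    step = float(np.diff(levels).mean()) if levels.size > 1 else 1.0
    lower = np.where(idx > 0, levels[np.maximum(idx - 1, 0)], patch - step)
    upper = np.where(idx < levels.size - 1,
                     levels[np.minimum(idx + 1, levels.size - 1)], patch + step)
    return rng.uniform(lower, upper)                     # higher-precision noise values

rng = np.random.default_rng(2)
bayer_patch = np.round(rng.normal(0.0, 2.0, size=(64, 64)))  # Bayer-aligned, second-image-sized crop
hardware_noise = reconstruct_high_bit(bayer_patch, rng)
```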

As an alternative embodiment, the hardware noise generated by the target imaging device generating an image in the presence of a light signal includes one or more of the following: noise generated by the analog gain, noise generated by the digital gain, and quantization noise.

As an alternative embodiment, the image processing apparatus acquires the first image by performing the steps of:

18. and acquiring a black image set, wherein images in the black image set are all images acquired by the target imaging equipment in a dark environment.

In the embodiment of the application, the images in the black image set are all acquired by the target imaging device in a dark environment. For any two images in the black image set, the analog gain used by the target imaging device when acquiring them is different, or the digital gain used by the target imaging device when acquiring them is different.

For example, a black image set includes image a, image b and image c, all of which are acquired by the target imaging device.

Suppose the analog gain used to acquire image a is a first analog gain, the digital gain used to acquire image a is a first digital gain, the analog gain used to acquire image b is a second analog gain, and the digital gain used to acquire image b is a second digital gain. One of the following relationships then holds between the first analog gain, the second analog gain, the first digital gain and the second digital gain: 1) the first analog gain is the same as the second analog gain, but the first digital gain is different from the second digital gain; 2) the first analog gain is different from the second analog gain, but the first digital gain is the same as the second digital gain; 3) the first analog gain is different from the second analog gain, and the first digital gain is different from the second digital gain.

In one possible implementation, the images in the black image set are images acquired by the target imaging device in a dark environment at different sensitivities (ISO). For example, the ISO of the target imaging device is set to 400 and an image a is acquired in a dark environment; the ISO is set to 800 and an image b is acquired in a dark environment; the ISO is set to 1600 and an image c is acquired in a dark environment. The black image set includes image a, image b and image c.

19. Sample an image from the black image set as the first image.

Because the analog gain or the digital gain differs between any two images in the black image set, the black image set contains hardware noise generated by the target imaging device when acquiring images under different analog gains or different digital gains.

Since both the analog gain and the digital gain of the target imaging device are variable, the hardware noise generated differs when images are acquired with different analog gains or different digital gains. Therefore, by sampling one image from the black image set as the first image and obtaining the hardware noise of the target imaging device from that image, the image processing apparatus can better simulate the hardware noise generated when the target imaging device acquires images.

In one possible implementation, the image processing apparatus selects an image from the black image set as the first image.
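A minimal sketch of the black image set and of sampling one dark frame from it as the first image, assuming numpy; the ISO keys and synthetic dark frames are placeholders, not data prescribed by the method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical black image set: one dark frame per ISO setting of the same device.
black_image_set = {
    400:  rng.normal(0.0, 1.0, size=(256, 256)),
    800:  rng.normal(0.0, 2.0, size=(256, 256)),
    1600: rng.normal(0.0, 4.0, size=(256, 256)),
}

def sample_first_image(black_image_set: dict, rng: np.random.Generator) -> np.ndarray:
    """Pick one dark frame from the black image set to serve as the first image."""
    iso = int(rng.choice(list(black_image_set.keys())))
    return black_image_set[iso]

first_image = sample_first_image(black_image_set, rng)
print(first_image.shape)
```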

Based on the technical scheme provided by the embodiment of the application, the embodiment of the application also provides a possible application scene.

Due to the portability of mobile phones, people photograph with them more and more frequently, so the imaging performance of a mobile phone is very important. Reducing the noise in images acquired by a mobile phone is therefore of great significance for improving its photographing performance.

A denoising neural network can be obtained by training a neural network, and the mobile phone can then denoise acquired images by processing them with the denoising neural network. Training such a network requires a large amount of denoising training data, and the denoising performance of the trained network depends on the amount of training data. Based on the technical scheme disclosed in the embodiments of the present application, denoising training data can be obtained efficiently.

Specifically, a mobile phone is used to acquire a RAW image in the presence of a light signal as the third image. Having acquired the third image, the image processing apparatus obtains the second image based on step 4.

Images are acquired with the mobile phone at different sensitivities (ISO) in a dark environment to obtain the black image set. For example, the ISO of the mobile phone is set to 400 and an image a is acquired in a dark environment; the ISO is set to 800 and an image b is acquired; the ISO is set to 1600 and an image c is acquired. The black image set is obtained from image a, image b and image c.

Having acquired the black image set, the image processing apparatus selects one image from it as the first image and, based on the technical scheme disclosed above, obtains from the first image the hardware noise generated by the mobile phone when generating an image in the presence of a light signal. The image processing apparatus adds the hardware noise to the second image to obtain a noise image, and may then use the noise image and the third image as a training image pair.
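A minimal sketch of assembling one training pair along these lines, assuming numpy; modelling the shot noise of the second image with a Poisson draw on the clean signal is an illustrative choice, not a step prescribed by this application:

```python
import numpy as np

def make_training_pair(third_image: np.ndarray, dark_frame: np.ndarray,
                       rng: np.random.Generator):
    """Build one (noisy, clean) training pair: derive the second image from the clean
    RAW image by adding shot noise (a Poisson draw here), then add hardware noise
    cropped from a dark frame of the same device."""
    second_image = rng.poisson(np.clip(third_image, 0.0, None)).astype(np.float64)
    h, w = second_image.shape
    top = int(rng.integers(0, dark_frame.shape[0] - h + 1))
    left = int(rng.integers(0, dark_frame.shape[1] - w + 1))
    hardware_noise = dark_frame[top:top + h, left:left + w]
    noise_image = second_image + hardware_noise
    return noise_image, third_image                       # (noisy input, clean target)

rng = np.random.default_rng(4)
clean_raw = rng.uniform(0.0, 255.0, size=(64, 64))        # stand-in for the third image
dark_frame = rng.normal(0.0, 2.0, size=(256, 256))        # stand-in for the first image
noisy, clean = make_training_pair(clean_raw, dark_frame, rng)
```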

It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.

The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.

Referring to fig. 4, fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure, in which the image processing apparatus 1 includes an obtaining unit 11 and a processing unit 12, where:

an acquisition unit 11 configured to acquire a first image, which is an image acquired by a target imaging device in a dark environment;

and the processing unit 12 is configured to sample pixel values in the first image to obtain hardware noise generated when the target imaging device generates an image in the presence of a light signal.

With reference to any embodiment of the present application, the obtaining unit 11 is further configured to obtain a second image, where the second image includes shot noise;

the image processing apparatus 1 further includes: and the adding unit is used for adding the hardware noise to the second image to obtain a noise image.

With reference to any embodiment of the present application, the obtaining unit 11 is configured to:

acquiring a third image, wherein the third image is a clean image;

and obtaining the second image according to the third image.

With reference to any embodiment of the present disclosure, the obtaining unit 11 is further configured to obtain a first continuous distribution of the first image before sampling pixel values in the first image to obtain hardware noise generated by the target imaging device generating an image in the presence of an optical signal, where the first continuous distribution is obtained by fitting the pixel values in the first image;

the processing unit 12 is configured to:

replacing a second pixel value in the first image by using a first pixel value in the first continuous distribution to obtain a reconstructed first image;

and sampling the pixel value in the reconstructed first image to obtain the hardware noise.

With reference to any embodiment of the present application, the obtaining unit 11 is configured to:

determining a second continuous distribution comprising the second pixel values from the first continuous distribution;

in the case where the second pixel value is the largest pixel value in the first image, the smallest pixel value in the second continuous distribution is greater than or equal to the second-largest pixel value in the first image; in the case where the second pixel value is the smallest pixel value in the first image, the largest pixel value in the second continuous distribution is less than or equal to the second-smallest pixel value in the first image; in the case where the second pixel value is neither the maximum pixel value nor the minimum pixel value in the first image, the maximum pixel value in the second continuous distribution is less than or equal to a third pixel value and the minimum pixel value in the second continuous distribution is greater than or equal to a fourth pixel value, where the third pixel value is the i-th largest pixel value in the first image, the second pixel value is the (i+1)-th largest pixel value in the first image, the fourth pixel value is the (i+2)-th largest pixel value in the first image, and i is a positive integer;

and replacing the second pixel value in the first image by the first pixel value in the second continuous distribution to obtain the reconstructed first image.

In combination with any embodiment of the present application, the first continuous profile comprises a bell-shaped continuous profile.

In combination with any embodiment of the present application, the first image includes a first pixel region, and the first pixel region includes four or more pixels;

the processing unit 12 is configured to:

sampling the first image to obtain pixel values of the first pixel area;

and obtaining the hardware noise according to the pixel value of the first pixel area.

In combination with any embodiment of the present application, a size of the first pixel region is the same as a size of the second image.

With reference to any one of the embodiments of the present application, the arrangement of the pixels in the first pixel region is the same as the arrangement of the pixels in the second image.

In combination with any embodiment of the present application, the processing unit 12 is configured to:

acquiring a third continuous distribution of the first pixel region, wherein the third continuous distribution is obtained by fitting pixel values in the first pixel region;

replacing the pixel values in the first pixel region with the pixel values in the third continuous distribution to obtain a reconstructed first pixel region;

and sampling the pixel value in the reconstructed first pixel region to obtain the hardware noise.

With reference to any one of the embodiments of the present application, the first image includes a second pixel region, and the arrangement of pixels of the second pixel region is the same as the arrangement of pixels of the second image;

the processing unit 12 is configured to:

sampling the first image to obtain pixel values of the second pixel region;

and obtaining the hardware noise according to the pixel value in the second pixel region.

In combination with any embodiment of the present application, the hardware noise includes one or more of: noise generated by the analog gain, noise generated by the digital gain, and quantization noise.

In this embodiment, the obtaining unit 11 may be a data interface, and the processing unit 12 may be a processor.

In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present application may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.

Fig. 5 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 2 includes a processor 21, a memory 22, an input device 23, and an output device 24. The processor 21, the memory 22, the input device 23 and the output device 24 are coupled by a connector, which includes various interfaces, transmission lines or buses, etc., and the embodiment of the present application is not limited thereto. It should be appreciated that in various embodiments of the present application, coupled refers to being interconnected in a particular manner, including being directly connected or indirectly connected through other devices, such as through various interfaces, transmission lines, buses, and the like.

The processor 21 may be one or more Graphics Processing Units (GPUs), and in the case that the processor 21 is one GPU, the GPU may be a single-core GPU or a multi-core GPU. Alternatively, the processor 21 may be a processor group composed of a plurality of GPUs, and the plurality of processors are coupled to each other through one or more buses. Alternatively, the processor may be other types of processors, and the like, and the embodiments of the present application are not limited.

Memory 22 may be used to store computer program instructions and various types of computer program code for executing aspects of the present application. Optionally, the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), and is used for storing related instructions and data.

The input means 23 are for inputting data and/or signals and the output means 24 are for outputting data and/or signals. The input device 23 and the output device 24 may be separate devices or may be an integral device.

It is understood that, in the embodiment of the present application, the memory 22 may be used to store not only the relevant instructions, but also relevant data, for example, the memory 22 may be used to store the first image acquired through the input device 23, or the memory 22 may also be used to store hardware noise and the like obtained through the processor 21, and the embodiment of the present application is not limited to the data specifically stored in the memory.

It will be appreciated that fig. 5 only shows a simplified design of an image processing apparatus. In practical applications, the image processing apparatuses may further include other necessary components, including but not limited to any number of input/output devices, processors, memories, etc., and all image processing apparatuses that can implement the embodiments of the present application are within the scope of the present application.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It is also clear to those skilled in the art that the descriptions of the various embodiments of the present application have different emphasis, and for convenience and brevity of description, the same or similar parts may not be repeated in different embodiments, so that the parts that are not described or not described in detail in a certain embodiment may refer to the descriptions of other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)), or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Versatile Disk (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.

One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media that can store program codes, such as a read-only memory (ROM) or a Random Access Memory (RAM), a magnetic disk, or an optical disk.
