Image enhancement method, device and equipment in dark light environment and storage medium

Publication No.: 1966069    Publication date: 2021-12-14

Note: this technology, "Image enhancement method, device and equipment in dark light environment and storage medium", was created by 詹永杰 on 2021-08-04. Abstract: The invention relates to the technical field of image data processing, and discloses an image enhancement method, device, equipment and storage medium in a dark light environment. The method comprises: judging the type of the shooting scene in a dark light environment through motion detection; when the shooting scene is a static scene, setting the ISP parameters to the static scene tuning parameters and collecting a static image; after the static image acquisition is completed, setting the ISP parameters to the universal scene tuning parameters and acquiring dynamic images; and performing image enhancement on the dynamic image according to the image information of the static image. By using the high-quality static image as prior knowledge to enhance the dynamic image, the signal-to-noise ratio of the dynamic image can be improved.

1. An image enhancement method in a dim light environment, comprising:

judging the type of a shooting scene in a dark light environment through motion detection;

when the shooting scene is a static scene, setting the ISP parameters as the static scene tuning parameters and collecting a static image;

after the static image acquisition is completed, setting the ISP parameters as the universal scene tuning parameters and acquiring the dynamic image;

and performing image enhancement on the dynamic image according to the image information of the static image.

2. The method for enhancing an image in a dark light environment according to claim 1, wherein performing image enhancement on the dynamic image according to the image information of the static image specifically comprises:

performing multi-scale Gaussian filtering on the static image;

subtracting the static image after the multi-scale Gaussian filtering from the static image before filtering to obtain multi-scale texture details;

and fusing the multi-scale texture details into the dynamic image in a preset combination mode.

3. The method for enhancing an image in a dark light environment according to claim 1, wherein performing image enhancement on the dynamic image according to the image information of the static image specifically comprises:

extracting data of a Y channel in a YUV color space from the static image, counting a gray level histogram of the Y channel, and extracting a color saturation channel from the static image;

matching a Y channel in the YUV color space of the dynamic image with a Y channel of the static image through histogram specification, and performing weighted fusion on a color saturation channel of the dynamic image and a color saturation channel of the static image;

and replacing the Y channel of the dynamic image by the Y channel regulated by the histogram, and replacing the color saturation channel of the dynamic image by the color saturation channel subjected to weighted fusion.

4. The method for enhancing an image in a dark light environment according to claim 1, wherein performing image enhancement on the dynamic image according to the image information of the static image specifically comprises:

extracting data of a Y channel in a YUV color space from the static image, counting a gray level histogram of the Y channel, and extracting a color saturation channel from the static image;

matching a Y channel in the YUV color space of the dynamic image with a Y channel of the static image through histogram specification, and performing weighted fusion on a color saturation channel of the dynamic image and a color saturation channel of the static image;

replacing the Y channel of the dynamic image by the Y channel regulated by the histogram, and replacing the color saturation channel of the dynamic image by the color saturation channel subjected to weighted fusion;

performing multi-scale Gaussian filtering on the static image;

subtracting the static image after the multi-scale Gaussian filtering from the static image before filtering to obtain multi-scale texture details;

and fusing the multi-scale texture details into the dynamic image in a preset combination mode.

5. The method for image enhancement in a dim light environment according to claim 1, characterized in that the method further comprises:

detecting a shooting angle;

and when the change of the shooting angle is detected, clearing the acquired static image, and setting the ISP parameter as the static scene tuning parameter to acquire the static image again.

6. The method for enhancing the image in the dark light environment according to claim 5, wherein the detecting the shooting angle specifically includes:

acquiring feature points of a current shot image and a feature descriptor corresponding to each feature point;

calculating the similarity between the feature descriptor of the current shot image and the feature descriptor of the shot image acquired before the first time interval;

if the similarity is larger than a preset threshold value, judging that the shooting angle changes; and if the similarity is not greater than the preset threshold, judging that the shooting angle is not changed.

7. The method for image enhancement in a dim light environment according to claim 1, characterized in that the method further comprises:

and after a second time interval, clearing the acquired static images, and setting the ISP parameters as the static scene tuning parameters to acquire the static images again.

8. An image enhancement device in a dim light environment, comprising:

the shooting scene judging module is used for judging the type of a shooting scene in a dark light environment through motion detection;

the static image acquisition module is used for setting the ISP parameters as the static scene tuning parameters and acquiring a static image when the shooting scene is a static scene;

the dynamic image enhancement module is used for setting the ISP parameters as the universal scene tuning parameters and acquiring dynamic images after the static images are acquired;

and the image enhancement module is used for enhancing the image of the dynamic image according to the image information of the static image.

9. A terminal device, comprising:

a memory for storing a computer program;

a processor for executing the computer program;

wherein the processor, when executing the computer program, implements the method of image enhancement in a dim light environment according to any one of claims 1 to 7.

10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed, implements the image enhancement method in a dim light environment according to any one of claims 1 to 7.

Technical Field

The present invention relates to the field of image data processing technologies, and in particular, to an image enhancement method, an image enhancement device, an image enhancement apparatus, and a storage medium in a dark light environment.

Background

Small network cameras are constrained by lens and sensor size and by frame-rate requirements, which makes it difficult to capture high-quality images in a dark light (low-illumination) environment. This is especially true for low-cost devices: the camera has poor latitude, the sensor has limited light-sensing capability, and only image signals with a low signal-to-noise ratio can be collected. If the objects in the image are to appear sharp and bright, random noise is introduced; otherwise, texture details in the picture are lost because the signal is too weak to capture, and sometimes even basic color restoration cannot be achieved.

There are many methods for image enhancement in dark environments, of which stack noise reduction is a common one. Stack denoising relies on taking multiple photos and then stacking them to eliminate random noise. The ISP image tuning of SoC chip vendors provides a noise reduction technique called 3DNR, which works similarly to stack noise reduction. Its drawback is that if there is a moving object in the picture, ghosting and smear are generated. To avoid ghosting and smear, ISP tuning cannot adopt a high noise-reduction strength and can only strike a compromise between noise and ghosting/smear.

HDR synthesizes multiple frames captured at different exposure values, and can be divided into single-frame HDR and multi-frame HDR. Single-frame HDR combines signals with different exposure values into one HDR image, which requires a sensor large enough to collect signals at different exposure values and thus increases the manufacturing cost. Multi-frame HDR, for example combining 60 captured frames into 30 output frames, requires a high frame rate of 60 fps, which increases the cost of the sensor and of the host chip (a CPU with higher processing power and more memory is needed).

Many leading-edge image enhancement techniques rely on deep neural networks. On the one hand, training a deep neural network is time-consuming and labor-intensive and requires annotating massive amounts of image data. On the other hand, embedded devices have limited computing power, so a trained deep neural network is difficult to run on them (processing a 4K-resolution image may take several seconds or even tens of seconds, making it hard to meet the video frame rate and real-time requirements).

Disclosure of Invention

The technical problem to be solved by the embodiments of the invention is as follows: an image enhancement method, device, equipment and storage medium in a dark light environment are provided, in which a high-quality static image is used as prior knowledge to enhance the dynamic image, thereby improving the signal-to-noise ratio of the dynamic image.

In order to solve the above technical problem, in a first aspect, an embodiment of the present invention provides an image enhancement method in a dark light environment, including:

judging the type of a shooting scene in a dark light environment through motion detection;

when the shooting scene is a static scene, setting the ISP parameters as the static scene tuning parameters and collecting a static image;

after the static image acquisition is completed, setting the ISP parameters as the universal scene tuning parameters and acquiring the dynamic image;

and performing image enhancement on the dynamic image according to the image information of the static image.

With reference to the first aspect, in a possible implementation manner, the performing image enhancement on the dynamic image according to the image information of the static image specifically includes:

performing multi-scale Gaussian filtering on the static image;

subtracting the static image after the multi-scale Gaussian filtering from the static image before filtering to obtain multi-scale texture details;

and fusing the multi-scale texture details into the dynamic image in a preset combination mode.

With reference to the first aspect, in a possible implementation manner, the performing image enhancement on the dynamic image according to the image information of the static image specifically includes:

extracting data of a Y channel in a YUV color space from the static image, counting a gray level histogram of the Y channel, and extracting a color saturation channel from the static image;

matching a Y channel in the YUV color space of the dynamic image with a Y channel of the static image through histogram specification, and performing weighted fusion on a color saturation channel of the dynamic image and a color saturation channel of the static image;

and replacing the Y channel of the dynamic image by the Y channel regulated by the histogram, and replacing the color saturation channel of the dynamic image by the color saturation channel subjected to weighted fusion.

With reference to the first aspect, in a possible implementation manner, the performing image enhancement on the dynamic image according to the image information of the static image specifically includes:

extracting data of a Y channel in a YUV color space from the static image, counting a gray level histogram of the Y channel, and extracting a color saturation channel from the static image;

matching a Y channel in the YUV color space of the dynamic image with a Y channel of the static image through histogram specification, and performing weighted fusion on a color saturation channel of the dynamic image and a color saturation channel of the static image;

replacing the Y channel of the dynamic image by the Y channel regulated by the histogram, and replacing the color saturation channel of the dynamic image by the color saturation channel subjected to weighted fusion;

performing multi-scale Gaussian filtering on the static image;

subtracting the static image after the multi-scale Gaussian filtering from the static image before filtering to obtain multi-scale texture details;

and fusing the multi-scale texture details into the dynamic image in a preset combination mode.

With reference to the first aspect, in one possible implementation manner, the method further includes:

detecting a shooting angle;

and when the change of the shooting angle is detected, clearing the acquired static image, and setting the ISP parameter as the static scene tuning parameter to acquire the static image again.

With reference to the first aspect, in a possible implementation manner, the detecting a shooting angle specifically includes:

acquiring feature points of a current shot image and a feature descriptor corresponding to each feature point;

calculating the similarity between the feature descriptor of the current shot image and the feature descriptor of the shot image acquired before the first time interval;

if the similarity is larger than a preset threshold value, judging that the shooting angle changes; and if the similarity is not greater than the preset threshold, judging that the shooting angle is not changed.

With reference to the first aspect, in one possible implementation manner, the method further includes:

and after a second time interval, clearing the acquired static images, and setting the ISP parameters as the static scene tuning parameters to acquire the static images again.

In order to solve the above technical problem, in a second aspect, an embodiment of the present invention provides an image enhancement device in a dark light environment, including:

the shooting scene judging module is used for judging the type of a shooting scene in a dark light environment through motion detection;

the static image acquisition module is used for setting the ISP parameters as the static scene tuning parameters and acquiring a static image when the shooting scene is a static scene;

the dynamic image enhancement module is used for setting the ISP parameters as the universal scene tuning parameters and acquiring dynamic images after the static images are acquired;

and the image enhancement module is used for enhancing the image of the dynamic image according to the image information of the static image.

In order to solve the foregoing technical problem, in a third aspect, an embodiment of the present invention provides a terminal device, including:

a memory for storing a computer program;

a processor for executing the computer program;

wherein the processor, when executing the computer program, implements the method of image enhancement in a dim light environment according to any of the first aspect.

In order to solve the above technical problem, in a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, which when executed, implements the image enhancement method in a dim light environment according to any one of the first aspect.

Compared with the prior art, the image enhancement method, device, equipment and storage medium in a dark light environment have the following advantages: first, a static image is extracted in the dark light environment; the extracted static image provides accurate and stable prior knowledge and serves as the basis for enhancing the dynamic image; then the dynamic image is enhanced according to the image information in the static image, which avoids introducing random noise or dynamic color noise into the enhanced dynamic image and improves the signal-to-noise ratio of the dynamic image.

Drawings

In order to more clearly illustrate the technical features of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is apparent that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on the drawings without inventive labor.

FIG. 1 is a schematic flow chart diagram illustrating a preferred embodiment of an image enhancement method in a dim light environment according to the present invention;

FIG. 2 is a schematic structural diagram of a preferred embodiment of an image enhancement device in a dim light environment according to the present invention;

FIG. 3 is a schematic structural diagram of a preferred embodiment of a terminal device provided by the present invention.

Detailed Description

In order to clearly understand the technical features, objects and effects of the present invention, the following detailed description of the embodiments of the present invention is provided with reference to the accompanying drawings and examples. The following examples are intended to illustrate the invention, but are not intended to limit the scope of the invention. Other embodiments, which can be derived by those skilled in the art from the embodiments of the present invention without inventive step, shall fall within the scope of the present invention.

In the description of the present invention, it should be understood that the numbers themselves, such as "first", "second", etc., are used only for distinguishing the described objects, do not have a sequential or technical meaning, and cannot be understood as defining or implying the importance of the described objects.

Fig. 1 is a schematic flowchart illustrating an image enhancement method in a dark light environment according to a preferred embodiment of the present invention.

As shown in fig. 1, the image enhancement method includes the steps of:

s10: judging the type of a shooting scene in a dark light environment through motion detection;

s20: when the shooting scene is a static scene, setting the ISP parameters as the static scene tuning parameters and collecting a static image;

s30: after the static image acquisition is completed, setting the ISP parameters as the universal scene tuning parameters and acquiring the dynamic image;

s40: and performing image enhancement on the dynamic image according to the image information of the static image.

For the convenience of understanding the technical solution of the present invention, the terms therein are first explained:

static scene: and shooting a scene without a moving object in the range.

Dynamic scene: a scene with a moving object within the shooting range.

Static image: an image acquired under the static scene tuning parameters, i.e. a set of ISP parameters specially optimized for static scenes. Under these ISP parameters, a moving object in the picture would cause serious smear and ghosting, so such an image must be acquired in a static scene.

Dynamic image: an image acquired under the universal scene tuning parameters, i.e. ISP parameters optimized for general scenes (mainly dynamic scenes) on the premise that the picture is free of noise and that moving objects show no ghosting or smear. Such an image can be acquired in either a static or a dynamic scene.

Specifically, when the shooting environment is a dim light environment, the image enhancement method of the present invention is started. First, motion detection is used to judge whether a moving object exists in the current shooting scene. If no moving object exists, the shooting scene is a static scene: the ISP parameters of the shooting device are set to the static scene tuning parameters and a static image is collected. After the acquisition of the static image is finished, the ISP parameters are set to the universal scene tuning parameters and dynamic images are acquired. Finally, the image information in the static image is used as prior knowledge to perform image enhancement on the dynamic images.

If a moving object exists in the shooting scene, the shooting scene is a dynamic scene; if the shooting device has not previously collected a static image, image enhancement is not performed (although dynamic images are still collected).

In the image enhancement method for a dark light environment, a static image is extracted in the dark light environment; the extracted static image provides accurate and stable prior knowledge and serves as the basis for enhancing the dynamic image. The dynamic image is then enhanced according to the image information in the static image, which avoids introducing random noise or dynamic color noise into the enhanced dynamic image and improves its signal-to-noise ratio.

In one possible embodiment, before step S10, the method further includes:

acquiring the exposure of a shooting environment;

when the exposure is larger than a preset exposure threshold, judging that the shooting environment is a non-dark light environment;

and when the exposure amount is not more than a preset exposure amount threshold value, judging that the shooting environment is a dark light environment.

The invention is directed at image enhancement in dark light environments, so the method is started only when the shooting environment is judged to be a dark light environment, which avoids false starts.

The preset exposure threshold value can be set according to actual conditions, and can be preset before the equipment leaves a factory or changed by a user.
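As an illustration of this pre-check, the sketch below (Python) treats the mean luminance of a frame as the exposure measure; both this measure and the threshold value are assumptions made here for illustration, since the method only requires an exposure value compared against a preset threshold (in practice the exposure value reported by the camera's auto-exposure could be used instead).

```python
# Minimal sketch of the dark-light pre-check; the mean-luminance "exposure"
# measure and the threshold value are illustrative assumptions only.
import cv2

def is_dark_environment(frame_bgr, exposure_threshold=60.0):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Exposure not greater than the preset threshold -> dark light environment.
    return float(gray.mean()) <= exposure_threshold
```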

In a possible embodiment, the determining the type of the shooting scene in the dark light environment through motion detection specifically includes:

s101: and acquiring a shot image, and scaling the shot image to a size suitable for calculation.

S102: dividing each frame of the zoomed shot image into an integer number of pixel sub-blocks, wherein the number of the pixel sub-blocks is m multiplied by n, m is the number of rows, and n is the number of columns.

S103: calculating the pixel accumulated sum of each pixel sub-block in the current frame, each sum being recorded as S0(X, Y), where X ∈ {1, ..., n} is the abscissa (column index) of the sub-block and Y ∈ {1, ..., m} is the ordinate (row index); then calculating the ratio of the accumulated sums between adjacent sub-blocks row by row, and then the ratio of the accumulated sums between adjacent sub-blocks column by column.

S104: calculating the pixel accumulated sum of each pixel sub-block in the next frame, each sum being recorded as S1(X, Y), where X ∈ {1, ..., n} is the abscissa (column index) of the sub-block and Y ∈ {1, ..., m} is the ordinate (row index); then calculating the ratio of the accumulated sums between adjacent sub-blocks row by row, and then the ratio of the accumulated sums between adjacent sub-blocks column by column.

S105: calculating the difference ratio of the accumulated sums of the adjacent pixel sub-blocks corresponding to the current frame and the next frame, and calculating the total number of the difference ratios larger than a set threshold;

the difference ratio of the accumulated sums of the adjacent pixel sub-blocks corresponding to the current frame and the next frame is calculated, which specifically comprises the following steps: firstly, calculating according to the rows:

ΔS = |S0(x, y)/S0(x+1, y) - S1(x, y)/S1(x+1, y)| / (S0(x, y)/S0(x+1, y)); if ΔS ≥ Ta, then Isum = Isum + 1;

then calculating according to the columns:

ΔS = |S0(x, y)/S0(x, y+1) - S1(x, y)/S1(x, y+1)| / (S0(x, y)/S0(x, y+1)); if ΔS ≥ Ta, then Isum = Isum + 1;

where ΔS is the difference ratio of the accumulated sums of adjacent pixel sub-blocks between the current frame and the next frame, Ta is the set threshold, and Isum is the total number of difference ratios greater than the set threshold.

S106: calculating the proportion of adjacent sub-block pairs, among all adjacent pairs in the current frame and the next frame, whose difference ratio is greater than the set threshold: P = Isum/(2 × m × n - m - n).

S107: judging whether the proportion P calculated in step S106 is greater than the alarm threshold; if so, the shooting scene is judged to be a dynamic scene, otherwise it is judged to be a static scene.

In steps S103 and S104, the pixel accumulated sum is the sum of the luminance values of the pixels within a sub-block. The row-wise ratios are calculated in order: the ratio of the first sub-block's sum to the second sub-block's sum in the first row, then the second to the third, and so on until the (n-1)th to the nth sub-block; the second through mth rows are processed in the same way. The column-wise ratios are calculated analogously: the ratio of the first sub-block's sum to the second sub-block's sum in the first column, then the second to the third, and so on until the (m-1)th to the mth sub-block; the second through nth columns are processed in the same way.
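A minimal Python sketch of steps S101 to S107 is given below. The reduced image size, the block grid (m, n), the difference threshold Ta and the alarm threshold are illustrative assumptions, not values taken from this description.

```python
# Block-sum motion detection sketch for S101-S107 (assumed parameter values).
import cv2
import numpy as np

def block_sums(gray, m=8, n=8, size=(320, 240)):
    """S101/S102: scale the frame and return the m x n matrix of per-block
    luminance sums (size must be divisible by the block grid)."""
    w, h = size
    gray = cv2.resize(gray, (w, h)).astype(np.float64)
    return gray.reshape(m, h // m, n, w // n).sum(axis=(1, 3))

def is_dynamic_scene(prev_gray, curr_gray, m=8, n=8, ta=0.15, alarm=0.02):
    """Return True when the scene is judged dynamic (a moving object is present)."""
    s0, s1 = block_sums(prev_gray, m, n), block_sums(curr_gray, m, n)
    eps = 1e-9
    # S103/S104: ratios of horizontally and vertically adjacent block sums.
    r0_row, r1_row = s0[:, :-1] / (s0[:, 1:] + eps), s1[:, :-1] / (s1[:, 1:] + eps)
    r0_col, r1_col = s0[:-1, :] / (s0[1:, :] + eps), s1[:-1, :] / (s1[1:, :] + eps)
    # S105: relative change of each adjacency ratio between the two frames.
    d_row = np.abs(r0_row - r1_row) / (r0_row + eps)
    d_col = np.abs(r0_col - r1_col) / (r0_col + eps)
    isum = int((d_row >= ta).sum() + (d_col >= ta).sum())
    # S106: proportion of changed adjacencies among all 2*m*n - m - n of them.
    p = isum / (2 * m * n - m - n)
    return p > alarm  # S107: compare with the alarm threshold
```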

In a possible embodiment, the image enhancement of the dynamic image according to the image information of the static image specifically includes:

s401: performing multi-scale Gaussian filtering on the static image;

s402: subtracting the static image after the multi-scale Gaussian filtering from the static image before filtering to obtain multi-scale texture details;

s403: and fusing the multi-scale texture details into the dynamic image in a preset combination mode.

In this embodiment, the texture details of the static image are extracted through multi-scale Gaussian differences and fused into the dynamic image in a preset combination mode, so that the texture of the dynamic image is enhanced. The Gaussian difference means that the image is Gaussian filtered and the filtered image is then subtracted from the original image: Gaussian filtering with a Gaussian kernel suppresses only the high-frequency information of the image, so subtracting the filtered image from the original preserves the spatial structure of the original band and yields an image containing only the high-frequency texture detail information. Gaussian differences at multiple scales yield texture details of different scales, and fusing the extracted texture details into the dynamic image in a preset combination mode enhances the texture details of the dynamic image.
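As an illustration, the Python sketch below extracts Gaussian-difference details from the static image at several scales and adds them to the dynamic image. The sigma values and per-scale weights stand in for the "preset combination mode" and are assumptions made here, not values from this description.

```python
# Multi-scale Gaussian-difference texture enhancement sketch for S401-S403.
import cv2
import numpy as np

def enhance_texture(static_img, dynamic_img,
                    sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.3, 0.2)):
    static = static_img.astype(np.float32)
    enhanced = dynamic_img.astype(np.float32)
    for sigma, w in zip(sigmas, weights):
        blurred = cv2.GaussianBlur(static, (0, 0), sigma)  # S401: one scale
        detail = static - blurred                          # S402: texture detail
        enhanced += w * detail                             # S403: weighted fusion
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```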

In a possible embodiment, the image enhancement of the dynamic image according to the image information of the static image specifically includes:

s401': extracting data of a Y channel in a YUV color space from the static image, counting a gray level histogram of the Y channel, and extracting a color saturation channel from the static image;

s402': matching a Y channel in the YUV color space of the dynamic image with a Y channel of the static image through histogram specification, and performing weighted fusion on a color saturation channel of the dynamic image and a color saturation channel of the static image;

s403': and replacing the Y channel of the dynamic image by the Y channel regulated by the histogram, and replacing the color saturation channel of the dynamic image by the color saturation channel subjected to weighted fusion.

This embodiment uses the image information extracted from the static image to enhance the color and brightness of the dynamic image by histogram mapping and fusion of different color channels, where mapping refers to the matching (specification) of image histograms. First, the data of the Y channel in the YUV color space are extracted from the static image and its gray-level histogram is counted, and the color saturation channel of the static image is extracted. Then, for each frame of the dynamic image, the Y channel of the dynamic image is matched to the Y channel of the static image through histogram specification, and the color saturation channel of the dynamic image is weight-fused with the color saturation channel of the static image. Finally, the corresponding original channels of the dynamic image are replaced by the histogram-specified Y channel and the weight-fused color saturation channel to obtain the enhanced dynamic image.
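The Python sketch below illustrates this channel mapping and fusion. The histogram-specification lookup table is built from the cumulative histograms of the two Y channels; the S channel of an HSV conversion is used as the color saturation channel and the fusion weight alpha is an assumption, since this description fixes neither choice.

```python
# Histogram specification of the Y channel plus weighted saturation fusion
# (sketch for S401'-S403'; saturation channel and alpha are assumptions).
import cv2
import numpy as np

def match_histogram(source, reference):
    """Remap `source` (uint8) so its gray-level histogram matches `reference`."""
    src_hist = np.bincount(source.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]

def enhance_luma_color(static_bgr, dynamic_bgr, alpha=0.6):
    static_yuv = cv2.cvtColor(static_bgr, cv2.COLOR_BGR2YUV)
    dynamic_yuv = cv2.cvtColor(dynamic_bgr, cv2.COLOR_BGR2YUV)
    # Histogram specification: match the dynamic Y channel to the static Y channel.
    dynamic_yuv[..., 0] = match_histogram(dynamic_yuv[..., 0], static_yuv[..., 0])
    result = cv2.cvtColor(dynamic_yuv, cv2.COLOR_YUV2BGR)
    # Weighted fusion of the saturation channels (HSV S used as the saturation channel).
    static_hsv = cv2.cvtColor(static_bgr, cv2.COLOR_BGR2HSV)
    result_hsv = cv2.cvtColor(result, cv2.COLOR_BGR2HSV)
    fused_s = alpha * static_hsv[..., 1] + (1.0 - alpha) * result_hsv[..., 1]
    result_hsv[..., 1] = np.clip(fused_s, 0, 255).astype(np.uint8)
    return cv2.cvtColor(result_hsv, cv2.COLOR_HSV2BGR)
```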

In a possible embodiment, the image enhancement of the dynamic image according to the image information of the static image specifically includes:

s401': extracting data of a Y channel in a YUV color space from the static image, counting a gray level histogram of the Y channel, and extracting a color saturation channel from the static image;

s402': matching a Y channel in the YUV color space of the dynamic image with a Y channel of the static image through histogram specification, and performing weighted fusion on a color saturation channel of the dynamic image and a color saturation channel of the static image;

s403': replacing the Y channel of the dynamic image by the Y channel regulated by the histogram, and replacing the color saturation channel of the dynamic image by the color saturation channel subjected to weighted fusion;

s403': performing multi-scale Gaussian filtering on the static image;

s405': subtracting the static image after the multi-scale Gaussian filtering from the static image before filtering to obtain multi-scale texture details;

s406': and fusing the multi-scale texture details into the dynamic image in a preset combination mode.

In this embodiment, the multi-scale Gaussian difference and the channel mapping fusion are integrated: the luminance and color of the dynamic image are first enhanced by the channel mapping fusion method, and the texture of the dynamic image is then enhanced by the multi-scale Gaussian difference fusion method.

In one possible embodiment, after step S40, the method further comprises:

s50: detecting a shooting angle;

s60: and when the change of the shooting angle is detected, clearing the acquired static image, and setting the ISP parameter as the static scene tuning parameter to acquire the static image again.

It should be noted that when the shooting angle changes (for example, because the camera is moved), the previously acquired static image may no longer be suitable for image enhancement at the current shooting angle. Therefore, the ISP parameters are set to the static scene tuning parameters again, the static image is re-acquired, and the newly acquired static image is used to enhance the current dynamic image.

In a possible embodiment, the detecting of the shooting angle specifically includes:

s501: acquiring feature points of a current shot image and a feature descriptor corresponding to each feature point;

s502: calculating the similarity between the feature descriptor of the current shot image and the feature descriptor of the shot image acquired before the first time interval;

s503: if the similarity is larger than a preset threshold value, judging that the shooting angle changes; and if the similarity is not greater than the preset threshold, judging that the shooting angle is not changed.

Specifically, during operation the shooting device continuously captures images or records video, and whether the shooting scene has changed is judged by image feature matching. For this judgment, the contrast feature points (for example, FAST feature points) of the image captured before the first time interval and the contrast feature descriptor corresponding to each contrast feature point (for example, a DAISY feature descriptor, which may be a 200-dimensional feature vector) are obtained first, and all contrast feature points together with their descriptors are taken as the scene features of the contrast image. The currently captured image is then processed in the same way to obtain at least one feature point and the feature descriptor corresponding to each feature point, and all feature points together with their descriptors are taken as the scene features of the current image. The similarity between the feature descriptors of the current image and the contrast feature descriptors is then calculated; when the calculated similarity is greater than the preset threshold, the shooting angle is judged to have changed, and when it is not greater than the preset threshold, the shooting angle is judged to be unchanged.
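A rough Python sketch of S501 to S503 follows. ORB keypoints and descriptors are used here as a stand-in for the FAST points and DAISY descriptors mentioned above (DAISY requires the opencv-contrib build), and the computed score is a mean descriptor distance between matched features, so a large value indicates a changed view as in S503; the threshold is an illustrative value.

```python
# Feature-matching check for a changed shooting angle (sketch for S501-S503).
# ORB is an assumed stand-in for FAST + DAISY; the threshold is illustrative.
import cv2
import numpy as np

def angle_changed(curr_gray, ref_gray, threshold=40.0):
    orb = cv2.ORB_create(nfeatures=500)
    kp_cur, des_cur = orb.detectAndCompute(curr_gray, None)   # S501
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    if des_cur is None or des_ref is None:
        return True  # too few features to compare; treat as changed and re-acquire
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_cur, des_ref)                  # S502
    if not matches:
        return True
    score = float(np.mean([m.distance for m in matches]))
    return score > threshold                                   # S503
```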

In one possible embodiment, after step S40, the method further comprises:

s70: and after a second time interval, clearing the acquired static images, and setting the ISP parameters as the static scene tuning parameters to acquire the static images again.

It should be noted that after the second time interval elapses, the previously acquired static image may no longer be suitable for the current image enhancement. Therefore, the ISP parameters are set to the static scene tuning parameters again, the static image is re-acquired, and the re-acquired static image is used to enhance the current dynamic image.

In summary, in the image enhancement method for a dark light environment provided by the invention, the static image is obtained with the static scene tuning parameters and the dynamic image is obtained with the universal scene tuning parameters, and the high-quality static image extracted from the static scene serves as prior information to enhance the dynamic image, which optimizes the image quality of the dynamic image. The texture details of the dynamic image are enhanced through multi-scale Gaussian differences, and the color and brightness of the dynamic image are enhanced through histogram specification and weighted channel fusion, so the image quality of the dynamic image is improved from different aspects. The static image is collected again after the shooting angle changes or a certain time interval elapses, which ensures the timeliness of the static image and thus reliable image enhancement.

It should be understood that all or part of the processes of the image enhancement method in a dim light environment can be implemented by a computer program, which can be stored in a computer-readable storage medium and executed by a processor to implement the steps of the method. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals.

Fig. 2 is a schematic structural diagram of a preferred embodiment of an image enhancement apparatus in a dark light environment according to the present invention, which is capable of implementing all the processes of the image enhancement method in the dark light environment and achieving corresponding technical effects.

As shown in fig. 2, the apparatus includes:

a shooting scene judging module 21, configured to judge a type of a shooting scene in a dark light environment through motion detection;

the static image acquisition module 22 is configured to set the ISP parameter as a static scene tuning parameter and acquire a static image when the shooting scene is a static scene;

the dynamic image enhancement module 23 is configured to set the ISP parameter as a general scene tuning parameter and acquire a dynamic image after completing the acquisition of the static image;

and the image enhancement module 24 is used for performing image enhancement on the dynamic image according to the image information of the static image.

In one possible embodiment, the apparatus further comprises:

the exposure acquisition module is used for acquiring the exposure of a shooting environment;

the first judging module is used for judging that the shooting environment is a non-dark light environment when the exposure is greater than a preset exposure threshold;

and the second judging module is used for judging that the shooting environment is a dark light environment when the exposure is not greater than the preset exposure threshold.

In one possible embodiment, the shooting scene determination module 21 includes:

and the image acquisition unit is used for acquiring the shot image and scaling the shot image to a size suitable for calculation.

And the image blocking unit is used for dividing each frame of the zoomed shot image into an integral number of pixel sub-blocks, wherein the number of the pixel sub-blocks is m multiplied by n, m is the number of rows, and n is the number of columns.

The first accumulation sum calculating unit is used for calculating the pixel accumulated sum of each pixel sub-block in the current frame, each sum being recorded as S0(X, Y), where X ∈ {1, ..., n} is the abscissa (column index) of the sub-block and Y ∈ {1, ..., m} is the ordinate (row index); the ratios of the accumulated sums between adjacent sub-blocks are then calculated row by row, and then column by column.

The second accumulation sum calculating unit is used for calculating the pixel accumulated sum of each pixel sub-block in the next frame, each sum being recorded as S1(X, Y), where X ∈ {1, ..., n} is the abscissa (column index) of the sub-block and Y ∈ {1, ..., m} is the ordinate (row index); the ratios of the accumulated sums between adjacent sub-blocks are then calculated row by row, and then column by column.

The difference ratio calculating unit is used for calculating the difference ratio of the accumulated sums of the adjacent pixel sub-blocks corresponding to the current frame and the next frame, and calculating the total number of the difference ratios larger than a set threshold;

the difference ratio of the accumulated sums of the adjacent pixel sub-blocks corresponding to the current frame and the next frame is calculated, which specifically comprises the following steps: firstly, calculating according to the rows:

ΔS = |S0(x, y)/S0(x+1, y) - S1(x, y)/S1(x+1, y)| / (S0(x, y)/S0(x+1, y)); if ΔS ≥ Ta, then Isum = Isum + 1;

then calculating according to the columns:

ΔS = |S0(x, y)/S0(x, y+1) - S1(x, y)/S1(x, y+1)| / (S0(x, y)/S0(x, y+1)); if ΔS ≥ Ta, then Isum = Isum + 1;

where ΔS is the difference ratio of the accumulated sums of adjacent pixel sub-blocks between the current frame and the next frame, Ta is the set threshold, and Isum is the total number of difference ratios greater than the set threshold.

The ratio calculation unit is used for calculating the proportion of adjacent sub-block pairs, among all adjacent pairs in the current frame and the next frame, whose difference ratio is greater than the set threshold: P = Isum/(2 × m × n - m - n).

The scene judging unit is used for judging whether the calculated proportion P is greater than the alarm threshold; if so, the shooting scene is judged to be a dynamic scene, otherwise it is judged to be a static scene.

The pixel accumulated sum is the sum of the luminance values of the pixels within a sub-block. The row-wise ratios between adjacent sub-blocks are calculated in order: the ratio of the first sub-block's sum to the second sub-block's sum in the first row, then the second to the third, and so on until the (n-1)th to the nth sub-block; the second through mth rows are processed in the same way. The column-wise ratios are calculated analogously: the ratio of the first sub-block's sum to the second sub-block's sum in the first column, then the second to the third, and so on until the (m-1)th to the mth sub-block; the second through nth columns are processed in the same way.

In one possible embodiment, the image enhancement module 24 comprises:

the first filtering unit is used for carrying out multi-scale Gaussian filtering on the static image;

the first subtraction unit is used for subtracting the static image after the multi-scale Gaussian filtering from the static image before the filtering to obtain multi-scale texture details;

and the first fusion unit is used for fusing the multi-scale texture details into the dynamic image in a preset combination mode.

In one possible embodiment, the image enhancement module 24 comprises:

the first extraction unit is used for extracting data of a Y channel in a YUV color space from the static image, counting a gray histogram of the Y channel and extracting a color saturation channel from the static image;

the first matching unit is used for matching a Y channel in a YUV color space of the dynamic image with a Y channel of the static image through histogram specification, and performing weighted fusion on a color saturation channel of the dynamic image and a color saturation channel of the static image;

and the first replacing unit is used for replacing the Y channel of the dynamic image with the Y channel specified by the histogram and replacing the color saturation channel of the dynamic image with the weighted and fused color saturation channel.

In one possible embodiment, the image enhancement module 24 comprises:

the second extraction unit is used for extracting data of a Y channel in a YUV color space from the static image, counting a gray histogram of the Y channel and extracting a color saturation channel from the static image;

the second matching unit is used for matching the Y channel in the YUV color space of the dynamic image with the Y channel of the static image through histogram specification, and performing weighted fusion on the color saturation channel of the dynamic image and the color saturation channel of the static image;

the second replacing unit is used for replacing the Y channel of the dynamic image with the Y channel regulated by the histogram and replacing the color saturation channel of the dynamic image with the weighted and fused color saturation channel;

the second filtering unit is used for carrying out multi-scale Gaussian filtering on the static image;

the second subtraction unit is used for subtracting the static image after the multi-scale Gaussian filtering from the static image before the filtering to obtain multi-scale texture details;

and the second fusion unit is used for fusing the multi-scale texture details into the dynamic image in a preset combination mode.

In one possible embodiment, the apparatus further comprises:

the angle detection module is used for detecting the shooting angle;

and the first reacquisition module is used for clearing the acquired static image and setting the ISP parameter as a static scene tuning parameter to reacquire the static image when the shooting angle is detected to be changed.

In one possible embodiment, the angle detection module includes:

The feature descriptor acquisition unit is used for acquiring the feature points of the currently shot image and the feature descriptor corresponding to each feature point;

the similarity calculation unit is used for calculating the similarity between the feature descriptor of the current shot image and the feature descriptor of the shot image acquired before the first time interval;

the angle judging unit is used for judging that the shooting angle changes if the similarity is larger than a preset threshold value; and if the similarity is not greater than the preset threshold, judging that the shooting angle is not changed.

In one possible embodiment, the apparatus further comprises:

and the second reacquisition module is used for clearing the acquired static image after a second time interval, and setting the ISP parameter as a static scene tuning parameter to reacquire the static image.

Fig. 3 is a schematic structural diagram of a preferred embodiment of a terminal device according to the present invention, where the device can implement all the processes of the image enhancement method in the dark environment and achieve corresponding technical effects.

As shown in fig. 3, the apparatus includes:

a memory 31 for storing a computer program;

a processor 32 for executing the computer program;

wherein the processor 32, when executing the computer program, implements the image enhancement method in a dim light environment according to any of the above embodiments.

Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 31 and executed by the processor 32 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device.

The Processor 32 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

The memory 31 may be used to store the computer programs and/or modules, and the processor 32 implements various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory 31 and calling data stored in the memory 31. The memory 31 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the device (such as audio data or a phone book), and the like. In addition, the memory 31 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.

It should be noted that the terminal device includes, but is not limited to, the processor and the memory. Those skilled in the art will understand that the structural diagram of fig. 3 is only an example of the terminal device and does not constitute a limitation on it; the terminal device may include more components than those shown, may combine some components, or may have different components.

The above description is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and it should be noted that, for those skilled in the art, several equivalent obvious modifications and/or equivalent substitutions can be made without departing from the technical principle of the present invention, and these obvious modifications and/or equivalent substitutions should also be regarded as the scope of the present invention.
