Image fusion method

Document No. 138103; published 2021-10-22.

Note: this technique, "Image fusion method", was designed and created by Fu Bin (傅斌) and Tian Renfu (田仁富) on 2020-04-20. Its main content is summarized as follows: An embodiment of the invention provides an image fusion method. The method comprises the following steps: acquiring a visible light image and an infrared light image registered with the visible light image; acquiring first brightness information in the visible light image and second brightness information in the infrared light image; acquiring a first low-frequency component and a first high-frequency component of the first brightness information, and a second low-frequency component and a second high-frequency component of the second brightness information; compensating the second low-frequency component by using residual data between the first low-frequency component and the second low-frequency component to obtain a third low-frequency component; performing weighted fusion on the first low-frequency component and the third low-frequency component by using a fusion weight matrix to obtain a fused low-frequency component; obtaining fused brightness information by using the fused low-frequency component, the first high-frequency component and the second high-frequency component; and obtaining a fused image according to the fused brightness information and the color information of the visible light image. The embodiment of the invention improves the signal-to-noise ratio and the detail display effect of the fused image while ensuring the authenticity of its colors.

1. An image fusion method, characterized in that the image fusion method comprises:

acquiring a visible light image and an infrared light image registered with the visible light image;

acquiring first brightness information in the visible light image and second brightness information in the infrared light image;

acquiring a first low-frequency component and a first high-frequency component of the first brightness information and a second low-frequency component and a second high-frequency component of the second brightness information based on preset filtering parameters, wherein the filtering parameters are determined according to the illumination levels of the imaging scenes of the visible light image and the infrared light image;

compensating the second low-frequency component by using residual data between the first low-frequency component and the second low-frequency component to obtain a third low-frequency component;

determining a fusion weight matrix by using the difference value of the first low-frequency component and the third low-frequency component;

performing weighted fusion on the first low-frequency component and the third low-frequency component by using the fusion weight matrix to obtain a fusion low-frequency component, wherein each fusion weight factor in the fusion weight matrix produces a fusion trend that makes the fusion low-frequency component closer to the first luminance information than to the second luminance information;

obtaining fusion brightness information by using the fusion low-frequency component, the first high-frequency component and the second high-frequency component;

and obtaining the fused image according to the fused brightness information and the color information of the visible light image.

2. The image fusion method according to claim 1, wherein the first low-frequency component and the second low-frequency component are both low-frequency components obtained at an original scale of the visible light image and the infrared light image, and the first high-frequency component and the second high-frequency component each include high-frequency components obtained at a plurality of different scales larger than the original scale.

3. The image fusion method according to claim 2, wherein the first low-frequency component and the first high-frequency component are obtained from the visible light image by using a pyramid multi-scale decomposition method, and the second low-frequency component and the second high-frequency component are obtained from the infrared light image by using the pyramid multi-scale decomposition method.

4. The image fusion method according to claim 1, wherein compensating the second low-frequency component by using residual data between the first low-frequency component and the second low-frequency component to obtain a third low-frequency component comprises:

determining the residual data between the first low frequency component and the second low frequency component;

filtering the residual data by utilizing a preset spatial filtering operator, a first value domain filtering operator associated with the first low-frequency component, a second value domain filtering operator associated with the second low-frequency component and a third value domain filtering operator associated with the residual data;

and fusing the residual data after filtering with the second low-frequency component to obtain the third low-frequency component.

5. The image fusion method of claim 4, wherein determining the residual data between the first low-frequency component and the second low-frequency component comprises:

RDbase(i,j)=(VSbase(i,j)-IRbase(i,j)+M)/k

wherein (i, j) is a coordinate of the pixel point, RDbase(i, j) is the residual data of the pixel point (i, j), VSbase(i, j) is the first low-frequency component of the pixel point (i, j), IRbase(i, j) is the second low-frequency component of the pixel point (i, j), and k and M are preset coefficients.

6. The image fusion method according to claim 4, wherein the filtering processing of the residual data by using a preset spatial filter operator, a first value-domain filter operator associated with the first low-frequency component, a second value-domain filter operator associated with the second low-frequency component, and a third value-domain filter operator associated with the residual data includes:

H(m,n)=H_p(m,n)*H_RD(m,n)*H_VS(m,n)*H_IR(m,n), wherein:

RDbase_flt(i,j) = [ Σ_{(m,n)∈Ω} H(m,n)*RDbase(m,n) ] / [ Σ_{(m,n)∈Ω} H(m,n) ]

wherein RDbase_flt(i, j) is the filtered residual data of the pixel point (i, j); (m, n) is a pixel point in the neighborhood Ω of the pixel point (i, j); H(m, n) is the filter operator of the pixel point (m, n); H_p(m, n) is the spatial filter operator of the pixel point (m, n); H_RD(m, n) is the third value-domain filter operator of the pixel point (m, n); H_VS(m, n) is the first value-domain filter operator of the pixel point (m, n); H_IR(m, n) is the second value-domain filter operator of the pixel point (m, n); RDbase(i, j) and RDbase(m, n) are the residual data of the pixel points (i, j) and (m, n); VSbase(i, j) and VSbase(m, n) are the first low-frequency components of the pixel points (i, j) and (m, n); IRbase(i, j) and IRbase(m, n) are the second low-frequency components of the pixel points (i, j) and (m, n); and w1, w2, w3, w4, σ1, σ2, σ3 and σ4 are preset parameters.

7. The image fusion method according to claim 4, wherein fusing the filtered residual data with the second low-frequency component to obtain the third low-frequency component comprises:

Nbase(i,j)=IRbase(i,j)+k*RDbase_flt(i,j)-M

wherein Nbase(i, j) is the third low-frequency component of the pixel point (i, j), IRbase(i, j) is the second low-frequency component of the pixel point (i, j), RDbase_flt(i, j) is the filtered residual data of the pixel point (i, j), and k and M are preset coefficients.

8. The image fusion method of claim 1, wherein determining a fusion weight matrix using a difference of the first low frequency component and the third low frequency component comprises:

wherein delta(i,j)=VSbase(i,j)-Nbase(i,j),

fs(VSbase(i,j))=CLIP(α*VSbase(i,j)^ratio,smin,smax),

w_mix(i, j) is the fusion weight factor of the pixel point (i, j), VSbase(i, j) is the first low-frequency component of the pixel point (i, j), Nbase(i, j) is the third low-frequency component of the pixel point (i, j), fs(VSbase(i, j)) is a coefficient mapping function, α and ratio are preset exponential coefficients, smin and smax are a preset minimum limit and a preset maximum limit respectively, and A and B are preset coefficients.

9. The image fusion method according to claim 8, wherein performing weighted fusion on the first low-frequency component and the third low-frequency component by using the fusion weight matrix to obtain a fused low-frequency component comprises:

Megbase(i,j)=(1-w_mix(i,j))*VSbase(i,j)+w_mix(i,j)*Nbase(i,j)

wherein Megbase (i, j) is the fused low frequency component of pixel point (i, j).

10. The image fusion method according to claim 2, wherein obtaining fusion luminance information using the fusion low-frequency component and the first and second high-frequency components comprises:

reconstructing the first high-frequency component and the second high-frequency component according to the original scale to respectively obtain a third high-frequency component related to the first high-frequency component and a fourth high-frequency component related to the second high-frequency component;

fusing the third high-frequency component and the fourth high-frequency component to obtain a fused high-frequency component;

and superimposing the fused low-frequency component and the fused high-frequency component to obtain the fused brightness information.

Technical Field

The invention relates to the technical field of image processing, in particular to an image fusion method.

Background

At present, there are many image fusion schemes in the field of image processing. Most of them use a single-camera light-splitting structure or a dual-camera structure to obtain information from different spectral bands, generally taking a visible light image and a non-visible light image as the main inputs, and then combine the advantages of the two images to perform image fusion and obtain a better image effect.

Although the fusion algorithms of these schemes differ, the final purpose is the same: improving the image effect under low illumination, which is reflected in multiple aspects such as signal-to-noise ratio, color, and detail outline. However, objects reflect and absorb different spectra differently, so the brightness and texture of the same object in the visible light image and the infrared image may differ greatly, especially in low-light environments. In the monitoring field, under reasonable supplementary lighting, the non-visible light image often has a better signal-to-noise ratio and scene texture, but it carries no real color information, so selecting too much infrared information easily causes color distortion, unnatural texture and similar artifacts. Therefore, how to preserve the reality of color and the naturalness of the whole picture while improving the signal-to-noise ratio and details is one of the main difficulties of image fusion algorithms.

Disclosure of Invention

The embodiment of the invention provides an image fusion method that preserves the color authenticity of the fused image while improving its signal-to-noise ratio and detail display effect.

The technical scheme of the embodiment of the invention is realized as follows:

a method of image fusion, the method comprising:

acquiring a visible light image and an infrared light image registered with the visible light image;

acquiring first brightness information in the visible light image and second brightness information in the infrared light image;

acquiring a first low-frequency component and a first high-frequency component of the first brightness information and a second low-frequency component and a second high-frequency component of the second brightness information based on preset filtering parameters, wherein the filtering parameters are determined according to the illumination levels of the imaging scenes of the visible light image and the infrared light image;

compensating the second low-frequency component by using residual data between the first low-frequency component and the second low-frequency component to obtain a third low-frequency component;

determining a fusion weight matrix by using the difference value of the first low-frequency component and the third low-frequency component;

performing weighted fusion on the first low-frequency component and the third low-frequency component by using the fusion weight matrix to obtain a fusion low-frequency component, wherein each fusion weight factor in the fusion weight matrix produces a fusion trend that makes the fusion low-frequency component closer to the first luminance information than to the second luminance information;

obtaining fusion brightness information by using the fusion low-frequency component, the first high-frequency component and the second high-frequency component;

and obtaining the fused image according to the fused brightness information and the color information of the visible light image.

The first low-frequency component and the second low-frequency component are both low-frequency components obtained at an original scale of the visible light image and the infrared light image, and the first high-frequency component and the second high-frequency component each include high-frequency components obtained at a plurality of different scales larger than the original scale.

The first low-frequency component and the first high-frequency component are obtained from the visible light image by using a pyramid multi-scale decomposition method, and the second low-frequency component and the second high-frequency component are obtained from the infrared light image by using the pyramid multi-scale decomposition method.

Compensating the second low-frequency component by using residual data between the first low-frequency component and the second low-frequency component to obtain a third low-frequency component, including:

determining the residual data between the first low frequency component and the second low frequency component;

filtering the residual data by utilizing a preset spatial filtering operator, a first value domain filtering operator associated with the first low-frequency component, a second value domain filtering operator associated with the second low-frequency component and a third value domain filtering operator associated with the residual data;

and fusing the residual data after filtering with the second low-frequency component to obtain the third low-frequency component.

Determining the residual data between the first low frequency component and the second low frequency component, comprising:

RDbase(i,j)=(VSbase(i,j)-IRbase(i,j)+M)/k

wherein (i, j) is a coordinate of a pixel point in the image, RDbase(i, j) is the residual data of the pixel point (i, j), VSbase(i, j) is the first low-frequency component of the pixel point (i, j), IRbase(i, j) is the second low-frequency component of the pixel point (i, j), and k and M are preset coefficients.

Performing filtering processing on the residual data by using a preset spatial filtering operator, a first value domain filtering operator associated with the first low-frequency component, a second value domain filtering operator associated with the second low-frequency component, and a third value domain filtering operator associated with the residual data, including:

H(m,n)=H_p(m,n)*H_RD(m,n)*H_VS(m,n)*H_IR(m,n), wherein:

RDbase_flt(i,j) = [ Σ_{(m,n)∈Ω} H(m,n)*RDbase(m,n) ] / [ Σ_{(m,n)∈Ω} H(m,n) ]

wherein RDbase_flt(i, j) is the filtered residual data of the pixel point (i, j); (m, n) is a pixel point in the neighborhood Ω of the pixel point (i, j); H(m, n) is the filter operator of the pixel point (m, n); H_p(m, n) is the spatial filter operator of the pixel point (m, n); H_RD(m, n) is the third value-domain filter operator of the pixel point (m, n); H_VS(m, n) is the first value-domain filter operator of the pixel point (m, n); H_IR(m, n) is the second value-domain filter operator of the pixel point (m, n); RDbase(i, j) and RDbase(m, n) are the residual data of the pixel points (i, j) and (m, n); VSbase(i, j) and VSbase(m, n) are the first low-frequency components of the pixel points (i, j) and (m, n); IRbase(i, j) and IRbase(m, n) are the second low-frequency components of the pixel points (i, j) and (m, n); and w1, w2, w3, w4, σ1, σ2, σ3 and σ4 are preset parameters.

Fusing the filtered residual data with the second low-frequency component to obtain the third low-frequency component, including:

Nbase(i,j)=IRbase(i,j)+k*RDbase_flt(i,j)-M

wherein Nbase(i, j) is the third low-frequency component of the pixel point (i, j), IRbase(i, j) is the second low-frequency component of the pixel point (i, j), RDbase_flt(i, j) is the filtered residual data of the pixel point (i, j), and k and M are preset coefficients.

Determining a fusion weight matrix using a difference of the first low frequency component and the third low frequency component, comprising:

wherein delta(i,j)=VSbase(i,j)-Nbase(i,j),

fs(VSbase(i,j))=CLIP(α*VSbase(i,j)^ratio,smin,smax),

w_mix(i, j) is the fusion weight factor of the pixel point (i, j), VSbase(i, j) is the first low-frequency component of the pixel point (i, j), Nbase(i, j) is the third low-frequency component of the pixel point (i, j), fs(VSbase(i, j)) is a coefficient mapping function, α and ratio are preset exponential coefficients, smin and smax are a preset minimum limit and a preset maximum limit respectively, and A and B are preset coefficients.

Performing weighted fusion on the first low-frequency component and the third low-frequency component by using the fusion weight matrix to obtain a fusion low-frequency component, including:

Megbase(i,j)=(1-w_mix(i,j))*VSbase(i,j)+w_mix(i,j)*Nbase(i,j)

wherein Megbase (i, j) is the fused low frequency component of pixel point (i, j).

Obtaining fused luminance information using the fused low-frequency component, and the first high-frequency component and the second high-frequency component, including:

reconstructing the first high-frequency component and the second high-frequency component according to the original scale to respectively obtain a third high-frequency component related to the first high-frequency component and a fourth high-frequency component related to the second high-frequency component;

fusing the third high-frequency component and the fourth high-frequency component to obtain a fused high-frequency component;

And superimposing the fused low-frequency component and the fused high-frequency component to obtain the fused brightness information.

In the embodiment of the invention, the first and second low-frequency components and the first and second high-frequency components of the brightness information of the visible light image and the infrared light image are obtained, and the second low-frequency component is compensated by using residual data between the first low-frequency component and the second low-frequency component to obtain a third low-frequency component; the first low-frequency component and the third low-frequency component are then weighted and fused to obtain a fused low-frequency component whose base brightness is close to the brightness information of the visible light image. The fused low-frequency component is then fused with the first and second high-frequency components to obtain fused brightness information that is very close to the brightness information of the visible light image, and this fused brightness information is fused with the color information of the visible light image, so that the fused image has color authenticity while the signal-to-noise ratio and the detail display effect are improved.

Drawings

Fig. 1 is a flowchart of an image fusion method according to an embodiment of the present invention;

Fig. 2 is a flowchart of an image fusion method according to another embodiment of the present invention;

fig. 3 is a flowchart of an image fusion method according to another embodiment of the present invention.

Detailed Description

The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.

Fig. 1 is a flowchart of an image fusion method according to an embodiment of the present invention, which includes the following specific steps:

step 101: a visible light image is acquired, and an infrared light image registered with the visible light image.

Step 102: first brightness information in the visible light image and second brightness information in the infrared light image are acquired. Step 103: and acquiring a first low-frequency component and a first high-frequency component of the first brightness information and a second low-frequency component and a second high-frequency component of the second brightness information based on preset filtering parameters, wherein the filtering parameters are determined according to the illumination levels of the imaging scenes of the visible light image and the infrared light image.

Step 104: and compensating the second low-frequency component by using residual data between the first low-frequency component and the second low-frequency component to obtain a third low-frequency component.

Step 105: and determining a fusion weight matrix by using the difference value of the first low-frequency component and the third low-frequency component.

Step 106: and performing weighted fusion on the first low-frequency component and the third low-frequency component by using a fusion weight matrix to obtain a fusion low-frequency component, wherein each fusion weight factor in the fusion weight matrix generates a fusion trend which enables the fusion low-frequency component to be closer to the first luminance information than the second luminance information.

Step 107: and obtaining fused brightness information by utilizing the fused low-frequency component and the first high-frequency component and the second high-frequency component.

Step 108: and obtaining a fused image according to the fused brightness information and the color information of the visible light image.

In the above embodiment, the first and second low-frequency components and the first and second high-frequency components of the brightness information of the visible light image and the infrared light image are obtained, the second low-frequency component is compensated by using residual data between the first and second low-frequency components to obtain a third low-frequency component, and the first and third low-frequency components are then weighted and fused to obtain a fused low-frequency component whose base brightness is close to the brightness information of the visible light image. The fused low-frequency component is then fused with the first and second high-frequency components to obtain fused brightness information that is very close to the brightness information of the visible light image, and this fused brightness information is fused with the color information of the visible light image, so that the fused image has color authenticity while the signal-to-noise ratio and the detail display effect are improved.

Fig. 2 is a flowchart of an image fusion method according to another embodiment of the present invention, which includes the following specific steps:

step 201: a visible light image is acquired, and an infrared light image registered with the visible light image.

Step 202: first brightness information in the visible light image and second brightness information in the infrared light image are acquired.

Step 203: and acquiring a first low-frequency component and a first high-frequency component of the first brightness information and a second low-frequency component and a second high-frequency component of the second brightness information based on preset filtering parameters, wherein the filtering parameters are determined according to the illumination levels of the imaging scenes of the visible light image and the infrared light image. The first low-frequency component and the second low-frequency component are both low-frequency components obtained at the original scale of the visible light image and the infrared light image, and the first high-frequency component and the second high-frequency component both include high-frequency components obtained at a plurality of different scales larger than the original scale.

In an alternative embodiment, the first low-frequency component and the first high-frequency component are obtained from the visible light image using a pyramidal multi-scale decomposition method, and the second low-frequency component and the second high-frequency component are obtained from the infrared light image using the same pyramidal multi-scale decomposition method.

Step 204: and compensating the second low-frequency component by using residual data between the first low-frequency component and the second low-frequency component to obtain a third low-frequency component.

In an optional embodiment, the step specifically includes: determining residual data between the first low frequency component and the second low frequency component; filtering residual data by utilizing a preset spatial filtering operator, a first value domain filtering operator associated with the first low-frequency component, a second value domain filtering operator associated with the second low-frequency component and a third value domain filtering operator associated with the residual data; and fusing the residual data after filtering with the second low-frequency component to obtain a third low-frequency component.

Step 205: and determining a fusion weight matrix by using the difference value of the first low-frequency component and the third low-frequency component.

In an optional embodiment, the step specifically includes:

wherein delta(i,j)=VSbase(i,j)-Nbase(i,j),

fs(VSbase(i,j))=CLIP(α*VSbase(i,j)^ratio,smin,smax),

w_mix(i, j) is the fusion weight factor of the pixel point (i, j), VSbase(i, j) is the first low-frequency component of the pixel point (i, j), Nbase(i, j) is the third low-frequency component of the pixel point (i, j), fs(VSbase(i, j)) is a coefficient mapping function, α and ratio are preset exponential coefficients, smin and smax are a preset minimum limit and a preset maximum limit respectively, and A and B are preset coefficients.

Step 206: and performing weighted fusion on the first low-frequency component and the third low-frequency component by using a fusion weight matrix to obtain a fusion low-frequency component, wherein each fusion weight factor in the fusion weight matrix generates a fusion trend which enables the fusion low-frequency component to be closer to the first luminance information than the second luminance information.

In an optional embodiment, the step specifically includes:

Megbase(i,j)=(1-w_mix(i,j))*VSbase(i,j)+w_mix(i,j)*Nbase(i,j)

wherein, Megbase (i, j) is the fused low-frequency component of the pixel point (i, j).

Step 207: reconstructing the first high-frequency component and the second high-frequency component according to the original scale to respectively obtain a third high-frequency component related to the first high-frequency component and a fourth high-frequency component related to the second high-frequency component; fusing the third high-frequency component and the fourth high-frequency component to obtain a fused high-frequency component; and overlapping the fused low-frequency component and the fused high-frequency component to obtain fused brightness information.

Step 208: and obtaining a fused image according to the fused brightness information and the color information of the visible light image.

In the above embodiment, the first and second high-frequency components of the visible light image and the infrared light image at a plurality of different scales larger than the original scale, and the first and second low-frequency components at the original scale, are obtained. The first and second high-frequency components are reconstructed at the original scale to obtain a third high-frequency component associated with the first high-frequency component and a fourth high-frequency component associated with the second high-frequency component, respectively; the third and fourth high-frequency components are fused to obtain a fused high-frequency component, and the fused low-frequency component and the fused high-frequency component are then superimposed to obtain the fused luminance information. The fused luminance information is thus closer to the luminance information of the visible light image, which further improves the display effect of the fused image.

Fig. 3 is a flowchart of an image fusion method according to another embodiment of the present invention, which includes the following specific steps:

step 301: simultaneously, a visible light sensor and an infrared light sensor are adopted to collect images of the same area, and a visible light image and an infrared light image are respectively obtained; and matching the pixel points corresponding to the two images by adopting a registration algorithm to obtain a visible light image and an infrared light image after registration.

Step 302: first luminance information and color information are separated from the visible light image, and second luminance information is separated from the infrared light image.

If the image is in YUV format, the Y component is the brightness information and the U and V components are the color information.

If the image is in RGB format, it is first converted into the YUV format.
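
As a concrete illustration of steps 301-302, the following minimal sketch separates the brightness and color planes. It assumes Python with OpenCV (cv2) and a single-channel infrared image; the library choice and all names are illustrative, not from the patent.

    import cv2

    def separate_luma_chroma(visible_bgr, infrared_gray):
        # Convert the visible image to YUV as described above (BGR input assumed).
        yuv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV)
        vs_luma = yuv[:, :, 0]    # first brightness information (Y component)
        vs_color = yuv[:, :, 1:]  # color information (U and V components)
        ir_luma = infrared_gray   # second brightness information
        return vs_luma, vs_color, ir_luma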

Step 303: based on preset low-pass filtering parameters, a pyramid multi-scale decomposition method is adopted to respectively obtain a first low-frequency component of first brightness information of the visible light image under an original scale, a second low-frequency component of second brightness information of the infrared light image under the original scale, a first high-frequency component of the first brightness information of the visible light image under a plurality of different scales larger than the original scale and a second high-frequency component of the second brightness information of the infrared light image under a plurality of different scales larger than the original scale.

In this step, an image containing only original brightness information (i.e., a visible light image containing only first brightness information, or an infrared light image containing only second brightness information) is first used as the bottom-layer (layer 0) image G0. The image is filtered by using a preset low-pass filtering algorithm, and the filtered image is down-sampled to obtain the layer above (layer 1) image G1; the filtering and down-sampling operations are iterated multiple times to obtain a pyramid-shaped multi-layer (i.e., multi-scale) image. From bottom to top, each layer contains fewer pixels and represents an increasingly coarse scale.

The filtering algorithm may be Gaussian filtering with a 5 × 5 window, mean 0 and standard deviation 2; the down-sampling scale may be 1/2; and the number of pyramid layers may be 3.

For each layer image Gm other than the layer 0 image G0, Gm is up-sampled and then low-pass filtered to obtain the low-frequency image of the layer below (layer m-1), in which each pixel point corresponds to a low-frequency component. Subtracting the low-frequency image of a layer from that layer's original image gives the high-frequency image of the layer, in which each pixel point corresponds to a high-frequency component.
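
The decomposition of step 303 can be sketched as follows. This is a minimal illustration assuming NumPy/OpenCV and the parameter values suggested above (5 × 5 Gaussian window, standard deviation 2, 1/2 down-sampling, 3 layers); the function name and the exact layering are an editor's reading, not the patent's.

    import cv2

    def pyramid_decompose(luma, levels=3):
        # Build the pyramid G0..G_levels by repeated low-pass filtering
        # and 1/2 down-sampling, as described for step 303.
        g = [luma.astype('float32')]
        for _ in range(levels):
            blurred = cv2.GaussianBlur(g[-1], (5, 5), 2)
            g.append(blurred[::2, ::2])
        # For each layer m >= 1, up-sample Gm and low-pass filter it to get the
        # low-frequency image of layer m-1; subtracting it from G(m-1) gives
        # that layer's high-frequency image.
        highs, low = [], None
        for m in range(levels, 0, -1):
            h, w = g[m - 1].shape
            low = cv2.GaussianBlur(cv2.resize(g[m], (w, h)), (5, 5), 2)
            highs.append(g[m - 1] - low)
        # 'low' ends as the low-frequency component at the original scale;
        # 'highs' holds the high-frequency images from coarse to fine.
        return low, highs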

Step 304: residual data is calculated from the first low frequency component and the second low frequency component.

Optionally:

RDbase(i,j)=(VSbase(i,j)-IRbase(i,j)+M)/k

wherein (i, j) is a coordinate of the pixel point, RDbase(i, j) is the residual data of the pixel point (i, j), VSbase(i, j) is the first low-frequency component of the pixel point (i, j), IRbase(i, j) is the second low-frequency component of the pixel point (i, j), and k and M are preset coefficients with 1 ≤ k ≤ 4 (preferably k = 2) and M equal to the maximum value at the pixel bit width (for example, M = 255 for an 8-bit width). The role of k and M is to map the value range [-255, 255] of VSbase(i, j) - IRbase(i, j) into the value range [0, 255] of the low-frequency components.
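
A one-line sketch of this mapping, assuming NumPy and the residual formula shown above (which is reconstructed from the inverse relation in step 307, so treat it as an assumed reading):

    import numpy as np

    def residual(vs_base, ir_base, k=2.0, M=255.0):
        # Maps VSbase - IRbase from [-255, 255] into [0, 255] when k=2, M=255.
        return (vs_base.astype(np.float32) - ir_base + M) / k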

Step 305: and calculating the filter operator of the residual data according to the spatial filter operator of the residual data, the first value domain filter operator associated with the first low-frequency component, the second value domain filter operator associated with the second low-frequency component and the third value domain filter operator associated with the residual data.

The method specifically comprises the following steps:

H(m,n)=H_p(m,n)*H_RD(m,n)*H_VS(m,n)*H_IR(m,n), wherein:

(m, n) is a pixel point in the neighborhood Ω of the pixel point (i, j); assuming the neighborhood radius is r, the value range of m is i-r < m < i+r and the value range of n is j-r < n < j+r. H(m, n) is the filter operator of the residual data of the pixel point (m, n); H_p(m, n) is the spatial filter operator of the residual data of the pixel point (m, n); H_RD(m, n) is the third value-domain filter operator associated with the residual data of the pixel point (m, n); H_VS(m, n) is the first value-domain filter operator associated with the first low-frequency component of the pixel point (m, n); H_IR(m, n) is the second value-domain filter operator associated with the second low-frequency component of the pixel point (m, n); RDbase(i, j) and RDbase(m, n) are the residual data of the pixel points (i, j) and (m, n); VSbase(i, j) and VSbase(m, n) are the first low-frequency components of the pixel points (i, j) and (m, n); IRbase(i, j) and IRbase(m, n) are the second low-frequency components of the pixel points (i, j) and (m, n); and w1, w2, w3, w4, σ1, σ2, σ3, σ4 are preset parameters with 0 ≤ w1, w2, w3, w4 ≤ 5 and 1 ≤ σ1, σ2, σ3, σ4 ≤ 255, preferably w1 = w4 = 2 and w2 = w3 = 1.

Step 306: and performing weighted filtering on the residual data according to the filtering operator of the residual data.

The method specifically comprises the following steps:

RDbase_flt(i,j) = [ Σ_{(m,n)∈Ω} H(m,n)*RDbase(m,n) ] / [ Σ_{(m,n)∈Ω} H(m,n) ]

wherein RDbase_flt(i, j) is the filtered residual data of the pixel point (i, j).
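
The joint filtering of steps 305-306 can be sketched as below. The patent's per-operator kernel formulas are not reproduced above, so this sketch assumes the common Gaussian form for each factor, pairs w1/σ1 with H_p, w2/σ2 with H_RD, w3/σ3 with H_VS and w4/σ4 with H_IR, and applies each wk as an exponent; all of these pairings are assumptions, and NumPy is an assumed dependency. The loop is written for clarity, not speed.

    import numpy as np

    def filter_residual(rd, vs, ir, r=2, w=(2, 1, 1, 2), sigma=(2.0, 32.0, 32.0, 32.0)):
        rows, cols = rd.shape
        out = np.empty((rows, cols), dtype=np.float32)
        ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
        # Spatial operator H_p, shared by every pixel (assumed Gaussian form).
        hp = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma[0] ** 2)) ** w[0]
        pad = lambda a: np.pad(a.astype(np.float32), r, mode='edge')
        rd_p, vs_p, ir_p = pad(rd), pad(vs), pad(ir)
        for i in range(rows):
            for j in range(cols):
                win = lambda a: a[i:i + 2 * r + 1, j:j + 2 * r + 1]
                # Value-domain operators H_RD, H_VS, H_IR (assumed Gaussian form).
                hrd = np.exp(-(win(rd_p) - rd[i, j]) ** 2 / (2 * sigma[1] ** 2)) ** w[1]
                hvs = np.exp(-(win(vs_p) - vs[i, j]) ** 2 / (2 * sigma[2] ** 2)) ** w[2]
                hir = np.exp(-(win(ir_p) - ir[i, j]) ** 2 / (2 * sigma[3] ** 2)) ** w[3]
                h = hp * hrd * hvs * hir  # H = H_p * H_RD * H_VS * H_IR
                out[i, j] = np.sum(h * win(rd_p)) / np.sum(h)  # normalized weighted filtering
        return out

This behaves like a joint bilateral filter: the residual at (i, j) is smoothed only across neighbors that look similar in the residual, the visible low frequency and the infrared low frequency at once.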

Step 307: and fusing the filtered residual data with the second low-frequency component to obtain a third low-frequency component.

Optionally, the step specifically includes:

Nbase(i,j)=IRbase(i,j)+k*RDbase_flt(i,j)-M

wherein Nbase(i, j) is the third low-frequency component of the pixel point (i, j), IRbase(i, j) is the second low-frequency component of the pixel point (i, j), RDbase_flt(i, j) is the filtered residual data of the pixel point (i, j), and k and M are preset coefficients with 1 ≤ k ≤ 4 (preferably k = 2) and M equal to the maximum value at the pixel bit width (for example, M = 255 for an 8-bit width).
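
A sketch of the compensation, inverting the mapping assumed in step 304 (NumPy arrays assumed, as before):

    def compensate(ir_base, rd_flt, k=2.0, M=255.0):
        # Nbase = IRbase + k*RDbase_flt - M; with an unfiltered residual this
        # recovers VSbase exactly, so Nbase deviates from VSbase only where
        # the filtering in steps 305-306 changed the residual.
        return ir_base + k * rd_flt - M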

Step 308: and calculating a fusion weight matrix according to the difference value of the third low-frequency component and the first low-frequency component, and performing weighted calculation on the third low-frequency component and the first low-frequency component according to the fusion weight matrix to obtain a fusion low-frequency component.

Optionally, in this step, calculating a fusion weight matrix according to a difference between the third low-frequency component and the first low-frequency component includes:

wherein delta(i,j)=VSbase(i,j)-Nbase(i,j),

fs(VSbase(i,j))=CLIP(α*VSbase(i,j)^ratio,smin,smax),

w_mix(i, j) is the fusion weight factor of the pixel point (i, j), and the fusion weight factors of all the pixel points in a low-frequency image form the fusion weight matrix; VSbase(i, j) is the first low-frequency component of the pixel point (i, j); Nbase(i, j) is the third low-frequency component of the pixel point (i, j); fs(VSbase(i, j)) is a coefficient mapping function; α and ratio are preset exponential coefficients, typically 1 ≤ α ≤ 20 (preferably α = 5) and 0 ≤ ratio ≤ 255; smin and smax are a preset minimum limit and a preset maximum limit respectively, with 0 ≤ smin, smax ≤ 512 and smax > smin; and A and B are preset coefficients with 0 ≤ A + B ≤ 255, preferably A = 255.

Performing weighted calculation on the third low-frequency component and the first low-frequency component according to the fusion weight matrix to obtain the fused low-frequency component comprises:

Megbase(i,j)=(1-w_mix(i,j))*VSbase(i,j)+w_mix(i,j)*Nbase(i,j)

wherein, Megbase (i, j) is the fused low-frequency component of the pixel point (i, j).
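
A sketch of step 308 follows. The patent's closed-form expression for w_mix(i, j) is not reproduced above, so the mapping below is a substitute that only preserves the stated trends: the larger the brightness difference |delta|, the smaller w_mix (more visible-light low frequency is kept), and the weight responds more strongly to |delta| as the base brightness grows. The fs form loosely follows CLIP(α*VSbase^ratio, smin, smax) with illustrative, not patented, parameter values; NumPy and float inputs are assumed.

    import numpy as np

    def fuse_low(vs_base, n_base, alpha=5.0, ratio=1.0, A=255.0, smin=0.5, smax=4.0):
        delta = vs_base - n_base  # delta(i,j) = VSbase(i,j) - Nbase(i,j)
        # Coefficient mapping: brighter base -> larger fs -> steeper weight change.
        fs = np.clip(alpha * np.power(vs_base / 255.0, ratio), smin, smax)
        # Surrogate weight: w_mix shrinks as |delta| grows, favoring VSbase.
        w_mix = np.clip(1.0 - np.abs(delta) * fs / A, 0.0, 1.0)
        # Megbase = (1 - w_mix) * VSbase + w_mix * Nbase
        return (1.0 - w_mix) * vs_base + w_mix * n_base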

Step 309: reconstructing a multi-scale first high-frequency component and a multi-scale second high-frequency component by using an original scale to respectively obtain a third high-frequency component related to the first high-frequency component and a fourth high-frequency component related to the second high-frequency component; and fusing the third high-frequency component and the fourth high-frequency component to obtain a fused high-frequency component.

For each layer, the high-frequency image of the first brightness information of the visible light image (i.e., the first high-frequency component) is fused with the high-frequency image of the second brightness information of the infrared light image (i.e., the second high-frequency component), until all layers are fused. Then, starting from the uppermost fused layer, the fused image of that layer is up-sampled and filtered and then superimposed onto the fused image of the layer below; this is repeated until the fused image of the first layer has been superimposed, after which a final up-sampling and filtering yields the fused high-frequency image at the original scale, i.e., the fused high-frequency component.

In this step, the specific method used to fuse the high-frequency image of the first brightness information of the visible light image (the first high-frequency component) with the high-frequency image of the second brightness information of the infrared light image (the second high-frequency component) is not limited; for example, weighted fusion may be adopted, or the more suitable high-frequency information may be selected according to gradient, magnitude and the like.
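
A sketch of step 309, reusing the illustrative pyramid_decompose() from step 303 (both helper names are assumptions). Per-layer high frequencies are fused here by keeping the coefficient with the larger magnitude, one of the selection rules the description allows; collapsing the stack then yields the fused high-frequency component at the original scale.

    import cv2
    import numpy as np

    def fuse_high(highs_vs, highs_ir):
        # Fuse each layer by picking the stronger detail (magnitude selection).
        fused = [np.where(np.abs(hv) >= np.abs(hi), hv, hi)
                 for hv, hi in zip(highs_vs, highs_ir)]
        acc = fused[0]                    # uppermost (coarsest) fused layer
        for nxt in fused[1:]:             # work down toward the original scale
            acc = cv2.resize(acc, (nxt.shape[1], nxt.shape[0]))  # up-sample
            acc = cv2.GaussianBlur(acc, (5, 5), 2) + nxt         # filter, superimpose
        return acc                        # fused high-frequency component

Magnitude selection keeps whichever image carries the stronger texture at each pixel and scale, which matches the intent of selecting "suitable" high-frequency information; weighted fusion would work equally well here.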

Step 310: and overlapping the fused low-frequency component and the fused high-frequency component to obtain fused brightness information.

Step 311: and obtaining a fused image according to the fused brightness information and the color information of the visible light image.

The beneficial technical effects of the above embodiment are as follows:

firstly, the first and second low-frequency components and the first and second high-frequency components of the brightness information of the visible light image and the infrared light image are obtained, the second low-frequency component is compensated by using residual data between the first and second low-frequency components to obtain a third low-frequency component, and the first and third low-frequency components are weighted and fused to obtain a fused low-frequency component whose base brightness is close to the brightness information of the visible light image; the fused low-frequency component is then fused with the first and second high-frequency components to obtain fused brightness information that is very close to the brightness information of the visible light image, and this fused brightness information is fused with the color information of the visible light image, so that the fused image has color authenticity while the signal-to-noise ratio and the detail display effect are improved;

secondly, the first and second high-frequency components of the visible light image and the infrared light image are acquired at a plurality of different scales larger than the original scale, the first and second high-frequency components are reconstructed at the original scale and then fused to obtain a fused high-frequency component, and the fused low-frequency component and the fused high-frequency component are superimposed to obtain the fused brightness information, so that the fused brightness information is closer to the brightness information of the visible light image and the display effect of the fused image is further improved;

thirdly, the filter operator of the residual data is computed jointly from a spatial filter operator and the three value-domain filter operators associated with the first low-frequency component, the second low-frequency component and the residual data, and the residual data is filtered with it, so that a large amount of noise is removed from the residual data while the respective advantageous content of the visible and non-visible low-frequency information is retained, ensuring the display effect of the final fused image;

and fourthly, the fusion weight matrix takes both the brightness difference and the base brightness value into account: the larger the brightness difference, the more visible-light low-frequency information is selected, and the weight varies more strongly as the base brightness increases, so that the fused brightness information is drawn further toward the brightness information of the visible light image and the display effect of the fused image is further improved.

The embodiment of the invention also provides electronic equipment which comprises a processor, wherein the processor is used for executing the method in the steps 101 to 108, the steps 201 to 208 or the steps 301 to 311.

The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
