Depth-of-field control type super-resolution microscopic digital imaging method and system

Document No.: 134235    Publication date: 2021-10-22

Abstract: This technique, a depth-of-field control type super-resolution microscopic digital imaging method and system, was designed by 马朔昕 on 2021-07-08. On the basis of a first image acquired under standard white light, a second image of the specimen under mid-ultraviolet or far-ultraviolet illumination is acquired through the microscope imaging sensor, a Gaussian-kernel-filtered image of the second image is established, the Y value of each pixel in the second image is normalized, and a target image is obtained by combining the normalized Y values with the original parameters of the first image. The system comprises a microscope imaging sensor and a standard white light illumination assembly, and further comprises an ultraviolet light source, a control unit, and a processing unit. By folding the Y-channel resolving power of the ultraviolet image into the standard-white-light image, the method or system improves optical resolving power while preserving the color fidelity of the true-color image, without changing the numerical aperture of the objective lens or the basic structure of the optical microscope.

1. A depth-of-field control type super-resolution microscopic digital imaging method which, on the basis of acquiring a first image under standard white light through a microscope imaging sensor, further comprises the following steps:

acquiring, through the microscope imaging sensor, a second image of the specimen under the irradiation of mid-ultraviolet or far-ultraviolet light, wherein the mid-ultraviolet or far-ultraviolet light source and the microscope imaging sensor are positioned on the same side of the specimen;

if the acquired first image is a non-YUV image, converting the first image into a YUV color gamut image, recording the Y value of any pixel in the first image as Y_(1,x,y), the U value as U_(1,x,y), and the V value as V_(1,x,y);

If the acquired second image is a non-YUV image, converting the second image into a YUV color gamut image, recording the Y value of any pixel in the second image as Y_(2,x,y);

Establishing a Gaussian-kernel-filtered image of the second image, namely performing a convolution of a two-dimensional Gaussian kernel matrix with the second image, the Gaussian kernel matrix being computed as follows: given the sensor pixel size p nanometers, the red-light resolving power of the objective lens q nanometers, and a preset positive coefficient u, the Gaussian kernel radius r is (u·q/p) pixels, rounded up; the Y value of any pixel in the Gaussian-kernel-filtered image is recorded as Y_(Gaussian,x,y);

Normalizing the Y value of each pixel in the second image to obtain a normalized Y value Y_(norm,x,y);

Obtaining a target image from said Y_(norm,x,y), U_(1,x,y), and V_(1,x,y).

2. The depth-of-field controlled super-resolution microscopy digital imaging method according to claim 1, characterized in that: after the second image is acquired, image phase correction is also performed,

during the image phase correction, the first image and the second image are aligned and matched by using a SURF characteristic point matching method or a maximum mutual information method,

after alignment and matching, only the pixel information contained in both the first image and the second image is retained, and the x-axis and y-axis position information of all retained pixels is re-indexed.

3. The depth-of-field controlled super-resolution microscopy digital imaging method according to claim 1, characterized in that: in the normalization of the second image, Y_(norm,x,y) = Y_(2,x,y) / Y_(Gaussian,x,y) × Y_(1,x,y).

4. The depth-of-field controlled super-resolution microscopy digital imaging method according to claim 2, characterized in that: when the SURF feature points are matched, a first group of feature points is extracted from the first image, a second group of the same feature points is extracted from the second image by the same method, and relative displacement information between the first image and the second image is obtained by matching and comparing the two groups of feature points,

and the image pixel points in the first image and the second image, together with their x-axis and y-axis position information, are re-indexed according to the relative displacement information.

5. The depth-of-field controlled super-resolution microscopy digital imaging method according to claim 4, characterized in that:

the feature point extraction adopts an adaptive threshold strategy; when feature points are extracted with the adaptive threshold strategy,

calculating the feature point saliency of each pixel in each image with a SURF or SIFT feature point processing algorithm, where the mathematical meaning of the saliency is the determinant of the Hessian matrix at that pixel;

presetting an upper limit K_max on the number of feature points, to bound the computational load, and a minimum saliency threshold T_min, to decide whether any usable feature exists;

Dividing each image into a plurality of mutually overlapping regions and checking, within each region, whether the saliency of every pixel is below T_min, i.e., whether the region is blank; computing the ratio p of the number of blank regions to the total number of regions, and the expected number of feature points K_exp = (1 − p)·K_max;

Arranging all pixel coordinates whose feature saliency is a local maximum, over the whole field of view of the image, in descending order of saliency; taking the top K_exp as the final feature point set of the field of view, and recording the time-frequency-domain characteristics of each feature point's neighboring pixels in its feature descriptor vector.

6. The depth-of-field controlled super-resolution microscopy digital imaging method according to claim 1, characterized in that: when the second image is converted into a YUV color gamut image, the U value and the V value of any pixel in the second image are recorded as 0.

7. The depth-of-field controlled super-resolution microscopy digital imaging method according to claim 1, characterized in that: the standard white light refers to the irradiation or transmission of a white light source; during irradiation, the white light source and the microscope imaging sensor are positioned on the same side of the specimen,

and during transmission, the white light source is located below the specimen and the microscope imaging sensor above it.

8. A depth-of-field controlled super-resolution microscopy digital imaging system comprising a microscope imaging sensor and a standard white light illumination assembly, characterized in that it further comprises an ultraviolet light source, a control unit, and a processing unit,

wherein the ultraviolet light source and the microscope imaging sensor are positioned on the same side of the specimen, and the ultraviolet light source is adapted to emit mid-ultraviolet or far-ultraviolet light onto the surface of the specimen;

the control unit is connected to the controlled ends of the standard white light illumination assembly, the ultraviolet light source, and the microscope imaging sensor, and its control logic comprises: controlling the microscope imaging sensor to collect a first image of the specimen under the irradiation or transmission of standard white light; and controlling the microscope imaging sensor to collect a second image of the specimen under the irradiation of the mid-ultraviolet or far-ultraviolet light;

the processing unit comprises a color gamut conversion module, a Gaussian kernel filtering module, a normalization processing module and a target image generation module,

the color gamut conversion module is adapted to convert the first image into a YUV color gamut image when the first image is a non-YUV image, recording the Y value of any pixel in the first image as Y_(1,x,y), the U value as U_(1,x,y), and the V value as V_(1,x,y); and to convert the second image into a YUV color gamut image when the second image is a non-YUV image, recording the Y value of any pixel in the second image as Y_(2,x,y);

The Gaussian kernel filtering module is adapted to establish a Gaussian-kernel-filtered image of the second image, the Y value of any pixel in the Gaussian-kernel-filtered image being recorded as Y_(Gaussian,x,y);

The normalization processing module is adapted to normalize the Y value of each pixel in the second image to obtain a normalized Y value Y_(norm,x,y);

The target image generation module is adapted to generate the target image from said Y_(norm,x,y), U_(1,x,y), and V_(1,x,y).

9. The depth-of-field controlled super-resolution microscopy digital imaging system according to claim 8, wherein:

the processing unit further comprises an image phase correction module,

the image phase correction module is adapted to align and match the first image and the second image using SURF feature point matching or the maximum mutual information method; after alignment and matching, only the pixel information contained in both images is retained, and the x-axis and y-axis position information of all retained pixels is re-indexed.

10. The depth-of-field controlled super-resolution microscopy digital imaging system according to claim 9, wherein:

when the SURF feature points are matched, the image phase correction module extracts a first group of feature points from the first image, extracts a second group of the same feature points from the second image by the same method, and obtains relative displacement information between the first image and the second image by matching and comparing the two groups of feature points,

and the image pixel points in the first image and the second image, together with their x-axis and y-axis position information, are re-indexed according to the relative displacement information.

Technical Field

The invention relates to the field of microscope electronic image processing, in particular to a depth-of-field control type super-resolution microscopic digital imaging method and system.

Background

The imaging resolution (the smallest resolvable object size) of an optical microscope is limited primarily by the optical resolving power of the objective lens, which is determined by the numerical aperture and the light source wavelength: the higher the numerical aperture and the shorter the wavelength, the stronger the resolving power. However, the numerical aperture not only has a physical limit (no more than 1 in an air medium) but is also costly to increase, and the light source wavelength is constrained by the imaging color range: a typical true-color image requires wavelengths of 400-700 nm. It is therefore difficult to improve the optical resolving power of the objective lens.

At present, methods for improving optical microscope image resolution fall into two classes. The first is confocal microscopes, such as CN106104356B and CN212276089U, which are in essence structured light sources and imaging systems that suppress diffraction; their structure, method of use, and cost exceed the scope of a common optical microscope. The second is interpolation methods, such as CN112200152A, which are in essence predictions of super-resolution detail; they carry a high risk of manufacturing artifacts and noise, and cannot be used for medical diagnosis.

On the other hand, imaging by a transmission microscope (as opposed to a reflection type) is the superposition of the light-blocking effects of all material along the vertical direction of the sample slice. Very thin sections (e.g., pathological sections prepared by paraffin embedding, with high slice homogeneity and thickness below 10 microns) can be treated as having zero thickness, but thicker sections (e.g., those prepared by cryosectioning, up to 50 microns thick) cannot. When multiple sample layers are stacked at large thickness, the microscopic image is a blurred superposition of them, which may obscure details. The aforementioned confocal microscope can solve this problem, but its drawbacks have been described above.

Disclosure of Invention

The technical problem to be solved by the invention is as follows: to provide a method and system for improving the optical resolving power of an existing microscope without changing the numerical aperture of the objective lens or the basic structure of the optical microscope.

The invention provides a first technical scheme to solve the above technical problem: a depth-of-field control type super-resolution microscopic digital imaging method which, on the basis of acquiring a first image under standard white light through a microscope imaging sensor, further comprises the following steps:

acquiring, through the microscope imaging sensor, a second image of the specimen under the irradiation of mid-ultraviolet or far-ultraviolet light, wherein the mid-ultraviolet or far-ultraviolet light source and the microscope imaging sensor are positioned on the same side of the specimen;

if the acquired first image is a non-YUV image, converting the first image into a YUV (also called YCbCr) color gamut image, recording the Y value of any pixel in the first image as Y_(1,x,y), the U value as U_(1,x,y), and the V value as V_(1,x,y);

If the acquired second image is a non-YUV image, converting the second image into a YUV color gamut image, recording the Y value of any pixel in the second image as Y_(2,x,y);

Establishing a Gaussian-kernel-filtered image of the second image, namely performing a convolution of a two-dimensional Gaussian kernel matrix with the second image, the Gaussian kernel matrix being computed as follows: given the sensor pixel size p nanometers, the red-light resolving power of the objective lens q nanometers, and a preset positive coefficient u, the Gaussian kernel radius r is (u·q/p) pixels, rounded up; the Y value of any pixel in the Gaussian-kernel-filtered image is recorded as Y_(Gaussian,x,y);

Normalizing the Y value of each pixel in the second image to obtain a normalized Y value Y_(norm,x,y);

Obtaining a target image from said Y_(norm,x,y), U_(1,x,y), and V_(1,x,y).

Furthermore, after the second image is obtained, image phase correction is also carried out,

during the image phase correction, the first image and the second image are aligned and matched by using a SURF characteristic point matching method or a maximum mutual information method,

after alignment and matching, only the pixel information contained in both the first image and the second image is retained, and the x-axis and y-axis position information of all retained pixels is re-indexed.

Further, in the normalization of the second image, Y_(norm,x,y) = Y_(2,x,y) / Y_(Gaussian,x,y) × Y_(1,x,y).
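As a minimal sketch (not part of the patent text), the normalization above can be written with NumPy arrays; the names `y1`, `y2`, and `y_gauss`, and the small `eps` guard against division by zero, are illustrative assumptions:

```python
import numpy as np

def normalize_y(y2, y_gauss, y1, eps=1e-6):
    """Y_norm = Y_2 / Y_Gaussian * Y_1: the local contrast of the UV image
    relative to its Gaussian-blurred version rescales the white-light Y."""
    return y2 / (y_gauss + eps) * y1  # eps avoids division by zero

# toy example: one pixel brighter and one darker than the local blur
y1 = np.array([[100.0, 100.0], [100.0, 100.0]])       # white-light Y
y2 = np.array([[120.0, 80.0], [100.0, 100.0]])        # UV Y
y_gauss = np.array([[100.0, 100.0], [100.0, 100.0]])  # blurred UV Y
y_norm = normalize_y(y2, y_gauss, y1)
```

Where the UV image is locally brighter than its blurred version, the white-light luminance is boosted, and vice versa, which is how the UV detail is transferred onto the color image.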

Further, when the SURF feature points are matched, a first group of feature points is extracted from the first image, a second group of the same feature points is extracted from the second image by the same method, and relative displacement information between the first image and the second image is obtained by matching and comparing the two groups of feature points,

and the image pixel points in the first image and the second image, together with their x-axis and y-axis position information, are re-indexed according to the relative displacement information.

Furthermore, the feature point extraction adopts an adaptive threshold strategy; when feature points are extracted with the adaptive threshold strategy,

calculating the feature point saliency of each pixel in each image with a SURF or SIFT feature point processing algorithm, where the mathematical meaning of the saliency is the determinant of the Hessian matrix at that pixel;

presetting an upper limit K_max on the number of feature points, to bound the computational load, and a minimum saliency threshold T_min, to decide whether any usable feature exists;

Dividing each image into a plurality of mutually overlapping regions and checking, within each region, whether the saliency of every pixel is below T_min, i.e., whether the region is blank; computing the ratio p of the number of blank regions to the total number of regions, and the expected number of feature points K_exp = (1 − p)·K_max;

Arranging all pixel coordinates whose feature saliency is a local maximum, over the whole field of view of the image, in descending order of saliency; taking the top K_exp as the final feature point set of the field of view, and recording the time-frequency-domain characteristics of each feature point's neighboring pixels in its feature descriptor vector.

Further, when the second image is converted into a YUV color gamut image, the U value and the V value of any pixel in the second image are both recorded as 0.

Further, the standard white light refers to the irradiation or transmission of a white light source; during irradiation, the white light source and the microscope imaging sensor are positioned on the same side of the specimen,

and during transmission, the white light source is located below the specimen and the microscope imaging sensor above it.

The invention provides a second technical scheme to solve the above technical problem: a depth-of-field control type super-resolution microscopic digital imaging system comprising a microscope imaging sensor, a standard white light illumination assembly, an ultraviolet light source, a control unit, and a processing unit,

wherein the ultraviolet light source and the microscope imaging sensor are positioned on the same side of the specimen, and the ultraviolet light source is adapted to emit mid-ultraviolet or far-ultraviolet light onto the surface of the specimen;

the control unit is connected to the controlled ends of the standard white light illumination assembly, the ultraviolet light source, and the microscope imaging sensor, and its control logic comprises: controlling the microscope imaging sensor to collect a first image of the specimen under the irradiation or transmission of standard white light; and controlling the microscope imaging sensor to collect a second image of the specimen under the irradiation of the mid-ultraviolet or far-ultraviolet light;

the processing unit comprises a color gamut conversion module, a Gaussian kernel filtering module, a normalization processing module and a target image generation module,

the color gamut conversion module is adapted to convert the first image into a YUV color gamut image when the first image is a non-YUV image, recording the Y value of any pixel in the first image as Y_(1,x,y), the U value as U_(1,x,y), and the V value as V_(1,x,y); and to convert the second image into a YUV color gamut image when the second image is a non-YUV image, recording the Y value of any pixel in the second image as Y_(2,x,y);

The Gaussian kernel filtering module is adapted to establish a Gaussian-kernel-filtered image of the second image, the Y value of any pixel in the Gaussian-kernel-filtered image being recorded as Y_(Gaussian,x,y);

The normalization processing module is adapted to normalize the Y value of each pixel in the second image to obtain a normalized Y value Y_(norm,x,y);

The target image generation module is adapted to generate the target image from said Y_(norm,x,y), U_(1,x,y), and V_(1,x,y).

Further, in the above-mentioned case,

the processing unit further comprises an image phase correction module,

the image phase correction module is adapted to align and match the first image and the second image using SURF feature point matching or the maximum mutual information method; after alignment and matching, only the pixel information contained in both images is retained, and the x-axis and y-axis position information of all retained pixels is re-indexed.

Further, in the above-mentioned case,

when the SURF feature points are matched, the image phase correction module extracts a first group of feature points from the first image, extracts a second group of the same feature points from the second image by the same method, and obtains relative displacement information between the first image and the second image by matching and comparing the two groups of feature points,

and the image pixel points in the first image and the second image, together with their x-axis and y-axis position information, are re-indexed according to the relative displacement information.

The principle of the invention is as follows:

a high-resolving-power grayscale image (the second image) is acquired with a high-resolution CMOS imaging sensor, using reflective illumination from an ultraviolet or deep-ultraviolet light source (e.g., 220 nm wavelength). This design serves three purposes:

1) Since the deep-ultraviolet wavelength is much shorter than visible wavelengths (400-700 nm), the resolving power is much higher (2-3 times). At the same time, the short wavelength diffracts weakly and thus penetrates weakly, so for a projection imaging system such as a microscope, most colored samples reflect or absorb the deep-ultraviolet light rather than transmit it.

2) The small amount of ultraviolet light that is transmitted is absorbed by the upper sample layer (roughly the top 10 microns) and does not illuminate the lower layers; i.e., the depth of field is tightly controlled at the surface.

3) For transparent/translucent samples, the transparent portion serves mostly as a reference background, while the opaque structures contain the details to be examined closely (e.g., nuclear division figures and cell granules) rather than abundant color detail (e.g., cytosol staining used to determine cell type); therefore the ultraviolet reflectance image used to provide high resolution only needs to distinguish the opaque portion, and leaving the transparent portion unimaged does not affect the observed details.

Although the absolute reflectivity of differently colored samples differs, locally the relative reflectivity of adjacent pixels is proportional to the concentration, i.e., to the gradient of the Y channel in the color image; the Y-channel gradient of the ultraviolet image can therefore be used to estimate the Y-channel gradient of the color image. For details in a microscopic image, such as cell nuclei and very small granule morphology, the local change in color (i.e., the U and V, or CbCr, channels) is extremely small, and the human eye is far less sensitive to color than to brightness (i.e., Y); the lower resolution of the U and V channels therefore has no practical impact, while improving the resolution of the Y channel raises the overall resolution of the picture.

The invention has the beneficial effects that:

by combining the Y-channel resolving power of the image acquired under ultraviolet light into the image acquired under standard white light, the method or system of the invention improves optical resolving power while maintaining the color fidelity of the true-color image, without changing the numerical aperture of the objective lens or the basic structure of the optical microscope (the technical scheme is also applicable to nearly transparent samples). When the sample is thick, only the surface-layer image is collected, eliminating the interference of a thick sample's multilayer structure with the imaging.

Drawings

The depth-of-field controlled super-resolution microscopy digital imaging method and system of the invention are further explained with reference to the accompanying drawings.

Fig. 1 is a schematic structural diagram (white light reflection scene) of a microscope part of a depth-of-field control type super-resolution microscopy digital imaging system in the invention;

FIG. 2 is a schematic structural diagram (white light transmission scene) of a microscope part of the depth-of-field control type super-resolution microscopy digital imaging system according to the present invention;

FIG. 3 is a schematic view of the structure of the microscope part of the depth-of-field controlled super-resolution microscopy digital imaging system according to the present invention (an ultraviolet irradiation schematic scene);

fig. 4 is a system block diagram of the present invention.

Detailed Description

As shown in fig. 1, the depth-of-field controlled super-resolution microscopy digital imaging system according to the present invention includes a microscope imaging sensor (i.e. a camera in the figure), a standard white light illumination component (i.e. RGB light sources in the figure, wherein the RGB light sources may be one set, or two or even more sets), an ultraviolet light source, a control unit, and a processing unit.

The depth-of-field controlled super-resolution microscopy digital imaging method of the invention is mainly realized by the aforementioned system, and comprises the following steps (i.e. the following steps are also realized by the aforementioned control unit and processing unit):

the method comprises the following steps: based on a first image acquired by a microscope imaging sensor under standard white light. It may be preferable to: the foregoing under standard white light refers to under illumination or transmission of a white light source. The strategy in the specific selection is as follows: performing transmission illumination on a transparent/semitransparent sample and a white light source; and (5) performing reflective illumination on the opaque sample. As shown in fig. 1, the white light source is located on the same side of the specimen as the microscope imaging sensor when illuminated. In transmission, the white light source is positioned below the specimen and the microscope imaging sensor is positioned above the specimen, as shown in fig. 2.

Step 2: acquire, through the microscope imaging sensor, a second image of the specimen under mid-ultraviolet or far-ultraviolet illumination (e.g., 220 nm wavelength); as shown in fig. 3, the mid-/far-ultraviolet light source (UV in the figure) is on the same side of the specimen as the microscope imaging sensor.

Because the deep-ultraviolet wavelength is much shorter than visible wavelengths (400-700 nm), the resolving power is much higher (2-3 times) than that of a photograph taken under white light. Owing to the weak penetration of deep-ultraviolet light, most colored samples reflect it in a projection imaging system such as a microscope. The ultraviolet light is absorbed by the upper sample layer (roughly the top 10 microns) and does not illuminate the lower layers; i.e., the depth of field is tightly controlled at the surface. For a transparent/translucent sample, the transparent portion appears as a reference background, and the opaque structures contain the details to be examined closely (such as nuclear division figures and cell granules) rather than abundant color detail (such as cytosol staining used to determine cell type); therefore the ultraviolet reflectance image providing the high resolution only needs to distinguish the opaque portion, and leaving the transparent portion unimaged does not affect the observed details.

Step 3: if the acquired first image is a non-YUV (also called YCbCr) image, convert the first image into a YUV color gamut image, recording the Y value of any pixel in the first image as Y_(1,x,y), the U value as U_(1,x,y), and the V value as V_(1,x,y).

If the acquired second image is a non-YUV image, convert the second image into a YUV color gamut image, recording the Y value of any pixel in the second image as Y_(2,x,y). Since only the resolving-power information of the second image is of interest, it may be preferable to record the U value and V value of every pixel in the second image directly as 0 when converting.
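For illustration only, one possible RGB-to-YUV conversion is sketched below; the patent does not fix a particular conversion matrix, so the BT.601 full-range coefficients used here are an assumption, as is the construction of the second image's YUV representation with U = V = 0:

```python
import numpy as np

def rgb_to_yuv(rgb):
    # BT.601 full-range coefficients (one common convention; the patent
    # text does not prescribe a specific matrix)
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    yuv = rgb @ m.T
    yuv[..., 1:] += 128.0  # center the chroma channels at 128
    return yuv

# first image: color pixels -> full YUV
first_yuv = rgb_to_yuv(np.array([[255.0, 255.0, 255.0]]))  # a white pixel

# second image: UV grayscale -> Y from the sensor, U and V recorded as 0
gray = np.array([[10.0, 200.0]])
second_yuv = np.stack([gray, np.zeros_like(gray), np.zeros_like(gray)], axis=-1)
```

Depending on convention, chroma may also be centered at 0 rather than 128 and results may need clipping to the valid range; the patent only requires that the second image's U and V be recorded as 0.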

In the method, if the mechanical system shakes during the acquisition of the first image and the second image, causing an offset between the actual contents of the two images, a preferred step is to perform image phase correction after the second image is acquired. During image phase correction, the first image and the second image are aligned and matched using SURF feature point matching or the maximum mutual information method (i.e., searching for the pixel offset that maximizes the mutual information of the two images). For the latter, treating the Y values at identical coordinates of the two images as jointly distributed with p(x, z) and marginal distributions p(x) and p(z), the mutual information is I(X; Z) = Σ_{x,z} p(x, z) · log( p(x, z) / (p(x) · p(z)) ). After alignment and matching, only the pixel information contained in both images is retained, and the x-axis and y-axis position information of all retained pixels is re-indexed.
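The maximum-mutual-information search can be sketched as follows; the histogram bin count, the search radius, and the exhaustive search over integer offsets are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # joint histogram of the Y values at identical coordinates
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxz = h / h.sum()
    px = pxz.sum(axis=1, keepdims=True)   # marginal p(x)
    pz = pxz.sum(axis=0, keepdims=True)   # marginal p(z)
    nz = pxz > 0                          # skip empty cells (0*log0 = 0)
    return float(np.sum(pxz[nz] * np.log(pxz[nz] / (px @ pz)[nz])))

def best_shift(img1, img2, max_shift=3):
    # exhaustive search over small pixel offsets; keep the offset that
    # maximizes mutual information over the overlapping region
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = img1[max(dy, 0):img1.shape[0] + min(dy, 0),
                     max(dx, 0):img1.shape[1] + min(dx, 0)]
            b = img2[max(-dy, 0):img2.shape[0] + min(-dy, 0),
                     max(-dx, 0):img2.shape[1] + min(-dx, 0)]
            mi = mutual_information(a, b)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best
```

In practice the offset recovered here would then drive the re-indexing of the retained overlap pixels described above.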

If the SURF feature point matching method is selected for correction, it may further be preferable that: when the SURF feature points are matched, a first group of feature points is extracted from the first image, a second group of the same feature points is extracted from the second image by the same method, and relative displacement information between the first image and the second image is obtained by matching and comparing the two groups of feature points. The image pixel points in both images, together with their x-axis and y-axis position information, are then re-indexed according to the relative displacement information.

With regard to feature point extraction, it may be particularly preferable that the extraction adopt an adaptive threshold strategy. When feature points are extracted with the adaptive threshold strategy,

in each image, the feature-point saliency of every pixel is calculated with a SURF or SIFT feature-point algorithm, the mathematical meaning of the saliency being the determinant of the Hessian feature matrix at that pixel;

an upper limit Kmax on the number of feature points is preset to bound the computational load, together with a minimum saliency threshold Tmin used to decide whether a pixel qualifies as a feature point at all;

each image is divided into a number of mutually overlapping regions, and each region is checked for whether the feature saliency of all its pixels is below Tmin, i.e. whether it is a blank region; the ratio p of the number of blank regions to the total number of regions is calculated, and the expected number of feature points is Kexp = (1 - p) · Kmax;

all pixel coordinates whose feature saliency is a local maximum within the whole field of view of the image are sorted in descending order of saliency, the top Kexp of them are taken as the final feature-point set of the field of view, and the time-frequency-domain characteristics of the pixels neighbouring each feature point are recorded in its feature-point description vector.
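The adaptive feature-point budget of the steps above can be sketched as follows. For simplicity this sketch tiles the image into non-overlapping square regions (the text uses overlapping regions); the region size and all parameter names are illustrative assumptions.

```python
import numpy as np

def expected_feature_count(saliency, k_max, t_min, region=16):
    """Adaptive-threshold budget Kexp = (1 - p) * Kmax.

    saliency : HxW array of per-pixel feature saliency (e.g. the
               determinant of the Hessian at each pixel).
    A region is 'blank' if every pixel's saliency is below t_min;
    p is the ratio of blank regions to all regions.
    """
    h, w = saliency.shape
    blank = total = 0
    for r in range(0, h, region):
        for c in range(0, w, region):
            total += 1
            if saliency[r:r + region, c:c + region].max() < t_min:
                blank += 1
    p = blank / total
    return int(round((1 - p) * k_max))
```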

Image matching correction itself is prior art and is also described in a patent application previously filed by the applicant, so a detailed description is omitted here.

The method comprises the following step: a Gaussian-kernel-filtered image of the second image is established, i.e. a two-dimensional Gaussian kernel matrix is convolved with the second image. The Gaussian kernel matrix is constructed as follows: given a sensor pixel size of p nm and a red-light resolution of the objective lens of q nm (the red light may be taken to have a wavelength of 700 nm), and with a preset positive coefficient u, the Gaussian kernel radius r is (uq/p) pixels, rounded up. From the kernel radius r, the Gaussian variance is taken as sigma = r/3, and by the standard formula the value of each element of the two-dimensional Gaussian kernel matrix of side length r is exp(-d^2 / (2·sigma^2)) / sigma / sqrt(2·pi), where d is the distance from the element to the centre of the matrix, exp() is the natural exponential function and sqrt() is the square root. This calculation treats the short-wavelength ultraviolet light as forming an ideal diffraction-free image (each point on the object appears as a single point on the sensor, and points do not affect one another), while long-wavelength colour light (red being the longest) is diffracted (each point on the object appears on the sensor as a point-spread function spreading into surrounding pixels, so points interfere with one another); the conversion from the ultraviolet image to the colour image is therefore modelled as Gaussian-kernel filtering. The Y value of the pixel at (x, y) in the Gaussian-kernel-filtered image is denoted YGaussian(x,y).
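The kernel construction and filtering step can be sketched as follows. The extra normalization of the kernel to unit sum (so that filtering preserves overall brightness) and the edge-padding mode are added assumptions not stated in the text.

```python
import math
import numpy as np

def gaussian_kernel(p_nm, q_nm, u):
    """Build the 2-D Gaussian kernel described above.

    p_nm: sensor pixel size in nm; q_nm: red-light resolution of the
    objective in nm (e.g. for 700 nm red light); u: preset positive
    coefficient. Kernel radius r = ceil(u*q/p), sigma = r/3, element
    value exp(-d^2/(2*sigma^2))/sigma/sqrt(2*pi) with d the distance
    to the matrix centre. The kernel is then normalized to sum to 1
    (an added safeguard, not part of the original formula).
    """
    r = math.ceil(u * q_nm / p_nm)
    sigma = r / 3.0
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    d2 = xx**2 + yy**2
    k = np.exp(-d2 / (2 * sigma**2)) / sigma / math.sqrt(2 * math.pi)
    return k / k.sum()

def filter_image(y2, kernel):
    """Convolve the second image's Y channel with the kernel
    (direct sliding-window convolution with edge padding)."""
    r = kernel.shape[0] // 2
    padded = np.pad(y2, r, mode='edge')
    out = np.zeros_like(y2, dtype=float)
    for i in range(y2.shape[0]):
        for j in range(y2.shape[1]):
            out[i, j] = (padded[i:i + 2*r + 1, j:j + 2*r + 1] * kernel).sum()
    return out
```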

The method comprises the following step: the Y value of each pixel in the second image is normalized to obtain the normalized Y value Ynorm(x,y). In particular, it may be preferable that, when normalizing the second image, Ynorm(x,y) = Y2(x,y) / YGaussian(x,y) · Y1(x,y).
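The normalization formula can be sketched element-wise as below; the small `eps` guard against division by zero in dark regions is an added safeguard, not part of the original formula.

```python
import numpy as np

def normalize_y(y2, y_gauss, y1, eps=1e-6):
    """Ynorm = Y2 / YGaussian * Y1, element-wise.

    The UV image's fine detail (Y2 relative to its Gaussian-blurred
    version YGaussian) modulates the white-light luma Y1."""
    return y2 / np.maximum(y_gauss, eps) * y1
```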

The method comprises the following step: the target image is obtained from Ynorm(x,y), U1(x,y) and V1(x,y). When obtaining the target image, a new image may be regenerated from the Ynorm(x,y), U1(x,y) and V1(x,y) information; alternatively, the first image may be copied and each Y1(x,y) in it replaced with the corresponding Ynorm(x,y), with the U1(x,y) and V1(x,y) information left unchanged.
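Recombining the sharpened luma with the white-light chroma can be sketched as below. The conversion back to RGB uses the BT.601 inverse transform, which is an assumption: the text only specifies the YUV recombination, not the output colour space.

```python
import numpy as np

def assemble_target(y_norm, u1, v1):
    """Build the target image from Ynorm and the chroma U1, V1 of the
    white-light image, then convert to RGB (BT.601 inverse, 0-255)."""
    r = y_norm + 1.402 * v1
    g = y_norm - 0.344136 * u1 - 0.714136 * v1
    b = y_norm + 1.772 * u1
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)
```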

The processing unit is mainly used to carry out the method steps above. Specifically, the processing unit comprises a colour-gamut conversion module, a Gaussian-kernel filtering module, a normalization processing module and a target-image generation module, summarised as follows:

the colour-gamut conversion module is adapted to convert the first image into a YUV colour gamut image when the first image is not a YUV image, the Y, U and V values of the pixel at (x, y) in the first image being denoted Y1(x,y), U1(x,y) and V1(x,y) respectively; and to convert the second image into a YUV colour gamut image when the second image is not a YUV image, the Y value of the pixel at (x, y) in the second image being denoted Y2(x,y);

the Gaussian-kernel filtering module is adapted to establish the Gaussian-kernel-filtered image of the second image, the Y value of the pixel at (x, y) in the Gaussian-kernel-filtered image being denoted YGaussian(x,y);

the normalization processing module is adapted to normalize the Y value of each pixel in the second image to obtain the normalized Y value Ynorm(x,y);

the target-image generation module is adapted to generate the target image from Ynorm(x,y), U1(x,y) and V1(x,y).
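The four modules can be chained into one pipeline, sketched below. The module boundaries follow the text; the BT.601 RGB/YUV coefficients, the small default kernel radius, and the class and method names are all illustrative assumptions.

```python
import numpy as np

class ProcessingUnit:
    """Minimal sketch of the processing unit's four modules chained
    together: colour-gamut conversion, Gaussian-kernel filtering,
    normalization, and target-image generation."""

    def __init__(self, kernel_radius=3):
        r = kernel_radius
        sigma = r / 3.0
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        self.kernel, self.r = k / k.sum(), r  # unit-sum kernel

    # colour-gamut conversion module (BT.601 full-range, assumed)
    def to_yuv(self, rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.168736 * r - 0.331264 * g + 0.5 * b
        v = 0.5 * r - 0.418688 * g - 0.081312 * b
        return y, u, v

    # Gaussian-kernel filtering module (direct convolution, edge pad)
    def blur(self, y):
        r = self.r
        p = np.pad(y, r, mode='edge')
        out = np.empty_like(y, dtype=float)
        for i in range(y.shape[0]):
            for j in range(y.shape[1]):
                out[i, j] = (p[i:i + 2*r + 1, j:j + 2*r + 1] * self.kernel).sum()
        return out

    # normalization module + target-image generation module
    def process(self, rgb1, rgb2):
        y1, u1, v1 = self.to_yuv(rgb1)          # white-light image
        y2, _, _ = self.to_yuv(rgb2)            # UV image (Y only)
        y_norm = y2 / np.maximum(self.blur(y2), 1e-6) * y1
        r = y_norm + 1.402 * v1
        g = y_norm - 0.344136 * u1 - 0.714136 * v1
        b = y_norm + 1.772 * u1
        return np.clip(np.stack([r, g, b], axis=-1), 0, 255)
```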

As mentioned above, to cope with the imaging effect of mechanical-system shake that may occur, the processing unit may further comprise an image phase correction module:

the image phase correction module is adapted to align and match the first and second images using SURF feature-point matching or the maximum-mutual-information method, to retain only the pixel-point information present in both the first and second images after alignment and matching, and to re-count the x-axis and y-axis positions of all retained pixels.

When the image phase correction module uses SURF feature-point matching, a first group of feature points is extracted from the first image, a second group of the same feature points is extracted from the second image by the same method, the relative displacement between the first and second images is obtained by matching and comparing the two groups of feature points, and the image pixels within the first and second images, together with their x-axis and y-axis positions, are counted according to this relative displacement.

A block diagram of the system is shown in fig. 4.

The present invention is not limited to the above embodiments; the technical solutions of the above embodiments may be combined with one another to form new technical solutions, and all technical solutions formed by equivalent substitution fall within the scope of the present invention.
