Method and apparatus for estimating STED resolution


Abstract: This technique, "Method and apparatus for estimating STED resolution", was created by Kai Walter and Lars Friedrich on 2021-03-26. Its main content is as follows: The invention relates to a method for estimating the STED resolution, comprising the steps of: generating a first frame (F0) representing a reference image from a field of view, the reference image having a predetermined reference resolution; generating at least one second frame (F1-FN) representing a STED image from the same field of view, the STED image having a STED resolution to be estimated; blurring the second frame (F1-FN) by applying a convolution kernel having at least one fitting parameter to the second frame (F1-FN); determining an optimized value of the fitting parameter of the convolution kernel for which the difference between the first frame and the blurred second frame is minimized; and estimating the STED resolution based on the optimized value of the fitting parameter and the predetermined reference resolution.

1. A method for estimating STED resolution, comprising the steps of:

generating a first frame (F0) representing a reference image from a field of view, the reference image having a predetermined reference resolution,

generating at least one second frame (F1-FN) representing a STED image from the same field of view, the STED image having a STED resolution to be estimated,

blurring the second frame (F1-FN) by applying a convolution kernel having at least one fitting parameter to the second frame (F1-FN),

determining an optimized value of the fitting parameter of the convolution kernel for which a difference between the first frame and the blurred second frame is minimized, and

estimating the STED resolution based on the optimized value of the fitting parameter and the predetermined reference resolution.

2. The method of claim 1, wherein the reference image is a confocal image, and wherein the predetermined reference resolution is a predetermined confocal resolution.

3. The method of claim 1, wherein the STED resolution is determined based on a difference between the predetermined reference resolution and an optimized value of the fitting parameter, or wherein the STED resolution is determined based on a difference between a square of the predetermined reference resolution and a square of the optimized value of the fitting parameter.

4. The method according to any one of claims 1-3, wherein the convolution kernel is represented by a Gaussian kernel or by a kernel based on a Bessel function or by a kernel based on an Airy function, the width of the kernel representing the fitting parameter.

5. The method according to any one of claims 1-3, wherein the predetermined reference resolution depends only on optical parameters of an optical system (126) used for generating the reference image.

6. The method of any one of claims 1-3, wherein a signal-to-noise ratio is determined from the second frame (F1-FN), and the STED resolution is corrected in dependence on the signal-to-noise ratio.

7. The method of claim 6, wherein a plurality of downsampled frames (F11-F14, F21-F24, …, FN1-FN4) are generated from one second frame (F1-FN), the downsampled frames (F11-F14, F21-F24, …, FN1-FN4) having different signal-to-noise ratios, the different signal-to-noise ratios being derived from the signal-to-noise ratio of the second frame (F1-FN),

wherein the step of estimating the STED resolution is performed for each of the plurality of downsampled frames (F11-F14, F21-F24, …, FN1-FN4), and

wherein a signal-to-noise corrected STED resolution is determined based on the plurality of STED resolutions estimated for the plurality of downsampled frames (F11-F14, F21-F24, …, FN1-FN4).

8. The method of any of claims 1-3, wherein the at least one second frame comprises a plurality of second frames (F1-FN),

wherein the steps of blurring, determining optimized values of fitting parameters and estimating the STED resolution are performed for each of the plurality of second frames (F1-FN), and

wherein a final STED resolution is determined based on the plurality of estimated STED resolutions.

9. The method of any one of claims 1-3, wherein the STED point spread function is determined based on an estimated STED resolution.

10. The method according to claim 9, wherein a deconvolution is performed on the first frame (F0) representing the reference image based on a reference point spread function and/or on the at least one second frame (F1-FN) representing the STED image based on the STED point spread function.

11. The method of any one of claims 1-3, wherein the first frame (F0) and the at least one second frame (F1-FN) are generated from a single image acquisition.

12. The method of claim 11, wherein image acquisition is performed by applying time-gated detection that classifies photons according to their arrival times at a light detector (128).

13. The method according to any of claims 1-3, wherein, when generating the second frame (F1-FN), a continuous wave laser or a pulsed laser is used to emit depletion light (L2).

14. The method according to any one of claims 1-3, wherein a pulsed laser is used to emit excitation light (L1) when generating the first frame (F0).

15. An apparatus (100) for estimating STED resolution, comprising:

an imaging unit (102) configured to:

generating a first frame (F0) representing a reference image from a field of view, the reference image having a predetermined reference resolution, and

generating at least one second frame (F1-FN) representing a STED image from the same field of view, the STED image having a STED resolution to be estimated; and

a processor (104) for processing the data,

wherein the processor (104) is configured to:

blur the second frame (F1-FN) by applying a convolution kernel having at least one fitting parameter to the second frame (F1-FN),

determine an optimized value of the fitting parameter of the convolution kernel for which a difference between the first frame (F0) and the blurred second frame is minimized, and

estimate the STED resolution based on the optimized value of the fitting parameter and the predetermined reference resolution.

16. The apparatus (100) according to claim 15, adapted to perform the method according to any one of claims 1 to 14.

17. A computer-readable storage medium having a computer program for performing the method according to any one of claims 1 to 14 when the computer program runs on a processor.

Technical Field

The present invention relates to a method and apparatus for estimating STED resolution.

Background

Stimulated emission depletion (STED) microscopy is a fluorescence microscopy technique that can overcome the diffraction-limited optical resolution of other techniques, such as confocal microscopy. The improvement in resolution is achieved by using high-intensity laser light to switch off, via stimulated emission, the fluorescence of fluorophores in the outer region of the diffraction-limited excitation focus. This intense laser light returns nearly all of the excited fluorophores to the non-fluorescent ground state. The fluorescence from the remaining excited fluorophores in the center of the excitation focus is then detected to create a high-resolution image. The principles of STED microscopy are described in detail, e.g., in US 5731588.

In contrast to wide-field or confocal microscopy, where the optical resolution depends only on the optical parameters of the optical system used to image the sample, the optical resolution in STED depends in particular on the photophysical properties of the fluorophore and its environment. Estimating the STED resolution is therefore much more difficult than in wide-field or confocal microscopy; in a practical STED experiment the user essentially has no information about the STED resolution. Since reconstructing an image by deconvolution requires knowledge of the resolution, a method for measuring and/or estimating the STED resolution is highly desirable.

In a broader context, a method known as Fourier ring correlation (FRC) is known in the art. FRC measures the normalized cross-correlation in Fourier space, i.e. as a function of spatial frequency. If the sample to be imaged does not contain sharp contours, the spatial frequencies are low and FRC is not suitable. The FRC result can therefore not be used for deconvolution. Furthermore, FRC depends strongly on noise, so that the calculation fails for images with a low signal-to-noise ratio (SNR). Interestingly, for particularly good SNR (approximately SNR > 50), the FRC calculation is also erroneous, and as the SNR goes to infinity the FRC measure converges to zero. FRC is described by Koho, S.; Tortarolo, G.; Castello, M.; Deguchi, T.; Diaspro, A. and Vicidomini, G. in "Fourier ring correlation simplifies image restoration in fluorescence microscopy", Nature Communications, 2019.

Disclosure of Invention

It is an object herein to provide a method and apparatus adapted to reliably estimate the STED resolution.

The above object is achieved by a method for estimating STED resolution, an apparatus for estimating STED resolution and a computer storage medium.

According to one embodiment, there is provided a method for estimating STED resolution, the method comprising the steps of: generating a first frame representing a reference image from a field of view, the reference image having a predetermined reference resolution; generating at least one second frame representing a STED image from the same field of view, the STED image having a STED resolution to be estimated; blurring the second frame by applying a convolution kernel having at least one fitting parameter to the second frame; determining an optimized value of the fitting parameter of the convolution kernel for which the difference between the first frame and the blurred second frame is minimized; and estimating the STED resolution based on the optimized value of the fitting parameter and the predetermined reference resolution.

Preferably, the reference image is a confocal image and the predetermined reference resolution is a predetermined confocal resolution.

In a preferred embodiment, the STED resolution is determined based on the difference between the predetermined reference resolution and the optimized value of the fitting parameter. Alternatively, the STED resolution is determined based on the difference between the square of the predetermined reference resolution and the square of the optimized value of the fitting parameter.

The convolution kernel can be represented by a Gaussian kernel, by a kernel based on a spherical Bessel function or by a kernel based on an Airy function, the width of the kernel representing the fitting parameter.

Preferably, the predetermined reference resolution depends only on optical parameters of an optical system used for generating the reference image.

A signal-to-noise ratio may be determined from the second frame and the STED resolution may be corrected according to the signal-to-noise ratio.

In an advantageous embodiment, a plurality of down-sampled frames are generated from one second frame, said down-sampled frames having different signal-to-noise ratios, which are derived from the signal-to-noise ratio of the second frame. The step of estimating a STED resolution may be performed for each of the plurality of down-sampled frames, and the signal-to-noise corrected STED resolution may be determined based on the plurality of STED resolutions estimated for the plurality of down-sampled frames.

In a preferred embodiment, the at least one second frame comprises a plurality of second frames, wherein the steps of blurring, determining the optimized value of the fitting parameter and estimating the STED resolution are performed for each of the plurality of second frames. A final STED resolution is then determined based on the plurality of estimated STED resolutions.

According to a preferred embodiment, the STED Point Spread Function (PSF) is determined based on the estimated STED resolution.

Preferably, the deconvolution is performed on the first frame representing the reference image based on the reference point spread function and/or on the at least one second frame representing the STED image based on the STED point spread function.

In a preferred embodiment, the first frame and the at least one second frame are generated from a single image acquisition.

Image acquisition may be performed by applying time-gated detection that classifies photons according to their arrival time on the photodetector.

When generating the second frame, a continuous wave laser or a pulsed laser may be used to emit the depletion light.

When generating the first frame, a pulsed laser may be used to emit excitation light.

According to another aspect, an apparatus for estimating STED resolution is provided. The device comprises an imaging unit configured to generate a first frame representing a reference image from a field of view, the reference image having a predetermined reference resolution, and to generate at least one second frame representing a STED image from the same field of view, the STED image having a STED resolution to be estimated. The apparatus also includes a processor configured to blur the second frame by applying a convolution kernel having at least one fitting parameter to the second frame. The processor is further configured to determine an optimized value of a fitting parameter of the convolution kernel for which a difference between the first frame and the blurred second frame is minimized. The processor is further configured to estimate the STED resolution based on the optimized values of the fitting parameters and a predetermined reference resolution.

The device is preferably adapted to perform the method. Furthermore, a computer-readable storage medium is provided, which has a computer program for performing the method according to the present invention, when the computer program runs on a processor.

Drawings

Specific embodiments will be described hereinafter with reference to the accompanying drawings, in which:

figure 1 is a schematic view of a fluorescence microscope according to an embodiment,

figure 2 is a schematic diagram illustrating time gated detection,

figure 3 is a graph showing simulated confocal images and simulated STED images,

figure 4 is a flow chart illustrating a method for estimating the STED resolution,

figure 5 is a graph showing the noise dependence of the estimate of STED resolution,

figure 6 is a schematic diagram showing the noise correction process,

figure 7 is a schematic diagram showing the effect of noise correction on the estimation of STED resolution,

fig. 8 is a flow chart illustrating a particular embodiment of the method that considers multiple STED frames.

Detailed Description

Fig. 1 shows a schematic view of a fluorescence microscope 100 according to an embodiment. The fluorescence microscope 100 is configured to estimate the STED resolution, as explained in detail below. First, the basic structure of the fluorescence microscope 100 will be briefly summarized.

The fluorescence microscope 100 includes an imaging unit, generally referred to as 102 in fig. 1, and a processor 104, which processor 104 may be configured to control the overall operation of the fluorescence microscope 100.

The imaging unit 102 comprises an excitation light source 106 adapted to emit excitation light L1, which excites fluorophores present in an excitation focus within the sample 108 to spontaneously emit fluorescence light L3. The wavelength of the excitation light L1 is adapted to the fluorophore used in the particular experiment. The imaging unit 102 further comprises a depletion light source 110 for emitting depletion light L2, which is adapted to deplete the excited fluorophores in an outer region of the excitation focus generated by the excitation light L1. The wavelength of the depletion light L2 is selected such that, when illuminated with the depletion light L2, fluorophores present in the sample 108 are reliably induced by stimulated emission to return from their excited state to the ground state. Specifically, the wavelength of the depletion light L2 may be approximately equal to the wavelength of the fluorescence light L3 emitted by the fluorophores when transitioning from the excited state to the ground state.

The excitation light source 106 emits excitation light L1 onto the mirror 114, and the mirror 114 reflects the excitation light L1 onto the first wavelength selective beam splitter 116. The beam splitter 116 reflects the excitation light L1 onto a second wavelength selective beam splitter 118 that transmits the excitation light L1 toward the scanning device 120.

The depletion light source 110 emits the depletion light L2 onto a phase mask 122, which shapes the depletion light L2 such that, in the region of the excitation focus, the spatial distribution of the depletion light L2 exhibits a minimum, preferably a zero, and rises steeply from this minimum. After passing through the phase mask 122, the depletion light L2 is reflected by a mirror 124 onto the second wavelength selective beam splitter 118, from which it is reflected to the scanning device 120.

The scanning device 120 directs the excitation light L1 and the depletion light L2, which are superimposed by the beam splitter 118, towards the objective lens 126, which focuses the superimposed light distribution L1/L2 into the sample 108. By operating the scanning device 120, the superimposed light distribution L1/L2 is moved over the sample 108, thereby scanning a plurality of points within the sample 108.

The sample 108 illuminated with the superimposed light distribution L1/L2 emits fluorescence light L3, which returns through the objective lens 126 to the scanning device 120. The exemplary configuration of fig. 1 thus provides so-called de-scanned detection of the fluorescence light L3. The fluorescence light L3 then passes through the beam splitters 118, 116 and falls onto the detector 128.

Needless to say, the beam splitters 116, 118 exhibit spectral characteristics adapted to the wavelengths of the excitation light L1, the depletion light L2 and the fluorescence light L3, so that the light is guided by reflection and transmission as shown in fig. 1.

The detector 128 may be configured to perform the image acquisition by detecting the intensity of the fluorescence light. Alternatively or additionally, the detector 128 may be configured to perform the image acquisition by applying time-gated detection to the fluorescence photons representing the fluorescence light L3 emitted from the sample 108. Thus, under the control of the processor 104, the detector 128 detects fluorescence photons and classifies them according to their arrival times at the detector 128. For example, the detector 128 detects the arrival times by applying time-correlated single photon counting. To this end, the detector 128 determines the arrival time of each fluorescence photon relative to a start time, which may be defined by a light pulse emitted by one of the excitation light source 106 and the depletion light source 110.

In the particular example shown in fig. 1, it may be assumed that the excitation light source 106 and the depletion light source 110 are both formed by pulsed laser sources. However, this configuration is merely an example. According to an alternative embodiment, the depletion light source 110 may be configured to emit the depletion light L2 as continuous wave (CW) laser light.

Fig. 2 is a schematic diagram illustrating the time-gated detection performed by the detector 128 under control of the processor 104 in the case where the excitation light source 106 and the depletion light source 110 are both pulsed laser sources. According to fig. 2, the excitation light source 106 outputs an excitation pulse EP having a pulse duration P1 during the detection time gate TG1. Subsequently, the depletion light source 110 outputs a depletion pulse DP having a pulse duration P2 during the detection time gate TG2. As shown in fig. 2, the excitation pulse EP and the depletion pulse DP do not overlap temporally. By applying two separate detection time gates TG1 and TG2, the detector 128 allows a first frame representing a pure confocal image and a second frame representing a pure STED image to be generated. In fig. 3, the left side shows a simulated confocal image and the right side shows a simulated STED image. As explained in detail below, the first frame representing the confocal image and the second frame representing the STED image may be used to estimate the STED resolution. In this respect, it is to be noted that the first frame is not limited to a confocal image: any frame may be used to represent the reference image as long as this reference image has a resolution that can be predetermined and thus serve as a reference for estimating the unknown STED resolution.
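Purely as an illustration of how the two detection time gates could be turned into the two frames, the following sketch assumes that the photon events of one scan are available as arrays of pixel coordinates and arrival times relative to the excitation pulse; the gate boundaries, the data layout and the function name are hypothetical and not taken from the patent.

```python
# Minimal sketch of time-gated frame generation (hypothetical data layout):
# photons arriving within gate TG1 form the confocal frame F0, photons arriving
# within gate TG2 form the STED frame F1. The gate times are placeholder values.
import numpy as np

def gated_frames(px, py, t, shape, tg1=(0.0, 0.2e-9), tg2=(0.3e-9, 8.0e-9)):
    """Sort photon events into a confocal frame (TG1) and a STED frame (TG2).

    px, py : integer pixel coordinates of each detected photon
    t      : arrival time of each photon relative to the excitation pulse
    shape  : (rows, cols) of the scanned field of view
    """
    f0 = np.zeros(shape)                            # reference (confocal) frame F0
    f1 = np.zeros(shape)                            # STED frame F1
    in_tg1 = (t >= tg1[0]) & (t < tg1[1])
    in_tg2 = (t >= tg2[0]) & (t < tg2[1])
    np.add.at(f0, (py[in_tg1], px[in_tg1]), 1.0)    # accumulate photon counts
    np.add.at(f1, (py[in_tg2], px[in_tg2]), 1.0)
    return f0, f1
```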

Hereinafter, a method for estimating the STED resolution according to an embodiment will be explained. In this embodiment, only one STED frame is used, and its resolution is determined. However, as shown below, the method may also be applied to multiple STED frames created, for example, by applying multiple detection time gates after the excitation pulse.

Fig. 4 shows a flow chart illustrating method steps for estimating the STED resolution performed according to an embodiment.

In step S1, a first frame representing a reference image is generated from the field of view, e.g. by an image acquisition applying the detection time gate TG1. As described above, the reference image may be a confocal image, but is not limited thereto. In any case, the reference image has a particular reference resolution that can be predetermined. For example, the resolution of the reference image may depend only on the optical parameters of the optical system used to generate the reference image. In the exemplary configuration of fig. 1, this optical system may be formed by the objective lens 126 collecting the fluorescence light L3 from the sample 108. Accordingly, the reference resolution may be predetermined based on the optical parameters of the objective lens 126.

In step S2, at least one second frame representing a STED image is generated from the same field of view, e.g. by image acquisition applying a detection time gate TG2, wherein the STED image has a STED resolution to be estimated by the method. The order of executing steps S1 and S2 is not particularly relevant and may be reversed.

In step S3, the second frame representing the STED image is blurred by applying the convolution kernel to the second frame. The convolution kernel includes at least one fitting parameter, as will be explained in more detail below.

In step S4, an optimized value of the fitting parameter of the convolution kernel is determined, wherein the optimized value minimizes a difference between the first frame and the second frame blurred by the convolution kernel including the fitting parameter.

Finally, in step S5, the STED resolution is estimated based on the optimized values of the fitting parameters that have been determined in step S4, and based on a predetermined reference resolution that is known in advance.

In the following, a specific implementation of the general method of fig. 4 is explained.

First, a suitable convolution kernel to be applied in step S3 of fig. 4 is explained in more detail. In this example, a two-dimensional (x, y) Gaussian blur kernel f_Δ is represented by equation (1):

f_Δ(x, y) = α · exp(−(x² + y²) / (2σ_Δ²)) + β    (1)

in step S3, the gaussian blur kernel f defined in equation (1) is usedΔFor the second frame (F) representing the STED image1) Blurring is performed. Thus, when the second frame is not blurred by F1When specified, the blurred second frame created in step S3 is composed of fΔ*F1It is given. Preferably, the second frame (F) representing the STED image is realized by performing a convolution as indicated above by the symbol "+"1) Is not required.

The Gaussian blur kernel defined in equation (1) includes three unknown parameters α, β and σ_Δ. To minimize the difference between the first (confocal) frame, denoted by F0, and the blurred second (STED) frame f_Δ * F1, the following minimization problem according to equation (2) is considered:

minimize ‖ F0 − f_Δ * F1 ‖² with respect to α, β and σ_Δ    (2)

by solving the minimization problem according to equation (2), the differential resolution σ can be estimatedΔ. Suppose a first frame F0Is a pure confocal frame, the STED resolution is given by equation (3):

In contrast to the STED resolution σ_STED, which depends on intrinsic properties of the fluorophore, the confocal resolution, denoted σ_confocal, depends only on the optical parameters of the optical system, i.e. in the embodiment of fig. 1 on the optical parameters of the objective lens 126 and on the wavelength of the excitation light. The confocal resolution σ_confocal can be defined according to equation (4):

in equation (4), λ is the wavelength of the excitation light L1, and NA is the numerical aperture of the objective lens 126. Factor ofIs the conversion factor between the standard deviation of gauss and half maximum half width at half maximum (HWHM).

Thus, using equations (3) and (4), the STED resolution can be estimated from the fitted difference width and the optical parameters of the optical system alone, without considering any unknown intrinsic fluorophore parameters.

Strictly speaking, the quadratic subtraction in equation (3) holds only if the confocal and STED point spread functions (PSFs) are both Gaussian. However, it is a good approximation for typical real PSFs.
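As a concrete illustration of steps S3 to S5, the following Python sketch blurs the STED frame with an isotropic Gaussian kernel, fits the difference width of equation (2) by least squares and applies equation (3). The handling of α and β as an amplitude and offset applied to the blurred frame, the optimizer bounds and all function names are assumptions made for this sketch, not details prescribed by the patent; all widths are in pixel units.

```python
# Sketch of steps S3-S5: blur the STED frame F1, fit the difference width
# sigma_delta (equation (2)) and subtract widths in quadrature (equation (3)).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

def estimate_sted_resolution(f0, f1, sigma_confocal):
    """Estimate sigma_STED from a reference frame f0 and a STED frame f1."""
    f0 = f0.astype(float)
    f1 = f1.astype(float)

    def cost(sigma_delta):
        blurred = gaussian_filter(f1, sigma_delta)             # f_delta * F1
        # closed-form linear least squares for alpha (amplitude) and beta (offset)
        a = np.column_stack([blurred.ravel(), np.ones(blurred.size)])
        (alpha, beta), *_ = np.linalg.lstsq(a, f0.ravel(), rcond=None)
        return np.sum((f0 - (alpha * blurred + beta)) ** 2)    # equation (2)

    res = minimize_scalar(cost, bounds=(0.05, 10.0), method='bounded')
    sigma_delta = res.x                                         # fitted difference width
    # equation (3): quadrature subtraction (Gaussian approximation)
    return np.sqrt(max(sigma_confocal ** 2 - sigma_delta ** 2, 0.0))
```

With sigma_confocal predetermined from the optical parameters according to equation (4), the returned value is the estimated σ_STED in the same pixel units.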

In the example described above, the two-dimensional case was considered for the sake of brevity. However, the extension to three dimensions (x, y, z) is straightforward. In the three-dimensional case, a three-dimensional Gaussian kernel according to equation (5) can be considered:

f_Δ(x, y, z) = α · exp(−(x² + y²) / (2σ_Δ²) − z² / (2σ_Δz²)) + β    (5)

in equation (5), σΔAnd σΔzLateral and axial differential widths, respectively.

It is again noted that the two-dimensional and three-dimensional Gaussian kernels according to equations (1) and (5), respectively, are only examples of convolution kernels suitable for blurring the second frame F1. Other kernels may be used, such as a kernel based on a spherical Bessel function or a kernel based on an Airy function. However, a Gaussian blur kernel is preferred because the numerical effort required to determine the optimized values of its fitting parameters is small.

It should also be noted that the differential resolution σ_Δ estimated based on equation (2) is significantly affected by noise. To illustrate the noise effect, fig. 5 shows calculation results for simulated confocal images with varying noise. In particular, curve C1 shows the difference width σ_Δ estimated based on equation (2) for different values of the signal-to-noise ratio SNR. For small SNR values, the estimated difference width σ_Δ deviates significantly from the expected width represented by line C2, which indicates the true width. An accurate estimation is only possible for large SNR values. However, STED images are typically very noisy. In addition, the noise dependence of the estimate depends to a large extent on the image content, which makes a systematic analysis difficult.

To solve the noise problem, the inventors performed a theoretical analysis of the minimization problem according to equation (2). Based on a suitable approximation, the minimization with respect to the difference width can be solved. The approximate solution is given by equation (6):

σ_Δ(SNR) ≈ σ_GT + β1 / (SNR + β2)    (6)

in equation (6), the constant β is not known1And beta2Are parameters that depend on the image content and are not accessible in practical experiments. The signal-to-noise ratio SNR can be defined as the square root of the average photon count of the STED image according to equation (7):

as expected, equation (6) converges to the true value σ within the limits of a large SNR valueGTAs shown in fig. 5. If multiple STED acquisitions can be made from the same field of view with different noise levels, the three unknown parameters β1、β2、σGTCan be represented by curve C in FIG. 53Curve fitting as shown to estimate/calculate. However, such multiple exposure acquisitions are not easily performed in practical experiments. For example, phototoxic reactions or movement of the biological sample to be imaged prevent qualitative comparisons of different acquisitions. Therefore, in the following, a method is proposed that enables estimation of the unknown parameter β from a single confocal frame and a single STED frame1、β2、σGT

To estimate the three fitting parameters β1, β2 and σ_GT, at least three input points are required. To provide these input points, down-sampling is applied to the STED frame. For example, down-sampling by a factor of 2 may be used to create four frames whose SNR values differ from that of the original frame, as shown in fig. 6. To this end, a binning process may be applied, in which adjacent pixels of a frame are combined into pixel blocks to improve the signal-to-noise ratio.

For example, by picking one pixel from each 2x2 pixel block (shown as a dotted area in fig. 6a), a first down-sampled pixel is created that has the same signal-to-noise ratio SNR_0 as the original STED frame (shown as a horizontally hatched area in fig. 6a). Then, by combining two adjacent pixels, a second down-sampled pixel is created whose SNR value is higher than that of the first down-sampled pixel by a factor of √2 (see fig. 6b). In the same way, third and fourth down-sampled pixels are created. As a result, four down-sampled frames F11, F12, F13, F14 with SNR values SNR_0, √2·SNR_0, √3·SNR_0 and 2·SNR_0 can be generated (see figs. 6a, 6b, 6c and 6d, respectively). In addition, the confocal frame is down-sampled by binning to create a down-sampled confocal frame F0′ with a signal-to-noise ratio of 2·SNR_0.
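A possible implementation of the factor-2 down-sampling of fig. 6 is sketched below for photon-count images with even dimensions; the function names are illustrative only.

```python
# Sketch of the down-sampling of fig. 6: selecting 1, 2, 3 or all 4 pixels of
# each 2x2 block yields frames whose SNR (for Poisson-distributed counts) is
# SNR_0, sqrt(2)*SNR_0, sqrt(3)*SNR_0 and 2*SNR_0, respectively.
import numpy as np

def downsample_sted(f1):
    """Create the four down-sampled frames F11..F14 from one STED frame F1."""
    a = f1[0::2, 0::2]            # one pixel of every 2x2 block
    b = f1[0::2, 1::2]
    c = f1[1::2, 0::2]
    d = f1[1::2, 1::2]
    return a, a + b, a + b + c, a + b + c + d    # SNR_0 ... 2*SNR_0

def downsample_confocal(f0):
    """Bin the confocal frame F0 to F0' (all four pixels, SNR 2*SNR_0)."""
    return f0[0::2, 0::2] + f0[0::2, 1::2] + f0[1::2, 0::2] + f0[1::2, 1::2]

def snr(frame):
    """Equation (7): SNR as the square root of the average photon count."""
    return np.sqrt(frame.mean())
```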

Based on the down-sampled confocal frame and the four down-sampled STED frames, four estimates from different SNR values can be achieved according to equation (8):

σ = [σ_Δ,1, σ_Δ,2, σ_Δ,3, σ_Δ,4]    (8)

The noise-corrected estimate of the difference width may be obtained by a least-squares minimization according to equation (9):

minimize Σ_j (σ_Δ,j − ζ(x_j))² with respect to η0, η1 and η2    (9)

where ζ(x) = η0 + η1 / (x + η2), x_j = 1, √2, √3, 2 are the relative SNR factors of the four down-sampled frames, η1 = β1 / SNR_0 and η2 = β2 / SNR_0. The noise-corrected differential resolution is then given by σ_Δ,corr = η0.
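The least-squares fit of equation (9) can be sketched as follows, assuming the four difference widths were estimated from the frames F11 to F14 with relative SNR factors 1, √2, √3 and 2; the starting values and the use of scipy's curve_fit are choices made for this sketch.

```python
# Sketch of the noise correction of equations (8)-(9): fit
# zeta(x) = eta0 + eta1/(x + eta2) to the four width estimates and return
# the noise-corrected difference width sigma_delta_corr = eta0.
import numpy as np
from scipy.optimize import curve_fit

def noise_corrected_width(sigma_deltas):
    x = np.array([1.0, np.sqrt(2.0), np.sqrt(3.0), 2.0])   # relative SNR factors
    y = np.asarray(sigma_deltas, dtype=float)               # equation (8)

    def zeta(x, eta0, eta1, eta2):
        return eta0 + eta1 / (x + eta2)

    p0 = (y.min(), max(y.max() - y.min(), 1e-6), 1.0)       # rough starting values
    (eta0, eta1, eta2), _ = curve_fit(zeta, x, y, p0=p0, maxfev=10000)
    return eta0                                              # sigma_delta_corr
```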

Thus, an estimation of the STED resolution is performed for each of the plurality of down-sampled frames, and the SNR-corrected STED resolution is determined based on the plurality of STED resolutions estimated for all down-sampled frames.

After the correction, the noise dependence of the estimate becomes small, as shown in fig. 7, which compares the corrected and uncorrected estimates of the difference width σ_Δ. In fig. 7, curve C_uncorr shows the uncorrected estimate and curve C_corr shows the corrected estimate.

A suitable way to model the STED point spread function f based on the estimated resolution is to assume a Gaussian PSF according to equation (10):

f(x, y) ∝ exp(−(x² + y²) / (2σ_STED²))    (10)

although an approximation according to equation (10) is considered a good approximation, more complex PSF models may also be employed. For example, a two-stage model of the fluorophore may be applied. In this case, two unknown parameters, namely the saturation factor ζ and the lifetime τ of the fluorophore, have to be estimated. The saturation factor ζ can be directly calculated from the confocal resolution and the STED resolution based on equation (11):

the lifetime τ of the fluorophore can be determined, for example, by Fluorescence Lifetime Imaging Microscopy (FLIM).

Fig. 8 (spanning figs. 8a and 8b) shows a flow chart illustrating a specific embodiment of the method which includes, inter alia, the noise correction described above. Although the explanation above refers to an example in which the excitation light source 106 and the depletion light source 110 are both formed by pulsed laser sources, the embodiment of fig. 8 may be modified so as to be advantageously applied to the case where the depletion light source 110 is a CW laser that emits the depletion light L2 continuously rather than in the form of light pulses. Using CW depletion, the method can be applied to multiple STED frames created by time-gated detection with multiple time gates after the excitation pulse emitted by the excitation light source 106.

The method shown in fig. 8 begins in step S10, in which the fluorescence microscope is activated for imaging. In step S12, a first frame representing, for example, a confocal image and N STED frames are generated from the same field of view. In this particular example, the confocal frame may be a frame that is detected shortly after the excitation pulse has been applied to the sample. Such a frame is not yet substantially affected by the depletion light and can therefore be considered to represent a confocal image. In step S14, the confocal frame and the N STED frames are stored in a memory; in fig. 8 the confocal frame is designated F0 and the STED frames are designated F1 to FN.

In step S16, each of the confocal frame F0 and the STED frames F1 to FN is down-sampled. Specifically, the down-sampling process explained above with reference to fig. 6 is applied to each of the STED frames F1 to FN. For example, by down-sampling the STED frame F1, down-sampled frames F11 to F14 with signal-to-noise ratios between SNR_0 and 2·SNR_0 are created. In the same manner, down-sampled frames with different SNRs are obtained for the STED frames F2 to FN. Note that fig. 8 shows, by way of example, four down-sampled frames for each STED frame F1 to FN. However, more than four frames may be considered due to the number of possible combinations for selecting one or more pixels from a given 2x2 pixel block: there are 4 possibilities (1 pixel out of 4) to obtain a frame with SNR_0, 6 possibilities (2 pixels out of 4) to obtain a frame with √2·SNR_0, 4 possibilities (3 pixels out of 4) to obtain a frame with √3·SNR_0, and 1 possibility (all 4 pixels) to obtain a frame with 2·SNR_0. In step S18, the down-sampled frames are stored in the memory.
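If more than four down-sampled frames per STED frame are desired, all pixel-selection combinations of a 2x2 block can be enumerated (4 + 6 + 4 + 1 = 15 combinations); the sketch below illustrates this counting, with names chosen for this example only.

```python
# Sketch of enumerating all down-sampled frames obtainable from the 2x2 blocks:
# choosing 1, 2, 3 or 4 of the four sub-grids gives 4 + 6 + 4 + 1 = 15 frames
# with relative SNR factors sqrt(1), sqrt(2), sqrt(3) and sqrt(4).
from itertools import combinations
import numpy as np

def all_downsampled_frames(f1):
    subgrids = [f1[0::2, 0::2], f1[0::2, 1::2], f1[1::2, 0::2], f1[1::2, 1::2]]
    frames = []
    for n in (1, 2, 3, 4):
        for combo in combinations(subgrids, n):
            frames.append((np.sqrt(n), sum(combo)))   # (relative SNR factor, frame)
    return frames
```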

In step S20, the difference width σ_Δkj is estimated for the j-th down-sampled frame Fkj of the k-th STED frame (k = 1, …, N) by solving the minimization problem according to equation (12):

minimize ‖ F0′ − f_Δ * Fkj ‖² with respect to α, β and σ_Δ    (12)

The minimization can be performed, for example, with the Levenberg-Marquardt algorithm. Note that equation (12) corresponds to equation (2) above, which describes the case where only one STED frame is considered. The difference widths σ_Δkj are stored in step S22.

In step S24, noise correction is performed for each STED frame F1 to FN based on the difference widths σ_Δkj, as explained above with reference to equation (9). As a result, a noise-corrected difference width σ_Δk is obtained for each STED frame.

In step S26, the STED resolution is estimated for each STED frame F1 to FN as explained above with reference to equation (3).

In step S28, a STED point spread function is calculated for each STED frame F1 to FN based on equation (10), assuming a Gaussian PSF.

In step S30, multi-image deconvolution may be performed based on the PSFs determined for the plurality of STED frames F1 to FN.

Finally, in step S32, the results obtained by the multi-image deconvolution in step S30 may be merged into a single deconvolution result F_decon.
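Steps S28 to S32 could, for example, be realized as sketched below; the choice of a Richardson-Lucy iteration and of a simple average as the merging step are assumptions of this sketch, since the patent does not prescribe a particular deconvolution algorithm.

```python
# Sketch of steps S28-S32: deconvolve each STED frame with its estimated
# Gaussian PSF (basic Richardson-Lucy) and merge the results by averaging.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Basic Richardson-Lucy deconvolution."""
    est = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode='same') + 1e-12    # avoid division by zero
        est *= fftconvolve(image / conv, psf_mirror, mode='same')
    return est

def multi_image_deconvolution(sted_frames, psfs):
    """Deconvolve frames F1..FN with their PSFs and merge them into F_decon."""
    results = [richardson_lucy(f.astype(float), p) for f, p in zip(sted_frames, psfs)]
    return np.mean(results, axis=0)                          # merged result F_decon
```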

As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items and may be abbreviated as "/".

Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the respective method, where a block or device corresponds to a method step or a feature of a method step. Similarly, aspects described in the context of method steps also represent a description of a respective block or item or feature of a respective apparatus. Some or all of the method steps may be performed by (or using) a hardware device, e.g., a processor, a microprocessor, a programmable computer, or an electronic circuit. In some embodiments, such an apparatus may perform one or more of the most important method steps.

Embodiments of the present invention may be implemented in hardware or in software, depending on the particular implementation requirements. The implementation can be performed using a non-transitory storage medium, e.g. a digital storage medium such as a floppy disk, a DVD, a Blu-ray disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Accordingly, the digital storage medium may be computer-readable.

Some embodiments according to the invention comprise a data carrier with electronically readable control signals capable of cooperating with a programmable computer system so as to carry out one of the methods described herein.

In general, embodiments of the invention can be implemented as a computer program product having a program code operable to perform a method when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.

Other embodiments include a computer program stored on a machine-readable carrier for performing one of the methods described herein.

In other words, an embodiment of the invention is therefore a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.

Thus, another embodiment of the invention is a storage medium (or data carrier, or computer readable medium) comprising a computer program stored thereon for performing one of the methods described herein when executed by a processor. Data carriers, digital storage media or recording media are usually tangible and/or non-transitory. Another embodiment of the invention is an apparatus as described herein that includes a processor and a storage medium.

Thus, another embodiment of the invention is a data stream or signal sequence representing a computer program for performing one of the methods described herein. The data stream or signal sequence may for example be arranged to be transmitted via a data communication connection, for example via the internet.

Another embodiment includes a processing device, such as a computer or programmable logic device, configured or adapted to perform one of the methods described herein.

Another embodiment comprises a computer having installed thereon a computer program for performing one of the methods described herein.

Another embodiment according to the present invention includes an apparatus or system configured to transmit (e.g., electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may be, for example, a computer, a mobile device, a storage device, etc. This apparatus or system may for example comprise a file server for transmitting the computer program to the receiver.

In some embodiments, a programmable logic device (e.g., a field programmable gate array) may be used to perform some or all of the functions of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor to perform one of the methods described herein. In general, the method is preferably performed by any hardware means.

List of reference numerals

100 fluorescence microscope

102 imaging unit

104 processor

106 excitation light source

108 sample

110 depletion light source

114, 124 mirrors

116, 118 beam splitters

120 scanning device

122 phase mask

126 objective lens

128 detector

L1 excitation light

L2 depletion light

L3 fluorescence
