Hyperspectral imaging system

Document No.: 1850773    Publication date: 2021-11-16

Abstract: This technology, "Hyperspectral imaging system," was designed and created by Wen Shi, E. S. Koo, Scott E. Fraser, and Francesco Cutrale on 2020-01-31. The present invention relates to a hyperspectral imaging system for denoising and/or color unmixing multiple overlapping spectra with fast analysis times under low signal-to-noise ratio conditions. The system may perform hyperspectral phasor (HySP) calculations to efficiently analyze hyperspectral time-lapse data, for example five-dimensional (5D) hyperspectral time-lapse data. Advantages of the imaging system may include: (a) fast computation; (b) straightforward phasor analysis; and (c) a denoising algorithm for obtaining a minimum acceptable signal-to-noise ratio (SNR). An unmixed color image of the target may be generated. These images may be used for the diagnosis of health conditions, which may improve patients' clinical outcomes and the evolution of their health.

1. A hyperspectral imaging system for generating an unmixed color image of a target, comprising:

an image forming system;

wherein the image forming system has the following configuration:

mapping at least one phasor point back to a corresponding pixel on a target image (a "target image pixel") based on the geometric location of the phasor point on a phasor plane;

determining a color of each phasor point on the phasor plane based on a reference color map;

assigning the color to a corresponding target image pixel; and

generating a color image of the target.

2. The hyperspectral imaging system of claim 1, wherein the image forming system further has a configuration of:

acquiring detected radiation of a target comprising at least two target waves ("target waves"), each target wave having a detected intensity and a different detected wavelength;

using the detected target radiation to form an image of a target ("target image"), wherein the target image comprises at least two pixels, and wherein each pixel corresponds to one physical point on the target;

using the detected intensity and detected wavelength of each target wave to form at least one spectrum ("intensity spectrum") for each pixel;

transforming the intensity spectrum of each pixel into a complex-valued function using a Fourier transform, wherein each complex-valued function has at least one real part and at least one imaginary part;

forming a point ("phasor point") for each pixel on the phasor plane by plotting the real and imaginary values of each pixel; and

mapping the phasor points back to the target image pixels on the target image.
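For illustration only, the Fourier step recited in claim 2 can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the patented implementation: `cube` is a hypothetical NumPy array of shape (rows, cols, spectral bins) holding one intensity spectrum per pixel, and the real and imaginary values are taken from a single Fourier harmonic of each spectrum, normalized by the spectrum's total intensity.

```python
import numpy as np

def phasor_coordinates(cube: np.ndarray, harmonic: int = 1):
    """Map each pixel's intensity spectrum to a phasor point (g, s).

    g and s are the real and imaginary parts of the chosen Fourier
    harmonic of the spectrum, normalized by its total intensity.
    """
    n_bands = cube.shape[-1]
    phase = 2.0 * np.pi * harmonic * np.arange(n_bands) / n_bands
    total = cube.sum(axis=-1)
    total = np.where(total == 0, 1.0, total)  # guard against empty spectra
    g = (cube * np.cos(phase)).sum(axis=-1) / total  # real part
    s = (cube * np.sin(phase)).sum(axis=-1) / total  # imaginary part
    return g, s
```

Plotting `s` against `g` for all pixels produces the phasor plane of the claim; each pixel contributes exactly one point regardless of how many spectral bins were acquired.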

3. The hyperspectral imaging system of claim 2 wherein the hyperspectral imaging system further comprises an optical system; wherein:

the optical system comprises at least one optical component;

the at least one optical component comprises at least one optical detector;

the at least one optical detector has the following configuration:

detecting electromagnetic radiation absorbed, transmitted, refracted, reflected and/or emitted from at least one physical point on the target, thereby forming the target radiation; wherein the target radiation comprises at least two target waves, each target wave having an intensity and a different wavelength; and

detecting the intensity and wavelength of each target wave; and

transmitting the detected target radiation, and the detected intensity and detected wavelength of each target wave, to the image forming system for acquisition.

4. The hyperspectral imaging system of claim 1, wherein the image forming system further comprises a control system, a hardware processor, a memory, and a display; and wherein the image forming system further has a configuration to display a color image of the target on the display of the image forming system.

5. The hyperspectral imaging system of claim 2, wherein the image forming system further comprises a control system, a hardware processor, a memory, and a display; and wherein the image forming system further has a configuration to display a color image of the target on the display of the image forming system.

6. The hyperspectral imaging system of claim 3, wherein the image forming system further comprises a control system, a hardware processor, a memory, and a display; and wherein the image forming system further has a configuration to display a color image of the target on the display of the image forming system.

7. The hyperspectral imaging system of claim 2, wherein the image forming system further has a configuration of:

applying a denoising filter at least once to the real and imaginary parts of each complex-valued function to produce denoised real and imaginary values for each pixel; wherein the denoising filter is applied:

after the hyperspectral imaging system transforms the formed intensity spectrum of each pixel into the complex-valued function using a Fourier transform; and

before the hyperspectral imaging system forms a point for each pixel on the phasor plane; and

forming a point for each pixel on the phasor plane using the denoised real and imaginary values of each pixel as the real and imaginary values of each pixel.
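As a sketch of the denoising step of claim 7, paired with the median filter named in claim 64: the filter is applied in phasor space, to the real and imaginary images rather than to the raw spectra, matching the ordering the claim recites. The helper name, the 3x3 kernel, and the single pass are illustrative assumptions.

```python
from scipy.ndimage import median_filter

def denoise_phasor(g, s, size: int = 3, passes: int = 1):
    """Median-filter the real (g) and imaginary (s) phasor images one or
    more times, after the Fourier transform and before the phasor points
    are formed on the phasor plane."""
    for _ in range(passes):
        g = median_filter(g, size=size)
        s = median_filter(s, size=size)
    return g, s
```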

8. The hyperspectral imaging system of claim 2, wherein the image forming system has a configuration that generates a reference color map using the phase ("angle") and/or the modulation ("radius") of the phasor points.

9. The hyperspectral imaging system of claim 2 wherein the reference color map has a uniform color along at least one of its coordinate axes.

10. The hyperspectral imaging system of claim 2, wherein:

the reference color map has a circular shape ("circle");

the circle having an origin, and a radial direction and an angular direction relative to the origin of the circle; and

wherein the image forming system has the following configuration:

changing color in the radial direction and keeping the color uniform in the angular direction to form a radial reference color map; and/or

changing color in the angular direction and keeping the color uniform in the radial direction to form an angular reference color map.
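One possible realization of the radial and angular reference color maps of claim 10 derives the color of each phasor point from its polar coordinates. The HSV parameterization below is an illustrative assumption; the claim does not prescribe a particular color model.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def reference_color_map(g, s, mode: str = "angular"):
    """Color each phasor point from its polar coordinates.

    "angular": hue follows the angle, uniform along the radius.
    "radial":  hue follows the radius, uniform along the angle.
    """
    radius = np.hypot(g, s)
    angle = (np.arctan2(s, g) + 2.0 * np.pi) % (2.0 * np.pi)
    hue = angle / (2.0 * np.pi) if mode == "angular" else np.clip(radius, 0.0, 1.0)
    hsv = np.stack([hue, np.ones_like(hue), np.ones_like(hue)], axis=-1)
    return hsv_to_rgb(hsv)  # one RGB color per phasor point / pixel
```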

11. The hyperspectral imaging system of claim 2, wherein:

the reference color map has a circular shape ("circle");

the circle having an origin, and a radial direction and an angular direction relative to the origin of the circle; and

wherein the image forming system has the following configuration:

changing color in the radial direction and keeping the color uniform in the angular direction to form a radial reference color map; and/or changing color in the angular direction and keeping the color uniform in the radial direction to form an angular reference color map; and

changing the brightness in the radial direction and/or the angular direction; and

forming the reference color map.

12. The hyperspectral imaging system of claim 2, wherein:

the reference color map has a circular shape ("circle");

the circle having an origin, and a radial direction and an angular direction relative to the origin of the circle; and

wherein the image forming system has the following configuration:

changing color in the radial direction and keeping the color uniform in the angular direction to form a radial reference color map; and/or changing color in the angular direction and keeping the color uniform in the radial direction to form an angular reference color map;

decreasing the brightness in the radial direction to form a gradient-descent reference color map; and/or increasing the brightness in the radial direction to form a gradient-ascent reference color map; and

forming the reference color map.
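The gradient-ascent and gradient-descent variants of claim 12 can be sketched by additionally modulating brightness with the radius. Mapping the radius linearly onto the HSV value channel is an illustrative assumption, not the patent's prescription.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def gradient_color_map(g, s, ascending: bool = True):
    """Angular hue plus a brightness that rises or falls with the radius."""
    radius = np.clip(np.hypot(g, s), 0.0, 1.0)
    angle = (np.arctan2(s, g) + 2.0 * np.pi) % (2.0 * np.pi)
    value = radius if ascending else 1.0 - radius  # brightness vs. radius
    hsv = np.stack([angle / (2.0 * np.pi), np.ones_like(angle), value], axis=-1)
    return hsv_to_rgb(hsv)
```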

13. The hyperspectral imaging system of claim 2, wherein:

each phasor point has a real value and an imaginary value;

the image forming system has the following configuration:

forming the phasor plane using a coordinate axis for the imaginary values and a coordinate axis for the real values;

forming a section ("phasor section"), wherein the phasor section includes a phasor point and has a specified area on the phasor plane; and wherein the number (count) of phasor points belonging to the same phasor section forms the amplitude of the section ("phasor section amplitude"); and

forming a histogram ("phasor histogram") by plotting the phasor section amplitudes.
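A minimal sketch of the phasor histogram of claim 13, assuming the phasor plane is covered by a regular grid of equal-area sections; `numpy.histogram2d` then returns each section's amplitude (the count of phasor points falling inside it).

```python
import numpy as np

def phasor_histogram(g, s, bins: int = 256):
    """Section amplitudes over the phasor plane [-1, 1] x [-1, 1]."""
    counts, g_edges, s_edges = np.histogram2d(
        g.ravel(), s.ravel(), bins=bins, range=[[-1, 1], [-1, 1]]
    )
    return counts, g_edges, s_edges
```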

14. The hyperspectral imaging system of claim 13, wherein the hyperspectral imaging system has a configuration of: forming a tensor map by calculating a gradient of phasor section amplitudes between adjacent phasor sections; and assigning a color to each pixel based on the reference color map.
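Claim 14 describes the tensor map only as a gradient of section amplitudes between adjacent sections. One plausible reading, sketched here, is the magnitude of the discrete gradient of the histogram counts; `numpy.gradient` differences each section against its neighbors.

```python
import numpy as np

def tensor_map(counts: np.ndarray) -> np.ndarray:
    """Gradient magnitude of the phasor section amplitudes."""
    dg, ds = np.gradient(counts.astype(float))
    return np.hypot(dg, ds)
```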

15. The hyperspectral imaging system of claim 2, wherein the hyperspectral imaging system has a configuration of:

mapping the phasor points back to corresponding pixels on the target image based on the geometric location of the phasor points on the phasor plane;

assigning an arbitrary color to the corresponding pixel based on the geometric position of the phasor points on the phasor plane, instead of determining the color of each phasor point on the phasor plane based on a reference color map; and

generating an unmixed color image of the target based on the assigned arbitrary color.

16. The hyperspectral imaging system of claim 2, wherein:

the reference color map has a circular shape ("circle");

the circle having an origin, and a radial direction and an angular direction relative to the origin of the circle;

the image forming system has the following configuration:

determining a maximum value ("maximum phasor value") of the phasor histogram;

specifying a center of the circle corresponding to the coordinates of the maximum phasor value (the "maximum center of the circle");

changing color in the radial direction relative to the maximum center of the circle and keeping the color uniform in the angular direction to form a morphed maximum mode; and/or changing color in the angular direction relative to the maximum center of the circle and keeping the color uniform in the radial direction to form a morphed centroid mode; and

forming the reference color map.
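The morphed modes of claim 16 can be sketched by re-centering the reference map on the histogram maximum before computing the radial or angular coloring. `phasor_histogram` and `reference_color_map` are the illustrative helpers sketched after claims 13 and 10, not names from the patent.

```python
import numpy as np

def morphed_center(counts, g_edges, s_edges):
    """Coordinates of the maximum phasor value: the new map center."""
    i, j = np.unravel_index(np.argmax(counts), counts.shape)
    g0 = 0.5 * (g_edges[i] + g_edges[i + 1])
    s0 = 0.5 * (s_edges[j] + s_edges[j + 1])
    return g0, s0

# Usage: shift the phasor coordinates, then reuse the standard maps.
# g0, s0 = morphed_center(counts, g_edges, s_edges)
# rgb = reference_color_map(g - g0, s - s0, mode="angular")
```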

17. The hyperspectral imaging system of claim 2, wherein:

the reference color map has a circular shape ("circle");

the circle having an origin, and a radial direction and an angular direction relative to the origin of the circle;

the image forming system has the following configuration:

determining a maximum value ("maximum phasor value") of the phasor histogram for each axis of the phasor plane to have two maximum phasor values;

determining a minimum value ("minimum phasor value") of the phasor histogram for each axis of the phasor plane to have two minimum phasor values; and

using the two maximum phasor values and the two minimum phasor values to form a bounding plane of all positive phasor histogram values.

18. The hyperspectral imaging system of claim 2, wherein the image forming system has a configuration of: the reference color map is generated by re-coloring each image pixel based on the gradient of the count relative to the histogram values of the spectrum around each image pixel.

19. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has a configuration that generates a color image based on a color model.

20. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has a configuration that generates a color image based on a color model; wherein the color model is an additive color model, a subtractive color model, and/or a cylindrical coordinate color model.

21. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has a configuration that generates a color image based on a color model; wherein the color model is a red-green-blue (RGB) color model, a cyan-magenta-yellow-black (CMYK) color model, and/or a red-green-blue-alpha (RGBA) color model.

22. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the hyperspectral imaging system is a system for real-time intrinsic signal image processing, a system for separating a plurality of extrinsic markers from a plurality of intrinsic markers, and/or a system for combined marker visualization.

23. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the hyperspectral imaging system is a system for real-time intrinsic signal image processing, a system for separating 1 to 3 extrinsic markers from a plurality of intrinsic markers, a system for separating 1 to 7 extrinsic markers from a plurality of intrinsic markers, and/or a system for combined marker visualization.

24. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the at least one optical component further comprises at least one source for illuminating the target ("illumination source"), wherein the illumination source generates electromagnetic radiation comprising at least one wave ("illumination wave") ("illumination source radiation").

25. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the hyperspectral imaging system further comprises at least one illumination source, wherein the illumination source generates illumination source radiation comprising at least two illumination waves, and wherein each illumination wave has a different wavelength.

26. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system further comprises a control system, a hardware processor, a memory and a display.

27. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the image forming system further has a configuration to display a color image of the object on a display of the image forming system.

28. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system further comprises a control system, a hardware processor, a memory and an information transfer system; wherein the information delivery system delivers the image to the user in any manner.

29. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system further comprises a control system, a hardware processor, a memory and an information transfer system; wherein the information delivery system delivers the image to the user as an image, a numerical value, a color, a sound, a mechanical motion, a signal, or a combination thereof.

30. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical lens, an optical filter, a dispersive optical system, or a combination thereof.

31. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical lens, an optical filter, a dispersive optical system, or a combination thereof.

32. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the at least one optical component further comprises an optical lens, an optical filter, a dispersive optical system, or a combination thereof; and wherein the optical components of the hyperspectral imaging system are configured to form a microscope.

33. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the at least one optical component further comprises an optical lens, an optical filter, a dispersive optical system, or a combination thereof; and wherein the optical components of the hyperspectral imaging system are configured to form a microscope.

34. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the optical components of the hyperspectral imaging system are configured to form a confocal fluorescence microscope, a two-photon fluorescence microscope, or a combination thereof.

35. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the optical components of the hyperspectral imaging system are configured to form a confocal fluorescence microscope, a two-photon fluorescence microscope, or a combination thereof.

36. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises a first optical lens, a second optical lens and a dichroic mirror/beam splitter.

37. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises a first optical lens, a second optical lens and a dichroic mirror/beam splitter.

38. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical lens and dispersive optics; and wherein the at least one optical detector is an array of optical detectors.

39. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical lens and dispersive optics; and wherein the at least one optical detector is an array of optical detectors.

40. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical lens, dispersive optics, and a dichroic mirror/beam splitter; and wherein the at least one optical detector is an array of optical detectors.

41. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical lens, dispersive optics, and a dichroic mirror/beam splitter; and wherein the at least one optical detector is an array of optical detectors.

42. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical lens, dispersive optics, and a dichroic mirror/beam splitter; wherein the at least one optical detector is an array of optical detectors; and wherein the illumination source directly illuminates the target.

43. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical lens, dispersive optics, and a dichroic mirror/beam splitter; wherein the at least one optical detector is an array of optical detectors; and wherein the illumination source directly illuminates the target.

44. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the image forming system uses at least one harmonic of the Fourier transform to generate an unmixed color image of the target.

45. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system uses at least a first harmonic and/or a second harmonic of the Fourier transform to generate an unmixed color image of the target.

46. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the image forming system uses only the first harmonic of the Fourier transform or only the second harmonic of the Fourier transform to generate the unmixed color image of the target.

47. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the image forming system uses only the first harmonic and the second harmonic of the Fourier transform to generate the unmixed color image of the target.

48. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the illumination source illuminates the target at each illumination wavelength by emitting all illumination waves simultaneously.

49. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the illumination source illuminates the target at each illumination wavelength by emitting all illumination waves simultaneously.

50. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the illumination source illuminates the target at each illumination wavelength by emitting each wave sequentially.

51. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the illumination source illuminates the target at each illumination wavelength by emitting each wave sequentially.

52. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the target radiation comprises electromagnetic radiation emitted by the target.

53. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the target radiation comprises electromagnetic radiation emitted by the target; and wherein the electromagnetic radiation emitted by the target comprises luminescence.

54. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the target radiation comprises electromagnetic radiation emitted by the target; wherein the electromagnetic radiation emitted by the target comprises luminescence; and wherein the luminescence comprises fluorescence, phosphorescence, or a combination thereof.

55. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the target radiation comprises electromagnetic radiation emitted by the target, and the electromagnetic radiation emitted by the target comprises thermal radiation.

56. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the target radiation comprises electromagnetic radiation emitted by the target; and wherein the electromagnetic radiation emitted by the target comprises luminescence, thermal radiation, or a combination thereof.

57. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the target radiation comprises electromagnetic radiation emitted by the target; wherein the electromagnetic radiation emitted by the target comprises luminescence, thermal radiation, or a combination thereof; and wherein the luminescence comprises fluorescence, phosphorescence, or a combination thereof.

58. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the target radiation comprises electromagnetic radiation emitted by the target; and wherein the electromagnetic radiation emitted by the target comprises fluorescence.

59. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises an optical filtering system; wherein the target radiation comprises electromagnetic radiation emitted by the target; and wherein the electromagnetic radiation emitted by the target comprises fluorescence.

60. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the at least one optical component further comprises an optical filtering system placed between the target and the at least one optical detector; wherein the target radiation comprises electromagnetic radiation emitted by the target; and wherein the electromagnetic radiation emitted by the target comprises fluorescence.

61. The hyperspectral imaging system of any of the preceding and subsequent claims wherein:

the at least one optical component further comprises an optical filtering system disposed between the target and the at least one optical detector;

the optical filtering system comprises a dichroic filter, a beam splitter type filter, or a combination thereof;

the target radiation comprises electromagnetic radiation emitted by the target; and

the electromagnetic radiation emitted by the target includes fluorescence.

62. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises a first optical filtering system and a second optical filtering system; wherein:

the first optical filtering system is placed between the target and the at least one optical detector;

the second optical filtering system is placed between the first optical filtering system and the at least one optical detector;

the first optical filtering system comprises a dichroic filter, a beam splitter type filter, or a combination thereof;

the second optical filtering system comprises a notch filter, an active filter, or a combination thereof;

the target radiation comprises electromagnetic radiation emitted by the target; and

the electromagnetic radiation emitted by the target includes fluorescence.

63. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical component further comprises a first optical filtering system and a second optical filtering system; wherein:

the first optical filtering system is placed between the target and the at least one optical detector;

the second optical filtering system is placed between the first optical filtering system and the at least one optical detector;

the first optical filtering system comprises a dichroic filter, a beam splitter type filter, or a combination thereof;

the second optical filtering system comprises an active filter;

the active filter comprises an adaptive optical system, an acousto-optic tunable filter, a liquid crystal tunable band-pass filter, a Fabry-Perot interference filter or a combination thereof;

the target radiation comprises electromagnetic radiation emitted by the target; and

the electromagnetic radiation emitted by the target includes fluorescence.

64. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the denoising filter comprises a median filter.

65. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the illumination source comprises a source of coherent electromagnetic radiation.

66. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the illumination source comprises a source of coherent electromagnetic radiation.

67. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the illumination source comprises a coherent electromagnetic radiation source and the coherent electromagnetic radiation source comprises a laser, a diode, a two-photon excitation source, a three-photon excitation source, or a combination thereof.

68. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the illumination source comprises a coherent electromagnetic radiation source and the coherent electromagnetic radiation source comprises a laser, a diode, a two-photon excitation source, a three-photon excitation source, or a combination thereof.

69. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one detector comprises a photomultiplier tube, an array of photomultiplier tubes, a digital camera, a hyperspectral camera, an electron-multiplying charge-coupled device, a scientific CMOS (sCMOS) camera, or a combination thereof.

70. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the target radiation comprises at least four wavelengths.

71. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the target comprises an organic compound.

72. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the target comprises an organic compound; and wherein the target comprises tissue, a fluorescent genetic marker, or a combination thereof.

73. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the hyperspectral imaging system forms an unmixed color image of the target with a signal-to-noise ratio of the at least one spectrum in the range of 1.2 to 50.

74. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the hyperspectral imaging system forms an unmixed color image of the target with a signal-to-noise ratio of the at least one spectrum in the range of 2 to 50.

75. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical detector detects electromagnetic radiation emitted by the target at a wavelength in the range of 300 nm to 800 nm.

76. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one optical detector detects electromagnetic radiation emitted by the target at a wavelength in the range of 300 nm to 800 nm; and wherein the electromagnetic radiation emitted by the target comprises fluorescence.

77. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the illumination source radiation comprises illumination waves having a wavelength in the range of 300 nm to 1300 nm.

78. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the illumination source radiation comprises illumination waves having a wavelength in the range of 300 nm to 1300 nm.

79. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one illumination source comprises a single-photon excitation source; and wherein the illumination source radiation comprises illumination waves having a wavelength in the range of 300 nm to 700 nm.

80. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one illumination source comprises a single-photon excitation source; and wherein the illumination source radiation comprises illumination waves having a wavelength in the range of 300 nm to 700 nm.

81. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the at least one illumination source comprises a two-photon excitation source; and wherein the illumination source radiation comprises illumination waves having a wavelength in the range of 690 nm to 1300 nm.

82. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the at least one illumination source comprises a two-photon excitation source; and wherein the illumination source radiation comprises illumination waves having a wavelength in the range of 690 nm to 1300 nm.

83. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the at least one illumination source comprises a two-photon excitation source; wherein the two-photon excitation source comprises a tunable laser; and wherein the illumination source radiation comprises illumination waves having a wavelength in the range of 690 nm to 1300 nm.

84. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the at least one illumination source comprises a two-photon excitation source; wherein the two-photon excitation source comprises a tunable laser; and wherein the illumination source radiation comprises illumination waves having a wavelength in the range of 690 nm to 1300 nm.

85. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one illumination source comprises a single-photon excitation source, a two-photon excitation source, or a combination thereof; wherein the illumination source radiation of the single-photon excitation source comprises illumination waves having a wavelength in the range of 300 nm to 700 nm; and wherein the illumination source radiation of the two-photon excitation source comprises illumination waves having a wavelength in the range of 690 nm to 1300 nm.

86. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the at least one illumination source comprises a single-photon excitation source, a two-photon excitation source, or a combination thereof; wherein the illumination source radiation of the single-photon excitation source comprises illumination waves having a wavelength in the range of 300 nm to 700 nm; and wherein the illumination source radiation of the two-photon excitation source comprises illumination waves having a wavelength in the range of 690 nm to 1300 nm.

87. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has a configuration that uses a reference material to assign an arbitrary color to each pixel.

88. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the image forming system has a configuration that uses a reference material to assign an arbitrary color to each pixel, and wherein the unmixed color image of the reference material is generated prior to generating the unmixed color image of the target.

89. The hyperspectral imaging system of any of the preceding and subsequent claims, wherein the image forming system has a configuration that assigns each pixel an arbitrary color using a reference material, wherein the unmixed color image of the reference material is generated prior to generating the unmixed color image of the target, and wherein the reference material comprises a physical structure, a chemical molecule, a biological molecule, a physical change and/or a biological change caused by a disease, or any combination thereof.

90. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has a configuration that diagnoses a health condition.

91. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has a configuration to diagnose the health of a mammal.

92. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has a configuration to diagnose a health condition of a mammal; and wherein the health condition is a disease, congenital abnormality, disorder, wound, injury, ulcer, abscess, or a combination thereof.

93. A system for generating a color image of a target from image data acquired by a hyperspectral imaging system, the image data comprising intensities and wavelengths detected in response to interaction of electromagnetic radiation with the target, the system comprising one or more hardware processors configured to:

accessing image data of the target, the image data comprising at least two pixels, each pixel corresponding to a physical point on the target;

generating a spectrum for each pixel based on the detected intensity and wavelength in the accessed image data;

transforming the spectrum of each pixel into the frequency domain;

mapping the frequency-transformed spectrum onto a phasor plane based on the real and imaginary parts of each pixel's frequency-transformed spectrum;

assigning a color to each phasor point on the phasor plane;

mapping the assigned color to each of the pixels of the image data corresponding to the phasor point;

generating a color image of the target based on the mapping of the assigned colors; and

displaying the color image on a display device.
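Chaining the illustrative helpers sketched after the earlier claims gives an end-to-end reading of the processing chain recited in claim 93. All names are hypothetical, `cube` is assumed to be the accessed image data with one spectrum per pixel, and the display step assumes matplotlib; acquisition hardware and the control system are out of scope.

```python
import matplotlib.pyplot as plt

def render_unmixed_image(cube):
    g, s = phasor_coordinates(cube)                  # transform to frequency domain
    g, s = denoise_phasor(g, s)                      # optional denoising (claim 7)
    rgb = reference_color_map(g, s, mode="angular")  # color per phasor point
    plt.imshow(rgb)                                  # colors mapped back to pixels
    plt.axis("off")
    plt.show()
    return rgb
```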

94. A system for generating a color image of a target from image data acquired by a hyperspectral imaging system, the image data comprising intensities and wavelengths detected in response to interaction of electromagnetic radiation with the target, the system comprising one or more hardware processors configured to:

transforming the spectrum of the selected pixels from the image data to a frequency domain;

mapping the frequency-transformed spectrum onto a phasor plane;

assigning a color to each phasor point on the phasor plane;

mapping the assigned color to a physical point corresponding to the selected pixel; and

generating a color image of the target based on the mapping of the assigned colors.

95. A system for detecting a health condition from image data of a target acquired by a hyperspectral imaging system, the image data comprising intensities and wavelengths detected in response to interaction of electromagnetic radiation with the target, the system comprising one or more hardware processors configured to:

accessing image data of the target, the image data comprising a plurality of pixels representing physical points of the target;

for each pixel of the plurality of pixels:

transforming the spectrum of the pixel from the image data to a frequency domain;

mapping the frequency-transformed spectrum onto a phasor plane;

assigning a color to each phasor point on the phasor plane; and

mapping the assigned color to a physical point corresponding to the pixel; and

determining the health condition based on a difference between colors of the physical points.

96. A method for generating a color image of a target from image data acquired by a hyperspectral imaging system, the image data comprising intensities and wavelengths detected in response to interaction of electromagnetic radiation with the target, the method comprising:

accessing image data of the target, the image data comprising at least two pixels, each pixel corresponding to a physical point on the target;

generating a spectrum for each pixel based on the detected intensity and wavelength in the accessed image data;

transforming the spectrum of each pixel into the frequency domain;

mapping the frequency-transformed spectrum onto a phasor plane based on the real and imaginary parts of each pixel's frequency-transformed spectrum;

assigning a color to each phasor point on the phasor plane;

mapping the assigned color to each of the pixels of the image data corresponding to the phasor point;

generating a color image of the target based on the mapping of the assigned colors; and

displaying the color image on a display device.

97. A method for generating a color image of a target from image data acquired by a hyperspectral imaging system, the image data comprising intensities and wavelengths detected in response to interaction of electromagnetic radiation with the target, the method comprising:

accessing image data of the target, the image data comprising a plurality of pixels representing physical points of the target;

for each pixel of the plurality of pixels:

transforming the spectrum of the pixel from the image data to a frequency domain;

mapping the frequency-transformed spectrum onto a phasor plane;

assigning a color to each phasor point on the phasor plane; and

mapping the assigned color to a physical point corresponding to the pixel; and

generating a color image of the target based on the mapping of the assigned colors.

98. A method for detecting a health condition from image data of a target acquired by a hyperspectral imaging system, the image data comprising intensities and wavelengths detected in response to interaction of electromagnetic radiation with the target, the method comprising:

accessing image data of the target, the image data comprising a plurality of pixels representing physical points of the target;

for each pixel of the plurality of pixels:

transforming the spectrum of the pixel from the image data to a frequency domain;

mapping the frequency-transformed spectrum onto a phasor plane;

assigning a color to each phasor point on the phasor plane; and

mapping the assigned color to a physical point corresponding to the pixel; and

determining the health condition based on a difference between colors of the physical points.

99. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the hyperspectral imaging system is for generating an unmixed color image of a target, the hyperspectral imaging system comprising:

an optical system; and

an image forming system;

wherein:

the optical system comprises at least one optical component;

the at least one optical component comprises at least one optical detector;

the at least one optical detector has the following configuration:

detecting electromagnetic radiation absorbed, transmitted, refracted, reflected, and/or emitted by at least one physical point on the target ("target radiation"), the target radiation including at least two waves ("target waves"), each wave having an intensity and a different wavelength;

detecting the intensity and wavelength of each target wave; and

transmitting the detected target radiation, and the detected intensity and wavelength of each target wave to the image forming system;

the image forming system comprises a control system, a hardware processor, a memory, and a display; and

the image forming system has the following configuration:

forming an image of the target ("target image") using the detected target radiation, wherein the target image comprises at least two pixels, and wherein each pixel corresponds to one physical point on the target;

using the detected intensity and wavelength of each target wave to form at least one spectrum ("intensity spectrum") for each pixel;

transforming the formed intensity spectrum of each pixel into a complex-valued function using a Fourier transform, wherein each complex-valued function has at least one real part and at least one imaginary part;

forming a point ("phasor point") of each pixel on the phasor plane by plotting the real and imaginary values of each pixel;

Mapping the phasor points back to corresponding pixels on the target image (the "target image pixels") based on the geometric location of the phasor points on the phasor plane;

generating or using a reference color map;

assigning a color to each phasor point on the phasor plane;

transferring the assigned color to the corresponding target image pixel;

generating a color image of the target based on the assigned colors; and

displaying a color image of the target on the display of the image forming system.

100. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the hyperspectral imaging system is for generating an unmixed color image of a target, the hyperspectral imaging system comprising:

an optical system; and

an image forming system;

wherein:

the optical system comprises at least one optical component;

the at least one optical component comprises at least one optical detector;

the at least one optical detector has the following configuration:

detecting electromagnetic radiation absorbed, transmitted, refracted, reflected, and/or emitted by at least one physical point on the target ("target radiation"), the target radiation including at least two waves ("target waves"), each wave having an intensity and a different wavelength;

detecting the intensity and wavelength of each target wave; and

transmitting the detected target radiation, and the detected intensity and wavelength of each target wave to the image forming system;

the image forming system comprises a control system, a hardware processor, a memory, and a display; and

the image forming system has the following configuration:

mapping at least one phasor point back to a corresponding pixel on a target image (a "target image pixel") based on the geometric location of the phasor point on the phasor plane;

determining a color for each phasor point on the phasor plane based on a reference color map;

assigning the determined color to the corresponding target image pixel; and

displaying a color image of the target on the display of the image forming system.

101. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the hyperspectral imaging system is for generating an unmixed color image of a target, the hyperspectral imaging system comprising:

an image forming system;

wherein the image forming system has the following configuration:

acquiring target radiation comprising at least two target waves, each wave having an intensity and a different wavelength;

forming an image of the target ("target image") using the detected target radiation, wherein the target image comprises at least two pixels, and wherein each pixel corresponds to one physical point on the target;

using the detected intensity and wavelength of each target wave to form at least one spectrum ("intensity spectrum") for each pixel;

transforming the formed intensity spectrum of each pixel into a complex-valued function using a Fourier transform, wherein each complex-valued function has at least one real part and at least one imaginary part;

forming a point ("phasor point") for each pixel on the phasor plane by plotting the real and imaginary values of each pixel;

mapping the phasor points back to corresponding pixels on the target image (the "target image pixels") based on the geometric location of the phasor points on the phasor plane;

generating or using a reference color map;

assigning a color to each phasor point on the phasor plane;

transferring the assigned color to the corresponding target image pixel;

generating a color image of the target based on the assigned colors; and

displaying a color image of the target on the display of the image forming system.

102. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has the following configuration:

applying a denoising filter at least once to the real and imaginary parts of each complex-valued function to produce denoised real and imaginary values for each pixel;

wherein the denoising filter is applied:

after the hyperspectral imaging system transforms the formed intensity spectrum of each pixel into the complex-valued function using a Fourier transform; and

before the hyperspectral imaging system forms a point of each pixel on the phasor plane; and

forming a point of each pixel on the phasor plane using the denoised real and imaginary values of each pixel as the real and imaginary values of each pixel.

103. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has a configuration that generates a reference color map using the phase ("angle") and/or the modulation ("radius") of the phasor points.

104. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the reference color map has a uniform color along at least one of its coordinate axes.

105. The hyperspectral imaging system of any of the preceding and subsequent claims wherein:

The reference color map has a circular shape ("circle");

the circle having an origin, a radial direction and an angular direction relative to the origin of the circle; and

the color changes in the radial direction and is uniform in the angular direction to form a radial map; and/or

the color changes in the angular direction and is uniform in the radial direction to form an angular map.

106. The hyperspectral imaging system of any of the preceding and subsequent claims wherein:

the reference color map has a circular shape ("circle");

the circle having an origin, a radial direction and an angular direction relative to the origin of the circle; and

the color changes in the radial direction and is uniform in the angular direction to form a radial map; and/or

the color changes in the angular direction and is uniform in the radial direction to form an angular map; and

the brightness varies in the radial direction and/or the angular direction.

107. The hyperspectral imaging system of any of the preceding and subsequent claims wherein:

the reference color map has a circular shape ("circle");

the circle having an origin, a radial direction and an angular direction relative to the origin of the circle; and

the color changes in the radial direction and is uniform in the angular direction to form a radial map; and/or

the color changes in the angular direction and is uniform in the radial direction to form an angular map; and

the brightness decreases in the radial direction to form a gradient-descent map; and/or the brightness increases in the radial direction to form a gradient-ascent map.

108. The hyperspectral imaging system of any of the preceding and subsequent claims wherein:

the phasor plane is formed by a coordinate axis for the imaginary values and a coordinate axis for the real values;

the phasor points have real and imaginary values;

the hyperspectral imaging system has a configuration that forms sections ("phasor sections");

the phasor section includes a phasor point and has a specified area on the phasor plane;

the number (count) of phasor points belonging to the same phasor section forms the amplitude of the section ("phasor section amplitude", i.e., the number of occurrences of a specific spectrum); and

the hyperspectral imaging system has a configuration that forms a histogram ("phasor histogram") by plotting the phasor section amplitudes.

109. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the hyperspectral imaging system has a configuration of:

mapping the phasor points back to corresponding pixels on the target image based on the geometric location of the phasor points on the phasor plane;

assigning an arbitrary color to a corresponding pixel based on a geometric position of the phasor point on the phasor plane;

generating an unmixed color image of the target based on the assigned arbitrary color; and

displaying the unmixed color image of the target on a display of the image forming system.

110. The hyperspectral imaging system of any of the preceding and subsequent claims wherein:

the image forming system has a configuration that determines a maximum value ("maximum phasor value") of the phasor histogram;

the reference color map has a circular shape ("circle");

the circle having an origin, a radial direction and an angular direction relative to the origin of the circle;

the image forming system has a configuration that specifies the center of the circle ("the maximum center of the circle") corresponding to the coordinates of the maximum phasor value;

the color varies in the radial direction and is uniform in the angular direction with respect to the maximum center of the circle to form a morphed maximum mode; and/or

the color changes in the angular direction relative to the maximum center of the circle and is uniform in the radial direction to form a morphed centroid mode.

111. The hyperspectral imaging system of any of the preceding and subsequent claims wherein:

the image forming system has the following configuration: determining, for each axis of the phasor plane, a maximum value ("maximum phasor value") of the phasor histogram to have two maximum phasor values; determining, for each axis of the phasor plane, a minimum value ("minimum phasor value") of the phasor histogram to have two minimum phasor values;

the reference color map has a circular shape ("circle");

the circle having an origin, a radial direction and an angular direction relative to the origin of the circle; and

the image forming system has a configuration that uses the two maximum phasor values and the two minimum phasor values to form a bounding plane of all positive phasor histogram values.

112. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the hyperspectral imaging system has a configuration of: forming a tensor map by calculating a gradient of phasor section amplitudes between adjacent phasor sections; and assigning a color based on the reference color map.

113. The hyperspectral imaging system of any of the preceding and subsequent claims wherein the image forming system has the following configuration: the reference color map is generated by re-coloring each image pixel based on the gradient of the count relative to the histogram values of the spectrum around each image pixel.

114. Any combination of the systems and/or methods disclosed in any of the preceding claims is within the scope of the present disclosure.

Technical Field

The present disclosure relates to imaging systems. The present disclosure also relates to a hyperspectral imaging system. The present disclosure also relates to a hyperspectral imaging system that generates an unmixed color image of a target. The present disclosure also relates to a hyperspectral imaging system for diagnosing a health condition.

Background

In recent years, multispectral imaging has become a powerful tool for simultaneously investigating multiple markers in biological samples at the subcellular, cellular, and tissue levels [1, 2] [all references in brackets are identified below]. Multispectral methods can eliminate the contribution from sample autofluorescence and allow high levels of signal multiplexing [3-5], because they can unambiguously identify dyes with overlapping spectra [6]. Despite these many advantages and the availability of commercial hardware with multispectral capabilities, these methods have not been widely adopted, because it has been challenging to simultaneously represent multidimensional data (x, y, z, λ, t) for visual inspection or quantitative analysis.

Typical methods using linear unmixing [7] or principal component analysis [8] are computationally challenging, and their performance degrades as light levels decrease [7, 9]. In the case of time-lapse bioimaging, where the excitation light is usually kept low to minimize phototoxicity, noise leads to inevitable errors in the processed images [7, 9]. For such methods, complex data sets typically require a priori knowledge of image segmentation or anatomy to distinguish the unique fluorescence signals in the region of interest [10].

Conventional Spectral Phasor (SP) [14-16] methods provide efficient processing and rendering tools for multispectral data. SP plots the spectrum of each pixel in the image as a point on the phasor plane (fig. 1a) using a Fourier transform, providing a density map of the set of pixels. Since SP reduces even a complex spectrum to a single point on a 2D map, it simplifies both the interpretation of and the interaction with multi-dimensional spectral data. A mixture of multiple spectra can be analyzed graphically in a computationally convenient manner. Thus, SP can be adapted for multispectral imaging and has been shown to be useful for separating up to three colors at a single time point in a biological sample while excluding autofluorescence [14, 15].
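For reference, the standard spectral phasor coordinates, consistent with the Fourier transform described above (the notation here is ours, not the patent's), are, for a pixel at (x, y) with spectrum I(x, y, λ_k) sampled over N bins and harmonic number n:

```latex
G_n(x,y) = \frac{\sum_{k=0}^{N-1} I(x,y,\lambda_k)\,\cos\!\left(\frac{2\pi n k}{N}\right)}{\sum_{k=0}^{N-1} I(x,y,\lambda_k)},
\qquad
S_n(x,y) = \frac{\sum_{k=0}^{N-1} I(x,y,\lambda_k)\,\sin\!\left(\frac{2\pi n k}{N}\right)}{\sum_{k=0}^{N-1} I(x,y,\lambda_k)}
```

Each pixel thus collapses to the single point (G_n, S_n) on the phasor plane, which is what makes the representation fast to compute and convenient to interact with.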

However, existing embodiments of the SP method have not been suitable for in vivo multi-spectral time-lapse fluorescence imaging analysis, particularly for large numbers of labels. This is mainly due to the signal-to-noise ratio (SNR) limitations associated with photobleaching and phototoxicity when imaging multiple fluorescent proteins with different biophysical properties [17]. Proper excitation of multiple fluorophores requires a range of excitation wavelengths to provide a good-SNR image. However, increasing the number of excitation lines affects the rate of photobleaching and can hinder biological developmental dynamics. Furthermore, in embryos, autofluorescence generally increases with the number of excitation wavelengths. An alternative approach, exciting multiple labels with a single wavelength, reduces adverse photo-effects and autofluorescence, but at the expense of reduced SNR.

The expanded palette of fluorescent proteins has enabled the study of spatiotemporal interactions of proteins, cells, and tissues in vivo within living cells or developing embryos. However, time-lapse imaging of multiple labels remains challenging because noise, photobleaching, and toxicity greatly impair signal quality, and throughput may be limited by the time required to unmix the spectral signals from multiple labels.

Hyperspectral fluorescence imaging is gaining popularity because it enables cross-scale spatiotemporal dynamic multiplexing of molecules, cells and tissues using a variety of fluorescent labels. This is made possible by adding the dimension of the wavelength to the dataset. The resulting data sets are information-dense and often require lengthy analysis to separate overlapping fluorescence spectra. Understanding and visualizing these large multi-dimensional datasets can be challenging during acquisition and preprocessing.

Hyperspectral imaging techniques can be used for medical purposes. See, for example, Lu et al., "Medical Hyperspectral Imaging: a Review," Journal of Biomedical Optics 19(1), pages 010901-1 to 010901-23 (January 2014); Vasefi et al., "Polarization-Sensitive Hyperspectral Imaging in vivo: A Multimode Dermoscope for Skin Analysis," Scientific Reports 4, Article No. 4924 (2014); and Burlina et al., "Hyperspectral Imaging for Detection of Skin Related Conditions," U.S. Patent No. 8,761,476 B2. The entire contents of each of these publications are incorporated herein by reference.

In recent years, fluorescence hyperspectral imaging (fHSI) has become increasingly popular for simultaneous imaging of multiple endogenous and exogenous markers in biological samples. One of the advantages of using multiple fluorophores is the ability to simultaneously track differently labeled molecules, cells or tissues in space and time. This is particularly important in biology, where tissues, proteins and their functions within an organism are deeply interwoven and many questions about the relationships between individual components remain unanswered. fHSI enables scientists to understand biological systems more fully, using multiplexed information derived from the full spectrum of each point in the observed image.

Standard optical multichannel fluorescence imaging distinguishes fluorescent protein reporters with bandpass emission filters that selectively collect signals based on wavelength. Spectral overlap between the markers limits the number of fluorescent reporters that can be acquired, and "background" signals are difficult to separate. fHSI overcomes these limitations, enabling the separation of fluorescent proteins with overlapping spectra from endogenous fluorescence contributions and extending the usable fluorescence palette to many different labels with correspondingly separated spectra.

The disadvantage of acquiring such large multi-dimensional spectral datasets is increased complexity and computation time in the analysis, with meaningful results appearing only after lengthy computation. To optimize experimental time, it is advantageous to perform an informed visualization of the spectral data during acquisition (especially for lengthy time-lapse recordings) and before performing the analysis. Such pre-processing visualization allows scientists to assess image collection parameters within the experimental pipeline and to select the most appropriate processing method. The challenge, however, is to quickly visualize subtle spectral differences with a set of three colors compatible with displays and the human eye while minimizing information loss. Since the most common color model for displays is RGB, in which red, green and blue are combined to reproduce a wide array of colors, a hyperspectral or multispectral dataset is typically reduced to three channels for visualization. Spectral information compression therefore becomes a key step in correctly displaying image information.

Dimension-reduction strategies are commonly used to represent multidimensional fHSI data. One strategy is to construct a fixed spectral envelope from the first three components produced by principal component analysis (PCA) or independent component analysis (ICA), converting the hyperspectral image into a three-band visualization. The main advantage of the spectrally weighted envelope is that it preserves the human perception of the hyperspectral image: on a tristimulus display, each spectrum is shown with the most similar hue and saturation, so that the human eye easily recognizes details in the image. Another popular visualization technique is pixel-based image fusion, which preserves the spectral pairwise distances of the fused image relative to the input data. It selects weights by evaluating the significance of each measured pixel with respect to its relative spatial neighborhood distance. These weights can be further optimized with widely applied mathematical techniques such as Bayesian inference, by using a filter bank for feature extraction, or by noise smoothing.
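
As a concrete illustration of the PCA-based three-band strategy just described, the sketch below projects each pixel's spectrum onto the first three principal components and stretches them into RGB; the array shapes, the SVD route to the components, and the contrast stretch are illustrative assumptions:

```python
import numpy as np

def pca_rgb(cube):
    """Three-band visualization: first three principal components -> RGB."""
    rows, cols, channels = cube.shape
    pixels = cube.reshape(-1, channels).astype(float)
    pixels -= pixels.mean(axis=0)                  # center each spectral channel
    # Principal axes from the SVD of the pixel-by-channel matrix.
    _, _, vt = np.linalg.svd(pixels, full_matrices=False)
    scores = pixels @ vt[:3].T                     # first three component scores
    # Linear contrast stretch of each component into [0, 1] for display.
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    rgb = (scores - lo) / np.where(hi > lo, hi - lo, 1.0)
    return rgb.reshape(rows, cols, 3)

rgb_image = pca_rgb(np.random.rand(64, 64, 32))    # synthetic 32-channel cube
```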

A disadvantage of methods such as computing the singular value decomposition to obtain PCA bases and coefficients, or generating optimal fusion weights, is that they may take multiple iterations to converge. Given that fHSI datasets routinely reach the gigabyte range, and many exceed the terabyte threshold, such calculations are computationally demanding and time-consuming. Furthermore, most visualization methods focus on interpreting the spectrum as RGB colors, rather than exploiting all the features that can be extracted from the spectral data.

Reference to related art

The following publications are related art in the context of this disclosure. The number or numbers in brackets preceding each reference correspond to the bracketed numbers used in the rest of this disclosure.

[1] Garini, Y., Young, I. T. and McNamara, G. Spectral imaging: principles and applications. Cytometry A 69, 735–747 (2006).

[2] Dickinson, M. E., Simbuerger, E., Zimmermann, B., Waters, C. W. and Fraser, S. E. Multiphoton excitation spectra in biological samples. Journal of Biomedical Optics 8, 329–338 (2003).

[3] Dickinson, M. E., Bearman, G., Tille, S., Lansford, R. & Fraser, S. E. Multi-spectral imaging and linear unmixing add a whole new dimension to laser scanning fluorescence microscopy. Biotechniques 31, 1272–1278 (2001).

[4] Levenson, R. M. and Mansfield, J. R. Multispectral imaging in biology and medicine: Slices of life. Cytometry A 69, 748–758 (2006).

[5] Jahr, W., Schmid, B., Schmied, C., Fahrbach, F. and Huisken, J. Hyperspectral light sheet microscopy. Nat. Commun. 6 (2015).

[6] Lansford, R., Bearman, G. and Fraser, S. E. Resolution of multiple green fluorescent protein color variants and dyes using two-photon microscopy and imaging spectroscopy. Journal of Biomedical Optics 6, 311–318 (2001).

[7] Zimmermann, T. Spectral imaging and linear unmixing in light microscopy. Adv. Biochem. Engin./Biotechnol. 95, 245–265 (2005).

[8] Jolliffe, I. Principal Component Analysis. John Wiley & Sons, Ltd (2002).

[9] Gong, P. and Zhang, A. Noise effect on linear spectral unmixing. Geographic Information Sciences 5(1) (1999).

[10] Mukamel, E. A., Nimmerjahn, A. and Schnitzer, M. J. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron 63(6), 747–760 (2009).

[11] Clayton, A. H., Hanley, Q. S. & Verveer, P. J. Graphical representation and multicomponent analysis of single-frequency fluorescence lifetime imaging microscopy data. J. Microsc. 213, 1–5 (2004).

[12] Redford, G. I. & Clegg, R. M. Polar plot representation for frequency-domain analysis of fluorescence lifetimes. J. Fluoresc. 15, 805–815 (2005).

[13] Digman, M. A., Caiolfa, V. R., Zamai, M. and Gratton, E. The phasor approach to fluorescence lifetime imaging analysis. Biophys. J. 94, 14–16 (2008).

[14] Fereidouni, F., Bader, A. N. and Gerritsen, H. C. Spectral phasor analysis allows rapid and reliable unmixing of fluorescence microscopy spectral images. Opt. Express 20, 12729–12741 (2012).

[15] Andrews, L. M., Jones, M. R., Digman, M. A. and Gratton, E. Spectral phasor analysis of Pyronin Y labeled RNA microenvironments in living cells. Biomed. Opt. Express 4(1), 171–177 (2013).

[16] Cutrale, F., Salih, A. and Gratton, E. Spectral phasor approach for fingerprinting of photo-activatable fluorescent proteins Dronpa, Kaede and KikGR. Methods Appl. Fluoresc. 1(3), 035001 (2013).

[17] Cranfill, P. J., Sell, B. R., Baird, M. A., Allen, J. R., Lavagnino, Z., de Gruiter, H. M., Kremers, G., Davidson, M. W., Ustione, A. and Piston, D. W. Quantitative assessment of fluorescent proteins. Nature Methods 13, 557–562 (2016).

[18] Chen, H., Gratton, E. & Digman, M. A. Spectral properties and dynamics of gold nanorods revealed by EMCCD-based spectral phasor method. Microscopy Research and Technique 78(4), 283–293 (2015).

[19] Vermot, J., Fraser, S. E. and Liebling, M. Fast fluorescence microscopy for imaging the dynamics of embryonic development. HFSP Journal 2, 143–155 (2008).

[20] Dalal, R. B., Digman, M. A., Horwitz, A. F., Vetri, V. and Gratton, E. Determination of particle number and brightness using a laser scanning confocal microscope operating in the analog mode. Microsc. Res. Tech. 71(1), 69–81 (2008).

[21] Fereidouni, F., Reitsma, K. and Gerritsen, H. C. High speed multispectral fluorescence lifetime imaging. Optics Express 21(10), 11769–11782 (2013).

[22] Hamamatsu Photonics K.K. Photomultiplier Technical Handbook. Hamamatsu Photonics K.K. (1994).

[23] Trinh, L. A. et al. A versatile gene trap to visualize and interrogate the function of the vertebrate proteome. Genes & Development 25(21), 2306–2320 (2011).

[24] Jin, S. W., Beis, D., Mitchell, T., Chen, J. N. and Stainier, D. Y. Cellular and molecular analyses of vascular tube and lumen formation in zebrafish. Development 132, 5199–5209 (2005).

[25] Livet, J., Weissman, T. A., Kang, H., Draft, R. W., Lu, J., Bennis, R. A., Sanes, J. R. and Lichtman, J. W. Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system. Nature 450(7166), 56–62 (2007).

[26] Lichtman, J. W., Livet, J. & Sanes, J. R. A technicolour approach to the connectome. Nature Reviews Neuroscience 9(6), 417–422 (2008).

[27] Pan, Y. A., Freundlich, T., Weissman, T. A., Schoppik, D., Wang, X. C., Zimmerman, S., Ciruna, B., Sanes, J. R., Lichtman, J. W. and Schier, A. F. Zebrabow: multispectral cell labeling for cell tracing and lineage analysis in zebrafish. Development 140(13), 2835–2846 (2013).

[28] Westerfield, M. The Zebrafish Book. Eugene, OR: University of Oregon Press (1994).

[29] Megason, S. G. In toto imaging of embryogenesis with confocal time-lapse microscopy. Methods in Molecular Biology 546, 317–332 (2009).

[30] Jahr, W., Schmid, B., Schmied, C., Fahrbach, F. O. & Huisken, J. Hyperspectral light sheet microscopy. Nat. Commun. 6 (2015).

[31] Levenson, R. M. & Mansfield, J. R. Multispectral imaging in biology and medicine: Slices of life. Cytometry Part A 69, 748–758 (2006).

[32] Garini, Y., Young, I. & McNamara, G. Spectral imaging: principles and applications. Cytom. Part A 69, 735–747 (2006).

[33] Dickinson, M. E., Simbuerger, E., Zimmermann, B., Waters, C. W. & Fraser, S. E. Multiphoton excitation spectra in biological samples. J. Biomed. Opt. 8, 329–338 (2003).

[34] Sinclair, M. B., Haaland, D. M., Timlin, J. A. & Jones, H. D. T. Hyperspectral confocal microscope. Appl. Opt. 45, 6283 (2006).

[35] Valm, A. M. et al. Applying systems-level spectral imaging and analysis to reveal the organelle interactome. Nature 546, 162–167 (2017).

[36] Cranfill, P. J. et al. Quantitative assessment of fluorescent proteins. Nat. Methods 13, 557–562 (2016).

[37] Hiraoka, Y., Shimi, T. & Haraguchi, T. Multispectral imaging fluorescence microscopy for living cells. Cell Struct. Funct. 27, 367–374 (2002).

[38] Dickinson, M. E., Bearman, G., Tille, S., Lansford, R. & Fraser, S. E. Multi-spectral imaging and linear unmixing add a whole new dimension to laser scanning fluorescence microscopy. Biotechniques 31, 1272–1278 (2001).

[39] Jacobson, N. P. & Gupta, M. R. Design goals and solutions for display of hyperspectral images. In Proceedings of the International Conference on Image Processing, ICIP 2, 622–625 (2005).

[40] Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 24, 417–441 (1933).

[41] Jolliffe, I. T. Principal Component Analysis. J. Am. Stat. Assoc. 98, 487 (2002).

[42] Abdi, H. & Williams, L. J. Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics 2, 433–459 (2010).

[43] Tyo, J. S., Konsolakis, A., Diersen, D. I. & Olsen, R. C. Principal-components-based display strategy for spectral imagery. IEEE Trans. Geosci. Remote Sens. 41, 708–718 (2003).

[44] Wilson, T. A. Perceptual-based image fusion for hyperspectral data. IEEE Trans. Geosci. Remote Sens. 35, 1007–1017 (1997).

[45] Long, Y., Li, H. C., Celik, T., Longbotham, N. & Emery, W. J. Pairwise-distance-analysis-driven dimensionality reduction model with double mappings for hyperspectral image visualization. Remote Sens. 7, 7785–7808 (2015).

[46] Kotwal, K. & Chaudhuri, S. A Bayesian approach to visualization-oriented hyperspectral image fusion. Inf. Fusion 14, 349–360 (2013).

[47] Kotwal, K. & Chaudhuri, S. Visualization of hyperspectral images using bilateral filtering. IEEE Trans. Geosci. Remote Sens. 48, 2308–2316 (2010).

[48] Zhao, W. & Du, S. Spectral-spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 54, 4544–4554 (2016).

[49] Zhang, Y., De Backer, S. & Scheunders, P. Noise-resistant wavelet-based Bayesian fusion of multispectral and hyperspectral images. IEEE Trans. Geosci. Remote Sens. 47, 3834–3843 (2009).

[50] Sadek, R. A. SVD based image processing applications: State of the art, contributions and research challenges. Int. J. Adv. Comput. Sci. Appl. 3, 26–34 (2012).

[51] Redford, G. I. & Clegg, R. M. Polar plot representation for frequency-domain analysis of fluorescence lifetimes. J. Fluoresc. 15, 805–815 (2005).

[52] Digman, M. A., Caiolfa, V. R., Zamai, M. & Gratton, E. The phasor approach to fluorescence lifetime imaging analysis. Biophys. J. 94 (2008).

[53] Vergeldt, F. J. et al. Multi-component quantitative magnetic resonance imaging by phasor representation. Sci. Rep. 7 (2017).

[54] Lanzanò, L. et al. Encoding and decoding spatio-temporal information for super-resolution microscopy. Nat. Commun. 6 (2015).

[55] Fereidouni, F., Bader, A. N. & Gerritsen, H. C. Spectral phasor analysis allows rapid and reliable unmixing of fluorescence microscopy spectral images. Opt. Express 20, 12729 (2012).

[56] Cutrale, F., Salih, A. & Gratton, E. Spectral phasor approach for fingerprinting of photo-activatable fluorescent proteins Dronpa, Kaede and KikGR. Methods Appl. Fluoresc. 1 (2013).

[57] Andrews, L. M., Jones, M. R., Digman, M. A. & Gratton, E. Spectral phasor analysis of Pyronin Y labeled RNA microenvironments in living cells. Biomed. Opt. Express 4, 171–177 (2013).

[58] Cutrale, F. et al. Hyperspectral phasor analysis enables multiplexed 5D in vivo imaging. Nat. Methods 14, 149–152 (2017).

[59] Radaelli, F. et al. μMAPPS: A novel phasor approach to second harmonic analysis for in vitro-in vivo investigation of collagen microstructure. Sci. Rep. 7 (2017).

[60] Scipioni, L., Gratton, E., Diaspro, A. & Lanzanò, L. Phasor analysis of local ICS detects heterogeneity in size and number of intracellular vesicles. Biophys. J. (2016). doi:10.1016/j.bpj.2016.06.029

[61] Sarmento, M. J. et al. Exploiting the tunability of stimulated emission depletion microscopy for super-resolution imaging of nuclear structures. Nat. Commun. (2018). doi:10.1038/s41467-018-05963-2

[62] Scipioni, L., Di Bona, M., Vicidomini, G., Diaspro, A. & Lanzanò, L. Local raster image correlation spectroscopy generates high-resolution intracellular diffusion maps. Commun. Biol. (2018). doi:10.1038/s42003-017-0010-6

[63] Pan, Y. A. et al. Zebrabow: multispectral cell labeling for cell tracing and lineage analysis in zebrafish. Development 140, 2835–2846 (2013).

[64] Ranjit, S., Malacrida, L., Jameson, D. M. & Gratton, E. Fit-free analysis of fluorescence lifetime imaging data using the phasor approach. Nat. Protoc. 13, 1979–2004 (2018).

[65] Zipfel, W. R. et al. Live tissue intrinsic emission microscopy using multiphoton-excited native fluorescence and second harmonic generation. Proc. Natl. Acad. Sci. 100, 7075–7080 (2003).

[66] Rock, J. R., Randell, S. H. & Hogan, B. L. M. Airway basal stem cells: a perspective on their roles in epithelial homeostasis and remodeling. Dis. Model. Mech. 3, 545–556 (2010).

[67] Rock, J. R. et al. Basal cells as stem cells of the mouse trachea and human airway epithelium. Proc. Natl. Acad. Sci. (2009). doi:10.1073/pnas.0906850106

[68] Bird, D. K. et al. Metabolic mapping of MCF10A human breast cells via multiphoton fluorescence lifetime imaging of the coenzyme NADH. Cancer Res. 65, 8766–8773 (2005).

[69] Lakowicz, J. R., Szmacinski, H., Nowaczyk, K. & Johnson, M. L. Fluorescence lifetime imaging of free and protein-bound NADH. Proc. Natl. Acad. Sci. (1992). doi:10.1073/pnas.89.4.1271

[70] Skala, M. C. et al. In vivo multiphoton microscopy of NADH and FAD redox states, fluorescence lifetimes, and cellular morphology in precancerous epithelia. Proc. Natl. Acad. Sci. (2007). doi:10.1073/pnas.0708425104

[71] Sharick, J. T. et al. Protein-bound NAD(P)H lifetime is sensitive to multiple fates of glucose carbon. Sci. Rep. (2018). doi:10.1038/s41598-018-23691-x

[72] Stringari, C. et al. Phasor approach to fluorescence lifetime microscopy distinguishes different metabolic states of germ cells in a live tissue. Proc. Natl. Acad. Sci. 108, 13582–13587 (2011).

[73] Stringari, C. et al. Multicolor two-photon imaging of endogenous fluorophores in living tissues by wavelength mixing. Sci. Rep. (2017). doi:10.1038/s41598-017-03359-8

[74] Sun, Y. et al. Endoscopic fluorescence lifetime imaging for in vivo intraoperative diagnosis of oral carcinoma. In Microscopy and Microanalysis (2013). doi:10.1017/S1431927613001530

[75] Ghukasyan, V. V. & Kao, F. J. Monitoring cellular metabolism with fluorescence lifetime of reduced nicotinamide adenine dinucleotide. J. Phys. Chem. C (2009). doi:10.1021/jp810931u

[76] Walsh, A. J. et al. Quantitative optical imaging of primary tumor organoid metabolism predicts drug response in breast cancer. Cancer Res. (2014). doi:10.1158/0008-5472.CAN-14-0663

[77] Conklin, M. W., Provenzano, P. P., Eliceiri, K. W., Sullivan, R. & Keely, P. J. Fluorescence lifetime imaging of endogenous fluorophores in histopathology sections reveals differences between normal and tumor epithelium in carcinoma in situ of the breast. Cell Biochem. Biophys. (2009). doi:10.1007/s12013-009-9046-7

[78] Browne, A. W. et al. Structural and functional characterization of human stem-cell-derived retinal organoids by live imaging. Investig. Ophthalmol. Vis. Sci. (2017). doi:10.1167/iovs.16-20796

[79] Livet, J. et al. Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system. Nature 450, 56–62 (2007).

[80] Weissman, T. A. & Pan, Y. A. Brainbow: New resources and emerging biological applications for multicolor genetic labeling and analysis. Genetics 199, 293–306 (2015).

[81] Pan, Y. A., Livet, J., Sanes, J. R., Lichtman, J. W. & Schier, A. F. Multicolor brainbow imaging in zebrafish. Cold Spring Harb. Protoc. 6 (2011).

[82] Raj, B. et al. Simultaneous single-cell profiling of lineages and cell types in the vertebrate brain. Nat. Biotechnol. 36, 442–450 (2018).

[83] Mahou, P. et al. Multicolor two-photon tissue imaging by wavelength mixing. Nat. Methods 9, 815–818 (2012).

[84] Loulier, K. et al. Multiplex cell and lineage tracking with combinatorial labels. Neuron 81, 505–520 (2014).

[85] North, T. E. & Goessling, W. Haematopoietic stem cells show their true colours. Nature Cell Biology 19, 10–12 (2017).

[86] Chen, C. H. et al. Multicolor cell barcoding technology for long-term surveillance of epithelial regeneration in zebrafish. Dev. Cell 36, 668–680 (2016).

[87] Vert, J.-P., Tsuda, K. & Schölkopf, B. A primer on kernel methods. Kernel Methods Comput. Biol. 35–70 (2004). doi:10.1017/CBO9781107415324.004

[88] Bruton, D. RGB values for visible wavelengths (1996). Available at: http://www.physics.sfasu.edu/astro/color/spectra.html.

[89] Westerfield, M. The Zebrafish Book. A Guide for the Laboratory Use of Zebrafish (Danio rerio), 4th Edition (2000).

[90] Trinh, L. A. et al. A versatile gene trap to visualize and interrogate the function of the vertebrate proteome. Genes Dev. 25, 2306–2320 (2011).

[91] Jin, S.-W., Beis, D., Mitchell, T., Chen, J.-N. & Stainier, D. Y. R. Cellular and molecular analyses of vascular tube and lumen formation in zebrafish. Development 132, 5199–5209 (2005).

[92] Megason, S. G. In toto imaging of embryogenesis with confocal time-lapse microscopy. Methods Mol. Biol. 546, 317–332 (2009).

[93] Huss, D. et al. A transgenic quail model that enables dynamic imaging of amniote embryogenesis. Development 142, 2850–2859 (2015).

[94] Holst, J., Vignali, K. M., Burton, A. R. & Vignali, D. A. A. Rapid analysis of T-cell selection in vivo using T cell-receptor retrogenic mice. Nat. Methods 3, 191–197 (2006).

[95] Kwan, K. M. et al. The Tol2kit: A multisite gateway-based construction kit for Tol2 transposon transgenesis constructs. Dev. Dyn. 236, 3088–3099 (2007).

[96] Kawakami, K. et al. A transposon-mediated gene trap approach identifies developmentally regulated genes in zebrafish. Dev. Cell 7, 133–144 (2004).

[97] Urasaki, A., Morvan, G. & Kawakami, K. Functional dissection of the Tol2 transposable element identified the minimal cis-sequence and a highly repetitive sequence in the subterminal region essential for transposition. Genetics 174, 639–649 (2006).

[98] White, R. M. et al. Transparent adult zebrafish as a tool for in vivo transplantation analysis. Cell Stem Cell 2, 183–189 (2008).

[99] Arnesano, C., Santoro, Y. & Gratton, E. Digital parallel frequency-domain spectroscopy for tissue imaging. J. Biomed. Opt. 17, 0960141 (2012).

[100] Cutrale, F. et al. Hyperspectral phasor analysis enables multiplexed 5D in vivo imaging. Nat. Methods 14, 149–152 (2017).

[101] Browne, A. W. et al. Structural and functional characterization of human stem-cell-derived retinal organoids by live imaging. Investig. Ophthalmol. Vis. Sci. (2017). doi:10.1167/iovs.16-20796

[102] Stringari, C. et al. Phasor approach to fluorescence lifetime microscopy distinguishes different metabolic states of germ cells in a live tissue. Proc. Natl. Acad. Sci. 108, 13582–13587 (2011).

[103] Sharick, J. T. et al. Protein-bound NAD(P)H lifetime is sensitive to multiple fates of glucose carbon. Sci. Rep. (2018). doi:10.1038/s41598-018-23691-x

[104] Ranjit, S., Malacrida, L., Jameson, D. M. & Gratton, E. Fit-free analysis of fluorescence lifetime imaging data using the phasor approach. Nat. Protoc. 13, 1979–2004 (2018).

[105] Lakowicz, J. R., Szmacinski, H., Nowaczyk, K. & Johnson, M. L. Fluorescence lifetime imaging of free and protein-bound NADH. Proc. Natl. Acad. Sci. (1992). doi:10.1073/pnas.89.4.1271

[106] Stringari, C. et al. Multicolor two-photon imaging of endogenous fluorophores in living tissues by wavelength mixing. Sci. Rep. (2017). doi:10.1038/s41598-017-03359-8

[107] Skala, M. C. et al. In vivo multiphoton microscopy of NADH and FAD redox states, fluorescence lifetimes, and cellular morphology in precancerous epithelia. Proc. Natl. Acad. Sci. (2007). doi:10.1073/pnas.0708425104

Disclosure of Invention

An imaging system for denoising and/or color unmixing multiple overlapping spectra with fast analysis times under low signal-to-noise conditions is disclosed. The imaging system may be a hyperspectral imaging system. The system may perform hyperspectral phasor (HySP) calculations to efficiently analyze hyperspectral time-lapse data. For example, the system may perform HySP calculations to efficiently analyze five-dimensional (5D) hyperspectral time-lapse data. Advantages of such an imaging system may include: (a) fast computation; (b) straightforward phasor analysis; and (c) a denoising algorithm for obtaining a minimum acceptable signal-to-noise ratio (SNR). The imaging system may also generate an unmixed color image of the target. The imaging system may be used in the diagnosis of a health condition.

The hyperspectral imaging system may include an optical system, an image forming system, or a combination thereof. For example, a hyperspectral imaging system may include an optical system and an image forming system. For example, the hyperspectral imaging system may comprise an image forming system.

The optical system may comprise at least one optical component. Examples of the at least one optical component are a detector ("optical detector"), a detector array ("optical detector array"), a source for illuminating the target ("illumination source"), a first optical lens, a second optical lens, a dispersive optical system, a dichroic mirror/beam splitter, a first optical filtering system, a second optical filtering system, or a combination thereof. For example, the at least one optical component may comprise at least one optical detector. For example, the at least one optical component may comprise at least one optical detector and at least one illumination source. The first optical filtering system may be placed between the target and the at least one optical detector. The second optical filtering system may be placed between the first optical filtering system and the at least one optical detector.

The optical system may comprise an optical microscope. The components of the optical system may form such an optical microscope. Examples of optical microscopes may be confocal fluorescence microscopes, two-photon fluorescence microscopes, or combinations thereof.

The at least one optical detector may have the following configuration: detecting electromagnetic radiation absorbed, transmitted, refracted, reflected and/or emitted by at least one physical point on the target ("target radiation"). The target radiation may include at least one wave ("target wave"). The target radiation may include at least two target waves. Each target wave may have an intensity and a different wavelength. The at least one optical detector may have a configuration that detects the intensity and wavelength of each target wave. The at least one optical detector may have a configuration that transmits the detected intensity and wavelength of each target wave to the image forming system. The at least one optical detector may comprise a photomultiplier tube, an array of photomultiplier tubes, a digital camera, a hyperspectral camera, an electron-multiplying charge-coupled device, a scientific CMOS (sCMOS) sensor, or a combination thereof.

The target radiation may comprise electromagnetic radiation emitted by the target. The electromagnetic radiation emitted by the target may include luminescence, thermal radiation, or a combination thereof. Luminescence may include fluorescence, phosphorescence, or a combination thereof. For example, the electromagnetic radiation emitted by the target may include fluorescence, phosphorescence, thermal radiation, or a combination thereof.

The at least one optical detector may detect electromagnetic radiation emitted by the target having a wavelength in the range of 300nm to 800 nm. The at least one optical detector may detect electromagnetic radiation emitted by the target having a wavelength in the range of 300nm to 1300 nm.

The hyperspectral imaging system may also form a detected image of the target using target radiation comprising at least four wavelengths, wherein the at least four wavelengths with detected intensities form a spectrum. The color resolution of the image can thereby be increased.

At least one illumination source may generate electromagnetic radiation ("illumination source radiation"). The illumination source radiation may include at least one wave ("illumination wave"). The illumination source radiation may include at least two illumination waves. Each illumination wave may have a different wavelength. The at least one illumination source may directly illuminate the target. In this configuration, there are no optical components between the illumination source and the target. The at least one illumination source may indirectly illuminate the target. In this configuration, there is at least one optical component between the illumination source and the target. The illumination source may illuminate the target at each illumination wavelength by emitting all illumination waves simultaneously. The illumination source may illuminate the target at each illumination wavelength by emitting all illumination waves sequentially.

The illumination source may comprise a source of coherent electromagnetic radiation. The source of coherent electromagnetic radiation may comprise a laser, a diode, a two-photon excitation source, a three-photon excitation source, or a combination thereof.

The illumination source radiation may include illumination waves having wavelengths in the range of 300nm to 1300 nm. The illumination source radiation may include illumination waves having wavelengths in the range of 300nm to 700 nm. The illumination source radiation may include illumination waves having wavelengths in the range of 690nm to 1300 nm.

The image forming system may include a control system, a hardware processor, a memory, a display, or a combination thereof.

The image forming system may have the following configuration: causing the optical detector to detect the target radiation and to transmit the detected intensity and wavelength of each target wave to the image forming system; acquiring the detected target radiation comprising at least two target waves; forming an image of the target ("target image") using the detected target radiation, wherein the target image comprises at least two pixels, and wherein each pixel corresponds to one physical point on the target; forming at least one spectrum ("intensity spectrum") for each pixel using the detected intensity and wavelength of each target wave; transforming the intensity spectrum of each pixel into a complex-valued function using a Fourier transform, wherein each complex-valued function has at least one real part and at least one imaginary part; applying a denoising filter at least once to both the real and imaginary parts of each complex-valued function to produce denoised real and denoised imaginary values for each pixel; forming a point ("phasor point") on the phasor plane for each pixel by plotting the denoised real and imaginary values of that pixel; mapping each phasor point back to the corresponding pixel on the target image based on the geometric position of the phasor point on the phasor plane; assigning an arbitrary color to the corresponding pixel based on the geometric position of the phasor point on the phasor plane; and generating an unmixed color image of the target based on the assigned arbitrary colors. The image forming system may also have a configuration that displays the unmixed color image of the target on the display of the image forming system.
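
A minimal end-to-end sketch of this image-forming configuration is given below, assuming the detected data is arranged as a (rows, cols, channels) intensity cube; the Fourier projection, the 3 × 3 median denoising of the real (G) and imaginary (S) parts, and the hue/saturation color assignment are illustrative choices, not the system's exact implementation:

```python
import numpy as np
from scipy.ndimage import median_filter
from matplotlib.colors import hsv_to_rgb

def hysp_unmix(cube, harmonic=2, denoise_passes=1):
    """cube: (rows, cols, channels) intensity spectra -> RGB color image."""
    rows, cols, k = cube.shape
    phase = 2.0 * np.pi * harmonic * np.arange(k) / k
    total = cube.sum(axis=2)
    safe = np.where(total > 0, total, 1.0)          # avoid division by zero
    g = (cube * np.cos(phase)).sum(axis=2) / safe   # real part per pixel
    s = (cube * np.sin(phase)).sum(axis=2) / safe   # imaginary part per pixel
    for _ in range(denoise_passes):                 # denoising in phasor space
        g = median_filter(g, size=3)
        s = median_filter(s, size=3)
    # Assign a color from the phasor geometry: angle -> hue, radius -> saturation.
    hue = (np.arctan2(s, g) % (2.0 * np.pi)) / (2.0 * np.pi)
    sat = np.clip(np.hypot(g, s), 0.0, 1.0)
    val = total / total.max() if total.max() > 0 else total
    return hsv_to_rgb(np.dstack([hue, sat, val]))

color_image = hysp_unmix(np.random.poisson(20, (64, 64, 32)).astype(float))
```

Because the median filter is applied to the G and S matrices rather than to the image itself, the spatial resolution and intensity profile of the target image are untouched; only the spectral dimension is denoised.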

The image forming system may have a configuration in which at least one harmonic of the Fourier transform is used to generate the unmixed color image of the target. The image forming system may use at least the first harmonic of the Fourier transform to generate the unmixed color image of the target. The image forming system may use at least the second harmonic of the Fourier transform to generate the unmixed color image of the target. The image forming system may generate the unmixed color image of the target using at least the first harmonic and the second harmonic of the Fourier transform.

The denoising filter may include a median filter.

The unmixed color image of the target may be formed with a signal-to-noise ratio of at least one spectrum in the range of 1.2 to 50. The unmixed color image of the target may be formed with a signal-to-noise ratio of at least one spectrum in the range of 2 to 50.

The target may be any target. The target may be any target having a specific color spectrum. For example, the target may be a tissue, a fluorescent genetic marker, an inorganic target, or a combination thereof.

The hyperspectral imaging system can be calibrated by assigning arbitrary colors to each pixel using a reference material. The reference material may be any known reference material. For example, the reference material may be any material for which an unmixed color image has been determined prior to generating the unmixed color image of the target. For example, the reference material can be a physical structure, a chemical molecule, or a biological activity (e.g., a physiological change resulting from a change in physical structure and/or a disease).

Any combination of the above features/configurations is within the scope of the present disclosure.

These and other features, steps, objects, benefits and advantages will now become apparent from a reading of the following detailed description of illustrative embodiments, the accompanying drawings and the claims.

Drawings

The drawings are illustrative of the embodiments. They do not show all embodiments. Other embodiments may be used in addition or instead. Details that may be apparent or unnecessary may be omitted to save space or for more effective illustration. Some embodiments may be practiced with additional components or steps and/or without all of the components or steps shown. When the same numeral appears in different drawings, it refers to the same or similar elements or steps. The colors mentioned in the drawings and in the other brief descriptions of the disclosure below refer to the color drawings and photographs as originally filed in the following disclosures: U.S. provisional patent application No. 62/419,075, entitled "Imaging System," attorney docket No. 064693-0396, filed on November 8, 2016; U.S. provisional patent application No. 62/799,647, entitled "Hyperspectral Imaging System," attorney docket No. AMISC.003PR, filed on January 31, 2019; and U.S. patent application publication No. 2019/0287222, published on September 19, 2019. The entire contents of these patent applications are incorporated herein by reference. This patent application document contains these and additional figures and photographs executed in color. Copies of this patent application with color drawings and photographs will be provided by the U.S. Patent and Trademark Office upon request and payment of the necessary fee.

The following reference numerals are used for the system features disclosed in the figures: hyperspectral imaging system 10, optical system 20, image forming system 30, control system 40, hardware processor(s) 50, memory system 60, display 70, fluorescence microscope 100, multi-illumination wavelength microscope 200, multi-wavelength detection microscope 300, multi-wavelength detection apparatus 400, multi-illumination wavelength and multi-wavelength detection microscope 500, multi-wavelength detection apparatus 600, multi-wavelength detection apparatus 700, illumination source 101, dichroic mirror/beam splitter 102, first optical lens 103, second optical lens 104, target (i.e., sample) 105, (optical) detector 106, illumination source radiation 107, emitted target radiation 108, first wavelength illumination source radiation 201, second wavelength illumination source radiation 202, first wavelength emitted or reflected illumination source radiation 203, second wavelength emitted or reflected illumination source radiation 204, emitted or reflected illumination source radiation 301, dispersive optics 302, spectrally dispersed target radiation 303, optical detector array 304, target image formation 401, spectrum formation 402, Fourier transform 403, real part of Fourier function 404, imaginary part of Fourier function 405, denoising filter 406, plotting on the phasor plane 407, mapping back to the target image 408, and forming an unmixed color image of the target 409.

FIG. 1, hyperspectral phasor (HySP) analysis. (a) Schematic of the HySP method. The spectrum of each voxel from a multi-dimensional (x, y, z, λ) dataset is represented by its Fourier coefficients at harmonic n. Typically, n is chosen to be 2, and the corresponding coefficients are plotted on the phasor diagram (for other harmonics, see fig. 5f). (b) Representative recordings of fluorescein (about 5 μM in ethanol) spectra at a fixed gain value (about 800) but varying laser power (about 1% to about 60%). Error bars represent the variation in intensity values over 10 measurements. Color coding indicates intensity, blue for low intensity and red for high intensity. The inset shows that, when normalized, the emission spectra overlap, confirming that the recordings are below the saturation limit of the detector. Imaging was performed on a Zeiss LSM 780 equipped with a QUASAR detector. (c) The scatter error (ε_σ) on the phasor diagram, caused by Poissonian noise in the spectral recordings, is defined as the standard deviation of the scatter around the expected phasor value z_e(n). The inset shows a 3D histogram of the distribution of phasor points around z_e. (d) The shifted-mean error (ε_μ) on the phasor diagram is caused by a change in the shape of the normalized spectrum that moves the mean phasor point away from the true phasor coordinate corresponding to the given spectrum. (e) The scatter error varies inversely with the total digital counts and is thus most sensitive to the detector gain. The legend applies to (e) and (f). (f) The normalized shifted-mean error remains nearly constant and is below 5% over a large range of total digital counts spanning different imaging parameters. To understand which error is dominant, the ratio of the two errors is plotted (inset). The ratio shows that the scatter error (ε_σ) is almost one order of magnitude higher than the shifted-mean error (ε_μ).
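
The two error measures defined in (c) and (d) can be estimated numerically. The sketch below simulates Poisson-noised copies of a reference spectrum and computes the scatter error as the root-mean-square deviation of the phasor points from the expected value z_e(n), and the shifted-mean error as the offset of their mean; the spectrum, count level, and estimator details are illustrative assumptions rather than the disclosed system's implementation:

```python
import numpy as np

def phasor(spectrum, n=2):
    """Phasor coordinates (g, s) of one spectrum at harmonic n."""
    spectrum = np.asarray(spectrum, dtype=float)
    ph = 2.0 * np.pi * n * np.arange(spectrum.size) / spectrum.size
    return np.array([np.sum(spectrum * np.cos(ph)),
                     np.sum(spectrum * np.sin(ph))]) / spectrum.sum()

rng = np.random.default_rng(0)
channels = np.arange(32)
true_spectrum = 200.0 * np.exp(-0.5 * ((channels - 14) / 3.0) ** 2)

z_e = phasor(true_spectrum)                        # expected phasor value
pts = np.array([phasor(rng.poisson(true_spectrum)) for _ in range(1000)])

# Scatter error: root-mean-square distance of the noisy points from z_e.
eps_sigma = np.sqrt((np.linalg.norm(pts - z_e, axis=1) ** 2).mean())
# Shifted-mean error: distance of the mean phasor point from z_e.
eps_mu = np.linalg.norm(pts.mean(axis=0) - z_e)
print(f"scatter error {eps_sigma:.4f}, shifted-mean error {eps_mu:.4f}")
```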

FIG. 2, phasor analysis for multiplexing hyperspectral fluorescence signals in vivo. (a) Maximum intensity projection images showing seven unmixed signals in a 72 hpf zebrafish embryo. Multiplexed staining was obtained by injecting mRNA encoding H2B-cerulean (cyan) and membrane-mCherry (red) into double transgenic embryos Gt(desm-citrine)ct122a/+; Tg(kdrl:eGFP) (yellow and green, respectively) carrying xanthophores (blue). The samples were sequentially excited at about 458 nm and about 561 nm, producing autofluorescence as two separate signals (magenta and gray, respectively). The image was reconstructed by mapping the scatter densities from the phasor diagram (d) back onto the original volume in the 32-channel raw data. (b) Emission spectra of the different fluorophores obtained by plotting normalized signal intensities from their respective expression regions in the raw data. (c) Enlarged view of the head region of the embryo (box in (a)). The boxes labeled 1 to 3 indicate the sub-regions of the image used to compare HySP with linear unmixing in (e) to (f). (d) Phasor diagram showing the relative positions of pixels assigned to different fluorophores. Each polygon represents the subset of pixels assigned to a particular fluorophore. (e) Enlarged views of regions 1 to 3 (from (c)) reconstructed via both HySP analysis and linear unmixing of the same 32-channel signal. The arrows indicate the lines along which the normalized intensities obtained by the two techniques are plotted in (f) for comparison. Visual inspection alone makes it evident that HySP analysis outperforms linear unmixing in distinguishing highly multiplexed signals in vivo. (f) Normalized intensity plots comparing HySP analysis with linear unmixing. The x-axis represents the normalized distance along the arrows depicted in (e). The y-axis in all plots is normalized to the value of the maximum signal intensity across the seven channels to allow relative comparison. For clarity, different plots show different sets of channels (fluorophores).
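
The polygon selection in (d) can be expressed compactly in code: pixels whose phasor coordinates fall inside a user-drawn polygon are assigned to one fluorophore. The sketch below uses matplotlib's point-in-polygon test; the vertices and array names are illustrative:

```python
import numpy as np
from matplotlib.path import Path

def gate_pixels(g_image, s_image, polygon_vertices):
    """Boolean mask of pixels whose phasor point (g, s) lies inside the polygon."""
    points = np.column_stack([g_image.ravel(), s_image.ravel()])
    inside = Path(polygon_vertices).contains_points(points)
    return inside.reshape(g_image.shape)

# Example: gate a synthetic phasor field with a triangular selection.
rng = np.random.default_rng(1)
g = rng.uniform(-1.0, 1.0, (64, 64))
s = rng.uniform(-1.0, 1.0, (64, 64))
mask = gate_pixels(g, s, [(0.1, 0.1), (0.6, 0.2), (0.3, 0.7)])
print(mask.sum(), "pixels assigned to this fluorophore")
```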

FIG. 3, low laser power in vivo volumetric hyperspectral time-lapse of zebrafish. (a) Bright-field image of a zebrafish embryo (36 hpf) approximately 12 hours after imaging. The improved performance of HySP at lower signal-to-noise ratios allows multi-color volumetric time-lapse imaging with reduced phototoxicity. (b to e) Maximum intensity projection images showing eight unmixed signals in vivo in zebrafish embryos starting at 24 hpf. Multiplexed staining was obtained by injecting mRNA encoding Rab9-YFP (yellow) and Rab11-RFP (red) into double transgenic embryos Tg(ubiq:membrane-Cerulean-2a-H2B-mCherry); Tg(kdrl:eGFP) (red, cyan and green, respectively). The samples were excited sequentially at about 950 nm (b and d) and about 561 nm (c), producing autofluorescence as two separate signals (e) (purple and orange, respectively). The time-lapse of 25 time points at about seven-minute intervals was acquired with a laser power of about 5% at about 950 nm and about 0.2% at about 561 nm.

FIG. 4, errors on the spectral phasor diagram. (a) The scatter error may be inversely proportional to the square root of the total digital counts. The legend applies to all parts of the figure. The scatter error may also depend on Poisson noise in the recording. The R-squared statistic can be used to confirm linearity with the inverse square root of the counts. The slope may be a function of the detector gain used in the acquisition, showing that the counts-to-scatter-error dynamic range is inversely proportional to the gain. Lower gains may produce smaller scatter errors at lower intensity values. (b) Denoising in phasor space can reduce the scatter error without affecting the position of the expected value z_e(n) on the phasor diagram. (c) The denoised scatter error may depend linearly on the unfiltered scatter error, regardless of the acquisition parameters. The slope may be determined by the filter size (here 3 × 3). (d) Denoising may not affect the normalized shifted-mean error, because the position of z_e(n) on the phasor diagram remains unchanged under filtering.

FIG. 5, sensitivity of phasor points. (a, b, c) |z(n)| may remain nearly constant for different imaging parameters. The legend applies to (a, b, c, d, e). (d) The total digital counts as a function of laser power. (e) The proportionality constant in equation 2 may depend on the gain. (f) The relative magnitude of the residuals on the phasor diagram, r(n), shows that harmonics n = 1 and n = 2 may be sufficient for a unique representation of the spectral signal.

FIG. 6, phasor analysis for in vivo unmixing of hyperspectral fluorescence signals. (a) Citrine (skeletal muscle) and eGFP (endothelial tissue) in the transgenic zebrafish lines Gt(desm-citrine)ct122a/+ and Tg(kdrl:eGFP), respectively. (b) Conventional optical-filter separation of Gt(desm-citrine)ct122a/+; Tg(kdrl:eGFP). Using emission bands on the detector for spectrally overlapping fluorophores (eGFP and Citrine) may not overcome the problem of signal bleed-through into the corresponding channels. Arrows indicate erroneous detection of eGFP or Citrine expression in the other channel. The scale bar is about 200 μm. (c) Phasor diagram showing the spectral fingerprints (scatter densities) of Citrine and eGFP in individually expressing embryos and in double transgenic embryos. The spectral fingerprints of individually expressed Citrine and eGFP are preserved in the double transgenic line. (d) Maximum intensity projection images reconstructed by mapping the scatter densities from the phasor diagram back onto the original volume. In both single and double transgenic lines, the eGFP and Citrine signatures clearly distinguish skeletal muscle from the interspersed blood vessels (endothelial tissue), although they lie within the same anatomical region of the embryo. The scale bar is about 300 μm. Embryos were imaged approximately 72 hours post fertilization. (e, f) HySP analysis can outperform optical separation and linear unmixing in discriminating spectrally overlapping fluorophores in vivo. (e) Signals of eGFP and Citrine in the Tg(kdrl:eGFP); Gt(desm-citrine)ct122a/+ embryo shown in (d), detected by optical separation, linear unmixing, and phasor analysis, compared in maximum intensity projection images of the boxed region in (a). (f) Corresponding normalized intensity distributions along the width of the image (600 pixels, about 553.8 μm), integrated over a height of 60 pixels. The correlation values (R) reported for the three cases are lowest for HySP analysis, as expected from the expression patterns of the two proteins.

FIG. 7, optical separation of eGFP and Citrine. (a) Spectra of Citrine (peak emission at about 529 nm, skeletal muscle) and eGFP (peak emission at about 509 nm, endothelial tissue) measured in the transgenic zebrafish lines Gt(desm-citrine)ct122a/+ and Tg(kdrl:eGFP), respectively, using confocal multispectral lambda mode. (b) Conventional optical separation (using emission bands on the detector) of spectrally close fluorophores (eGFP and Citrine) may not overcome the problem of signal bleed-through into the individual channels. Arrows indicate erroneous detection of eGFP or Citrine expression in the other channel. The scale bar is about 300 μm. (c) Normalized intensity distribution along the length of the line in panel (a) (600 pixels, about 553.8 μm).

FIG. 8, the effect of phasor-space denoising on the scatter error and shifted-mean error. (a) The scatter error as a function of digital counts for different numbers of passes of a denoising filter with a 3 × 3 mask. The data source is a fluorescein dataset acquired at a gain of about 800. (b) The scatter error as a function of the number of passes of the denoising filter with a 3 × 3 mask, for different laser powers. (c) The shifted-mean error as a function of digital counts for different numbers of passes of the denoising filter with a 3 × 3 mask. The data source is a fluorescein dataset acquired at a gain of about 800. (d) The shifted-mean error as a function of the number of passes of the filter with a 3 × 3 mask, for different laser powers. (e) The relative variation of the scatter error as a function of the number of denoising filter passes, for different mask sizes. (f) The relative variation of the shifted-mean error as a function of the number of filter passes, for different mask sizes. "Filter" in this figure refers to the denoising filter.

FIG. 9, influence of phasor-space denoising on image intensity. (a, b) HySP-treated Citrine channel of a double-labeled eGFP-Citrine sample (132.71 μm × 132.71 μm) before and after filtering in phasor space. (c, d) HySP-treated eGFP channel of the sample in (a, b) before and after filtering in phasor space. (e) Total intensity distribution along the green line highlighted in (a, b, c, d) for different numbers of denoising filter passes. The intensity values may not change. (f) eGFP channel intensity distribution along the green line highlighted in (a, b, c, d) for different numbers of denoising filter passes. (g) Citrine channel intensity distribution along the green line highlighted in (a, b, c, d) for different numbers of denoising filter passes. "Filter" in this figure refers to the denoising filter.

FIG. 10, autofluorescence identification and removal in phasor space. (a) The phasor diagram, which shows the spectral fingerprints (scatter densities) of Citrine, eGFP, and autofluorescence, may allow simple identification of intrinsic signals. (b) Maximum intensity projection images reconstructed by mapping the scatter densities from the phasor diagram back onto the original volume. Autofluorescence can have a broad fingerprint that can effectively be treated as a separate channel. Embryos were imaged approximately 72 hours post fertilization.

FIG. 11, comparison of HySP and linear unmixing at different signal-to-noise ratios (SNR). (a) True-color images of a 32-channel dataset of a zebrafish labeled with H2B-Cerulean, kdrl:eGFP, desm-citrine, xanthophores, membrane-mCherry and autofluorescence, excited at about 458 nm and about 561 nm. The original dataset (SNR 20) was digitally degraded by adding noise and reducing the signal to SNR 5. (b) Normalized spectra for non-weighted linear unmixing. Spectra were identified on each sample from anatomical regions known to contain only specific markers. For example, the spectra of xanthophores were collected in the dorsal region, nuclei from the fins, and vasculature within the muscle. The selected combination of regions was tested and corrected until the best linear unmixing result was obtained. The same regions were then used for all three datasets. The same legend and color coding are used throughout the figure. (c) Enlarged regions (box in (a)) processed by linear unmixing and by HySP. The comparison shows three nuclei belonging to muscle fibers. At good SNR (20 and above), the linear unmixing and HySP results are both accurate. However, reducing the SNR affects linear unmixing more than phasors. This may improve the unmixing of labels in volumetric imaging of biological samples, where the SNR generally decreases with depth, and explains the differences in figs. 2e, 2f, 6e, 6f, 10 and 12. One advantage of HySP in this SNR comparison is spectral denoising in Fourier space. Spectral denoising can be performed by applying a filter directly in phasor space. This preserves the original image resolution while improving spectral fingerprint recognition in the phasor diagram. A median filter may be used as the filter; however, other filtering methods are also possible. For any image of a given size (n × m pixels), the S and G values for each pixel can be obtained, resulting in two new 2D matrices of dimension n × m for S and G. Since the initial S and G matrix entries have the same indices as the pixels in the image, the filtered S and G matrices retain the geometric information. Filtering in phasor space effectively treats the S and G matrices as 2D images. First, this can reduce the scatter error (i.e., the localization accuracy on the phasor diagram increases (figs. 8a to 8b)), improving the spectral-fingerprint resolution while further improving the already minimized shifted-mean error (figs. 8c to 8d). The effect on the data is improved separation of the different fluorescent proteins (figs. 9a to 9d). Second, denoising in the (G, S) coordinates preserves the geometry, intensity profile, and original resolution of the acquired image (figs. 9e to 9g). Filtering in phasor space thus acts on the spectral dimension of the data, denoising the spectral noise without disturbing the intensities. (d) Comparison of the intensity profiles (dashed arrows in (c)) shows the improvement of HySP at low SNR. At reduced SNR, H2B-cerulean (cyan) and desm-citrine (yellow) (solid arrows in (c)) are consistently identified by HySP, while they may be partially mislabeled by linear unmixing. For example, some noise may be identified as kdrl:eGFP (green), although no vasculature is anatomically present in the region of interest.

FIG. 12, comparison of HySP and linear unmixing when resolving seven fluorescence signals. (a) Grayscale images from different optical sections, the same as used in fig. 2 (regions 1 to 3), comparing the performance of HySP analysis and linear unmixing. (b) Normalized intensity plots comparing HySP analysis and linear unmixing. As in the corresponding plots in fig. 2f, the x-axis in all plots represents the normalized distance, and the y-axis is normalized to the value of the maximum signal intensity across the seven channels to allow relative comparison. Each plot shows the intensity distributions of all seven channels in the corresponding image.

FIG. 13, effect of binning on HySP analysis of seven in vivo fluorescence signals. The raw dataset acquired with 32 channels can be computationally binned to 16, 8 and 4 channels sequentially to probe the limits of HySP in unmixing the selected fluorescence spectral fingerprints. Binning may not produce visible degradation of the unmixing. The white square region is used for zoomed comparisons at different bin numbers. Spectral phasor diagrams at excitations of about 458 nm and about 561 nm. Binning the data may result in shorter phasor distances between different fluorescence spectral fingerprints; even when closer together, the clusters may still be identifiable. Enlarged comparison of the embryonic trunk (box in (a)). The differences in HySP analysis at different binning values for the same dataset remain subtle to the eye. One volume can be selected for studying the intensity distribution (white dashed arrow). For a total intensity over a volume of about 26.60 μm × about 0.27 μm × about 20.00 μm, the intensity distributions of kdrl:eGFP, H2B-cerulean, desm-citrine and xanthophores at the different bin numbers (white dashed arrow in (c)). The effect of binning now becomes visible. For the vasculature, the unmixing is not unduly degraded by binning; the same holds for the nuclei. Desm and xanthophores may appear to be more affected by binning. This result suggests that, for the zebrafish embryo of the present disclosure with seven separate spectral fingerprints acquired sequentially using two different lasers, 4 bins may be used at the expense of some degradation in the unmixing.
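
As a concrete illustration of the channel binning studied here, the sketch below sums adjacent wavelength channels of a 32-channel cube down to 16, 8 or 4 channels; the reshape-and-sum scheme and array shapes are assumptions for illustration:

```python
import numpy as np

def bin_channels(cube, n_bins):
    """Sum adjacent wavelength channels of a (rows, cols, channels) cube."""
    rows, cols, channels = cube.shape
    if channels % n_bins != 0:
        raise ValueError("channel count must divide evenly into the bins")
    return cube.reshape(rows, cols, n_bins, channels // n_bins).sum(axis=3)

cube32 = np.random.poisson(15, (64, 64, 32)).astype(float)
cube16 = bin_channels(cube32, 16)   # 2 channels summed per bin
cube8 = bin_channels(cube32, 8)     # 4 channels summed per bin
cube4 = bin_channels(cube32, 4)     # 8 channels summed per bin
```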

FIG. 14, an exemplary hyperspectral imaging system including an exemplary optical system and an exemplary image forming system.

FIG. 15, an exemplary hyperspectral imaging system including an exemplary optical system, a fluorescence microscope. The system may generate an unmixed color image of the target by using an exemplary image forming system including features disclosed in, for example, figs. 22 to 23.

FIG. 16, an exemplary hyperspectral imaging system including an exemplary optical system, a multi-illumination wavelength microscope. The system may generate an unmixed color image of the target by using an exemplary image forming system including features disclosed in, for example, figs. 22 to 23.

FIG. 17, an exemplary hyperspectral imaging system including an exemplary optical system, a multi-illumination wavelength device. The system may generate an unmixed color image of the target by using an exemplary image forming system including features disclosed in, for example, figs. 22 to 23.

FIG. 18, an exemplary hyperspectral imaging system including an exemplary optical system, a multi-wavelength detection microscope. The system may generate an unmixed color image of the target by using an exemplary image forming system including features disclosed in, for example, figs. 22 to 23.

FIG. 19, an exemplary hyperspectral imaging system including an exemplary optical system, a multi-illumination wavelength and multi-wavelength detection microscope. The system may generate an unmixed color image of the target by using an exemplary image forming system including features disclosed in, for example, figs. 22 to 23.

FIG. 20, an exemplary hyperspectral imaging system including an exemplary optical system, a multi-wavelength detection device. The system may generate an unmixed color image of the target by using an exemplary image forming system including features disclosed in, for example, figs. 22 to 23.

FIG. 21, an exemplary hyperspectral imaging system including an exemplary optical system, a multi-wavelength detection device. The system may generate an unmixed color image of the target by using an exemplary image forming system including features disclosed in, for example, figs. 22 to 23.

FIG. 22, features of an exemplary image forming system that can be used to generate an unmixed color image of a target.

FIG. 23, features of an exemplary image forming system that can be used to generate an unmixed color image of a target.

FIG. 24, Spectrally Encoded Enhanced Representation (SEER) conceptual overview. (a) Hyperspectral fluorescence image data. A multispectral fluorescence dataset is acquired in spectral mode (32 channels) using a confocal instrument. Shown here is a Tg(ubi:Zebrabow) dataset, in which cells contain a random combination of cyan, yellow and red fluorescent proteins. (b) Raw spectra. The average spectra within six regions of interest (colored boxes in (a)) show the level of overlap present in the sample. (c) Standard visualization. Standard multispectral visualization methods have limited contrast for spectrally similar fluorophores. (d) Raw phasor. The spectra of all voxels in the dataset are represented as a two-dimensional histogram of their sine and cosine Fourier coefficients S and G, known as the phasor diagram. (e) Denoised phasor. Spatially lossless spectral denoising is performed in phasor space to improve the signal. (f) Reference maps. SEER provides the option of encoding positions on the phasor diagram into one of several color reference maps with predetermined palettes. The reference map used here (magenta selection) is designed to enhance the smaller spectral differences in the dataset. (g) Contrast modes. Multiple contrast modalities allow improved visualization of the data based on the phasor spectral distribution, centering the reference map on the most frequent spectrum, on the statistical spectral centroid of the data (magenta selection), or scaling the map to the distribution. (h) Color remapping. Colors are assigned to the image using the selected SEER reference map and contrast modality. (i) Spectrally Encoded Enhanced Representation (SEER). Barely distinguishable spectra are depicted with improved contrast, while well-separated spectra remain clearly represented.

FIG. 25, Spectrally Encoded Enhanced Representation (SEER) designs. A set of standard reference maps and their corresponding results on a Simulated Hyperspectral Test Chart (SHTC) designed to provide a gradient of spectral overlap between spectra. (a) The standard phasor diagram with the corresponding average grayscale image provides positional information about the spectra on the phasor diagram. Phasor positions are associated with colors in the rendering according to a set of standard reference maps, each highlighting a different property of the dataset. (b) The angle map enhances spectral phase differences by relating color to the angular coordinate (here, relative to the origin). This map enhances variations in the maximum emission wavelength, since the phase position on the diagram is most sensitive to this feature and is largely independent of intensity variations. (c) The radial map, in contrast, focuses primarily on intensity variations, since a decrease in signal-to-noise ratio typically results in a shift towards the origin of the phasor diagram. As a result, this map highlights the amplitude and magnitude of the spectra and is mostly insensitive to wavelength variations of the same spectrum. (d) The gradient ascent map enhances spectral differences, especially in the higher-intensity regions of the sample. This combination is achieved by adding a brightness component to the palette: darker tones are located at the center of the map, where lower image intensities are plotted. (e) The gradient descent map improves the rendering of subtle wavelength differences. The color bars for (b), (c), (d), (e) represent the dominant wavelength, in nanometers, associated with each color. (f) The tensor map provides insight into the statistical variation of the population of spectra in the image. This visualization acts as a spectral edge detector for the image and can simplify the identification of spectrally distinct and rare regions of the sample, such as the center of the SHTC. The color bar represents the normalized relative gradient of counts.
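
To make the angle and radial maps of (b) and (c) concrete, the sketch below colors a field of per-pixel phasor coordinates by phase (hue) and by distance from the origin, respectively; the HSV mapping details are illustrative and not SEER's exact palettes:

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def angle_map(g, s):
    """Angle reference map: hue follows the phasor phase (emission peak)."""
    hue = (np.arctan2(s, g) % (2.0 * np.pi)) / (2.0 * np.pi)
    ones = np.ones_like(hue)
    return hsv_to_rgb(np.dstack([hue, ones, ones]))

def radial_map(g, s):
    """Radial reference map: color follows the distance from the origin."""
    radius = np.clip(np.hypot(g, s), 0.0, 1.0)
    ones = np.ones_like(radius)
    return hsv_to_rgb(np.dstack([radius, ones, ones]))

# Color a synthetic field of per-pixel phasor coordinates both ways.
rng = np.random.default_rng(2)
g = rng.uniform(-0.7, 0.7, (64, 64))
s = rng.uniform(-0.7, 0.7, (64, 64))
by_angle, by_radius = angle_map(g, s), radial_map(g, s)
```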

Fig. 26, enhanced contrast modalities. For each SEER standard reference map design, four different modes can provide improved contrast during visualization. For reference, the present disclosure uses a gradient descent map applied to a Simulated Hyperspectral Test Chart (SHTC). (a) Standard mode is the standard map reference. It covers the entire phasor plot circle, centered at the origin and anchored on the circumference. The palette is constant across samples, simplifying spectral comparison between datasets. (b) Scaled mode adapts the gradient descent map range to the values of the dataset, effectively performing a linear contrast stretch. In this process, the extremes of the map are scaled to wrap around the phasor representation of the dataset being viewed, yielding the maximal shift in the palette over the phase and modulation ranges present in the dataset. (c) Maximum deformation (Max Morph) mode shifts the map center to the maximum of the phasor histogram. The boundaries of the reference map remain anchored to the phasor circle, while the colors within the plot are distorted. The maximum of the phasor plot represents the most frequent spectrum in the dataset. This visualization modality remaps the palette relative to the most recurrent spectrum, providing insight into the spectral distribution within the sample. (d) In contrast, mass deformation mode uses the histogram counts to compute a weighted average of the phasor coordinates and uses this color-frequency centroid as the new center of the SEER map. The palette now maximizes the color differences between the spectra in the sample.
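
The two deformation modes above differ only in how the new map center is derived from the 2D phasor histogram: the bin with the highest count (maximum deformation) or the count-weighted centroid (mass deformation). A minimal sketch, assuming the histogram is binned over the unit square:

```python
import numpy as np

def morph_centers(g, s, bins=256):
    """Candidate map centers for the maximum- and mass-deformation modes."""
    counts, g_edges, s_edges = np.histogram2d(
        g.ravel(), s.ravel(), bins=bins, range=[[-1, 1], [-1, 1]])
    g_mid = 0.5 * (g_edges[:-1] + g_edges[1:])   # bin centers along g
    s_mid = 0.5 * (s_edges[:-1] + s_edges[1:])   # bin centers along s

    i, j = np.unravel_index(np.argmax(counts), counts.shape)
    max_center = (g_mid[i], s_mid[j])            # most frequent spectrum

    w = counts / counts.sum()                    # count-weighted centroid
    mass_center = (float((w.sum(axis=1) * g_mid).sum()),
                   float((w.sum(axis=0) * s_mid).sum()))
    return max_center, mass_center
```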

Fig. 27, comparison of autofluorescence visualizations of unlabeled freshly isolated mouse tracheal explants. The sample was imaged using a multispectral two-photon microscope (740nm excitation, 32 wavelength bands, 8.9nm bandwidth, 410nm to 695nm detection) to collect the fluorescence of intrinsic molecules, including folate, retinoids and NADH in their free and bound states. Because their emission spectra overlap closely, these intrinsic molecules have been used as reporters of metabolic activity in tissues by measuring their fluorescence lifetimes (rather than wavelengths). This overlap increases the difficulty of distinguishing spectral changes when using (a) a true-color image display (Zen software, Zeiss, Germany). (b) The gradient descent map in deformation mode shows the difference between the top and basal layers, indicating different metabolic activities of the cells based on their distance from the tracheal airway. Cells in the top and basal layers (dashed boxes) are rendered with different color sets. The color bar represents the dominant wavelength (in nanometers) associated with a color. (c) The tensor map image provides insight into the statistics of the spectral dataset, correlating the color of an image pixel with the corresponding gradient of phasor counts for pixels with similar spectra. The gradient of spectral counts in this sample highlights the presence of fibers and the edges of single cells. The color bar represents the normalized relative gradient of counts. (d) The average spectra of the cells in the dashed boxes (1 and 2 in panel c) show a blue spectral shift toward the top layer. (e) The interpretation of the gradient descent map in panel b was validated using fluorescence lifetime imaging microscopy (FLIM) of the sample, acquired with a frequency-domain detector, where cells in the top layer exhibit more of an oxidative phosphorylation phenotype (longer lifetime, in red) than cells in the basal layer, which show more of a glycolytic phenotype (shorter lifetime, in yellow). The selections correspond to the regions selected in the phasor FLIM analysis based on the relative phasor coordinates of the NAD+/NADH lifetimes (e, top-left inset, red and yellow selections).

Fig. 28, visualization of a single fluorescent marker with multiple autofluorescences. Tg(fli1:mKO2) (pan-endothelial fluorescent protein marker) zebrafish was imaged together with intrinsic signals derived from the yolk and the yellow pigment cells. In vivo imaging was performed using a multispectral confocal (32-channel) fluorescence microscope with 488nm excitation. The endothelial mKO2 signal is difficult to distinguish from the intrinsic signals in (a) the maximum intensity projection true-color 32-channel image display (Bitplane Imaris, Switzerland). The SEER angle map highlights changes in spectral phase, rendering them with different colors (map, bottom right of each panel). (b) Here, the angle map in scaled mode is applied to the entire volume. Previously indistinguishable spectral differences (boxes 1, 2, 3 in panel a) are now readily separated visually. The color bar represents the dominant wavelength (in nanometers) associated with a color. (c to h) Enlarged views of regions 1 to 3 (from a), visualized in true color (c, e, g) and with SEER (d, f, h), highlight the differences that distinguish the pan-endothelial marker (yellow) from the pigment cells (magenta). The improved sensitivity of SEER further distinguishes different sources of autofluorescence derived from the yolk (blue and cyan) and the pigments.

Fig. 29, triple-labeled fluorescence visualization. Zebrafish embryo Tg(kdrl:eGFP); Gt(desmin-Citrine); Tg(ubiq:H2B-Cerulean), marking the vasculature, muscle and nuclei, respectively. Live imaging was performed with a multispectral confocal microscope (32 channels) using 458nm excitation. A single-plane slice of the tiled volume is rendered in true color and with SEER maps. (a) True-color image display (Zen, Zeiss, Germany). (b) The angle map in mass deformation (centroid) mode improves contrast through distinguishable colors. The resulting visualization enhances the spatial localization of the fluorophores in the sample. (c) The gradient descent map in maximum deformation mode centers the palette on the most frequent spectrum in the sample, highlighting spectral changes relative to it. In this sample, the presence of skin pigment cells (green) is enhanced. The 3D visualization of SEER maintains these enhanced properties. The color bar represents the dominant wavelength (in nanometers) associated with a color. Shown here are (d, e, f) maximum intensity projections (MIPs) of different portions of the sample rendered in true color (32 channels), with the angle map in centroid mode, and with the gradient descent map in maximum mode. The selected views highlight the performance of SEER in (d) a volume-segment overview, (e) an enlargement of a volume-segment boundary, and (f) a lateral view of the vasculature.

Fig. 30, visualization of combinatorial expression in Zebrabow samples. Maximum intensity projection renderings of Tg(ubi:Zebrabow) muscle acquired live in multispectral confocal mode with 458nm excitation. (a) Distinct signals (e.g., white arrows) are difficult to interpret in the true-color image display (Zen software, Zeiss, Germany). (b) With the gradient descent map scaled to intensity, discerning spectral differences becomes increasingly simple, at a cost to the brightness of the image. The (c) gradient descent and (d) gradient rise RGB masks show the color values assigned to each pixel in scaled mode and greatly improve the visual separation of the recombined CFP, YFP and RFP labels. The color bar represents the dominant wavelength (in nanometers) associated with a color.

Fig. 31, comparison of computation times of SEER and ICA for different file sizes. (a) HySP and ICA runtimes (plotted on a logarithmic scale) were measured on an HP workstation with two 12-core CPUs, 128GB RAM and a 1TB SSD. The SEER runtime was measured in a modified version of the software. The ICA runtime was measured using a custom script and the FastICA submodule of the Python package scikit-learn. Timers using Python's perf_counter function were placed around the specific functions corresponding to the computations required to create the SEER maps in HySP and to extract the individual component outputs from the custom ICA script. Data sizes varied between 0.02GB and 10.97GB, with a constant number of bands (32 bands, 410.5nm to 694.9nm, 8.9nm bandwidth), corresponding to 2.86·10^5 to 1.83·10^8 spectra. The ICA test was limited to a maximum of 10.97GB because, for higher values, the RAM requirements exceeded the 128GB available on the workstation. (b) For the custom ICA script, timers were placed to measure the time for reshaping the hyperspectral data for ICA input, for running the ICA algorithm, and for converting the values of the ICA components to image intensity values; the computation already took several minutes (plotted on a logarithmic scale) at only 1.1GB. (c) For HySP, timers were placed to measure the initial calculations that generate phasor values from the hyperspectral data, including the real and imaginary parts (g and s) and the creation of the phasor plot histograms. Timers were also set around all preparation functions required to create the SEER maps on the fly. The memory-efficient phasor process allows computation on datasets of 0.02GB to 43.9GB, corresponding to 2.86·10^5 to 7.34·10^8 spectra (plotted on a logarithmic scale).
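
The timing strategy described above can be sketched as follows: perf_counter timers are placed around a phasor computation and around FastICA on the same spectra. The phasor routine here is a generic first-harmonic projection standing in for the internal HySP/SEER functions that were actually benchmarked; only the measurement pattern mirrors the text.

```python
import time
import numpy as np
from sklearn.decomposition import FastICA

def time_phasor_vs_ica(spectra):
    """Time a phasor projection against FastICA on (n_pixels, n_bands) data."""
    n_bands = spectra.shape[1]
    theta = 2 * np.pi * np.arange(n_bands) / n_bands

    t0 = time.perf_counter()
    total = np.maximum(spectra.sum(axis=1, keepdims=True), 1e-12)
    g = (spectra * np.cos(theta)).sum(axis=1, keepdims=True) / total
    s = (spectra * np.sin(theta)).sum(axis=1, keepdims=True) / total
    t_phasor = time.perf_counter() - t0

    t0 = time.perf_counter()
    FastICA(n_components=3, max_iter=200).fit_transform(spectra)
    t_ica = time.perf_counter() - t0
    return t_phasor, t_ica
```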

Fig. 32, comparison of SEER with visualized HySP results. Shown here are zebrafish embryos Tg(kdrl:eGFP); Gt(desmin-Citrine); Tg(ubiq:H2B-Cerulean), marking the vasculature, muscle and nuclei, respectively. In vivo imaging was performed with a multispectral confocal microscope (32 channels) using 458nm excitation. Single-plane slices of the tiled volume were rendered with SEER maps (3 channels, RGB) and compared with renderings of the same dataset analyzed with HySP (here, 5 channels). (a) Rendering of the dataset after 5-channel HySP analysis; the dashed box is expanded in the enlarged portion of panel a, with (b) the line profile along the solid line at right for all 5 unmixed channels (eGFP, Citrine, Cerulean, pigment and autofluorescence) at 458nm excitation. (c) Visualization of the 5-channel dataset as blended RGB, similar to how it appears on screen. (d) The mass deformation (centroid) mode visualization shows the HySP-analyzed data color-coded differently; (e) line profiles along the solid line in panel d show the intensities of the 3 RGB channels of the image. The profiles of the single R, G, B channels do not match the HySP unmixed profiles in panel b. However, (f) color visualization of the same line profile (as R, G, B vectors) matches the pattern of the on-screen visualization of the unmixed data from HySP. Similarly, (g) the maximum deformation mode visualization shows an image of the HySP-analyzed data rendered as in panel a, with (h) the line profile along the solid line of the enlarged portion of panel g comparable both to the R, G, B profiles of the 5 HySP unmixed channels and to those of the mass deformation (centroid) map in panel e. (i) The color display visualization of the RGB intensities in g reveals color features distinct from the HySP unmixed channels (panel b).

Fig. 33, simulated hyperspectral test chart I rendered in true color shows a spectrum that is hardly distinguishable. Simulation is here represented by "true color RGB" (method). S obtained from CFP, YFP, RFP zebrafish embryos respectively1、S2And S3The spectra were used to generate a hyperspectral testchart of a 3 by 3 simulation of (a to i). In each of the graphs (a to i), three spectra (S)1To S3) Represented as concentric squares (see fig. a), respectively outer: s1-blue spectrum, middle: s2-yellow spectrum, center: s3-red spectrum). Spectrum S2(the middle square in each figure) remains unchanged in all figures. Spectrum S1Relative to a fixed spectrum S2Is shifted by d1(-2 wavelength bands, -17.8nm steps). S3Maximum value of (D) relative to S2Is shifted by d2(2 wavelength bands, 17.8nm step). Starting from d1 ═ d2 ═ 0 (fig. a), these changes are applied to 2 steps along the vertical (d1) and horizontal (d2) axes of the central map components (a through i). The spectra used in each of the graphs (a to i) are shown in graphs j to r. Each graph (j to r) normalizes the mean S1To S3The spectrum is shown as 32 wavelength bands, 8.9nm bandwidth, 410nm to 695nm detection. Each map has a different visual contrast, but is often difficult to distinguish by eye due to significant overlap of the spectra. (s) R, G, B channel used in the gaussian kernel for true color representation (red, green, blue lines) and the average spectra of the plots (a to i) for reference (yellow lines).

Fig. 34, effect of spectral shape, at constant intensity and without background, on the radial map. The simulation uses spectra with Gaussian shape and different standard deviations over the 32 wavelength bands (8.9nm bandwidth, 410nm to 695nm range), without background. All spectra are centered at 543nm (channel 16) and the intensity integral is kept constant. (a to l) For each value of the standard deviation, a grayscale image and the SEER visualization are presented. The map used is a radial map centered at the origin and extending to the boundary of the phasor plot. A color reference is added to the phasor plot (m). Clusters on the phasor plot are distributed along a radius, with the distance from the origin inversely proportional to the standard deviation.

Fig. 35, effect of spectral intensity, in the presence of background, on the radial map. In this simulation, the intensity of the first panel (top left) of the simulated hyperspectral test chart (fig. 33) is reduced by factors of 10^1 to 10^4 (panels 1 to 4, respectively) in the presence of a constant background. The background was generated in MATLAB with an average intensity of 5 digital levels; the poissrnd() function was used to add Poisson noise. The grayscale images (a, d, g, j) are scaled by 10x (a), 10^2x (d), 10^3x (g) and 10^4x (j). The radial map (original) visualization shows the map color shifting toward blue (b, e, h, k) with decreasing intensity. The phasor plots (c, f, i, l) (harmonic n = 2) show a radial shift of the clusters toward the origin. A radial map reference is added in (c). The (m) absolute intensity plot shows the average spectra of the four panels, with maximum peaks of 1780, 182, 23 and 7 digital levels (panels 1 to 4, respectively). The normalized intensity spectra (n) show that the spectral shape widens significantly with decreasing signal-to-noise ratio.
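
The low-SNR simulation above can be reproduced in outline with NumPy (replacing MATLAB's poissrnd()). Whether the attenuated signal itself is resampled as Poisson counts, in addition to the noisy background, is an assumption of this sketch.

```python
import numpy as np

def simulate_low_snr(cube, factor, mean_bg=5.0, seed=None):
    """Attenuate a noise-free spectral cube and add a Poisson background.

    cube: (y, x, bands) intensities in digital levels; factor: attenuation
    (10, 10**2, ...); mean_bg: constant background mean, as in the text.
    """
    rng = np.random.default_rng(seed)
    attenuated = np.clip(cube / factor, 0.0, None)
    signal = rng.poisson(lam=attenuated)                   # shot noise
    background = rng.poisson(lam=mean_bg, size=cube.shape)
    return signal + background
```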

Fig. 36, radial and angle reference map designs and mode differences on barely distinguishable spectra (simulated hyperspectral test chart I, fig. 33). The present disclosure proposes 4 different modes that can be applied to each map. The second harmonic is used for the calculation here. Angle map (a) and radial map (b) in standard mode, scaled mode, maximum deformation mode and mass deformation mode. In standard mode, the reference map is centered at the origin and bounded by the phasor unit circle. In scaled mode, the reference map adapts to the phasor plot histogram, changing its coordinates to surround the edges of the phasor clusters and enhance the contrast of the selected map properties. In maximum deformation mode, the map is centered on the spectrum with the highest frequency of occurrence in the phasor histogram. This mode improves sensitivity by exploiting statistical frequency deviations. In mass deformation mode, the map is centered on the weighted centroid of the phasors, enhancing sensitivity to multiple small spectra. Visualizations are presented after 1x spectral denoising.

Fig. 37, gradient rise and gradient descent reference map designs and mode differences on barely distinguishable spectra (fig. 33). The second harmonic is used for SEER here. Gradient rise map (a) and gradient descent map (b) in standard mode, scaled mode, maximum deformation mode and mass deformation mode. By darkening the reference map at the center and at the edge of the phasor unit circle, respectively, the two maps place emphasis on very different (rise) and very similar (descent) spectra. Visualizations are presented after 1x spectral denoising.

Fig. 38, simulated hyperspectral test chart II and its standard overlapping spectra. Simulated SHTC II was generated from the same zebrafish embryo datasets and the same design used in SHTC I (fig. 33), using CFP-, YFP- and RFP-labeled samples and a 3-by-3 grid of blocks, where each block is subdivided into 3 regions corresponding to spectra S1, S2 and S3. The objective is to test a scenario with less overlapping spectra. The shift distances in this simulation were changed to d1 (-3 wavelength bands, -26.7nm steps) and d2 (3 wavelength bands, 26.7nm steps). Here, the channels used in the Gaussian kernel for the true-color RGB representation are centered at 650nm, 510nm and 470nm for R, G and B, respectively. The concentric squares at the lower right of the simulation are separated by a peak-to-peak distance of 53.6nm, with the outer and inner concentric squares well separated by a peak-to-peak distance of 106.8nm. This distance is similar to the emission gap between CFP (475nm EM) and tdTomato (581nm). Under these spectral conditions, most methods are expected to perform well.

Fig. 39, radial and angle reference map designs and modes rendering standard overlapping spectra (simulated hyperspectral test chart II, fig. 38). Here, the first harmonic is used for the SEER angle map (a) and radial map (b) in standard mode, scaled mode, maximum deformation mode and mass deformation mode, applied to the standard-overlap spectral simulation. The reference maps show consistently improved contrast across the different modalities. Visualizations are presented after 1x spectral denoising.

Fig. 40, gradient rise and gradient descent reference map designs and mode differences for standard overlapping spectra (simulated hyperspectral test chart II). Here, the first harmonic is used for the SEER gradient rise map (a) and gradient descent map (b) in standard mode, scaled mode, maximum deformation mode and mass deformation mode. The reference maps provide enhanced visualization even in scenarios where the spectral overlap is at a level similar to that of common fluorescent proteins. Visualizations are presented after 1x spectral denoising.

Fig. 41, effect of spectral denoising on the angle and radial map visualizations of standard overlapping spectra (simulated hyperspectral test chart II, fig. 38). Phasor spectral denoising affects the quality of data along the spectral dimension without changing the intensity. The second harmonic is used for the calculation here. Noisy data appear as spread clusters on the phasor plot, shown here overlaid with (a) the angle map and (b) the radial map, where the overlaid visualizations exhibit salt-and-pepper noise. (c, d) When denoising is applied to the phasors, the cluster spread is reduced, yielding greater smoothing and less noise in the simulated images. (e, f) Increasing the number of denoising filters produces a clearer distinction between the three spectrally distinct regions in each simulated block. (a, c, e) In maximum deformation mode, each denoising filter introduces a shift of the map apex, changing the reference center of the palette. (b, d, f) In scaled mode, the less dispersed phasor clusters make maximal use of the reference map, enhancing the contrast of the rendered SHTC.

Fig. 42, effect of spectral denoising on the gradient rise and gradient descent map visualizations of standard overlapping spectra (simulated hyperspectral test chart II, fig. 38). The phasor spectral denoising principle described in fig. 41 applies to different reference maps. In this case, (a) the gradient rise map in scaled mode and (b) the gradient descent map in mass deformation mode are overlaid on the dispersed phasor representation of the standard-overlap SHTC. The denoising filter removes outliers along the spectral dimension while preserving intensity. (c, d) After filtering, the phasor cluster spread is reduced, resulting in spectral smoothing of the noise-affected images. Owing to the change in phasor cluster spread after filtering, the map reference of the gradient rise map has increased brightness compared with its unfiltered representation (maps in a and b). (e, f) The rendered SHTC after multiple denoising passes has higher intensity, which simplifies the discrimination of subtle differences in the spectra. (b, d, f) The denoising filter does not change the centroid of the clusters, so the apex of the reference map remains unchanged after filtering. However, the filter acts to reduce Poisson noise in the dataset, converging to a stable value after 5x filtering. This representation shows greater uniformity in the concentric square regions within each block, which were simulated using the same spectrum. The edges of these squares are now clearer and easier to detect, indicating that the combination of SEER and phasor denoising can play an important role in simplifying image segmentation.
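
The spectral denoising referred to in figs. 41 and 42 operates on the phasor coordinates rather than on the image intensities. A common realization, assumed in this sketch, is a median filter applied repeatedly to the spatial maps of g and s; the intensities used for rendering are left untouched, which is why the filtering is spatially lossless.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_phasor(g, s, passes=1, size=3):
    """Median-filter the phasor components g and s, pass by pass.

    Each pass reduces the spread of the phasor clusters (fig. 41)
    without altering any pixel intensity, since only the (g, s)
    coordinates used for color assignment are smoothed.
    """
    for _ in range(passes):
        g = median_filter(g, size=size)
        s = median_filter(s, size=size)
    return g, s
```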

Fig. 43, visualization of autofluorescence compared with other RGB standard visualizations. Visualizations of unlabeled freshly isolated mouse tracheal explants are shown here with different standard methods (fig. 27). Details of these visualizations are reported above. (a) A SEER RGB mask obtained using the gradient descent map in deformation mode; the mask shows the color associated with each pixel by SEER, regardless of intensity. (b) Average spectrum of the entire dataset. (c) True-color 32-channel maximum intensity projection. (d) Peak-wavelength RGB mask. (e) Default Gaussian kernel with R, G, B centered at 650nm, 510nm and 470nm, respectively. (f) Gaussian kernel at 10% threshold, with R, G, B values centered at 659nm, 534nm and 410nm. (g) Gaussian kernel at 20% threshold, with R, G, B values centered at 570nm, 490nm and 410nm. (h) Gaussian kernel at 30% threshold, with R, G, B values centered at 543nm, 472nm and 410nm. (i) The wavelength-to-RGB color representation of the peak-wavelength mask in panel d. The RGB visualization parameters are reported in the following panels: (j) kernels used for panel e, with the average spectrum of the dataset (yellow plot); (k) kernels used for panel f, with the average spectrum of the dataset (yellow plot); (l) kernels used for panel g, with the average spectrum of the dataset (yellow plot); (m) kernels used for panel h, with the average spectrum of the dataset (yellow plot).

Fig. 44, phasor fluorescence lifetime imaging microscopy (FLIM) of unlabeled freshly isolated mouse tracheal explants. (a) Phasor FLIM of fluorescence lifetime data of unlabeled freshly isolated mouse tracheal explants, acquired in the frequency domain with a two-photon fluorescence microscope (LSM780) tuned at 740nm and coupled to an acquisition unit with hybrid detectors (FLIM box, ISS, University of Illinois). The selected regions correspond to a more oxidative phosphorylation phenotype (red circle) and a more glycolytic phenotype (yellow circle). (b) The FLIM segmented image corresponds to the selections performed on the phasor in (a), where cells in the top layer exhibit an oxidative phosphorylation phenotype compared with cells in the basal layer, which show a glycolytic phenotype. (c) The line connecting free and bound NADH on the phasor plot is called the NADH metabolic trajectory; a shift in the direction of free NADH represents more reducing conditions and glycolytic metabolism, while a shift toward bound NADH represents more oxidizing conditions and more oxidative phosphorylation, as described in previous studies. The extremes of the metabolic trajectory are the lifetimes of free and bound NADH. The lifetime parameters (tau phase and modulation) are consistent with those reported in the literature (0.4ns for free NADH; 1.0ns to 3.4ns for bound NADH).
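
The endpoints of the NADH metabolic trajectory follow from the standard frequency-domain phasor relations for a mono-exponential decay. The sketch below places the free and bound NADH lifetimes quoted above on the universal semicircle; the 80MHz repetition rate is a typical assumption, not a value stated in the text.

```python
import numpy as np

def lifetime_phasor(tau_ns, rep_rate_hz=80e6, harmonic=1):
    """Phasor coordinates of a single-exponential lifetime.

    For a mono-exponential decay, g = 1 / (1 + (w*tau)**2) and
    s = w*tau / (1 + (w*tau)**2), with w = 2*pi*harmonic*rep_rate.
    Mixtures of two lifetimes fall on the chord between their phasors,
    which is the metabolic trajectory described above.
    """
    w = 2 * np.pi * harmonic * rep_rate_hz
    wt = w * tau_ns * 1e-9
    return 1.0 / (1.0 + wt**2), wt / (1.0 + wt**2)

free_nadh = lifetime_phasor(0.4)   # free NADH, 0.4 ns (from the text)
bound_nadh = lifetime_phasor(3.4)  # bound NADH, upper literature value
```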

Fig. 45, grayscale visualization of a single fluorescent marker with multiple autofluorescences. A monochrome representation of the mean spectral intensity of a single optical section of Tg(fli1:mKO2) (pan-endothelial fluorescent protein marker) zebrafish exhibits intrinsic signals originating from the yolk and the yellow pigment cells. The dataset was acquired in multispectral mode using a confocal microscope (LSM780, Zeiss, Jena) with 488nm excitation. The mean intensity is calculated along the spectral dimension and then represented in grayscale.

Fig. 46, visualization of a single fluorescent marker in the presence of autofluorescence, compared with other RGB standard visualizations. Shown here, using different standard methods, are visualizations of Tg(fli1:mKO2) (pan-endothelial fluorescent protein marker) zebrafish with intrinsic signals derived from the yolk and the yellow pigment cells (fig. 28). Details of these visualizations are reported above. (a) A SEER RGB mask for a single z-plane, obtained with the angle map in scaled mode, shows the color associated with each pixel by SEER, regardless of intensity. (b) SEER maximum intensity projection (MIP) of the entire volume. (c) True-color 32-channel volume MIP. (d) Peak-wavelength volume MIP. (e) Default Gaussian kernel with R, G, B centered at 650nm, 510nm and 470nm, respectively. (f) Gaussian kernel at 10% threshold, with R, G, B values centered at 686nm, 588nm and 499nm. (g) Gaussian kernel at 20% threshold, with R, G, B values centered at 668nm, 579nm and 499nm. (h) Gaussian kernel at 30% threshold, with R, G, B values centered at 641nm, 570nm and 499nm. (i) The wavelength-to-RGB color representation of the peak-wavelength mask in panel d. The RGB visualization parameters are reported in the following panels: (j) average spectrum of the entire dataset (blue plot), with the boundaries used for the true-color 32-channel MIP in panel c; (k) kernels used for panel e, with the average spectrum of the dataset (yellow plot); (l) kernels used for panel f, with the average spectrum of the dataset (yellow plot); (m) kernels used for panel g, with the average spectrum of the dataset (yellow plot); (n) kernels used for panel h, with the average spectrum of the dataset (yellow plot).

Fig. 47, visual comparison of triple-labeled fluorescence with other RGB standard methods. Shown here, by different standard methods, are visualizations of Tg(kdrl:eGFP); Gt(desmin-Citrine); Tg(ubiq:H2B-Cerulean), which mark the vasculature, muscle and nuclei, respectively (fig. 29). Details of these visualizations are reported above. The same slice (here z = 3) is shown as a maximum intensity projection (MIP) using: (a) the SEER gradient descent map in maximum deformation mode; (b) the SEER angle map in mass deformation mode; (c) true color, 32 channels; (d) peak wavelength; (e) default Gaussian kernel with R, G, B centered at 650nm, 510nm and 470nm, respectively; (f) Gaussian kernel at 10% threshold, with R, G, B values centered at 597nm, 526nm and 463nm; (g) Gaussian kernel at 20% threshold, with R, G, B values centered at 579nm, 517nm and 463nm; (h) Gaussian kernel at 30% threshold, with R, G, B values centered at 561nm, 526nm and 490nm. The RGB visualization parameters are reported in the following panels: (i) the wavelength-to-RGB color representation of the peak-wavelength mask in panel d; (j) average spectrum of the entire dataset (blue curve), with the boundaries used for the true-color 32-channel MIP in panel c; (k) kernels used for panel e, with the average spectrum of the dataset (yellow plot); (l) kernels used for panel f, with the average spectrum of the dataset (yellow plot); (m) kernels used for panel g, with the average spectrum of the dataset (yellow plot); (n) kernels used for panel h, with the average spectrum of the dataset (yellow plot).

Fig. 48, SEER of a zebrafish volume in maximum intensity projection (MIP) and shadow projection. The ability of SEER to improve the visualization of spectral datasets translates to 3D visualization with different volume-rendering modalities. Shown here is a zebrafish embryo Tg(kdrl:eGFP); Gt(desmin-Citrine); Tg(ubiq:H2B-Cerulean), labeling the vasculature, muscle and nuclei, respectively. (a) MIP of the volume with the angle map in mass deformation mode. (b) The same combination of map and mode using shadow projection. The spatial distinction between fluorescent markers is preserved despite the difference in volume-rendering method. The gradient descent map in maximum deformation mode is applied to the same dataset using (c) MIP and (d) shadow projection. With the gradient descent map, (c) the MIP improves the contrast used for spatial discrimination between fluorophores, while (d) the shadow projection further enhances the location of the skin pigments (green).

Fig. 49, visual comparison of combinatorial expression with other RGB standard methods. Visualization of Zebrabow muscle (fig. 30) using different standard methods. Details of these visualizations are reported above. The same slice is shown as an RGB mask, representing the color associated with each pixel independently of intensity, or as a maximum intensity projection (MIP), using: (a) the SEER gradient descent map mask in scaled mode; (b) average spectrum of the entire dataset (blue curve), with the boundaries used for the true-color 32-channel MIP in panel c; (c) true color, 32 channels; (d) a peak-wavelength mask; (e) default Gaussian kernel with R, G, B centered at 650nm, 510nm and 470nm, respectively; (f) Gaussian kernel at 10% threshold, with R, G, B values centered at 659nm, 561nm and 463nm; (g) Gaussian kernel at 20% threshold, with R, G, B values centered at 641nm, 552nm and 463nm; (h) Gaussian kernel at 30% threshold, with R, G, B values centered at 632nm, 552nm and 472nm; (i) the wavelength-to-RGB color representation of the peak-wavelength mask in panel d. The RGB visualization parameters are reported in: (j) kernels used for panel e, with the average spectrum of the dataset (yellow plot); (k) kernels used for panel f, with the average spectrum of the dataset (yellow plot); (l) kernels used for panel g, with the average spectrum of the dataset (yellow plot); (m) kernels used for panel h, with the average spectrum of the dataset (yellow plot).

Fig. 50, RGB visualizations with multiple modalities under different spectral overlap and SNR conditions. In this simulation, the intensity of the first panel (top left) of the simulated hyperspectral test chart (SHTC, fig. 33) is reduced by factors of (0.5 x 10)^1 to (0.5 x 10)^4 in the presence of a constant background (panels 1 to 5). A background with an average intensity of 5 was generated in MATLAB, and Poisson noise was added using the poissrnd() function to obtain 5 different SNR levels. The peak-to-peak distance of the spectra in the central and outer concentric squares of the SHTC (a, b, c, d, e) is shifted in 8.9nm steps relative to the peak of the average spectrum of the middle square, which remains constant in this simulation (as in fig. 33); the distance runs from 0 (a) to 35.6nm (e). For each level of spectral overlap (a to e), seven different RGB visualization modalities are presented for comparison at the five SNR levels. From the top row: SEER at harmonic 2 (SEER h = 2) and harmonic 1 (SEER h = 1), the selected peak wavelength (Peak Wav.), a Gaussian kernel set at 30% of the spectrum (Gauss. R = 3), a Gaussian kernel set at 20% of the spectrum (Gauss. R = 2), a Gaussian kernel set at 10% of the spectrum (Gauss. R = 1), and finally Gaussian kernels with R, G, B set at 650nm, 510nm and 470nm, respectively (Gauss. Def., default). (f) Wavelength-to-RGB conversion map for the peak-wavelength visualization. (g) Gauss. R = 3 has channel center wavelengths R = 579nm, G = 534nm, B = 499nm; average spectrum in yellow. (h) Gauss. R = 2 has channel center wavelengths of 597nm, 543nm and 490nm; average spectrum in yellow. (i) Gauss. R = 1 has channel center wavelengths R = 614nm, G = 543nm, B = 481nm; average spectrum in yellow. (j) The Gaussian default has channel center wavelengths of 650nm, 510nm and 470nm; average spectrum in yellow. The maps used for SEER here are the gradient descent map in scaled mode (a, b, c, d) and in centroid mode (e). The SEER visualizations show reasonably constant contrast and color for the different spectra in the simulations at different SNRs.

Fig. 51, spectra at the SNR and overlap extrema of the simulation. The extrema of the simulations used in fig. 50 are reported here as spectra for comparison. For high signal-to-noise ratio: (a) the average spectra with the peak-to-peak distance set to zero, and (b) example single spectra (in digital levels, DL) from each concentric square region of the simulation. (c) Average spectra and (d) single spectra at high SNR for the simulation where the peak-to-peak distance between spectra is 35.6nm. (e) A reference simulated hyperspectral test chart with color-coded concentric squares. Low-SNR simulated spectra are reported for both cases: zero peak-to-peak distance as (f) average spectra and (g) single spectra, and 35.6nm peak-to-peak distance as (h) average spectra and (i) single spectra.

Fig. 52, spectral separation accuracy of SEER under different spectral overlap and SNR conditions. The accuracy is calculated for different signal-to-noise ratios and the following separations of the spectral maxima, starting from the visualizations in fig. 50 and the corresponding spectra in fig. 51: (a) aligned, (b) 8.9nm, (c) 17.8nm, (d) 26.7nm, (e) 35.6nm. Accuracy is calculated here as the sum of the Euclidean distances between the RGB vectors of pairs of concentric squares in the simulation, as a ratio to the maximum color separation (red to green, red to blue, blue to green). A full description of the accuracy calculation is reported in the Methods section. Each value in the graph represents the average over 200^2 pixels; error bars are the standard deviation of the normalized accuracy values across all pixels. Average accuracy across SNR conditions for each separation of the spectral maxima: (a) for highly overlapping spectra, SEER averages 38.0% for harmonic 1 and 50.6% for harmonic 2; the best-performing comparison here is Gauss. R = 3, at 26.7% on average. (b) For a peak-to-peak separation of 8.9nm, SEER h = 1 averages 57.0% and SEER h = 2 averages 49.6%; the best-performing comparison is the peak wavelength, at 22.2%. (c) For the 17.8nm separation, SEER h = 1 averages 57.2% and SEER h = 2 averages 60.0 ± 2.3%; the best-performing comparison here is Gauss. R = 3, at 26.2%. (d) For the 26.7nm separation, SEER h = 1 averages 59.9% and SEER h = 2 averages 60.4%; the best-performing comparison is Gauss. R = 3, at 32.1%. (e) For the well-separated 35.6nm spectra, SEER h = 1 averages 66.3% and SEER h = 2 averages 66.7%; the best-performing comparison here is Gauss. R = 3, with an average score of 43.5%.
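
The accuracy measure described above can be outlined as follows; this sketch compares region-mean RGB vectors rather than the per-pixel averaging of the Methods, so it should be read as an approximation of the published metric.

```python
import numpy as np

def rgb_separation_accuracy(rgb_regions):
    """Pairwise RGB separation, normalized to the maximum color separation.

    rgb_regions: list of (n_pixels, 3) arrays in [0, 1], one per
    concentric-square region. Returns the summed Euclidean distance
    between region pairs divided by the summed distances between pure
    red/green, red/blue and blue/green.
    """
    means = [r.mean(axis=0) for r in rgb_regions]
    pair_sum = sum(np.linalg.norm(a - b)
                   for i, a in enumerate(means)
                   for b in means[i + 1:])
    r, g, b = np.eye(3)
    max_sum = (np.linalg.norm(r - g) + np.linalg.norm(r - b)
               + np.linalg.norm(b - g))
    return pair_sum / max_sum
```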

Fig. 53, comparison of SEER and ICA spectral image visualizations (RGB) under different spectral overlap and SNR conditions. The same simulations used in fig. 50, varying the parameters of the simulated hyperspectral test chart to obtain different values of peak-to-peak spectral overlap and signal-to-noise ratio, are used here to calculate the accuracy of independent component analysis with the Python package scikit-learn (sklearn.decomposition.FastICA), using 3 independent components (ICs) without optimization for a particular dataset. (a, b, c, d, e) The three ICs are used as the R, G, B channels to create a color image for each simulation parameter (ICA = 3 rows), shown here next to SEER harmonics 1 and 2 (SEER h = 1 and SEER h = 2, respectively). Error bars are standard deviations. (f, g, h, i, j) The accuracy parameter described in the Methods section is applied here to the SEER and ICA results. Each value in the graph represents the average over 200^2 pixels; error bars are the standard deviation of the normalized accuracy values across all pixels. As calculated here, the accuracy of ICA across overlap values at high SNR (above 30) averages 48.0%, comparable to 57.9% for SEER h = 1 and 57.6% for SEER h = 2, but decreases at low SNR (below 10), where it averages 21.0%, while SEER averages 50.6% and 57.7% for the first and second harmonics, respectively. Average accuracies: (f) ICA 20.2%, SEER h = 1 38.0%, SEER h = 2 50.6%; (g) ICA 36.1%, SEER h = 1 57.0%, SEER h = 2 49.6%; (h) ICA 25.3%, SEER h = 1 57.2%, SEER h = 2 60.0%; (i) ICA 35.9%, SEER h = 1 59.9%, SEER h = 2 60.4%; (j) ICA 32.6%, SEER h = 1 66.3%, SEER h = 2 66.7%.
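
The ICA comparison above reduces to reshaping the cube to (pixels, bands), fitting FastICA with 3 components, and min-max scaling each component into one color channel. A sketch using the same scikit-learn class named in the text:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_rgb(cube):
    """Render a (height, width, bands) spectral cube as RGB from 3 ICs."""
    h, w, bands = cube.shape
    comps = FastICA(n_components=3).fit_transform(
        cube.reshape(-1, bands).astype(float))
    lo, hi = comps.min(axis=0), comps.max(axis=0)
    rgb = (comps - lo) / np.where(hi > lo, hi - lo, 1.0)  # scale to [0, 1]
    return rgb.reshape(h, w, 3)
```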

Fig. 54, visualization of photobleaching with SEER. Photobleaching experiments were performed on 24 hpf zebrafish embryos Gt(cltca-Citrine); Tg(fli1:mKO2); Tg(ubiq:memTdTomato), labeling clathrin, pan-endothelium and membranes, respectively. Experiments were performed using the Zen "bleaching" modality on a Zeiss 780 inverted confocal, acquiring a single z-position in lambda mode. Frames were acquired every 13.7 seconds at high laser power, with 5 intermediate bleaching frames (not acquired), until the image intensity reached 90% bleaching. The SEER RGB masks represent the color value associated with each pixel, independent of the intensity value. The map used here is the radial map in centroid mode. In this modality, the map adjusts its position to the shifting centroid of the phasor clusters, visually compensating for the decrease in intensity. (a) In the initial frame, cltca-Citrine is associated with magenta, the membrane with cerulean (the pan-endothelium is not in this frame), and the background with yellow. (b) Frame 10 shows colors consistent with the initial frame despite bleaching; the colors persist at (c) frame 40 and (d) frame 70, where most of the signal has been bleached and most colors have switched to yellow (here, background). (e) The final frame shows the 90%-bleached sample. Alpha color rendering adds intensity information to the image visualization; (f) frame 1, (g) frame 10, (h) frame 40 and (i) frame 70 are shown for comparison. Scale bar, 10 μm. (j) The mean total intensity, calculated as the sum of the 32 channels and plotted as a function of frame, shows the pronounced bleaching of the sample.

Fig. 55, pictorial abstraction of the deformation (morph) mode algorithm. (a) The radial map in standard mode, centered on the origin O, can be abstracted as (b) a 3D cone with height h and apex A. (c) When the apex of the cone is shifted from A to A', the map reference center shifts from the origin O to the projection A''. During this shift, the edge of the cone base remains anchored on the phasor unit circle. (c to d) Considering a plane cutting the oblique cone horizontally, the resulting section is a circle with center O' and radius r'. The projection of this circle is centered on O'', which lies on the line connecting the fixed center O and the new apex projection A'', and has the same radius r'. As a result, (d) all points in each of these projected circles are shifted along the vector OO'' on the phasor plot.
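
One geometric reading of this cone construction is as follows: a point p of the morphed map lies on the projected circle of radius 1 - t centered at t·A'', so solving |p - t·A''| = 1 - t for t and subtracting t·A'' recovers the corresponding position in the standard, origin-centered map. The sketch below implements that inversion; it is an interpretation of the pictorial abstraction, not code from the original software.

```python
import numpy as np

def morph_coordinates(g, s, center):
    """Map morphed-plot coordinates back to the standard reference map.

    center: (gc, sc), the shifted apex projection A''. Solves the
    quadratic |p - t*c| = 1 - t per pixel; t = 1 at the new center
    (which maps to the origin) and t = 0 on the unit circle (which
    stays anchored).
    """
    gc, sc = center
    a = gc**2 + sc**2 - 1.0                   # < 0 inside the unit circle
    b = 2.0 * (1.0 - (g * gc + s * sc))
    d = g**2 + s**2 - 1.0
    t = (-b + np.sqrt(b**2 - 4.0 * a * d)) / (2.0 * a)
    return g - t * gc, s - t * sc             # standard-map coordinates
```

Composing morph_coordinates with a standard map lookup (such as the angle_map sketch earlier) then yields the maximum- or mass-deformation rendering, depending on which center is supplied.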

Fig. 56, visualization of autofluorescence in volumetric data of unlabeled freshly isolated mouse tracheal explants. A tiled z-stack (x, y, z) imaged with a multispectral two-photon microscope (740nm excitation, 32 wavelength bands, 8.9nm bandwidth, 410nm to 695nm detection) is visualized as single (x, y) z-slice SEER RGB gradient descent maximum-deformation masks at (a) 43 μm, (b) 59 μm and (c) 65 μm depth. The color difference between basal-layer cells and top-layer cells is maintained at different depths, with each cell layer having a consistent hue. The color bar represents the dominant wavelength (in nanometers) associated with a color. Volume renderings presented as SEER alpha color renderings for the (d) top-down (x, y) view, (e) lateral (y, z) view and (f) magnified lateral (y, z) view show the shape and 98 μm thickness of the unlabeled tissue sample.

Fig. 57, SEER versus independent component analysis processing speed for the datasets of figs. 27 to 30. The processing times of SEER and the FastICA submodule of the Python package scikit-learn are compared. With the same measurement strategy used in fig. 31, timers using Python's perf_counter function were placed around the specific functions corresponding to the computations required to create the SEER maps in HySP and to run FastICA. (a) For all figures and their subsets, the runtime of SEER (magenta) is significantly lower than that of ICA with 3 components (cyan). (b) The speed-up is larger for bigger z-stack spectral datasets (fig. 28, 41x speed-up) and smaller for single spectral images (fig. 27, 7.9x speed-up). The values for these plots are reported in table 4.

Fig. 58, image quality scores for the visualizations of figs. 27 to 30. Scores for (a) colorfulness, (b) contrast, (c) sharpness and (d) color quality enhancement (CQE) are calculated according to the Methods section for the multiple visualization strategies. The averages are reported in tables 4 and 5. (a) The colorfulness of SEER is generally higher than that of the other methods, with the exception of the peak-wavelength visualization for fig. 30 (reported in fig. 49d): owing to that image's very low average intensity in the red channel (<IR> < 840) and an average green-to-blue intensity ratio of almost two (<IG>/<IB> = 1.7), the average of the beta parameter used in colorfulness is small and the denominator of the second logarithm in the colorfulness equation (Methods) is approximately equal to 1, producing a ratio of the variance of beta to the mean of beta roughly 10 times larger than typical. This combination of intensities yields a colorfulness 1.03 times higher than SEER h = 2; however, in this case the colorfulness value does not correspond to human observation (fig. 49d), suggesting that the score is an outlier due to this particular combination of intensities. (b) The contrast and (c) sharpness values show the higher performance of SEER. (d) The CQE score of SEER is higher than the standards: an 11% to 26% improvement for fig. 27, a 7% to 98% improvement for fig. 28, a 14% to 25% improvement for fig. 29, and a 12% to 15% improvement for fig. 30.
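
The colorfulness term discussed in panel a can be sketched from the opponent channels alpha = R - G and beta = (R + G)/2 - B; the two-logarithm form below, with |mean|**0.2 in each denominator, matches the description above and follows the color quality measures of Panetta et al., which are assumed here to be the Methods' reference.

```python
import numpy as np

def colorfulness(rgb):
    """Opponent-channel colorfulness of an (h, w, 3) float RGB image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    alpha = r - g
    beta = 0.5 * (r + g) - b

    def log_term(x):
        mu = max(abs(float(x.mean())), 1e-6)   # guard near-zero means
        var = max(float(x.var()), 1e-12)
        return np.log(var / mu ** 0.2)

    return 0.02 * log_term(alpha) * log_term(beta)
```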

Detailed Description

Illustrative embodiments are now described. Other embodiments may be used in addition or alternatively. Details that may be obvious or unnecessary may be omitted to save space or for a more efficient presentation. Some embodiments may be practiced with additional components or steps and/or without all of the described components or steps.

The following abbreviations are used.

2D: two-dimensional

5D: five dimensions.

HySP: high spectral phasor

IACUC: committee on animal protection and utilization

N: number of photons acquired

n: number of harmonics

PMT: photomultiplier tube

PTU: 1-phenyl-2-thiourea

SBR: signal to background ratio

SEER: spectrally Encoded Enhanced Representation (Spectraily Encoded Enhanced Representation)

SNR: signal to noise ratio

SP: spectral phasor

USC: university of southern California

The present disclosure relates to hyperspectral imaging systems. The present disclosure also relates to a hyperspectral imaging system that generates a unmixed color image of a target. The imaging system can be used to de-noise and/or color unmix multiple overlapping spectra at fast analysis times under low signal-to-noise conditions. The unmixed color image of the target may be used to diagnose a health condition.

The hyperspectral imaging system can perform hyperspectral phasor (HySP) calculations to efficiently analyze hyperspectral time-lapse data. For example, the system may perform a HySP calculation to efficiently analyze five-dimensional (5D) hyperspectral time-lapse data. The main advantages of this system may include: (a) fast computation speed; (b) easy phasor analysis; and (c) a denoising algorithm for obtaining a minimum acceptable signal-to-noise ratio (SNR), as shown in the example of fig. 1.

The hyperspectral imaging system can effectively reduce spectral noise, remove autofluorescence, and distinguish multiple spectrally overlapping fluorophores within a biological sample. The system can improve in vivo imaging both by extending fluorophore palette selection and by reducing contributions from background autofluorescence. In the following example, the robustness of HySP was demonstrated by imaging developing zebrafish embryos with seven colors during the photoactive phase of development (fig. 2-3).

The hyperspectral imaging system 10 may include an optical system 20, an image forming system 30, or a combination thereof. For example, a hyperspectral imaging system may include an optical system and an image forming system. For example, the hyperspectral imaging system may comprise an image forming system. Fig. 14 schematically illustrates one example of an exemplary hyperspectral imaging system including an optical system and an image forming system. Fig. 15 to 21 show exemplary optical systems. Fig. 22 shows an exemplary configuration of the image forming system. FIG. 23 illustrates an exemplary configuration of a hyperspectral imaging system.

In the present disclosure, an optical system may include at least one optical component. Examples of the at least one optical component are a detector ("optical detector"), a detector array ("optical detector array"), a source for illuminating a target ("illumination source"), a first optical lens, a second optical lens, an optical filter, a dispersive optical system, a dichroic mirror/beam splitter, a first optical filtering system placed between the target and the at least one optical detector, a second optical filtering system placed between the first optical filtering system and the at least one optical detector, or a combination thereof. For example, the at least one optical component may comprise at least one optical detector. For example, the at least one optical component may comprise at least one optical detector and at least one illumination source. For example, the at least one optical component may include at least one optical detector, at least one illumination source, at least one optical lens, at least one optical filter, and at least one dispersive optical system. For example, the at least one optical component may include at least one optical detector, at least one illumination source, a first optical lens, a second optical lens, and a dichroic mirror/beam splitter. For example, the at least one optical component may include at least one optical detector, at least one illumination source, an optical lens, dispersive optics; and wherein the at least one optical detector is an array of optical detectors. For example, the at least one optical component may include at least one optical detector, at least one illumination source, an optical lens, dispersive optics, a dichroic mirror/beam splitter; and wherein the at least one optical detector is an array of optical detectors. For example, the at least one optical component may include at least one optical detector, at least one illumination source, an optical lens, dispersive optics, a dichroic mirror/beam splitter; wherein the at least one optical detector is an array of optical detectors; and wherein the illumination source directly illuminates the target. These optical components may form an exemplary optical system such as that shown in fig. 15-21.

In the present disclosure, the optical system may include an optical microscope. Examples of optical microscopes may be confocal fluorescence microscopes, two-photon fluorescence microscopes, or combinations thereof.

In the present disclosure, the at least one optical detector may have the following configuration: electromagnetic radiation absorbed, transmitted, refracted, reflected and/or emitted by at least one physical point on the target is detected ("target radiation"). The target radiation may include at least one wave ("target wave"). The target radiation may include at least two target waves. Each target wave may have an intensity and a different wavelength. The at least one optical detector may have a configuration to detect the intensity and wavelength of each target wave. The at least one optical detector may have a configuration to transmit the detected target radiation to the image forming system. The at least one optical detector may have a configuration to transmit the detected intensity and wavelength of each target wave to the image forming system. The at least one optical detector may have any combination of these configurations.

The at least one optical detector may comprise a photomultiplier tube, an array of photomultiplier tubes, a digital camera, a hyperspectral camera, an electron-multiplying charge-coupled device, a scientific CMOS (sCMOS) camera, or a combination thereof. The digital camera may be any digital camera. The digital camera may be used with an active filter for detecting target radiation, for example target radiation comprising luminescence, thermal radiation, or a combination thereof.

In the present disclosure, the target radiation may include electromagnetic radiation emitted by the target. The electromagnetic radiation emitted by the target may include luminescence, thermal radiation, or a combination thereof. Luminescence may include fluorescence, phosphorescence, or a combination thereof. For example, the electromagnetic radiation emitted by the target may include fluorescence, phosphorescence, thermal radiation, or a combination thereof. For example, the electromagnetic radiation emitted by the target may include fluorescence. The at least one optical component may further comprise a first optical filtering system. The at least one optical component may further comprise a first optical filtering system and a second optical filtering system. The first optical filtering system may be placed between the target and the at least one optical detector. The second optical filtering system may be placed between the first optical filtering system and the at least one optical detector. The first optical filtering system may include a dichroic filter, a beam-splitter-type filter, or a combination thereof. The second optical filtering system may include a notch filter, an active filter, or a combination thereof. The active filter may include an adaptive optics system, an acousto-optic tunable filter, a liquid crystal tunable bandpass filter, a Fabry-Perot interference filter, or a combination thereof.

In the present disclosure, at least one optical detector may detect target radiation having a wavelength in the range of 300nm to 800 nm. The at least one optical detector may detect target radiation having a wavelength in the range of 300nm to 1300 nm.

In the present disclosure, at least one illumination source may generate electromagnetic radiation ("illumination source radiation"). The illumination source radiation may include at least one wave ("illumination wave"). The illumination source radiation may include at least two illumination waves. Each illumination wave may have a different wavelength. The at least one illumination source may directly illuminate the target. In this configuration, there are no optical components between the illumination source and the target. The at least one illumination source may indirectly illuminate the target. In this configuration, there is at least one optical component between the illumination source and the target. The illumination source may illuminate the target at each illumination wavelength by emitting all illumination waves simultaneously. The illumination source may illuminate the target at each illumination wavelength by emitting all illumination waves sequentially.

In the present disclosure, the illumination source may comprise a source of coherent electromagnetic radiation. The source of coherent electromagnetic radiation may comprise a laser, a diode, a two-photon excitation source, a three-photon excitation source, or a combination thereof.

In the present disclosure, the illumination source radiation may include illumination waves having wavelengths in the range of 300nm to 1300 nm. The illumination source radiation may include illumination waves having wavelengths in the range of 300nm to 700 nm. The illumination source radiation may include illumination waves having wavelengths in the range of 690nm to 1300 nm. For example, the illumination source may be a single photon excitation source capable of generating electromagnetic radiation in the range of 300nm to 700 nm. For example, such a single photon excitation source may generate electromagnetic radiation that may include waves having a wavelength of about 405nm, about 458nm, about 488nm, about 514nm, about 554nm, about 561nm, about 592nm, about 630nm, or a combination thereof. In another example, the source may be a two-photon excitation source capable of generating electromagnetic radiation in the range of 690nm to 1300 nm. Such an excitation source may be a tunable laser. In yet another example, the source may be a single photon excitation source and a two photon excitation source capable of generating electromagnetic radiation in the range of 300nm to 1300 nm. For example, such a single photon excitation source may generate electromagnetic radiation that may include waves having a wavelength of about 405nm, about 458nm, about 488nm, about 514nm, about 554nm, about 561nm, about 592nm, about 630nm, or a combination thereof. For example, such a two-photon excitation source is capable of generating electromagnetic radiation in the range of 690nm to 1300 nm. Such a two-photon excitation source may be a tunable laser.

In the present disclosure, the intensity of the illumination source radiation may not be above a certain level, such that the target is not damaged by the illumination source radiation when illuminating the target.

In the present disclosure, the hyperspectral imaging system may comprise a microscope. The microscope may be any microscope. For example, the microscope may be an optical microscope. Any optical microscope may be suitable for use with the system. Examples of optical microscopes may be two-photon microscopes, single-photon confocal microscopes, or combinations thereof. Examples of two-photon microscopes are disclosed in Alberto Diaspro, "Confocal and Two-Photon Microscopy: Foundations, Applications and Advances", Wiley-Liss, New York, November 2001; and Greenfield Sluder and David E. Wolf, "Digital Microscopy", 4th edition, Academic Press, August 20, 2013. The entire contents of each of these publications are incorporated herein by reference.

Fig. 15 shows an exemplary optical system including a fluorescence microscope 100. The exemplary optical system may include at least one optical component. In this system, the optical components may include an illumination source 101, a dichroic mirror/beam splitter 102, a first optical lens 103, a second optical lens 104, and a detector 106. These optical components may constitute the fluorescence microscope 100. The exemplary system may be adapted to form an image of the target 105. The illumination source may generate illumination source radiation 107. The dichroic mirror/beam splitter 102 may reflect the illumination waves to illuminate the target 105. As a result, the target may emit electromagnetic radiation (e.g., fluorescence) 108 and reflect the illumination source radiation 107 back. The dichroic mirror/beam splitter 102 may filter the illumination source radiation coming back from the target and may thereby substantially prevent illumination source radiation reflected from the target from reaching the detector. By using the system features/configurations of the present disclosure, an unmixed color image of the target may be generated from the target image detected by, and the target radiation intensities measured by, these optical components. For example, the unmixed color image of the target may be generated by using any of the system features/configurations schematically illustrated in figs. 22 to 23.

Fig. 16 shows an exemplary optical system including a multi-illumination-wavelength microscope 200. The exemplary optical system may include at least one optical component. In this system, the optical components may include an illumination source 101, a dichroic mirror/beam splitter 102, a first optical lens 103, a second optical lens 104, and a detector 106. These optical components may constitute a hyperspectral imaging system that includes a fluorescence microscope, a reflectance microscope, or a combination thereof. The exemplary system may be adapted to form an image of the target 105. The illumination source may generate illumination source radiation comprising a plurality of waves, where each wave may have a different wavelength. For example, in this example, the illumination source may generate illumination source radiation comprising two waves 201 and 202, each having a different wavelength. The source may illuminate the target sequentially at each wavelength. The dichroic mirror/beam splitter 102 may reflect the illumination source radiation to illuminate the target 105. As a result, the target may emit and/or reflect back electromagnetic radiation waves. In one example, the dichroic mirror/beam splitter 102 may filter the electromagnetic radiation coming from the target, substantially allowing emitted radiation to reach the detector while substantially preventing illumination source radiation reflected from the target from reaching the detector. In another example, the dichroic mirror/beam splitter 102 may transmit only the reflected waves from the target and substantially filter the emitted waves from the target, allowing only the reflected waves from the target to reach the detector. In yet another example, the dichroic mirror/beam splitter 102 may transmit both the reflected and the emitted radiation from the target, allowing both reflected and emitted radiation from the target to reach the detector. In this example, multiple waves may arrive at the detector, each wave having a different wavelength. For example, the electromagnetic radiation reaching the detector may have two waves 203 and 204, each having a different wavelength. By using the system features/configurations of the present disclosure, an unmixed color image of the target may be generated from the target image detected by, and the target radiation intensities measured by, these optical components. For example, the unmixed color image of the target may be generated by using any of the system features/configurations schematically illustrated in figs. 22 to 23.

Another exemplary hyperspectral imaging system including a multi-wavelength detection microscope 300 is shown in fig. 17. The exemplary hyperspectral imaging system may include at least one optical component. In this system, the optical components may include a first optical lens 103, dispersive optics 302, and a detector array 304. These optical components may form a hyperspectral imaging system comprising a fluorescence device, a reflectance device, or a combination thereof. The exemplary system may be adapted to form an image of the target 105. The target may emit waves of electromagnetic radiation 301 and/or may reflect waves of electromagnetic radiation 301. In this example, at least one wave or at least two waves may reach the detector array. Each wave may have a different wavelength. The dispersive optics 302 may form spectrally dispersed electromagnetic radiation 303. By using the system features/configurations of the present disclosure, an unmixed color image of the target may be generated by using the detected image of the target and the measured intensity of the target radiation of these optical components. For example, the unmixed color image of the target may be generated by using any of the system features/configurations schematically illustrated in fig. 22 to 23.

Another exemplary hyperspectral imaging system including a multi-wavelength detection microscope 400 is shown in fig. 18. The exemplary hyperspectral imaging system may include at least one optical component. In this system, the optical components may include an illumination source 101, a dichroic mirror/beam splitter 102, a first optical lens 103, dispersive optics 302, and a detector array 304. These optical components may form a hyperspectral imaging system including a fluorescence device. The exemplary system may be adapted to form an image of the target 105. The illumination source may generate illumination source radiation comprising at least one wave 107. Each wave may have a different wavelength. The source may illuminate the target sequentially at each wavelength. The dichroic mirror/beam splitter 102 may reflect the illumination waves to illuminate the target 105. As a result, the target may emit electromagnetic radiation waves. The dichroic mirror/beam splitter 102 may substantially allow the emitted wave 301 to reach the detector array, but may filter the target radiation and thereby substantially prevent the waves reflected from the target from reaching the detector array. In this example, the emitted radiation reaching the detector array may comprise a plurality of waves, each wave having a different wavelength. The dispersive optics 302 may form spectrally dispersed electromagnetic radiation 303. By using the system features/configurations of the present disclosure, an unmixed color image of the target may be generated by using the detected image of the target and the measured intensity of the target radiation of these optical components. For example, the unmixed color image of the target may be generated by using any of the system features/configurations schematically illustrated in fig. 22 to 23.

Another exemplary hyperspectral imaging system including a multi-illumination wavelength and multi-wavelength detection apparatus 500 is shown in FIG. 19. An exemplary hyperspectral imaging system may include at least one optical component. In this system, the optical components may include an illumination source 101, a dichroic mirror/beam splitter 102, a first optical lens 103, dispersive optics 302, and a detector array 304. These optical components may form a hyperspectral imaging system, which includes a fluorescence microscope, a reflectance microscope, or a combination thereof. The exemplary system may be adapted to form an image of the target 105. The illumination source may generate illumination source radiation comprising a plurality of waves, where each wave may have a different wavelength. For example, in this example, the illumination source may generate illumination source radiation comprising two waves 201 and 202, each having a different wavelength. The illumination source may sequentially illuminate the target at each wavelength. The dichroic mirror/beam splitter 102 may reflect the illumination radiation to illuminate the target 105. As a result, the target may emit and/or may reflect back electromagnetic radiation. In one example, the dichroic mirror/beam splitter 102 may filter radiation from the target, allowing substantially only emitted radiation to reach the detector array, but substantially preventing radiation reflected from the target from reaching the detector array. In another example, the dichroic mirror/beam splitter 102 may transmit only reflected waves from the target, but substantially filter emitted waves from the target, thereby allowing substantially only reflected waves from the target to reach the detector array. In yet another example, the dichroic mirror/beam splitter 102 may substantially transmit both reflected and emitted waves from the target, allowing both reflected and emitted waves from the target to reach the detector array. In this example, the beam reaching the detector array may have multiple waves, each wave having a different wavelength. For example, the beam reaching the detector array may have two waves 203 and 204, each wave having a different wavelength. The dispersive optics 302 may form spectrally dispersed electromagnetic radiation 303. By using the system features/configurations of the present disclosure, an unmixed color image of the target may be generated by using the detected image of the target and the measured intensity of the target radiation of these optical components. This unmixed color image of the target may be generated, for example, by using any of the system features/configurations schematically shown in fig. 22-23.

FIG. 20 shows another exemplary optical system including a multi-wavelength detection device 600. The exemplary optical system may include at least one optical component. In this system, the optical components may include an illumination source 101, a first optical lens 103, dispersive optics 302, and a detector array 304. These optical components may form a hyperspectral imaging system comprising fluorescence and/or reflectance means. The exemplary system may be adapted to form an image of the target 105. The illumination source may generate illumination source radiation comprising at least one wave 107. Each wave may have a different wavelength. The source may illuminate the target sequentially at each wavelength. Accordingly, the target may emit, transmit, reflect, refract and/or absorb the electromagnetic radiation beam 203. In this example, the emitted, transmitted, reflected, and/or refracted beam reaching the detector array may include a plurality of waves, each wave having a different wavelength. The dispersive optics 302 may form spectrally dispersed electromagnetic radiation 303. By using the system features/configurations of the present disclosure, an unmixed color image of the target may be generated by using the detected image of the target and the measured intensity of the target radiation of these optical components. This unmixed color image of the target may be generated, for example, by using any of the system features/configurations schematically shown in fig. 22-23.

Fig. 21 shows another exemplary optical system including a multi-wavelength detection device 700. The optical system may comprise at least one optical component. In this system, the optical components may include an illumination source 101, a first optical lens 103, dispersive optics 302, and a detector array 304. These optical components may form a hyperspectral imaging system comprising fluorescence and/or reflectance means. The exemplary system may be adapted to form an image of the target 105. The illumination source may generate illumination source radiation comprising at least one wave 107. Each wave may have a different wavelength. The source may illuminate the target sequentially at each wavelength. Thus, the target may emit, transmit, reflect, refract and/or absorb the beam of electromagnetic radiation 203. In this example, the emitted, transmitted, reflected, and/or refracted electromagnetic radiation reaching the detector array may include a plurality of waves, each wave having a different wavelength. The dispersive optics 302 may form spectrally dispersed electromagnetic radiation 303. By using the system features/configurations of the present disclosure, an unmixed color image of the target may be generated by using the detected image of the target and the measured intensity of the target radiation of these optical components. This unmixed color image of the target may be generated, for example, by using any of the system features/configurations schematically shown in fig. 22-23.

In the present disclosure, image forming system 30 may include a control system 40, a hardware processor 50, a memory system 60, a display 70, or a combination thereof. Fig. 14 illustrates an exemplary image forming system. The control system may be any control system. For example, the control system may control the optical system. For example, the control system may control at least one optical component of the optical system. For example, the control system may control the at least one optical detector to detect the target radiation, detect the intensity and wavelength of each target wave, transmit the detected intensity and wavelength of each target wave to the image forming system, and display an unmixed color image of the target. For example, the control system may control movement of optical components, such as opening and closing of optical shutters, movement of mirrors, and the like. The hardware processor may include a microcontroller, digital signal processor, Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. In an embodiment, all processing discussed herein is performed by one or more hardware processors. For example, the hardware processor may form an image of the target, perform phasor analysis, perform Fourier transformation of the intensity spectrum, apply a denoising filter, form a phasor plane, map back phasor point(s), assign arbitrary color(s), generate an unmixed color image of the target, and the like, or a combination thereof. The memory system may be any memory system. For example, a memory system may receive and store input from a hardware processor. These inputs may be, for example, an image of the target, radiation of the target, an intensity spectrum, a phasor plane, an unmixed color image of the target, etc., or a combination of these. For example, the memory system may provide output to other components of the image forming system (e.g., to the processor and/or display). These outputs may be, for example, an image of the target, radiation of the target, an intensity spectrum, a phasor plane, an unmixed color image of the target, etc., or a combination of these. The display may be any display. For example, the display may display an image of the target, an intensity spectrum, a phasor plane, an unmixed color image of the target, etc., or a combination of these. The image forming system 30 may be connected to the optical system 20 via a network. In some cases, the image forming system 30 may be located on a server remote from the optical system 20.
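As a purely illustrative sketch, the division of labor described above might be organized in software as follows; the class and method names are assumptions for illustration, not the patent's API, and the processing body is elided.

```python
# Hypothetical skeleton of an image forming system: a processor runs the
# analysis pipeline, a memory system stores intermediates, a display shows
# results. All names here are illustrative assumptions only.
import numpy as np

class ImageFormingSystem:
    def __init__(self):
        self.memory = {}  # stands in for the memory system

    def store(self, key, value):
        """Memory system: receive and store inputs/outputs."""
        self.memory[key] = value

    def process(self, cube):
        """Hardware processor: form the target image and run the pipeline."""
        self.store("target_image", cube.sum(axis=-1))  # per-pixel total intensity
        # ...Fourier transform, denoising, phasor plane, map-back, coloring...

    def display(self, key):
        """Display: present a stored image (here, just report its shape)."""
        item = self.memory.get(key)
        print(key, None if item is None else item.shape)
```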

In the present disclosure, the image forming system may have the following configuration: causing the optical detector to detect the target radiation and transmit the detected intensity and wavelength of each target wave to the image forming system.

In the present disclosure, the image forming system may have a configuration of acquiring detected target radiation including at least two target waves.

In the present disclosure, the image forming system may have a configuration to acquire target radiation including at least two target waves, each wave having an intensity and a different wavelength.

In the present disclosure, the image forming system may have a configuration to acquire a target image, wherein the target image includes at least two pixels, and wherein each pixel corresponds to one physical point on the target.

In the present disclosure, the image forming system may have a configuration of forming an image of a target ("target image") using detected target radiation. The target image may include at least one pixel. The target image may include at least two pixels. Each pixel corresponds to a physical point on the target.

In the present disclosure, the target image may be formed/acquired in any form. For example, the target image may have a visual form and/or a digital form. For example, the formed/acquired target image may be stored data. For example, the formed/acquired target image may be stored as data in a memory system. For example, the formed/acquired target image may be displayed on a display of the image forming system. For example, the target image formed/acquired may be an image printed on paper or any similar medium.

In the present disclosure, the image forming system may have a configuration that uses the detected intensity and wavelength of each target wave to form at least one spectrum ("intensity spectrum") of each pixel.

In the present disclosure, the image forming system may have a configuration that acquires at least one intensity spectrum of each pixel, wherein the intensity spectrum includes at least two intensity points.

In the present disclosure, the intensity spectrum may be formed/acquired in any form. For example, the intensity spectrum may have a visual form and/or a digital form. For example, the formed/acquired intensity spectrum may be stored data. For example, the formed/acquired intensity spectrum may be stored as data in a memory system. For example, the formed/acquired intensity spectrum may be displayed on a display of the image forming system. For example, the intensity spectrum formed/acquired may be an image printed on paper or any similar medium.

In the present disclosure, the image forming system may have the following configuration: the formed intensity spectrum of each pixel is transformed into complex-valued functions using a Fourier transform based on the intensity spectrum of each pixel, wherein each complex-valued function has at least one real part and at least one imaginary part.
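As a minimal sketch of this transform, assuming the per-pixel spectra are held in a NumPy array `cube` of shape (rows, cols, bins) and that the harmonic number n selects the Fourier coefficient, the real and imaginary parts might be computed as follows (illustrative code, not the patent's implementation):

```python
import numpy as np

def spectral_phasor(cube, n=1):
    """Real (G) and imaginary (S) parts of the n-th Fourier coefficient
    of each pixel's normalized intensity spectrum."""
    bins = cube.shape[-1]
    k = np.arange(bins)
    cosine = np.cos(2 * np.pi * n * k / bins)
    sine = np.sin(2 * np.pi * n * k / bins)
    total = cube.sum(axis=-1)
    total = np.where(total == 0, 1.0, total)  # guard empty pixels
    G = (cube * cosine).sum(axis=-1) / total  # real part
    S = (cube * sine).sum(axis=-1) / total    # imaginary part
    return G, S
```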

In the present disclosure, the image forming system may have the following configuration: applying a denoising filter at least once to the real and imaginary parts of each complex-valued function to produce denoised real and imaginary values for each pixel.

In the present disclosure, the image forming system may have the following configuration: a point on the phasor plane ("phasor point") for each pixel is formed by plotting the denoised real and imaginary values for each pixel. The image forming system may form the phasor planes, for example, by using hardware components thereof (e.g., a control system, a hardware processor, a memory, or a combination thereof). The image forming system may display a phasor plane.

In the present disclosure, the phasor points and/or phasor planes may be formed/obtained in any form. For example, the phasor points and/or phasor planes may have a visual form and/or a numerical form. For example, the formed/obtained phasor points and/or phasor planes may be stored data. For example, the formed/obtained phasor points and/or phasor planes may be stored as data in a memory system. For example, the formed/acquired phasor points and/or phasor planes may be displayed on a display of the image forming system. For example, the formed/acquired phasor points and/or phasor planes may be images printed on paper or any similar medium.

In the present disclosure, the image forming system may have the following configuration: based on the geometric location of the phasor points on the phasor plane, the phasor points are mapped back to corresponding pixels on the target image. In the present disclosure, the image forming system may have the following configuration: based on the geometric location of each phasor point on the phasor plane, the phasor plane is mapped back to the corresponding target image. The image forming system may map back the phasor points, for example, by using its hardware components (e.g., control system, hardware processor, memory, or a combination thereof).

In the present disclosure, the phasor points and/or phasor planes may be mapped back in any form. For example, the phasor points and/or phasor planes mapped back may have a visual and/or numerical form. For example, the phasor points and/or phasor planes mapped back may be stored data. For example, the mapped-back phasor points and/or phasor planes may be stored as data in a memory system. For example, the mapped-back phasor points and/or phasor planes may be displayed on a display of the image forming system. For example, the mapped-back phasor points and/or phasor planes may be images printed on paper or any similar medium.

In the present disclosure, the image forming system may have the following configuration: based on the geometric position of the phasor point on the phasor plane, an arbitrary color is assigned to the corresponding pixel.

In the present disclosure, the image forming system may have a configuration of generating an unmixed color image of the target based on the assigned arbitrary color.
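A hedged sketch of the map-back and color-assignment steps: each pixel's phasor coordinates index into a 2D reference color map spanning the phasor plane, and the looked-up color is written to the corresponding target image pixel. The `colormap` lookup table and the [-1, 1] coordinate range are assumptions for illustration.

```python
import numpy as np

def assign_colors(G, S, colormap):
    """Map each pixel's phasor point to a color from a reference color map
    (a res x res x 3 RGB table over the phasor plane)."""
    res = colormap.shape[0]
    # Phasor coordinates lie in [-1, 1]; rescale to color-map indices.
    gi = np.clip(((G + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    si = np.clip(((S + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    return colormap[si, gi]  # (rows, cols, 3) unmixed color image
```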

In the present disclosure, the unmixed color image may be formed in any form. For example, the unmixed color image may have a visual form and/or a digital form. For example, the unmixed color image may be stored data. For example, the unmixed color image may be stored as data in a memory system. For example, the unmixed color image may be displayed on a display of the image forming system. For example, the unmixed color image may be an image printed on paper or any similar medium.

In the present disclosure, the image forming system may have a configuration that displays an unmixed color image of a target on a display of the image forming system.

In the present disclosure, the image forming system may have any combination of the above-described configurations.

In the present disclosure, the image forming system may use at least one harmonic of the Fourier transform to generate an unmixed color image of the target. The image forming system may use at least the first harmonic of the Fourier transform to generate an unmixed color image of the target. The image forming system may use at least the second harmonic of the Fourier transform to generate an unmixed color image of the target. The image forming system may generate an unmixed color image of the target using at least the first harmonic and the second harmonic of the Fourier transform.
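For instance, with the `spectral_phasor` sketch above, the first and second harmonics are simply different values of n; whether one harmonic or both are used is a design choice (the `cube` array is the assumed spectral stack from the earlier sketch):

```python
# Each harmonic yields an independent (G, S) pair for every pixel.
G1, S1 = spectral_phasor(cube, n=1)  # first harmonic
G2, S2 = spectral_phasor(cube, n=2)  # second harmonic
```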

In the present disclosure, the denoising filter may be any denoising filter. For example, the denoising filter may be any filter that does not impair image quality when applied. For example, when the denoising filter is applied, the electromagnetic radiation intensity detected at each pixel in the image may remain unchanged. Examples of suitable denoising filters may include median filters.

In the present disclosure, an unmixed color image of a target may be formed with a signal-to-noise ratio of at least one spectrum in the range of 1.2 to 50. The unmixed color image of the target may be formed with a signal-to-noise ratio of at least one spectrum in the range of 2 to 50.

In one example, a hyperspectral imaging system for generating an unmixed color image of a target may include an optical system and an image forming system. The optical system may comprise at least one optical component. The at least one optical component may comprise at least one optical detector. The at least one optical detector may have the following configuration: detecting electromagnetic radiation absorbed, transmitted, refracted, reflected and/or emitted by at least one physical point on a target ("target radiation"), the target radiation comprising at least two waves ("target waves"), each wave having an intensity and a different wavelength; detecting the intensity and wavelength of each target wave; and transmitting the detected target radiation and the detected intensity and wavelength of each target wave to the image forming system. The image forming system may include a control system, a hardware processor, a memory, and a display. The image forming system may have the following configuration: forming an image of the target ("target image") using the detected target radiation, wherein the target image comprises at least two pixels, and wherein each pixel corresponds to one physical point on the target; forming at least one spectrum ("intensity spectrum") for each pixel using the detected intensity and wavelength of each target wave; transforming the formed intensity spectrum of each pixel into complex-valued functions using a Fourier transform based on the intensity spectrum of each pixel, wherein each complex-valued function has at least one real part and at least one imaginary part; applying a denoising filter at least once to the real and imaginary parts of each complex-valued function to produce denoised real and imaginary values for each pixel; forming a point ("phasor point") for each pixel on the phasor plane by plotting the denoised real and imaginary values of each pixel; mapping the phasor points back to corresponding pixels on the target image based on the geometric positions of the phasor points on the phasor plane; assigning an arbitrary color to each corresponding pixel based on the geometric position of its phasor point on the phasor plane; generating an unmixed color image of the target based on the assigned arbitrary colors; and displaying the unmixed color image of the target on the display of the image forming system.

In one example, the image forming system may have the following configuration: causing the optical detector to detect the target radiation and transmit the detected intensity and wavelength of each target wave to the image forming system. The image forming system may acquire the detected target radiation including at least two target waves; form an image of the target ("target image") using the detected target radiation, wherein the target image comprises at least two pixels, and wherein each pixel corresponds to one physical point on the target; form at least one spectrum ("intensity spectrum") for each pixel using the detected intensity and wavelength of each target wave; transform the formed intensity spectrum of each pixel into complex-valued functions using a Fourier transform based on the intensity spectrum of each pixel, wherein each complex-valued function has at least one real part and at least one imaginary part; apply a denoising filter at least once to the real and imaginary parts of each complex-valued function to produce denoised real and imaginary values for each pixel; form a point ("phasor point") for each pixel on the phasor plane by plotting the denoised real and imaginary values of each pixel; map the phasor points back to corresponding pixels on the target image based on the geometric positions of the phasor points on the phasor plane; assign an arbitrary color to each corresponding pixel based on the geometric position of its phasor point on the phasor plane; and generate an unmixed color image of the target based on the assigned arbitrary colors. The image forming system may have a further configuration to display the unmixed color image of the target on the display of the image forming system.

In another example, the image forming system may have the following configuration: acquiring target radiation comprising at least two target waves, each wave having an intensity and a different wavelength; forming a target image, wherein the target image comprises at least two pixels, and wherein each pixel corresponds to a physical point on the target; forming at least one intensity spectrum for each pixel using the intensity and wavelength of each target wave; transforming the formed intensity spectrum of each pixel into complex-valued functions using a Fourier transform based on the intensity spectrum of each pixel, wherein each complex-valued function has at least one real part and at least one imaginary part; applying a denoising filter at least once to the real and imaginary parts of each complex-valued function to produce denoised real and imaginary values for each pixel; forming a phasor point for each pixel by plotting the denoised real and imaginary values for each pixel; mapping the phasor points back to corresponding pixels on the target image based on the geometric positions of the phasor points on the phasor plane; assigning an arbitrary color to each corresponding pixel based on the geometric position of its phasor point on the phasor plane; and generating an unmixed color image of the target based on the assigned arbitrary colors. The image forming system may have a further configuration to display the unmixed color image of the target on the display of the image forming system.

In another example, the image forming system may have the following configuration: acquiring a target image, wherein the target image comprises at least two pixels, and wherein each pixel corresponds to a physical point on the target; acquiring at least one intensity spectrum for each pixel, wherein the intensity spectrum comprises at least two intensity points; transforming the intensity spectrum of each pixel into complex-valued functions using a Fourier transform based on the intensity spectrum of each pixel, wherein each complex-valued function has at least one real part and at least one imaginary part; applying a denoising filter at least once to the real and imaginary parts of each complex-valued function to produce denoised real and imaginary values for each pixel; forming a phasor point for each pixel by plotting the denoised real and imaginary values for each pixel; mapping the phasor points back to corresponding pixels on the target image based on the geometric positions of the phasor points on the phasor plane; assigning an arbitrary color to each corresponding pixel based on the geometric position of its phasor point on the phasor plane; and generating an unmixed color image of the target based on the assigned arbitrary colors. The image forming system may have a further configuration to display the unmixed color image of the target on the display of the image forming system.

FIG. 22 schematically illustrates one example of a hyperspectral imaging system. In this example, the imaging system may obtain an image 401 of the target. The image may include at least two waves and at least two pixels. The system may use the detected intensity of each wave to form at least one spectrum ("intensity spectrum") 402 for each pixel. The system may transform the intensity spectrum of each pixel by using a Fourier transform 403 to form a complex-valued function based on the detected intensity spectrum of each pixel. Each complex-valued function may have at least one real part 404 and at least one imaginary part 405. The system may apply a denoising filter 406 to the real and imaginary parts of each complex-valued function at least once. The system can thus obtain a denoised real value and a denoised imaginary value for each pixel. The system can plot the denoised real and denoised imaginary values for each pixel. The system can thus form a point 407 on the phasor plane. The system may form at least one additional point on the phasor plane by using at least one further pixel of the image. The system may select at least one point on the phasor plane based on the geometric location of the at least one point on the phasor plane. The system may map 408 the selected point on the phasor plane back to a corresponding pixel on the image of the target and may assign a color to the corresponding pixel, wherein the color is assigned based on the geometric location of the point on the phasor plane. As a result, the system can thereby generate an unmixed color image 409 of the target.
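Tying the earlier sketches together, one hypothetical end-to-end pass over a (512 × 512 × 32) spectral image could look like this; the five median-filter iterations follow the phasor-space denoising described later in this disclosure, and all names and the stand-in data are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

cube = np.random.poisson(50, size=(512, 512, 32)).astype(float)  # stand-in data
colormap = np.random.rand(256, 256, 3)                           # stand-in reference color map

G, S = spectral_phasor(cube, n=2)
for _ in range(5):                       # denoise G and S in phasor space
    G, S = median_filter(G, size=3), median_filter(S, size=3)
rgb = assign_colors(G, S, colormap)      # unmixed color image (512, 512, 3)
```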

FIG. 23 schematically illustrates another example of a hyperspectral imaging system. In this example, the hyperspectral imaging system also includes at least one detector 106 or detector array 304. The imaging system may form an image 401 of the target by using the detector or detector array. The image may include at least two waves and at least two pixels. The system may use the detected intensity of each wave to form at least one spectrum ("intensity spectrum") 402 for each pixel. The system may transform the intensity spectrum of each pixel by using a Fourier transform 403 to form a complex-valued function based on the detected intensity spectrum of each pixel. Each complex-valued function may have at least one real part 404 and at least one imaginary part 405. The system may apply a denoising filter 406 to the real and imaginary parts of each complex-valued function at least once. The system can thus obtain a denoised real value and a denoised imaginary value for each pixel. The system can plot the denoised real and denoised imaginary values for each pixel. The system can thus form a point 407 on the phasor plane. The system may form at least one additional point on the phasor plane by using at least one further pixel of the image. The system may select at least one point on the phasor plane based on the geometric location of the at least one point on the phasor plane. The system may map 408 the selected point on the phasor plane back to a corresponding pixel on the image of the target and may assign a color to the corresponding pixel, wherein the color is assigned based on the geometric location of the point on the phasor plane. As a result, the system can thereby generate an unmixed color image 409 of the target.

In the present disclosure, the target may be any target. The target may be any target having a specific color spectrum. For example, the target may be a tissue, a fluorescent genetic marker, an inorganic target, or a combination thereof.

In the present disclosure, the system may be calibrated by using a reference to assign a color to each pixel. The reference may be any known reference. For example, the reference may be any reference for which a reference unmixed color image is determined prior to generating the unmixed color image of the target. For example, the reference can be a physical structure, a chemical molecule, or a biological activity (e.g., a physiological change resulting from a change in physical structure and/or a disease).

In the present disclosure, the target radiation may include fluorescence. A hyperspectral imaging system suitable for fluorescence detection may include an optical filtering system. Examples of optical filtering systems are: a first optical filter for substantially reducing the intensity of the source radiation reaching the detector. The first optical filter may be placed between the target and the detector. The first optical filter may be any optical filter. Examples of the first optical filter may be a dichroic filter, a beam splitter type filter, or a combination thereof.

In the present disclosure, the hyperspectral imaging system suitable for fluorescence detection may further comprise a second optical filter. A second optical filter may be placed between the first optical filter and the detector to further reduce the intensity of the source radiation reaching the detector. The second optical filter may be any optical filter. Examples of the second optical filter may be a notch filter, an active filter, or a combination thereof. Examples of active filters may be adaptive optics, acousto-optic tunable filters, liquid crystal tunable bandpass filters, Fabry–Pérot interference filters, or combinations thereof.

In the present disclosure, the hyperspectral imaging system can be calibrated by using a reference material to assign a color to each pixel. The reference material may be any known reference material. For example, the reference material may be any reference material for which an unmixed color image is determined prior to generating the unmixed color image of the target. For example, the reference material can be a physical structure, a chemical molecule (i.e., a compound), or a biological activity (e.g., a physiological change resulting from a change in physical structure and/or a disease). The compound may be any compound. For example, the compound can be a biomolecule.

In the present disclosure, the hyperspectral imaging system can be used to diagnose any health condition. For example, the hyperspectral imaging system may be used to diagnose any health condition of any mammal. For example, the hyperspectral imaging system can be used to diagnose any health condition of a human. Examples of health conditions may include diseases, congenital malformations, disorders, wounds, injuries, ulcers, abscesses, and the like. The health condition may be related to a tissue. The tissue may be any tissue. For example, the tissue may comprise skin. An example of a health condition associated with skin or tissue may be a skin lesion. The skin lesion may be any skin lesion. Examples of skin lesions may be skin cancer, scars, acne formations, warts, wounds, ulcers, and the like. Other examples of skin or tissue health conditions may relate to tissue or skin composition, such as tissue or skin moisture level, oiliness, collagen content, hair content, and the like.

In the present disclosure, the target may comprise a tissue. The hyperspectral imaging system can display an unmixed color image of the tissue. Health conditions can cause differences in the chemical composition of tissues. The chemical composition may involve chemical compounds such as hemoglobin, melanin, proteins (e.g., collagen), oxygen, water, and the like, or combinations thereof. Due to differences in the chemical composition of the tissue, the color of tissue affected by a health condition may appear different from the color of tissue not affected by the health condition. Because of this color difference, the health condition of the tissue can be diagnosed. The hyperspectral imaging system may thus allow a user to diagnose, for example, skin conditions regardless of room lighting and skin pigmentation levels.

For example, as electromagnetic radiation propagates through tissue, illumination source radiation delivered to biological tissue may experience multiple scattering from inhomogeneities of the biological structure, as well as absorption by compounds present in the tissue, such as hemoglobin, melanin, and water. For example, the absorption, fluorescence, and scattering properties of tissue may change during the progression of a disease. Thus, for example, the reflected, fluorescent, and transmitted light from tissue detected by the optical detector of the hyperspectral imaging system of the present disclosure may carry quantitative diagnostic information about tissue histopathology.

The diagnosis of the health condition may be performed by any user, including a doctor, medical staff or a consumer.

The health of the tissue may be determined by using diagnostic information obtained by a hyperspectral imaging system. Thus, for example, the diagnostic information can improve the clinical outcome of the patient before, during, and/or after surgery or treatment. For example, the hyperspectral imaging system may be used to track the evolution of the patient's health over time by determining, for example, the health of the patient's tissue. In the present disclosure, the patient may be any mammal. For example, the mammal may be a human.

In the present disclosure, the reference materials disclosed above may be used in the diagnosis of health conditions.

In the present disclosure, a hyperspectral imaging system including HySP may apply a Fourier transform to convert all photons collected over the spectrum to one point in a two-dimensional (2D) phasor diagram ("density diagram"). The reduced dimensionality can perform well in low-SNR regimes compared to the linear unmixing approach, where the error in each channel can contribute to the fitting result. In any imaging system, the number of photons emitted by a dye during a time interval may be a stochastic (Poisson) process, where the signal (total digital count) scales with the average number of photons acquired, N, and the noise scales with the square root of N (√N). This Poisson noise of the fluorescence emission and the detector readout noise may become more pronounced at lower light levels. First, the error on the HySP map can be quantitatively evaluated. This information can then be used to develop noise-reduction methods to demonstrate that hyperspectral imaging systems including HySP are robust systems for resolving time-lapse hyperspectral fluorescence signals in vivo in low-SNR regimes.

The following features are also within the scope of the present disclosure.

For each pixel in the data set, the Fourier coefficients of its normalized spectrum may define the coordinates of its phasor point z(n), where n is the harmonic number (equation 1 below). The sine and cosine transforms here can be used to ensure that two normalized identical spectra produce the same phasor points (fig. 1b, inset). When these transforms are applied to actual data, a system (e.g., a system that includes a microscope) may have multiple noise sources that can affect the exact coordinates of the phasor points. Poisson and detector noise in each spectral bin can cause scatter of points on the phasor diagram, referred to hereinafter as the dispersion error (std{z(n)}). Furthermore, impaired SNR and signal saturation may change the average position of the scatter distribution itself; this is referred to hereinafter as the shifted-mean error.
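Equation 1 itself appears elsewhere in the document; for orientation, a standard discrete form of the spectral-phasor coordinates consistent with the sine and cosine transforms described here (a hedged reconstruction, with K spectral bins indexed by k) is:

$$
z(n) = G(n) + i\,S(n), \qquad
G(n) = \frac{\sum_{k=1}^{K} I(\lambda_k)\,\cos\!\left(\tfrac{2\pi n k}{K}\right)}{\sum_{k=1}^{K} I(\lambda_k)}, \qquad
S(n) = \frac{\sum_{k=1}^{K} I(\lambda_k)\,\sin\!\left(\tfrac{2\pi n k}{K}\right)}{\sum_{k=1}^{K} I(\lambda_k)}.
$$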

When multiple measurements of the same fluorophore are represented on a phasor diagram, a dispersion error can be observed around the expected spectral feature z_e(n), and it can be taken as the standard deviation of the phasor points around z_e(n) (fig. 1c). The shifted-mean error may be the result of a degraded spectral shape due to reduced SNR, improper black-level setting, or improper gain setting (saturation). Depending on the system setup, the average feature location on the phasor diagram may shift from its expected location z_e(n) by the shifted-mean error (fig. 1d). In combination, these two errors can disperse points around the correct position z_e(n) on the phasor diagram.

Counting photons in an experiment can help quantify bounds on each form of error. However, most detectors on microscopes (especially commercial multispectral confocal systems) record analog signals rather than photon counts. For systems comprising such a microscope, a quantitative estimation of these errors can be achieved in terms of the intensity values recorded in analog mode.

To develop an experimental method for estimating the contributions of the two error sources to the phasor diagram, the emission spectra of fluorescein were recorded at different acquisition parameters on a commercial confocal microscope equipped with a parallel multichannel spectral detector (table 1, shown below).

Table 1. Parameters for fluorescein imaging.

Based on the transform used in this disclosure and by propagating the statistical error, the dispersion error std{z(n)} can be obtained. It scales inversely with the square root of the total digital count N (equation 2 below). Experimental data confirmed that the dispersion error scales inversely with √N for different acquisition parameters within the standard range of microscope settings (fig. 1e, fig. 4a). Furthermore, the proportionality constant in equation 2 depends on the detector gain used in the acquisition (fig. 5e and table 2, shown below).
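Equation 2 likewise appears elsewhere in the document; the relation it expresses is std{z(n)} ∝ 1/√N. The short Monte Carlo sketch below (using the `spectral_phasor` helper sketched earlier; the Gaussian spectral shape, bin count, and repetition count are illustrative assumptions) demonstrates this scaling numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.arange(32)
shape = np.exp(-0.5 * ((k - 12) / 4.0) ** 2)  # assumed emission-like spectrum
shape /= shape.sum()

for N in (100, 1_000, 10_000):  # mean total digital counts
    # 2000 repeated noisy measurements of the same spectrum
    spectra = rng.poisson(N * shape, size=(2000, 32)).astype(float)
    G, S = spectral_phasor(spectra[:, None, :], n=1)
    dispersion = np.sqrt(G.var() + S.var())  # proxy for std{z(n)}
    print(f"N={N:>6}: dispersion error ~ {dispersion:.4f}")
```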

Table 2. Proportionality constants of the curves for calculating the dispersion error on phasor diagrams.

Gain (A.U.)    Slope    |z(n)|    Proportionality constant
700            1.35     0.43      3.14
750            1.8      0.437     4.12
800            2.34     0.437     5.36
850            3.03     0.443     6.83
900            3.89     0.446     8.72
950            4.79     0.45      10.65

Detector shot noise may be proportional to gain [22], and the dispersion error empirically shows this characteristic (figs. 5d to 5e). Given the same normalized spectra measured with different microscope settings, spectra acquired with higher gain values may have higher dispersion errors. However, for different imaging parameters, the expected location of the spectral feature |z_e(n)| can be kept constant over a wide range of total digital counts (figs. 5a to 5c).

Shifted-mean variation of the spectrum. The phasor diagram may rely on the normalized spectrum of the pixels to determine the coordinates. However, both signal saturation and very low photon counts (low signal-to-noise ratio (SNR)) may result in non-identical normalized spectra (fig. 1b, inset). This may change the value of |z(n)| at the extremes of the total number of digital counts (figs. 4a to 4c). At low SNR, the signal may not be distinguishable from noise. At very high SNR, identical intensity values at several wavelengths, corresponding to the saturation value of the detector (fig. 1b, inset), may again render the spectrum unusable. In either case, the phasor point may move closer to the origin, resulting in a low value of |z(n)|. Of the three acquisition parameters (detector gain, power, and pixel dwell time), the value of |z(n)| is most sensitive to changes in the detector gain over a constant range (figs. 4a to 4e).

The type of detection used for the measurement may affect the error in the phasor. In any imaging system, the number of photons emitted by a dye during a time interval may be a random (Poisson) process, where the signal scales with the average number of photons acquired, N, and the noise scales with √N. In general, the sources of noise may include shot noise derived from (i) signal light, (ii) background light, and (iii) dark current.

In the experiments, an analog detector was used for all measurements. A typical photomultiplier tube (PMT) measures pulses of electrons at the anode caused by photons impinging on its photocathode. These pulses may be counted individually or integrated as an average photocurrent over a given interval, corresponding to digital (photon-counting) and analog modes of operation, respectively. While the (Poisson) noise from the signal and background light remains the same for both analog and digital counting, the shot noise from dark current differs between the two modes. Dark current consists of thermally generated electrons with a characteristic pulse-height distribution, which can be robustly discriminated from the signal using a pulse-height discriminator in photon counting and thus eliminated. In analog mode, the averaged pulses may also include the dark current, resulting in higher noise. The signal-to-noise ratio (SNR) in digital mode may therefore be improved compared to analog mode. In addition, photon-counting mode can perform better at low signal levels, where two photons are unlikely to arrive at the same time, whereas analog mode can operate over a wide range of photon levels.

For the purposes of HySP, the Fourier transform may convert all photons collected across the spectrum to one point in the phasor diagram. Photon-counting mode is expected to further enhance HySP performance because of its improved SNR at low signal levels compared to analog mode.

The repeatability of spectral features may have two major effects on the shifted-mean error (a measure of the quality of feature identification). First, since |z_e(n)| is reproducible, the error can be kept below 5% over a wide range of digital counts, other than extreme count values (fig. 1f). Like the dispersion error, it may be only slightly sensitive to variations in detector gain, within a reasonable range. Second, a comparison of the magnitudes of the two errors shows that the dispersion error may be dominant in the phasor analysis (fig. 1f, inset). Thus, any shift in phasor points due to sub-optimal imaging parameters may be buried within the dispersion.

Since dispersion errors may be the dominant error on the HySP map, and the phasor map may reduce the spectral dimension from 32 to 2, the spectral image may be denoised without altering the intensity data by applying a filter directly in phasor space to reduce the dispersion errors. Here, a denoising filter in phasor space is applied to reduce the dispersion error in the data, and significant recovery of the characteristic position |z_e(n)| is observed, especially at low signal values. This illustrates that denoising may not alter the expected value z_e(n) in the image (figs. 4b to 4d), while the dispersion error can be reduced (fig. 4c). Repeated application of the denoising filter may yield further improvement, typically stabilizing after five iterations. Since the filter can be applied in phasor space, it does not affect the intensity distribution of the image (figs. 9 and 10).

Spectral denoising in phasor space. Spectral denoising can be performed by applying a filter directly in phasor space. This may preserve the original image resolution while improving spectral feature recognition in the phasor diagram. The filter applied here may be a median filter; however, other methods are possible. For any image of a given size (n × m pixels), the S and G values for each pixel can be obtained, resulting in two new 2D matrices of size n × m for S and G. Since the initial S and G matrix entries may have the same indices as the pixels in the image, the filtered S and G matrices retain the geometric information. Because the S and G matrices can be treated as 2D images, filtering can be applied effectively in phasor space. First, this can reduce the dispersion error (i.e., the positioning accuracy on the phasor diagram increases (figs. 8a to 8b)), thereby improving spectral feature recognition resolution while further improving the already minimized shifted-mean error (figs. 8c to 8d). The effect on the data may be an improved separation of different fluorescent proteins (figs. 9a to 9d). Second, denoising in (G, S) coordinates can preserve the geometry, intensity distribution, and original resolution at which the image was acquired (figs. 9e to 9g). Efficient filtering in phasor space affects only the spectral dimension of the data, thereby enabling denoising of spectral noise without disturbing the intensity.
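A minimal sketch of this step, assuming G and S are the n × m phasor-coordinate matrices and using SciPy's median filter (the kernel size and iteration count are illustrative choices, not values prescribed by the disclosure):

```python
from scipy.ndimage import median_filter

def denoise_phasor(G, S, iterations=5, size=3):
    """Median-filter the G and S matrices in phasor space; the intensity
    image itself is never touched, so image resolution is preserved."""
    for _ in range(iterations):
        G = median_filter(G, size=size)
        S = median_filter(S, size=size)
    return G, S
```

Because only G and S are filtered, the per-pixel total intensity used elsewhere in the pipeline is unchanged, matching the behavior described above.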

Improved signal collection (fig. 11) and reduced uncertainty may make HySP an attractive in vivo imaging technique. Studies of cell and tissue interactions typically involve the use of multiple fluorescent markers within the same anatomical region of a developing embryo or other biological sample. Furthermore, the dataset size for multi- (hyper-) spectral fluorescence may be n times larger than for standard confocal imaging, where n equals the number of acquired bandwidths (e.g., 32).

Four-dimensional (x, y, z, λ) data of whole-mount zebrafish embryos were acquired, and the spectral information from all pixels was represented in the HySP map to identify fluorophore characteristics ranging from the tissue to the subcellular scale (table 3).

Table 3. Parameters for in vivo imaging. All data points are 16-bit integers.

Selected points in phasor space are remapped into the original volume and rendered as maximum intensity projections. This successfully captured the unique spectral characteristics of citrine (skeletal muscle) in the transgenic zebrafish embryo Gt(desm-citrine)ct122a/+ and of eGFP (endothelial tissue) in Tg(kdrl:eGFP) [23, 24] (fig. 6a, fig. 7a). On a tissue scale, the method also retained the spectral features (dispersion densities) of each of citrine and eGFP, even in double transgenic Gt(desm-citrine)ct122a/+;Tg(kdrl:eGFP) embryos, which can feature co-expression in the same anatomical region (fig. 6d). Two easily separable dispersion densities in phasor space (fig. 6c) clearly distinguish the markers in skeletal muscle from those in the intersecting blood vessels (endothelial tissue). In addition, by treating autofluorescence as an independent HySP feature, autofluorescence can be clearly distinguished (fig. 10).

Autofluorescence in phasor space for in vivo imaging. The hyperspectral phasor may allow intuitive identification of the characteristics of fluorescent proteins. This can be shown for Citrine and eGFP, but can also be applied to autofluorescence. Intrinsic fluorescence within cells is a known and common problem in in vivo biological imaging. Its spectral features may differ from those of Citrine and eGFP. When represented as a dispersion density on the phasor diagram, autofluorescence may have different (S, G) coordinates compared to the fluorescent proteins, creating clusters in different regions of the diagram (fig. 10a).

Effectively, the phasor diagram can identify autofluorescence as a separate spectral feature that allows it to be viewed as an independent imaging channel (fig. 10b).

The gap from the tissue to the subcellular scale can be bridged by expanding the palette with nuclear H2B-cerulean and membrane-localized mCherry in the same double transgenic embryo. HySP analysis can allow rapid identification and separation of the signals of Cerulean, eGFP, and Citrine from the intrinsic autofluorescence signals of xanthophore cells and tissues under excitation at approximately 458 nm. Similarly, it can separate mCherry from background autofluorescence at approximately 561 nm excitation (fig. 2).

Finally, the multiple dimensions can be extended to include time, acquiring a five-dimensional (5D) dataset (x, y, z, t, λ), and the challenges of photodamage and bleaching in time-lapse imaging can be addressed by taking full advantage of the improved signal collection of HySP. Fusion proteins of the endosomal components Rab9 and Rab11 (tagged with YFP and mCherry, respectively) and new vascular sprouts, together with autofluorescence (fig. 3), can be imaged in double transgenic zebrafish embryos (Tg(ubiq:membrane-Cerulean-2a-H2B-tdTomato); Tg(kdrl:eGFP)). The low laser power used (about 5% at about 950 nm, about 0.15% at about 561 nm) did not affect the development of multiple samples (n = 3), while allowing simultaneous study of at least seven distinct components without affecting photosensitive development.

Multispectral volumetric time-lapse in vivo imaging with phasors. Hyperspectral phasors may allow reduced photodamage when performing multispectral volumetric time lapses in vivo. The improved unmixing efficiency at reduced signal-to-noise ratios (fig. 11) may play a role in solving the problems associated with excess photons.

In general, when multiple fluorophores are present in a sample, each fluorophore has an optimal excitation wavelength. However, using multiple excitation wavelengths that are close together (e.g., about 458 nm, about 488 nm, and about 514 nm for CFP, GFP, and YFP, respectively) without significantly overlapping the emission spectra can be complicated. One solution may be to sequentially excite the volume with each wavelength. Sequential excitation, while optimal for preventing overlap of the emission spectra, may require extended scan times and may result in increased photodamage and bleaching. In addition, extended scan times may lead to motion artifacts as the sample develops. An alternative may be to excite multiple fluorophores with a single wavelength. A disadvantage of this approach may be that the excitation efficiency of the lowest-wavelength fluorophore will be higher than that of the other fluorophores in the sample. For example, at about 458 nm, the excitation efficiency of CFP is about 93%, while that of GFP is about 62% and that of YFP is about 10%. A range of factors affect the actual number of photons emitted by each fluorophore, such as quantum yield, brightness, pH, and concentration. In general, however, a stronger signal from one fluorescent protein and a weaker signal from another can be observed. It may be tempting to increase the laser power in an attempt to extract more photons from the weaker signal. In experiments, however, increasing the laser power at about 950 nm (n = 2) or at 458 nm (n = 3) by more than 10% halted the development of the vasculature due to phototoxicity. The opposite solution may be to process the weaker, lower-SNR signal, allowing the sample to develop correctly.

The hyperspectral phasor method may allow improved performance at lower SNRs, thus overcoming the problem of weaker signals. This advantage can be extended to two-photon imaging, where the excitation efficiency is lower than in one-photon imaging and changing the laser wavelength can take several seconds.

Thus, in the 3 fluorophore example described above, the number of volumes that need to be acquired can be reduced from 3 to 1.

The same method can be applied to different color protein clusters, e.g. one "blue" cluster CFP-GFP-YFP (excited at about 458 nm), a second "red" cluster mCherry-tdTomato-RFP (excited at about 561 nm), a third cluster with multiple iRFP (excited at about 630 nm).

Two-photon polychromatic volume time-lapse imaging of multiple samples is shown as an example of a potential application with two color clusters.

As a result of these 5D measurements, different behaviors were observed for Rab9 and Rab11 in relation to endothelial cells (kdrl-positive) and muscle tissue. Specifically, Rab11-positive vesicles were detected under the guidance of kdrl-positive cells, whereas this behavior was not observed with the Rab9 protein. This example shows how HySP enables increasingly complex multi-color experiments to interrogate molecular network interactions in vivo.

HySP can be superior to other traditional multispectral methods: optical filter separation and linear unmixing [4, 6]. Conventional optical separation may result in low signal-to-background ratios (SBR) due to signal bleed-through (figs. 6b, 6e, 6f, and 7). Linear unmixing can significantly improve SBR. However, HySP can provide superior performance, especially when separating multiple colors from multiple intrinsic signals within the same sample (figs. 2, 3, 6e, 6f, and 9) at lower SNR (fig. 11). Reducing the amount of required signal may allow reduced laser power and reduced photodamage when imaging over time. Furthermore, the analysis time for the approximately 10-gigabyte dataset (fig. 2a, table 3) using HySP was approximately 10 minutes, while the analysis time using linear unmixing on the same computer was approximately 2.5 hours. The simplicity and robustness of the phasor method may make HySP analysis practical even after the acquisition of large samples. The HySP method is well poised for use in the context of real-time in vivo imaging of biological processes and as a solution for the analysis of mosaic fluorescent protein expression systems [25 to 27], with the ability to process multidimensional (x, y, z, λ, t) datasets with computation times on the order of minutes.

The analysis shows the robustness, speed, denoising capability, and simplicity of the hyperspectral phasor representation. It may allow robust discrimination of spectra within a range of accuracy determined primarily by the Poisson noise in the data acquisition. Because median filtering can be used to process spectral data in phasor space without changing the intensity data, it can provide a denoised image with substantially unimpaired resolution. The hyperspectral imaging system may be largely agnostic to the imaging mode, as long as sufficient wavelength bands are available for calculating the Fourier coefficients of the spectral phasors (fig. 13). These advantages may make HySP suitable for a variety of settings, from time-lapse imaging to cell line analysis, from fluorescence microscopy to cultural heritage reflectance imaging, and from emission to excitation multispectral data.

Other examples of the present disclosure are as follows.

Examples of the invention

Example 1. Zebrafish lines.

Adult fish were raised and maintained as described in [28], strictly following the recommendations of the Guide for the Care and Use of Laboratory Animals of the University of Southern California, under a protocol approved by the Institutional Animal Care and Use Committee (IACUC) (permit no. 12007 USC). The transgenic FlipTrap Gt(desm-citrine)ct122a/+ line was obtained from a previously described screen in the laboratory [23], Tg(kdrl:eGFP)s843 was provided by the Stainier laboratory [24], and the Tg(ubiq:membrane-Cerulean-2a-H2B-tdTomato) line was generated by injection of a construct in which short tol2 transposable element sequences flank the ubiquitin promoter, the coding sequence for membrane-localized Cerulean, the ribosomal skip peptide of Thosea asigna virus (2a), and H2B-tdTomato. After crossing suitable adult lines, the embryos obtained were kept in egg water (approximately 60 µg/ml Instant Ocean and approximately 75 µg/ml CaSO4 in ultrapure (Milli-Q) water) at about 28.5 °C, and about 0.003% (w/v) 1-phenyl-2-thiourea (PTU) was added at about 18 hpf to reduce pigmentation [28].

Example 2, sample preparation and imaging.

An approximately 5 µM solution of fluorescein (F1300, Invitrogen, Carlsbad, CA, USA) in ethanol was prepared. For imaging, the solution was transferred to a sealed 10 mm glass-bottom dish (P35G-1.5-10-c, MatTek, Ashland, MA, USA) and mounted on an inverted confocal microscope. Imaging was performed on a Zeiss LSM 780 inverted confocal microscope (Carl Zeiss, Jena, Germany) equipped with a QUASAR detector. A typical dataset consists of 32 images, each of 512 × 512 pixels, corresponding to different wavelengths from about 410.5 nm to about 694.9 nm with a bandwidth of about 8.9 nm. Measurements were repeated 10 times for any given set of imaging parameters using a C-Apochromat 40x/1.20 W Korr Zeiss objective. Fluorescein was imaged with an approximately 488 nm laser at different acquisition parameters (table 1).

For in vivo imaging, 5 to 6 zebrafish embryos at the appropriate stage were placed in approximately 1% agarose (catalog No. 16500-100, Invitrogen™) molds created with a custom-designed negative plastic mold in an imaging dish with a #1.5 coverglass bottom (catalog No. D5040P, WillCo Wells) [29]. Embryos were fixed in place by adding about 2 ml of about 1% UltraPure™ low-melting-point agarose (catalog No. 16520-050, Invitrogen™) solution prepared in about 30% Danieau (about 17.4 mM NaCl, about 210 µM KCl, about 120 µM MgSO4·7H2O, about 180 µM Ca(NO3)2, about 1.5 mM HEPES buffer in water, pH about 7.6) containing about 0.003% PTU and about 0.01% tricaine. This solution was added on top of the embryos that had been placed in the mold. After the agarose cured at room temperature (1 to 2 min), the imaging dish was filled with about 30% Danieau solution and about 0.01% tricaine at about 28.5 °C. Imaging was then performed on an inverted confocal microscope by appropriately positioning the dish on the microscope stage. Samples for two-color imaging were obtained by crossing Gt(desm-citrine)ct122a/+ with Tg(kdrl:eGFP) fish. Samples with the four fluorescent proteins were generated by the same cross, followed by injection of approximately 100 pg per embryo of mRNA encoding H2B-cerulean and membrane-mCherry. Gt(desm-citrine)ct122a/+; Tg(kdrl:eGFP) samples were imaged with a 488 nm laser to excite both Citrine and eGFP, with a narrow 488 nm dichroic to separate excitation and fluorescence emission. Gt(desm-citrine)ct122a/+; Tg(kdrl:eGFP) samples carrying the H2B-cerulean and membrane-mCherry labels were imaged with an approximately 458 nm laser to excite Cerulean, eGFP and Citrine with a dichroic at approximately 488 nm, followed by an approximately 561 nm laser to excite mCherry with a 458-561 nm dichroic.

For in vivo time-lapse imaging, 5 to 6 appropriately staged zebrafish embryos were mounted in an imaging dish with a #1.5 coverslip bottom (as above) using about 0.5% low-melting-point agarose with about 0.003% PTU and about 0.01% tricaine, a lower agarose concentration that allows continued growth during incubation. Imaging was then performed on the same confocal/two-photon inverted microscope at about 28.5 °C. Egg water was added to the imaging dish every hour to ensure proper hydration of the sample. Samples with five fluorescent proteins were obtained by crossing Tg(kdrl:eGFP) with Tg(ubiq:membrane-Cerulean-2a-H2B-tdTomato) zebrafish, followed by injection of about 120 pg and about 30 pg per embryo of mRNA encoding Rab9-YFP and Rab11-mCherry, respectively. Volume data were acquired using an approximately 950 nm laser to excite Cerulean, eGFP, YFP and (weakly) tdTomato with a 760+ bandpass filter, followed by an approximately 561 nm laser to excite mCherry and tdTomato with a 458-561 nm dichroic.

Table 3 provides a detailed description of the imaging parameters for all images presented in this work.

Example 3, phasor analysis.

Transformation:

For each pixel in the data set, the Fourier coefficients of its normalized spectrum define the coordinates of its phasor point z(n):

z(n) = G(n) + iS(n) (equation 1)

where

G(n) = \frac{\int_{\lambda_s}^{\lambda_f} I(\lambda)\cos(n\omega\lambda)\,d\lambda}{\int_{\lambda_s}^{\lambda_f} I(\lambda)\,d\lambda} \quad \text{and} \quad S(n) = \frac{\int_{\lambda_s}^{\lambda_f} I(\lambda)\sin(n\omega\lambda)\,d\lambda}{\int_{\lambda_s}^{\lambda_f} I(\lambda)\,d\lambda}

where λs and λf are the start and end wavelengths, respectively; I is the intensity; ω = 2π/τs, where τs is the number of spectral channels (e.g., 32); and n is the harmonic (e.g., 2).
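As an illustration of this transform, the following minimal Python sketch (not part of the original disclosure; the function and array names are illustrative) computes G(n) and S(n) for every pixel of a (y, x, λ) image cube:

import numpy as np

def hysp_phasor(cube, n=2):
    # cube: ndarray of shape (ny, nx, channels) holding I(lambda) per pixel
    # n: harmonic number (1 or 2 are the typical choices)
    channels = cube.shape[-1]
    omega = 2.0 * np.pi / channels              # omega = 2*pi / tau_s
    idx = np.arange(channels)                   # spectral channel index
    cos_w = np.cos(n * omega * idx)
    sin_w = np.sin(n * omega * idx)
    total = cube.sum(axis=-1).astype(float)     # area under the spectrum, a measure of N
    total[total == 0] = np.finfo(float).eps     # guard against empty pixels
    G = (cube * cos_w).sum(axis=-1) / total     # real part of z(n)
    S = (cube * sin_w).sum(axis=-1) / total     # imaginary part of z(n)
    return G, S

For a 32-channel acquisition, hysp_phasor(cube, n=2) returns the second-harmonic phasor coordinates of every pixel, which can then be binned into a 2D histogram to form the phasor diagram.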

Dispersion error on phasor diagram:

The dispersion error is inversely proportional to the square root of the number of photons N:

\sigma \propto \frac{1}{\sqrt{N}}

This ratio was derived as follows. Assuming that the number of digital levels detected in confocal analog mode is directly proportional to the number of photons collected, the total recorded signal intensity (the digital count DC, obtained as the area under the spectral curve) is taken as a measure of N [20]:

DC = \sum_{\lambda} I(\lambda) \propto N

Based on equation 1 and by propagation of statistical errors, it is known that:

Var\{G(n)\} = \sum_{\lambda} \left(\frac{\partial G(n)}{\partial I(\lambda)}\right)^2 Var\{I(\lambda)\}

where std and Var denote the standard deviation and the variance, respectively. Since ∂G(n)/∂I(λ) = [cos(nωλ) − G(n)]/DC, this can be further simplified to:

Var\{G(n)\} = \frac{1}{DC^2} \sum_{\lambda} \left[\cos(n\omega\lambda) - G(n)\right]^2 Var\{I(\lambda)\}

Since std{digital count} ∝ √N, i.e., Var{I(λ)} ∝ I(λ), the sum scales as N while the prefactor scales as 1/N², and therefore:

std\{G(n)\} \propto \frac{1}{\sqrt{N}}

Similarly:

std\{S(n)\} \propto \frac{1}{\sqrt{N}}

Thus:

\sigma = \sqrt{std\{G(n)\}^2 + std\{S(n)\}^2} \propto \frac{1}{\sqrt{N}}

shift average error on phasor diagram:

Based on the expected value of the measured spectrum, ⟨z_e(n)⟩, and its true representation, ⟨z_0(n)⟩, one can write:

shift mean error = |⟨z_e(n)⟩ − ⟨z_0(n)⟩|

where ⟨ ⟩ denotes the average used to calculate each quantity. This expression is defined as the shift average error. Further:

|⟨z_e(n)⟩ − ⟨z_0(n)⟩|^2 = |⟨z_e(n)⟩|^2 + |⟨z_0(n)⟩|^2 − 2\,|⟨z_e(n)⟩|\,|⟨z_0(n)⟩| \cos(\Delta\theta)

where Δθ is the phase difference between the two phasor points. From the above, it can be seen that the shift average error remains bounded:

|⟨z_e(n)⟩ − ⟨z_0(n)⟩| \leq |⟨z_e(n)⟩| + |⟨z_0(n)⟩|

Further, the normalized shift average error can also be defined as:

\frac{|⟨z_e(n)⟩ − ⟨z_0(n)⟩|}{|⟨z_0(n)⟩|}

In this analysis, a dataset acquired at about 177 µs pixel dwell time, about 850 gain and about 21% laser power was used as the true representation of the fluorescein spectrum, owing to its lower dispersion-error value. The general conclusions on the behavior of the shift average error, however, remain the same and are independent of the value of z_0(n).

Harmonic number in phasor analysis:

in general, phasor diagrams are limited to using the first harmonic or the second harmonic of the fourier representation of the spectral profile to determine the spectral characteristics. This may be due to the presence of branch points in the riemann surface in the complex plane corresponding to representations of harmonics greater than 2 that may not be readily visualized. Based on equation 1, the residual (ρ (n)) is calculated as a ratio of the absolute sum of all fourier coefficients except for the coefficient corresponding to the selected harmonic number (n) to the absolute value of the nth fourier coefficient.

Thus:

\rho(n) = \frac{\sum_{m \neq n} |z(m)|}{|z(n)|}

For typical fluorescence spectra (such as the fluorescein emission spectrum used here), 1 and 2 remain the dominant harmonic numbers, since their residuals are at least one order of magnitude smaller than those of the other harmonics (fig. 5f). The fluctuation of the residual values may depend on the exact nature of the spectrum being analyzed. However, this check can easily be implemented whenever a phasor analysis is performed, allowing fast verification of the choice of harmonic number for any recorded spectrum.
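A sketch of this verification step is shown below, assuming the spectrum is stored as a 1D array; the helper name is illustrative. It computes ρ(n) from the discrete Fourier coefficients of the normalized spectrum:

import numpy as np

def harmonic_residual(spectrum, n):
    # rho(n): absolute sum of all Fourier coefficients except the n-th,
    # divided by the absolute value of the n-th coefficient
    s = np.asarray(spectrum, dtype=float)
    s = s / s.sum()                        # normalize the spectrum
    z = np.fft.rfft(s)[1:]                 # coefficients for harmonics m = 1, 2, ...
    mags = np.abs(z)
    return (mags.sum() - mags[n - 1]) / mags[n - 1]

# A residual at n = 1 or n = 2 that is much lower than at higher n
# supports the choice of the first or second harmonic for the phasor plot.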

Example 4, denoising.

For an arbitrary image of a given size (n × m pixels), the S and G values of each pixel are obtained, generating two new n × m 2D matrices for S and G. Filtering these two matrices yields new S and G values for each pixel. Since the initial S and G matrices have the same indices as the pixels in the image, the filtered matrices retain the geometric information.
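A minimal sketch of this denoising step follows, assuming scipy is available and a 3 × 3 median filter (the filter size is an assumption; the original text specifies only median filtering):

import numpy as np
from scipy.ndimage import median_filter

def denoise_phasor(G, S, passes=1, size=3):
    # Median-filter the G and S matrices in phasor space. The intensity
    # image is never touched, so spatial resolution is preserved; only
    # the phasor coordinates assigned to each pixel are smoothed.
    for _ in range(passes):
        G = median_filter(G, size=size)
        S = median_filter(S, size=size)
    return G, S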

Fluorescein data were analyzed with Matlab scripts implementing the equations disclosed above. Large zebrafish microscopy datasets were recorded using the hyperspectral imaging system as disclosed above. Linear unmixing was done using Zen software (Carl Zeiss, Jena, Germany).

Example 5 Spectrally Encoded Enhanced Representation (SEER)

In this example, a Spectrally Encoded Enhanced Representation (SEER), a method for improved and computationally efficient simultaneous color visualization of multiple spectral components of a hyperspectral (fluorescence) image, is presented. The mathematical properties of the phasor method are used to transform the wavelength space into an information rich color map (color map) for RGB display visualization.

Multiple bioluminescent samples are presented and the enhancement of SEER to specific and subtle spectral differences is highlighted, providing a fast, intuitive and quantitative way to interpret hyperspectral images during collection, pre-processing and analysis.

The method of the present disclosure is based on the belief that preserving most of the spectral information and enhancing the distinction of spectral characteristics between related pixels provides an ideal platform for understanding biological systems. The challenge is to develop tools that allow efficient visualization of multi-dimensional datasets without the computationally demanding dimensionality reduction, such as ICA, usually required prior to analysis.

In this work, the present disclosure constructs a map based on phasors (phase vectors). The phasor method has several advantages derived from its nature. After transforming the spectrum at each pixel into its fourier components, the resulting complex values are represented as 2-dimensional histograms, with the axes representing the real and imaginary parts. The advantage of such a histogram is that it provides a representative display of the statistics and distribution of pixels in the image from a spectral perspective, thereby simplifying the identification of individual fluorophores. Pixels in the image with similar spectra generate clusters on the phasor diagram. Although this representation is cumulative over the entire image, each single point on the phasor diagram is easily remapped to the original fluorescence image.

Taking advantage of the phasor approach, hyperspectral phasors (HySP) enable semi-automatic analysis of 5D hyperspectral time-lapse data, as similarly colored regions cluster on the phasor diagram. These clusters have been characterized and utilized to simplify the interpretation of data and to perform spatially lossless denoising, improving collection and analysis under low-signal conditions. Phasor analysis usually studies the 2D histogram of spectral features by means of geometrical selectors, an effective strategy that nevertheless requires user involvement. While multiple labels can be imaged and their different spectral contributions separated into clusters, this approach is inherently limited in the number of labels that can be analyzed and displayed simultaneously. The prior art directly utilizes phase and modulation to quantify, classify and represent features within fluorescence lifetime and spectral image data. The method of the present disclosure differs from these previous embodiments in that it focuses instead on providing a quantitatively constructed, holistic, preprocessed visualization of large hyperspectral data.

The solution proposed by the present disclosure extracts from both the entire phasor and image to reconstruct a "one shot" map of the data and its intrinsic spectral information. Spectrally Encoded Enhanced Representation (SEER) is a dimensionality reduction-based approach that is implemented by exploiting phasors and automatically creating a color map of the spectral representation. The results of SEER show enhanced visualization of spectral properties, representing different fluorophores with distinguishable false colors and quantitatively highlighting the differences between intrinsic signals during in vivo imaging. SEER has the potential to optimize the experimental pipeline (experimental pipeline) from data collection during acquisition to data analysis, greatly improving image quality and data size.

Example 6 Spectrally Encoded Enhanced Representation (SEER)

The implementation of SEER has a simple basis: by means of a reference color map, each spectrum is assigned a false color based on its real and imaginary Fourier components.

This concept is detailed using a zebrafish Zebrabow [34] embryonic dataset (fig. 24), in which cells in the sample express different ratios of cyan, yellow and red fluorescent proteins, resulting in a large range of discrete spectral differences. The data are acquired as a hyperspectral volume (x, y, z, λ) (fig. 24a), providing a spectrum for each voxel. The spectra obtained from multiple regions of interest are complex, showing both significant overlap and the expected differences in ratio (fig. 24b). Discerning very similar spectra within the original acquisition space using standard multispectral dataset visualization methods is challenging (fig. 24c).

SEER is designed to produce usable spectral contrast within an image by performing five main steps. First, the sine and cosine Fourier transforms of the spectral dataset at one harmonic (typically the 1st or 2nd harmonic, owing to the Riemann surface) provide the components of the 2D phasor diagram (fig. 24d). The phasor transformation compresses and normalizes the image information, reducing the multidimensional dataset to a 2D histogram representation normalized to the unit circle.

Second, the histogram representation of the phasor diagram provides an improved understanding of the overall distribution of the spectrum and the signal through the summation of the spectra in the histogram segments. Pixels with very similar spectral characteristics (e.g., pixels that express only a single fluorophore) will fall within the same segment in the phasor diagram histogram. Due to the linear nature of the phasor transformation, if an image pixel contains a mixture of two fluorophores, its position on the phasor diagram will be proportionally distributed along the line connecting the phasor coordinates of the two components. This step highlights the importance of the geometry and distribution of the segments in the phasor representation.

Third, one or two passes of spatially lossless spectral denoising are performed in phasor space to reduce spectral error. In short, a median filter is applied to both the sine- and cosine-transformed images, reducing the spectral dispersion error on the phasor diagram while maintaining the coordinates of the spectra in the original image (fig. 24e). The filter affects only the phasor space, resulting in an improvement of the signal.

Fourth, the present disclosure contemplates multiple SEER maps that utilize phasor geometry. For each segment, the present disclosure assigns RGB colors based on the phasor positions in conjunction with the reference map (fig. 24 f). Subtle spectral changes can also be enhanced with multiple contrast modalities to focus the map on the most frequent spectrum, statistical centroid of the distribution, or scale the color to the extreme of the phasor distribution (fig. 24 g).

Finally, based on the SEER results, the colors in the original dataset are remapped (fig. 24h). This makes it possible to render datasets in which the spectra are not visually distinguishable (figs. 24a-24c) so that even subtle spectral differences become readily discernible (fig. 24i). SEER rapidly produces 3-channel color images (fig. 31) that approximate the visualization produced by a fuller spectral unmixing analysis (fig. 32).
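The five steps above can be summarized in a schematic Python sketch. This is an illustration of the described workflow under stated assumptions, not the released SEER implementation; it uses the angular reference map (detailed later) and assumes a (y, x, λ) numpy array:

import numpy as np
from scipy.ndimage import median_filter
from matplotlib.colors import hsv_to_rgb

def seer_angular(cube, harmonic=1, denoise_passes=1):
    # Step 1: phasor transform of every pixel spectrum
    channels = cube.shape[-1]
    w = 2.0 * np.pi * harmonic * np.arange(channels) / channels
    total = np.clip(cube.sum(axis=-1).astype(float), 1e-12, None)
    g = (cube * np.cos(w)).sum(axis=-1) / total
    s = (cube * np.sin(w)).sum(axis=-1) / total
    # Step 3: spatially lossless denoising in phasor space
    for _ in range(denoise_passes):
        g, s = median_filter(g, 3), median_filter(s, 3)
    # Step 4: angular reference map (hue follows the phasor angle)
    hue = (np.arctan2(s, g) % (2 * np.pi)) / (2 * np.pi)
    hsv = np.stack([hue, np.ones_like(hue), np.ones_like(hue)], axis=-1)
    # Step 5: remap the colors onto the original image geometry
    return hsv_to_rgb(hsv)

(Step 2, the histogram representation of the phasor diagram, is implicit here; it is needed explicitly only for the adaptive modes described later.)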

Example 7, Standard reference map

Biological samples may include multiple fluorescent spectral components, derived from fluorescent labels and intrinsic signals, each with distinct characteristics and properties. Identifying and rendering these subtle spectral differences is a challenge. The present disclosure finds that no single rendering is sufficient for all cases, and therefore four dedicated color map references were created to enhance color contrast in samples with different spectral characteristics. To simplify testing of the color map references, Simulated Hyperspectral Test Charts (SHTCs) were designed, in which defined regions contain spectra obtained from CFP, YFP and RFP zebrafish embryos. Each part of the test chart provides a different image contrast, obtained by shifting the maximum positions of the CFP and RFP spectra with respect to the YFP spectrum (fig. 33). The SHTC is rendered as a grayscale image and compared with SEER (fig. 25a). These representations can be cycled through quickly to determine which has the highest information content.

The reference map is defined as an organization of the color palette in which each spectrum is associated with a color based on its phasor position. The color distribution in each reference map is a function of the phasor-diagram coordinates. In the angular map (fig. 25b), hue is calculated as a function of the angle, enhancing color diversity when the spectra have different center wavelengths ("phases") on the phasor diagram. In the radial map (fig. 25c), colors are assigned with respect to different radii, highlighting spectral amplitude and magnitude. The radial position is generally related to the intensity integral of the spectrum, which in turn can depend on the shape of the spectrum, with zero intensity located at the origin of the plot (fig. 34). In the present simulations (fig. 25c), the colors obtained with this map mainly represent differences in shape, whereas in scenes with a large dynamic range of intensities the colors mainly reflect changes in intensity, becoming affected by irrelevant background at low signal-to-noise ratios (fig. 35). In the gradient ascent and descent maps (figs. 25d, 25e), the color groups differ according to angle, as in the angular map, while a gradual change in color intensity is associated with changes in radius. The gradient maps enhance properties similar to those of the angular map; however, the gradient ascent map (fig. 25d) focuses on distinguishing the higher-intensity spectra, while the gradient descent map (fig. 25e) is reversed, highlighting spectral differences in low-intensity signals. The complementary properties of these four maps allow a wide range of spectral properties to be rendered distinguishably as a function of phasor position. It is important to note that the concepts of the angular and radial maps have previously been used in various applications and methods, generally introduced as "phase" and "modulation", respectively. Here, these maps have been recreated and provided for the hyperspectral fluorescence data of the present disclosure as simpler alternatives to its more adaptive maps.

The standard reference maps simplify comparison between multiple fluorescently labeled samples, since the color palette does not change between samples. These references are centered at the origin of the phasor diagram, so their color distribution remains constant and a predetermined color is associated with each phasor coordinate. Unless the spectrum of a fluorophore is changed by the experimental conditions, its position on the phasor diagram is constant. The ability of the standard reference maps to capture markers at different scales, or changes in markers (such as calcium indicators), provides a dual advantage: a quick, quantitative overview and simplified comparison between multiple samples.

Example 8 tensor map

The SEER method provides a simple way to evaluate statistical observations within spectral images. In addition to the four standard reference maps, a tensor map was designed that recolors each image pixel based on the gradient of counts relative to the spectra surrounding it (fig. 25f). Considering that the phasor representation is a two-dimensional histogram of the real and imaginary Fourier components, the amplitude of each histogram bin is the number of occurrences of a particular spectrum. The tensor map is computed as the gradient of counts between adjacent bins, and each resulting value is associated with a color via a color map (here, the "jet" color map).

The image is re-colored according to the changes that occur in the spectrum, thereby enhancing the spectral statistics fluctuation of each phasor cluster. The change in frequency of occurrence can provide an understanding of the overall dynamics of the spectrum within the data set. The visible result is a spectral edge detection that operates in the wavelength dimension, facilitating the detection of colorimetric changes in the sample. Alternatively, the tensor map may help identify regions that contain less frequent spectral features relative to the rest of the sample. An example of this is shown in the upper left quadrant of the simulation (fig. 25f), where the central part of each quadrant has a different spectrum and occurs at a lower frequency than its surroundings.

Example 9, modes (zoom and deformation).

The present disclosure has implemented two different approaches to improve the ability to enhance spectral characteristics: a zoom mode and a deformation mode.

The zoom (scaled) mode provides an adaptive map with increased color contrast by normalizing the extrema of the standard reference map to the maximum and minimum phasor coordinates of the current dataset, effectively creating a minimum bounding unit circle containing all phasor points (fig. 26b). By resizing the color map based on the spectral range within the image, this mode maximizes the number of hues represented in the rendering, increasing the hue differences and the contrast of the final false-color rendering. These characteristics set the zoom mode apart from the standard reference maps (fig. 26a), which always cover the entire area of the phasor diagram and simplify comparison between datasets. The zoom mode sacrifices this uniformity but provides a spectral contrast stretch that improves contrast according to the values represented in each image dataset. The boundaries of the zoom mode can be kept constant across different samples to facilitate comparison.

The deformation mode exploits the clustering properties of the data captured in the phasor representation to enhance contrast. From the phasor histogram, the most frequent spectral feature or the centroid (in terms of histogram counts) is used as the new center reference point of the SEER maps. This newly calculated center is referred to as the apex of SEER. The result is an adaptive palette that changes with the dataset. In this representation mode, the edges of the reference map remain anchored to the circular boundary of the phasor diagram, while the center point is shifted and the interior colors are linearly warped (figs. 26c, 26d). By moving the apex, the contrast of datasets with off-centered phasor clusters is enhanced. A complete list of the combinations of standard reference maps and modes (figs. 36-37) is reported for different levels of spectral overlap in the simulations and for different harmonics. The second harmonic is used in the transformation for the SHTC with very similar spectra (fig. 33), and the first harmonic for images with commonly encountered levels of overlap (figs. 38-39). In both scenarios, SEER improves the visualization of multispectral datasets (figs. 36, 37, 39, 40) compared with the standard approaches (figs. 33, 38). Applying the spectral denoising filter one to five times further enhances the visualization (figs. 41, 42).

Example 10 color map enhances different spectral gradients in biological samples

To demonstrate the utility of SEER and its modes, four exemplary visualizations of images taken from unlabeled mouse tissue and fluorescently labeled zebrafish are presented.

In living samples, many intrinsic molecules are known to emit fluorescence, including NADH, riboflavin, retinoids and folic acid. The contribution of these intrinsic signals to the total fluorescence is generally referred to as autofluorescence. Hyperspectral imaging and HySP can be used to reduce the contribution of autofluorescence to the image. The improved sensitivity of phasors, however, makes autofluorescence a signal of interest and allows its multiple endogenous molecular contributions to be explored. Here, SEER was applied to visualize multispectral autofluorescence data of a freshly isolated tracheal explant from a wild-type C57Bl mouse. The tracheal epithelium is characterized by very low cellular turnover, so the overall metabolic activity can be ascribed to the cellular functions of the specific cell types. Club and ciliated cells, located on the apical side of the epithelium, are the most metabolically active, as they secrete cytokines and chemically and physically remove inhaled toxins and particles from the tracheal lumen. In contrast, basal cells, which represent the adult stem cells of the upper airways, are quiescent and metabolically less active. Owing to this dichotomy in activity, the tracheal epithelium at homeostasis constitutes an ideal cell system for testing SEER and validating it by FLIM imaging. The slight curvature of the trachea caused by the cartilage rings makes it possible to visualize the mesenchymal collagen layer, the basal and apical epithelial cells, and the tracheal lumen in a single focal plane.

Explants were imaged in multispectral mode with a 2-photon laser scanning microscope. The state-of-the-art "true color" image (fig. 27a) is compared with the SEER images (figs. 27b-27c). The gradient descent deformed map (fig. 27b) enhances the visualization of metabolic activity within the tracheal sample, showing different metabolic states as cells and the underlying collagen fibers progress from the top of the tracheal airway toward its base (fig. 27b). The visualization improvement is maintained across different implementations of RGB visualization (fig. 43). The tensor map increases the contrast of cell boundaries (fig. 27c). Changes of autofluorescence in living samples are associated with changes in the NAD+/NADH ratio, which in turn is related to the ratio of free to protein-bound NADH. Although their fluorescence emission spectra are very similar, the two forms of NADH are characterized by different decay times (free: ~0.4 ns; bound: 1.0 to 3.4 ns). FLIM provides a sensitive measurement of the redox state of NADH and of glycolysis/oxidative phosphorylation. Metabolic imaging by FLIM is well established and has been used to characterize disease progression in multiple animal models, in single cells and in humans, as well as to distinguish stem cell differentiation and embryonic development.

Here, the dashed boxes highlight cells with different spectral representations revealed by SEER, a difference confirmed by the FLIM images (fig. 27d, fig. 44).

The improvement of SEER in visualizing intrinsic signals is clear when compared to standard methods.

Microscopic imaging of fluorophores in the cyan-to-orange emission range in tissue is challenging owing to intrinsic fluorescence. A common problem is the bleed-through of the autofluorescence signal into the emission wavelengths of the label of interest. Bleed-through results from the overlap of two fluorophores in their emission and excitation profiles, such that photons from one fluorophore fall within the detection range of the other. Although bleed-through artifacts can be partially reduced by a stringent choice of emission filters, this requires narrow collection channels, which reject any ambiguous wavelengths and greatly reduce collection efficiency. This strategy generally proves difficult when applied to broad-spectrum autofluorescence. Kusabira-Orange 2 (mKO2) is a fluorescent protein whose emission spectrum overlaps significantly with autofluorescence in zebrafish. In fli1:mKO2 zebrafish, in which all blood vessels and endothelial cells are labeled, the fluorescent protein mKO2 and the autofluorescence signals from pigment and yolk are indistinguishable (fig. 28a, box). The grayscale rendering (fig. 45) provides information on the relative intensities of the multiple fluorophores in the sample but is insufficient to specifically detect the spatial distribution of the mKO2 signal. The true color representation (fig. 28a, fig. 46) is similarly limited in visualizing these spectral differences. The angular map of SEER (fig. 28b) provides significant contrast between the subtly different spectral components within the 4D (x, y, z, λ) dataset. The angular reference map enhances phase changes on the phasor diagram, which distinguishes well the shifts in the center wavelengths of the spectra within the sample. Autofluorescence from pigment cells is significantly different from the fli1:mKO2 fluorescence (figs. 28c-28h). For example, the dorsal region contains a combination of mKO2 and pigment cells that is not well differentiated by the standard approaches (figs. 28e-28f). The angular map allows SEER to discern these subtle spectral differences: distinct colors represent autofluorescence from the yolk and from pigment cells (figs. 28g-28h), enriching the overall information provided by this singly labeled sample and enhancing the visualization of the mKO2-labeled pan-endothelial cells.

Imaging and visualization of biological samples with multiple fluorescent labels are hampered by the overlapping emission spectra of fluorophores and autofluorescent molecules in the sample, complicating the visualization of the sample. Triple-labeled zebrafish embryos, Gt(desm-citrine)ct122a/+; Tg(kdrl:eGFP); H2B-Cerulean, labeling muscle, vasculature and nuclei, respectively, with a contribution from pigment autofluorescence, were rendered using the standard approaches and SEER in 2D and 3D (fig. 29). The true color representation (fig. 29a, fig. 47) provides limited information on the internal details of the sample. Vasculature (eGFP) and nuclei (Cerulean) are highlighted with shades of cyan, whereas autofluorescence and muscle (Citrine) are highlighted with shades of green (fig. 29a), making both pairs difficult to distinguish. The intrinsic richness of colors in the sample is an ideal test for the gradient descent and radial maps.

The angular map separates the spectra based mainly on their central (peak) wavelengths, which correspond to "phase" differences on the phasor diagram. The gradient descent map instead separates spectra with subtle differences that lie closer to the center of the phasor diagram. Here, the mass (centroid) and maximum deformation modes were applied to further enhance spectral discrimination (figs. 29b-29c). In the mass deformation mode, muscle outlines and nuclear contrast are improved by increasing the spatial separation of the fluorophores and suppressing the autofluorescence from skin pigment cells (fig. 29e). In the maximum deformation mode (fig. 29c), pixels with spectra closer to the skin autofluorescence are clearly separated from muscle, nuclei and vasculature.

The enhancement of SEER is also visible in volume visualization. The angle and gradient maps were applied to the triple labeled 4D (x, y, z, λ) dataset and visualized as maximum intensity projections (fig. 29D-29 f). The spatial localization of the fluorophore is enhanced in the mass deformation angle map, while the maximum deformation gradient descent map provides better separation of the autofluorescence of the skin pigment cells. These differences are also maintained in different visualization modalities (fig. 48).

SEER helps discriminate differences between fluorophores even in the presence of multiple contributions from bleed-through between labels and from autofluorescence. In particular, the deformed maps show high sensitivity in the presence of subtle spectral differences. The triple-labeled example (fig. 29) shows the advantage of the deformation maps, which place the apex of the SEER map at the centroid of the phasor histogram and compensate for the different excitation efficiencies of the fluorescent proteins at 458 nm.

Example 11, quantitative differences can be visualized in a combinatorial approach

Zebrabow is the result of a powerful genetic cell-labeling technique based on the stochastic and combinatorial expression of different relative amounts of several genetically encoded, spectrally distinct fluorescent proteins. The Zebrabow (Brainbow) strategy combines the three primary colors red, green and blue in different proportions to achieve a wide range of colors in a visual palette similar to that of modern displays. The unique colors arise from combinations of RFP, CFP and YFP in different proportions, achieved by stochastic Cre-mediated recombination.

This technique has been applied to a variety of applications, from axonal and lineage tracing to cell tracing during development, where specific markers can be used as cell identifiers to trace the progeny of individual cells over time and space.

The challenge is to acquire and analyze the subtle differences in hue among these hundreds of colors. Multispectral imaging provides the additional dimensionality required for improved acquisition; however, this modality is hampered by both instrumental limitations and spectral noise. Furthermore, current methods of image analysis and visualization interpret the red, yellow and cyan fluorescence as an RGB additive color combination and visualize it as a color picture, similar to the human eye's perception of color. Since slightly different colors are difficult to identify reliably by eye, this approach does not distinguish similar, yet spectrally unique, recombination ratios well.

SEER overcomes this limitation by improving the sensitivity of the analysis with its phasor-based color interpretation. Different recombinations of the markers fall in different regions of the phasor diagram, simplifying the differentiation of subtle differences. The standard reference maps and modes associate them with colors that are easily distinguished by eye, thereby enhancing fine spectral recombinations. SEER simplifies the quantification of differences between cells in combinatorial strategies, opening a new analysis window for brainbow samples.

Tg(ubi:Zebrabow) samples were imaged and their multiple genetic recombinations visualized using SEER. The results (fig. 30, fig. 49) highlight the difficulty of visualizing these datasets with the standard approaches and show how the compressive maps simplify the distinction of both spectrally close and more separated recombinations.

Example 12

Standard methods for the visualization of hyperspectral datasets improve visualization at the cost of computational overhead. In this work, it is shown that the phasor method can define a new trade-off between computational speed and rendering performance. Wavelength encoding can be achieved by appropriate transformation and representation of the real and imaginary Fourier components on the spectral phasor diagram. The phasor representation provides an effortless interpretation of spectral information. Originally developed for fluorescence lifetime analysis and subsequently applied to spectral data, the phasor method is here used to enhance the visualization of multispectral and hyperspectral imaging. Since these phasor-based tools enable accurate spectral discrimination, this approach is referred to as Spectrally Encoded Enhanced Representation (SEER).

Spectrally Encoded Enhanced Representation (SEER), a robust method for converting spectral (x, y, λ) information into a visual representation, enhances the differences between labels. This approach allows a more complete use of the spectral information. Previous analyses employed principal components or specific spectral bands of the wavelength dimension; similarly, previous phasor analyses interpreted the phasor using selected regions of interest. The disclosed work explores the phasor diagram as a whole and represents this complete set of information as a color image, while maintaining efficiency and minimizing user interaction. The function can be implemented quickly and efficiently even for large data volumes, circumventing the computational overhead typical of hyperspectral processing. Tests of the present disclosure show that SEER can process a 3.7 GB dataset with 1.26 × 10^8 spectra in 6.6 seconds, and a 43.88 GB dataset with 1.47 × 10^9 spectra in 87.3 seconds. SEER provides up to a 67-fold speed increase (fig. 31) and lower virtual memory usage compared with the scikit-learn Python implementation of fast independent component analysis (fastICA). The spectral maps presented herein reduce the dimensionality of these large datasets and assign colors to the final images, providing an overview of the data prior to comprehensive analysis. A comparison of processing speeds between SEER and fastICA for the multispectral fluorescence data shown in figs. 27-30 is presented in table 4. The computation time of SEER ranged between 0.44 seconds (fig. 27) and 6.27 seconds (fig. 28), with corresponding fastICA timings of 3.45 seconds and 256.86 seconds, respectively, that is, speedups in the range of 7.9- to 41-fold (fig. 58), in line with the trend shown in fig. 31.

Table 4: processing-time comparison of SEER vs. independent component analysis (scikit-learn implementation) for the datasets of figs. 27 to 30.

Simulation comparisons with other common visualization methods such as gaussian kernel and peak wavelength selection (fig. 50) show the increased accuracy of SEER (fig. 51) for correlating different colors to closely overlapping spectra under different noise conditions. For highly overlapping spectra, the accuracy is improved by a factor of 1.4 to 2.6 for the 0nm to 8.9nm spectral maximum distance, and 1.5 to 2.3 for overlapping spectra with a maximum separation of 17.8 to 35.6nm (fig. 52 to 53).

Quantification of RGB images by color (colorfulness), contrast and sharpness shows that SEER generally performs better than standard visualization methods (fig. 58). For the data sets of fig. 27-30, the average enhancement of SEER was 2% to 19% for chroma, 11% to 27% for sharpness, and 2% to 8% for contrast (table 5). Then, the present disclosure performs a measurement of Color Quality Enhancement (CQE), a measure of human visual perception of color image quality (table 6). The CQE score for SEER was higher than the standard, 11% to 26% improvement in fig. 27, 7% to 98% improvement in fig. 28, 14% to 25% improvement in fig. 29, and 12% to 15% improvement in fig. 30 (see also fig. 58).

Table 5: average colorfulness, contrast and sharpness scores for figs. 27-30 for the different visualization methods.

Table 6: Color Quality Enhancement scores for the datasets of figs. 27 to 30. Parameter calculations are reported in the Methods section.

Measuring color contrast in fluorescence images

There are inherent difficulties in determining an objective method for measuring the image quality of a color image associated with a fluorescence image. The main challenge of fluorescence images is that for most fluorescence microscopy experiments, there is no reference image because of the inherent uncertainty associated with image acquisition. Thus, any kind of color image quality assessment will need to be based only on the color distribution within the image.

This type of assessment has its own further challenges. Although there have been a number of quantitative methods formulated to determine the quality of the intensity distribution in grayscale images, such methods for color images are still under debate and testing. This lack of suitable methods for color images mainly results from a division between the mathematical representation of the components of different colors and the human perception of those same colors. This partitioning occurs because human color perception varies widely and is non-linear for different colors; while a quantitative representation of any color is typically a linear combination of primary colors, such as red, green, and blue. This non-linear human perception of color is closely related to the concept of hue. In general terms, hue is the dominant wavelength of reflected light. Hues perceived as blue tend to reflect light at the left end of the spectrum, while hues perceived as red tend to reflect light at the right end of the spectrum. Typically, each individual color has a unique overall trait determined by its unique spectrum. Discretizing the spectrum into multiple components does not fully describe the original richness of the color.

Current methods for determining RGB image quality typically adjust (adapt) the grayscale method in two different ways. The first method involves converting a three-channel color image to a single-channel grayscale image before measuring quality. The second method measures the quality of each channel separately and then combines these measurements with different weights. However, both methods face limitations in providing quality values that correlate well with human perception. The first method loses information when converting a color image to grayscale. The second approach attempts to account for the non-linear human perception of color image quality by splitting the color image into three channels and measuring them separately. However, the intrinsic hue of a color is larger than the sum of the individual component colors, because each channel taken alone is not necessarily as colorful as the combined color.

A more complete color metric should take hue into account, for example by measuring the loss of color saturation between the original and the processed image. In summary, owing to this limitation of current methods in measuring color saturation, no true measure of color contrast has yet been established for fluorescence color images.

Flexibility is another advantage of the method of the present disclosure. The user may apply several different standard reference maps to determine which is more appropriate for their data and to enhance the most important image features. The pattern provides supplemental enhancement by adjusting the reference to each data set according to the size and distribution of the spectra in the data set. The scaling maximizes the contrast by enclosing the phasor distribution, which maintains the linearity of the color map. The max and centroid modes shift the vertices of the distribution to new centers, particularly the most frequent spectra in the data set or the weighted "color frequency" centroids of the entire data set. These patterns adjust and refine the specific visualization properties of each graph to the dataset currently being analyzed. As a result, each graph provides increased sensitivity to certain properties of the data, such as amplification of smaller spectral differences or focusing on the dominant wavelength component. The adaptivity of the SEER mode can be demonstrated to be advantageous to visually correct the effect of photo-bleaching in the sample by dynamically changing the vertices of the graph with changes in intensity (fig. 54).

SEER may be applied to fluorescence, as performed herein, or to standard reflectance hyperspectral and multispectral imaging. These phasor remapping tools may be used for applications in fluorescence lifetime or combined methods of spectroscopic and lifetime imaging. For multispectral fluorescence, this method is promising for real-time imaging of multiple fluorophores, as it provides a tool for monitoring and segmenting fluorophores during acquisition. In vivo imaging visualization is another application of SEER. For example, gradient descent maps in combination with de-noising strategies can minimize photo-bleaching and toxicity by enabling the use of lower excitation powers. SEER overcomes challenges in visualization and analysis derived from low signal-to-noise images, such as intrinsic signal autofluorescence imaging. In other complications, such image data may result in a concentrated cluster near the phasor center coordinates. Gradient descent maps overcome this limitation and provide bright and distinguishable colors that enhance subtle details within a dimmed image.

Notably, this approach is generally independent of the dimension being compressed. While this work explores the wavelength dimension, in principle SEER may be used with any n-dimensional dataset where n is greater than two. For example, it may be used to compress and compare the lifetime, spatial or temporal dimensions of multiple datasets. Some limitations should be considered: the SEER false-color representation sacrifices the "true color" of the image, creating inconsistencies with what the human eye expects from the original image, and it does not distinguish identical signals originating from different biophysical events.

New multi-dimensional, multi-modal instruments will generate much larger datasets much faster. SEER offers the ability to handle this explosion of data, supporting the scientific community's growing interest in multiplexed imaging.

Example 13 simulated Hyperspectral test Pattern

To account for the Poisson and detector noise of optical microscopy, the simulated hyperspectral test charts were generated starting from real imaging data of dimensions x: 300 pixels, y: 300 pixels, λ: 32 channels. The S1, S2 and S3 spectra were obtained from zebrafish embryos labeled with only CFP, YFP and RFP, respectively; the spectra in fig. 24a correspond to the central cells of the test chart of fig. 24d. In each cell, the three spectra are represented after shifting their maxima by d1 nm or d2 nm with respect to S2. Each cell thus has its own corresponding versions of the S1, S2 and S3 spectra (fig. 33).

Example 14, Standard RGB visualization

True-color RGB images (figs. 33, 38, 50) are obtained by compressing the hyperspectral cube into the RGB 3-channel color space, generating a Gaussian radial basis function kernel K for each RGB channel. This kernel K serves as a similarity factor and is defined as:

K_i = \exp\!\left(-\frac{(\lambda_i - x')^2}{2\sigma^2}\right)

where x' is the center wavelength of the R, G or B channel. For example, when x' = 650 nm, the associated RGB color space value is (R: 1, G: 0, B: 0). Both x and K are defined as 32 × 1 vectors, representing the 32-channel spectrum of a single pixel and the normalized weights of each R, G and B channel, respectively; i is the channel index of both vectors. K_i represents how similar channel i is to the respective R/G/B channel, and σ is a deviation parameter.

The RGB color space value c is calculated by the dot product of the pixel spectrum and the weight vector K at the corresponding channel R/G/B:

c = \sum_{i=1}^{32} I(\lambda_i)\, K_i

where λ is the vector of wavelengths captured by the spectral detector of an LSM 780 inverted confocal microscope (Carl Zeiss, Jena, Germany) equipped with a lambda module, and λ_i is the center wavelength of channel i. For RGB, the Gaussian kernel centers are set to 650 nm, 510 nm and 470 nm as default values, respectively (figs. 33, 38, 43e, 46e, 47e, 49j).
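A sketch of this kernel-based compression follows, assuming numpy and the default centers given above; the kernel width σ is an assumption, as its value is not specified here:

import numpy as np

def gaussian_kernel_rgb(cube, wavelengths, centers=(650.0, 510.0, 470.0), sigma=50.0):
    # Compress an (..., channels) spectral cube to RGB with one Gaussian
    # kernel per output channel; centers are the defaults quoted above.
    lam = np.asarray(wavelengths, dtype=float)
    out = []
    for x_prime in centers:
        K = np.exp(-(lam - x_prime) ** 2 / (2.0 * sigma ** 2))  # similarity factor
        K /= K.sum()                                            # normalized weights
        out.append((cube * K).sum(axis=-1))                     # dot product with the spectrum
    rgb = np.stack(out, axis=-1)
    return rgb / max(rgb.max(), 1e-12)                          # scale to [0, 1] for display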

The same Gaussian kernels can also be adapted to the dataset, providing spectral contrast stretching in the visualization and focusing it on the most used channels. The average spectrum of the entire dataset is calculated and normalized. The wavelengths at which its intensity crosses 10% (figs. 43f, 46f, 47f, 49f), 20% (figs. 43g, 46g, 47g, 49g) and 30% (figs. 43h, 46h, 47h, 49h) are obtained and used as the centers of the blue and red channels; the green channel is located midway between red and blue. Representations of these adaptations are reported in figs. 50g, 50h and 50i.

True-color 32-channel images (figs. 1c, 28a, 28c, 28e, 28g, 29a, 29d, 29e, 29f, 43c, 46c, 47c, 49c) were rendered as 32-channel maximum intensity projections using Bitplane Imaris (Oxford Instruments, UK). Each channel has a known center wavelength (32 bands from 410.5 nm to 694.9 nm with a bandwidth of 8.9 nm), and each wavelength is associated with a color according to the classical wavelength-to-RGB conversion (fig. 50f). Contrast adjustment (Imaris display adjustment settings) was performed on the intensities of all channels based on the channel with the largest information content. The meaningful range for rendering was identified as the top 90% of the intensity of the normalized average spectrum of the dataset (figs. 43b, 46j, 47j, 49b); channels outside this range were excluded from rendering. In addition, for 1-photon excitation, channels with wavelengths below the laser excitation wavelength (e.g., channels 1 to 5 for a 458 nm laser) were excluded from rendering.

Peak-wavelength representations (figs. 43d, 46d, 47d, 49d, 50 and 52) reconstruct an RGB image by using, for each pixel, the color associated with the wavelength at which the maximum intensity is measured. The wavelength-to-RGB conversion is performed using a python function; a graphical representation is reported in fig. 50f.

Example 15, calculation of accuracy.

The simulated hyperspectral test charts were used to produce different levels of spectral overlap and signal-to-noise ratio (SNR), and compressed RGB images were generated with a variety of RGB visualization methods (figs. 50, 51). Each simulated chart consists of three distinct spectra organized as three concentric squares Q1, Q2, Q3 (fig. 33); the maximum-contrast visualization is therefore expected to show three well-separated colors. To quantify this difference, the normalized colors [0, 1] in each pixel are treated as a set of Euclidean coordinates (x, y, z), and for each pixel the Euclidean distance is calculated:

l_{12} = \sqrt{\sum_{i} \left(p_{Q_1,i} - p_{Q_2,i}\right)^2}

where l_12 is the color distance between squares Q1 and Q2, p_Q1 and p_Q2 are the (R, G, B) vectors of the pixels under consideration, and i is the color coordinate R, G or B. The color distances l_13 (Q1Q3) and l_23 (Q2Q3) are calculated similarly. The accuracy (fig. 52) is calculated as:

accuracy = \frac{l_{12} + l_{13} + l_{23}}{l_{red\text{-}green} + l_{red\text{-}blue} + l_{green\text{-}blue}}

where the denominator is the maximum color distance, l_{red-green} + l_{red-blue} + l_{green-blue}.

Example 16, compressive spectral algorithm and map reference design.

Phasor calculation

For each pixel in the image, a sequence of intensities at different wavelengths, I(λ), is acquired. Each spectrum I(λ) is discrete-Fourier transformed into a complex number g_{x,y,z,t} + i·s_{x,y,z,t}, where i is the imaginary unit and (x, y, z, t) denotes the spatio-temporal coordinates of a pixel in the 5D dataset.

The transformations for the real and imaginary parts are:

g_{x,y,z,t} = \frac{\int_{\lambda_0}^{\lambda_N} I(\lambda) \cos\!\left(\frac{2\pi k \lambda}{N \Delta\lambda}\right) d\lambda}{\int_{\lambda_0}^{\lambda_N} I(\lambda)\, d\lambda} \quad \text{and} \quad s_{x,y,z,t} = \frac{\int_{\lambda_0}^{\lambda_N} I(\lambda) \sin\!\left(\frac{2\pi k \lambda}{N \Delta\lambda}\right) d\lambda}{\int_{\lambda_0}^{\lambda_N} I(\lambda)\, d\lambda}

where λ_0 and λ_N are the initial and final wavelengths, respectively; N is the number of spectral channels; Δλ is the wavelength bandwidth of a single channel; and k is the harmonic. In this work, harmonic k = 2 is utilized.

Standard map reference

The association of the color with each phasor coordinate (g, s) is performed in two steps. First, the reference system is converted from Cartesian coordinates to polar coordinates (r, θ).

These polar values are then converted into the hue, saturation, value (HSV) color model, using the specific settings of each map as listed below. Finally, any color generated outside the r = 1 boundary is set to black.

Gradient descent:

hue = θ
saturation = 1
value = 1 − 0.85·r

Gradient ascent:

hue = θ
saturation = 1
value = r

Radius:

each value of r from 0 to 1 is associated with a level of the jet colormap from the matplotlib package

Angle:

hue = θ
saturation = 1
value = 1
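These recipes might be implemented as in the following sketch (an illustrative helper, not from the original disclosure; matplotlib supplies the jet colormap, as stated above):

import numpy as np
from matplotlib import cm
from matplotlib.colors import hsv_to_rgb

def reference_map_color(g, s, mode="gradient_descent"):
    # Assign an RGB color to each phasor coordinate per the recipes above.
    r = np.hypot(g, s)
    hue = (np.arctan2(s, g) % (2 * np.pi)) / (2 * np.pi)
    one = np.ones_like(r)
    if mode == "gradient_descent":
        hsv = np.stack([hue, one, 1 - 0.85 * r], axis=-1)
    elif mode == "gradient_ascent":
        hsv = np.stack([hue, one, r], axis=-1)
    elif mode == "angle":
        hsv = np.stack([hue, one, one], axis=-1)
    elif mode == "radius":
        rgb = cm.jet(np.clip(r, 0.0, 1.0))[..., :3]   # jet level follows the radius
        rgb[r > 1] = 0.0
        return rgb
    else:
        raise ValueError(mode)
    rgb = hsv_to_rgb(np.clip(hsv, 0.0, 1.0))
    rgb[r > 1] = 0.0                                   # outside r = 1: black
    return rgb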

Tensor map

The visualization of the statistics on the phasor diagram is performed by means of mathematical gradients. The gradient is obtained in a two-step process.

First, the two-dimensional derivative of the phasor-diagram histogram counts is calculated approximately using second-order-accurate central differences.

For differences with unit spacing h along the g (horizontal) and s (vertical) directions, each histogram bin F(g, s) has the central-difference approximations:

\frac{\partial F}{\partial g} \approx \frac{F(g+h,\, s) - F(g-h,\, s)}{2h}

and similarly:

\frac{\partial F}{\partial s} \approx \frac{F(g,\, s+h) - F(g,\, s-h)}{2h}

Second, the square root of the sum of the squares of the differences, D(g, s), is computed as:

D(g, s) = \sqrt{\left(\frac{\partial F}{\partial g}\right)^2 + \left(\frac{\partial F}{\partial s}\right)^2}

the magnitude of the derivative density count is obtained. Using the gradient histogram, the present disclosure then connects phasor coordinates with the same D (s, g) gradient to a profile. All gradients were then normalized to (0, 1). Finally, pixels in the hyperspectral image that correspond to the same contour in the phasor space will be rendered to the same color. In the reference figure, red indicates a high density gradient, typically in the center of the phasor cluster. In contrast, blue indicates a sparse gradient occurring at the edge circumference of the phasor distribution.

Zoom mode

In this mode, the original square standard reference map is transformed into a new bounding box that fits the spectral distribution of each data set.

The transformation process follows these steps. First, a bounding box (width ω, height h) is determined based on the appearance of the clusters on the phasor diagram. Then, the largest ellipsoid fitting the bounding box is determined. Finally, the unit circle of the original graph is deformed into the calculated ellipsoid.

Using polar coordinates, each point P of the standard reference map with phasor coordinates (g_i, s_i) is represented as:

P(g_i, s_i) = P(r_i \cos\theta_i,\; r_i \sin\theta_i) (equation 11)

The ellipsoid has a semi-major axis a = ω/2 and a semi-minor axis b = h/2. Thus, the ellipse equation becomes, in polar coordinates:

rad(\theta_i) = \frac{ab}{\sqrt{(b\cos\theta_i)^2 + (a\sin\theta_i)^2}}

where rad is the ratio used to scale each radius r_i of the reference map to the proportionally corresponding distance in the bounding-box-adapted ellipse. Using the forward mapping, each point P(g_i, s_i) of the standard reference map is geometrically scaled into its new coordinates (g_o, s_o) in the ellipsoid, obtaining:

P'(g_o, s_o) = P'\!\left(rad(\theta_i)\, r_i \cos\theta_i,\; rad(\theta_i)\, r_i \sin\theta_i\right)

This transformation is applied to all standard reference maps to produce the corresponding scaled versions.
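A sketch of the zoom (scaled) mode based on the reconstruction above follows; the recentering of the ellipse on the middle of the data's bounding box is an assumption:

import numpy as np

def zoom_mode(g, s, G_data, S_data):
    # Scale reference-map points (g, s) on the unit circle into the
    # ellipse inscribed in the bounding box of the data's phasor cloud.
    a = (G_data.max() - G_data.min()) / 2.0        # semi-major axis, width/2
    b = (S_data.max() - S_data.min()) / 2.0        # semi-minor axis, height/2
    g0 = (G_data.max() + G_data.min()) / 2.0       # assumed ellipse center
    s0 = (S_data.max() + S_data.min()) / 2.0
    theta = np.arctan2(s, g)
    r = np.hypot(g, s)
    rad = (a * b) / np.sqrt((b * np.cos(theta)) ** 2 + (a * np.sin(theta)) ** 2)
    return g0 + rad * r * np.cos(theta), s0 + rad * r * np.sin(theta)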

Deformation mode

Using a shifted-cone method, each point P(g_i, s_i) is linearly morphed to a new point P'(g_o, s_o). Each standard map reference is first projected onto a 3D conical surface centered at the origin of the phasor diagram and having unit height (figs. 55a-55c). Starting from the edge of the phasor unit circle, each point P on the standard map is linearly assigned a z value. Thus, the original standard map (fig. 55c) can be interpreted as the top view of a right cone, where z = 0 at the phasor unit circle and z = 1 at the origin (fig. 55a).

The apex A of the cone is then shifted to the calculated weighted average or maximum of the original 2D histogram, resulting in a tilted cone with apex A' (fig. 55b).

In such a tilted cone, any horizontal cutting plane is always a circle centered at a point O'. Its projection O'' lies on the line connecting the origin O and the projection A'' of the new apex A' (figs. 55b-55d). As a result, all points of each circle are shifted on the phasor diagram toward the new center A''. Each point P with coordinates (g_i, s_i) is first transformed into the morphed map coordinates (g_o, s_o), from which the corresponding (r_o, θ_o) needed to calculate hue, saturation and value are obtained.

Specifically, the cutting plane intersects the tilted cone in a circle with center O' and radius r' (fig. 55). This cross-section projects onto the phasor plane as a circle of the same radius centered at O''. Using geometric calculations, one obtains:

OO' = \alpha \cdot OA' (equation 17)

where α is a scaling parameter. By the similarity of the triangles above,

\Delta O''OO' \sim \Delta A''OA' (equation 18)

one can obtain:

OO'' = \alpha \cdot OA'' (equation 19)

Furthermore, given a point N' on the circle centered at O' and its projection N'', the same relations also imply:

O''N'' = (1 - \alpha) \cdot ON (equation 20)

which is equivalent to:

r' = (1 - \alpha) \cdot R (equation 21)

where R is the radius of the unit circle of the phasor diagram.

By this method, given the new center A″ and a specific α, a collection of scaled circles centered on the line OA″ is obtained. In the boundary cases, the scaled circle coincides with the unit circle when α = 0 and reduces to the single point A″ when α = 1. Given any cutting plane centered at O′, the radius of its cross section always satisfies the following identity:

$\dfrac{OO''}{OA''} + \dfrac{r'}{R} = 1$ (equation 22)

The coordinates of each point $P'(g_o, s_o)$ of the new deformed map centered at A″ are:

$(g_o, s_o) = \left(g_i + (1 - r_i)\, g_{A''}, \; s_i + (1 - r_i)\, s_{A''}\right)$ (equation 23)

where $(g_{A''}, s_{A''})$ are the coordinates of A″ and $r_i = \sqrt{g_i^2 + s_i^2}$. Finally, we calculate:

$r_o = \sqrt{(g_o - g_{A''})^2 + (s_o - s_{A''})^2}, \qquad \theta_o = \operatorname{atan2}(s_o - s_{A''}, \; g_o - g_{A''})$ (equation 24)
Colors are then assigned based on the newly calculated hue, saturation, and lightness to generate a deformation mode reference.
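The following is a minimal sketch of this deformation ("morph") mode, following the reconstruction of equations 23 and 24 above; the new center (g_c, s_c) is assumed to be the weighted average (or maximum) of the data's 2D phasor histogram, and all names are illustrative, not the HySP implementation.

```python
import colorsys
import numpy as np

def morph_reference(g, s, g_c, s_c):
    """Shift each reference-map point toward the new center A'' = (g_c, s_c)
    and derive hue/saturation from the shifted polar coordinates."""
    r_i = np.sqrt(g**2 + s**2)
    # Equation 23: the circle of radius r_i shifts toward A'' by (1 - r_i).
    g_o = g + (1.0 - r_i) * g_c
    s_o = s + (1.0 - r_i) * s_c
    # Equation 24: polar coordinates relative to the new center A''.
    r_o = np.sqrt((g_o - g_c)**2 + (s_o - s_c)**2)
    theta_o = np.arctan2(s_o - s_c, g_o - g_c)
    hue = (theta_o % (2 * np.pi)) / (2 * np.pi)   # angle -> hue
    sat = np.clip(r_o, 0.0, 1.0)                  # radius -> saturation
    rgb = [colorsys.hsv_to_rgb(h_, s_, 1.0)
           for h_, s_ in zip(hue.ravel(), sat.ravel())]
    return np.asarray(rgb).reshape(hue.shape + (3,))
```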

Color image quality calculation

Colorfulness

Due to the inherent lack of "ground truth" in experimental fluorescence microscopy images, the present disclosure utilizes established no-reference models to calculate the color quality of the images. Colorfulness is one of the three parameters (along with sharpness and contrast) used by Panetta et al. to quantify the overall quality of a color image. Two opponent color spaces are defined as:

$\alpha = R - G$ (equation 25)

$\beta = 0.5(R + G) - B$ (equation 26)

where R, G, and B are the red, green, and blue channels, respectively, and α and β are the red-green and yellow-blue opponent spaces. Colorfulness as used herein is defined as:

$\text{colorfulness} = 0.02 \cdot \log\!\left(\dfrac{\sigma_\alpha^2}{|\mu_\alpha|^{0.2}}\right) \cdot \log\!\left(\dfrac{\sigma_\beta^2}{|\mu_\beta|^{0.2}}\right)$ (equation 27)

where $\sigma_\alpha^2$, $\sigma_\beta^2$ and $\mu_\alpha$, $\mu_\beta$ are the variances and means of the α and β spaces, respectively.
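A minimal sketch of this colorfulness measure follows (equations 25 to 27), assuming an RGB image array; the small epsilons guarding the logarithm and division are illustrative additions.

```python
import numpy as np

def colorfulness(img):
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    alpha = r - g                # red-green opponent space (equation 25)
    beta = 0.5 * (r + g) - b     # yellow-blue opponent space (equation 26)
    eps = 1e-12
    term_a = np.log(alpha.var() / (np.abs(alpha.mean()) ** 0.2 + eps) + eps)
    term_b = np.log(beta.var() / (np.abs(beta.mean()) ** 0.2 + eps) + eps)
    return 0.02 * term_a * term_b    # equation 27
```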

Sharpness

The present disclosure utilizes EME, a Weber-law-based measure of enhancement. EME is defined as follows:

$\text{EME} = \dfrac{2}{k_1 k_2} \sum_{l=1}^{k_1} \sum_{k=1}^{k_2} \log\!\left(\dfrac{I_{\max,k,l}}{I_{\min,k,l}}\right)$ (equation 28)

where $k_1 \times k_2$ are the blocks into which the image is divided, and $I_{\max,k,l}$ and $I_{\min,k,l}$ are the maximum and minimum intensities in each block. EME, weighted by a coefficient $\lambda_c$ associated with each color component, has been shown to correlate with human perception of sharpness in color images; the sharpness of a color image is computed as the weighted sum $\sum_c \lambda_c \, \text{EME}_c$.

In accordance with the NTSC standard and the literature [58], the weights of the different color components used herein are $\lambda_R = 0.299$, $\lambda_G = 0.587$, and $\lambda_B = 0.114$.
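A minimal sketch of the EME-based sharpness measure (equation 28) with the NTSC weights follows; the 8 x 8 block division is an illustrative choice.

```python
import numpy as np

NTSC_WEIGHTS = (0.299, 0.587, 0.114)   # lambda_R, lambda_G, lambda_B

def eme(channel, k1=8, k2=8):
    """Weber-based enhancement measure over k1 x k2 image blocks."""
    h, w = channel.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = channel[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(float)
            i_max = max(block.max(), 1.0)    # clamp to avoid log(0)
            i_min = max(block.min(), 1.0)
            total += np.log(i_max / i_min)
    return 2.0 / (k1 * k2) * total

def sharpness(img):
    """Weighted sum of per-channel EME values."""
    return sum(w_c * eme(img[..., c]) for c, w_c in enumerate(NTSC_WEIGHTS))
```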

Contrast

The present disclosure utilizes AME, a Michelson-law-based measure of enhancement, which is an effective assessment tool for contrast in grayscale images and is designed to yield larger metric values for higher-contrast images. AME is defined as:

$\text{AME} = \dfrac{1}{k_1 k_2} \sum_{l=1}^{k_1} \sum_{k=1}^{k_2} 20 \log\!\left(\dfrac{I_{\max,k,l} - I_{\min,k,l}}{I_{\max,k,l} + I_{\min,k,l}}\right)$ (equation 29)

where $k_1 \times k_2$ are the blocks into which the image is divided, and $I_{\max,k,l}$ and $I_{\min,k,l}$ are the maximum and minimum intensities in each block. The contrast value of a color image is then calculated as:

$\text{contrast} = \sum_{c} \lambda_c \, \text{AME}_c$ (equation 30)

where the same weights $\lambda_c$ as for sharpness are used.
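A minimal sketch of the AME-based contrast measure follows (equations 29 and 30), reusing the NTSC weights from the sharpness sketch; the Michelson-ratio form of equation 29 follows the reconstruction above.

```python
import numpy as np

def ame(channel, k1=8, k2=8):
    """Michelson-based enhancement measure over k1 x k2 image blocks."""
    h, w = channel.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = channel[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(float)
            i_max, i_min = block.max(), block.min()
            michelson = (i_max - i_min) / (i_max + i_min + 1e-12)
            total += 20.0 * np.log(michelson + 1e-12)
    return total / (k1 * k2)

def contrast(img, weights=(0.299, 0.587, 0.114)):
    """Weighted sum of per-channel AME values (equation 30)."""
    return sum(w_c * ame(img[..., c]) for c, w_c in enumerate(weights))
```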

Color quality enhancement

The present disclosure utilizes Color Quality Enhancement (CQE), a linear-combination method that merges colorfulness, sharpness, and contrast into a single value having a strong, linear correspondence with human visual perception of color image quality. The CQE is calculated as:

$\text{CQE} = c_1 \cdot \text{colorfulness} + c_2 \cdot \text{sharpness} + c_3 \cdot \text{contrast}$ (equation 31)

where the linear combination coefficients of the CQE measure are set to evaluate contrast change according to the values reported in the literature: $c_1 = 0.4358$, $c_2 = 0.1722$, and $c_3 = 0.3920$.
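A minimal sketch of CQE (equation 31) follows, combining the colorfulness, sharpness, and contrast sketches above with the literature coefficients.

```python
def cqe(img):
    c1, c2, c3 = 0.4358, 0.1722, 0.3920
    return c1 * colorfulness(img) + c2 * sharpness(img) + c3 * contrast(img)
```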

Example 17 Mouse Lines

Mouse imaging was approved by the Institutional Animal Care and Use Committee (IACUC) of Children's Hospital Los Angeles (license No. 38616) and the University of Southern California (license No. 20685). Experimental studies on vertebrates complied with institutional, national, and international ethical guidelines. Animals were kept under a 13:11-hour light:dark cycle. The animals breathed twice-filtered air, room temperature was maintained at 68 °F to 73 °F, and cages were replaced weekly. All of these factors help to minimize intra- and inter-experimental variability. Adult 8-week-old C57Bl mice were euthanized with Euthasol. Tracheas were rapidly collected from the mice, washed in PBS, and cut longitudinally along the muscularis mucosae to expose the lumen. A 3 mm × 3 mm tracheal sheet was cut out and placed on a microscope slide for imaging.

Example 18 Zebrafish Lines

Lines were raised and maintained according to standard literature practice and in accordance with the guidelines for the care and use of laboratory animals provided by the University of Southern California. Fish samples were part of a protocol approved by the IACUC (license No. 12007 USC).

The transgenic FlipTrap Gt(desm-Citrine)ct122a/+ line resulted from a previously reported screen; Tg(kdrl:eGFP)s843 was provided by the Stainier laboratory (Max Planck Institute for Heart and Lung Research). Tg(ubi:Zebrabow) was a gift from Alex Schier. Controlled recombination of the fluorophores was obtained by crossing homozygous Tg(ubi:Zebrabow) adults with the Tg(hsp70l:Cerulean-P2A-CreERT2) line. Embryos were raised at 28.5 °C in egg water (60 µg/ml Instant Ocean and 75 µg/ml CaSO4 in Milli-Q ultrapure water), with 0.003% (w/v) 1-phenyl-2-thiourea (PTU) added at about 18 hpf to reduce pigmentation.

Zebrafish samples with triple fluorescence were obtained by crossing Gt(desm-Citrine)ct122a/+ with Tg(kdrl:eGFP) fish, followed by injection of 100 pg of mRNA encoding H2B-Cerulean per embryo at the one-cell stage, as described in previous work [29]. Gt(desm-Citrine)ct122a/+; Tg(kdrl:eGFP); H2B-Cerulean samples were imaged with a 458 nm laser to excite Cerulean, Citrine, and eGFP, using a narrow 458 nm to 561 nm dichroic to separate excitation from fluorescence emission.

Example 19 plasmid construction

pDestTol2pA2-hsp70l:Cerulean-P2A-CreERT2 (used to generate the Tg(hsp70l:Cerulean-P2A-CreERT2) line)

The coding sequences for Cerulean, CreERT2, and the woodchuck hepatitis virus post-transcriptional regulatory element (WPRE) were amplified from the following vectors: the Tg(bactin2:cerulean-cre) vector using primers #1 and #2 (the complete list of primers is reported in Table 7), pCAG-ERT2CreERT2 (Addgene #13777) using primers #3 and #4, and the Tg(PGK1:H2B-chFP) vector using primers #5 and #6. The Cerulean and CreERT2 sequences were then fused using a synthetic linker encoding the P2A peptide. The resulting Cerulean-P2A-CreERT2 and WPRE sequences were cloned into pDONR221 and pDONR P2R-P3 (Thermo Fisher Scientific), respectively. Subsequent multisite Gateway reactions were performed using the Tol2kit vectors according to the developer's manual [66]. p5E-hsp70l (Tol2kit #222), pDONR221-Cerulean-P2A-CreERT2, and pDONR P2R-P3-WPRE were assembled into pDestTol2pA2 (Tol2kit #394).

pDestTol2pA2-fli1:mKO2 (used to generate the Tg(fli1:mKO2) line)

Table 7. Primer list for plasmid construction

The coding sequence of mKO2 was amplified from mKO2-N1 (Addgene #54625) using primers #7 and #8 and cloned into pDONR221. Then, p5E-fli1ep (Addgene #31160), pDONR221-mKO2, and pDONR P2R-P3-WPRE were assembled into pDestTol2pA2 as described above.

Example 20 Microinjection and screening of transgenic zebrafish lines

A 2.3 nL solution containing 20 pg/nL plasmid DNA and 20 pg/nL tol2 mRNA was injected into one-cell-stage embryos obtained by crossing AB with casper zebrafish. Injected F0 embryos were raised and crossed to casper zebrafish for screening. F1 embryos of the prospective Tg(hsp70l:Cerulean-P2A-CreERT2) and Tg(fli1:mKO2) lines were screened, respectively, for ubiquitous Cerulean expression after a 30-minute heat shock at 37 °C and for mKO2 expression restricted to the vasculature. Positive F1 adults were then outcrossed to casper zebrafish, and their offspring with the casper phenotype were used in experiments when 50% transgene transmission was observed in subsequent generations, indicating a single transgene insertion.

Example 21 sample preparation and multispectral image acquisition and instrumentation

Images were acquired on a Zeiss LSM 780 inverted confocal microscope (Carl Zeiss, Jena, Germany) equipped with a QUASAR detector. A typical data set includes 32 spectral channels covering wavelengths from 410.5 nm to 694.9 nm with an 8.9 nm bandwidth, generating an (x, y, λ) image cube. The detailed acquisition parameters are reported in Table 8.

Table 8. Parameters for in vivo imaging. All data were acquired at 16-bit depth using an LD C-Apochromat 40x/1.1 W objective.

Zebrafish samples were prepared for in vivo imaging by placing 5 to 6 embryos at 24 to 72 hpf in a 1% agarose (cat. 16500-100, Invitrogen) mold created using a custom-designed negative plastic mold [45] on an imaging dish with a No. 1.5 coverglass bottom (cat. D5040P, WillCo Wells). Embryo stability was ensured by adding 2 ml of 1% UltraPure™ low-melting-point agarose (cat. 16520-050, Invitrogen) solution, prepared in 30% Danieau (17.4 mM NaCl, 210 µM KCl, 120 µM MgSO4·7H2O, 180 µM Ca(NO3)2, 1.5 mM HEPES buffer in water, pH 7.6) with 0.003% PTU and 0.01% tricaine, on top of the mounted embryos. After the agarose cured at room temperature (1 to 2 minutes), the imaging dish was filled with 30% Danieau solution and 0.01% tricaine at 28.5 °C. Imaging was performed on the inverted confocal microscope by positioning the imaging dish on the microscope stage. For the Tg(ubi:Zebrabow) samples, embryos were heat shocked in a 50 ml centrifuge tube (falcon tube) in a 37 °C water bath 15 hours post fertilization to initiate expression of CreERT2 and then returned to the 28.6 °C incubator. To initiate recombination of the zebrabow transgene, 5 µM 4-OHT (Sigma; H7904) was added to the medium 24 hours post fertilization. Tg(ubi:Zebrabow) samples were imaged using a 458 nm laser to excite CFP, YFP, and RFP in combination with a narrow 458 nm dichroic.

Mouse trachea samples were collected from wild-type C57Bl mice and mounted on coverslips with sufficient phosphate-buffered saline to avoid dehydration of the samples. Imaging was performed in 2-photon mode with excitation at 740 nm and a 690+ nm dichroic.

Example 22, Non-descanned (NDD) multiphoton fluorescence lifetime imaging (FLIM) and analysis

Fluorescence lifetime imaging microscopy (FLIM) data were acquired with a two-photon microscope (Zeiss LSM 780 inverted; Carl Zeiss, Jena, Germany) equipped with a Ti:Sapphire laser system (Coherent Chameleon Ultra II; Coherent, Santa Clara, California) and an ISS A320 FastFLIM unit (ISS, Champaign, Illinois). The objective used was a 2-photon-optimized 40x/1.1 NA water-immersion objective (C-Apochromat Korr; Carl Zeiss, Jena, Germany). Images were collected at 256 × 256 pixels with a pixel dwell time of 12.6 µs/pixel. The excitation light was separated from the fluorescence emission using a dichroic filter (690+ nm). Fluorescence was detected using a combination of a hybrid photomultiplier tube (R10467U-40; Hamamatsu, Japan) and a 460/80 nm bandpass filter. Acquisition was performed using VistaVision software (ISS). The excitation wavelength was 740 nm, and the average power at the sample was about 7 mW. Lifetime calibration of the frequency-domain system was performed by measuring the known single-exponential lifetime of coumarin 6 (2.55 ns). FLIM data were collected until 100 counts were acquired in the brightest pixel of the image.

Data were processed using SimFCS software developed by the Gratton Lab (Laboratory for Fluorescence Dynamics (LFD), University of California, Irvine, www.lfd.uci.edu). FLIM analysis of intrinsic fluorophores was performed as previously described and reported in detail. Phasor coordinates (g, s) were obtained by Fourier transformation. Following the disclosed protocol, cluster identification was used to associate specific regions in the phasor plot with pixels in the FLIM data set.
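The following is a minimal sketch of how frequency-domain phasor coordinates (g, s) can be computed from a time-domain decay histogram; the 80 MHz repetition rate and all names are illustrative assumptions, not the SimFCS implementation.

```python
import numpy as np

def flim_phasor(decay, rep_rate_hz=80e6, harmonic=1):
    """Fourier phasor of a fluorescence decay sampled over one laser period;
    decay has shape (..., n_time_bins)."""
    n = decay.shape[-1]
    t = np.arange(n) / (n * rep_rate_hz)          # bin times over one period
    omega = 2 * np.pi * rep_rate_hz * harmonic    # angular modulation frequency
    total = decay.sum(axis=-1) + 1e-12
    g = (decay * np.cos(omega * t)).sum(axis=-1) / total
    s = (decay * np.sin(omega * t)).sum(axis=-1) / total
    return g, s

# A single-exponential lifetime tau (e.g., 2.55 ns for coumarin 6) lands on
# the universal semicircle: g = 1/(1 + (omega*tau)**2), s = omega*tau * g.
```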

Example 23 selection of harmonics for visualization

The distribution of spectral wavelengths on the phasor plot depends strongly on the harmonic number used. Typically, the first and second harmonics are used to obtain hyperspectral phasor values, owing to the visualization limitations imposed by the branches of the Riemann surface in complex space.

The first harmonic results in a spectral distribution that covers approximately 3π/2 radians for spectra in the visible range (400 nm to 700 nm), following a counterclockwise path inside the unit circle. As a result, spectra separated by any peak-to-peak distance appear at different locations on the phasor plot. However, the first harmonic uses the phasor space less efficiently, leaving π/2 radians unused, and yields a lower dynamic range of separation, as can be seen in FIGS. 39 and 40.

Similarly, for spectra in the visible range (400 nm to 700 nm), the second harmonic spans approximately (3/2 + 2)π radians on the phasor plot, distributing the spectra in a more expanded manner within the unit circle, simplifying the differentiation of spectra that overlap more closely, and providing a higher dynamic range of separation, as seen in FIGS. 36 and 37. The disadvantage of this harmonic is an overlap region spanning from orange to deep-red fluorescence. In this region, spectra separated by 140 nm (in the system of the present disclosure, which has 32 bands from 410.5 nm to 694.9 nm with a bandwidth of 8.9 nm) may end up overlapping within the phasor plot. In this scenario, the second harmonic cannot distinguish those widely separated spectra, and the first harmonic must be used instead. With SEER, it is possible to quickly verify, and change, which harmonic is used for visualization within the HySP software.
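The following is a minimal sketch of spectral phasor coordinates at a chosen harmonic for a 32-channel spectrum (410.5 nm to 694.9 nm, 8.9 nm bandwidth), demonstrating how two narrow peaks about 140 nm apart (roughly 16 channels) collapse onto the same phasor angle at the second harmonic but remain distinct at the first; all names are illustrative.

```python
import numpy as np

N = 32  # spectral channels

def spectral_phasor(spectrum, harmonic=2):
    """Fourier phasor (g, s) of an intensity spectrum at a given harmonic."""
    k = np.arange(N)
    total = spectrum.sum() + 1e-12
    g = (spectrum * np.cos(2 * np.pi * harmonic * k / N)).sum() / total
    s = (spectrum * np.sin(2 * np.pi * harmonic * k / N)).sum() / total
    return g, s

peak_a, peak_b = np.zeros(N), np.zeros(N)
peak_a[8], peak_b[24] = 1.0, 1.0     # two narrow peaks 16 channels apart
print(spectral_phasor(peak_a, 2), spectral_phasor(peak_b, 2))  # same point
print(spectral_phasor(peak_a, 1), spectral_phasor(peak_b, 1))  # distinct
```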

In the common scenario of imaging with a single laser line, most of the signal emitted by multiple common fluorophores spans much less than 150 nm, because Stokes shifts are typically in the range of 20 nm to 25 nm. Fluorescent proteins whose emissions are separated by 140 nm generally do not have well-overlapping excitation spectra, requiring a second excitation wavelength to obtain signal.

The SEER method proposed here utilizes the second harmonic in order to maximize the dynamic range of the phasor space and separate closely overlapping spectra. However, SEER works seamlessly with the first harmonic as well, maintaining rapid visualization of multiple fluorophores whose peak spectral wavelengths may be far apart.

Example 24 Color visualization limitations of SEER

The SEER maps are constructed from the frequency-domain values generated by applying the phasor method to hyperspectral and multispectral fluorescence data. RGB colors are used to represent these values directly. In this way, the quality of the color separation has a maximum resolution limited by the spectral separation provided by the phasor method itself. The SEER maps will therefore assign different colors to spectra with a higher amount of fluorescence signal relative to noise (high signal-to-noise) and to combinations with a higher amount of noise relative to signal (low signal-to-noise), as long as the phasor method is able to distinguish these spectra. In scenarios where the spectra arising from two different effects are identical, e.g., low protein expression in an outer layer versus high expression attenuated at greater depth, the phasor method and the SEER maps will not be able to distinguish the two effects in their current embodiments. Separating these two effects is a distinct and complex problem that depends on additional factors in the optical microscope components, sample, labeling, multispectral imaging method, and experimental design; the present disclosure considers this separation outside the present scope and a project in its own right.

Any combination of the above features/configurations is within the scope of the present disclosure.

The components, steps, features, objects, benefits, and advantages that have been discussed are merely illustrative. Neither they nor the discussion relating to them is intended to limit the scope of protection in any way. Numerous other embodiments are also contemplated, including embodiments that have fewer, additional, and/or different components, steps, features, objects, benefits, and/or advantages, and embodiments in which components and/or steps are arranged and/or ordered differently.

Unless otherwise indicated, all measurements, values, nominal values, positions, sizes, dimensions and other specifications set forth in this specification (including the appended claims) are approximate and not precise. They are intended to have a reasonable range consistent with their associated functions and practices in the art to which they pertain.

All articles, patent applications, and other publications cited in this disclosure are incorporated herein by reference.

When used in the claims, the phrase "means for" is intended to and should be interpreted to include the corresponding structures and materials that have been described, and their equivalents. Similarly, the phrase "step for", when used in a claim, is intended to and should be interpreted to include the corresponding acts that have been described, and their equivalents. The absence of these phrases from a claim means that the claim is not intended to and should not be interpreted as being limited to those corresponding structures, materials, or acts, or to their equivalents.

Relational terms such as "first" and "second," and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The term "comprising" and any other variations thereof, when used in conjunction with a list of elements in the specification or claims, is intended to indicate that the list is not exclusive and that other elements may be included. Similarly, an element preceded by "a" or "an" does not exclude the presence of additional elements of the same type, without further restriction.

None of the claims is intended to embrace subject matter that fails to satisfy the requirements of sections 101, 102, or 103 of the patent statutes, nor should any claim be interpreted in such a manner. Any unintended coverage of such subject matter is hereby disclaimed. Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is recited in the claims.

The abstract is provided to help the reader quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing detailed description, various features are grouped together in various embodiments to streamline the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as separately claimed subject matter.
