Compressed acquisition of microscopic images

Document No.: 54516    Publication date: 2021-09-28

Reading note: This technology, Compressed Acquisition of Microscopic Images, was designed and created by Ben Leshem, Eran Small, Itai Hayut, Erez Na'aman, and Eyal Ben-Bassat on 2019-12-19. Its main content is summarized in the following abstract: A microscope for computational imaging may include an illumination source configured to illuminate a sample with a plurality of wavelengths, an image sensor, an objective lens to image the sample onto the image sensor, and a processor operably coupled to the illumination assembly and the image sensor. The processor may be configured to acquire a first image dataset from a sample illuminated at a first wavelength using a first set of illumination conditions. The processor may be further configured to acquire a second image dataset from the sample illuminated at a second wavelength using a second set of illumination conditions having a second number of illumination conditions. The second set of illumination conditions includes fewer illumination conditions than the first set in order to reduce acquisition time. The processor may be configured to combine the first image dataset and the second image dataset into a computed reconstructed image of the sample.

1. A method for generating a computed reconstructed image, the method comprising:

acquiring, with an image sensor, a first image dataset from a sample illuminated with a first set of illumination conditions, the first set of illumination conditions comprising a first number of illumination conditions, each illumination condition comprising a first wavelength;

acquiring, with the image sensor, a second image dataset from the sample illuminated with a second set of illumination conditions, the second set of illumination conditions comprising a second number of illumination conditions, each illumination condition comprising a second wavelength, wherein the first number is greater than the second number; and

combining the first image data set and the second image data set into a computed reconstructed image of the sample.

2. The method of claim 1, wherein the computed reconstructed image includes a first spatial frequency bandwidth corresponding to the first wavelength and a second spatial frequency bandwidth corresponding to the second wavelength, the first spatial frequency bandwidth being greater than the second spatial frequency bandwidth.

3. The method of claim 2, wherein the first spatial frequency bandwidth is greater than a spatial frequency bandwidth of one or more images acquired by the image sensor with the first set of illumination conditions at the first wavelength.

4. The method of claim 2, wherein the second spatial frequency bandwidth is greater than a spatial frequency bandwidth of one or more images acquired by the image sensor with the second set of illumination conditions at the second wavelength.

5. The method of claim 2, wherein the computed reconstructed image comprises one or more of increased contrast or aberration correction for the second wavelength based at least in part on the first image dataset from the first wavelength.

6. The method of claim 1, further comprising:

providing the reconstructed image on a display with a first spatial frequency bandwidth and a first user perceivable color corresponding to the first wavelength and a second spatial frequency bandwidth and a second user perceivable color corresponding to the second wavelength, the first spatial frequency bandwidth being greater than the second spatial frequency bandwidth and a spatial frequency bandwidth of one or more images acquired by the image sensor at the first wavelength with the first set of illumination conditions.

7. The method of claim 1, wherein the first number is at least 2 times greater than the second number and a second acquisition time associated with the second image dataset is no more than half of a first acquisition time associated with the first image dataset, and wherein a spatial frequency bandwidth of the computed reconstructed image is at least 1.5 times greater than a spatial frequency bandwidth of each of a plurality of images acquired with the first set of illumination conditions.

8. The method of claim 1, wherein the first image dataset comprises a plurality of images of the sample and the second image dataset comprises one or more images of the sample, and wherein the second image dataset comprises fewer images of the sample than the first image dataset.

9. The method of claim 1, wherein the computed reconstructed image includes one or more of an increased spatial frequency bandwidth, a correction for optical aberrations, or an increase in image contrast.

10. The method of claim 9, wherein the first image dataset is converted to a spatial frequency space and mapped to spatial frequencies in the spatial frequency space based on the first set of illumination conditions to provide an increased spatial frequency bandwidth of the computed reconstructed image compared to a first spatial frequency bandwidth of each of a plurality of images acquired with the first set of illumination conditions.

11. The method of claim 9, wherein the image sensor includes a spatial frequency bandwidth and the increased spatial frequency bandwidth of the computed reconstructed image is greater than the spatial frequency bandwidth of the image sensor divided by a magnification of an image of the sample onto the image sensor.

12. The method of claim 9, wherein the first image data set includes a first plurality of images, each image including a first spatial frequency bandwidth, and the increased spatial frequency bandwidth of the computed reconstructed image is greater than the first spatial frequency bandwidth of each image of the first plurality of images.

13. The method of claim 12, wherein the first plurality of images includes features of the sample that are not resolved with the first spatial frequency bandwidth of each image of the first plurality of images, and the computed reconstructed image includes the features of the sample that are resolved with the increased spatial frequency bandwidth of the computed reconstructed image.

14. The method of claim 9, wherein the correction of optical aberrations is provided by separating aberration information from sample information so as to reduce the effect of optical aberrations on the computed reconstructed image, and optionally wherein the aberration information comprises aberration spatial frequency and phase associated with optics used to image the sample on the image sensor, and optionally wherein the sample information comprises sample spatial frequency and phase associated with the structure of the sample.

15. The method of claim 9, wherein the increased image contrast of the computed reconstructed image is provided by computationally amplifying the high spatial frequencies of the computed reconstructed image to better represent the sample.

16. The method of claim 9, wherein the computed reconstructed image includes the increased contrast, and wherein the increased contrast includes an increased ratio of high spatial frequencies to low spatial frequencies in the computed reconstructed image as compared to a ratio of high spatial frequencies to low spatial frequencies for each of a plurality of images of the first image dataset.

17. The method of claim 1, wherein the first image data set and the second image data set are processed separately to generate a first computed reconstructed image from the first image data set and a second computed reconstructed image from the second data set, and wherein the first computed reconstructed image is combined with the second computed reconstructed image to generate the computed reconstructed image.

18. The method of claim 1, wherein the first image dataset comprises a first plurality of images and the second image dataset comprises one or more images, and wherein the first plurality of images and the one or more images are processed together to generate the computed reconstructed image.

19. The method of claim 18, wherein the one or more images comprise a single image acquired with the second set of illumination conditions.

20. The method of claim 18, wherein the one or more images comprise a second plurality of images.

21. The method of claim 1, wherein the computed reconstructed image comprises a color image including two or more of a red channel, a green channel, or a blue channel.

22. The method of claim 21, wherein the first wavelength corresponds to one of the red, green, or blue channel and the second wavelength corresponds to another of the red, blue, or green channel.

23. The method of claim 22, wherein the computed reconstructed image is displayed on a display with a first spatial frequency bandwidth for a first color channel corresponding to the first wavelength and with a second spatial frequency bandwidth for a second color channel corresponding to the second wavelength, the first spatial frequency bandwidth being greater than the second spatial frequency bandwidth.

24. The method of claim 23, wherein the first wavelength comprises green light, the first channel comprises the green channel, the second wavelength comprises red or blue light, and the second color channel comprises the red or blue channel, and wherein the green channel is displayed on the display with the first spatial frequency bandwidth and the red or blue channel is displayed on the display with the second spatial frequency bandwidth.

25. The method of claim 24, wherein the computed reconstructed image includes the red, green, and blue channels, and wherein the second wavelength includes red light corresponding to the red channel and a third wavelength includes blue light corresponding to a third channel, wherein the third channel is displayed on the display with a third spatial frequency bandwidth that is smaller than the first spatial frequency bandwidth.

26. The method of claim 21, wherein the image sensor comprises a sensor comprising a two-dimensional array of pixels, a first color of the first image dataset corresponds to the first wavelength, and a second color of the second image dataset corresponds to the second wavelength, and wherein the computed reconstructed image is mapped to the red, green, and blue channels based on the first and second wavelengths.

27. The method of claim 26, wherein the image sensor comprises a grayscale image sensor comprising the two-dimensional array of pixels.

28. The method of claim 26, wherein the image sensor comprises a color image sensor comprising a two-dimensional array of pixels and a color filter array comprising a plurality of color filters arranged over the two-dimensional array.

29. The method of claim 28, wherein the first image data set is determined based on the first wavelength and a first absorption characteristic of the color filter at the first wavelength, and the second image data set is determined based on the second wavelength and a second absorption characteristic of the color filter at the second wavelength.

30. The method of claim 28, wherein the first image data set and the second image data set are combined according to a first absorption characteristic of the color filter at the first wavelength and a second absorption characteristic of the color filter at the second wavelength to generate the computed reconstructed image.

31. The method of claim 28, wherein the first wavelength is different from the second wavelength, and wherein a portion of the first image dataset and a portion of the second image dataset are acquired substantially simultaneously, with one or more of the first set of illumination conditions illuminating the sample when the sample is illuminated with one or more of the second set of illumination conditions.

32. The method of claim 1, wherein the first wavelength is different from the second wavelength.

33. The method of claim 32, wherein the first wavelength comprises a first color and the second wavelength comprises a second color different from the first color.

34. The method of claim 32, wherein the first wavelength comprises a first peak of a first illumination source emitting a first wavelength distribution comprising a first full width half maximum, and wherein the second wavelength comprises a second peak of a second wavelength distribution comprising a second full width half maximum, and wherein the first full width half maximum does not overlap the second full width half maximum.

35. The method of claim 32, wherein the first wavelength is within one of the following ranges and the second wavelength is within a different one of the following ranges: an ultraviolet range from about 200 nanometers (nm) to about 380 nanometers (nm), a violet range from about 380nm to about 450nm, a blue range from about 450nm to about 485nm, a cyan range from about 485nm to 500nm, a green range from about 500nm to 565nm, a yellow range from about 565nm to about 590nm, an orange range from about 590nm to 625nm, a red range from about 625nm to about 740nm, or a near infrared range from about 700nm to about 1100 nm.

36. The method of claim 33, wherein the first wavelength is within one of the ranges and the second wavelength is within a different one of the ranges.

37. The method of claim 1, further comprising:

acquiring, with the image sensor, a third image dataset from the sample illuminated with a third wavelength using a third set of illumination conditions;

wherein the third data set is combined with the first image data set and the second image data set to generate the computed reconstructed image.

38. The method of claim 37, wherein the third set of illumination conditions comprises a third number of illumination conditions that is less than the first number of illumination conditions.

39. The method of claim 37, further comprising:

acquiring, with the image sensor, N additional image datasets from the sample illuminated with N additional wavelengths using additional N sets of illumination conditions;

wherein the N additional data sets are combined with the first image data set, the second image data set, and the third image data set to generate the computed reconstructed image;

wherein N comprises an integer of at least one.

40. The method of claim 39, wherein N comprises an integer in the range from about 10 to 100.

41. The method of claim 39, wherein the computed reconstructed image comprises a hyperspectral image.

42. The method of claim 1, wherein the computed reconstructed image comprises one or more of a 2D image, a 3D image, a 2D intensity image, a 3D intensity image, a 2D phase image, a 3D phase image, a 2D fluorescence image, a 3D fluorescence image, a 2D hyperspectral image, or a 3D hyperspectral image.

43. The method of claim 1, wherein the first data set and the second data set correspond to a first depth of the sample, the method further comprising:

adjusting a focus of the microscope to image the sample at a plurality of depths; and

repeating the acquiring and combining steps to generate the computed reconstructed image comprising a plurality of computed reconstructed images at different depths corresponding to the plurality of depths.

44. The method of claim 1, further comprising determining the first wavelength.

45. The method of claim 44, wherein the first wavelength is user defined.

46. The method of claim 44, wherein determining the first wavelength further comprises:

acquiring, with the image sensor, an initial image dataset of the sample illuminated with a plurality of wavelengths;

determining that a first image of the initial image dataset includes more information than a second image of the initial image dataset;

selecting a first wavelength of the plurality of wavelengths corresponding to the first image as the first wavelength; and

selecting a second wavelength of the plurality of wavelengths corresponding to the second image as the second wavelength.

47. The method of claim 46, wherein the information comprises spatial frequency information.

48. The method of claim 46, further comprising determining the first set of illumination conditions based on information identified from the initial image dataset.

49. The method of claim 48, wherein determining the first set of illumination conditions comprises determining one or more of a number of light sources, a position of a light source, a combination of positions of a plurality of light sources, an illumination angle, a combination of illumination angles, a number of illuminations, a position of a diffuser, a pattern of light, a filter, a mask, or a focal point of the sample.

50. The method of claim 46, further comprising determining a computational process for reconstructing the first image based on the initial image dataset.

51. The method of claim 1, wherein the computed reconstructed image comprises a three-dimensional image.

52. The method of claim 1, wherein the second set of illumination conditions is different from the first set of illumination conditions.

53. The method of claim 1, wherein the second image dataset is smaller than the first image dataset.

54. The method of claim 1, wherein the first image dataset is acquired after the second image dataset.

55. A microscope for image reconstruction, the microscope comprising:

an illumination assembly configured to illuminate a sample with a plurality of wavelengths at a plurality of angles;

an image sensor;

an objective lens for imaging the sample illuminated with the illumination assembly onto the image sensor; and

a processor operably coupled to the illumination assembly and the image sensor, the processor configured with instructions to perform the method of any of the preceding claims.

56. The microscope of claim 55, wherein the plurality of wavelengths includes one or more of a violet wavelength in a range from about 380 nanometers (nm) to about 450 nanometers (nm), a blue wavelength in a range from about 450nm to about 485nm, a cyan wavelength in a range from about 485nm to 500nm, a green wavelength in a range from about 500nm to 565nm, a yellow wavelength in a range from about 565nm to about 590nm, an orange wavelength in a range from about 590nm to 625nm, a red wavelength in a range from about 625nm to about 740nm, an infrared wavelength greater than 700nm, or a near infrared wavelength in a range from about 700nm to about 1100 nm.

57. The microscope of claim 55, wherein the illumination assembly is configured to illuminate the sample with a plurality of light sources at a plurality of positions corresponding to different illumination angles of the sample.

58. The microscope of claim 55, further comprising a focus actuator coupled to the processor to adjust a depth of the sample used to form an image of the sample on the image sensor.

59. The microscope of claim 58, wherein the focus actuator is configured to move to a first configuration to image the sample at a first depth and to a second configuration to image the sample at a second depth.

Background

In computational imaging, a high resolution computed reconstructed image of an object may be generated from a series of low resolution images taken with different illumination. This approach has the benefit of providing a high resolution computed reconstructed image of the sample from an imaging system having a lower resolution. Computational imaging can be used to generate high resolution computed reconstructed color images from low resolution images. However, the overhead associated with computational imaging can be less than ideal, owing to the time required to collect multiple low resolution images and to computationally generate high resolution images from the collected information. Because the number of low resolution images may generally dictate the quality and resolution of the output high resolution images, it may be difficult, in at least some instances, to reduce overhead without substantially degrading the output images. For applications requiring multiple views, the overhead can be particularly significant.

Because computational imaging algorithms often rely on relatively good models of the physical system (which may be known, learned, or implicit), these algorithms may be wavelength dependent. In some cases, such as when each wavelength is processed separately, this wavelength dependence can create further overhead. For each wavelength, the reconstruction time, and in some cases the acquisition time, may be duplicated. For example, a color imaging process may operate with three color channels (e.g., red, green, and blue or "RGB"), which may result in an acquisition time multiplied by three and a computed reconstruction time multiplied by three. These channels may also be represented in another color space, such as Lab (e.g., CIELab), YCbCr, YUV, etc., which may further add to the computational reconstruction and/or acquisition time.

In view of the above, it would be desirable to reduce acquisition time and/or reconstruction time without significantly degrading the quality and utility of the output computed reconstructed images.

SUMMARY

As will be described in greater detail below, the present disclosure describes various systems and methods for compressed acquisition of microscopic images by acquiring a first image dataset of a sample using a first set of illumination conditions for a first wavelength and acquiring a second image dataset of the sample using a second set of illumination conditions for a second wavelength. The first set of illumination conditions may include a greater number of illumination conditions than the second set of illumination conditions. The first image data set and the second image data set may be combined into a computed reconstructed image of the sample. Because the second set of illumination conditions includes fewer illumination conditions than the first set, this approach may reduce acquisition time, reconstruction time, storage requirements, and other costs when compared to conventional approaches. In addition, because the first wavelength may be selected to produce an image that contains more information than an image of the second wavelength, the resulting reconstructed image may not contain significantly less information than a reconstructed image from a conventional method.

Additionally, the systems and methods described herein may improve the functionality of a computing device (e.g., a computing device connected to or integrated with a microscope) by reducing the data set size and computing reconstructed images more efficiently. These systems and methods may also improve the field of microscopic imaging by improving acquisition times.

Incorporation by Reference

All patents, applications, and publications mentioned and identified herein are incorporated by reference in their entirety and, even if mentioned elsewhere in this application, should be considered to be fully incorporated by reference.

Brief Description of Drawings

A better understanding of the features, advantages, and principles of the present disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, and the accompanying drawings of which:

FIG. 1 shows a diagram of an exemplary microscope according to some embodiments;

FIG. 2A shows a diagram of the optical paths of two beam pairs when the microscope of FIG. 1 is out of focus, in accordance with some embodiments;

FIG. 2B shows a diagram of the optical paths of two beam pairs when the microscope of FIG. 1 is focused, in accordance with some embodiments;

FIG. 3 illustrates a flow diagram of an exemplary process for compressed acquisition of microscopic images, in accordance with some embodiments;

FIGS. 4A-4C illustrate several workflow diagrams of an exemplary process for compressed acquisition of microscopic images, according to some embodiments;

FIG. 5 illustrates a workflow diagram of an exemplary process for compressed acquisition of microscopic images, according to some embodiments;

FIG. 6 illustrates a high resolution reconstruction of samples from a plurality of low resolution images in a first channel in accordance with some embodiments;

FIG. 7 illustrates a single image of the sample of FIG. 6 acquired in a second channel, in accordance with some embodiments;

FIG. 8 illustrates a single image of the sample of FIG. 6 taken in a third channel in accordance with some embodiments;

FIG. 9 illustrates a color high resolution image generated by processing the images of FIGS. 6, 7, and 8 together, according to some embodiments;

FIG. 10 shows an enlarged raw image acquired with an image sensor using a red illumination color and corresponding cell structure;

FIG. 11 shows an enlarged raw image acquired with an image sensor using a blue illumination color and the corresponding cell structure of FIG. 10;

FIG. 12 shows a magnified original image acquired with an image sensor using a green illumination color and the corresponding cell structure of FIG. 10;

FIG. 13 shows a computed reconstructed image obtained from a plurality of images illuminated with green color and the corresponding cell structures of FIGS. 10-12; and

FIG. 14 shows a magnified computed reconstructed color image from the images and corresponding cell structures as in FIGS. 10-12.

Detailed Description

The following detailed description provides a better understanding of the features and advantages of the invention described in the present disclosure, in accordance with the embodiments disclosed herein. Although the detailed description includes many specific embodiments, these are provided by way of example only and should not be construed as limiting the scope of the invention disclosed herein.

Because cone cells in the human eye produce maximum visual acuity at about 555nm, the highest resolution perceived by a human observer may correspond to the green channel under certain lighting conditions. For other color channels, the human eye may not perceive as high a resolution as for the green channel. In some embodiments, the computed reconstructed image includes a higher spatial frequency bandwidth for a first wavelength (e.g., green) and a lower spatial frequency bandwidth for a second wavelength (e.g., red). The user may perceive these images as having the higher spatial frequency bandwidth of the first wavelength while perceiving the images as color images. Although reference is made to color images, this method can be similarly applied to computed reconstructed images having only two wavelengths, which may include ultraviolet wavelengths or infrared wavelengths. Embodiments disclosed herein may improve the imaging time (e.g., acquisition time and reconstruction time) of a computational imaging system by processing images in a manner tuned to different color channels (e.g., wavelengths). This approach may reduce the number of images that are acquired and/or used for reconstruction. In addition, different reconstruction processes may be used for different wavelengths.

In some embodiments based on the sensitivity of the human eye to different wavelengths, the number of low resolution images acquired may vary for different wavelengths. For example, in one embodiment, the system described herein may perform computational imaging using a green channel, a red channel, and a blue channel. Due to the sensitivity of the human eye to the green channel, the system described herein can acquire multiple images in the green channel, and fewer images (e.g., a single image) in each of the red and blue channels, without significantly reducing the human perception of the resulting reconstructed image. In some embodiments, the systems described herein may acquire multiple images in the green channel for reconstructing a high resolution computed image in the green channel, while acquiring fewer images in the red and blue channels for reconstructing a medium resolution image in the red and blue channels.
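As an illustration of the channel-dependent sampling described above, the following sketch (not part of the original disclosure) shows a hypothetical acquisition plan in which the green channel receives many illumination conditions while the red and blue channels each receive a single one; the channel names, wavelengths, and counts are illustrative assumptions only.

```python
# Hypothetical per-channel acquisition plan: many illumination conditions for
# the channel carrying the most information (here green), one for each of the
# others. Wavelengths and counts are illustrative, not values from the patent.
acquisition_plan = {
    "green": {"wavelength_nm": 532, "num_illumination_conditions": 25},
    "red":   {"wavelength_nm": 630, "num_illumination_conditions": 1},
    "blue":  {"wavelength_nm": 470, "num_illumination_conditions": 1},
}

total_exposures = sum(c["num_illumination_conditions"] for c in acquisition_plan.values())
print(f"exposures per field of view: {total_exposures}")  # 27 instead of 75 for full RGB
```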

Slides of biological and other samples may hold more information in some color channels than in others. For example, capturing high resolution detail with the green channel may be sufficient for capturing that information. When certain stains or other characteristics are present, it may be beneficial to capture high resolution in other channels (instead of, or in addition to, the green channel). This method may also be applied to settings using other wavelengths, such as infrared (IR) wavelengths, ultraviolet (UV) wavelengths, and/or fluorescence wavelengths.

A detailed description of adaptive sensing will be provided below with reference to FIGS. 1 to 9. FIGS. 1 and 2A-2B show a microscope and various microscope configurations. FIGS. 3 to 5 show exemplary processes for compressed acquisition of a microscopic image of a sample. FIGS. 6-9 show exemplary images of a sample at different wavelengths.

FIG. 1 is a schematic diagram of a microscope 100 consistent with an exemplary disclosed embodiment. The term "microscope" as used herein generally refers to any device or instrument for magnifying an object that is too small to be easily observed by the naked eye (i.e., creating an image of the object for a user, where the image is larger than the object). One type of microscope may be an "optical microscope," which uses light in conjunction with an optical system for magnifying an object. The optical microscope may be a simple microscope with one or more magnifying lenses. Another type of microscope may be a "computational microscope," which includes an image sensor and image processing algorithms to enhance or magnify the size or other characteristics of an object. The computational microscope may be a dedicated device or may be created by combining software and/or hardware with an existing optical microscope to produce high resolution digital images. As shown in FIG. 1, microscope 100 includes an image capture device 102, a focus actuator 104, a controller 106 connected to a memory 108, an illumination assembly 110, and a user interface 112. An example use of the microscope 100 may be to capture an image of a sample 114 mounted on a stage 116 located within a field of view (FOV) of the image capture device 102, process the captured image, and present a magnified image of the sample 114 on the user interface 112.

The image capture device 102 may be used to capture an image of the sample 114. The term "image capture device" as used herein generally refers to a sensor or device that records an optical signal entering a lens as an image or sequence of images. The optical signal may be in the near infrared, visible, or ultraviolet spectrum. Examples of image capturing devices include CCD cameras, CMOS cameras, light sensor arrays, video cameras, camera-equipped mobile phones, webcams, preview cameras, microscope objectives, and detectors, among others. Some embodiments may include only a single image capture device 102, while other embodiments may include two, three, or even four or more image capture devices 102. In some embodiments, the image capture device 102 may be configured to capture images in a defined field of view (FOV). Additionally, when the microscope 100 includes several image capture devices 102, the image capture devices 102 may have overlapping regions in their respective FOVs. Image capture device 102 may have one or more image sensors (not shown in FIG. 1) for capturing image data of sample 114. In other embodiments, image capture device 102 may be configured to capture images at image resolutions above VGA, above 1 megapixel, above 2 megapixels, above 5 megapixels, above 10 megapixels, above 12 megapixels, above 15 megapixels, or above 20 megapixels. Additionally, image capture device 102 may also be configured to have a pixel size of less than 15 microns, less than 10 microns, less than 5 microns, less than 3 microns, or less than 1.6 microns.

In some embodiments, microscope 100 includes a focus actuator 104. The term "focus actuator" as used herein generally refers to any device capable of converting an input signal into physical motion for adjusting the relative distance between the sample 114 and the image capture device 102. Various focus actuators may be used, including, for example, linear motors, electrostrictive actuators, electrostatic motors, capacitive motors, voice coil actuators, magnetostrictive actuators, and the like. In some embodiments, the focus actuator 104 may include an analog position feedback sensor and/or a digital position feedback element. The focus actuator 104 is configured to receive instructions from the controller 106 in order to converge the light beam to form a sharp and well-defined image of the sample 114. In the example shown in FIG. 1, the focus actuator 104 may be configured to adjust the distance by moving the image capture device 102. In some examples, the focus actuator 104 may be configured to adjust a depth of the sample 114 used to form an image of the sample 114 on the image capture device 102. For example, the focus actuator 104 may be configured to move to a first configuration to image or capture the sample 114 at a first depth and to a second configuration to image or capture the sample 114 at a second depth.

However, in other embodiments, the focus actuator 104 may be configured to adjust the distance by moving the stage 116, or by moving both the image capture device 102 and the stage 116. Consistent with the disclosed embodiments, the microscope 100 may also include a controller 106 for controlling the operation of the microscope 100. The controller 106 may include various types of devices for performing logical operations on one or more inputs of image data and other data according to stored or accessible software instructions that provide the desired functionality. For example, the controller 106 may include a Central Processing Unit (CPU), support circuits, a digital signal processor, an integrated circuit, cache memory, or any other type of device for image processing and analysis, such as a Graphics Processing Unit (GPU). The CPU may include any number of microcontrollers or microprocessors configured to process images from the image sensor. For example, the CPU may include any type of single-core or multi-core processor, mobile device microcontroller, or the like. Various commercially available processors may be used and may include various architectures (e.g., x86 processors, etc.). The support circuits may be any number of circuits known in the art, including cache, power supplies, clocks, and input-output circuits. The controller 106 may be at a remote location, such as a computing device communicatively coupled to the microscope 100.

In some embodiments, the controller 106 may be associated with a memory 108 for storing software that, when executed by the controller 106, controls the operation of the microscope 100. Additionally, the memory 108 may also store electronic data associated with the operation of the microscope 100, such as, for example, captured or generated images of the sample 114. In one case, the memory 108 may be integrated into the controller 106. In another case, the memory 108 may be separate from the controller 106.

In particular, the memory 108 may refer to a plurality of structures or computer-readable storage media, such as cloud servers, located at the controller 106 or at a remote location. Memory 108 may include any number of random access memories, read-only memories, flash memories, disk drives, optical storage devices, tape storage devices, removable storage devices, or other types of storage devices.

The microscope 100 may include an illumination assembly 110. The term "illumination assembly" as used herein generally refers to any device or system capable of projecting light to illuminate a sample 114.

The illumination assembly 110 may include any number of light sources, such as Light Emitting Diodes (LEDs), LED arrays, lasers, and lamps configured to emit light, such as halogen lamps, incandescent lamps, or sodium lamps. In one embodiment, the illumination assembly 110 may include only a single light source. Alternatively, the illumination assembly 110 may comprise four, sixteen, or even more than one hundred light sources organized in an array or matrix. In some embodiments, the illumination assembly 110 may use one or more light sources located at a surface parallel to the illuminated sample 114. In other embodiments, the illumination assembly 110 may use one or more light sources located at a surface perpendicular to the sample 114 or at an angle to the sample 114.

In addition, the illumination assembly 110 may be configured to illuminate the sample 114 under a range of different illumination conditions. In one example, the illumination assembly 110 may include a plurality of light sources arranged at different illumination angles, such as a two-dimensional arrangement of light sources. In this case, the different illumination conditions may include different illumination angles. For example, FIG. 1 depicts a light beam 118 projected from a first illumination angle α1 and a light beam 120 projected from a second illumination angle α2. In some embodiments, the first illumination angle α1 and the second illumination angle α2 may have the same value but opposite signs. In other embodiments, the first illumination angle α1 may differ from the second illumination angle α2; however, both angles originate from points within the acceptance angle of the optics.

In another example, the illumination assembly 110 may include one or more light sources configured to emit light at different wavelengths. In this case, the different illumination conditions may comprise different wavelengths. The different wavelengths may include one or more of violet wavelengths in a range from about 380 nanometers (nm) to about 450nm, blue wavelengths in a range from about 450nm to about 485nm, cyan wavelengths in a range from about 485nm to 500nm, green wavelengths in a range from about 500nm to 565nm, yellow wavelengths in a range from about 565nm to about 590nm, orange wavelengths in a range from about 590nm to 625nm, red wavelengths in a range from about 625nm to about 740nm, infrared wavelengths greater than 700nm, or near infrared wavelengths in a range from about 700nm to about 1100 nm.
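For reference, a minimal sketch of how the approximate wavelength bands listed above might be encoded as a lookup; the band boundaries simply restate the nominal ranges in the text, and the helper function is a hypothetical illustration rather than part of the disclosure.

```python
# Approximate color bands from the ranges above (nominal boundaries in nm).
WAVELENGTH_BANDS_NM = [
    ("violet", 380, 450),
    ("blue", 450, 485),
    ("cyan", 485, 500),
    ("green", 500, 565),
    ("yellow", 565, 590),
    ("orange", 590, 625),
    ("red", 625, 740),
    ("near_infrared", 700, 1100),
]

def bands_for_wavelength(wavelength_nm: float) -> list[str]:
    """Return all bands containing the wavelength (red and near-IR overlap)."""
    return [name for name, lo, hi in WAVELENGTH_BANDS_NM if lo <= wavelength_nm < hi]

print(bands_for_wavelength(532))  # ['green']
```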

In yet another example, the illumination assembly 110 may be configured to use a plurality of light sources a predetermined number of times. In this case, the different illumination conditions may include different illumination patterns. Accordingly, and consistent with the present disclosure, the different illumination conditions may be selected from the group consisting of: different durations, different intensities, different locations, different illumination angles, different illumination patterns, different wavelengths, or any combination thereof.

Consistent with the disclosed embodiments, the microscope 100 may include a user interface 112, be connected with the user interface 112, or be in communication with the user interface 112 (e.g., over a network or wirelessly, such as via Bluetooth). The term "user interface" as used herein generally refers to any device suitable for presenting a magnified image of the sample 114 or for receiving input from one or more users of the microscope 100. FIG. 1 shows two examples of user interfaces 112. A first example is a smart phone or tablet that communicates wirelessly with the controller 106 through Bluetooth, a cellular connection, or a Wi-Fi connection, directly or through a remote server. A second example is a PC display physically connected to the controller 106. In some embodiments, the user interface 112 may include user output devices including, for example, a display, a haptic device, a speaker, and the like. In other embodiments, the user interface 112 may include user input devices including, for example, a touch screen, a microphone, a keyboard, a pointing device, a camera, knobs, buttons, and the like. Using these input devices, a user may be able to provide information input or commands to the microscope 100 by typing instructions or information, providing voice commands, selecting menu options on a screen using buttons, pointers, or eye tracking capabilities, or by any other suitable technique for communicating information to the microscope 100. The user interface 112 may be connected (physically or wirelessly) with one or more processing devices, such as the controller 106, to provide information to a user or to receive information from a user and process the information. In some embodiments, such a processing device may execute instructions for responding to keyboard inputs or menu selections, recognizing and interpreting touches and/or gestures made on a touch screen, recognizing and tracking eye movements, receiving and interpreting voice commands, and so forth.

The microscope 100 may also include a stage 116, or be coupled to the stage 116. The stage 116 includes any horizontal rigid surface on which the sample 114 may be mounted for inspection. The stage 116 may include a mechanical connector for holding a slide containing the sample 114 in a fixed position. The mechanical connector may use one or more of the following: a mount, an attachment member, a retaining arm, a clamp, a clip, an adjustable frame, a locking mechanism, a spring, or any combination thereof. In some embodiments, the stage 116 may include a translucent portion or opening for allowing light to illuminate the sample 114. For example, light transmitted from the illumination assembly 110 may pass through the sample 114 and toward the image capture device 102. In some embodiments, the stage 116 and/or the sample 114 may be moved in the XY plane using motors or manual controls to enable imaging of multiple regions of the sample.

FIGS. 2A and 2B depict closer views of the microscope 100 in two cases. Specifically, FIG. 2A shows the optical paths of two light beam pairs when the microscope 100 is out of focus, and FIG. 2B shows the optical paths of two light beam pairs when the microscope 100 is in focus. In cases where the sample is thicker than the depth of focus or changes rapidly in depth, some portions of the sample may be in focus while other portions may not.

As shown in FIGS. 2A and 2B, the image capture device 102 includes an image sensor 200 and a lens 202. In a microscope, the lens 202 may be referred to as an objective lens of the microscope 100. The term "image sensor" as used herein generally refers to a device capable of detecting optical signals and converting them into electrical signals. The electrical signals may be used to form an image or video stream based on the detected signals. Examples of the image sensor 200 may include a semiconductor Charge Coupled Device (CCD), an active pixel sensor in a Complementary Metal Oxide Semiconductor (CMOS), or an N-type metal oxide semiconductor (NMOS, Live MOS). The term "lens" as used herein refers to a ground or molded piece of glass, plastic, or other transparent material, one or both of whose opposing surfaces are curved, whereby light rays are refracted so that they converge or diverge to form an image. The term "lens" may also refer to an element comprising one or more lenses as defined above, such as in a microscope objective. The lens is positioned at least substantially transverse to the optical axis of the image sensor 200. Lens 202 may be used to focus the light beams from sample 114 and direct them toward image sensor 200. In some embodiments, the image capture device 102 may include a fixed lens or a zoom lens.

When the sample 114 is located at the focal plane 204, the image projected from the lens 202 is fully in focus. The term "focal plane" is used herein to describe a plane perpendicular to the optical axis of the lens 202 and passing through the focal point of the lens. The distance between the focal plane 204 and the center of the lens 202 is called the focal length and is denoted by D1. In some cases, the sample 114 may not be completely flat, and there may be slight differences between the focal plane 204 and various regions of the sample 114. Thus, the distance between the focal plane 204 and the sample 114 or a region of interest (ROI) of the sample 114 is labeled D2. The distance D2 corresponds to the degree to which the image of the sample 114 or the image of the ROI of the sample 114 is out of focus. For example, the distance D2 may be between 0 mm and about 3 mm. In some embodiments, D2 may be greater than 3 mm. When the distance D2 equals zero, the image of the sample 114 (or the image of the ROI of the sample 114) is fully in focus. In contrast, when D2 has a non-zero value, the image of sample 114 (or the image of the ROI of sample 114) is out of focus.

FIG. 2A depicts the situation where the image of the sample 114 is out of focus. For example, when the light beam received from sample 114 does not converge on image sensor 200, the image of sample 114 may be out of focus. FIG. 2A depicts a beam pair 206 and a beam pair 208. Neither pair converges on the image sensor 200. For simplicity, the optical path under the sample 114 is not shown. Consistent with the present disclosure, light beam pair 206 may correspond to light beam 120 projected from illumination assembly 110 at illumination angle α2, and light beam pair 208 may correspond to light beam 118 projected from illumination assembly 110 at illumination angle α1. Additionally, the beam pair 206 may strike the image sensor 200 simultaneously with the beam pair 208. The term "simultaneously" in this context means that the image sensor 200 records information associated with two or more beam pairs during coincident or overlapping times, where one beam pair starts and ends during the duration of the other, or where the latter beam pair starts before the other is completed. In other embodiments, the beam pair 206 and the beam pair 208 may strike the image sensor 200 sequentially. The term "sequentially" means that the image sensor 200 begins recording information associated with, for example, beam pair 206 only after it has completed recording information associated with, for example, beam pair 208.

As discussed above, D2 is the distance between the focal plane 204 and the sample 114, and it corresponds to the degree to which the sample 114 is out of focus. In one example, D2 may have a value of 50 microns. The focus actuator 104 is configured to change the distance D2 by converting an input signal from the controller 106 into physical motion. In some embodiments, to focus an image of the sample 114, the focus actuator 104 may move the image capture device 102. In this example, to focus the image of the sample 114, the focus actuator 104 may move the image capture device 102 up 50 microns. In other embodiments, the focus actuator 104 may move the stage 116 downward in order to focus the image of the sample 114. Thus, in this example, instead of moving image capture device 102 up 50 microns, focus actuator 104 may move stage 116 down 50 microns.
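The focusing relationship described above can be sketched as follows; the class, member names, and the 50-micron step are illustrative assumptions for this document, not an implementation of the focus actuator 104.

```python
# Sketch of the focusing relationship described above: the residual defocus D2
# is removed either by moving the image capture device up or the stage down.
# Positions and the 50-micron example value are hypothetical.
class FocusActuator:
    def __init__(self):
        self.device_z_um = 0.0   # position of the image capture device (e.g., 102)
        self.stage_z_um = 0.0    # position of the stage (e.g., 116)

    def focus(self, d2_um: float, move_stage: bool = False) -> None:
        """Drive the actuator so the residual defocus distance D2 becomes zero."""
        if move_stage:
            self.stage_z_um -= d2_um      # e.g., lower the stage by 50 microns
        else:
            self.device_z_um += d2_um     # e.g., raise the device by 50 microns

actuator = FocusActuator()
actuator.focus(50.0)          # the 50-micron example from the text
print(actuator.device_z_um)   # 50.0
```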

FIG. 2B shows the case where the image of the sample 114 is in focus. In this case, the pair of beams 206 and 208 converge on the image sensor 200, and the distance D2 equals zero. In other words, focusing the image of the sample 114 (or the image of the ROI of the sample 114) may depend on adjusting the relative distance between the image capture device 102 and the sample 114. The relative distance may be represented by D1-D2, and when distance D2 equals zero, the relative distance between image capture device 102 and sample 114 equals distance D1, which means that the image of sample 114 is focused.

FIG. 3 illustrates an exemplary method 300 for compressively acquiring a microscopic image of a sample (e.g., sample 114) using a suitable microscope (e.g., microscope 100). In one example, each of the steps shown in FIG. 3 may represent an algorithm whose structure includes and/or is represented by a plurality of sub-steps, examples of which are provided in more detail below.

As shown in FIG. 3, at step 310, one or more of the systems described herein may acquire, with an image sensor, a first image dataset from a sample illuminated with a first set of illumination conditions including a first number of illumination conditions, each illumination condition including a first wavelength. For example, the microscope 100 may acquire a first image dataset with the image capture device 102 from a sample 114 illuminated by the illumination assembly 110 using a first set of illumination conditions having a first number of illumination conditions. The first image dataset may comprise a plurality of images of the sample 114.

The first set of illumination conditions may each include a first wavelength such that the illumination assembly 110 may illuminate the sample 114 by emitting light at the first wavelength. The first wavelength may correspond to a first color. For example, the first wavelength may correspond to one of the following ranges: an ultraviolet range from about 200 nanometers (nm) to about 380 nanometers (nm), a violet range from about 380nm to about 450nm, a blue range from about 450nm to about 485nm, a cyan range from about 485nm to 500nm, a green range from about 500nm to 565nm, a yellow range from about 565nm to about 590nm, an orange range from about 590nm to 625nm, a red range from about 625nm to about 740nm, or a near infrared range from about 700nm to about 1100 nm. In some examples, the first wavelength may correspond to a first peak of a first illumination source emitting a first wavelength distribution. The first wavelength distribution may include a first full-width half-maximum.

In some implementations, the method 300 can include determining the first wavelength. The first wavelength may be user defined, for example, defined for and/or associated with each sample. Alternatively, the determination of the first wavelength may be dynamic and/or adaptive for each sample to be captured by the microscope 100. Determining the first wavelength may include acquiring, with the image capture device 102, an initial image dataset of the sample 114 illuminated with a plurality of wavelengths, for example, by the illumination assembly 110. The controller 106, as part of the microscope 100, may determine that a first image of the initial image dataset includes more information than a second image of the initial image dataset. The microscope 100 may select a wavelength corresponding to the first image as the first wavelength. For example, the microscope 100 may capture a first image of the sample 114 illuminated at a green wavelength, a second image of the sample 114 illuminated at a red wavelength, and a third image of the sample 114 illuminated at a blue wavelength. The microscope 100 may select the wavelength corresponding to the image with the most information as the first wavelength, where the information may correspond to spatial frequency information. For example, if the green channel image contains more information than the red channel image and the blue channel image, the microscope 100 may select green as the first wavelength. The microscope 100 may also prioritize or otherwise order the remaining wavelengths. For example, the microscope 100 may select red as the second wavelength and blue as the third wavelength.
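One way the "most information" criterion described above could be scored is by spectral energy outside the low spatial frequencies. The following sketch assumes this proxy metric, hypothetical per-channel images, and an illustrative cutoff; it is not the prescribed algorithm of the disclosure.

```python
import numpy as np

def high_frequency_energy(image: np.ndarray, cutoff_fraction: float = 0.1) -> float:
    """Sum spectral energy outside a low-frequency disc around DC (illustrative metric)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    radius = np.hypot(y - rows / 2, x - cols / 2)
    high_pass = radius > cutoff_fraction * min(rows, cols)
    return float(np.sum(np.abs(spectrum[high_pass]) ** 2))

def order_wavelengths(initial_images: dict[str, np.ndarray]) -> list[str]:
    """Order channel names from most to least spatial-frequency information."""
    return sorted(initial_images,
                  key=lambda ch: high_frequency_energy(initial_images[ch]),
                  reverse=True)

# e.g., order_wavelengths({"green": g_img, "red": r_img, "blue": b_img})
# might return ["green", "red", "blue"], setting the first, second, and third wavelengths.
```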

In some examples, the microscope 100 may determine a first set of illumination conditions. The microscope 100 may determine the first set of illumination conditions based on information identified from the initial image dataset. The microscope 100 may determine, for example, the number of light sources, the location of the light sources, a combination of the locations of the plurality of light sources, an illumination angle, a combination of illumination angles, a number of illuminations, a location of a diffuser, a pattern of light, a filter, a mask, or a focal point of the sample.

In some examples, the microscope 100 may determine a computational process for reconstructing the first image. For example, the microscope 100 may determine the computational process based on the initial image dataset, such as by identifying certain information and/or patterns from the initial image dataset.

At step 320, one or more of the systems described herein may acquire, with the image sensor, a second image dataset from the sample illuminated with a second set of illumination conditions including a second number of illumination conditions, each illumination condition including a second wavelength. The first number of lighting conditions may be greater than the second number of lighting conditions. For example, the microscope 100 may acquire a second image dataset with the image capture device 102 from a sample 114 illuminated by the illumination assembly 110 using a second set of illumination conditions having a second number of illumination conditions, the second number of illumination conditions being less than the first number of illumination conditions. The second image dataset may comprise one or more images of the sample 114.

The first number may be at least two times greater than the second number. The second acquisition time associated with the second image data set may not exceed half of the first acquisition time associated with the first image data set.

In addition to having fewer illumination conditions, the second set of illumination conditions may also be different from the first set of illumination conditions. The second image data set may be smaller than the first image data set. For example, the microscope 100 may capture fewer images for the second image data set than for the first image data set because there are fewer illumination conditions. In some embodiments, the microscope 100 may acquire the second image data set after acquiring the first image data set.

The second set of illumination conditions may each include a second wavelength such that the illumination assembly 110 may illuminate the sample 114 by emitting light at the second wavelength. The second wavelength may correspond to a second color. For example, the second wavelength may correspond to one of the following ranges: an ultraviolet range from about 200 nanometers (nm) to about 380 nanometers (nm), a violet range from about 380nm to about 450nm, a blue range from about 450nm to about 485nm, a cyan range from about 485nm to 500nm, a green range from about 500nm to 565nm, a yellow range from about 565nm to about 590nm, an orange range from about 590nm to 625nm, a red range from about 625nm to about 740nm, or a near infrared range from about 700nm to about 1100 nm.

The second wavelength may be different from the first wavelength. For example, the second wavelength may correspond to a second color different from the first color (of the first wavelength). In some examples, the first wavelength may correspond to a first peak of a first illumination source emitting a first wavelength distribution (which may include a first full width half maximum), and the second wavelength may correspond to a second peak of a second wavelength distribution (which may include a second full width half maximum). The first full width half maximum may not overlap the second full width half maximum. Alternatively, the first wavelength and the second wavelength may correspond to different ranges of wavelengths.
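The non-overlap condition on the two full-width-half-maximum distributions described above can be checked with a simple interval test, sketched below with illustrative peak and FWHM values that are not taken from the disclosure.

```python
def fwhm_interval(peak_nm: float, fwhm_nm: float) -> tuple[float, float]:
    """Half-maximum interval of an illumination wavelength distribution."""
    return (peak_nm - fwhm_nm / 2, peak_nm + fwhm_nm / 2)

def fwhm_overlap(peak1_nm, fwhm1_nm, peak2_nm, fwhm2_nm) -> bool:
    """True if the two half-maximum intervals overlap."""
    lo1, hi1 = fwhm_interval(peak1_nm, fwhm1_nm)
    lo2, hi2 = fwhm_interval(peak2_nm, fwhm2_nm)
    return hi1 > lo2 and hi2 > lo1

# Illustrative LED-like values: a green source at 532 nm (30 nm FWHM) and a red
# source at 630 nm (20 nm FWHM) satisfy the non-overlap condition above.
print(fwhm_overlap(532, 30, 630, 20))  # False
```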

In some examples, the second wavelength may be determined. For example, the second wavelength may be user defined. Alternatively, the microscope 100 may determine the second wavelength, as described above with respect to step 310. Additionally, the microscope 100 may determine the second set of illumination conditions, for example, based on the initial image dataset described above with respect to step 310.

At step 330, one or more of the systems described herein may combine the first image dataset and the second image dataset into a computed reconstructed image of the sample. For example, the microscope 100 may combine the first image data set and the second image data set into a computed reconstructed image of the sample 114.

The computed reconstructed image may include one or more of a two-dimensional (2D) image, a three-dimensional (3D) image, a 2D intensity image, a 3D intensity image, a 2D phase image, a 3D phase image, a 2D fluorescence image, a 3D fluorescence image, a 2D hyperspectral image, or a 3D hyperspectral image. The computed reconstructed image may comprise a color image. For example, the color image may include two or more of a red channel, a green channel, or a blue channel. In such an example, the first wavelength may correspond to one of the red channel, the green channel, or the blue channel, and the second wavelength may correspond to another of the red channel, the green channel, or the blue channel.

Further, the spatial frequency bandwidth of the computed reconstructed image may be at least 1.5 times greater than the spatial frequency bandwidth of each of the plurality of images acquired with the first set of illumination conditions.

In some examples, the microscope 100 may process the first image data set and the second image data set separately to generate a first computed reconstructed image from the first image data set and a second computed reconstructed image from the second image data set. The microscope 100 may combine the first computed reconstructed image with the second computed reconstructed image to generate a computed reconstructed image. The first image data set may comprise a first plurality of images and the second image data set may comprise one or more images. The one or more images may include a single image acquired with the second set of illumination conditions. Alternatively, the one or more images may include a second plurality of images. The microscope 100 may process the first plurality of images and the one or more images together to generate a computed reconstructed image.
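
A minimal sketch of the two processing orders described in this paragraph is shown below. The helpers reconstruct_channel and fuse_channels are hypothetical placeholders for whichever computational reconstruction and fusion are actually used; they are not part of this disclosure.

    import numpy as np

    def reconstruct_channel(images):
        # Placeholder: average the stack; a real solver would recover a
        # higher-bandwidth image from the stack of illumination conditions.
        return np.mean(np.asarray(images, dtype=float), axis=0)

    def fuse_channels(channel_images):
        # Placeholder: stack per-channel results into one multi-channel array.
        return np.stack(channel_images, axis=-1)

    def combine_separately(first_set, second_set):
        # Process each image data set on its own, then combine the two results.
        first_recon = reconstruct_channel(first_set)
        second_recon = reconstruct_channel(second_set)
        return fuse_channels([first_recon, second_recon])

    def combine_jointly(first_set, second_set):
        # Process the first plurality of images and the one or more images of
        # the second set together in a single reconstruction.
        return reconstruct_channel(list(first_set) + list(second_set))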

Further, in some examples, the microscope 100 may acquire a third image dataset with an image sensor (e.g., image capture device 102) from a sample illuminated with a third wavelength using a third set of illumination conditions. The third set of illumination conditions may include a third number of illumination conditions less than the first number of illumination conditions. The microscope 100 may combine the third image dataset with the first image data set and the second image data set to generate a computed reconstructed image.

In another example, the microscope 100 may acquire N additional image datasets with an image sensor (e.g., image capture device 102) from a sample illuminated with N additional wavelengths using additional N sets of illumination conditions. N may be an integer of at least one. For example, N may be an integer in the range from about 10 to about 100. The microscope 100 may combine the N additional image datasets with the first image data set, the second image data set, and the third image dataset to generate a computed reconstructed image. The computed reconstructed image may comprise a hyperspectral image.

Fig. 4A shows a workflow diagram of a corresponding method 400 that may be performed by a suitable microscope, such as microscope 100. The method 400 may correspond to a variation of the method 300.

As shown in fig. 4A, at step 410 (which may correspond to step 310), the microscope 100 may acquire N_1 images from channel #1 as illuminated by the illumination assembly 110 using the image capture device 102. At step 411 (which may correspond to step 330), the microscope 100 may process the N_1 images from channel #1, for example, to generate a first computed reconstructed image for channel #1. The microscope 100 may determine a resolution for the first computed reconstructed image based on the number of low resolution images (e.g., N_1).

At step 420 (which may correspond to step 320), the microscope 100 may acquire N_2 images from channel #2 as illuminated by the illumination assembly 110 using the image capture device 101. At step 421 (which may correspond to step 330), the microscope 100 may process the N_2 images from channel #2, for example, to generate a second computed reconstructed image for channel #2. The microscope 100 may determine a resolution for the second computed reconstructed image based on the number of low resolution images (e.g., N_2).

The microscope 100 may repeat the acquisition and processing for each channel, e.g., channel #1 through channel #M, in any order. In addition, as described herein, the microscope 100 may adaptively determine the illumination conditions, including the number of illuminations, for each channel and for each sample. At step 430 (which may correspond to step 320), the microscope 100 may acquire N_M images from channel #M using the image capture device 101. At step 431 (which may correspond to step 330), the microscope 100 may process the N_M images from channel #M, for example, to generate a computed reconstructed image for channel #M.
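
The per-channel flow of method 400 may be sketched as follows, assuming hypothetical helpers acquire_image(channel, condition) and reconstruct(images) stand in for the hardware and processing calls; the channel names and the per-channel numbers of illumination conditions are examples only.

    def acquire_and_reconstruct(channels, conditions_per_channel,
                                acquire_image, reconstruct):
        """Sketch of steps 410-431: acquire and process each channel in turn.

        channels: e.g. ["green", "red", "blue"], in any order.
        conditions_per_channel: dict mapping channel -> list of illumination
        conditions; the prioritized channel receives the longer list.
        """
        per_channel_recon = {}
        for channel in channels:
            conditions = conditions_per_channel[channel]
            images = [acquire_image(channel, c) for c in conditions]
            per_channel_recon[channel] = reconstruct(images)
        return per_channel_recon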

At step 440 (which may correspond to step 330), the microscope 100 may fuse channels (e.g., channel #1 to channel #M) to form a final image. For example, the microscope 100 may generate a final computed reconstructed image for the sample 114 using the acquired images for each channel, the computed reconstructed images for each channel, and/or any sub-combination thereof. Fusion of the channels can be performed in a number of ways. In some embodiments, one or more of steps 540, 440, 442, or 444 includes color calibration, and may use, for example, a color space (e.g., CIELAB, CIELUV, YCbCr, XYZ, or CIEUVW).
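
As one possible sketch of the fusion step, per-channel reconstructions may be stacked into an RGB image and, if desired, converted to another color space for calibration. The transform below is the standard full-range BT.601 RGB-to-YCbCr matrix; the choice of color space and any calibration applied in it are implementation details, not requirements of this disclosure.

    import numpy as np

    def fuse_rgb(red, green, blue):
        # Stack per-channel reconstructions into an H x W x 3 RGB image.
        return np.stack([red, green, blue], axis=-1)

    def rgb_to_ycbcr(rgb):
        # Standard full-range BT.601 RGB -> YCbCr transform, one color space in
        # which color calibration of the fused image could be performed.
        m = np.array([[ 0.299,     0.587,     0.114   ],
                      [-0.168736, -0.331264,  0.5     ],
                      [ 0.5,      -0.418688, -0.081312]])
        ycbcr = rgb @ m.T
        ycbcr[..., 1:] += 0.5   # assumes rgb values are scaled to [0, 1]
        return ycbcr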

Fig. 4B illustrates a workflow diagram of another method 402 that may be performed by a suitable microscope, such as microscope 100. Method 402 may correspond to a variation of method 300 and/or method 400.

As shown in fig. 4B, at step 412 (which may correspond to step 310), the microscope 100 may acquire N_1 images from channel #1 as illuminated by the illumination assembly 110 using the image capture device 102. At step 422 (which may correspond to step 320), the microscope 100 may acquire N_2 images from channel #2 as illuminated by the illumination assembly 110 using the image capture device 101.

At step 413 (which may correspond to step 330), the microscope 100 may process the N_1 images from channel #1 and the N_2 images from channel #2, for example, to generate computed reconstructed images for channel #1 and channel #2.

The microscope 100 may repeat the acquisition and processing for each channel (e.g., channel #1 to channel #M) in any order. At step 432 (which may correspond to step 320), the microscope 100 may acquire N_M images from channel #M using the image capture device 101. At step 433 (which may correspond to step 330), the microscope 100 may process the N_M images from channel #M, for example, to generate a computed reconstructed image for channel #M.

At step 442 (which may correspond to step 330), the microscope 100 may fuse channels (e.g., channel #1 to channel # M) to form a final image. For example, the microscope 100 may generate a final computed reconstructed image for the sample 114 using the acquired images for each channel, the computed reconstructed images for each channel and/or combination of channels, and/or any combination thereof.

Fig. 4C shows a workflow diagram of another method 404 that may be performed by a suitable microscope, such as microscope 100. Method 404 may correspond to variations of method 300, method 400, and/or method 402.

The microscope 100 may acquire images and process each channel, e.g., channel #1 to channel #M, in any order. As shown in fig. 4C, at step 414 (which may correspond to step 310), the microscope 100 may acquire N_1 images from channel #1 as illuminated by the illumination assembly 110 using the image capture device 102. At step 424 (which may correspond to step 320), the microscope 100 may acquire N_2 images from channel #2 as illuminated by the illumination assembly 110 using the image capture device 101. At step 432 (which may correspond to step 320), the microscope 100 may acquire N_M images from channel #M using the image capture device 101.

At step 415 (which may correspond to step 330), the microscope 100 may process the N_1 images from channel #1, the N_2 images from channel #2, and the N_M images from channel #M, for example, to generate computed reconstructed images for channel #1 through channel #M.

At step 444 (which may correspond to step 330), the microscope 100 may fuse channels (e.g., channel #1 to channel # M) to form a final image. For example, the microscope 100 may generate a final computed reconstructed image for the sample 114 using the acquired images for each channel, the computed reconstructed images for the combination of channels, and/or any sub-combination thereof.

Fig. 5 illustrates a workflow diagram of a method 500 that may be performed by a suitable microscope, such as microscope 100. Method 500 may correspond to variations of method 300, method 400, method 402, and/or method 404. In particular, the method 500 may correspond to a specific example of the method 400.

As shown in fig. 5, at step 510 (which may correspond to step 310 and/or step 410), the microscope 100 may acquire N images from the green channel as illuminated by the illumination assembly 110 using the image capture device 102. As described above, the green channel may provide more information than either the red channel or the blue channel, such that it may be desirable to prioritize the green channel. As described herein, the microscope 100 may adaptively determine N, or may use a preconfigured value. In addition, the microscope 100 may acquire the N images from the green channel at a low resolution (e.g., a lower resolution than the original resolution of the microscope 100). At step 511 (which may correspond to step 330 and/or step 411), the microscope 100 may process the N images from the green channel, for example to reconstruct a high resolution image from the green channel images, which may be low resolution images. Fig. 6 shows a high resolution image 600 that can be reconstructed from the N images from the green channel.

At step 520 (which may correspond to step 320 and/or step 420), the microscope 100 may acquire a single image from the red channel as illuminated by the illumination assembly 110 using the image capture device 101. The microscope 100 may acquire the red channel image at the original resolution of the microscope 100. At step 521 (which may correspond to step 330 and/or step 421), the microscope 100 may process the image from the red channel, for example to denoise or otherwise enhance the red image. Fig. 7 shows an image 700, which may be a single image acquired using Kohler illumination. As seen in figs. 6 and 7, image 700 may have a lower resolution than that of image 600.

At step 530 (which may correspond to step 320 and/or step 430), the microscope 100 may acquire a single image from the blue channel using the image capture device 101. The microscope 100 may acquire the blue channel image at the original resolution of the microscope 100. At step 531 (which may correspond to step 330 and/or step 431), the microscope 100 may process the image from the blue channel, for example to denoise the blue image. Fig. 8 shows an image 800, which is a single image acquired using Kohler illumination. As seen in figs. 6 and 8, image 800 may have a lower resolution than that of image 600.

At step 540 (which may correspond to step 330 and/or step 440), the microscope 100 may fuse channels (e.g., red, green, and blue channels) to form a high resolution color image. For example, the microscope 100 may generate a final computed reconstructed image for the sample 114 using the acquired images for each channel, the processed images for each channel, and/or any sub-combination thereof. Fig. 9 shows an image 900, which may be a color high resolution image generated by processing image 600, image 700, and image 800 together. As seen in fig. 6-9, image 900 may have a higher resolution than the resolution of images 600, 700, and 800. Although fig. 5 shows the acquisition and processing in the order of green, red, and blue, the microscope 100 may perform the acquisition and processing for each channel in any order.
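
A minimal sketch of step 540 is shown below, assuming the reconstructed green image is an integer factor larger than the single red and blue acquisitions; nearest-neighbor upsampling is used only for brevity, and any interpolation or joint processing could be substituted.

    import numpy as np

    def upsample(image, factor):
        # Nearest-neighbor upsampling of a single low resolution channel image.
        return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

    def fuse_method_500(green_highres, red_single, blue_single):
        # Bring the single red and blue images to the size of the reconstructed
        # green image, then stack into a color image (step 540).
        factor = green_highres.shape[0] // red_single.shape[0]
        red_up = upsample(red_single, factor)
        blue_up = upsample(blue_single, factor)
        return np.stack([red_up, green_highres, blue_up], axis=-1)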

According to some embodiments, the improved spatial frequency bandwidth of the computed reconstructed image will be appreciated with reference to the exemplary images shown in figs. 10-13.

FIG. 10 shows an enlarged raw image 1000 acquired with an image sensor using a red illumination color and corresponding cell structure;

FIG. 11 shows an enlarged raw image 1100 acquired with an image sensor using blue illumination colors and the corresponding cell structures of FIG. 10;

FIG. 12 shows a magnified original image 1200 acquired with an image sensor using a green illumination color and the corresponding cell structure of FIG. 10;

Fig. 13 shows a computed reconstructed image 1300, obtained from a plurality of images illuminated with green light, and the corresponding cell structures of figs. 10-12. As described herein, the plurality of images are generated with a first set of illumination conditions, and the computed reconstructed image 1300 is obtained from the plurality of images.

Fig. 14 shows a computationally reconstructed RGB image 1400 and the corresponding cell structures of the images in figs. 10 to 13. The computed reconstructed color image 1400 is generated with a plurality of green illumination conditions, a single red illumination condition, and a single blue illumination condition. As will be understood with reference to figs. 10-14, the computed reconstructed color image 1400 shows improved resolution of cellular structures as compared to the individual red, blue and green images of figs. 10-12, respectively. The computed reconstructed color image 1400 may be obtained by combining a computed reconstructed image (e.g., image 1300) corresponding to a first wavelength and a first set of illumination conditions with one or more images (e.g., images 1000 and 1100) from other illumination wavelengths as described herein. Alternatively, data from images at different wavelengths and illumination conditions may be combined to generate the computed reconstructed color image 1400 without first generating a computed reconstructed image corresponding to the first wavelength and the first set of illumination conditions as described herein.

The computed reconstructed color images may be used for cell analysis and may be displayed on a display viewed by a user as described herein.

In some examples, the first number of illumination conditions is at least twice the second number of illumination conditions. The second acquisition time associated with the second image data set may not exceed half of the first acquisition time associated with the first image data set. The spatial frequency bandwidth of the computed reconstructed image may be at least 1.5 times greater than the spatial frequency bandwidth of each of the plurality of images acquired with the first set of illumination conditions.

In some examples, the computed reconstructed image may include one or more of an increased spatial frequency bandwidth, correction of optical aberrations, or increased image contrast. The microscope 100 may convert the first image data set to a spatial frequency space and map it to spatial frequencies in the spatial frequency space based on the first set of illumination conditions, so as to provide an increased spatial frequency bandwidth of the computed reconstructed image compared to the first spatial frequency bandwidth of each of the plurality of images acquired with the first set of illumination conditions. An image sensor (e.g., image capture device 102) may have a spatial frequency bandwidth, and the increased spatial frequency bandwidth of the computed reconstructed image may be greater than the spatial frequency bandwidth of the image sensor divided by a magnification of an image of the sample onto the image sensor.
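
By way of illustration, the bandwidth extension obtained by mapping obliquely illuminated images into spatial frequency space may be estimated as below. The numerical values are example values only and are not parameters of any particular embodiment.

    # Illustrative estimate of the spatial frequency bandwidths discussed above,
    # in cycles per micrometer referred to the sample plane. Example values only.
    wavelength_um = 0.53        # green illumination
    na_objective = 0.25         # objective numerical aperture
    na_illumination = 0.40      # numerical aperture of the most oblique illumination
    pixel_um = 5.5              # sensor pixel pitch
    magnification = 10.0        # magnification of the sample onto the sensor

    # Coherent cutoff of a single acquired image vs. the synthesized cutoff after
    # mapping images from all illumination angles into spatial frequency space.
    single_image_cutoff = na_objective / wavelength_um                     # ~0.47
    synthesized_cutoff = (na_objective + na_illumination) / wavelength_um  # ~1.23

    # Nyquist limit of the image sensor referred to the sample plane
    # (effective sample-plane pixel = pixel_um / magnification).
    sensor_cutoff = magnification / (2.0 * pixel_um)                       # ~0.91

    # With these example values the synthesized bandwidth exceeds both the
    # single-image cutoff and the sensor-limited bandwidth.
    assert synthesized_cutoff > single_image_cutoff
    assert synthesized_cutoff > sensor_cutoff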

In some examples, the first image dataset may include a first plurality of images, each image including a first spatial frequency bandwidth. The increased spatial frequency bandwidth of the computed reconstructed image may be greater than the first spatial frequency bandwidth of each image of the first plurality of images. The first plurality of images may include features of the sample that are not resolved with the first spatial frequency bandwidth of each image of the first plurality of images, and the computed reconstructed image may include features of the sample that are resolved with the increased spatial frequency bandwidth of the computed reconstructed image.

The microscope 100 may provide correction of optical aberrations by separating aberration information from sample information in order to reduce the effect of optical aberrations on the calculation of the reconstructed image. Optionally, the aberration information may include an aberration spatial frequency and phase associated with optics used to image the sample on the image sensor. Optionally, the sample information may include a sample spatial frequency and phase associated with the structure of the sample.

The microscope 100 may provide increased image contrast of the computed reconstructed image by computationally amplifying the high spatial frequencies of the reconstructed image to better represent the sample. The computed reconstructed image may include increased contrast. The increased contrast may comprise an increased ratio of high spatial frequencies to low spatial frequencies in the computed reconstructed image compared to the ratio of high spatial frequencies to low spatial frequencies for each of the plurality of images of the first image data set.
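
One simple way to increase the ratio of high to low spatial frequencies is a radial gain applied in the Fourier domain, as sketched below; the cutoff fraction and gain are illustrative parameters and are not values taken from this disclosure.

    import numpy as np

    def boost_high_frequencies(image, cutoff_frac=0.1, gain=2.0):
        """Amplify spatial frequencies above a radial cutoff in a 2D image so
        that the ratio of high to low spatial frequencies increases."""
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        ny, nx = image.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        radius = np.hypot(yy - ny / 2.0, xx - nx / 2.0)
        mask = np.where(radius > cutoff_frac * radius.max(), gain, 1.0)
        boosted = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
        return np.real(boosted)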

The microscope 100 may display the reconstructed image on a display, such as the user interface 112. The reconstructed image may be displayed on the display with a first spatial frequency bandwidth for a first color channel corresponding to a first wavelength and a second spatial frequency bandwidth for a second color channel corresponding to a second wavelength. The first spatial frequency bandwidth may be greater than the second spatial frequency bandwidth. For example, the first wavelength may correspond to green light such that the first color channel corresponds to a green color channel. The second wavelength may correspond to red or blue light such that the second color channel corresponds to a red channel or a blue channel. The green channel may be displayed on the display with the first spatial frequency bandwidth and the red channel or the blue channel may be displayed on the display with the second spatial frequency bandwidth.

In some examples, the microscope 100 may provide the reconstructed image on a display (e.g., the user interface 112) with a first spatial frequency bandwidth and a first user perceivable color corresponding to the first wavelength and a second spatial frequency bandwidth and a second user perceivable color corresponding to the second wavelength. The first spatial frequency bandwidth may be greater than a spatial frequency bandwidth of one or more images acquired by the image sensor at the second wavelength with the second set of illumination conditions.

In some examples, computing a reconstructed image may include a first spatial frequency bandwidth corresponding to a first wavelength and a second spatial frequency bandwidth corresponding to a second wavelength. The first spatial frequency bandwidth may be greater than a spatial frequency bandwidth of one or more images acquired by the image sensor at the second wavelength with the second set of illumination conditions.

In some examples, the computed reconstructed image may include a red channel, a green channel, and a blue channel. The second wavelength may include red light corresponding to a red channel, and the third wavelength may include blue light corresponding to a third channel. The third channel may be displayed on the display with a third spatial frequency bandwidth, which may be less than the first spatial frequency bandwidth.

In some examples, an image sensor (e.g., image capture device 102) may include a sensor that includes a two-dimensional array of pixels. The first color of the first image data set may correspond to a first wavelength and the second color of the second image data set may correspond to a second wavelength. The microscope 100 may map the computed reconstructed image to a red channel, a green channel, and a blue channel based on the first wavelength and the second wavelength.

In some examples, the image sensor may include a grayscale image sensor including a two-dimensional array of pixels. In some examples, an image sensor may include a color image sensor including a two-dimensional array of pixels and a color filter array including a plurality of color filters arranged over the two-dimensional array. The first image data set may be determined based on the first wavelength and a first absorption characteristic of the color filter at the first wavelength. The second image dataset may be determined based on the second wavelength and a second absorption characteristic of the color filter at the second wavelength. The microscope 100 may combine the first image data set and the second image data set according to a first absorption characteristic of a color filter at a first wavelength and a second absorption characteristic of the color filter at a second wavelength to generate a computed reconstructed image.
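
For example, if the two wavelengths are acquired at the same time through a color filter array, the contribution of each wavelength may be separated from the filtered measurements by a per-pixel linear solve, as sketched below; the response matrix is hypothetical and would in practice come from a calibration of the color filters.

    import numpy as np

    # Hypothetical filter responses: rows are the green- and red-filtered pixel
    # types, columns are the response to (first wavelength, second wavelength).
    R = np.array([[0.85, 0.05],
                  [0.10, 0.80]])

    def unmix(green_filtered, red_filtered):
        # measured = R @ [intensity at first wavelength, intensity at second wavelength]
        measured = np.stack([green_filtered, red_filtered], axis=-1)
        intensities = measured @ np.linalg.inv(R).T   # solve the 2x2 system per pixel
        return intensities[..., 0], intensities[..., 1]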

In some examples, the first wavelength may be different from the second wavelength, and a portion of the first image dataset and a portion of the second image dataset may be acquired substantially simultaneously, with one or more of the first set of illumination conditions illuminating the sample while the sample is illuminated with one or more of the second set of illumination conditions.

In some examples, the first data set and the second data set correspond to a first depth of the sample. The microscope 100 may also adjust the focus of the microscope 100 to image the sample at multiple depths. The microscope 100 may also repeat the acquiring and combining steps (e.g., steps 310-330) to generate a computed reconstructed image. The computed reconstructed image may include a plurality of computed reconstructed images at different depths corresponding to the plurality of depths.
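
A minimal sketch of this multi-depth repetition is shown below; set_focus, acquire_datasets, and combine are hypothetical stand-ins for the focus adjustment, acquisition, and combination described above.

    def acquire_volume(depths, set_focus, acquire_datasets, combine):
        # Adjust focus, then repeat the acquiring and combining steps at each depth.
        reconstructions = []
        for depth in depths:
            set_focus(depth)
            first_set, second_set = acquire_datasets()
            reconstructions.append(combine(first_set, second_set))
        return reconstructions   # one computed reconstructed image per depth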

As explained above in connection with example methods 300, 400, 402, 404, and 500, the computational imaging system described herein may reduce acquisition times for acquiring images at different wavelengths by customizing the acquisition for each wavelength. Wavelengths that will produce more information may be prioritized over wavelengths that will produce less information (e.g., given more acquisition and/or reconstruction time). For example, the green channel may be prioritized over the red and blue channels due to the sensitivity of the human eye to green. By reducing the acquisition and/or reconstruction time for sub-optimal wavelengths, and otherwise maintaining the acquisition and/or reconstruction time for preferential wavelengths, the overall imaging time of the computed imaging system may be reduced without significant adverse effects on the resulting reconstructed image. In other words, loss of information due to reduced acquisition and/or reconstruction times for sub-optimal wavelengths is mitigated by preserving information from prioritized wavelengths.

As described herein, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions (such as those contained within the modules described herein). In their most basic configuration, these computing devices may each include at least one memory device and at least one physical processor.

As used herein, the term "memory" or "memory device" generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDD), Solid State Drives (SSD), optical disk drives, cache, variations or combinations of one or more of these, or any other suitable storage memory.

Additionally, as used herein, the term "processor" or "physical processor" generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the memory device described above. Examples of physical processors include, but are not limited to, a microprocessor, a microcontroller, a Central Processing Unit (CPU), a Field Programmable Gate Array (FPGA) implementing a soft-core processor, an Application Specific Integrated Circuit (ASIC), portions of one or more of these, variations or combinations of one or more of these, or any other suitable physical processor.

Although illustrated as separate elements, method steps described and/or illustrated herein may represent portions of a single application. Additionally, in some embodiments, one or more of these steps may represent or correspond to one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks, such as method steps.

In addition, one or more of the devices described herein may convert data, physical devices, and/or representations of physical devices from one form to another. Additionally or alternatively, one or more of the modules described herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form of computing device to another form of computing device by executing on, storing data on, and/or otherwise interacting with the computing device.

The term "computer-readable medium" as used herein generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer readable media include, but are not limited to, transmission type media such as carrier waves, and non-transitory type media such as magnetic storage media (e.g., hard disk drives, tape drives, and floppy disks), optical storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic storage media (e.g., solid state drives and flash media), and other distribution systems.

One of ordinary skill in the art will recognize that any of the processes or methods disclosed herein can be modified in many ways. The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and may be varied as desired. For example, while the steps shown and/or described herein may be shown or discussed in a particular order, these steps need not necessarily be performed in the order shown or discussed.

Various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein, or include additional steps in addition to those disclosed. Further, the steps of any method as disclosed herein may be combined with one or more steps of any other method as disclosed herein.

A processor as described herein may be configured to perform one or more steps of any of the methods disclosed herein. Alternatively or in combination, the processor may be configured to combine one or more steps of one or more methods as disclosed herein.

Unless otherwise indicated, the terms "connected to" and "coupled to" (and derivatives thereof), as used in the specification and claims, are to be understood as allowing for direct and indirect (i.e., via other elements or components) connection. In addition, the terms "a" or "an," as used in the specification and claims, are to be interpreted as meaning "at least one of." Finally, for convenience of use, the terms "including" and "having" (and derivatives thereof), as used in the specification and claims, are interchangeable with and shall have the same meaning as the word "comprising."

A processor as disclosed herein may be configured with instructions to perform any one or more steps of any method as disclosed herein.

It will be understood that, although the terms "first," "second," "third," etc. may be used herein to describe various layers, elements, components, regions or sections, these do not relate to any particular order or sequence of events. These terms are only used to distinguish one layer, element, component, region or section from another layer, element, component, region or section. A first layer, element, component, region or section discussed herein could be termed a second layer, element, component, region or section without departing from the teachings of the present disclosure.

As used herein, the term "or" is used inclusively to refer to items in substitution or combination.

As used herein, like characters, such as numerals, refer to like elements.

The present disclosure includes the following numbered clauses. Each clause may be combined with one or more other clauses to the extent such combination is consistent with the teachings disclosed herein.

Clause 1. A method for generating a computed reconstructed image, the method comprising: acquiring, with an image sensor, a first image dataset from a sample illuminated with a first set of illumination conditions comprising a first number of illumination conditions, each illumination condition comprising a first wavelength; acquiring, with the image sensor, a second image dataset from the sample illuminated with a second set of illumination conditions comprising a second number of illumination conditions, each illumination condition comprising a second wavelength, wherein the first number is greater than the second number; and combining the first image data set and the second image data set into a computed reconstructed image of the sample.

Clause 2. The method of clause 1, wherein the computed reconstructed image comprises a first spatial frequency bandwidth corresponding to the first wavelength and a second spatial frequency bandwidth corresponding to the second wavelength, the first spatial frequency bandwidth being greater than the second spatial frequency bandwidth.

Clause 3. The method of clause 2, wherein the first spatial frequency bandwidth is greater than a spatial frequency bandwidth of one or more images acquired by the image sensor with the first set of illumination conditions at the first wavelength.

Clause 4. The method of clause 2, wherein the second spatial frequency bandwidth is greater than a spatial frequency bandwidth of one or more images acquired by the image sensor with the second set of illumination conditions at the second wavelength.

Clause 5. The method of clause 2, wherein the computed reconstructed image comprises one or more of increased contrast or aberration correction for the second wavelength based at least in part on the first image dataset from the first wavelength.

Clause 6. The method of clause 1, further comprising: providing the reconstructed image on a display with a first spatial frequency bandwidth and a first user perceivable color corresponding to the first wavelength and a second spatial frequency bandwidth and a second user perceivable color corresponding to the second wavelength, the first spatial frequency bandwidth being greater than the second spatial frequency bandwidth and a spatial frequency bandwidth of the one or more images acquired by the image sensor at the first wavelength with the first set of illumination conditions.

Clause 7. The method of clause 1, wherein the first number is at least 2 times greater than the second number and the second acquisition time associated with the second image dataset is no more than half of the first acquisition time associated with the first image dataset, and wherein the spatial frequency bandwidth of the computed reconstructed image is at least 1.5 times greater than the spatial frequency bandwidth of each of the plurality of images acquired with the first set of illumination conditions.

The method of clause 8, wherein the first image dataset comprises a plurality of images of the sample and the second image dataset comprises one or more images of the sample, and wherein the second image dataset comprises fewer images of the sample than the first image dataset.

The method of clause 9, wherein the computed reconstructed image comprises one or more of an increased spatial frequency bandwidth, correction of optical aberrations, or increased image contrast.

Clause 10. The method of clause 9, wherein the first image dataset is converted to a spatial frequency space and mapped to spatial frequencies in the spatial frequency space based on the first set of illumination conditions to provide an increased spatial frequency bandwidth of the computed reconstructed image compared to the first spatial frequency bandwidth of each of the plurality of images acquired with the first set of illumination conditions.

Clause 11. The method of clause 9, wherein the image sensor includes a spatial frequency bandwidth, and the increased spatial frequency bandwidth of the computed reconstructed image is greater than the spatial frequency bandwidth of the image sensor divided by a magnification of the image of the sample onto the image sensor.

Clause 12. The method of clause 9, wherein the first image dataset comprises a first plurality of images, each image comprising a first spatial frequency bandwidth, and the increased spatial frequency bandwidth of the computed reconstructed image is greater than the first spatial frequency bandwidth of said each image of the first plurality of images.

Clause 13. The method of clause 12, wherein the first plurality of images includes features of the sample that are not resolved with the first spatial frequency bandwidth of said each image of the first plurality of images, and the computed reconstructed image includes features of the sample that are resolved with the increased spatial frequency bandwidth of the computed reconstructed image.

Clause 14. The method of clause 9, wherein the correction of optical aberrations is provided by separating aberration information from sample information so as to reduce the effect of the optical aberrations on the computed reconstructed image, and optionally wherein the aberration information comprises an aberration spatial frequency and phase associated with optics used to image the sample on the image sensor, and optionally wherein the sample information comprises a sample spatial frequency and phase associated with the structure of the sample.

Clause 15. The method of clause 9, wherein the increased image contrast of the computed reconstructed image is provided by computationally amplifying the high spatial frequencies of the reconstructed image to better represent the sample.

Clause 16. The method of clause 9, wherein the computed reconstructed image comprises increased contrast, and wherein the increased contrast comprises an increased ratio of high spatial frequencies to low spatial frequencies in the computed reconstructed image as compared to a ratio of high spatial frequencies to low spatial frequencies for each of the plurality of images of the first image data set.

The method of clause 17, wherein the first image data set and the second image data set are processed separately to generate a first computed reconstructed image from the first image data set and a second computed reconstructed image from the second data set, and wherein the first computed reconstructed image is combined with the second computed reconstructed image to generate the computed reconstructed image.

The method of clause 18, wherein the first image dataset comprises a first plurality of images and the second image dataset comprises one or more images, and wherein the first plurality of images and the one or more images are processed together to generate a computed reconstructed image.

The method of clause 19, wherein the one or more images comprise a single image acquired with the second set of illumination conditions.

The method of clause 20, wherein the one or more images comprise a second plurality of images.

The method of clause 21, wherein the computed reconstructed image comprises a color image including two or more of a red channel, a green channel, or a blue channel.

Clause 22. The method of clause 21, wherein the first wavelength corresponds to one of a red channel, a green channel, or a blue channel, and the second wavelength corresponds to another of the red channel, the blue channel, or the green channel.

Clause 23. The method of clause 22, wherein the reconstructed image is displayed on the display with a first spatial frequency bandwidth for a first color channel corresponding to the first wavelength and with a second spatial frequency bandwidth for a second color channel corresponding to the second wavelength, the first spatial frequency bandwidth being greater than the second spatial frequency bandwidth.

The method of clause 24, wherein the first wavelength comprises green light, the first channel comprises a green channel, the second wavelength comprises red or blue light, and the second color channel comprises a red channel or a blue channel, and wherein the green channel is displayed on the display with the first spatial frequency bandwidth and the red channel or the blue channel is displayed on the display with the second spatial frequency bandwidth.

The method of clause 25, wherein the computed reconstructed image includes a red channel, a green channel, and a blue channel, and wherein the second wavelength includes red light corresponding to the red channel and the third wavelength includes blue light corresponding to the third channel, wherein the third channel is displayed on the display with a third spatial frequency bandwidth that is less than the first spatial frequency bandwidth.

Clause 26. The method of clause 21, wherein the image sensor comprises a sensor comprising a two-dimensional array of pixels, and a first color of the first image dataset corresponds to a first wavelength and a second color of the second image dataset corresponds to a second wavelength, and wherein the computed reconstructed image is mapped to a red channel, a green channel, and a blue channel based on the first wavelength and the second wavelength.

Clause 27. The method of clause 26, wherein the image sensor comprises a grayscale image sensor comprising a two-dimensional array of pixels.

The method of clause 28, wherein the image sensor comprises a color image sensor including a two-dimensional array of pixels and a color filter array including a plurality of color filters arranged over the two-dimensional array.

Clause 29. The method of clause 28, wherein the first image data set is determined based on the first wavelength and a first absorption characteristic of the color filter at the first wavelength, and the second image data set is determined based on the second wavelength and a second absorption characteristic of the color filter at the second wavelength.

Clause 30. The method of clause 29, wherein the first image data set and the second image data set are combined according to a first absorption characteristic of a color filter at a first wavelength and a second absorption characteristic of the color filter at a second wavelength to generate a computed reconstructed image.

Clause 31. The method of clause 28, wherein the first wavelength is different from the second wavelength, and a portion of the first image dataset and a portion of the second image dataset are acquired substantially simultaneously, with one or more of the first set of illumination conditions illuminating the sample while the sample is illuminated with one or more of the second set of illumination conditions.

Clause 32. The method of clause 1, wherein the first wavelength is different from the second wavelength.

Clause 33. The method of clause 32, wherein the first wavelength comprises a first color and the second wavelength comprises a second color different from the first color.

Clause 34. The method of clause 32, wherein the first wavelength comprises a first peak of a first illumination source emitting a first wavelength distribution, the first wavelength distribution comprising a first full width half maximum, and wherein the second wavelength comprises a second peak of a second wavelength distribution, the second wavelength distribution comprising a second full width half maximum, and wherein the first full width half maximum does not overlap the second full width half maximum.

Clause 35. The method of clause 32, wherein the first wavelength is in one of the following ranges and the second wavelength is in a different one of the following ranges: an ultraviolet range from about 200 nanometers (nm) to about 380 nanometers (nm), a violet range from about 380nm to about 450nm, a blue range from about 450nm to about 485nm, a cyan range from about 485nm to 500nm, a green range from about 500nm to 565nm, a yellow range from about 565nm to about 590nm, an orange range from about 590nm to 625nm, a red range from about 625nm to about 740nm, or a near infrared range from about 700nm to about 1100 nm.

Clause 36. The method of clause 33, wherein the first wavelength is in one of the ranges and the second wavelength is in a different one of the ranges.

Clause 37. The method of clause 1, further comprising: acquiring, with the image sensor, a third image dataset from the sample illuminated with a third wavelength using a third set of illumination conditions; wherein the third image dataset is combined with the first image data set and the second image data set to generate a computed reconstructed image.

The method of clause 38, wherein the third set of illumination conditions includes a third number of illumination conditions that is less than the first number of illumination conditions.

Clause 39. The method of clause 37, further comprising: acquiring, with the image sensor, N additional image datasets from the sample illuminated with N additional wavelengths using additional N sets of illumination conditions; wherein the N additional image datasets are combined with the first image data set, the second image data set, and the third image dataset to generate a computed reconstructed image; wherein N is an integer of at least one.

The method of clause 40, wherein N comprises an integer in a range from about 10 to about 100.

The method of clause 41, wherein the computed reconstructed image comprises a hyperspectral image.

Clause 42. The method of clause 1, wherein the computed reconstructed image comprises one or more of a 2D image, a 3D image, a 2D intensity image, a 3D intensity image, a 2D phase image, a 3D phase image, a 2D fluorescence image, a 3D fluorescence image, a 2D hyperspectral image, or a 3D hyperspectral image.

The method of clause 43, wherein the first data set and the second data set correspond to a first depth of the sample, the method further comprising: adjusting a focal length of the microscope to image the sample at a plurality of depths; and repeating the acquiring and combining steps to generate a computed reconstructed image comprising a plurality of computed reconstructed images at different depths corresponding to the plurality of depths.

Clause 44. The method of clause 1, further comprising determining the first wavelength.

Clause 45. The method of clause 44, wherein the first wavelength is user-defined.

Clause 46. The method of clause 44, wherein determining the first wavelength further comprises: acquiring, with an image sensor, an initial image dataset of a sample illuminated with a plurality of wavelengths; determining that a first image of the initial image dataset includes more information than a second image of the initial image dataset; selecting a wavelength of the plurality of wavelengths corresponding to the first image as the first wavelength; and selecting a wavelength of the plurality of wavelengths corresponding to the second image as the second wavelength.

The method of clause 47, wherein the information comprises spatial frequency information.

Clause 48. The method of clause 46, further comprising determining the first set of illumination conditions based on information identified from the initial image dataset.

Clause 49. The method of clause 48, wherein determining the first set of illumination conditions comprises determining one or more of a number of light sources, a location of a light source, a combination of locations of a plurality of light sources, an illumination angle, a combination of illumination angles, a number of illuminations, a location of a diffuser, a pattern of light, a filter, a mask, or a focal point of the sample.

Clause 50. The method of clause 46, further comprising determining a computational process for reconstructing the first image based on the initial image dataset.

Clause 51. The method of clause 1, wherein the computed reconstructed image comprises a three-dimensional image.

Clause 52. The method of clause 1, wherein the second set of illumination conditions is different from the first set of illumination conditions.

Clause 53. The method of clause 1, wherein the second image dataset is smaller than the first image dataset.

Clause 54. The method of clause 1, wherein the first image dataset is acquired after the second image dataset.

Clause 55. A microscope for image reconstruction, the microscope comprising: an illumination source configured to illuminate the sample with a plurality of wavelengths at a plurality of angles; an image sensor; an objective lens for imaging the sample illuminated with the illumination assembly onto the image sensor; and a processor operatively coupled to the illumination assembly and the image sensor, the processor configured with instructions to perform the method of any of the preceding clauses.

Clause 56. The microscope of clause 55, wherein the plurality of wavelengths includes one or more of a violet wavelength in a range from about 380 nanometers (nm) to about 450 nanometers (nm), a blue wavelength in a range from about 450nm to about 485nm, a cyan wavelength in a range from about 485nm to 500nm, a green wavelength in a range from about 500nm to 565nm, a yellow wavelength in a range from about 565nm to about 590nm, an orange wavelength in a range from about 590nm to 625nm, a red wavelength in a range from about 625nm to about 740nm, an infrared wavelength greater than 700nm, or a near infrared wavelength in a range from about 700nm to about 1100 nm.

Clause 57. The microscope of clause 55, wherein the illumination assembly is configured to illuminate the sample with the plurality of light sources at a plurality of positions corresponding to different illumination angles of the sample.

Clause 58. The microscope of clause 55, further comprising a focus actuator coupled to the processor to adjust a depth of the sample used to form the image of the sample on the image sensor.

Clause 59. The microscope of clause 58, wherein the focus actuator is configured to move to a first configuration to image the sample at a first depth and to move to a second configuration to image the sample at a second depth.

Embodiments of the present disclosure have been shown and described as set forth herein, and are provided by way of example only. Those of ordinary skill in the art will recognize many adaptations, modifications, variations, and alternatives without departing from the scope of the present disclosure. Several alternatives and combinations of the embodiments disclosed herein may be utilized without departing from the scope of the present disclosure and the inventions disclosed herein. Accordingly, the scope of the presently disclosed invention should be limited only by the scope of the appended claims and equivalents thereof.
