Fluorescence microscopy system, apparatus and method

Document No.: 12615  Publication date: 2021-09-17

Reading note: This technology, Fluorescence microscopy system, apparatus and method, was designed and created by Matthew C. Putman, John B. Putman, Vadim Pinskiy, and Denis Sharoukhov on 2020-02-04. Its main content is as follows: A fluorescence microscopy system includes a light source capable of emitting light that fluoresces a sample and light that does not fluoresce the sample. The emitted light is directed toward the sample through one or more filters and an objective channel. A light ring projects light onto the sample at an oblique angle through a dark field channel. One of the filters may modify the light to match a predetermined band gap energy associated with the sample, and another filter may filter the wavelengths of light reflected from the sample toward a camera. The camera may produce an image from the received light, and sample classification and feature analysis may be performed on the image.

1. A microscope system, comprising:

a frame;

one or more incoherent light sources connected to the frame and configured to emit at least a first wavelength of light that will fluoresce a sample and a second wavelength of light that will not fluoresce the sample, wherein the emitted light is configured to be directed to the sample;

an excitation filter connected to the frame and configured to filter light from the one or more incoherent light sources, wherein the filtered light is configured to match a predetermined band gap energy associated with the sample;

an objective lens coupled to the frame, the objective lens including a bright field channel and a dark field channel;

a nosepiece connected to the objective lens via an attachment;

a slider connected to the frame and positioned along an optical path between the objective lens and the one or more incoherent light sources, wherein the slider comprises at least one configuration configured to transmit light along the optical path at least to the dark field channel, the dark field channel configured to direct light at an oblique angle to the sample; and

an emission filter connected to the frame and configured to filter selected wavelengths of light reflected from the sample to a receiving camera;

one or more processors; and

a memory storing instructions that, when executed by the one or more processors, cause the microscope system to:

obtain image data from the receiving camera, the image data based on the directed light reflected from the sample;

classify the sample with a trained classifier based on the received image data;

retrieve a stored system configuration associated with the classification of the sample;

apply the system configuration to one or more of the one or more incoherent light sources, the excitation filter, the emission filter, or the receiving camera;

obtain additional image data from the receiving camera, the additional image data obtained after the system configuration has been applied;

identify sample defects using an image data model based on the obtained additional image data; and

generate a feature map based on the identified sample defects.

2. The system of claim 1, wherein at least one configuration of the slider is configured to transmit light to both the bright field channel and the dark field channel.

3. The system of claim 1, wherein the slider is a filter slider connected to the frame and positioned below a dark field insert, the filter slider configured to provide:

multiple types of excitation filters; and

one or more additional emission filters for one or more of the bright field channel or the dark field channel.

4. The system of claim 1, further comprising at least a second camera, and wherein the emitted light comprises visible light and non-visible light directed to the respective camera.

5. The system of claim 1, further comprising one or more additional cameras connected to the frame, each additional camera configured to receive light of a respective unique wavelength.

6. The system of claim 1, further comprising:

one or more processors; and

a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to:

receive image data from the receiving camera, the image data based on the directed light from the sample;

classify the sample with a trained classifier based on the received image data;

retrieve a stored system configuration associated with the classification of the sample; and

apply the system configuration to one or more of the light source, excitation filter, emission filter, or receiving camera.

7. The system of claim 6, wherein the memory further stores instructions to:

receive additional image data from the receiving camera, the additional image data being received after the system configuration has been applied;

identify a sample defect using an image data model based on the received additional image data; and

generate a feature map based on the identified sample defect.

8. The system of claim 1, wherein the one or more incoherent light sources further comprise:

a first light source connected to the frame and configured to emit light from the one or more incoherent light sources toward the sample; and

an additional light source attached to the frame below the sample and configured to increase an intensity of light on the sample by emitting light directed to the sample simultaneously with light emitted by the first incoherent light source.

9. The system of claim 1, further comprising a beam splitter connected to the frame and configured to direct the emitted light toward the sample.

10. A method, comprising:

emitting, from one or more incoherent light sources, light of at least a first wavelength that fluoresces a sample and light of a second wavelength that does not fluoresce the sample, wherein the emitted light is directed to the sample;

filtering the emitted light by an excitation filter, the filtered light matching a predetermined band gap energy;

transmitting, by a slider, the emitted light to the sample at an oblique angle through a dark field channel of an objective lens;

directing light reflected from the sample to a receiving camera using an emission filter, the reflected light responsive to the directed filtered light, wherein the directed light reflected from the sample comprises a selected wavelength;

obtaining, from the receiving camera, image data based on the directed light reflected from the sample;

classifying the sample with a trained classifier based on the image data;

retrieving a stored system configuration associated with the classification of the sample;

applying the system configuration to one or more of the one or more incoherent light sources, the excitation filter, the emission filter, or the receiving camera;

obtaining additional image data from the receiving camera, the additional image data obtained after applying the system configuration;

identifying a sample defect using an image data model based on the obtained additional image data; and

generating a feature map based on the identified sample defect.

11. The method of claim 10, further comprising transmitting the filtered light through the slider to the sample to reach a bright field channel of the objective lens.

12. The method of claim 10, wherein a dark field insert emits light at an oblique angle toward the sample via the dark field channel of the objective lens, wherein the dark field insert is positioned above the dark field channel of the objective lens and comprises a light ring.

13. The method of claim 10, wherein the emitted light comprises visible light and non-visible light, and the method further comprises receiving, by a second camera, at least a portion of the directed light reflected from the sample.

14. The method of claim 10, further comprising receiving, by one or more additional cameras, light of a unique wavelength reflected from the sample.

15. The method of claim 10, further comprising:

receiving image data from the receiving camera, the image data based on the directed light reflected from the sample;

classifying the sample with a trained classifier based on the received image data;

retrieving a stored system configuration associated with the classification of the sample; and

applying the system configuration to one or more of the light source, excitation filter, emission filter, or receiving camera.

16. The method of claim 15, further comprising:

receiving additional image data from the receiving camera, the additional image data being received after the system configuration has been applied;

identifying a sample defect using an image data model based on the received additional image data; and

generating a feature map based on the identified sample defect.

17. The method of claim 10, further comprising:

emitting first light from a first light source of the one or more incoherent light sources toward the sample; and

increasing the intensity of light on the sample by emitting second light directed to the sample from an additional light source of the one or more incoherent light sources positioned below the sample, wherein the second light is emitted simultaneously with the light emitted by the first light source of the one or more incoherent light sources.

18. The method of claim 10, further comprising directing the emitted light toward the sample with a beam splitter.

19. A microscope device, comprising:

one or more incoherent light sources configured to emit at least a first wavelength of light that will fluoresce a sample and a second wavelength of light that will not fluoresce the sample, wherein the emitted light is configured to be directed to the sample;

an excitation filter configured to filter light from the one or more incoherent light sources, wherein the filtered light is configured to match a predetermined band gap energy associated with the sample;

an objective lens comprising a bright field channel and a dark field channel;

a nosepiece connected to the objective lens via an accessory;

a dark field insert secured to the accessory and positioned above the dark field channel of the objective lens, the dark field insert including a light ring configured to project light at an oblique angle onto the sample;

an emission filter configured to filter selected wavelengths of light reflected from the sample to a receiving camera;

one or more processors; and

a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to:

receive image data from the receiving camera, the image data based on the directed light reflected from the sample;

classify the sample with a trained classifier based on the received image data;

retrieve a stored system configuration associated with the classification of the sample;

apply the system configuration to one or more of the light source, excitation filter, emission filter, or receiving camera;

receive additional image data from the receiving camera, the additional image data being received after the system configuration has been applied;

identify a sample defect using an image data model based on the received additional image data; and

generate a feature map based on the identified sample defect.

Technical Field

The present disclosure relates generally to fluorescence microscopy systems, devices, and methods using incoherent illumination techniques. More particularly, embodiments of the present invention relate to fluorescence microscopy systems that can provide variable wavelength incoherent light targeted to excite specific layers of a sample or materials contained in the sample, and automatically detect characteristics of the sample from the resulting fluorescence caused by absorption of light or other electromagnetic radiation.

Background

Projecting non-visible light at a sample and capturing the resulting fluorescence/photoluminescence emitted by the sample can provide important information about the number, type, location, and morphology of features on the sample. Furthermore, certain characteristics of the sample, such as purity or structural imperfections, may only be observable using non-visible illumination. As understood by one of ordinary skill in the art, a sample refers to an article of inspection (e.g., a wafer or biological slide), and a feature refers to an observable characteristic of the sample, including deformities and/or defects. Features may include, but are not limited to: circuits, circuit board components, biological cells, tissue, and defects (e.g., impurities, structural imperfections, irregularities, stacking faults, contaminants, crystal defects, scratches, dust, fingerprints).

Note that the term fluorescence (FL) as used herein includes photoluminescence, which is typically associated with light emission from semiconductor materials. Non-visible light refers to the region of the electromagnetic spectrum between visible light and X-rays, having wavelengths of 10 to 400 nanometers (nm). In some embodiments, for example, the light wavelength may be selected within 200 nm to 400 nm, 300 nm to 400 nm, and/or any other suitable wavelength range. Furthermore, the wavelength of light required to excite the sample and cause fluorescence through absorption of light or other electromagnetic radiation is not limited to the range between 10 nm and 400 nm, and in some embodiments may be selected above 400 nm to provide the desired excitation, as explained herein. Coherent light refers to light whose photons have the same frequency and whose waves are in phase with one another. In contrast, the photons of incoherent light do not all share the same frequency, and their waves are out of phase with one another.
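The band gap matching described above reduces to the photon-energy relation E = hc/λ. A minimal sketch follows, assuming nothing beyond that relation; the function names are illustrative, not from the disclosure:

```python
# Illustrative helpers (names not from the disclosure): relate an excitation
# wavelength to a sample's band gap energy. The factor 1239.84 is h*c
# expressed in eV*nm.
def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electron-volts for a wavelength in nanometers."""
    return 1239.84 / wavelength_nm

def excites_band_gap(wavelength_nm: float, band_gap_ev: float) -> bool:
    """True if a photon at this wavelength meets or exceeds the band gap
    energy, i.e., can be absorbed and cause fluorescence/photoluminescence."""
    return photon_energy_ev(wavelength_nm) >= band_gap_ev
```

For a material with a band gap of roughly 3.4 eV, for example, 360 nm light (about 3.44 eV per photon) can provide excitation, while 400 nm light (about 3.10 eV) cannot.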

While coherent light sources (e.g., lasers) are commonly used to fluoresce samples, such light sources are not ideal for detecting large features or for certain types of samples (e.g., patterned wafers). Incoherent light sources, on the other hand, are better suited to detecting a greater range of features, including large features and features on patterned wafers. Furthermore, a coherent light source illuminates only a small portion of the field of view, while incoherent light illuminates the entire field of view, making incoherent light more suitable for creating a sample feature map. The sample feature map classifies features on the sample and specifies their locations. Note that the term field of view, as understood by those of ordinary skill in the art, refers to the inspection area captured at one time by an image sensor. Further, one of ordinary skill in the art will readily appreciate that the terms field of view and image are used interchangeably herein.
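As a concrete illustration of the feature map just described — classified features together with their locations — a minimal data-structure sketch follows; the class and function names are hypothetical, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    label: str      # classification, e.g. "scratch" or "stacking fault"
    x_um: float     # X location on the sample, in micrometers
    y_um: float     # Y location on the sample, in micrometers

def build_feature_map(detections):
    """Group detected features by classification, preserving each location."""
    feature_map = {}
    for det in detections:
        feature_map.setdefault(det.label, []).append((det.x_um, det.y_um))
    return feature_map
```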

Therefore, it is desirable to have a new fluorescence microscopy mechanism employing incoherent illumination techniques to excite specific layers of a sample, or materials contained in a sample, so that they fluoresce, and to automatically detect characteristics of the sample from the resulting fluorescence. Furthermore, it is also desirable that the same mechanism be able to inspect characteristics of the sample using illumination techniques that do not cause fluorescence.

Disclosure of Invention

In one example, a system includes a frame; one or more incoherent light sources connected to the frame and configured to emit at least a first wavelength of light that will fluoresce a sample and a second wavelength of light that will not fluoresce the sample; an excitation filter connected to the frame and configured to filter light from the one or more light sources, wherein the filtered light is configured to match a predetermined band gap energy associated with the sample; an objective connected to the frame and including a bright field channel and a dark field channel; a slider connected to the frame and positioned along an optical path between the objective and the one or more incoherent light sources, wherein the slider includes at least one configuration configured to transmit light along the optical path at least to the dark field channel, the dark field channel being configured to direct light at an oblique angle to the sample; and an emission filter connected to the frame and configured to filter selected wavelengths of light reflected from the sample to a receiving camera.

In some examples, at least one configuration of the slider is configured to transmit light to both the bright field channel and the dark field channel.

In some examples, the system further includes a nosepiece connected to the frame, the objective lens connected to the nosepiece via an accessory, and a dark field insert secured to the accessory and positioned over the dark field channel of the objective lens, the dark field insert including a light ring configured to project light at an oblique angle onto the sample.

In some embodiments, the slider is a filter slider connected to the frame and positioned below the dark field insert, the filter slider configured to provide multiple types of excitation filters and one or more additional emission filters for one or more of the bright field channel or the dark field channel.

In some examples, the system further includes at least a second camera, and the emitted light includes visible light and non-visible light directed to the respective camera.

In some examples, the system includes one or more additional cameras connected to the frame, each additional camera configured to receive light of a respective unique wavelength.

In some examples, the system further includes one or more processors and memory storing instructions that, when executed by the one or more processors, cause the one or more processors to receive image data from a receiving camera, classify a sample with a trained classifier based on the received image data, retrieve a stored system configuration associated with the classification of the sample, and apply the system configuration to one or more of the light source, excitation filter, emission filter, or receiving camera, wherein the image data is based on directional light from the sample.

In some examples, the memory further stores instructions to receive additional image data from the receiving camera, identify a sample defect with an image data model based on the received additional image data, and generate a feature map based on the sample defect, wherein the additional image data is received after the system configuration has been applied.

In some examples, the one or more incoherent light sources further include a first light source connected to the frame and configured to emit light from the one or more incoherent light sources toward the sample, and an additional light source attached to the frame below the sample and configured to increase the intensity of light on the sample by emitting light directed to the sample simultaneously with the light emitted by the first incoherent light source.

In some examples, the system further includes a beam splitter connected to the frame and configured to direct the emitted light toward the sample.

In one example, a method includes emitting light of a first wavelength that fluoresces a sample and light of a second wavelength that does not fluoresce the sample from one or more incoherent light sources, wherein the emitted light is directed to the sample; filtering the emitted light by an excitation filter, the filtered light matching a predetermined band gap energy; transmitting, by a slider, the emitted light to the sample at an oblique angle via a dark field channel of an objective lens; and directing light reflected from the sample, in response to the directed filtered light, to a receiving camera, wherein the directed light reflected from the sample includes a selected wavelength.

In some examples, the method further includes transmitting the filtered light to the sample through the slider to reach a bright field channel of the objective lens.

In some examples, the method includes the dark field insert emitting light to the specimen at an oblique angle via a dark field channel of the objective lens, wherein the dark field insert is positioned above the dark field channel of the objective lens and includes a light ring.

In some examples, the emitted light includes visible light and invisible light, and the method further includes receiving, by the second camera, at least a portion of the directional light reflected from the sample.

In some examples, the method includes receiving, by one or more additional cameras, light of a unique wavelength reflected from the sample.

In some examples, the method includes receiving image data from a receiving camera, classifying the sample with a trained classifier based on the received image data, retrieving a stored system configuration associated with the classification of the sample, and applying the system configuration to one or more of the light source, excitation filter, emission filter, or receiving camera, wherein the image data is based on the directional light reflected from the sample.

In some examples, the method includes receiving additional image data from a receiving camera, identifying a sample defect with an image data model based on the received additional image data, and generating a feature map based on the sample defect, wherein the additional image data is received after the system configuration has been applied.

In some examples, the method further includes emitting first light from a first light source of the one or more incoherent light sources toward the sample, and increasing the intensity of the light on the sample by emitting second light directed to the sample from an additional light source of the one or more incoherent light sources from below the sample, wherein the additional light is emitted simultaneously with the light emitted by the first light source of the one or more incoherent light sources.

In some examples, the method further includes directing the emitted light toward the sample with a beam splitter.

In one example, an apparatus includes one or more incoherent light sources configured to emit at least a first wavelength of light that will fluoresce a sample and a second wavelength of light that will not fluoresce the sample; an excitation filter configured to filter light from the one or more light sources, wherein the filtered light is configured to match a predetermined band gap energy associated with the sample; an objective including a bright field channel and a dark field channel; a nosepiece connected to the objective via an accessory; a dark field insert secured to the accessory and positioned over the dark field channel of the objective, the dark field insert including a light ring configured to project light at the sample at an oblique angle; an emission filter configured to filter reflected light of a selected wavelength from the sample to a receiving camera; one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to receive image data from the receiving camera, wherein the image data is based on the directed light reflected from the sample, classify the sample with a trained classifier based on the received image data, retrieve a stored system configuration associated with the classification of the sample, apply the system configuration to one or more of the light source, the excitation filter, the emission filter, or the receiving camera, receive additional image data from the receiving camera, identify a sample defect with an image data model based on the received additional image data, and generate a feature map based on the sample defect.
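The processing pipeline recited above (acquire, classify, reconfigure, re-acquire, identify defects, map) can be sketched at a high level as follows. Every component object and method name here is a hypothetical stand-in, since the disclosure does not define a software API:

```python
# Hedged sketch of the classify-then-reconfigure inspection flow; the
# camera, classifier, configuration, and defect-model interfaces are
# assumptions for illustration only.
def inspect(camera, classifier, configurations, defect_model):
    # 1. Acquire an initial image of the sample.
    image = camera.capture()

    # 2. Classify the sample with the trained classifier.
    sample_class = classifier.classify(image)

    # 3. Retrieve the stored configuration for that classification and
    #    apply it (light source, excitation/emission filters, camera).
    configurations[sample_class].apply()

    # 4. Re-image under the new configuration and identify defects
    #    with the image data model.
    defects = defect_model.identify(camera.capture())

    # 5. The identified defects then feed feature-map generation.
    return {"class": sample_class, "defects": defects}
```

In practice, the apply step would drive the physical light sources, filters, and camera settings before the second acquisition.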

Drawings

In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:

fig. 1A and 1B illustrate an example of a fluorescence microscopy system in accordance with aspects of the disclosed technology.

Fig. 2 shows an exemplary embodiment of a fluorescence microscopy system comprising two imaging devices.

Fig. 3A and 3B show an exemplary embodiment of a bright field/dark field slider in the optical path of a fluorescence microscopy system.

Fig. 3C, 4A and 4B illustrate exemplary embodiments of bright field/dark field sliders.

Fig. 5A shows an exemplary embodiment of a fluorescence microscopy system with a cylinder attachment including a dark field insert.

Fig. 5B illustrates an exemplary nosepiece with a cylinder attachment including a dark field insert.

Fig. 6A illustrates an exemplary embodiment of a dark field insert.

Fig. 6B illustrates an exemplary embodiment of a cylinder attachment.

Fig. 6C illustrates an exemplary embodiment of a filter slider.

Fig. 7A illustrates, at a high level, an exemplary method for illuminating a sample using an FM inspection system in accordance with aspects of the disclosed technology.

Fig. 7B illustrates steps of an exemplary process for identifying sample classifications and automatically adjusting light sources and filters for an FM inspection system in accordance with aspects of the disclosed technology.

FIG. 7C illustrates steps of an exemplary process for automatically identifying and/or classifying sample defects, in accordance with aspects of the disclosed technology.

Fig. 8 illustrates a general configuration of an embodiment of a computer analysis system according to some embodiments of the disclosed subject matter.

FIG. 9 illustrates an image processing algorithm that is first trained using training data so that the image processing module can identify a sample and features on the sample.

Fig. 10 illustrates an exemplary classification method using a Convolutional Neural Network (CNN) in accordance with aspects of the disclosed technology.

Detailed Description

In accordance with some embodiments of the disclosed subject matter, mechanisms (which may include systems, methods, apparatus, devices, etc.) are provided for fluorescence microscopy that use incoherent illumination techniques to excite specific layers of a sample, or materials contained in a sample, so that they fluoresce, and that automatically detect characteristics of the sample from the resulting fluorescence. The same mechanisms can also be used to inspect characteristics of the sample using illumination techniques that do not cause fluorescence. Further, in some embodiments, pigments may be added to the sample, and incoherent illumination techniques may be used to cause the pigments to fluoresce. Examination (sometimes referred to as inspection) refers to scanning, imaging, analyzing, measuring, and any other suitable review of a sample using the disclosed incoherent microscopy mechanisms for fluorescence imaging.

Fig. 1A and 1B illustrate an example of a fluorescence microscopy system (referred to herein as "FMIS 100") using incoherent illumination to automatically analyze fluorescence emitted from a sample, according to some embodiments of the disclosed subject matter. At a high level, according to some embodiments, the basic components of the FMIS 100 include one or more illumination sources (e.g., light sources 25, 25a, and 28) for providing incoherent light, a focusing mechanism 32 for finding the focal plane of the sample, an illuminator 22, an imaging device 6, one or more objective lenses 35, a stage 30, one or more filter mechanisms 15, a bright/dark field slider 40, and a control module 110 including hardware, software, and/or firmware, and a computer analysis system 115. As shown, control module 110 and computer analysis system 115 are coupled to inspection system 100 via communication channel 120. It should be understood that the communication channel 120 may include one or more signal transmission devices, such as a bus or a wireless RF channel. It should also be understood that FMIS 100 may include additional microscope components as are known in the art. For example, the FMIS 100 may include a frame (not shown) to which various components of the FMIS 100 (e.g., one or more illumination sources, focusing mechanisms, illuminators, imaging devices, one or more objective lenses, stages, one or more filter mechanisms, bright/dark field sliders, control modules, nosepiece, beam splitters) may be connected (e.g., for portability, stability, modular support, etc.). In some embodiments, the computer analysis system may be connected to the frame, while in some embodiments, the computer analysis system may not be connected to the frame. Other microscope components not listed herein but known in the art may also be attached to the frame.

The FMIS 100 may be implemented as part of any suitable type of microscope. For example, in some embodiments, the FMIS 100 may be implemented as part of an optical microscope using reflected light (as shown in fig. 1A) and/or transmitted light (as shown in fig. 1B). More particularly, the FMIS 100 may be implemented as part of an optical microscope available from Nanotronics Imaging, Inc. of Cuyahoga Falls, OH.

In some embodiments, an XY translation stage may be used for stage 30. The XY translation stage may be driven by a stepper motor, a servo motor, a linear motor, a piezoelectric motor, and/or any other suitable mechanism. In some embodiments, the XY translation stage may be configured to move the sample in the X-axis and/or Y-axis directions under the control of any suitable controller.

In some embodiments, a focus mechanism 32 coupled to stage 30 may be used to adjust the stage in the Z direction toward and away from objective lens 35. The focus mechanism 32 may be used to make coarse focus adjustments at distances of, for example, 0 to 5 mm, 0 to 10 mm, 0 to 30 mm, and/or any other suitable range. The focus mechanism 32 may also be used to move the stage 30 up and down to allow samples of different thicknesses to be placed on the stage. In some embodiments, the focus mechanism 32 may also be used to provide fine focus at distances of, for example, 0 to 50 μm, 0 to 100 μm, 0 to 200 μm, and/or any other suitable range. In some embodiments, the focus mechanism 32 may also include a positioning device. The positioning device may be configured to determine the position of stage 30 at any suitable point in time. In some embodiments, any suitable position (e.g., the position of the stage when the sample is in focus) may be stored in any suitable manner and subsequently used to bring the stage back to that position, even after a reset and/or power cycle of the FMIS 100. In some embodiments, the positioning device may be a linear encoder, a rotary encoder, or any other suitable mechanism to track the absolute position of stage 30 relative to the objective lens.
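The positioning device's store-and-restore behavior can be sketched as follows; the encoder and motor interfaces are assumptions for illustration, not from the disclosure:

```python
# Hypothetical interfaces: `encoder.read()` returns the absolute stage
# position, and `motor.move_to(pos)` drives the stage to that position.
class StagePositioner:
    def __init__(self, encoder, motor):
        self.encoder = encoder
        self.motor = motor
        self.saved_position = None

    def save_position(self):
        """Record the current absolute position, e.g., when in focus."""
        self.saved_position = self.encoder.read()

    def restore_position(self):
        """Drive the stage back to the previously saved position."""
        if self.saved_position is not None:
            self.motor.move_to(self.saved_position)
```

To survive a reset or power cycle as described above, the saved position would be written to nonvolatile storage rather than held only in memory.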

According to some embodiments, the FMIS 100 may include one or more objective lenses 35. The objectives may have different magnifications and/or be configured to operate with fluorescence, as well as bright/dark field, Differential Interference Contrast (DIC), polarized light, cross-polarized (cross-polarized) light, and/or any other suitable form of illumination. In some embodiments, the objective lens and/or illumination technique used to inspect the sample may be controlled by software, hardware, and/or firmware.

In some embodiments, a second focusing mechanism (not shown) may be used to drive the objective lens 35 in the Z direction toward and away from the stage 30. The second focusing mechanism may be designed for coarse or fine focus adjustment of the objective lens 35. The second focusing mechanism may be a stepper motor, a servo motor, a linear actuator, a piezoelectric motor, and/or any other suitable mechanism. For example, in some embodiments, a piezoelectric motor may be used, and the piezoelectric motor may drive the objective lens by 0 to 50 micrometers (μm), 0 to 100 μm, or 0 to 200 μm, and/or any other suitable distance range.

In some embodiments, communication between the control modules (e.g., controllers and controller interfaces) and components of the FMIS 100 may use any suitable communication technology that provides the ability to communicate with one or more other devices and/or conduct data transactions with a computer network. By way of example, the communication techniques implemented may include, but are not limited to: analog technologies (e.g., relay logic), digital technologies (e.g., RS232, ethernet, or wireless), network technologies (e.g., Local Area Network (LAN), Wide Area Network (WAN), the internet), bluetooth technologies, near field communication technologies, secure RF technologies, and/or any other suitable communication technologies.

In some embodiments, any suitable input device (e.g., keyboard, mouse, joystick, touch screen, etc.) may be used to communicate operator inputs to the control module 110.

In some embodiments, the computer analysis system 115 may be coupled to or included in the FMIS 100 in any suitable manner using any suitable communication technology, such as analog technology (e.g., relay logic), digital technology (e.g., RS232, ethernet, or wireless), network technology (e.g., Local Area Network (LAN), Wide Area Network (WAN), the internet), bluetooth technology, near field communication technology, secure RF technology, and/or any other suitable communication technology. The computer analysis system 115 and modules within the computer analysis system 115 may be configured to perform a number of functions as further described herein using images output by the FMIS 100 and/or stored on computer readable media.

The computer analysis system 115 may include any suitable hardware (which may execute software in some embodiments), such as a computer, a microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), and a Digital Signal Processor (DSP) (any of which may be referred to as a hardware processor), an encoder, circuitry for reading an encoder, a memory device (including one or more EPROMs, one or more EEPROMs, dynamic random access memory ("DRAM"), static random access memory ("SRAM"), and/or flash memory), and/or any other suitable hardware element.

Computer readable media can be any non-transitory media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Video Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.

FMIS 100 may include one or more illumination sources, such as light sources 25, 25a, and 28. In some embodiments, such as shown in fig. 1A, reflected illumination (i.e., light originating from above the sample) may be used. The reflected light passes through the vertical illuminator 22 to the beam splitter 20. The beam splitter 20 can reflect light from the illumination source(s) 90° downward through the nosepiece 23 and through the bright field channel 42 of the objective lens 35 to the sample. In other embodiments, such as shown in fig. 1B, transmissive illumination (i.e., light originating from below the sample (light source 25a)) may be used. The different illumination sources may be configured to provide illumination of different wavelengths from one another. Different illumination sources may also be adjusted to control the intensity provided per unit area.

As used herein, a beam splitter may refer to a mirror, dichroic mirror, filter, or beam combiner that transmits light of a known specified wavelength and combines the transmitted light with light of another known specified wavelength.

In some embodiments, as shown in fig. 1B, to increase the light intensity on the sample, reflected light from illumination source 25/28 may be projected simultaneously with transmitted light from illumination source 25 a. In some aspects, the various illumination sources may provide light of similar or equal wavelengths. In other embodiments, the FMIS 100 may include a single illumination source that may provide light of different wavelength ranges.

In some embodiments, for example, the first illumination source 25 provides non-visible light 8 (e.g., projects light having a wavelength in the range of 10 to 400 nanometers (nm)), while the second illumination source 28 provides visible light 9 (e.g., projects light having a wavelength in the range of 400 to 740 nanometers (nm)). In further embodiments, the illumination source may provide other suitable wavelengths.

In some embodiments, as shown in fig. 1A and 1B, the illumination source 25 is positioned such that light of the illumination source 25 is projected in a substantially horizontal direction toward the vertical illuminator 22. Illumination sources 25, 25a, and 28 may include focusing lenses suitable for the wavelength of the emitted light of each light source.

In some embodiments using two illumination sources, the beam splitter 60 is placed in the optical path of the two illumination sources (e.g., illumination sources 25 and 28) before the light travels to the vertical illuminator 22. The illumination sources may be activated such that they provide illumination at the same time or at different times. Other placements of the illumination sources are contemplated without departing from the scope of the disclosed technology. Note that combinations of the above devices may be used to reflect and transmit the desired illumination source and wavelength in any suitable configuration. In some embodiments, a beam splitter with a particular cut-off wavelength is selected to reflect the wavelengths of light emitted by illumination source 28 and to allow the wavelengths of light emitted from illumination source 25 to pass through. The beam splitter 60 may be designed for a 45° angle of incidence such that light from illumination source 28 is reflected at a 90° angle and travels parallel to the optical path from illumination source 25. Other beam splitter designs are contemplated without departing from the scope of the disclosed technology.

Note that in some embodiments, any suitable incoherent illumination source(s) may be used with illumination sources 25, 25a, and 28, including but not limited to Light Emitting Diodes (LEDs), halogen lamps, and/or fluorescent lamps.

In some embodiments, filter mechanism 15 may be used to allow a specified range of wavelengths from light sources 25 and 28 to pass through to the sample. The filter mechanism 15 (also referred to as an excitation filter) may be, for example, a slide having different bandpass filters (e.g., bandpass filters 16 and 17). Each bandpass filter allows certain wavelengths to pass through and blocks all other wavelengths. A motor or mechanical mechanism may be used to select and position one of the bandpass filters. In other embodiments, a tunable optical filter comprising software, firmware, and/or hardware may be used to control the desired wavelength passing to the sample. In some embodiments, the selected bandpass filter may be based on the bandgap properties of one or more of the materials in the sample. For example, the bandpass filter may be selected to correspond to a wavelength energy that matches or exceeds the band gap of one of the materials in the sample being examined. In other words, the wavelength energy transmitted to the sample may be selected such that it causes fluorescence of the target material within the sample. Each material has a known band gap energy that differs from that of other materials. Bandgap energy refers to the energy difference between the top of the valence band and the bottom of the conduction band for a particular material. Fluorescence occurs when electrons in a material are excited by light of a suitable wavelength: they absorb photons and then emit light, typically at a longer wavelength than the absorbed light. In addition to applying the appropriate wavelength to excite the sample, sufficient intensity per unit area must be applied for fluorescence to occur. The required intensity per unit area depends on the material composition of the sample and is typically in the range of 1 watt/cm² to 11 watts/cm².
For example, an illumination source projecting light at a wavelength of 365nm and an intensity of 4 watts may be applied to a silicon carbide sample having a band gap energy of 3.26eV to excite a fluorescent response.
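The wavelength/band-gap relationship in the example above can be checked numerically: a photon's energy in electron-volts is E = hc/λ ≈ 1239.84 eV·nm divided by the wavelength in nanometers. The following sketch (the function names are illustrative, not part of the disclosure) tests whether a given excitation wavelength carries enough photon energy to match or exceed a material's band gap:

```python
# Illustrative check: does an excitation wavelength carry enough photon
# energy to reach a material's band gap?  Uses E(eV) = 1239.84 / lambda(nm).

PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm


def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength given in nanometers."""
    return PLANCK_EV_NM / wavelength_nm


def can_excite(wavelength_nm, bandgap_ev):
    """True when the photon energy matches or exceeds the band gap."""
    return photon_energy_ev(wavelength_nm) >= bandgap_ev


# Silicon carbide example from the text: 365 nm light vs. a 3.26 eV band gap.
assert can_excite(365, 3.26)      # ~3.40 eV per photon, above the gap
assert not can_excite(400, 3.26)  # ~3.10 eV per photon, below the gap
```

This is why the 365 nm source in the example can excite a fluorescent response in silicon carbide, while a visible-range source at 400 nm or longer could not.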

In further embodiments, the selected wavelength energy may correspond to the wavelength energy required to cause fluorescence of target materials within the sample and/or of pigments added to the target sample. Note that the term "excitation" refers to wavelength energy that causes the sample, or a pigment added to the sample, to fluoresce (i.e., emit light).

In some embodiments, an excitation filter mechanism may be used, based on the desired microscopy to be performed, to allow only wavelengths in a selected range to pass. For example, a filter mechanism may be used to select wavelengths in the invisible range (e.g., ultraviolet light from illumination source 25) or wavelengths in the visible range (e.g., from illumination source 28) to pass through. In other embodiments, a filter mechanism may be used to transmit light of a particular wavelength to the sample (e.g., a wavelength that corresponds to the band gap of the material being examined and that will excite the material).

Note that excitation filter slider 15 represents one exemplary embodiment, and one or more excitation filters may be placed at any suitable location along the optical path before the light reaches the sample. In some embodiments, the slider 40 may include an excitation filter and/or the excitation filter may be included in the nosepiece 23. These various embodiments will be described herein.

In further embodiments, one or more emission filters may be used to allow the appropriate wavelengths to be transmitted from the sample to the imaging device, such that only the desired wavelengths are imaged. Similar to the excitation filter, the emission filter may be a bandpass filter that allows certain wavelengths to pass and blocks other wavelengths. In other embodiments, a tunable optical filter comprising software, firmware, and/or hardware may be used to control the desired wavelength that passes.

One or more emission filters (e.g., emission filters 18 and 19 shown in fig. 2) may be placed before each imaging device, before tube lens 90 (e.g., emission filter 21 shown in fig. 1A), and/or in nosepiece 23 (e.g., emission filter F3 of filter slider 52 shown in fig. 6C) to transmit the fluorescent response of the sample. In some embodiments, an emission filter wheel may be used that further filters wavelengths of certain colors to prevent them from reaching one or more imaging devices. The emission filter may be selected or controlled to allow a specified wavelength to reach the imaging device. For example, to detect the fluorescence response of silicon carbide at different wavelengths, different emission bandpass filters may be used, which allow different wavelength ranges (e.g., 414-440 nm, 500-550 nm, or 590-670 nm), or a single wavelength, to pass through. These filters may be applied one at a time, or, if there are multiple cameras, they may be applied simultaneously or as part of a sequential slider (i.e., using filters that allow different wavelength ranges in front of each imaging device).
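The mapping from a fluorescence wavelength to the emission bandpass filter that passes it can be sketched as below. The passband ranges come from the silicon carbide example above; the filter names are illustrative assumptions:

```python
# Illustrative lookup: which emission bandpass filter (from the example
# ranges above) passes a given fluorescence wavelength? Filter names are
# hypothetical; the (lo, hi) passbands are the ranges given in the text.

EMISSION_BANDS = {
    "F_blue": (414, 440),
    "F_green": (500, 550),
    "F_red": (590, 670),
}


def select_emission_filter(wavelength_nm):
    """Return the name of a filter whose passband contains the wavelength,
    or None if every listed filter would block it."""
    for name, (lo, hi) in EMISSION_BANDS.items():
        if lo <= wavelength_nm <= hi:
            return name
    return None
```

With multiple cameras, each imaging device would be paired with one such band, so all three emission ranges could be captured simultaneously rather than one at a time.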

Fig. 2 shows an exemplary embodiment comprising two imaging devices 6 and 7. In some embodiments, imaging devices 6 and 7 may be cameras that include image sensors 5 and 4, respectively. The imaging devices 6 and 7 may be used to capture images of the sample. The image sensors 5 and 4 may be, for example, CCD or CMOS image sensors, and/or any other suitable electronic devices that convert light into one or more electrical signals. Such electrical signals may be used to form images and/or video (including fluorescence images and/or video) of the sample. In some embodiments, the imaging device may be a high quantum efficiency camera that efficiently generates electronic charges from incident photons. In some embodiments, such electrical signals are transmitted for display on a display screen connected to the FMIS 100. In some embodiments, the imaging device may be used in place of, or in addition to, an eyepiece for viewing the sample or a spectrometer for measuring spectral emissions from the sample.

The imaging device may be positioned at a conjugate focal plane of the FMIS 100. In some embodiments, the imaging device may be mounted in other locations using appropriate components to adapt the selected location to the optical characteristics of the system. In further embodiments, more than one imaging device may be used. In some embodiments, the imaging device may be a rotatable camera including an image sensor configured to allow the camera to be directed at the sample, stage, and/or feature on the sample. Some example methods for rotating cameras that may be used by FMIS 100 are described in U.S. Pat. No. 10,048,477 entitled "Camera and Object Alignment to Facilitate Large Area Imaging in Microscopy," the contents of which are incorporated herein by reference.

Fig. 2 includes emission filtering devices 18 and 19, each of the emission filtering devices 18 and 19 being coupled to a respective imaging device. Each filter device allows certain wavelengths reflected and/or emitted from the sample to be received by the associated imaging device and blocks all other wavelengths. Fig. 2 includes a beam splitter 24, which beam splitter 24 is positioned above the illuminator 22 in the optical path of light reflected/emitted from the sample. The beam splitter may be positioned such that a range of wavelengths is directed toward one imaging device, while a different range of wavelengths is directed toward a second imaging device.

The imaging of the sample by the FMIS 100 may be performed using various viewing modes including bright field, dark field, Differential Interference Contrast (DIC), and other modes known to those skilled in the art.

In some embodiments, FMIS 100 may provide bright field and dark field illumination simultaneously or separately. Dark field illumination refers to an illumination technique that uses oblique illumination rather than orthogonal light to illuminate the sample. The objective lens may include an annular dark field channel surrounding the bright field channel that allows light to be transmitted to the sample at an angle of incidence less than 90 degrees and greater than 0 degrees, typically 25 to 80 degrees. In some embodiments, the FMIS 100 may include a bright field/dark field slider 40 or other suitable mechanism (e.g., a cage cube) that allows only dark field illumination, only bright field illumination, a combination of bright field/dark field illumination, or other types of illumination (e.g., DIC) to reach the sample. Different configurations of the bright/dark field sliders 40 will be discussed in conjunction with fig. 2-5. In other embodiments, bright field/dark field illumination can be achieved by coupling a light source on the dark field channel and activating the light source through the control module 110 to provide dark field illumination to the specimen. Some exemplary embodiments are discussed in connection with fig. 2-5.

In some embodiments, as shown in figs. 3A, 3B and 3C, the FMIS 100 uses a bright field/dark field slider 40 that includes two configurations: a bright field configuration 43 and a dark field configuration 44. The bright field/dark field slider 40 can be positioned anywhere along the optical path 10 traveling to the sample (e.g., in the vertical illuminator, before the beam splitter 20, or coupled above or below the nosepiece 23). In a first position, as shown in fig. 3A, when configuration 43 is positioned in the optical path, the aperture in the center of configuration 43 allows light 10a to pass through, reflect off the beam splitter 20, and travel through the bright field channel in the center of the objective lens 35 to provide bright field illumination of the sample, while preventing light from passing through to the dark field channel 41. Furthermore, in some embodiments, the aperture in the center of configuration 43 may be replaced with an excitation filter that only allows certain wavelengths (reflected by the beam splitter 20) to reach the sample.

In a second position, as shown in fig. 3B, when configuration 44 is positioned in the light path, the central aperture is closed, preventing light from being transmitted to bright field channel 42; instead, light 10b (reflected off the beam splitter 20) is transmitted through the dark field channel ring 41 to provide oblique illumination of the sample. A motor or mechanical mechanism may be used to select and position one of the bright field/dark field slider configurations. The light 10c reflected from the sample travels through the objective lens 35 to the imaging device(s).

Other configurations of the bright field/dark field slider 40 are also possible, for example as shown in fig. 4A (bright field/dark field slider 40a) and 4B (bright field/dark field slider 40B).

The bright field/dark field slider 40a may include configuration 45, which includes a light ring 46 (e.g., an LED light ring) around a closed center, and configuration 43 (described in connection with figs. 3A-3C). When configuration 45 is positioned in the light path and the LED light ring 46 is activated, oblique illumination can be transmitted to the sample via dark field channel 41. Because the center of the ring is closed and blocks light (reflected down by the beam splitter 20) from entering the bright field channel, no bright field light is transmitted to the sample.

As shown in fig. 4B, the bright field/dark field slider 40b may include configuration 47 (which includes an LED light ring 46 surrounding an aperture) and configuration 43 (described in connection with figs. 3A and 3B). When configuration 47 is in the light path and the LED light ring 46 is activated, oblique illumination may be projected onto the sample, while bright field illumination (reflected off the beam splitter 20) may be transmitted through the central aperture and through the bright field channel 42 of the objective lens 35 to the sample (as shown in fig. 5B).

In some embodiments, as shown in fig. 5A (showing an exemplary FMIS 100), 5B (showing details of an exemplary nosepiece 23), 6A (showing an exemplary dark field insert 51), and 6B (showing an exemplary cylinder 29), a cylinder 29 (also referred to herein as an "attachment" or "cylinder attachment") can be secured (e.g., by screws or other fasteners) to the nosepiece 23 of the FMIS 100, and an objective lens 35 (including an annular dark field channel 41 and a bright field channel 42) can be secured to the cylinder 29. Furthermore, a dark field insert 51 with a light ring 46 (e.g., an LED light ring) may be secured to the cylinder 29 above the dark field channel 41. This configuration allows the cylinder 29 to be fixed into any nosepiece and used with any objective lens. Note that the cylinder 29 may be of any suitable shape. Further, the light ring included on the dark field insert 51 may include any suitable lamp that emits one or more wavelengths, and may be flexibly interchanged with another insert 51 having a light ring that includes a different type of lamp and emits a different wavelength (or set of wavelengths).

In some embodiments, a filter slider 52 having a plurality of emission/excitation filters F1, F2, F3, … FN may be coupled to the cylinder 29 below the dark field insert 51. In some embodiments, filter F1 of slider 52 includes an aperture that, when selected, allows light from lamp 46 to pass through dark field channel 41 to the sample unfiltered. Filter F2 of slider 52 includes an excitation filter that only allows certain dark field and bright field wavelengths to reach the sample. In some embodiments, the excitation filter may include a centrally located aperture and only filter dark field light that reaches the sample. In a further embodiment, filter F3 may include different filters for the bright field and dark field channels. For example, a dark field filter may be configured to filter dark field excitation light, while a bright field filter may be configured as an emission filter to filter light emitted from the sample before it reaches the one or more imaging devices. The slider 52 may include other suitable filters that only pass specific excitation wavelengths to the sample and/or only pass specific emission wavelengths to one or more imaging devices.

Note that combinations of excitation and emission filters, in any suitable configuration, as described above, may be used to reflect and transmit the desired illumination source and wavelength.

Fig. 7A illustrates at a high level an exemplary method 700 for illuminating a sample using an FM inspection system to achieve desired spectral emissions and other desired illumination for image capture, in accordance with some embodiments of the disclosed subject matter. In some embodiments, method 700 may use FMIS 100.

At 710, a sample to be tested is placed on the sample stage 30. In some embodiments, the sample is focused prior to selecting the light source and filters of FMIS 100.

At 720, settings of the FMIS 100 may be adjusted for image capture. This may be performed manually or automatically (e.g., using a computer algorithm), based on, for example, the characteristics of the specimen being examined or the material composition of the specimen. In some embodiments, control module 110 may activate and adjust the wavelength and intensity of light from the light source(s) and corresponding excitation and emission filters according to stored information for a particular sample, sample class, and/or any other suitable classification group. The stored information may include a map ("sample feature map" or "feature map") that identifies the type and location of known features on the sample. The stored information may also include the material composition of the sample and the optimal FM inspection system settings for capturing different images of the sample at different regions of interest (e.g., by specifying the wavelength and intensity of light to be directed to the sample, and by selecting and adjusting appropriate excitation and/or emission filters). Further, the stored information may include information regarding the type and location of known or expected defects of the specimen. Methods for selecting the appropriate stored information are further discussed in conjunction with fig. 8.

At 730, according to some embodiments, the FMIS 100 captures one or more images of the sample. Steps 720 and 730 may be repeated as many times as necessary to capture different images of the sample. For example, the intensity and wavelength of light sources 25, 25a, and/or 28 and corresponding excitation and emission filters may be adjusted to capture different images of the sample. For example, the light source may be adjusted based on stored information of the sample (including sample composition, known or expected defects of the sample, and/or a sample profile). Further, the wavelengths of light sources 25, 25a, and/or 28 and corresponding filters may be adjusted for different regions of interest of the sample (as indicated by the sample profile or otherwise), and an image may be captured for each region of interest. In some embodiments, the wavelengths of light sources 25, 25a, and/or 28 and corresponding filters may be selected within a range suitable to provide the desired excitation to the sample and/or region of interest. Further, different images of the specimen may be captured by adjusting the type of illumination provided to the specimen, e.g., applying bright field, dark field, a combination of bright and dark fields, and/or DIC illumination.
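The repetition of steps 720 and 730 over a sample's regions of interest can be sketched as a control loop. All class and method names below are assumptions for illustration; the disclosure does not specify a control-module API. A minimal logging stand-in is included only so the loop can be exercised:

```python
# Illustrative sketch: iterate the regions of interest in a stored sample
# feature map, apply each region's stored illumination settings (step 720),
# and capture one image per region (step 730). Names are hypothetical.

def capture_regions(scope, feature_map):
    """Capture one image per region of interest listed in the feature map."""
    images = {}
    for region in feature_map["regions"]:
        s = region["settings"]
        scope.set_wavelength(s["wavelength_nm"])
        scope.select_excitation_filter(s["excitation_filter"])
        scope.select_emission_filter(s["emission_filter"])
        images[region["name"]] = scope.capture()
    return images


class _LoggingScope:
    """Minimal stand-in used here only to exercise the loop."""
    def __init__(self):
        self.calls = []
    def set_wavelength(self, nm):
        self.calls.append(("wavelength", nm))
    def select_excitation_filter(self, f):
        self.calls.append(("excitation", f))
    def select_emission_filter(self, f):
        self.calls.append(("emission", f))
    def capture(self):
        return "image@%d" % len(self.calls)


feature_map = {"regions": [
    {"name": "roi_1", "settings": {"wavelength_nm": 365,
                                   "excitation_filter": "365nm_bp",
                                   "emission_filter": "414-440nm"}},
    {"name": "roi_2", "settings": {"wavelength_nm": 550,
                                   "excitation_filter": "550nm_bp",
                                   "emission_filter": "590-670nm"}},
]}
images = capture_regions(_LoggingScope(), feature_map)
```

Adding dark field versus bright field selection, intensity control, or DIC would only mean applying additional stored settings inside the same loop body before each capture.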

Fig. 7B illustrates steps of an exemplary process 705 for identifying sample classifications and automatically adjusting the light sources and filters of FMIS 100 in accordance with aspects of the disclosed technology. Process 705 begins at step 740 where image data is received, for example, by an image processing system (e.g., image processing module 834 (shown in fig. 8)). In some approaches, the image data may be included in a received image of a sample taken by an imaging device as part of the FMIS 100. The image data may include all or a portion of the sample disposed on the stage of the FMIS 100.

In step 750, the image data is analyzed to identify a classification of the sample. In some cases, image analysis may be performed to identify a subset of the sample, such as a particular region, feature, or material within the sample. As discussed below, machine learning classifiers, computer vision, and/or artificial intelligence can be used to identify/classify samples and features on samples. An exemplary classification method using a Convolutional Neural Network (CNN) is shown in fig. 10.

The stored information may then be automatically selected based on the sample (or feature) classification (step 760). The sample/feature classification may be used to query a database (e.g., the stored information database 836) containing stored information associated with: samples, material compositions of samples, sample feature types, and/or other suitable classification groups. By reference to the sample classification determined in step 750, stored information appropriate for the sample can be automatically identified and retrieved. As described above, the stored information may include various settings data describing configurations of the FMIS 100 that may be used to achieve optimal illumination and image capture of the observed sample, feature, and/or material.
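The classification-to-settings lookup of step 760 can be sketched as below. The record contents and keys are invented for illustration (the silicon carbide values echo the earlier example); the actual database schema is not specified in the disclosure:

```python
# Illustrative sketch of step 760: use the predicted sample classification
# as a key into a stored-information database, falling back to defaults
# when the classification is unknown. Records shown are hypothetical.

STORED_INFO = {
    "silicon_carbide_wafer": {
        "excitation_filter": "365nm_bandpass",
        "emission_filter": "414-440nm",
        "intensity_w_per_cm2": 4.0,
    },
}

DEFAULT_SETTINGS = {
    "excitation_filter": None,
    "emission_filter": None,
    "intensity_w_per_cm2": 1.0,
}


def settings_for(classification):
    """Retrieve stored FMIS settings for a sample classification."""
    return STORED_INFO.get(classification, DEFAULT_SETTINGS)
```

In a deployed system the dictionary would be replaced by a query against the stored information database 836, but the control flow (classify, then retrieve settings keyed by the classification) is the same.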

Fig. 7C illustrates steps of an exemplary process 710 for automatically identifying and/or classifying sample defects, in accordance with aspects of the disclosed technology. In some embodiments, step 780 may follow step 770 discussed above with respect to fig. 7B. However, it should be understood that process 710 may be performed independently of the various steps of processes 700 and 705 discussed above.

In step 780, image data including fluorescence is received from the imaging device after the adjustments/settings are applied to the FM inspection system. In some methods, step 780 may be performed after the automatic classification of the sample, as described above in step 770. Thus, the image data received in step 780 may represent an image of the sample taken under optimized or improved lighting conditions, as achieved by applying the selected settings to the FM inspection system.

In step 785, the new image data containing fluorescence is provided to a defect detection classifier configured to automatically identify/detect and/or classify defects/features of the sample. Defect/feature detection and classification may be performed without knowledge of the sample classification or type. However, in some embodiments, the sample classification and/or associated stored information may be used as an input to the defect detection classifier, and thereby used to inform the process of defect/feature detection and identification.

In step 790, one or more defects/features of the sample are identified and/or classified. The process of identifying and/or classifying sample defects/features may be performed in different ways, depending on the desired implementation. For example, defect/feature identification may be used to automatically generate or update a feature map and/or stored information associated with a given sample and/or sample classification (step 795). Thus, as described in process 705, the identification of new defects/features may be used to improve (train) future defect/feature classification calculations, as well as to improve the automated process of adjusting FM inspection system settings. In some aspects, defect/feature identification and/or classification may be used to trigger an alarm, for example, to notify a user of the FM inspection system of the presence of a detected defect/feature and/or defect/feature type (classification).
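The feature-map update of step 795 can be sketched as follows. The map structure (a list of typed, located features) is an assumption made for illustration, not the disclosure's actual data model:

```python
# Illustrative sketch of step 795: merge newly detected defects/features
# into a sample feature map so future inspections know their type and
# location. The (type, x, y) record shape is a hypothetical data model.

def update_feature_map(feature_map, detections):
    """Append detections not already recorded in the feature map."""
    known = {(f["type"], f["x"], f["y"]) for f in feature_map["features"]}
    for det in detections:
        key = (det["type"], det["x"], det["y"])
        if key not in known:
            feature_map["features"].append(dict(det))
            known.add(key)
    return feature_map
```

Deduplicating on (type, location) keeps repeated inspections of the same region from inflating the map, while genuinely new defects accumulate and can later serve as training data for the defect/feature classifier.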

It should be understood that in some embodiments, at least some of the portions of methods 700, 705, and 710 described herein may be performed in any order or sequence that is not limited to the order and sequence shown and described in connection with fig. 7A, 7B, and 7C. Further, in some embodiments, some portions of processes 700, 705, and 710 described herein may be performed substantially simultaneously or in parallel, where appropriate. Additionally or alternatively, in some embodiments, some portions of processes 700, 705, and 710 may be omitted. Methods 700, 705, and 710 may be implemented in any suitable hardware and/or software. For example, in some embodiments, methods 700, 705, and 710 may be implemented in FM inspection system 100.

Fig. 8 illustrates a general configuration of an embodiment of a computer analysis system 115 according to some embodiments of the disclosed subject matter. Although the computer analysis system 115 is shown as a localized computing system in which various components are coupled via a bus 805, it will be appreciated that the various components and functional computing units (modules) may be implemented as separate physical or virtual systems. For example, one or more components and/or modules may be implemented in physically separate and remote devices, such as using virtual processes (e.g., virtual machines or containers) instantiated in a cloud environment.

The computer analysis system 115 includes a processing unit (e.g., CPU and/or processor) 810 and a bus 805 that couples various system components, including a system memory 815 such as a Read Only Memory (ROM) 820 and a Random Access Memory (RAM) 825, to the processor 810.

The memory 815 may include various memory types having different performance characteristics, such as a memory cache 812. The processor 810 is coupled to a storage device 830, the storage device 830 being configured to store software and instructions for implementing one or more functional modules and/or database systems (e.g., a stored information database 836). Each of these modules and/or database systems may be configured to control the processor 810, as well as a special-purpose processor in which the software instructions are incorporated into the actual processor design. As such, the image processing module 834 and the stored information database 836 may be completely self-contained systems. For example, the image processing module 834 may be implemented as a discrete image processing system without departing from the scope of the disclosed technology.

To enable a user to interact with the computer analysis system 115, the input device 845 may represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, and so forth. The output device 835 can also be one or more of a variety of output mechanisms known to those skilled in the art. In some cases, the multimodal system may enable a user to provide multiple types of input to communicate with the computer analysis system 115, for example, to convey sample information related to sample type/classification or other characteristics. Communication interface 840 may generally manage and control user inputs and system outputs. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted with improved hardware or firmware arrangements as they are developed.

The storage device 830 is a non-transitory memory and may be a hard disk or another type of computer-readable medium capable of storing data accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, Random Access Memories (RAMs) 825, Read Only Memories (ROMs) 820, and hybrids thereof.

In practice, the stored information database 836 may be configured to receive, store, and update contextual data associated with samples, sample categories, and/or other suitable sample classifications. The contextual data for each sample/sample category/sample classification may include, but is not limited to: a Computer Aided Design (CAD) file of the sample and/or features of the sample; a feature map identifying the features and their locations; images of the sample and/or sample features captured by the FMIS 100; images of known samples and/or features; known dimensions, material composition, and mechanical and/or physical properties of the sample; a spectral variation map of a known material or sample; common stacking faults, structural defects, or other defects associated with the sample; optimal FM inspection settings for the features of the sample, sample category, or sample classification; and identification of regions of interest and/or materials of interest to be inspected. In some embodiments, regions of interest may be identified on the feature map. The stored information database 836 may be coupled to the image processing module 834 and may transmit data to and receive data from the image processing module 834. Further, the contextual data may include data related to the FMIS 100 used to examine the sample, such as: the number of light sources of the FMIS 100, the wavelength range and intensity of each light source, the number of imaging devices and different types of excitation/emission filters and their positions, and the range of possible distances between the sample stage 30 and the objective 35.
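As an illustration of the kind of record the stored information database 836 might hold, the following is a minimal sketch; the class name, field names, and example values (e.g., the 4H-SiC band gap of roughly 3.26 eV) are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SampleContext:
    """One hypothetical contextual-data record, as database 836 might store it."""
    sample_class: str                                # e.g., "SiC wafer"
    cad_file: str = ""                               # path to a CAD file of the sample
    material_composition: str = ""
    band_gap_ev: float = 0.0                         # predetermined band gap energy (eV)
    feature_map: List[Tuple[str, int, int]] = field(default_factory=list)  # (type, x, y)
    regions_of_interest: List[Tuple[int, int, int, int]] = field(default_factory=list)

# A 4H-SiC wafer entry with one known stacking-fault location.
ctx = SampleContext(sample_class="SiC wafer", band_gap_ev=3.26)
ctx.feature_map.append(("stacking_fault", 120, 480))
```

Such a record could be looked up by sample class when the image processing module requests contextual data.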

The processor 810 may include an image processing module 834. The image processing module 834 may be used in conjunction with the stored information database 836 to classify samples based on: image data, including fluorescence, received in the sample image(s); contextual data retrieved from the stored information database 836; and/or other received sample characteristics (e.g., those provided manually by a user via the input device 845). Additionally, the image processing module may be configured to classify particular sample features and determine other physical and/or mechanical sample properties (e.g., sample reflectivity, sample size, sample material composition). The classifications of sample types and sample features/attributes may be stored in the stored information database 836.

In some embodiments, once the sample type, particular feature, and/or material composition of the sample has been determined (e.g., by the image processing module 834), additional contextual data associated with the determined sample type/feature may be retrieved from the stored information database 836 and sent to the control module 110 to adjust settings of the FMIS 100 to capture a particular sample image and/or to guide the FMIS 100 in examining the sample (e.g., by capturing an image of the particular feature and/or region of interest).

In some embodiments, the image processing module 834 may receive an entire sample scan, or one or more images of a sample. As shown in fig. 9, the image processing module 834 may apply one or more artificial intelligence algorithms to classify the type of sample and the features on the sample.

As will be appreciated by those skilled in the art, artificial intelligence/machine learning based classification techniques may vary depending on the desired implementation without departing from the disclosed technology. For example, a machine learning classification scheme may utilize one or more of the following, alone or in combination: hidden Markov models; recurrent neural networks; convolutional neural networks (CNNs); deep learning; Bayesian symbolic methods; generative adversarial networks (GANs); support vector machines; image registration methods; and applicable rule-based systems. Where regression algorithms are used, they may include but are not limited to stochastic gradient descent regressors, passive-aggressive regressors, and the like.

The machine learning classification models may also be based on clustering algorithms (e.g., a mini-batch K-means clustering algorithm), recommendation algorithms (e.g., a MinHash algorithm or a Euclidean LSH algorithm), and/or an anomaly detection algorithm, such as Local Outlier Factor. Additionally, the machine learning models may employ dimensionality reduction methods, such as one or more of the following: a mini-batch dictionary learning algorithm, an incremental Principal Component Analysis (PCA) algorithm, a latent Dirichlet allocation algorithm, and/or a mini-batch K-means algorithm, among others.
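To make the mini-batch K-means clustering named above concrete, here is a minimal NumPy sketch using Sculley-style per-center learning-rate updates; the toy two-blob data and the deterministic initialization are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np

def mini_batch_kmeans(X, k, batch_size=32, n_iters=100, seed=0, init=None):
    """Mini-batch K-means sketch: each sampled point nudges its nearest
    center with a per-center learning rate that decays as 1/count."""
    rng = np.random.default_rng(seed)
    idx = init if init is not None else rng.choice(len(X), k, replace=False)
    centers = X[idx].astype(float)
    counts = np.zeros(k)
    for _ in range(n_iters):
        batch = X[rng.choice(len(X), batch_size)]
        # assign each batch point to its nearest center
        d = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        nearest = d.argmin(axis=1)
        for x, c in zip(batch, nearest):
            counts[c] += 1
            eta = 1.0 / counts[c]                 # per-center learning rate
            centers[c] = (1.0 - eta) * centers[c] + eta * x
    return centers

# Two well-separated blobs; the centers should settle near (0, 0) and (10, 10).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (200, 2)),
               rng.normal(10.0, 0.5, (200, 2))])
centers = mini_batch_kmeans(X, k=2, init=[0, 200])  # deterministic init for the demo
```

In an inspection context, the rows of `X` could instead be per-region feature vectors extracted from sample images, with each cluster corresponding to a candidate defect grouping.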

In some examples, machine learning models may be used to perform classification of samples, materials within samples, sample features, and/or other sample characteristics. In some aspects, for example via the image processing module 834, image data from the sample images can be provided as input to a machine learning classification system. The classifier output may specify a sample or feature classification, which may then be used to identify particular regions of interest on the sample for further inspection by the FMIS 100, and to provide instructions to the control module 110 of the FMIS 100 as to the types of light sources and filters that should be used to inspect those regions of interest.

Such algorithms, networks, machines, and systems provide examples of structures used in relation to any "means for determining features of a sample using artificial intelligence" or "means for determining a region of interest of a sample using artificial intelligence for further inspection."

Further, for each feature or region of interest on the specimen, the image processing module may apply one or more artificial intelligence algorithms to examine the feature/sample/material by: i) detecting the feature; ii) classifying the feature type; iii) determining the location of the feature on the sample; iv) determining the material composition of the sample/feature; and v) determining the optimal settings (e.g., wavelength excitation settings, wavelength emission settings, applied illumination technique) for the FMIS 100. In some embodiments, the algorithm(s) used by the image processing module 834 may take into account contextual data such as the location of features on the specimen, the type of specimen being examined, the physical and mechanical properties of the specimen being examined, similar features on the same or similar specimens, a reference feature map of the specimen being examined, and the FM inspection system settings used to generate the specimen scan or specimen image.

An example of a machine learning/artificial intelligence based image processing algorithm that may be used by the image processing module 834 is the image registration described in: Barbara Zitová and Jan Flusser, "Image Registration Methods: A Survey," Image and Vision Computing, vol. 21, no. 11, October 2003, pp. 977-1000, which is hereby incorporated by reference in its entirety. The disclosed methods are exemplary only and not limiting. By way of example, the machine learning/artificial intelligence models may be trained using multiple sources of training data, including but not limited to: a computer-aided design (CAD) file of the sample and/or features of the sample, a sample feature map identifying the features on the sample and their locations, images of known samples and/or features, and/or information about known samples (e.g., dimensions of the sample, material composition of the sample, mechanical and/or physical properties of the sample, a spectral variation map of a known material or sample, common stacking faults, structural defects, and a feature map identifying where features in a sample classification are typically located).

In some embodiments, as shown in fig. 9, the image processing algorithm 905 is first trained with training data 920 so that the image processing module 834 can identify and classify samples and detect and identify features on the samples. A variety of training techniques may be used and may depend on the particular classifier model chosen. In one example, a CNN, such as a 13-layer CNN, may be trained for multiple epochs using stochastic gradient descent to explore the corresponding error space. In one example, 80 epochs are used for training, and the stochastic gradient descent may include a momentum factor. Additionally, an adaptive learning rate may be used, such as, but not limited to, stepping the learning rate (e.g., the step size in stochastic gradient descent) from 0.1 during early epochs down to 0.01 in late epochs.

The training data 920 may include labeled examples of known types of samples and features. For each class on which it is trained (e.g., each feature type, defect type, etc.), the training data 920 may include labeled images of deformed features (which may be actual deformed features, or deformations modeled according to predefined parameters). The training data 920 may also include, for each defect type, labeled images rotated from 0-360 degrees and labeled images generated at different sizes. One example of training data 920 is a set of images including labeled stacking faults having different structures, shapes, and sizes, together with the corresponding fluorescence emissions for each type of stacking fault. Additionally, the labeled images may include additional contextual data, such as information specifying the settings of the FMIS 100 (e.g., wavelength excitation settings, wavelength emission settings, applied illumination technique), the material composition of the feature or sample, the location of the feature on the sample, the physical/mechanical properties of the feature, and/or any other suitable characteristics. In some embodiments, the training data may also include unlabeled data.
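A hedged sketch of the rotation/size augmentation described above follows; restricting rotations to 90-degree steps and scaling to integer factors is purely an implementation convenience so the example needs only NumPy, not a choice the disclosure makes.

```python
import numpy as np

def augment(image, angles=(0, 90, 180, 270), scales=(1, 2)):
    """Rotate a labeled defect patch and regenerate it at different sizes."""
    variants = []
    for k in (a // 90 for a in angles):
        rotated = np.rot90(image, k)
        for s in scales:
            # nearest-neighbour upscaling by an integer factor s
            variants.append(np.kron(rotated, np.ones((s, s), dtype=image.dtype)))
    return variants

defect = np.array([[0, 1],
                   [1, 1]])            # toy labeled defect patch
variants = augment(defect)             # 4 angles x 2 scales = 8 variants
```

Each variant would keep the original label, multiplying the number of labeled examples per defect type without new acquisitions.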

Once the image processing algorithm is trained, it may be applied by the image processing module 834 to the received sample scan(s) or sample image(s) to classify the sample type, detect features, classify fault types, determine feature and/or fault locations, determine the sample composition, and determine the optimal FM inspection system settings for detecting the features/samples. The output data may be visually displayed, printed, or generated in file form, and may be stored in the stored information database 836 or transmitted to other components for further processing.

In some embodiments, the output data may be sent to the feature map generator module 832 to generate a feature map for the sample. In some embodiments, the output data may include a plurality of images. The generated feature map may identify and locate the features on the sample, and may be visually displayed, printed, or generated in file form, and stored in the stored information database 836 or transmitted to other modules for further processing.

In addition, the generated feature map may be used to focus further inspection by the FMIS 100 on specific features and/or regions of the sample. Based on the characteristics of those features and regions, stored information may be retrieved from the stored information database 836. For example, for each feature and/or region of interest, instructions may be retrieved from the stored information database 836 to apply different light sources and illumination techniques at different wavelengths and intensity levels, capture different images using different excitation/emission filters, and transmit the images to the control module 110. For example, by applying different bandpass emission filters before one or more imaging devices, different fluorescence emissions can be detected and different features of the sample (e.g., irregularities or defects in the surface) identified.

FIG. 10 depicts one embodiment of an image processing module 834 that trains the classifier 1005 using a deep convolutional network. The classifier 1005 may be trained using simulated augmented data 1007. For example, known defects for different types of samples may be generated at different orientations, different sizes, different pixel intensities, and different locations on the sample (1006 and 1009). The shapes of these known defects may be blurred and/or distorted. Once trained, one or more candidate images from the FMIS 100 may be input into the classifier (1009). In some embodiments, an image (1001) is processed by first detecting certain regions and extracting features (1002 and 1003) from those regions. The classifier 1005 is then used to analyze the extracted features, classify the features into types, and locate those features on the sample (1010). It is noted that some exemplary methods that the FMIS 100 may use for locating features on a sample are described in U.S. patent application No. 16/262,017, entitled "Macro Inspection Systems, Apparatus and Methods," which is hereby incorporated by reference in its entirety. In some embodiments, the known defects include stacking faults having different structures, sizes, and shapes.
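The flow of FIG. 10 (detect regions, extract features, classify and locate) might be sketched as below; the sliding-window detector, two-value feature vector, and nearest-centroid rule are simple stand-ins for the trained deep convolutional classifier, and all names, thresholds, and centroid values are hypothetical.

```python
import numpy as np

def detect_regions(img, thresh=0.5, size=4):
    """Slide a size x size window over the image; keep bright windows as regions."""
    regions = []
    for r in range(0, img.shape[0] - size + 1, size):
        for c in range(0, img.shape[1] - size + 1, size):
            patch = img[r:r + size, c:c + size]
            if patch.mean() > thresh:
                regions.append(((r, c), patch))
    return regions

def extract_features(patch):
    """Tiny feature vector: mean and standard deviation of the patch."""
    return np.array([patch.mean(), patch.std()])

def classify(feat, centroids):
    """Nearest-centroid rule standing in for the trained classifier 1005."""
    names = list(centroids)
    dists = [np.linalg.norm(feat - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

img = np.zeros((8, 8))
img[0:4, 4:8] = 1.0                              # one bright, defect-like region
centroids = {"stacking_fault": np.array([1.0, 0.0]),
             "background": np.array([0.0, 0.0])}
results = [((r, c), classify(extract_features(p), centroids))
           for (r, c), p in detect_regions(img)]
# each result pairs a feature location (r, c) with a predicted type
```

Here the single bright region at (0, 4) is detected, classified as a stacking fault, and located, mirroring steps 1001-1010 in miniature.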

In some embodiments, any suitable computer-readable medium may be used to store instructions for performing the functions and/or processes described herein. For example, in some embodiments, the computer-readable medium may be transitory or non-transitory. A non-transitory computer-readable medium may include media such as non-transitory magnetic media (such as hard disks, floppy disks, etc.), non-transitory optical media (such as compact disks, digital video disks, Blu-ray disks, etc.), non-transitory semiconductor media (such as flash memory, electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc.), any suitable media that are not fleeting and do not lack any semblance of permanence during transmission, and/or any suitable tangible media. As another example, a transitory computer-readable medium may include signals on networks, in wires, in conductors, in optical fibers, in circuits, any suitable media that are fleeting during transmission and lack any semblance of permanence, and/or any suitable intangible media.

The various systems, methods, and computer-readable media described herein may be implemented as part of a cloud network environment. As used herein, a cloud-based computing system is a system that provides virtualized computing resources, software, and/or information to client devices. Computing resources, software, and/or information may be virtualized by maintaining centralized services and resources that the edge devices may access through a communication interface, such as a network. The cloud may provide various cloud computing services such as software as a service (SaaS) (e.g., collaboration services, email services, enterprise resource planning services, content services, communication services, etc.), infrastructure as a service (IaaS) (e.g., security services, networking services, system management services, etc.), platform as a service (PaaS) (e.g., web services, streaming services, application development services, etc.), and other types of services such as desktop as a service (DaaS), information technology management as a service (ITaaS), managed software as a service (MSaaS), mobile backend as a service (MBaaS), etc., via cloud elements.

The provision of examples herein (as well as clauses phrased as "such as," "e.g.," "including," and the like) should not be construed as limiting the claimed subject matter to the specific examples; rather, these examples are intended to illustrate only some of the many possible aspects. One of ordinary skill in the art will appreciate that the term "mechanism" may encompass hardware, software, firmware, or any suitable combination thereof.

Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "determining," "providing," "identifying," "comparing," or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices. Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be implemented in software, firmware or hardware, and when implemented in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in a computer-readable medium that can be accessed by the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, Application Specific Integrated Circuits (ASICs), or any type of non-transitory computer-readable storage medium suitable for storing electronic instructions. Further, the computers referred to in the specification may include a single processor, or may be architectures employing multiple processor designs for increased computing capability.

The algorithms and operations presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps and system-related actions. The required structure for a variety of these systems will be apparent to those of skill in the art, as well as equivalent variations. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein, and any references to specific languages are provided for disclosure of enablement and best mode of the present disclosure.

The FM inspection apparatus, method and system have been described in detail with particular reference to these illustrated embodiments. It will, however, be evident that various modifications and changes may be made within the spirit and scope of the disclosure as described in the foregoing specification, and such modifications and changes are to be considered equivalents and portions of the disclosure.
