Method and system for deep learning-based inspection of semiconductor samples


Created by O. Shaubi, D. Sukhanov, A. Asbag and B. Cohen, 2019-02-07.

A method of inspecting a semiconductor sample and a system thereof are provided. The method comprises processing a fabrication process (FP) sample using a trained Deep Neural Network (DNN), wherein the FP sample comprises first FP image(s) received from first inspection modality(s) and second FP image(s) received from second inspection modality(s) different from the first inspection modality(s), and wherein the trained DNN processes the first FP image(s) separately from the second FP image(s); and further processing the results of such separate processing by the trained DNN to obtain inspection-related data specific to a given application and characterizing at least one of the processed FP images. When the FP sample further comprises numerical data associated with the FP image(s), the method further comprises processing at least part of the numerical data by the trained DNN separately from processing the first FP image(s) and the second FP image(s).

1. A method of inspecting a semiconductor sample, the method comprising:

after obtaining, by a computer, a Deep Neural Network (DNN) trained for a given inspection-related application within a semiconductor manufacturing process, using the trained DNN to process a fabrication process (FP) sample, wherein the FP sample comprises: one or more first FP images received from one or more first inspection modalities and one or more second FP images received from one or more second inspection modalities, the one or more second inspection modalities different from the one or more first inspection modalities, and wherein the trained DNN processes the one or more first FP images separately from the one or more second FP images; and

processing, by the computer, results of at least the separate processing of the one or more first FP images and the one or more second FP images by the trained DNN to obtain inspection-related data that is specific to the given application and that characterizes at least one of the processed FP images.

2. The method of claim 1, wherein the FP sample further comprises numerical data associated with the FP image in the FP sample, the method further comprising:

processing at least a portion of the numerical data by the trained DNN separately from processing the one or more first FP images and processing the one or more second FP images; and

obtaining the inspection-related data specific to the given application via results of the separate processing of the one or more first FP images and the one or more second FP images by the trained DNN and results of processing the at least a portion of the numerical data.

3. The method of claim 1, wherein the FP sample further comprises numerical data associated with the FP image in the FP sample, the method further comprising:

processing at least a portion of the numerical data by the trained DNN separately from processing the one or more first FP images and processing the one or more second FP images; and

aggregating results of the separate processing of the one or more first FP images and the one or more second FP images by the trained DNN, thereby producing aggregated image data; and

obtaining the inspection-related data specific to the given application via processing, by the trained DNN, the aggregated image data and results of the separate processing of the at least a portion of the numerical data.

4. The method of claim 1, wherein the given inspection-related application is selected from the group consisting of: detecting defects in the semiconductor sample; classifying defects in the semiconductor sample; registering between at least two fabrication process (FP) images; segmenting at least one FP image selected from the group consisting of a high resolution image of the semiconductor sample, a low resolution image of the semiconductor sample, and a design data based image of the semiconductor sample; regression-based reconstruction of FP images corresponding to data obtained by different inspection modalities; and regression-based reconstruction of image characteristics.

5. The method of claim 1, wherein the one or more first inspection modalities differ from the one or more second inspection modalities by at least one of: an inspection tool, a channel of the same inspection tool, operating parameters of the same inspection tool and/or channel, a layer of the semiconductor sample corresponding to a respective FP image, properties of obtaining the FP image, and a derivation technique applied to the captured image.

6. The method of claim 1, wherein the one or more first FP images are low resolution images and the one or more second FP images are high resolution images.

7. The method of claim 2, wherein the numerical data comprises metadata and/or manually constructed attributes.

8. The method of claim 1, wherein the DNN is trained using FAB data collected for all types of layers and products from all manufacturing stages.

9. The method of claim 1, wherein the DNN is coarsely trained on a dataset other than the FAB data, and further fine-tuned on at least a portion of the FAB data for a particular inspection-related application.

10. The method of claim 1, wherein the given inspection-related application is classification of defects in the semiconductor sample, and wherein the inspection-related data specific to the given application comprises classification-related attributes and/or classification labels characterizing at least one defect to be classified.

11. A system usable for inspecting semiconductor samples, the system comprising a processor and memory circuitry (PMC) operatively connected to an input interface and an output interface, wherein:

the input interface is configured to receive one or more fabrication process (FP) images;

the PMC is configured to:

obtain a Deep Neural Network (DNN) trained for a given inspection-related application within a semiconductor manufacturing process;

use the trained DNN to process a fabrication process (FP) sample, wherein the FP sample comprises: one or more first FP images received from one or more first inspection modalities and one or more second FP images received from one or more second inspection modalities, the one or more second inspection modalities different from the one or more first inspection modalities, and wherein the trained DNN processes the one or more first FP images separately from the one or more second FP images; and

process results of at least the separate processing of the one or more first FP images and the one or more second FP images by the trained DNN to obtain inspection-related data that is specific to the given application and that characterizes at least one of the processed FP images.

12. The system of claim 11, wherein the FP sample further comprises numerical data associated with the FP image in the FP sample, and wherein the PMC is further configured to:

process at least a portion of the numerical data by the trained DNN separately from processing the one or more first FP images and processing the one or more second FP images; and

obtain the inspection-related data specific to the given application via results of the separate processing of the one or more first FP images and the one or more second FP images by the trained DNN and results of processing the at least a portion of the numerical data.

13. The system of claim 11, wherein the FP sample further comprises numerical data associated with the FP image in the FP sample, and wherein the PMC is further configured to:

process at least a portion of the numerical data by the trained DNN separately from processing the one or more first FP images and processing the one or more second FP images; and

aggregate results of the separate processing of the one or more first FP images and the one or more second FP images by the trained DNN, thereby generating aggregated image data; and

obtain the inspection-related data specific to the given application via processing, by the trained DNN, the aggregated image data and results of the separate processing of the at least a portion of the numerical data.

14. The system of claim 11, wherein the one or more first inspection modalities differ from the one or more second inspection modalities by at least one of: an inspection tool, a channel of the same inspection tool, operating parameters of the same inspection tool and/or channel, a layer of the semiconductor sample corresponding to a respective FP image, properties of obtaining the FP image, and a derivation technique applied to the captured image.

15. The system of claim 11, wherein the one or more first FP images are low resolution images and the one or more second FP images are high resolution images.

16. The system of claim 12, wherein the numerical data comprises metadata and/or manually constructed attributes.

17. The system of claim 11, wherein the DNN is trained using FAB data collected for all types of layers and products from all manufacturing stages.

18. The system of claim 11, wherein the DNN is coarsely trained on a dataset other than the FAB data, and further fine-tuned on at least a portion of the FAB data for a particular inspection-related application.

19. The system of claim 11, wherein the given inspection-related application is classification of defects in the semiconductor sample, and wherein the inspection-related data specific to the given application comprises classification-related attributes and/or a classification label characterizing at least one defect to be classified.

20. A non-transitory computer-readable medium comprising instructions that, when executed by a computer, cause the computer to perform a method of inspecting a semiconductor sample, the method comprising:

after obtaining, by the computer, a Deep Neural Network (DNN) trained for a given inspection-related application within a semiconductor manufacturing process, using the obtained trained DNN to process a fabrication process (FP) sample, wherein the FP sample comprises: one or more first FP images received from one or more first inspection modalities and one or more second FP images received from one or more second inspection modalities, the one or more second inspection modalities different from the one or more first inspection modalities, and wherein the trained DNN processes the one or more first FP images separately from the one or more second FP images; and

processing results of at least the separate processing of the one or more first FP images and the one or more second FP images by the trained DNN to obtain inspection-related data that is specific to the given application and that characterizes at least one of the processed FP images.

Technical Field

The presently disclosed subject matter relates generally to the field of inspection of samples, and more particularly to methods and systems for automation of inspection of samples.

Background

The current demand for high density and performance associated with ultra-large scale integration of fabricated devices requires submicron features, increased transistor and circuit speeds, and improved reliability. Such demands require formation of device features with high precision and uniformity, which, in turn, necessitates careful monitoring of the fabrication process, including automated inspection of the devices while they are still in the form of semiconductor wafers. Note that the fabrication process can include pre-fabrication, fabrication, and/or post-fabrication operations.

The term "specimen" as used in this specification should be broadly interpreted as encompassing any kind of wafer, mask, and other structures, combinations, and/or portions thereof, as may be used in the manufacture of semiconductor integrated circuits, magnetic heads, flat panel displays, and other semiconductor-fabricated articles.

The term "inspection" as used in this specification should be broadly construed to encompass any kind of metrology-related operation, as well as operations related to detecting and/or classifying defects in a sample during its manufacture. The inspection is performed by using a non-destructive inspection tool during or after the manufacture of the sample to be inspected. As non-limiting examples, the inspection process may include: run-time scanning (in a single or multiple scans), sampling, inspection, measurement, sorting, and/or other operations provided with respect to a sample or portion thereof using the same or different inspection tools. Likewise, at least a portion of the inspection may be performed prior to manufacturing the sample to be inspected, and may include, for example, generating an inspection recipe(s), training corresponding classifiers, or other machine learning related tools and/or other setup operations. Note that the term "inspection (examination)" or its derivatives used in this specification is not limited to the resolution or the size of the inspection region unless specifically and otherwise stated. Various non-destructive inspection tools include (as non-limiting examples): scanning electron microscopes, atomic force microscopes, optical inspection tools, and the like.

As a non-limiting example, runtime inspection can employ a two-stage procedure, e.g., inspection of a sample followed by review of sampled locations of potential defects. During the first stage, the surface of the sample is inspected at high speed and relatively low resolution. In the first stage, a defect map is produced showing locations on the sample suspected of having a high probability of being defects. During the second stage, at least some of the suspected locations are more thoroughly analyzed at relatively high resolution. In some cases, both stages can be performed by the same inspection tool, while in other cases the two stages are performed by different inspection tools.

Inspection processes are used at various steps during semiconductor fabrication to detect and classify defects on a sample. Effectiveness of inspection can be increased by automatization of at least part of the process, for example by using Automatic Defect Classification (ADC), Automatic Defect Review (ADR), etc.

Disclosure of Invention

According to certain aspects of the presently disclosed subject matter, there is provided a method of inspecting a semiconductor sample, the method comprising: after obtaining, by a computer, a Deep Neural Network (DNN) trained for a given inspection-related application within a semiconductor manufacturing process, processing a fabrication process (FP) sample using the trained DNN, wherein the FP sample comprises one or more first FP images received from one or more first inspection modalities and one or more second FP images received from one or more second inspection modalities, the second inspection modalities being different from the first inspection modalities, and wherein the trained DNN processes the one or more first FP images separately from the one or more second FP images; and processing, by the computer, results of at least the separate processing of the one or more first FP images and the one or more second FP images by the trained DNN to obtain inspection-related data specific to the given application and characterizing at least one of the processed FP images. As a non-limiting example, the one or more first FP images can be low resolution images and the one or more second FP images can be high resolution images.

When the FP sample further comprises numerical data (e.g., metadata, manually constructed attributes, etc.) associated with the FP images in the FP sample, the method can further comprise processing, by the trained DNN, at least a portion of the numerical data separately from processing the one or more first FP images and the one or more second FP images. The inspection-related data specific to the given application can be obtained by processing, by the trained DNN, results of the separate processing of the one or more first FP images and the one or more second FP images together with results of processing the at least part of the numerical data. Alternatively, obtaining the inspection-related data specific to the given application can comprise aggregating, by the trained DNN, results of the separate processing of the one or more first FP images and the one or more second FP images, thereby producing aggregated image data, and further processing the aggregated image data together with results of separately processing the at least part of the numerical data.

By way of non-limiting example, the inspection-related application can be: detecting defects in the semiconductor sample; classifying defects in the semiconductor sample; registering between at least two fabrication process (FP) images; segmenting at least one FP image selected from a high resolution image of the semiconductor sample, a low resolution image of the semiconductor sample, and a design data based image of the semiconductor sample; regression-based reconstruction of FP images corresponding to data obtained by different inspection modalities; regression-based reconstruction of image characteristics; and so forth.

As a non-limiting example, the one or more first inspection modalities can differ from the one or more second inspection modalities by at least one of: the inspection tool, a channel of the same inspection tool, operating parameters of the same inspection tool and/or channel, the layer of the semiconductor sample corresponding to the respective FP images, the nature of obtaining the FP images, and a derivation technique applied to the captured images.

According to other aspects of the presently disclosed subject matter, there is provided a system that may be used for inspecting a semiconductor sample according to the above-described method.

According to other aspects of the presently disclosed subject matter, there is provided a non-transitory computer-readable medium comprising instructions that, when executed by a computer, cause the computer to perform the method described above.

Drawings

In order to understand the invention and to see how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:

Fig. 1 illustrates a functional block diagram of an inspection system in accordance with certain embodiments of the presently disclosed subject matter;

Fig. 2 illustrates a generalized flowchart of using a Deep Neural Network (DNN) for automatically determining inspection-related data based on fabrication process (FP) images, in accordance with certain embodiments of the presently disclosed subject matter;

Fig. 3 illustrates a generalized functional diagram of a DNN configured in accordance with certain embodiments of the presently disclosed subject matter;

Figs. 4a and 4b illustrate generalized flowcharts of classifying defects in accordance with certain embodiments of the presently disclosed subject matter; and

Figs. 5a to 5c illustrate non-limiting examples of classification DNN architectures in accordance with certain embodiments of the presently disclosed subject matter.

Detailed Description

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be understood by those skilled in the art that the presently disclosed subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the presently disclosed subject matter.

Unless specifically stated otherwise (as will be apparent from the following discussion), it is to be understood that throughout the description, discussions utilizing terms such as "processing", "calculating", "representing", "comparing", "generating", "training", "segmenting", "registering", or the like, refer to the action(s) and/or process(es) of a computer that manipulates and/or transforms data into other data, said data represented as physical quantities (such as electronic quantities) and/or said data representing physical objects. The term "computer" should be expansively construed to cover any kind of hardware-based electronic device with data processing capabilities, including, as a non-limiting example, the FPEI system and respective parts thereof disclosed in the present application.

The terms "non-transitory memory" and "non-transitory storage medium" as used herein should be broadly construed to encompass any volatile or non-volatile computer memory suitable for the presently disclosed subject matter.

The term "defect" as used in this specification should be broadly construed to encompass any kind of abnormal or undesired feature on or within the sample.

The term "design data" as used in the specification should be broadly construed to encompass any data representing a hierarchical physical design (layout) of a sample. Design data may be provided by separate designers and/or may be derived from a physical design (e.g., via complex simulations, simple geometric and boolean operations, etc.). The design data may be provided in different formats, such as (as non-limiting examples) GDSII format, OASIS format, etc. The design data may be presented in a vector format, a grayscale intensity image format, or other format.

It is to be understood that certain features of the presently disclosed subject matter, which are described in the context of separate embodiments, may also be provided in combination in a single embodiment, unless expressly and otherwise stated. Conversely, various features of the presently disclosed subject matter, which are described in the context of a single embodiment, can also be provided separately or in any suitable subcombination. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the methods and apparatus.

Bearing this in mind, attention is drawn to Fig. 1, which illustrates a functional block diagram of an inspection system in accordance with certain embodiments of the presently disclosed subject matter. The inspection system 100 illustrated in Fig. 1 can be used for inspection of a sample (e.g., a wafer and/or parts thereof) as part of the sample fabrication process. The illustrated inspection system 100 includes a computer-based system 103 capable of automatically determining metrology-related and/or defect-related information using images obtained during sample fabrication. Such images are referred to hereinafter as fabrication process (FP) images. The system 103 is referred to hereinafter as the FPEI (Fabrication Process Examination Information) system. The FPEI system 103 can be operatively connected to one or more low resolution inspection tools 101 and/or one or more high resolution inspection tools 102 and/or other inspection tools. The inspection tools are configured to capture FP images and/or to review the captured FP image(s) and/or to enable or provide measurements related to the captured image(s). The FPEI system can be further operatively connected to the CAD server 110 and the data repository 109.

The FPEI system 103 includes a processor and memory circuitry (PMC) 104 operatively connected to a hardware-based input interface 105 and to a hardware-based output interface 106. The PMC 104 is configured to provide all processing necessary for operating the FPEI system (as detailed further with reference to Figs. 2-5) and includes a processor (not shown separately) and a memory (not shown separately). The processor of PMC 104 can be configured to execute several functional modules in accordance with computer-readable instructions implemented on a non-transitory computer-readable memory included in the PMC. Such functional modules are referred to hereinafter as included in the PMC. The functional modules included in PMC 104 include an operatively connected training set generator 111 and Deep Neural Network (DNN) 112. DNN 112 includes a DNN module 114 configured to enable data processing using deep neural network(s) for outputting application-related data based on fabrication process (FP) input data. Optionally, DNN 112 can include a pre-DNN module 113 configured to provide preprocessing before forwarding input data to the DNN module, and/or a post-DNN module 115 configured to provide post-processing of data generated by the DNN module. The operation of the FPEI system 103, PMC 104, and the functional modules therein is detailed further with reference to Figs. 2-5.

As will be described in further detail with reference to Figs. 2-5, the FPEI system is configured to receive FP input data via the input interface 105. FP input data can include data produced by the inspection tools (and/or derivatives thereof and/or metadata associated therewith), and/or data produced and/or stored in one or more data repositories 109 and/or in the CAD server 110 and/or in another relevant data repository. Note that FP input data can include images (e.g., captured images, images derived from captured images, simulated images, synthetic images, etc.) and associated numerical data (e.g., metadata, manually constructed attributes, etc.). It is further noted that image data can include data related to a layer of interest and/or to one or more other layers of the sample. Optionally, for training purposes, FP input data can include the entire available FAB data or part thereof selected in accordance with certain criteria.

The FPEI system is further configured to process at least a portion of the received FP input data and send the results (or portions thereof) through the output interface 106 to the storage system 107, the inspection tool(s), a computer-based Graphical User Interface (GUI)108 for rendering the results, and/or an external system (e.g., the FAB's Yield Management System (YMS)). GUI 108 may be further configured to enable user-specified input related to operating FPEI system 103.

By way of non-limiting example, a sample can be inspected by one or more low resolution inspection tools 101 (e.g., an optical inspection system, a low resolution SEM, etc.). The resulting data (referred to hereinafter as low resolution image data 121), informative of the low resolution images of the sample, can be transmitted to the FPEI system 103 directly or via one or more intermediate systems. Alternatively or additionally, the sample can be inspected by a high resolution tool 102 (e.g., a subset of potential defect locations selected for review can be reviewed by a scanning electron microscope (SEM) or an atomic force microscope (AFM)). The resulting data (referred to hereinafter as high resolution image data 122), informative of the high resolution images of the sample, can be transmitted to the FPEI system 103 directly or via one or more intermediate systems.

Note that images of a desired location on the sample can be captured at different resolutions. As a non-limiting example, so-called "defect images" of the desired location can be used to distinguish between defects and false alarms, while so-called "class images" of the desired location are obtained at higher resolution and can be used for defect classification. In some embodiments, images of the same location (at the same or different resolutions) can comprise several images registered therebetween (e.g., an image captured from the given location and one or more reference images corresponding to the given location).

Upon processing the FP input data (e.g., low resolution image data and/or high resolution image data, optionally together with other data such as design data, synthetic data, etc.), the FPEI system can send the results (e.g., instruction-related data 123 and/or 124) to any of the inspection tool(s), store the results (e.g., defect attributes, defect classifications, etc.) in the storage system 107, render the results via GUI 108, and/or send the results to an external system (e.g., to the FAB's Yield Management System (YMS)).

Those skilled in the art will readily appreciate that the teachings of the presently disclosed subject matter are not limited to the system shown in FIG. 1; the equivalent and/or modified functions may be combined or divided in another manner and may be implemented using any suitable combination of software and firmware and/or hardware.

Without limiting the scope of the present disclosure in any way, it should also be noted that the inspection tool may be implemented as various types of inspection machines (such as optical imaging machines, electron beam inspection machines, etc.). In some cases, the same inspection tool may provide both low resolution image data and high resolution image data. In some cases, at least one inspection tool may have metrology capability.

As will be described in further detail with reference to Figs. 2-5, DNN module 114 can include a plurality of DNN sub-networks, each DNN sub-network comprising a plurality of layers organized in accordance with a respective DNN architecture. Optionally, at least one of the DNN sub-networks can have an architecture different from the other DNN sub-networks. By way of non-limiting example, the layers in a sub-network can be organized in accordance with a Convolutional Neural Network (CNN) architecture, a recurrent neural network architecture, or otherwise. Optionally, at least part of the DNN sub-networks can have one or more shared layers (e.g., a last fusion layer, an output fully connected layer, etc.).

Each layer of DNN module 114 can include multiple basic computational elements (CEs), typically referred to in the art as neurons or nodes. Computational elements of a given layer can be connected with CEs of a preceding layer and/or a subsequent layer. Each connection between a CE of a preceding layer and a CE of a subsequent layer is associated with a weighting value. A given CE can receive inputs from CEs of a previous layer via the respective connections, each given connection being associated with a weighting value which can be applied to the input of the given connection. The weighting values can determine the relative strength of the connections and thus the relative influence of the respective inputs on the output of the given CE. The given CE can be configured to compute an activation value (e.g., the weighted sum of the inputs) and further derive an output by applying an activation function to the computed activation. The activation function can be, for example, an identity function, a deterministic function (e.g., linear, sigmoid, threshold, etc.), a stochastic function, or another suitable function. The output from the given CE can be transmitted to CEs of a subsequent layer via the respective connections. Likewise, each connection at the output of a CE can be associated with a weighting value which can be applied to the output of the CE prior to being received as an input of a CE of a subsequent layer. Further to the weighting values, there can be threshold values (including limiting functions) associated with the connections and CEs.
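By way of non-limiting illustration only, the computation performed by a single CE as described above can be sketched in Python as follows; the tanh activation and all names are assumptions chosen for illustration and are not mandated by the presently disclosed subject matter:

```python
import numpy as np

def ce_output(inputs, weights, threshold, activation=np.tanh):
    """A single computational element (CE): applies connection weights to
    its inputs, computes the activation value (weighted sum plus threshold),
    and derives the output by applying the activation function."""
    activation_value = np.dot(weights, inputs) + threshold
    return activation(activation_value)

# A CE with three weighted input connections:
print(ce_output(np.array([0.2, -1.0, 0.5]), np.array([0.7, 0.1, -0.3]), 0.05))
```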

The weighting and/or threshold values of a deep neural network can be initially selected prior to training, and can be further iteratively adjusted or modified during training to achieve an optimal set of weighting and/or threshold values in the trained DNN module. After each iteration, a difference between the actual output produced by the DNN module and the target output associated with the respective training set of data can be determined. The difference can be referred to as an error value. Training can be determined to be complete when a cost function indicative of the error value is less than a predetermined value, or when only a limited change in performance between iterations is achieved. Optionally, at least part of the DNN sub-networks can be trained separately prior to training the entire DNN.
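By way of non-limiting illustration only, the iterative weight adjustment and the two stopping criteria described above can be sketched as follows, assuming the PyTorch framework; the optimizer, cost function, and threshold values are illustrative assumptions rather than part of the disclosed subject matter:

```python
import torch
from torch import nn

def train_dnn(model, loader, target_cost=1e-3, max_epochs=100, patience=5):
    """Iteratively adjust the weighting values until the cost function is
    below a predetermined value, or until only a limited change in
    performance between iterations is achieved."""
    optimizer = torch.optim.Adam(model.parameters())
    cost_fn = nn.CrossEntropyLoss()  # difference between actual and target output
    best_cost, stale_epochs = float("inf"), 0
    for _ in range(max_epochs):
        epoch_cost = 0.0
        for inputs, targets in loader:               # one pass over the training set
            optimizer.zero_grad()
            error = cost_fn(model(inputs), targets)  # error value
            error.backward()
            optimizer.step()                         # adjust weights/thresholds
            epoch_cost += error.item()
        epoch_cost /= len(loader)
        if epoch_cost < target_cost:                 # cost below predetermined value
            break
        if epoch_cost < best_cost - 1e-6:
            best_cost, stale_epochs = epoch_cost, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:             # limited change between iterations
                break
    return model
```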

The set of DNN input data used to adjust the weight/threshold values of the deep neural network is referred to hereinafter as the training set.

The input to DNN 112 can be pre-processed by pre-DNN module 113 prior to being input to DNN module 114, and/or the output of DNN module 114 can be post-processed by post-DNN module 115 prior to being output from DNN 112. In such cases, training DNN 112 further includes determining parameters of the pre-DNN module and/or the post-DNN module. The DNN module can be trained so as to minimize the cost function of the entire DNN, while the parameters of the pre-DNN module and/or post-DNN module can be predefined and, optionally, can be adjusted during training. The training-based set of parameters can further include parameters related to pre-DNN and post-DNN processing.

It is noted that the teachings of the presently disclosed subject matter are not limited by the number and/or architecture of the DNN subnetworks.

Note that the inspection system illustrated in Fig. 1 can be implemented in a distributed computing environment, in which the aforementioned functional modules shown in Fig. 1 can be distributed over several local and/or remote devices and linked through a communication network. It is further noted that, in other embodiments, at least part of the inspection tools 101 and/or 102, data repository 109, storage system 107, and/or GUI 108 can be external to the inspection system 100 and operate in data communication with the FPEI system 103 via the input interface 105 and the output interface 106. The FPEI system 103 can be implemented as stand-alone computer(s) to be used in conjunction with the inspection tools. Alternatively, the respective functions of the FPEI system can, at least partly, be integrated with one or more inspection tools.

Referring to Fig. 2, there is illustrated a generalized flowchart of using DNN 112 for automatically determining inspection-related data based on fabrication process (FP) images. As presented in U.S. Patent Application No. 2017/0177997, assigned to the assignee of the present application and incorporated herein by reference in its entirety, the process includes a setup step comprising training a Deep Neural Network (DNN) 112, wherein the DNN is trained for a given inspection-related application and is characterized by an application-specific set of training-based parameters. By way of non-limiting example, the inspection-related application can be at least one of:

defect classification using attributes generated by the DNN (defining a class may include modifying and/or updating a pre-existing class definition and/or identifying a new class);

segmentation of a manufacturing process image, the segmentation comprising: dividing the FP image into segments (e.g., material type, edges, pixel labels, regions of interest, etc.);

defect detection (e.g., using an FP image to identify one or more candidate defects (if they exist), mark the one or more candidate defects, determine truth values for candidate defects, obtain shape information for defects, etc.);

a registration between two or more images, the registration comprising: obtaining geometric warping parameters (which may be global or local, simple shifts or more complex transformations) between the images;

cross-modality regression (e.g., reconstructing an image based on one or more images from a different inspection modality, such as an SEM or optical image from CAD, a height map from SEM images, or a high resolution image from a low resolution image);

regression-based reconstruction of image properties (e.g., contact hole depth, etc.); and

combination(s) of the above.

A DNN trained for the given application is obtained during the setup step (201). During runtime, the PMC of the FPEI system uses the obtained trained DNN 112 to process an FP sample comprising FP images (202). Thereby, the PMC obtains (203) application-specific inspection-related data characterizing at least one of the images in the processed FP sample. When processing the one or more FP images, the PMC can also use predefined parameters and/or parameters received from other sources in addition to the training-based parameters characterizing DNN 112 upon training.

The FP images in the FP sample can be from different inspection modalities (e.g., from different inspection tools, from different channels of the same inspection tool (e.g., bright field images and dark field images), from the same inspection tool using different operating parameters, or can be derived from design data, etc.).

For example, the FP images can be selected from: images of a sample (e.g., a wafer or parts thereof) captured during the fabrication process; derivatives of captured images obtained by various pre-processing stages (e.g., an image of a part of a wafer or a photomask captured by an SEM or an optical inspection system, an SEM image roughly centered around a defect to be classified by ADC, an SEM image of a larger area in which a defect is to be localized by ADR, registered images of different inspection modalities corresponding to the same mask location, a segmented image, a height map image, etc.); and computer-generated design data based images. Note that FP images can include images of a layer of interest and/or registered images of one or more other layers of the sample. FP images of different layers are also referred to hereinafter as images received from different modalities.

By way of non-limiting example, application-specific inspection-related data can represent a per-pixel map of values whose meaning depends on the application (e.g., a binary map for defect detection; a discrete map for nuisance family prediction indicating the family type or general class; a discrete map for defect type classification; continuous values for cross-modality or die-to-model (D2M) regression; etc.). A per-pixel map can be further obtained together with a per-pixel probability map indicative of the probability of the values obtained for the pixels.
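By way of non-limiting illustration only, the following sketch (assuming a DNN head that produces per-pixel class scores; the shapes and names are illustrative assumptions) shows how such a per-pixel value map and the accompanying per-pixel probability map can be derived:

```python
import torch

def per_pixel_maps(logits):
    """logits: tensor of shape (num_classes, H, W) holding per-pixel class
    scores. Returns a per-pixel value map (e.g., defect type per pixel) and
    a per-pixel probability map for the chosen values."""
    probs = torch.softmax(logits, dim=0)     # per-pixel class probabilities
    prob_map, value_map = probs.max(dim=0)   # probability and index of best class
    return value_map, prob_map
```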

Alternatively or additionally, application-specific inspection-related data can represent one or more values summarizing the entire image content (rather than per-pixel values), such as, by way of non-limiting example, defect attributes, defect bounding box candidates and associated defectness probabilities for automatic defect detection applications, defect class and class probability for automatic defect classification applications, etc.

Alternatively or additionally, the obtained application-specific defect-related data may not be directly related to defects, but be usable for defect analysis (e.g., boundaries between layers of the wafer obtained by segmentation of FP images can be usable for defining defects; defect environment data such as characteristics of the background pattern; etc.). Alternatively or additionally, inspection-related data can be usable for metrology purposes.

It is further noted that, in embodiments of the presently disclosed subject matter, the characteristics of the images included in the training samples and/or FP samples differ from conventional RGB images used with general-purpose deep neural networks known in the art. For example, electron-based imaging yields grayscale images with effects such as non-uniform noise distribution, charging effects, large variance between sensors (different tools), etc. Furthermore, an SEM image is typically composed of 5 different grayscale images, each corresponding to a different perspective of the captured image (e.g., top, left, right, up, down).

Referring to Fig. 3, there is illustrated a generalized functional diagram of DNN 112 configured in accordance with certain embodiments of the presently disclosed subject matter.

As detailed above, the DNN network can be trained, and inspection-related output data can be obtained, using FP input data of multiple data types, such as, for example, images of different origins and resolutions (e.g., defect images, class images, reference images, CAD images, etc.), different types of numerical data (e.g., different types of data derived from the images (e.g., height maps, defect masks, grades, segmentations, etc.), different types of metadata (e.g., imaging conditions, pixel size, etc.), different types of manually constructed attributes (e.g., defect size, orientation, background segment, etc.)), etc. In accordance with certain embodiments of the presently disclosed subject matter, DNN 112 is configured to provide dedicated (i.e., separate) processing of different types of FP input data, both during setup and during runtime. Furthermore, as will be further detailed with reference to Figs. 5a-5c, DNN 112 can be configured to combine dedicated processing of some input data types with fused processing of other input data types, and to further fuse at least part of the respective results.

The DNN module 114 can include a plurality of input sub-networks (denoted 302-1 to 302-3), each given input sub-network configured to process a certain type of FP input data (denoted 301-1 to 301-3) specified for the given sub-network. The architecture of a given input sub-network can correspond to the respectively specified type(s) of input data or, alternatively, can be agnostic to the type of input data.

The input sub-networks can be connected to an aggregation sub-network 305, which is further connected to an output sub-network 306 configured to output application-specific inspection-related data. Optionally, at least part of the input sub-networks can be directly connected to the aggregation sub-network 305 or to the output sub-network 306. Optionally, the aggregation sub-network and the output sub-network can be organized as a single sub-network.

The plurality of input sub-networks includes one or more sub-networks (hereinafter referred to as "image sub-networks") configured to process FP images, wherein different image sub-networks are configured to process images received from different inspection modalities. As shown, image sub-network 302-1 processes FP images of a first type (301-1) (e.g., low resolution images) and image sub-network 302-2 separately processes FP images of a second type (301-2) (e.g., high resolution images).

By way of non-limiting example, an inspection modality can differ from another inspection modality by the inspection tool, by a channel of the same inspection tool, by operating parameters of the inspection tool (e.g., the perspective and/or resolution provided by a certain inspection tool/channel, etc.), and/or by the layer corresponding to the respective FP images. Alternatively or additionally, inspection modalities can differ in the nature of obtaining the respective FP images: captured images, images derived therefrom, pre-processed images (e.g., mean images and/or difference images), and simulated images (including CAD-based images) are referred to hereinafter as images received from inspection modalities differing in the nature of image obtainment. Alternatively or additionally, an inspection modality can differ from another inspection modality by the derivation technique applied to the captured images (e.g., FP images derived by segmentation, defect contour extraction, height map calculation, etc.).

For purposes of illustration only, the following description is provided with respect to image sub-networks having a convolutional neural network (CNN) architecture. As a non-limiting example, the architecture of an image sub-network can be provided in the manner disclosed in Gao Huang et al., "Densely Connected Convolutional Networks" (https://arxiv.org/pdf/1608.06993.pdf), incorporated herein by reference in its entirety. Those skilled in the art will readily appreciate that the teachings of the presently disclosed subject matter are equally applicable to other DNN architectures suitable for processing images.

The plurality of input sub-networks can further include at least one DNN sub-network (denoted 302-3) configured to process numerical types of input data (e.g., metadata related to the FP images, general attributes related to one or more inspection tools and/or one or more dies, manually constructed attributes, etc.). Alternatively or additionally, at least part of the numerical input data can be fused directly into the aggregation sub-network 305 or the output sub-network 306.

Aggregation sub-network 305 may include one or more fusion layers 303 connected to one or more fully connected layers 304. Optionally, the one or more fusion layers 303 may be organized (in parallel and/or sequentially) in one or more fusion DNN sub-networks. Optionally, one or more fully-connected layers 304 may be organized (in parallel and/or sequentially) in one or more fully-connected DNN sub-networks.

Optionally, the output sub-network 306 can include a customization layer configured to customize the application-specific inspection-related data (e.g., in accordance with FAB requirements).
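By way of non-limiting illustration only, the following Python sketch mirrors the Fig. 3 topology of per-modality input sub-networks, an aggregation sub-network, and an output sub-network. The PyTorch framework, single-channel images, and all layer sizes and names are assumptions made for illustration and are not mandated by the presently disclosed subject matter:

```python
import torch
from torch import nn

class MultiModalDNN(nn.Module):
    """Sketch of the Fig. 3 topology: dedicated input sub-networks per
    inspection modality plus a numeric branch, followed by a fusion
    (aggregation) stage and an output sub-network."""
    def __init__(self, n_numeric: int, n_outputs: int):
        super().__init__()
        def image_branch():  # one small CNN per inspection modality
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.first_modality_net = image_branch()    # e.g., low resolution images
        self.second_modality_net = image_branch()   # e.g., high resolution images
        self.numeric_net = nn.Sequential(           # metadata / manual attributes
            nn.Linear(n_numeric, 16), nn.ReLU())
        self.aggregation_net = nn.Sequential(       # fusion + fully connected layers
            nn.Linear(16 * 3, 64), nn.ReLU())
        self.output_net = nn.Linear(64, n_outputs)  # application-specific output

    def forward(self, first_images, second_images, numeric):
        # Each input type is processed separately, then the results are fused.
        fused = torch.cat([self.first_modality_net(first_images),
                           self.second_modality_net(second_images),
                           self.numeric_net(numeric)], dim=1)
        return self.output_net(self.aggregation_net(fused))
```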

The following description is provided for classification applications for purposes of illustration only. Those skilled in the art will readily appreciate that the teachings of the presently disclosed subject matter are equally applicable to other applications related to the inspection of samples. As non-limiting examples, the process detailed with reference to Fig. 3 is equally applicable to detecting defects in a sample; registering between at least two fabrication process (FP) images; segmenting at least one FP image selected from a high resolution image of the sample, a low resolution image of the sample, and a design data based image of the sample; regression-based reconstruction of FP images corresponding to data obtained by different inspection modalities; metrology applications; regression-based reconstruction of image characteristics; and the like.

Referring to Figs. 4a and 4b, there are illustrated non-limiting examples of obtaining inspection-related data usable for classifying defects in a sample.

The process comprises a setup step 410 of classification-specific training of the DNN (e.g., training to provide classification-related attributes enabling minimal defect classification error), and a runtime step 420 of using the trained DNN to generate defect attributes and/or perform defect classification.

During setup 410 (common to Figs. 4a and 4b), upon obtaining a set of first training samples comprising FP images and metadata thereof (401) and corresponding ground truth data (402), PMC 104 generates (403) a classification training set and uses the generated classification training set to obtain (404) a trained DNN characterized by classification-related training parameters. Generating (403) the classification training set can include augmenting the first training samples and the ground truth data, and including the augmented training samples and augmented ground truth data in the training set. Alternatively or additionally, augmenting the first training set can include generating synthetic training samples and including them in the set. Alternatively or additionally, generating (403) the classification training set can include generating derivatives of the FP images (e.g., mean or difference images), manually constructed attributes, and so forth.

Note that, in accordance with certain embodiments of the presently disclosed subject matter, the DNN can be trained based on the entire available FAB data (e.g., CAD data, high resolution images, low resolution images, metadata, general attributes, etc.) related to all types of layers and products from all fabrication phases. Alternatively, the DNN can be trained on a portion of the available FAB data selected in accordance with certain criteria (e.g., labeled/unlabeled data, specific layer(s), specific product(s), specific class(es), etc.). The DNN can be further constantly trained (e.g., responsive to new classes introduced in the FAB, or as a regular automated procedure) in order to keep it relevant to the constantly changing FAB data.

Optionally, the DNN can be coarsely trained (possibly in a FAB-independent manner) on a different dataset, and further fine-tuned (e.g., by means of transfer learning or another suitable technique) on at least part of the available FAB data for a particular inspection-related application.

It is further noted that, due to the nature of defect distribution in the FAB, the Pareto can be highly unbalanced and can consist of 50-80% defects of a single class. FAB data can be further characterized by high classification error rates and limited data availability. Techniques of augmentation and synthetic image generation enable composing training sets based on FAB data so as to fit the requirements of a particular application. As a non-limiting example, a lack of FAB data related to defects of a specific class (e.g., a minority class) can be compensated for by presenting synthetic images of respective defects. As another non-limiting example, a lack of appropriate FAB data related to a specific layer can be compensated for by synthetic images related to that layer. As yet another non-limiting example, synthetic images can be generated to recover missing information (e.g., a high resolution image) for a specific defect.
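For illustration only, oversampling of minority classes is a simple complementary measure against such an unbalanced Pareto during training. The sketch below (assuming PyTorch and integer class labels) illustrates this measure; it is not the augmentation/synthetic-image technique described above:

```python
import torch
from torch.utils.data import WeightedRandomSampler

def minority_oversampler(labels):
    """Draws training samples with probability inversely proportional to
    their class frequency, so that a class holding 50-80% of the Pareto
    does not dominate every training batch."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels)
    sample_weights = 1.0 / class_counts[labels].float()
    return WeightedRandomSampler(sample_weights,
                                 num_samples=len(labels),
                                 replacement=True)

# Usage (hypothetical loader):
#   DataLoader(dataset, batch_size=32, sampler=minority_oversampler(labels))
```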

Upon generating (403) the classification training set, the PMC trains (404) the DNN to extract classification-related features and to provide defect attributes (and/or defect labels) enabling minimal classification error. The training process can include updating the training set. The training process yields a trained DNN characterized by the classification-related training parameters.

During runtime 420, the PMC uses the classification-specifically trained DNN to process (405) an FP sample and obtain (406) defect attributes. As non-limiting examples, the FP sample can include a group of images related to the defect to be classified and obtained by the same or different inspection modalities, reference die images, CAD-based images, and data derived from the obtained images (e.g., height maps, defect masks, grades, segmentations, etc.). The FP sample can further include metadata related to the defect to be classified (e.g., imaging conditions, pixel size, etc.) and engineering attributes (e.g., defect size, orientation, background segment, etc.). As a non-limiting example, the metadata can be generated by the PMC in accordance with predefined instructions stored in the PMC and/or received from the respective inspection tools.

As will be described in further detail with reference to Figs. 5a-5c, in accordance with certain embodiments of the presently disclosed subject matter, the data in the training samples and in the FP samples can be divided among the respective input sub-networks. For example, images from different inspection modalities (or groups thereof) can constitute the inputs of different image sub-networks.

As non-limiting examples, low resolution images from the FP sample (e.g., optical images and/or low resolution SEM images and/or derivatives thereof related to the defect location and obtained from different perspectives and/or under different illumination conditions), together with corresponding reference images (e.g., die reference images, cell reference images, low resolution CAD-based images related to the defect location, etc.), can constitute the "defect"-level input of a first image sub-network, while high resolution images from the FP sample (e.g., SEM images of the defect location and/or derivatives thereof obtained from different perspectives and/or under different illumination conditions, high resolution CAD-based images related to the defect, etc.) can constitute the "class"-level input of a second image sub-network. Optionally, images of the same resolution can constitute the inputs of more than one image sub-network (e.g., a captured image and a derivative thereof can be fed to different input sub-networks).

Numerical data (e.g., metadata, manually constructed attributes, etc.) included in the FP sample can constitute the input of a third input sub-network, of the aggregation sub-network, or of the output sub-network. Optionally, depending on data type, numerical data can constitute the inputs of several input sub-networks, of the aggregation sub-network, and/or of the output sub-network (e.g., metadata can be fed to the third input sub-network, while manually constructed attributes can be fed to the aggregation sub-network).

The training samples can correspond to the FP samples. Each training sample can include at least the same number of images, obtained by the same inspection modalities and in the same relationship, as the images in the corresponding FP sample. Optionally, a training sample can further include additional images obtained by additional inspection modalities typically unavailable during runtime. The distribution of data in a training sample over the input sub-networks shall correspond to the respective distribution in the FP samples. Note that, in some embodiments, a training sample can lack some of the images corresponding to the images in the FP sample. Such deficiencies can be compensated for by different techniques, some of them known in the art. As a non-limiting example, a missing image can be compensated for by an image generated as the mean of corresponding images from other training samples.

In the process illustrated in Fig. 4a, the FPEI system outputs (408) the DNN-obtained defect attributes to an external classifier and further outputs the engineering attributes to an external classification system. The classification results are obtained (409) by the external classification system processing the results it receives from the external classifier (which can, optionally, be part of the external classification system) together with the engineering attributes.

In the process illustrated in Fig. 4b, the FPEI system uses the DNN-obtained classification-related attributes and, optionally, the engineering attributes (optionally obtained (407) when processing the FP image(s)) to generate (408-1) intermediate classification results. The FPEI system further outputs the intermediate classification results and the engineering attributes to an external classification system, which processes (409-1) the received data and yields the classified defect(s). Optionally, operation 408-1 can be omitted, and the FPEI system can use the DNN-obtained classification-related attributes and the engineering attributes to yield the classified defects without involving the external classification system. Note that the engineering attributes can be part of the metadata input to the DNN. Alternatively or additionally, the engineering attributes and/or derivatives thereof can be part of the defect representation.

Thus, as illustrated, the classification application can be implemented in different manners. As a non-limiting example, a classification-specifically trained DNN can classify defects presented in FP images based on a combination of DNN-obtained classification-related attributes and engineering attributes, or based on DNN-obtained classification-related attributes only. Optionally, the classification-specifically trained DNN can enable classifying such defects by providing the classification-related attributes (and, optionally, the engineering attributes) to an external classification system.

Referring to Figs. 5a-5c, there are illustrated non-limiting examples of classification DNN architectures. It is noted that, optionally, the images can be pre-processed before being fed to the input sub-networks, and the resulting derivatives (e.g., mean images, difference images, etc.) can be fed to the respective image sub-networks instead of, or in addition to, the images obtained by the inspection tools.

As illustrated in Fig. 5a, the data from the FP sample can be divided among three input sub-networks (denoted 502-1 to 502-3). CNN image sub-network 502-1 processes the "defect"-level low resolution images 501-1 to obtain "defect"-level features, and CNN image sub-network 502-2 processes the "class"-level high resolution images 501-2 to obtain "class"-level features. Metadata 501-3 (e.g., data informative of pixel size, field of view (FOV), scan rotation, scan rate, frames, etc.) is processed by a fully connected input sub-network 502-3 to obtain "metadata"-level features.

Optionally, the DNN shown may have one or more additional input sub-networks (e.g., ADR-based input, etc.).

The features separately generated by each of the input sub-networks are fused in a fully connected sub-network 503, which aggregates the received features and computes the final attribute representation.

In the example illustrated in Fig. 5a, fusion of the data from all the different sources is feature-based.

Fig. 5b illustrates an example of data-based fusion followed by feature-based fusion. The different image data 501-1 to 501-2k are preprocessed together by a preprocessing sub-network 510. Optionally, the preprocessing sub-network 510 can include a respective first convolution layer for each image type (e.g., for each inspection modality or group thereof). Alternatively, the first convolution layers can be the same for all image types. Further, the preprocessing sub-network 510 can aggregate the images to obtain an aggregated image (e.g., by taking the maximal value obtained for each pixel). The resulting aggregated image is fed to the input sub-network 502-1 for extraction of aggregated image features. Further, the numerical-level features and the aggregated image features are fed to the output sub-network 503 to obtain the final attribute representation.
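By way of non-limiting illustration only, the pre-processing sub-network 510 described above can be sketched as follows, assuming PyTorch, same-sized single-channel images, and per-pixel maximum as the aggregation rule; the class name and layer sizes are illustrative assumptions:

```python
import torch
from torch import nn

class PreprocessingSubNetwork(nn.Module):
    """Sketch of sub-network 510: a respective first convolution layer per
    image type, then per-pixel maximum aggregation into one image stack."""
    def __init__(self, n_image_types: int, channels: int = 8):
        super().__init__()
        self.first_convs = nn.ModuleList(
            [nn.Conv2d(1, channels, kernel_size=3, padding=1)
             for _ in range(n_image_types)])

    def forward(self, images):
        # `images` is a list of same-sized (batch, 1, H, W) tensors,
        # one per image type / inspection modality.
        mapped = [conv(img) for conv, img in zip(self.first_convs, images)]
        return torch.stack(mapped).max(dim=0).values  # per-pixel maximum
```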

Fig. 5c illustrates an example of decision-based fusion. Each of the input sub-networks 502-1 to 502-3 feeds a respective dedicated fully connected sub-network (denoted 503-1 to 503-3). Thus, each input type is processed by a dedicated channel extracting features of the respective level. The final defect-level features, the final class-level features, and the final metadata features are fused in a classification sub-network 505, which classifies the received features and computes classification labels for the known classes.

It is noted that the final attribute representation (e.g., as illustrated in Figs. 5a and 5b) enables further classification into one or more previously unknown classes.

Thus, as detailed above, the DNN is capable of generating a defect representation using FP samples comprising a variety of data (e.g., source images obtained from different perspectives and with different resolutions, different image content (context/defect), derivatives of the source images (height maps, segmentations, etc.), numerical data (e.g., pixel size, manually constructed attributes), etc.), thereby improving the accuracy of the classification results.

A further advantage of certain embodiments of the presently disclosed subject matter is the enablement of an automated FAB-based procedure capable of establishing new attributes for classes to be introduced in the future.

An additional advantage of certain embodiments of the presently disclosed subject matter is the generation of a stable attribute space that does not require constant expert calibration.

It is to be understood that the invention is not limited in its application to the details set forth in the description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. It is to be understood, therefore, that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception, upon which this disclosure is based, may readily be utilized as a basis for the designing of other structures, methods and systems for carrying out the several purposes of the present disclosed subject matter.

It will also be appreciated that a system according to the invention may be implemented, at least in part, on a suitably programmed computer. Likewise, the invention contemplates a computer program being readable by a computer for executing the method of the invention. The invention further contemplates a non-transitory computer readable memory tangibly embodying a program of instructions executable by a computer to perform the method of the invention.

Those skilled in the art will readily appreciate that various modifications and changes may be applied to the embodiments of the invention as described in the foregoing without departing from the scope of the invention, which is defined in and by the appended claims.
