Adaptive region of interest based on deep learning for critical dimension measurement of semiconductor substrates

Document No. 573191 · Published 2021-05-18

Reading note: This technology, "Adaptive region of interest based on deep learning for critical dimension measurement of semiconductor substrates", was created by A. Yati on 2019-10-01. Its principal content is as follows: A metrology system is disclosed. In one embodiment, the system includes a characterization subsystem configured to acquire one or more images of a sample. In another embodiment, the system includes a controller configured to: receive one or more training images of a sample from the characterization subsystem; receive one or more training region of interest (ROI) selections within the one or more training images; generate a machine learning classifier based on the one or more training images and the one or more training ROI selections; receive one or more product images of a sample from the characterization subsystem; generate one or more classified regions of interest using the machine learning classifier; and determine one or more measurements of the sample within the one or more classified regions of interest.

1. A system, comprising:

a characterization subsystem configured to acquire one or more images of a sample; and

a controller including one or more processors configured to execute a set of program instructions stored in memory, the set of program instructions configured to cause the one or more processors to:

receive one or more training images of a sample from the characterization subsystem;

receive one or more training region of interest (ROI) selections within the one or more training images;

generate a machine learning classifier based on the one or more training images and the one or more training ROI selections;

receive one or more product images of a sample from the characterization subsystem;

generate one or more classified regions of interest using the machine learning classifier; and

determine one or more measurements of the sample within the one or more classified regions of interest.
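The claimed train-classify-measure flow can be outlined in code. The sketch below is illustrative only and is not the patent's implementation: the nearest-centroid intensity "classifier" standing in for the machine learning classifier, the (row, col, height, width) box convention, and all function names are assumptions made for the example.

```python
import numpy as np

def train_patch_classifier(train_images, train_rois):
    """Learn one intensity centroid inside and one outside the training ROIs.
    train_rois: list of (row, col, height, width) boxes, one per image."""
    inside, outside = [], []
    for img, (r, c, h, w) in zip(train_images, train_rois):
        mask = np.zeros(img.shape, dtype=bool)
        mask[r:r + h, c:c + w] = True
        inside.append(img[mask].mean())
        outside.append(img[~mask].mean())
    # Nearest-centroid model: a single mean intensity per class.
    return {"inside": np.mean(inside), "outside": np.mean(outside)}

def classify_roi(model, product_image):
    """Label each pixel by its nearest centroid and return the bounding box
    of the 'inside' class as the classified region of interest."""
    d_in = np.abs(product_image - model["inside"])
    d_out = np.abs(product_image - model["outside"])
    rows, cols = np.where(d_in < d_out)
    return rows.min(), cols.min(), rows.ptp() + 1, cols.ptp() + 1

def measure_cd(product_image, roi, threshold=0.5):
    """Critical-dimension proxy: count of bright columns inside the ROI."""
    r, c, h, w = roi
    profile = product_image[r:r + h, c:c + w].mean(axis=0)
    return int((profile > threshold).sum())
```

In use, a training image and its hand-drawn ROI produce the model; a product image with a shifted or resized feature still yields an ROI that tracks the feature, because placement is driven by classification rather than by alignment to a fixed anchor.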

2. The system of claim 1, wherein the machine learning classifier is configured to identify one or more measurements of interest for a sample based on the one or more training images and the one or more training ROI selections.

3. The system of claim 1, wherein generating one or more classified regions of interest using the machine learning classifier comprises:

receiving one or more product ROI selections within the one or more product images; and

adaptively modifying one or more characteristics of the one or more product ROI selections using the machine learning classifier to generate the one or more classified regions of interest.

4. The system of claim 3, wherein at least one product ROI selection is received from a user via a user interface.

5. The system of claim 3, wherein adaptively modifying one or more characteristics of the one or more product ROI selections using the machine learning classifier comprises:

adaptively modifying, using the machine learning classifier, at least one of a size or a shape of at least one product ROI selection.

6. The system of claim 3, wherein adaptively modifying one or more characteristics of the one or more product ROI selections using the machine learning classifier comprises:

adaptively modifying an orientation of at least one product ROI selection using the machine learning classifier to generate a classified region of interest that has been rotated relative to the at least one product ROI selection.
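The rotation contemplated by claim 6 amounts to rotating the ROI's corner coordinates about the ROI centroid by an angle that, in a real system, the classifier would estimate. A minimal sketch, assuming corners given as (x, y) points and an externally supplied angle (both conventions are assumptions, not the patent's):

```python
import numpy as np

def rotate_roi(corners, angle_deg):
    """Rotate ROI corner coordinates about the ROI centroid.
    corners: (N, 2) array of (x, y) points."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    center = corners.mean(axis=0)
    # Rotate each corner about the centroid, leaving the centroid fixed.
    return (corners - center) @ rot.T + center
```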

7. The system of claim 1, wherein generating one or more classified regions of interest using the machine learning classifier comprises:

receiving one or more product pattern of interest (POI) selections within the one or more product images; and

generating the one or more classified regions of interest based on the one or more product POI selections.

8. The system of claim 1, wherein the one or more measurements comprise critical dimension measurements within the one or more classified regions of interest.

9. The system of claim 1, wherein the characterization subsystem comprises at least one of a Scanning Electron Microscope (SEM) subsystem or an optical characterization subsystem.

10. The system of claim 1, wherein at least one training ROI selection is received from a user via a user interface.

11. The system of claim 1, wherein the machine learning classifier comprises at least one of a deep learning classifier, a Convolutional Neural Network (CNN), an ensemble learning classifier, a random forest classifier, or an artificial neural network.

12. A system, comprising:

a controller including one or more processors configured to execute a set of program instructions stored in memory, the set of program instructions configured to cause the one or more processors to:

receive one or more training images of a sample;

receive one or more training region of interest (ROI) selections within the one or more training images;

generate a machine learning classifier based on the one or more training images and the one or more training ROI selections;

receive one or more product images of a sample;

generate one or more classified regions of interest using the machine learning classifier; and

determine one or more measurements of the sample within the one or more classified regions of interest.

13. The system of claim 12, wherein the machine learning classifier is configured to identify one or more measurements of interest for a sample based on the one or more training images and the one or more training ROI selections.

14. The system of claim 12, wherein generating one or more classified regions of interest using the machine learning classifier comprises:

receiving one or more product ROI selections within the one or more product images; and

adaptively modifying one or more characteristics of the one or more product ROI selections using the machine learning classifier to generate the one or more classified regions of interest.

15. The system of claim 14, wherein adaptively modifying one or more characteristics of the one or more product ROI selections using the machine learning classifier comprises:

adaptively modifying, using the machine learning classifier, at least one of a size or a shape of at least one product ROI selection.

16. The system of claim 14, wherein adaptively modifying one or more characteristics of the one or more product ROI selections using the machine learning classifier to generate the one or more classified regions of interest comprises:

adaptively modifying an orientation of at least one product ROI selection using the machine learning classifier to generate a classified region of interest that has been rotated relative to the at least one product ROI selection.

17. The system of claim 12, wherein the one or more measurements comprise critical dimension measurements within the one or more classified regions of interest.

18. The system of claim 12, wherein at least one training ROI selection is received from a user input device of a user interface.

19. The system of claim 12, wherein the machine learning classifier comprises at least one of a deep learning classifier, a Convolutional Neural Network (CNN), an ensemble learning classifier, a random forest classifier, or an artificial neural network.

20. A method, comprising:

acquiring one or more training images of a sample using a characterization subsystem;

receiving one or more training region of interest (ROI) selections within the one or more training images;

generating a machine learning classifier based on the one or more training images and the one or more training ROI selections;

acquiring one or more product images of the sample using the characterization subsystem;

generating one or more classified regions of interest using the machine learning classifier; and

determining one or more measurements of the sample within the one or more classified regions of interest.

Technical Field

The present invention relates generally to the field of sample characterization and metrology, and more particularly to a system and method for adaptive region of interest selection using machine learning techniques.

Background

The demand for electronic logic and memory devices with ever-smaller footprints and features presents manufacturing challenges beyond the fabrication of the desired dimensions. Increasingly complex structures result in an increasing number of parameters that must be monitored and controlled to maintain device integrity. Among the important characteristics in semiconductor manufacturing are the Critical Dimension (CD) of device features and Critical Dimension Uniformity (CDU). Monitoring CDU helps track process variations and identify process tool drift that requires correction.

Traditionally, monitoring features of interest (e.g., CDU) involves: defining a pattern of interest (POI); defining a region of interest (ROI) within which measurements (e.g., CDU measurements) are to be made relative to the POI; detecting an edge of the ROI; and performing the measurement. However, since current techniques involve aligning the POI with a Scanning Electron Microscope (SEM) image and placing the ROI based on the POI location, the accuracy of ROI placement depends on SEM-to-SEM alignment, which may be unreliable. Furthermore, alignment accuracy is typically low since the POI structure sizes defined within each image may vary widely. Due to this misalignment, the ROI may be misplaced and thereby fail to include the entire region required for a particular measurement of interest.

Additionally, current techniques are unable to correct for process variations and/or structural variations that affect alignment accuracy. Thus, POI alignment, and in turn ROI alignment, within SEM images may fail due to structural variations within the sample itself. For example, target structure size variations may cause POI and ROI alignment failures, thereby hindering efficient monitoring of measurements of interest.

It is therefore desirable to provide a system and method that addresses the deficiencies of previous methods identified above.

Disclosure of Invention

In accordance with one or more embodiments of the present disclosure, a system is disclosed. In one embodiment, the system includes a characterization subsystem configured to acquire one or more images of a sample. In another embodiment, the system includes a controller including one or more processors configured to execute a set of program instructions stored in memory, the set of program instructions configured to cause the one or more processors to: receive one or more training images of a sample from the characterization subsystem; receive one or more training region of interest (ROI) selections within the one or more training images; generate a machine learning classifier based on the one or more training images and the one or more training ROI selections; receive one or more product images of a sample from the characterization subsystem; generate one or more classified regions of interest using the machine learning classifier; and determine one or more measurements of the sample within the one or more classified regions of interest.

In accordance with one or more embodiments of the present disclosure, a system is disclosed. In one embodiment, the system includes a controller including one or more processors configured to execute a set of program instructions stored in a memory, the set of program instructions configured to cause the one or more processors to: receive one or more training images of a sample; receive one or more training region of interest (ROI) selections within the one or more training images; generate a machine learning classifier based on the one or more training images and the one or more training ROI selections; receive one or more product images of a sample; generate one or more classified regions of interest using the machine learning classifier; and determine one or more measurements of the sample within the one or more classified regions of interest.

According to one or more embodiments of the present disclosure, a method is disclosed. In one embodiment, the method includes: acquiring one or more training images of a sample using a characterization subsystem; receiving one or more training region of interest (ROI) selections within the one or more training images; generating a machine learning classifier based on the one or more training images and the one or more training ROI selections; acquiring one or more product images of the sample using the characterization subsystem; generating one or more classified regions of interest using the machine learning classifier; and determining one or more measurements of the sample within the one or more classified regions of interest.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not necessarily restrictive of the invention. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the general description, serve to explain the principles of the invention.

Drawings

The numerous advantages of the present invention may be better understood by those skilled in the art by reference to the accompanying drawings in which:

FIG. 1 illustrates pattern of interest (POI) and region of interest (ROI) alignment on a sample.

Fig. 2 illustrates a pattern of interest (POI) including a target location.

Fig. 3A-3B illustrate alignment errors between a region of interest (ROI) of a product image and a region of interest (ROI) of a control image.

Fig. 4A-4B illustrate alignment errors between a region of interest (ROI) of a product image and a region of interest (ROI) of a control image.

Fig. 5 illustrates a system for adaptive region of interest (ROI) selection, in accordance with one or more embodiments of the present disclosure.

Fig. 6A illustrates a system for adaptive region of interest (ROI) selection, in accordance with one or more embodiments of the present disclosure.

Fig. 6B illustrates a system for adaptive region of interest (ROI) selection, in accordance with one or more embodiments of the present disclosure.

Fig. 7A illustrates a training image used to train a machine learning classifier in accordance with one or more embodiments of the present disclosure.

Fig. 7B illustrates a training image including a training region of interest (ROI) selection in accordance with one or more embodiments of the present disclosure.

Fig. 8A illustrates a product image in accordance with one or more embodiments of the present disclosure.

Fig. 8B illustrates a product image including a classified region of interest (ROI), in accordance with one or more embodiments of the present disclosure.

Fig. 8C illustrates a product image including product region of interest (ROI) selection and classified regions of interest, in accordance with one or more embodiments of the present disclosure.

Fig. 9A illustrates a product image in accordance with one or more embodiments of the present disclosure.

Fig. 9B illustrates a product image including a classified region of interest (ROI), in accordance with one or more embodiments of the present disclosure.

Fig. 9C illustrates a product image including angular classified regions of interest (ROIs), in accordance with one or more embodiments of the present disclosure.

Fig. 10 illustrates a flow diagram of a method for adaptive region of interest (ROI) selection, in accordance with one or more embodiments of the present disclosure.

Detailed Description

The present invention has been particularly shown and described with respect to certain embodiments and specific features thereof. The embodiments set forth herein are considered to be illustrative and not restrictive. It will be readily apparent to persons skilled in the relevant art that various changes and modifications in form and detail can be made therein without departing from the spirit and scope of the invention.

Reference will now be made in detail to the disclosed subject matter as illustrated in the accompanying drawings.

It is noted herein that monitoring features of interest, including Critical Dimension Uniformity (CDU), is an important step in monitoring process variations during semiconductor fabrication. Traditionally, monitoring features of interest (e.g., CDU) is based on conventional image processing procedures and involves the following steps: (1) defining a pattern of interest (POI); (2) defining a region of interest (ROI) within which measurements (e.g., CDU measurements) are to be made relative to the POI; (3) defining which measurement (e.g., CDU measurement, pattern width, contact, and the like) is to be made; (4) detecting an edge of each ROI; and (5) performing the measurement. However, since current techniques involve aligning the POI with a Scanning Electron Microscope (SEM) image and placing the ROI based on the POI location, the accuracy of ROI placement depends on SEM-to-SEM alignment, which may be unreliable. Furthermore, alignment accuracy is typically low since the POI structure sizes defined within each image may vary widely. Due to this misalignment, the ROI may be misplaced and thereby fail to include the entire area required for a particular measurement of interest.
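The alignment-dependent placement in the conventional flow can be sketched as template matching followed by fixed-offset ROI placement. This is a simplified stand-in, not any tool's actual procedure; the brute-force normalized cross-correlation search and the fixed-offset convention are assumptions made for illustration.

```python
import numpy as np

def align_poi(image, template):
    """Locate the POI template in an SEM-like image by exhaustive
    normalized cross-correlation; returns the best (row, col) match."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

def place_roi(poi_rc, roi_offset, roi_size):
    """Fixed ROI placement relative to the matched POI -- the step that
    fails silently when the printed structure has shifted or grown."""
    return (poi_rc[0] + roi_offset[0], poi_rc[1] + roi_offset[1], *roi_size)
```

Because `place_roi` applies a constant offset, any error in the match position, or any structural change inside the ROI that the match cannot see, propagates directly into the ROI placement.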

In addition, current ROI placement techniques based on conventional image processing procedures cannot correct for process variations that can affect alignment accuracy. Thus, POI alignment, and in turn ROI alignment, within SEM images may fail due to structural variations within the sample itself. For example, target structure size variations may cause POI and ROI alignment failures, thereby hindering efficient monitoring of measurements of interest.

Accordingly, embodiments of the present invention address one or more of the shortcomings of the previous methods identified above. Embodiments of the present invention are directed to a system and method for generating an adaptive region of interest (ROI) using machine learning techniques. More particularly, embodiments of the present invention are directed to generating adaptive ROIs using machine learning techniques to more effectively monitor features of interest.

Various deficiencies of previous approaches based on conventional image processing procedures and the importance of embodiments of the present invention may be further understood with reference to fig. 1-4B. It is contemplated herein that a brief discussion of conventional methods may be used as a basis against which the advantages of the present invention may be compared.

FIG. 1 illustrates pattern of interest (POI) and region of interest (ROI) alignment on a sample.

As previously mentioned herein, in a first step of conventional feature of interest monitoring using conventional image processing procedures, POIs 102 are defined/selected on a sample's control image 100, as can be seen in fig. 1. The POI 102 may be drawn on any control image 100 including design images, optical images, SEM images, and the like of the sample. The POI 102 defines the area of the sample within which measurements are to be made and serves as an anchor point for placement of the ROI 104. The POI 102 may comprise a unique pattern, a unit of repeating structure, or the like. After POI 102 selection, an ROI 104 is selected on the control image 100 of the sample within the area defined by the POI 102. The ROI 104 defines the region of the sample within which measurements are to be taken. In practice, the POI 102 and ROI 104 selections shown in fig. 1 may be implemented on a design image (e.g., control image 100) of the sample.

After POI 102 and ROI 104 selection, a product image of the sample is taken, and the POI 102 defined in the first step is identified and aligned in the product image. The product image taken in the second step is a different image from the control image 100 in which the POI 102 and ROI 104 are defined, and may comprise an image of a product sample. The product image may include any image known in the art including, but not limited to, optical images, SEM images, and the like. After the POIs 102 are aligned in the product image, the ROI 104 is placed within the product image according to the placement of the POIs 102. In this regard, the alignment accuracy of the POI 102 may directly affect the alignment accuracy of the ROI 104. Thus, the accuracy of ROI 104 placement depends on SEM-to-SEM alignment, which may be unreliable. Furthermore, since the POI 102 structure sizes defined within each image may vary widely, alignment accuracy is typically low, thereby causing ROI 104 misplacement.

After the POI 102 and ROI 104 in the product image are aligned, a measurement type may be defined. This can be further understood with reference to fig. 2.

Fig. 2 illustrates a pattern of interest (POI 102) including a target site. The target site to be measured may include a measurement of interest defined as D4. Measurement D4 may include a Critical Dimension (CD) measurement.
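A CD measurement such as D4 is commonly taken from a 1-D intensity profile across the feature inside the ROI. The sketch below is a hedged illustration only; the threshold value and the crossing convention are assumptions, not the patent's defined method.

```python
import numpy as np

def cd_from_profile(profile, threshold=0.5):
    """Estimate a critical dimension (e.g., a line width like D4) as the
    distance between the first and last samples above a fixed threshold."""
    above = np.flatnonzero(profile > threshold)
    if above.size == 0:
        return 0  # no feature crosses the threshold inside the ROI
    return int(above[-1] - above[0] + 1)
```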

It should be noted here that conventional ROI placement techniques using conventional image processing procedures can suffer from alignment errors attributable to process variations during the sample fabrication process. As design rules continue to shrink, even small process variations can result in large structural variations of the sample. This in turn can lead to alignment inaccuracies and alignment failures, thereby causing inaccurate placement of the ROI within the image. These process variations and the resulting alignment inaccuracies are particularly problematic during the rapid development cycles of semiconductor manufacturing. During rapid development, the shape, size, orientation, and the like of structures may vary widely. This in turn can lead to inaccurate alignment of POI/ROI placement between the control image and the product image.

Fig. 3A-3B illustrate alignment errors between a region of interest (ROI 104b) of the product image 101 and a region of interest (ROI 104a) of the control image 100.

Using conventional POI/ROI placement techniques based on conventional image processing procedures, a user may desire to perform one or more measurements on a product sample within the left "lobe" illustrated in fig. 3A, and may thereby define the left lobe as the target site in question. In this regard, the target site may include one or more "measurements of interest," which may include any parameter that may be measured, including but not limited to Critical Dimension (CD) measurements. Using conventional techniques, a user may define an ROI 104a within the control image 100, where the ROI 104a is located within the POI 102a and includes a target site that includes one or more measurements of interest. Subsequently, a product image 101 may be taken, as shown in fig. 3B.

As shown in fig. 3B, one or more process variations in a layer including a target site may result in an enlarged target site (e.g., an enlarged left lobe). This structural variation between the target site of the product image 101 and the target site of the control image 100 can result in alignment inaccuracies and incorrect placement of the ROI 104b. For example, the POI 102b of the product image 101 may be aligned with the POI 102a of the control image 100, and the ROI 104b of the product image 101 may be placed according to the placement of the POI 102b within the product image 101. As can be seen in fig. 3B, the placement of ROI 104b may be inaccurate because it fails to encompass the target site (e.g., the left lobe). Because the ROI 104b does not include the entire target site, the desired measurements of interest within the target site may not be acquired. Thus, conventional image processing procedures and alignment techniques cannot account for process variations that result in structural variations (e.g., an enlarged left lobe).
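The failure mode above, and the kind of correction a classified ROI provides, can be illustrated with a simple coverage check. The bounding-box expansion below is only a stand-in for the machine-learning-driven adaptation; the function names, box convention, and padding parameter are assumptions made for the example.

```python
import numpy as np

def roi_covers_feature(roi, feature_mask):
    """Return True when every feature pixel falls inside the placed ROI --
    the condition that fails in fig. 3B when the left lobe grows."""
    r, c, h, w = roi
    rows, cols = np.where(feature_mask)
    return (rows.min() >= r and rows.max() < r + h and
            cols.min() >= c and cols.max() < c + w)

def expand_roi(roi, feature_mask, pad=1):
    """Adaptive fallback: grow the ROI to the union of its original box and
    the feature's padded bounding box, emulating what a classified ROI
    achieves automatically."""
    r, c, h, w = roi
    rows, cols = np.where(feature_mask)
    r0 = min(r, rows.min() - pad)
    c0 = min(c, cols.min() - pad)
    r1 = max(r + h, rows.max() + 1 + pad)
    c1 = max(c + w, cols.max() + 1 + pad)
    return (int(r0), int(c0), int(r1 - r0), int(c1 - c0))
```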

Fig. 4A-4B illustrate additional examples of alignment errors between a region of interest (ROI 104b) of the product image 101 and a region of interest (ROI 104a) of the control image 100.

Similar to the previous example, the user may desire to perform one or more measurements on a product sample within the left "lobe" illustrated in fig. 4A, and may thereby define the left lobe as the target site in question. Using conventional techniques, a user may define an ROI 104a within a control image 100, where the ROI 104a is located within the POI 102a and includes a target site. Subsequently, a product image 101 may be taken, as shown in fig. 4B.

As shown in fig. 4B, one or more process variations in the layer including the target site may result in a thinned and/or displaced target site (e.g., the left lobe). This structural variation between the target site of the product image 101 and the target site of the control image 100 can result in alignment inaccuracies and incorrect placement of the ROI 104b. For example, the POI 102b of the product image 101 may be aligned with the POI 102a of the control image 100, and the ROI 104b of the product image 101 may be placed according to the placement of the POI 102b within the product image 101. As can be seen in fig. 4B, the placement of ROI 104b may be inaccurate because it fails to encompass the target site (e.g., the left lobe) that includes the measurement of interest. Thus, conventional image processing procedures that rely on alignment techniques may not accurately place the ROI 104b in the product image 101. This may result in the inability to perform the desired measurement of the target site.

As previously mentioned herein, embodiments of the present invention are directed to a system and method for generating an adaptive region of interest (ROI) using machine learning techniques. More particularly, embodiments of the present invention are directed to generating adaptive ROIs using machine learning techniques to more effectively monitor features of interest. It is contemplated herein that embodiments of the present invention may allow for accurate placement of ROIs despite process and/or structural variations.

Fig. 5 illustrates a system 200 for adaptive region of interest (ROI) selection, in accordance with one or more embodiments of the present disclosure. System 200 may include, but is not limited to, one or more characterization subsystems 202. Additionally, the system 200 may include, but is not limited to, a controller 204 (which includes one or more processors 206 and memory 208) and a user interface 210.

The characterization subsystem 202 may include any characterization subsystem 202 known in the art including, but not limited to, optical-based characterization systems, charged particle-based characterization systems, and the like. For example, characterization subsystem 202 may include a Scanning Electron Microscope (SEM) characterization system. In one embodiment, the controller 204 is communicatively coupled to one or more characterization subsystems 202. In this regard, the one or more processors 206 of the controller 204 may be configured to generate one or more control signals configured to adjust one or more characteristics of the characterization subsystem 202.

Fig. 6A illustrates a system 200 for adaptive region of interest (ROI) selection, in accordance with one or more embodiments of the present disclosure. In particular, FIG. 6A illustrates a system 200 that includes an optical characterization subsystem 202 a.

Optical characterization subsystem 202a may include any optical-based characterization system known in the art, including, but not limited to, an image-based metrology tool. For example, the characterization subsystem 202a may include an optical critical dimension metrology tool. The optical characterization subsystem 202a may include, but is not limited to, an illumination source 212, an illumination arm 211, a collection arm 213, and a detector assembly 226.

In one embodiment, optical characterization subsystem 202a is configured to inspect and/or measure a sample 220 disposed on a stage assembly 222. Illumination source 212 may include any illumination source known in the art for generating illumination 201, including, but not limited to, a broadband radiation source. In another embodiment, the optical characterization subsystem 202a may include an illumination arm 211 configured to direct illumination 201 to the sample 220. It should be noted that the illumination source 212 of the optical characterization subsystem 202a may be configured in any orientation known in the art, including, but not limited to, dark field orientations, bright field orientations, and the like.

The sample 220 may comprise any sample known in the art including, but not limited to, a wafer, a reticle, a photomask, and the like. In one embodiment, sample 220 is disposed on stage assembly 222 to facilitate movement of sample 220. In another embodiment, stage assembly 222 is an actuatable stage. For example, stage assembly 222 may include, but is not limited to, one or more translation stages adapted to selectively translate sample 220 along one or more linear directions (e.g., x-direction, y-direction, and/or z-direction). By way of another example, stage assembly 222 may include, but is not limited to, one or more rotating stages adapted to selectively rotate sample 220 in a rotational direction. By way of another example, the stage assembly 222 may include, but is not limited to, a rotation stage and a translation stage adapted to selectively translate the sample 220 in a linear direction and/or rotate the sample 220 in a rotational direction. It should be noted here that the system 200 may operate in any scanning mode known in the art.
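The combined rotation and translation of the stage assembly maps sample coordinates as a rigid transform. A minimal sketch, assuming rotation about the stage origin followed by translation (the composition order and 2-D restriction are assumptions made for the example):

```python
import numpy as np

def stage_transform(points, translation=(0.0, 0.0), angle_deg=0.0):
    """Map sample coordinates through a stage move: rotate about the
    stage origin, then translate -- the two motions the rotation and
    translation stages combine."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return points @ rot.T + np.asarray(translation)
```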

Illumination arm 211 may include any number and type of optical components known in the art. In one embodiment, illumination arm 211 includes one or more optical elements 214, a beam splitter 216, and an objective lens 218. In this regard, the illumination arm 211 may be configured to focus the illumination 201 from the illumination source 212 onto the surface of the sample 220. The one or more optical elements 214 may include any optical element known in the art including, but not limited to, one or more mirrors, one or more lenses, one or more polarizers, one or more beam splitters, and the like.

In another embodiment, the optical characterization subsystem 202a includes a collection arm 213 configured to collect illumination reflected or scattered from the sample 220. In another embodiment, the collection arm 213 may direct and/or focus the reflected and scattered light to one or more sensors of the detector assembly 226 via one or more optical elements 224. The one or more optical elements 224 may include any optical element known in the art including, but not limited to, one or more mirrors, one or more lenses, one or more polarizers, one or more beam splitters, and the like. It should be noted that detector assembly 226 may include any sensor and detector assembly known in the art for detecting illumination reflected or scattered from sample 220.

In another embodiment, detector assembly 226 of optical characterization subsystem 202a is configured to collect metrology data of sample 220 based on illumination reflected or scattered from sample 220. In another embodiment, the detector assembly 226 is configured to transmit the collected/acquired images and/or metrology data to the controller 204.

As previously mentioned herein, the controller 204 of the system 200 may include one or more processors 206 and memory 208. The memory 208 may include program instructions configured to cause the one or more processors 206 to perform the various steps of the present invention. In one embodiment, the program instructions are configured to cause the one or more processors 206 to adjust one or more characteristics of the optical characterization subsystem 202 to perform one or more measurements of the sample 220.

In additional and/or alternative embodiments, the characterization subsystem 202 may include a charged particle-based characterization subsystem. For example, the characterization subsystem 202 may include an SEM characterization subsystem, as illustrated in fig. 6B.

Fig. 6B illustrates a system 200 for adaptive region of interest (ROI) selection in accordance with one or more embodiments of the present disclosure. In particular, fig. 6B illustrates system 200 including SEM characterization subsystem 202B.

In one embodiment, SEM characterization subsystem 202b is configured to perform one or more measurements on sample 220. In this regard, the SEM characterization subsystem 202b may be configured to acquire one or more images of the sample 220. The SEM characterization subsystem 202b may include, but is not limited to, an electron beam source 228, one or more electron optical elements 230, one or more electron optical elements 232, and an electron detector assembly 234, the electron detector assembly 234 including one or more electron sensors 236.

In one embodiment, the electron beam source 228 is configured to direct one or more electron beams 229 to the sample 220. The electron beam source 228 may form an electron optical column. In another embodiment, the electron beam source 228 includes one or more additional and/or alternative electron optical elements 230 configured to focus and/or direct one or more electron beams 229 to a surface of the sample 220. In another embodiment, the SEM characterization subsystem 202b includes one or more electron optical elements 232 configured to collect secondary and/or backscattered electrons 231 emanating from the surface of the sample 220 in response to the one or more electron beams 229. It should be noted herein that the one or more electron optical elements 230 and the one or more electron optical elements 232 may include any electron optical element configured to direct, focus, and/or collect electrons including, but not limited to, one or more deflectors, one or more electron optical lenses, one or more condenser lenses (e.g., magnetic condenser lenses), one or more objective lenses (e.g., magnetic objective lenses), and the like.

It should be noted that the electron-optical assembly of SEM characterization subsystem 202b is not limited to the electron-optical elements depicted in fig. 6B, which are provided for illustration only. It should be further noted that system 200 may include any number and type of electron-optical elements required to direct/focus one or more electron beams 229 onto sample 220 and responsively collect and image emitted secondary and/or backscattered electrons 231 onto electron detector assembly 234.

For example, the system 200 may include one or more electron beam scanning elements (not shown). For example, the one or more electron beam scanning elements may include, but are not limited to, one or more electromagnetic scanning coils or electrostatic deflectors adapted to control the position of the one or more electron beams 229 relative to the surface of the sample 220. In addition, one or more scanning elements may be used to cause the one or more electron beams 229 to scan the entire sample 220 in a selected pattern.

In another embodiment, the secondary and/or backscattered electrons 231 are directed to one or more sensors 236 of the electron detector assembly 234. The electron detector assembly 234 of the SEM characterization subsystem 202 may include any electron detector assembly known in the art suitable for detecting backscattered and/or secondary electrons 231 emitted from the surface of the sample 220. In one embodiment, the electron detector assembly 234 includes an array of electron detectors. In this regard, the electron detector assembly 234 may include an array of electron detection portions. Further, each electron detection portion of the detector array of electron detector assembly 234 may be positioned to detect an electron signal associated with one of the one or more incident electron beams 229 from sample 220. In this aspect, each channel of the electron detector assembly 234 may correspond to an electron beam 229 of the one or more electron beams 229. The electron detector assembly 234 may include any type of electron detector known in the art. For example, the electron detector assembly 234 may include a microchannel plate (MCP), a PIN or p-n junction detector array such as, but not limited to, a diode array or avalanche photodiodes (APDs). By way of another example, the electron detector assembly 234 may include a high-speed scintillator/PMT detector.

Although fig. 6B illustrates SEM characterization subsystem 202b as including an electron detector assembly 234 comprising only a secondary electron detector assembly, this should not be viewed as a limitation of the present invention. In this regard, it should be noted that electron detector assembly 234 may include, but is not limited to, a secondary electron detector, a backscattered electron detector, and/or a primary electron detector (e.g., an in-column electron detector). In another embodiment, the SEM characterization subsystem 202 may include a plurality of electron detector assemblies 234. For example, system 200 may include a secondary electron detector assembly 234a, a backscattered electron detector assembly 234b, and an in-column electron detector assembly 234c.

In one embodiment, the one or more processors 206 are configured to analyze the output of the detector assembly 226/electron detector assembly 234. In one embodiment, the set of program instructions is configured to cause the one or more processors 206 to analyze one or more characteristics of the sample 220 based on images received from the detector assembly 226/electron detector assembly 234. In another embodiment, the set of program instructions is configured to cause the one or more processors 206 to modify one or more characteristics of the system 200 to maintain focus on the sample 220 and/or the detector assembly 226/electron detector assembly 234. For example, the one or more processors 206 may be configured to adjust one or more characteristics of the illumination source 212/electron beam source 228 and/or other elements of the system 200 to focus the illumination 201 and/or the one or more electron beams 229 onto the surface of the sample 220. By way of another example, the one or more processors 206 may be configured to adjust one or more elements of the system 200 to collect illumination and/or secondary electrons 231 from the surface of the sample 220 and focus the collected illumination on the detector assembly 226/electron detector assembly 234. By way of another example, the one or more processors 206 may be configured to adjust one or more focus voltages applied to one or more electrostatic deflectors of the electron beam source 228 to independently adjust the position or alignment of the one or more electron beams 229 and scan the electron beams 229 across the sample 220.

In one embodiment, the one or more processors 206 are communicatively coupled to the memory 208, wherein the one or more processors 206 are configured to execute a set of program instructions stored on the memory 208 that are configured to cause the one or more processors 206 to implement the various functions and steps of the present invention.

In another embodiment, as shown in fig. 5 and 6A-6B, the system 200 includes a user interface 210 communicatively coupled to the controller 204. In another embodiment, the user interface 210 includes a user input device and a display. The user input device of the user interface 210 may be configured to receive one or more input commands from a user, the one or more input commands configured to input data into the system 200 and/or adjust one or more characteristics of the system 200. For example, as will be described in further detail herein, a user input device of the user interface 210 may be configured to receive one or more POI and/or ROI selections from a user. In another embodiment, the display of the user interface 210 may be configured to display data of the system 200 to a user.

As previously mentioned herein, the one or more processors 206 are communicatively coupled to the memory 208, wherein the one or more processors 206 are configured to execute a set of program instructions stored on the memory 208 that are configured to cause the one or more processors 206 to implement the various functions and steps of the present invention. In this regard, the controller 204 may be configured to: receiving one or more training images of the sample 220 from the characterization subsystem 202; receiving one or more training region of interest (ROI) selections within one or more training images; generating a machine learning classifier based on the one or more training images and the one or more training ROI selections; receiving one or more product images of the sample 220 from the characterization subsystem 202; generating one or more classified regions of interest using a machine learning classifier; and determining one or more measurements of the sample 220 within the one or more classified regions of interest. Each of these steps will be described in turn.

In one embodiment, the controller 204 of the system 200 is configured to receive one or more training images 225 of the sample 220 from the characterization subsystem 202. For purposes of the present invention, the term "training image" may be viewed as an image to be used as input to train a machine learning classifier. Fig. 7A illustrates a training image 225 for training a machine learning classifier in accordance with one or more embodiments of the present disclosure. For example, as shown in fig. 6A, the controller 204 may be configured to receive one or more optical training images 225 of the sample 220 from the optical characterization subsystem 202a. By way of another example, as shown in fig. 6B, the controller 204 may be configured to receive one or more SEM training images 225 of the sample 220 from the SEM characterization subsystem 202b. In this regard, the training image 225 depicted in fig. 7A may include an optical training image 225, an SEM training image 225, and the like. In additional and/or alternative embodiments, the controller 204 may be configured to receive one or more training images 225 from a source other than the one or more characterization subsystems 202. For example, the controller 204 may be configured to receive one or more training images 225 of the sample 220 from an external storage device. In another embodiment, the controller 204 may be further configured to store the received training images 225 in the memory 208.

In another embodiment, the controller 204 is configured to receive one or more training region of interest (ROI) selections within the one or more training images 225. Fig. 7B illustrates a training image 225 including a training ROI selection 302. In one embodiment, the one or more received training ROI selections 302 may include one or more measurements of interest. For example, as shown in fig. 7B, the training ROI selection 302 may include a first measurement of interest (MOI 304a) indicative of a length of the left lobe and a second measurement of interest (MOI 304b) indicative of a height of the left lobe. These measurements of interest (MOI 304a, MOI 304b) may include critical dimensions, which it may be desirable to monitor throughout the fabrication process to ensure Critical Dimension Uniformity (CDU). The measurement of interest (MOI 304) within the one or more training ROI selections 302 may include any feature that may be measured on a pattern, structure, or the like.

The one or more training ROI selections 302 may be received using any technique known in the art. For example, the program instructions stored in the memory 208 may be configured to automatically select one or more training ROI selections 302. By way of another example, one or more training ROI selections 302 may be received via the user interface 210. For example, a display device of the user interface 210 may display one or more training images 225 to a user. Then, the user may input one or more input commands indicative of the one or more training ROI selections 302 via a user input device of the user interface 210. In this regard, in some embodiments, the user may manually draw/select one or more training ROI selections 302 within the training image 225 via the user interface 210. In another embodiment, the controller 204 is configured to store the one or more training ROI selections 302 in the memory 208.
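
By way of a non-limiting illustration, a training ROI selection 302 of the kind described above may be represented as an axis-aligned bounding box labeled with its measurements of interest, paired with the training image 225 on which it was drawn. The following sketch (all names are hypothetical and not drawn from the embodiments above) shows one minimal representation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ROISelection:
    """Axis-aligned training ROI: (x, y) of the top-left corner plus width/height, in pixels."""
    x: int
    y: int
    width: int
    height: int
    # Labels for the measurements of interest this ROI covers, e.g. ("left_lobe_length",).
    mois: Tuple[str, ...] = ()

    def contains(self, px: float, py: float) -> bool:
        """True if the point lies inside the ROI (used to check MOI coverage)."""
        return self.x <= px <= self.x + self.width and self.y <= py <= self.y + self.height

@dataclass
class TrainingRecord:
    """Pairs one training image (here a 2-D list of gray levels) with its user-drawn ROI selections."""
    image: List[List[int]]
    roi_selections: List[ROISelection] = field(default_factory=list)
```

A real implementation would store pixel data acquired from the detector assembly 226/electron detector assembly 234; plain nested lists are used here only to keep the sketch self-contained.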

In another embodiment, the controller 204 is configured to generate a machine learning classifier based on the one or more training images 225 and the one or more training ROI selections 302. The machine learning classifier may include any type of machine learning algorithm/classifier and/or deep learning technique or classifier known in the art including, but not limited to, Convolutional Neural Networks (CNNs) such as GoogLeNet, AlexNet, and the like, ensemble learning classifiers, random forest classifiers, Artificial Neural Networks (ANNs), and the like.

Training the machine learning classifier may include teaching the machine learning classifier to identify one or more measurements of interest (MOIs 304a, 304b) and/or features of interest to be measured based on the received training images 225 and the training ROI selections 302. As used herein, the term "measurement of interest" (MOI 304a, 304b) may be considered to refer to any measurement that may be desired to be performed on the sample 220. In this regard, the machine learning classifier may be trained/generated such that it is configured to identify the first measurement of interest (MOI 304a) and/or the second measurement of interest (MOI 304b) based on the received training images 225 and the received training ROI selections 302.

The controller 204 may be configured to generate machine learning classifiers via supervised learning and/or unsupervised learning. It should be noted herein that a machine learning classifier may include any algorithm or predictive model configured to predict and/or identify one or more measurements of interest.
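
By way of a non-limiting illustration, the supervised-learning step can be sketched with a deliberately simple stand-in for a CNN: the "classifier" below learns only the average offset from a detected structure centroid to the ROI center, plus the average ROI size, from (training image, training ROI) pairs, and places a classified ROI accordingly at inference time. All function names are hypothetical, and an actual system would use a deep network as described above rather than this centroid heuristic:

```python
from statistics import mean

def centroid(image, threshold=128):
    """Centroid (x, y) of all pixels at or above `threshold` — a crude stand-in
    for the feature extraction a trained CNN would perform."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                xs.append(x)
                ys.append(y)
    return mean(xs), mean(ys)

def fit_roi_placer(training_images, training_rois):
    """'Train' a placer: learn the average offset from the structure centroid
    to the ROI center, plus the average ROI size, from labeled pairs."""
    dxs, dys, ws, hs = [], [], [], []
    for img, (x, y, w, h) in zip(training_images, training_rois):
        cx, cy = centroid(img)
        dxs.append(x + w / 2 - cx)
        dys.append(y + h / 2 - cy)
        ws.append(w)
        hs.append(h)
    return mean(dxs), mean(dys), mean(ws), mean(hs)

def place_roi(model, product_image):
    """Inference: detect the structure centroid in a product image and place
    the classified ROI at the learned offset and size."""
    dx, dy, w, h = model
    cx, cy = centroid(product_image)
    return (cx + dx - w / 2, cy + dy - h / 2, w, h)
```

Because placement is derived from the detected structure rather than a fixed image coordinate, the resulting ROI tracks the structure even when it shifts between wafers, which is the essential behavior the learned classifier provides.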

In another embodiment, the controller 204 may be configured to receive one or more product images 235 of the sample 220 from the characterization subsystem 202. Fig. 8A illustrates a product image 235, in accordance with one or more embodiments of the present disclosure. As shown in fig. 6A, the controller 204 may be configured to receive one or more optical product images 235 of the sample 220 from the optical characterization subsystem 202a. By way of another example, as shown in fig. 6B, the controller 204 may be configured to receive one or more SEM product images 235 of the sample 220 from the SEM characterization subsystem 202b. In this regard, the product image 235 depicted in fig. 8A may include an optical product image 235, an SEM product image 235, and the like. In additional and/or alternative embodiments, the controller 204 may be configured to receive one or more product images 235 from a source other than the one or more characterization subsystems 202. For example, the controller 204 may be configured to receive one or more product images 235 of the sample 220 from an external storage device. In another embodiment, the controller 204 may be further configured to store the received product image 235 in the memory 208.

The term "product image" is used herein to describe an image of a sample 220 that includes one or more measurements of interest (MOI 304). In this aspect, the one or more product images 235 may include one or more images of a product wafer (e.g., product sample 220) to be monitored by measuring one or more measurements of interest (MOI 304). This may be implemented to ensure Critical Dimension Uniformity (CDU), as previously described herein.

In another embodiment, the controller 204 is configured to generate one or more classified regions of interest using a machine learning classifier. For example, fig. 8B illustrates a product image 235 including a classified ROI 306, in accordance with one or more embodiments of the present disclosure.

In one embodiment, the controller 204 is configured to generate one or more classified ROIs 306 within the one or more product images 235 using a machine learning classifier. In another embodiment, the machine learning classifier may be configured to generate one or more classified ROIs 306 within the product image 235 such that the classified ROIs 306 include one or more identified measurements of interest (MOIs 304a, 304b). For example, as shown in fig. 8B, the machine learning classifier may be configured to generate the classified ROI 306 such that the classified ROI 306 contains the first identified measurement of interest (MOI 304a) and/or the second identified measurement of interest (MOI 304b).

It is contemplated herein that generating an ROI (e.g., classified ROI 306) based on a machine learning algorithm may increase the likelihood that the ROI will be correctly placed such that it includes the expected measure of interest. It is further contemplated herein that generating the classified ROI 306 via a machine learning algorithm may provide numerous advantages over previous methods that placed the ROI (e.g., ROI 104B in fig. 3B and 4B) based on conventional image processing alignment procedures. This may be illustrated by comparing the placement of the ROI 104B in fig. 3B via a conventional image processing alignment procedure with the placement of the classified ROI 306 in fig. 8B via a machine learning classifier. As shown in fig. 3B, conventional image processing techniques may fail to account for process and structural variations, which may then lead to misplacement of the ROI 104B and failure to perform the desired measurements. In comparison, as shown in fig. 8B, it is contemplated herein that the machine learning classifier may be configured to identify the measure of interest (MOI 304a, 304b) such that the machine learning classifier may generate an adaptive classified ROI 306 that may be more accurately placed to include the identified measure of interest (MOI 304a, 304b). In particular, the characteristics (e.g., shape, size, orientation) of the classified ROI 306 generated by the machine-learned classifier may be modified according to the characteristics (e.g., shape, size, orientation) of the relevant structure (e.g., left lobe) of the sample 220. In this regard, by generating an adaptive classified ROI 306 capable of varying in size, shape, orientation, and the like, embodiments of the present disclosure may provide more accurate and reliable ROI placement.
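
The placement improvement discussed above can also be quantified. One common metric (assumed here for illustration; the embodiments above do not prescribe it) is intersection-over-union between a placed ROI and the ground-truth region containing the measurements of interest:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned ROIs given as (x, y, w, h).
    1.0 means perfect placement; 0.0 means the ROI missed the target entirely."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Overlap extents along each axis, clamped at zero for disjoint boxes.
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```

Comparing the IoU of a conventionally aligned ROI against that of an adaptively placed classified ROI, over a set of product images with known MOI locations, gives a direct numeric measure of the advantage described in this passage.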

In another embodiment, the controller is configured to generate one or more classified ROIs 306 by adaptively modifying one or more characteristics of one or more product ROI selections using a machine learning classifier. In this aspect, using a machine learning classifier to generate one or more classified ROIs 306 may include: receiving one or more product ROI selections 305 within one or more product images 235; and using a machine-learned classifier to adaptively modify one or more characteristics of one or more product ROI selections 305 to generate one or more classified ROIs 306. This may be further understood with reference to fig. 8C.

Fig. 8C illustrates a product image 235 including a product ROI selection 305 and a classified ROI 306, in accordance with one or more embodiments of the present disclosure. In this example, the controller 204 may receive a product ROI selection 305 indicating a region of the product image 235. For example, the user may enter the product ROI selection 305 via the user interface 210. Continuing with the same example, the controller 204 may be configured to adaptively modify one or more characteristics of the product ROI selection 305 using a machine-learned classifier to generate a classified ROI 306. Characteristics of the product ROI selection 305 that may be adaptively modified by the machine-learned classifier to generate the classified ROI 306 may include, but are not limited to, size, shape, orientation, and the like.

It is contemplated herein that generating a classified ROI 306 by modifying the received product ROI selection 305 may allow the machine learning classifier to act as a correction tool that is activated on demand. For example, in some embodiments, the machine learning classifier may generate the classified ROI 306 by adaptively modifying the product ROI selection 305 only when the received product ROI selection 305 is incorrectly placed (e.g., placed such that it fails to include one or more MOIs 304a, 304b), as shown in fig. 8C.

In one embodiment, the machine learning classifier may adaptively modify one or more characteristics of the one or more product ROI selections 305 based on one or more characteristics of structures within the one or more product images 235. For example, as shown in fig. 8C, the machine learning classifier may adaptively modify the product ROI selection 305 based on the structural variation of the left lobe. In another embodiment, the machine learning classifier may adaptively modify one or more characteristics of one or more product ROI selections 305 in response to one or more process variations. In this regard, the machine learning classifier may adaptively modify the product ROI selection 305 to correct for one or more process variations.
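
A minimal sketch of the adaptive-modification behavior described above, assuming ROIs are axis-aligned (x, y, width, height) tuples and assuming the structure can be located by simple thresholding (a stand-in for the learned model; all names are hypothetical): the product ROI selection 305 is grown and/or shifted so that it covers the detected structure plus a margin.

```python
def structure_bbox(image, threshold=128):
    """Bounding box (x, y, w, h) of all pixels at or above `threshold`."""
    xs = [x for row in image for x, v in enumerate(row) if v >= threshold]
    ys = [y for y, row in enumerate(image) for v in row if v >= threshold]
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

def adapt_roi(product_roi, image, margin=1, threshold=128):
    """Adaptively modify a product ROI selection so that it covers the detected
    structure: position and size may change (a full classifier could also
    change orientation, per the angular ROI discussion)."""
    sx, sy, sw, sh = structure_bbox(image, threshold)
    x, y, w, h = product_roi
    new_x = min(x, sx - margin)
    new_y = min(y, sy - margin)
    new_right = max(x + w, sx + sw + margin)
    new_bottom = max(y + h, sy + sh + margin)
    return new_x, new_y, new_right - new_x, new_bottom - new_y
```

The key design point this illustrates is that the modification is driven by what is actually in the product image, so structural and process variations move the ROI rather than defeating it.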

Similarly, in another embodiment, generation of the classified ROI 306 may be assisted by receiving one or more product POI selections (not shown). For example, similar to conventional approaches, the controller 204 may receive product POI selections within the product image 235 and then generate one or more classified ROIs 306 based at least in part on the one or more received POI selections.

Fig. 9A illustrates a product image 235 in accordance with one or more embodiments of the present disclosure. Fig. 9B illustrates a product image 235 including a classified region of interest (ROI) 306, in accordance with one or more embodiments of the present disclosure. In one embodiment, the machine learning classifier may generate one or more classified ROIs 306 such that the one or more classified ROIs 306 include the one or more identified MOIs 304a, 304b. Comparing the placement of the ROI 104B in fig. 4B with the placement of the classified ROI 306 in fig. 9B, it can be seen that ROI placement via a machine learning classifier may provide improved ROI placement compared to conventional image processing alignment procedures. Accordingly, it is contemplated herein that embodiments of the present disclosure may provide more accurate and reliable ROI placement that is less susceptible to and/or unaffected by structure/process variations. In particular, it is contemplated herein that the enhanced adaptivity of the classified ROI 306 is particularly beneficial in the context of rapid advances in semiconductor manufacturing.

Fig. 9C illustrates a product image 235 including an angular classified region of interest (ROI)306, in accordance with one or more embodiments of the present disclosure.

In one embodiment, as shown in fig. 9C, the machine learning classifier may be configured to generate one or more angular classified ROIs 306. The term "angular" may be used herein to describe a classified ROI 306 oriented at an offset angle 307 (defined by θ) relative to a particular frame of reference or datum. For example, the angular classified ROI 306 may be rotated relative to the product ROI selection 305 such that the angular classified ROI 306 is disposed at an offset angle 307 relative to the product ROI selection 305. By way of another example, the angular classified ROI 306 may be rotated such that the angular classified ROI 306 is disposed at an offset angle 307 relative to an edge or boundary of the product image 235, as shown in fig. 9C.
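
An angular classified ROI 306 can be represented by its center, size, and offset angle 307. The sketch below (a hypothetical helper, not drawn from the embodiments above) computes the rotated corner coordinates of such an ROI, which is exactly the geometric operation that axis-aligned ROI handling lacks:

```python
import math

def angular_roi_corners(cx, cy, w, h, theta_deg):
    """Corner coordinates of an angular ROI: a w×h rectangle centered at
    (cx, cy) and rotated by the offset angle theta (degrees, counterclockwise
    in standard x-y coordinates)."""
    t = math.radians(theta_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    corners = []
    # Corner offsets from the center, then a standard 2-D rotation of each.
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)):
        corners.append((cx + dx * cos_t - dy * sin_t,
                        cy + dx * sin_t + dy * cos_t))
    return corners
```

With theta_deg = 0 this reduces to the ordinary axis-aligned ROI, so the angular ROI is a strict generalization of the ROIs used elsewhere in this disclosure.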

It should be noted here that it may be extremely difficult, or even impossible, to generate the angular ROI 104 using conventional image processing procedures. For example, when only a portion of a structure has been rotated (such as in fig. 9C), conventional image processing procedures may fail to generate and accurately align the angular ROI 104. In addition, even if an angular ROI 104 could potentially be generated by conventional image processing procedures, such procedures may be so computationally expensive as to be impractical and inefficient. Thus, it is contemplated herein that the ability to generate angular classified ROIs 306 using machine learning classifiers can provide more accurate ROI placement for varying structures and enable more complex interleaved critical dimension measurements.

In another embodiment, the controller 204 may be configured to determine one or more measurements of the sample 220 within the one or more classified ROIs 306. For example, as shown in fig. 8B, the controller 204 may be configured to measure a first critical dimension indicated by a first measurement of interest (MOI 304a) and a second critical dimension indicated by a second measurement of interest (MOI 304b). The one or more measurements made within the one or more classified ROIs 306 may include any measurement known in the art, including, but not limited to, Critical Dimension (CD) measurements.
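
A minimal sketch of a critical dimension measurement restricted to a classified ROI 306, assuming the CD of interest is a feature width measurable along one image row by thresholding (actual CD-SEM algorithms use sub-pixel edge detection; the names here are hypothetical):

```python
def critical_dimension(image, roi, row, threshold=128):
    """Measure a CD (feature width, in pixels) along one image row restricted
    to a classified ROI (x, y, w, h): the span between the first and last
    above-threshold pixel within the ROI on that row."""
    x, y, w, h = roi
    assert y <= row < y + h, "measurement row must lie inside the ROI"
    cols = [cx for cx in range(x, x + w) if image[row][cx] >= threshold]
    if not cols:
        return 0  # No feature found on this row within the ROI.
    return cols[-1] - cols[0] + 1
```

Restricting the search to the ROI is what makes correct ROI placement critical: if the ROI misses the structure, the measurement returns nothing meaningful regardless of how good the edge detection is.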

It should be noted herein that one or more components of system 200 may be communicatively coupled to various other components of system 200 in any manner known in the art. For example, the one or more processors 206 may be communicatively coupled to each other and to other components via wired connections (e.g., copper wires, fiber optic cables, and the like) or wireless connections (e.g., RF coupling, IR coupling, WiMax, bluetooth, 3G, 4G LTE, 5G, and the like). By way of another example, the controller 204 may be communicatively coupled to one or more components of the characterization subsystem 202 via any wired or wireless connection known in the art.

In one embodiment, the one or more processors 206 may include any one or more processing elements known in the art. To this extent, the one or more processors 206 can include any microprocessor-type device configured to execute software algorithms and/or instructions. In one embodiment, the one or more processors 206 may be comprised of: a desktop computer, mainframe computer system, workstation, image computer, parallel processor, or other computer system (e.g., network computer) configured to execute programs configured to operate system 200, as described in this disclosure. It should be recognized that the steps described in this disclosure may be implemented by a single computer system or, alternatively, multiple computer systems. Further, it is recognized that the steps described in this disclosure may be implemented on any one or more of the one or more processors 206. In general, the term "processor" may be broadly defined to encompass any device having one or more processing elements that execute program instructions from memory 208. Moreover, different subsystems of the system 200 (e.g., the illumination source 212, the electron beam source 228, the detector assembly 226, the electron detector assembly 234, the controller 204, the user interface 210, and the like) may include processors or logic elements adapted to implement at least a portion of the steps described in this disclosure. Accordingly, the above description should not be construed as limiting the invention, but merely as illustrative.

The memory 208 may include any storage medium known in the art suitable for storing program instructions executable by the associated processor(s) 206 and data received from the characterization subsystem 202. For example, memory 208 may include a non-transitory memory medium. For example, memory 208 may include, but is not limited to, Read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical memory devices (e.g., disks), magnetic tape, solid state drives, and the like. It should be further noted that the memory 208 may be housed in a common controller housing with the one or more processors 206. In alternative embodiments, the memory 208 may be remotely located relative to the physical location of the processor 206, the controller 204, and the like. In another embodiment, the memory 208 holds program instructions for causing the one or more processors 206 to perform the various steps described in this disclosure.

In one embodiment, the user interface 210 is communicatively coupled to the controller 204. In one embodiment, the user interface 210 may include, but is not limited to, one or more desktop computers, tablet computers, smartphones, smartwatches, or the like. In another embodiment, the user interface 210 includes a display for displaying data of the system 200 to a user. The display of the user interface 210 may include any display known in the art. For example, the display may include, but is not limited to, a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) based display, or a CRT display. Those skilled in the art will recognize that any display device capable of being integrated with the user interface 210 is suitable for implementation in the present invention. In another embodiment, a user may input selections and/or instructions responsive to data displayed to the user via a user input device of the user interface 210.

Fig. 10 illustrates a flow diagram of a method 400 for adaptive region of interest (ROI) selection, in accordance with one or more embodiments of the present disclosure. It should be noted here that the steps of method 400 may be implemented in whole or in part by system 200. However, it should be further recognized that the method 400 is not limited to the system 200, as additional or alternative system-level embodiments may implement all or part of the steps of the method 400.

In step 402, one or more training images of the sample are acquired using the characterization subsystem. For example, as shown in fig. 6A-6B, the optical characterization subsystem 202a and/or the SEM characterization subsystem 202B may be configured to acquire one or more training images 225 of the sample 220 and transmit the one or more acquired training images 225 to the controller 204.

In step 404, one or more training ROI selections are received. For example, as shown in fig. 7A-7B, the controller 204 may receive one or more training ROI selections 302 within one or more training images 225. The one or more training ROI selections 302 may include one or more measurements of interest (MOIs 304a, 304b). The one or more training ROI selections 302 may be received using any technique known in the art. For example, the program instructions stored in the memory 208 may be configured to automatically select one or more training ROI selections 302. By way of another example, one or more training ROI selections 302 may be received via the user interface 210. For example, a display device of the user interface 210 may display one or more training images 225 to a user. Then, the user may input one or more input commands indicative of the one or more training ROI selections 302 via a user input device of the user interface 210.

In step 406, a machine learning classifier is generated based on the one or more training images and the one or more training ROI selections. Training the machine learning classifier may include teaching the machine learning classifier to identify one or more measurements of interest (MOIs 304a, 304b) and/or features of interest to be measured based on the received training images 225 and the training ROI selections 302. The machine learning classifier may include any type of machine learning algorithm/classifier and/or deep learning technique or classifier known in the art including, but not limited to, deep learning classifiers, Convolutional Neural Networks (CNNs) such as GoogLeNet, AlexNet, and the like, ensemble learning classifiers, random forest classifiers, Artificial Neural Networks (ANNs), and the like.

In step 408, one or more product images of the sample are acquired using the characterization subsystem. For example, as shown in fig. 6A-6B, the optical characterization subsystem 202a and/or the SEM characterization subsystem 202b may be configured to acquire one or more product images 235 of the sample 220 and transmit the one or more acquired product images 235 to the controller 204. The one or more product images 235 may include one or more images of a product wafer (e.g., product sample 220) to be monitored by measuring one or more measurements of interest (MOI 304). This may be implemented to ensure Critical Dimension Uniformity (CDU), as previously described herein.

In step 410, one or more classified ROIs are generated using the machine learning classifier. For example, as shown in fig. 8B and 9B, the machine learning classifier may be configured to generate the classified ROI 306 such that the classified ROI 306 includes the first identified measurement of interest (MOI 304a) and/or the second identified measurement of interest (MOI 304b).
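The disclosure does not specify how per-pixel classifier output is turned into a classified ROI 306. One plausible reduction, sketched here under that assumption, is to threshold a map of classifier scores over a product image 235 and take the bounding box of the qualifying pixels:

```python
def classified_roi(score_map, threshold=0.5):
    """Return the bounding box (x0, y0, x1, y1) enclosing all pixels
    whose classifier score exceeds the threshold; None if none qualify.
    A sketch of one way step 410 could derive a classified ROI 306."""
    xs, ys = [], []
    for y, row in enumerate(score_map):
        for x, s in enumerate(row):
            if s > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

# A toy 4x4 score map with a high-confidence region in the middle.
scores = [
    [0.1, 0.1, 0.2, 0.1],
    [0.1, 0.9, 0.8, 0.1],
    [0.1, 0.7, 0.9, 0.1],
    [0.2, 0.1, 0.1, 0.1],
]
print(classified_roi(scores))  # (1, 1, 3, 3)
```

A single bounding box suffices when one MOI is present; separating multiple classified ROIs (e.g., one each for MOI 304a and MOI 304b) would additionally require grouping the thresholded pixels into connected components.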

In step 412, one or more measurements of the sample are determined within the one or more classified regions of interest. For example, as shown in fig. 8B, the controller 204 may be configured to measure a first critical dimension indicated by a first measurement of interest (MOI 304a) and a second critical dimension indicated by a second measurement of interest (MOI 304b). The one or more measurements made within the one or more classified ROIs 306 may include any measurement known in the art, including, but not limited to, Critical Dimension (CD) measurements.
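The disclosure leaves the CD measurement algorithm open. As a simplified sketch only, a critical dimension within a classified ROI could be estimated from one intensity row by thresholding and converting the pixel span to physical units; the pixel scale below is an assumed example, and production tools interpolate edge positions to sub-pixel accuracy:

```python
def critical_dimension(row, threshold=0.5, nm_per_pixel=2.0):
    """Estimate the width of a bright feature in one intensity row
    crossing a classified ROI: find the outermost pixels above the
    threshold and convert the span to nanometers. A simplified
    stand-in for the CD measurement of step 412."""
    left = next(i for i, v in enumerate(row) if v > threshold)
    right = next(i for i in range(len(row) - 1, -1, -1) if row[i] > threshold)
    return (right - left + 1) * nm_per_pixel

# A line profile across a feature inside a classified ROI 306.
profile = [0.1, 0.2, 0.8, 0.9, 0.9, 0.7, 0.2, 0.1]
print(critical_dimension(profile))  # 8.0 (4 pixels at an assumed 2.0 nm/pixel)
```

Repeating such a measurement across many product images 235 yields the per-site CD values from which Critical Dimension Uniformity (CDU) is assessed.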

It will be recognized by one of ordinary skill in the art that the components (e.g., operations), devices, objects, and the discussion accompanying them described herein are used as examples for the sake of conceptual clarity, and that various configuration modifications are contemplated. Consequently, as used herein, the specific exemplars set forth and the accompanying discussion are intended to be representative of their more general classes. In general, use of any specific exemplar is intended to be representative of its class, and the non-inclusion of specific components (e.g., operations), devices, and objects should not be taken as limiting.

Those skilled in the art will appreciate that there are a variety of vehicles by which the processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the others, since any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary.

The previous description is presented to enable any person skilled in the art to make and use the invention, as provided in the context of a particular application and its requirements. As used herein, directional terms (e.g., "top," "bottom," "above," "below," "up," and "down") are intended to provide relative positions for purposes of description, and are not intended to designate an absolute frame of reference. Various modifications to the described embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. Thus, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

With respect to substantially any plural and/or singular terms used herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. For clarity, various singular/plural permutations are not set forth explicitly herein.

All methods described herein may include storing results of one or more steps of a method embodiment in memory. The results may include any of the results described herein and may be stored in any manner known in the art. The memory may comprise any memory described herein or any other suitable storage medium known in the art. After storing the results, the results may be accessed in memory and used by any of the method or system embodiments described herein, formatted for display to a user, used by another software module, method or system, and the like. Further, results may be stored "permanently," "semi-permanently," "temporarily," or in memory for a certain period of time. For example, the memory may be Random Access Memory (RAM), and the results do not necessarily persist indefinitely in the memory.

It is further contemplated that each embodiment of the above-described method may include any other step of any other method described herein. Additionally, each embodiment of the above-described method may be performed by any system described herein.

The subject matter described herein sometimes illustrates different components contained within, or connected with, other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "connected," or "coupled," to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "couplable," to each other to achieve the desired functionality. Specific examples of "couplable" include, but are not limited to, physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.

Furthermore, it is to be understood that the invention is defined by the appended claims. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including (but not limited to)", the term "having" should be interpreted as "having at least", etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to inventions containing only such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" or "an" should typically be interpreted to mean "at least one" or "one or more"); the foregoing also applies to the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations). 
Moreover, in examples where a convention analogous to "at least one of A, B, and C, and the like" is used, such a construction is generally intended in the sense commonly understood by those of skill in the art (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, and the like). In examples where a convention analogous to "at least one of A, B, or C, and the like" is used, such a construction is generally intended in the sense commonly understood by those of skill in the art (e.g., "a system having at least one of A, B, or C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, and the like). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the detailed description, the claims, or the drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B".

It is believed that the present invention and many of its attendant advantages will be understood by the foregoing description, and it will be apparent that various changes may be made in the form, construction and arrangement of the components without departing from the disclosed subject matter or without sacrificing all of its material advantages. The form described is merely illustrative and it is the intention of the appended claims to encompass and include such changes. Furthermore, it is to be understood that the invention is defined by the appended claims.
