Optical proximity correction method

Publication No. 1936222 · Published: 2021-12-07

Abstract: This technology, "Optical proximity correction method," was designed and created by 杜杳隽 and 李亮 on 2020-06-03. The application discloses an optical proximity correction method, which comprises the following steps: providing an original lithographic pattern and optical proximity correction features of the original lithographic pattern; training a first neural network and a second neural network based on the original lithographic pattern and the optical proximity correction features; providing a lithographic pattern to be corrected; alternately using the trained first neural network and the trained second neural network to perform optical proximity correction on the lithographic pattern to be corrected, so as to obtain an intermediate mask pattern; establishing an optical proximity correction model of the lithographic pattern to be corrected based on the intermediate mask pattern; and performing optical proximity correction on the lithographic pattern to be corrected based on the optical proximity correction model. The disclosed method can shorten the time required for optical proximity correction.

1. An optical proximity correction method, comprising:

providing an original lithographic pattern and optical proximity correction features of the original lithographic pattern;

training a first neural network and a second neural network based on the original lithographic pattern and the optical proximity correction features;

providing a lithographic pattern to be corrected;

alternately using the trained first neural network and the trained second neural network to perform optical proximity correction on the lithographic pattern to be corrected, so as to obtain an intermediate mask pattern;

establishing an optical proximity correction model of the lithographic pattern to be corrected based on the intermediate mask pattern; and

performing optical proximity correction on the lithographic pattern to be corrected based on the optical proximity correction model.

2. The method of claim 1, wherein the original lithographic pattern comprises one or more original main patterns, each of the one or more original main patterns comprising one or more features, the optical proximity correction features comprising one or more sub-resolution correction features and one or more edge correction features.

3. The method of claim 2, wherein training the first neural network and the second neural network based on the original lithographic pattern and the optical proximity correction features comprises:

training the first neural network based on parameters of the one or more features and parameters of one or more sub-resolution correction features corresponding to the one or more features; and

training the second neural network based on the parameters of the one or more features and parameters of one or more edge correction features corresponding to the one or more features.

4. The method of claim 3, wherein training the first neural network based on the parameters of the one or more features and the parameters of the one or more sub-resolution correction features corresponding to the one or more features comprises:

acquiring geometric parameters and optical parameters of each feature;

acquiring parameters of the sub-resolution correction feature corresponding to each feature; and

using the geometric parameters and the optical parameters of each feature as the input of the first neural network and the parameters of the sub-resolution correction feature corresponding to each feature as the target output of the first neural network, and training the first neural network until the difference between the actual output and the target output of the trained first neural network is less than a preset threshold.

5. The method of claim 3, wherein training the second neural network based on the parameters of the one or more features and the parameters of the one or more edge correction features corresponding to the one or more features comprises:

acquiring geometric parameters and optical parameters of each feature;

acquiring parameters of the edge correction feature corresponding to each feature; and

using the geometric parameters and the optical parameters of each feature as the input of the second neural network and the parameters of the edge correction feature corresponding to each feature as the target output of the second neural network, and training the second neural network until the difference between the actual output and the target output of the trained second neural network is less than a preset threshold.

6. The method of claim 1, wherein alternately using the trained first neural network and the trained second neural network to perform optical proximity correction on the lithographic pattern to be corrected comprises:

generating a first mask pattern based on the lithographic pattern to be corrected and the trained first neural network;

generating a second mask pattern based on the first mask pattern and the trained second neural network;

generating a third mask pattern based on the second mask pattern and the trained first neural network; and

generating an intermediate mask pattern based on the third mask pattern and the trained second neural network.

7. The method of claim 6, wherein the lithographic pattern to be corrected comprises one or more first main patterns, each of the one or more first main patterns comprising one or more first features, and wherein generating a first mask pattern based on the lithographic pattern to be corrected and the trained first neural network comprises:

acquiring geometric parameters and optical parameters of each first feature;

inputting the geometric parameters and the optical parameters of each first feature into the trained first neural network to calculate parameters of a first sub-resolution correction feature corresponding to each first feature; and

after calculating the parameters of all the first sub-resolution correction features, inserting one or more first sub-resolution auxiliary patterns into the lithographic pattern to be corrected based on the parameters of all the first sub-resolution correction features, wherein the one or more first main patterns and the one or more first sub-resolution auxiliary patterns form the first mask pattern.

8. The method of claim 7, wherein generating a second mask pattern based on the first mask pattern and the trained second neural network comprises:

acquiring geometric parameters and optical parameters of each first feature;

inputting the geometric parameters and the optical parameters of each first feature into the trained second neural network to calculate parameters of a first edge correction feature corresponding to each first feature; and

after calculating the parameters of all the first edge correction features, correcting the pattern boundaries of the one or more first main patterns based on the parameters of all the first edge correction features to form one or more second main patterns, wherein the one or more second main patterns and the one or more first sub-resolution auxiliary patterns form the second mask pattern.

9. The method of claim 8, wherein each of the one or more second main patterns includes one or more second features, and wherein generating a third mask pattern based on the second mask pattern and the trained first neural network comprises:

acquiring geometric parameters and optical parameters of each second feature;

inputting the geometric parameters and the optical parameters of each second feature into the trained first neural network to calculate parameters of a second sub-resolution correction feature corresponding to each second feature; and

after calculating the parameters of all the second sub-resolution correction features, replacing the one or more first sub-resolution auxiliary patterns in the second mask pattern with one or more second sub-resolution auxiliary patterns based on the parameters of all the second sub-resolution correction features, wherein the one or more second main patterns and the one or more second sub-resolution auxiliary patterns form the third mask pattern.

10. The method of claim 9, wherein generating an intermediate mask pattern based on the third mask pattern and the trained second neural network comprises:

acquiring geometric parameters and optical parameters of each second feature;

inputting the geometric parameters and the optical parameters of each second feature into a trained second neural network to calculate parameters of a second edge correction feature corresponding to each second feature; and

after calculating the parameters of all the second edge correction features, correcting the pattern boundaries of the one or more second main patterns based on the parameters of all the second edge correction features to form one or more third main patterns, wherein the one or more third main patterns and the one or more second sub-resolution auxiliary patterns form the intermediate mask pattern.

11. The method of any one of claims 4 to 5 or any one of claims 7 to 10, wherein the geometric parameters comprise:

the critical dimension of the original main pattern where each feature is located;

the distance between the original main pattern where each feature is located and the adjacent original main pattern;

the length of the original main pattern where each feature is located;

the type of each feature;

the vertical distance from the midpoint of each feature to a first end of the original main pattern where it is located; and

the vertical distance from the midpoint of each feature to a second end of the original main pattern where it is located.

12. The method of any one of claims 4 to 5 or any one of claims 7 to 10, wherein the optical parameters comprise:

light intensity;

light intensity gradient;

light intensity curvature; and

convolutions of the light intensity with Gaussian functions at different photoacid molecule diffusion lengths.

13. The method of claim 4 or any one of claims 7 to 10, wherein the parameters of the one or more sub-resolution correction features comprise:

an offset of each sub-resolution correction feature;

a length of each sub-resolution correction feature; and

a width of each sub-resolution correction feature.

14. The method of claim 5 or any one of claims 7 to 10, wherein the parameters of the one or more edge correction features comprise a compensation value for each edge correction feature.

15. The method of claim 1, wherein after establishing the optical proximity correction model of the lithographic pattern to be corrected based on the intermediate mask pattern and before performing the optical proximity correction on the lithographic pattern to be corrected based on the optical proximity correction model, the method further comprises:

performing simulated exposure on the lithographic pattern to be corrected based on the optical proximity correction model to obtain a simulated exposure pattern;

performing actual exposure on the lithographic pattern to be corrected to obtain an actual exposure pattern;

acquiring the position deviation between the simulated exposure pattern and the actual exposure pattern; and

determining whether the position deviation is outside a threshold range, and if the position deviation is outside the threshold range, adjusting the optical proximity correction model until the position deviation is within the threshold range.

Technical Field

The present application relates to the field of semiconductor technology, and in particular, to an Optical Proximity Correction (OPC) method.

Background

Currently, as optical image distortion grows increasingly severe, the optical imaging resolution of lithography machines struggles to keep up with process development. To compensate for optical image distortion, optical proximity correction techniques have been introduced. Optical proximity correction compensates for optical distortion effects by actively changing the reticle pattern data so that the pattern formed on the silicon wafer is as close as possible to the original design pattern.

There are two main approaches to optical proximity correction: rule-based optical proximity correction (Rule-Based OPC) and model-based optical proximity correction (Model-Based OPC). Rule-based optical proximity correction requires the manual formulation of correction rules, which becomes extremely cumbersome and difficult as optical distortion grows. As semiconductor technology nodes evolve, rule-based optical proximity correction is gradually being replaced by model-based optical proximity correction. In model-based optical proximity correction, an accurate computational model is established by optical simulation, and the edges of the pattern are then adjusted in repeated simulation iterations until the pattern approaches the ideal pattern. However, as device sizes shrink, a great number of unusual physical phenomena emerge, so the complexity of model-based optical proximity correction grows ever higher, and the computational resources and computation time spent approximating the ideal pattern increase exponentially.

Therefore, there is a need for an improved optical proximity correction method to shorten the time for optical proximity correction.

Disclosure of Invention

In view of the above-mentioned shortcomings of the prior art, the present application provides a machine learning-based optical proximity correction (Machine Learning Based OPC) method, aiming to shorten the time required for optical proximity correction.

The application provides an optical proximity correction method, comprising the following steps: providing an original lithographic pattern and optical proximity correction features of the original lithographic pattern; training a first neural network and a second neural network based on the original lithographic pattern and the optical proximity correction features; providing a lithographic pattern to be corrected; alternately using the trained first neural network and the trained second neural network to perform optical proximity correction on the lithographic pattern to be corrected, so as to obtain an intermediate mask pattern; establishing an optical proximity correction model of the lithographic pattern to be corrected based on the intermediate mask pattern; and performing optical proximity correction on the lithographic pattern to be corrected based on the optical proximity correction model.

Optionally, the original lithographic pattern comprises one or more original main patterns, each of the one or more original main patterns comprising one or more features, the optical proximity correction features comprising one or more sub-resolution correction features and one or more edge correction features.

Optionally, training the first neural network and the second neural network based on the original lithographic pattern and the optical proximity correction features comprises: training the first neural network based on parameters of the one or more features and parameters of one or more sub-resolution correction features corresponding to the one or more features; and training the second neural network based on the parameters of the one or more features and parameters of one or more edge correction features corresponding to the one or more features.

Optionally, training the first neural network based on the parameters of the one or more features and the parameters of the one or more sub-resolution correction features corresponding to the one or more features comprises, for each of the one or more features: acquiring geometric parameters and optical parameters of each feature; acquiring parameters of the sub-resolution correction feature corresponding to each feature; and using the geometric parameters and the optical parameters of each feature as the input of the first neural network and the parameters of the sub-resolution correction feature corresponding to each feature as the target output of the first neural network, and training the first neural network until the difference between the actual output and the target output of the trained first neural network is less than a preset threshold.

Optionally, training the second neural network based on the parameters of the one or more features and the parameters of the one or more edge correction features corresponding to the one or more features comprises, for each of the one or more features: acquiring geometric parameters and optical parameters of each feature; acquiring parameters of the edge correction feature corresponding to each feature; and using the geometric parameters and the optical parameters of each feature as the input of the second neural network and the parameters of the edge correction feature corresponding to each feature as the target output of the second neural network, and training the second neural network until the difference between the actual output and the target output of the trained second neural network is less than a preset threshold.

Optionally, alternately using the trained first neural network and the trained second neural network to perform optical proximity correction on the lithographic pattern to be corrected comprises: generating a first mask pattern based on the lithographic pattern to be corrected and the trained first neural network; generating a second mask pattern based on the first mask pattern and the trained second neural network; generating a third mask pattern based on the second mask pattern and the trained first neural network; and generating an intermediate mask pattern based on the third mask pattern and the trained second neural network.

Optionally, the lithographic pattern to be corrected includes one or more first main patterns, each of the one or more first main patterns includes one or more first features, and generating the first mask pattern based on the lithographic pattern to be corrected and the trained first neural network includes: acquiring geometric parameters and optical parameters of each first feature; inputting the geometric parameters and the optical parameters of each first feature into the trained first neural network to calculate parameters of a first sub-resolution correction feature corresponding to each first feature; and after calculating the parameters of all the first sub-resolution correction features, inserting one or more first sub-resolution auxiliary patterns into the lithographic pattern to be corrected based on the parameters of all the first sub-resolution correction features, wherein the one or more first main patterns and the one or more first sub-resolution auxiliary patterns form the first mask pattern.

Optionally, generating the second mask pattern based on the first mask pattern and the trained second neural network comprises: acquiring geometric parameters and optical parameters of each first feature; inputting the geometric parameters and the optical parameters of each first feature into the trained second neural network to calculate parameters of a first edge correction feature corresponding to each first feature; and after calculating the parameters of all the first edge correction features, correcting the pattern boundaries of the one or more first main patterns based on the parameters of all the first edge correction features to form one or more second main patterns, wherein the one or more second main patterns and the one or more first sub-resolution auxiliary patterns form the second mask pattern.

Optionally, each of the one or more second main patterns includes one or more second features, and generating the third mask pattern based on the second mask pattern and the trained first neural network includes: acquiring geometric parameters and optical parameters of each second feature; inputting the geometric parameters and the optical parameters of each second feature into the trained first neural network to calculate parameters of a second sub-resolution correction feature corresponding to each second feature; and after calculating the parameters of all the second sub-resolution correction features, replacing the one or more first sub-resolution auxiliary patterns in the second mask pattern with one or more second sub-resolution auxiliary patterns based on the parameters of all the second sub-resolution correction features, wherein the one or more second main patterns and the one or more second sub-resolution auxiliary patterns form the third mask pattern.

Optionally, generating the intermediate mask pattern based on the third mask pattern and the trained second neural network comprises: acquiring geometric parameters and optical parameters of each second feature; inputting the geometric parameters and the optical parameters of each second feature into the trained second neural network to calculate parameters of a second edge correction feature corresponding to each second feature; and after calculating the parameters of all the second edge correction features, correcting the pattern boundaries of the one or more second main patterns based on the parameters of all the second edge correction features to form one or more third main patterns, wherein the one or more third main patterns and the one or more second sub-resolution auxiliary patterns form the intermediate mask pattern.

Optionally, the geometric parameters include: the critical dimension of the original main pattern where each feature is located; the distance between the original main pattern where each feature is located and the adjacent original main pattern; the length of the original main pattern where each feature is located; the type of each feature; the vertical distance from the midpoint of each feature to a first end of the original main pattern where it is located; and the vertical distance from the midpoint of each feature to a second end of the original main pattern where it is located.

Optionally, the optical parameters include: light intensity; light intensity gradient; light intensity curvature; and convolutions of the light intensity with Gaussian functions at different photoacid molecule diffusion lengths.

Optionally, the parameters of the one or more sub-resolution correction features include: an offset of each sub-resolution correction feature; a length of each sub-resolution correction feature; and a width of each sub-resolution correction feature.

Optionally, the parameters of the one or more edge correction features comprise a compensation value for each edge correction feature.

Optionally, after establishing the optical proximity correction model of the lithographic pattern to be corrected based on the intermediate mask pattern and before performing the optical proximity correction on the lithographic pattern to be corrected based on the optical proximity correction model, the method further includes: performing simulated exposure on the lithographic pattern to be corrected based on the optical proximity correction model to obtain a simulated exposure pattern; performing actual exposure on the lithographic pattern to be corrected to obtain an actual exposure pattern; acquiring the position deviation between the simulated exposure pattern and the actual exposure pattern; and determining whether the position deviation is outside a threshold range, and if so, adjusting the optical proximity correction model until the position deviation is within the threshold range.

The technical solution of the present application has the following beneficial effects:

Embodiments of the present application combine machine learning-based optical proximity correction with model-based optical proximity correction. In particular, by applying machine learning-based optical proximity correction before model-based optical proximity correction, the approximation of the ideal pattern can be achieved entirely by computer, saving the repeated simulated and actual exposures that would otherwise be needed in model-based optical proximity correction to obtain a converged result. The time consumed by the whole optical proximity correction process is therefore greatly reduced.

Drawings

The following drawings describe in detail exemplary embodiments disclosed in the present application. Wherein like reference numerals represent similar structures throughout the several views of the drawings. Those of ordinary skill in the art will understand that the present embodiments are non-limiting, exemplary embodiments and that the accompanying drawings are for illustrative and descriptive purposes only and are not intended to limit the scope of the present disclosure, as other embodiments may equally fulfill the inventive intent of the present application. It should be understood that the drawings are not to scale. Wherein:

FIG. 1 is a flow chart of a method of optical proximity correction according to an embodiment of the present application;

FIG. 2 is a schematic illustration of an original lithographic pattern according to an embodiment of the present application;

FIG. 3 is a schematic diagram of a first neural network according to an embodiment of the present application;

FIG. 4 is a schematic diagram of a second neural network, according to an embodiment of the present application;

FIG. 5 is a schematic view of a lithographic pattern to be corrected according to an embodiment of the present application;

FIG. 6 is a schematic diagram of a first mask pattern according to an embodiment of the present application;

FIG. 7 is a schematic view of a second mask pattern according to an embodiment of the present application;

FIG. 8 is a schematic view of a third mask pattern according to an embodiment of the present application;

FIG. 9 is a schematic view of an intermediate mask pattern according to an embodiment of the present application;

FIG. 10 is a schematic diagram of a final mask pattern according to an embodiment of the present application.

Detailed Description

The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various local modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.

As mentioned above, as semiconductor devices are scaled down, the computational complexity and time required for model-based optical proximity correction grow exponentially. Purely model-based optical proximity correction therefore can no longer meet practical requirements.

On this basis, an embodiment of the present application provides an optical proximity correction method.

FIG. 1 is a flow chart of a method of optical proximity correction according to an embodiment of the present application. As shown in fig. 1, the optical proximity correction method includes the following steps:

step S102: providing an original lithographic pattern and optical proximity correction features of the original lithographic pattern;

step S104: training a first neural network and a second neural network based on the original lithographic pattern and the optical proximity correction features;

step S106: providing a lithographic pattern to be corrected;

step S108: alternately using the trained first neural network and the trained second neural network to perform optical proximity correction on the lithographic pattern to be corrected, so as to obtain an intermediate mask pattern;

step S110: establishing an optical proximity correction model of the lithographic pattern to be corrected based on the intermediate mask pattern;

step S112: performing optical proximity correction on the lithographic pattern to be corrected based on the optical proximity correction model.

The specific steps of the present application are described in detail below with reference to the embodiments and the drawings.

In step S102, an original lithographic pattern and optical proximity correction features of the original lithographic pattern are provided.

The original lithographic pattern represents a lithographic pattern that is desired to be formed on a substrate. The optical proximity correction features of the original lithographic pattern are the features used to perform optical proximity correction on the original lithographic pattern to overcome the optical proximity effect, such as sub-resolution correction features (e.g., scattering bars) inserted into the gaps of the original lithographic pattern and edge correction features (e.g., boundary shifts) applied to the patterns of the original lithographic pattern. Multiple sets of data containing original lithographic patterns and their optical proximity correction features can be obtained, as training data for the neural networks, from a database storing optical proximity correction history data.

FIG. 2 is a schematic illustration of an original lithographic pattern according to an embodiment of the present application. As shown in FIG. 2, the original lithographic pattern 200 includes one or more original main patterns 210, each original main pattern 210 including one or more features. The one or more features may include a line end portion (Line End) 211, a line end adjacent portion (Line End Adjacent) 212, and a general edge portion (Normal Edge) 213. In the present embodiment, the line end portion 211 refers to a line segment located on a short side of an elongated (e.g., rectangular) original main pattern 210. The line end adjacent portion 212 refers to a line segment located on a long side of the original main pattern 210 and adjacent to the line end portion 211. The general edge portion 213 refers to a line segment located on a long side of the original main pattern 210 and not adjacent to the line end portion 211.

In step S104, a first neural network and a second neural network are trained based on the original lithographic pattern and the optical proximity correction features.

In this embodiment, training the first neural network and the second neural network based on the original lithographic pattern and the optical proximity correction features may include: training the first neural network based on parameters of the one or more features and parameters of one or more sub-resolution correction features corresponding to the one or more features; and training the second neural network based on the parameters of the one or more features and parameters of one or more edge correction features corresponding to the one or more features.

Fig. 3 is a schematic diagram of a first neural network according to an embodiment of the present application. As shown in fig. 3, the first neural network includes an input layer 310, an output layer 330, and a hidden layer 320 between the input layer 310 and the output layer 330. In the present embodiment, the input layer 310 includes 13 input nodes x1 to x13, which respectively correspond to the various parameters related to features (described in detail below). In the present embodiment, the output layer 330 includes 3 output nodes y1 to y3, which respectively correspond to the parameters of the sub-resolution correction features (described in detail below). It should be noted that although only one hidden layer 320 and six hidden-layer nodes h1 to h6 are shown in fig. 3, this arrangement is merely illustrative and not limiting; other numbers of hidden layers and hidden-layer nodes are also within the scope of the present application.
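For illustration only (the patent specifies no implementation), a forward pass of this 13-6-3 network could be sketched in Python with NumPy as follows; the weights here are random placeholders rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Layer sizes from FIG. 3: 13 input nodes (x1-x13), one hidden layer of
# 6 nodes (h1-h6), and 3 output nodes (y1-y3).
W1, b1 = rng.normal(size=(13, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 3)), np.zeros(3)

def forward(x):
    """Map one 13-dimensional feature vector to the 3 SRAF parameters."""
    h = np.tanh(x @ W1 + b1)   # hidden-layer activation
    return h @ W2 + b2         # linear output layer

v_in = rng.normal(size=13)     # stand-in for [CD, S, L, T, D1, D2, I, ...]
o, l, w = forward(v_in)        # offset, length, width of the predicted SRAF
```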

In this embodiment, training the first neural network based on the parameters of the one or more features and the parameters of the one or more sub-resolution correction features corresponding to the one or more features comprises, for each of the one or more features:

acquiring geometric parameters and optical parameters of each feature;

acquiring parameters of the sub-resolution correction feature corresponding to each feature; and

using the geometric parameters and the optical parameters of each feature as the input of the first neural network and the parameters of the sub-resolution correction feature corresponding to each feature as the target output of the first neural network, and training the first neural network until the difference between the actual output and the target output of the trained first neural network is less than a preset threshold.
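A minimal training sketch of this stopping criterion, using synthetic data and scikit-learn's MLPRegressor as a stand-in for the first neural network (all names and values here are illustrative assumptions, not the patent's implementation):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 13))   # synthetic geometric + optical parameters
Y = rng.normal(size=(500, 3))    # synthetic target SRAF parameters [o, l, w]

# One hidden layer of 6 nodes, mirroring FIG. 3.
net = MLPRegressor(hidden_layer_sizes=(6,), solver="adam", random_state=0)

threshold = 0.05                 # preset threshold on the output difference
for _ in range(10_000):
    net.partial_fit(X, Y)        # one training pass over the data
    diff = np.mean(np.abs(net.predict(X) - Y))
    if diff < threshold:         # stop once actual output is close to target
        break
```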

In this embodiment, referring to fig. 2, the geometric parameters may include: the critical dimension CD of the original main pattern where each feature is located; the spacing S (e.g., horizontal spacing, vertical spacing) between the original main pattern where each feature is located and the adjacent original main pattern; the length L of the original main pattern where each feature is located; the type T of each feature (e.g., a value of 1 indicates a line end portion 211, a value of 2 indicates a line end adjacent portion 212, and a value of 3 indicates a general edge portion 213); the vertical distance D1 from the midpoint of each feature (e.g., midpoint m1 of the line end portion 211, midpoint m2 of the line end adjacent portion 212, midpoint m3 of the general edge portion 213) to a first end of the original main pattern where it is located (e.g., the end at the line end portion 211; the general edge portion 213 is taken as the example in the figure); and the vertical distance D2 from the midpoint of each feature to a second end (e.g., opposite the first end) of the original main pattern where it is located. It should be noted that the type values in the present embodiment are merely illustrative and not restrictive; those skilled in the art may assign different values to different features as needed.

In this embodiment, the optical parameters may include: light intensity; light intensity gradient; light intensity curvature; and convolutions of the light intensity with Gaussian functions at different photoacid (Photo Acid) molecule diffusion lengths.

In this embodiment, the optical proximity correction model may include an optical model and a photoresist model. The optical model uses the Hopkins method to perform the optical imaging computation for a partially coherent light source. The optical model is a "white box" (White Box) model that predicts the exposure light intensity at the substrate plane, the so-called aerial image (Aerial Image) light intensity function. The optical model is essentially a projection imaging problem for partially coherent light in a lens system with a finite aperture. The photoresist model is obtained, once the aerial image light intensity function is available, by convolving an appropriate Gaussian function model with the aerial image light intensity function model to account for the diffusion effect (e.g., of photoacid molecules) produced in the photoresist after exposure.

In the present embodiment, the light intensity of each feature may be the light intensity at a point in the feature; its specific value may be calculated through the light intensity function I, which can be computed by the following formula based on the Hopkins method.

$$I(x,y) = \mathcal{F}^{-1}\{I(f,g)\} \tag{1}$$

$$I(f,g) = \iint T(f_1,g_1,f+f_1,g+g_1)\, F(f_1,g_1)\, F^{*}(f+f_1,g+g_1)\, \mathrm{d}f_1\, \mathrm{d}g_1 \tag{2}$$

where $I(x,y)$ is the output intensity function at point $(x,y)$ on the substrate; $\mathcal{F}^{-1}\{\cdot\}$ denotes the two-dimensional inverse Fourier transform, so that $\mathcal{F}^{-1}\{I(f,g)\}$ is the inverse Fourier transform of $I(f,g)$; $I(f,g)$ is the frequency-domain value of the spatial output intensity function $I(x,y)$ after a two-dimensional Fourier transform; $F(f,g)$ is the two-dimensional Fourier transform of the mask projection function $f(x,y)$; $F^{*}(f,g)$ is the complex conjugate of $F(f,g)$; $T(f_1,g_1,f+f_1,g+g_1)$ represents the transmission cross coefficients (TCC) of the optical system, given by $T(f_1,g_1,f_2,g_2) = \iint J(f,g)\, K(f+f_1,g+g_1)\, K^{*}(f+f_2,g+g_2)\, \mathrm{d}f\, \mathrm{d}g$, where $J(f,g)$ is the mutual intensity function of the light source, describing the coherence properties of the illumination system, and $K(f,g)$ is the frequency response function of the imaging system.

The essence of the Hopkins method is to describe the partially coherent illumination system as the transfer function of a bilinear system. Using the Hopkins method, a lithography optical system with fixed light-source wavelength (λ), numerical aperture (NA), defocus (Defocus), light-source coherence factor (Coherent Factor, δ), and other aberrations can be described by a determined TCC; once the TCC calculation formula is determined, the aerial image light intensity can be obtained.
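Numerically, equation (1) is just a two-dimensional inverse FFT; as a toy illustration (with a random array standing in for the frequency-domain intensity $I(f,g)$):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Random stand-in for the frequency-domain intensity I(f, g).
I_freq = rng.normal(size=(256, 256)) + 1j * rng.normal(size=(256, 256))

# Equation (1): I(x, y) = F^{-1}{I(f, g)}; the physical intensity is real.
I_xy = np.fft.ifft2(I_freq).real
```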

In this embodiment, the light intensity gradient of each feature may be the light intensity gradient along the normal direction at the midpoint of the feature; its specific value may be calculated through the light intensity gradient function $\nabla I$, i.e., the first-order derivative of the light intensity function $I$.

In the present embodiment, the specific value of the light intensity curvature of each feature may be calculated through the light intensity curvature function, i.e., the second-order derivative of the light intensity function $I$.

In the present embodiment, Gaussian functions $G_{S1}$, $G_{S2}$, $G_{S3}$, and $G_{S4}$ at four different photoacid molecule (or carrier) diffusion lengths (Diffusion Length) are obtained (e.g., corresponding to photoacid molecule diffusion lengths of 5 nm, 10 nm, 20 nm, and 30 nm, respectively). The diffusion length is the average length a photoacid molecule travels between generation and recombination. The remaining optical parameters can be obtained by separately convolving the light intensity function $I$ with each of the Gaussian functions $G_{S1}$, $G_{S2}$, $G_{S3}$, and $G_{S4}$.
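A sketch of extracting these optical inputs from a sampled aerial image, assuming a pixel grid, a Laplacian as the curvature measure (the patent does not fix the exact curvature function), and SciPy's Gaussian filter for the convolutions with $G_{S1}$–$G_{S4}$:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(seed=0)
I = rng.random((256, 256))          # sampled aerial-image intensity (stand-in)
px = 1.0                            # assumed pixel size, nm

gy, gx = np.gradient(I, px)         # intensity gradient components
grad = np.hypot(gx, gy)             # gradient magnitude
gyy, _ = np.gradient(gy, px)
_, gxx = np.gradient(gx, px)
curv = gxx + gyy                    # Laplacian as a simple curvature measure

# Convolutions of I with Gaussians at the four assumed diffusion lengths.
convs = [gaussian_filter(I, sigma=d / px) for d in (5, 10, 20, 30)]

y, x = 128, 128                     # sample point, e.g. a feature midpoint
optical_params = [I[y, x], grad[y, x], curv[y, x]] + [c[y, x] for c in convs]
```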

After obtaining the above-mentioned geometric and optical parameters, the input vector of the first neural network may be constructed, e.g., $V_{in} = [CD, S, L, T, D_1, D_2, I, \nabla I, \kappa, I * G_{S1}, I * G_{S2}, I * G_{S3}, I * G_{S4}]$, whose thirteen components correspond to the thirteen input nodes x1 to x13.

In this embodiment, the parameters of the sub-resolution correction feature include: the offset o (e.g., horizontal offset, vertical offset) of each sub-resolution correction feature; the length l of each sub-resolution correction feature; and the width w of each sub-resolution correction feature.

After obtaining the parameters of the sub-resolution correction features described above, the target output vector of the first neural network may be constructed as $V_{out1} = [o, l, w]$.

In some embodiments, if the geometric parameters and the optical parameters do not share the same dimension (units), the data can be preprocessed by normalization. Normalization limits the data to be processed within a required range after processing or transformation (e.g., by an interval scaling algorithm), which facilitates subsequent processing of the data and ensures faster convergence when the program runs. Specifically, normalization converts dimensional quantities into dimensionless ones, which simplifies subsequent processing.
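A minimal interval-scaling (min-max) sketch of such preprocessing (the exact normalization scheme is not specified by the patent):

```python
import numpy as np

def minmax_normalize(X, lo=0.0, hi=1.0):
    """Scale each column of X into [lo, hi], making mixed-unit columns
    (nanometre geometry, dimensionless optics) directly comparable."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)  # guard constant columns
    return lo + (X - xmin) / span * (hi - lo)

X = np.array([[45.0, 90.0, 0.31],
              [32.0, 64.0, 0.27],
              [28.0, 120.0, 0.40]])
X_norm = minmax_normalize(X)        # every column now lies in [0, 1]
```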

In this embodiment, the training of the first neural network may be considered complete when the difference between the actual output of the first neural network and its target output is less than a predetermined threshold (e.g., 1%, 2%, 3%, 4%, 5%, 6%, 7%, 8%, 9%, 10%).

Fig. 4 is a schematic diagram of a second neural network according to an embodiment of the present application. As shown in fig. 4, the second neural network includes an input layer 410, an output layer 430, and a hidden layer 420 located between the input layer 410 and the output layer 430. In the present embodiment, the input layer 410 includes 13 input nodes x1 to x13, which respectively correspond to the parameters related to features. In the present embodiment, the output layer 430 includes one output node z1 corresponding to the parameter of an edge correction feature (described in detail below). It should be noted that although only one hidden layer 420 and six hidden-layer nodes d1 to d6 are shown in fig. 4, this arrangement is merely illustrative and not limiting; those skilled in the art may arrange other numbers of hidden layers and hidden-layer nodes as needed.

In this embodiment, training the second neural network based on the parameters of the one or more features and the parameters of the one or more edge correction features corresponding to the one or more features comprises, for each of the one or more features:

acquiring geometric parameters and optical parameters of each feature;

acquiring parameters of the edge correction feature corresponding to each feature; and

using the geometric parameters and the optical parameters of each feature as the input of the second neural network and the parameters of the edge correction feature corresponding to each feature as the target output of the second neural network, and training the second neural network until the difference between the actual output and the target output of the trained second neural network is less than a preset threshold.

In the present embodiment, referring to fig. 2, the geometric parameters include: the critical dimension CD of the original main pattern where each feature is located; the spacing S (e.g., horizontal spacing, vertical spacing) between the original main pattern where each feature is located and the adjacent original main pattern; the length L of the original main pattern where each feature is located; the type T of each feature (e.g., a value of 1 indicates a line end portion 211, a value of 2 indicates a line end adjacent portion 212, and a value of 3 indicates a general edge portion 213); the vertical distance D1 from the midpoint of each feature (e.g., midpoint m1 of the line end portion 211, midpoint m2 of the line end adjacent portion 212, midpoint m3 of the general edge portion 213) to a first end of the original main pattern where it is located (e.g., the end at the line end portion 211; the general edge portion 213 is taken as the example in the figure); and the vertical distance D2 from the midpoint of each feature to a second end (e.g., opposite the first end) of the original main pattern where it is located. It should be noted that the type values in the present embodiment are merely illustrative and not restrictive; those skilled in the art may assign different values to different features as needed.

In this embodiment, the optical parameters include: light intensity; light intensity gradient; light intensity curvature; and convolutions of the light intensity with Gaussian functions at different photoacid molecule diffusion lengths.

In this embodiment, the light intensity of each feature may be the light intensity at a point in the feature; its specific value may be calculated through the light intensity function I, whose calculation is described above for the first neural network and is not repeated here.

In this embodiment, the light intensity gradient of each feature may be the light intensity gradient along the normal direction at the midpoint of the feature; its specific value may be calculated through the light intensity gradient function $\nabla I$.

In this embodiment, the specific value of the light intensity curvature of each feature may be calculated through the light intensity curvature function, i.e., the second-order derivative of the light intensity function $I$.

In this embodiment, Gaussian functions $G_{S1}$, $G_{S2}$, $G_{S3}$, and $G_{S4}$ at four different photoacid molecule diffusion lengths (Diffusion Length) are obtained (e.g., corresponding to photoacid molecule diffusion lengths of 5 nm, 10 nm, 20 nm, and 30 nm, respectively); the remaining optical parameters can be obtained by separately convolving the light intensity function $I$ with each of the Gaussian functions $G_{S1}$, $G_{S2}$, $G_{S3}$, and $G_{S4}$.

After the above geometric and optical parameters are obtained, the input vector of the second neural network can be constructed in the same thirteen-component form as for the first neural network.

In this embodiment, the parameter of the edge correction feature includes: the compensation value c of each edge correction feature.

After obtaining the parameters of the edge correction features described above, the target output vector of the second neural network may be constructed as $V_{out2} = c$.

In this embodiment, the training of the second neural network may be considered complete when the difference between the actual output of the second neural network and its target output is less than a predetermined threshold (e.g., 1%, 2%, 3%, 4%, 5%, 6%, 7%, 8%, 9%, 10%).

In step S106, a lithographic pattern to be corrected is provided.

FIG. 5 is a schematic view of a lithographic pattern to be corrected according to an embodiment of the present application. As shown in fig. 5, the lithographic pattern to be corrected 500 includes one or more first main patterns 510, each first main pattern 510 including one or more first features (not shown).

In step S108, optical proximity correction is performed on the lithography pattern to be corrected by alternately using the trained first neural network and the trained second neural network to obtain an intermediate mask pattern.

In this embodiment, alternately using the trained first neural network and the trained second neural network to perform optical proximity correction on the lithographic pattern to be corrected includes:

generating a first mask pattern based on the lithographic pattern to be corrected and the trained first neural network;

generating a second mask pattern based on the first mask pattern and the trained second neural network;

generating a third mask pattern based on the second mask pattern and the trained first neural network; and

generating an intermediate mask pattern based on the third mask pattern and the trained second neural network.
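These four passes can be sketched structurally as follows. This is an illustration only: apply_first_net and apply_second_net are hypothetical stand-ins for the trained networks, implemented here as identity stubs so the sketch executes; real implementations would insert or replace SRAFs and shift main-pattern edges, respectively.

```python
def apply_first_net(mask):
    # Stand-in for the trained first network: insert/replace SRAFs.
    return mask

def apply_second_net(mask):
    # Stand-in for the trained second network: shift main-pattern edges.
    return mask

def alternate_correction(pattern_to_correct):
    first_mask = apply_first_net(pattern_to_correct)   # first mask pattern
    second_mask = apply_second_net(first_mask)         # second mask pattern
    third_mask = apply_first_net(second_mask)          # third mask pattern
    return apply_second_net(third_mask)                # intermediate mask pattern
```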

Fig. 6 is a schematic view of a first mask pattern according to an embodiment of the present application. Specific steps for generating the first mask pattern 600 are described below with reference to fig. 5 and 6.

In this embodiment, generating the first mask pattern 600 based on the lithographic pattern 500 to be corrected and the trained first neural network includes:

acquiring geometric parameters and optical parameters of each first feature;

inputting the geometric parameters and the optical parameters of each first feature into the trained first neural network to calculate parameters of a first sub-resolution correction feature corresponding to each first feature; and

after calculating the parameters of all the first sub-resolution correction features, one or more first sub-resolution auxiliary patterns 610 are inserted into the lithographic pattern to be corrected based on the parameters of all the first sub-resolution correction features, and the one or more first main patterns 510 and the one or more first sub-resolution auxiliary patterns 610 constitute the first mask pattern 600.
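As a purely geometric illustration (not taken from the patent), one predicted triple (o, l, w) could be converted into an axis-aligned assist rectangle next to a vertical main-pattern edge as follows; the placement conventions are assumptions:

```python
def sraf_rectangle(edge_x, edge_mid_y, o, l, w):
    """Place an assist rectangle a horizontal offset o from a vertical
    main-pattern edge at x = edge_x, centered on the edge midpoint.
    Returns (x_min, y_min, x_max, y_max); all conventions are assumed."""
    x_min = edge_x + o
    return (x_min, edge_mid_y - l / 2.0, x_min + w, edge_mid_y + l / 2.0)

# e.g. an SRAF 60 nm right of an edge at x = 100 nm, 200 nm long, 20 nm wide
rect = sraf_rectangle(edge_x=100.0, edge_mid_y=500.0, o=60.0, l=200.0, w=20.0)
```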

In this embodiment, the geometric parameters of the first feature may include: the critical dimension CD of the first main pattern 510 where each first feature is located; the spacing S between the first main pattern 510 where each first feature is located and the adjacent first main pattern 510; the length L of the first main pattern 510 where each first feature is located; the type T of each first feature; the vertical distance D1 from the midpoint of each first feature to a first end of the first main pattern 510 where it is located; and the vertical distance D2 from the midpoint of each first feature to a second end of the first main pattern 510 where it is located. It should be noted that the type values of the first features in the present embodiment are merely illustrative and not restrictive; those skilled in the art may assign different values to different first features as needed.

In this embodiment, the optical parameters of the first feature may include: light intensity; light intensity gradient; light intensity curvature; and convolutions of the light intensity with Gaussian functions at different photoacid molecule diffusion lengths.

In this embodiment, the calculated parameters of the first sub-resolution correction feature may include: the offset o of each first sub-resolution correction feature; the length l of each first sub-resolution correction feature; and the width w of each first sub-resolution correction feature.

Fig. 7 is a schematic view of a second mask pattern according to an embodiment of the present application. Specific steps for generating the second mask pattern 700 are described below in conjunction with fig. 6 and 7.

In this embodiment, generating the second mask pattern 700 based on the first mask pattern 600 and the trained second neural network comprises:

acquiring geometric parameters and optical parameters of each first feature;

inputting the geometric parameters and the optical parameters of each first feature into the trained second neural network to calculate parameters of the first edge correction feature corresponding to each first feature; and

after calculating the parameters of all the first edge correction features, the pattern boundaries of the one or more first main patterns 510 are corrected based on the parameters of all the first edge correction features to form one or more second main patterns 620, and the one or more second main patterns 620 and the one or more first sub-resolution auxiliary patterns 610 constitute the second mask pattern 700.

It should be noted that the presence of the first sub-resolution auxiliary pattern 610 will cause a change in the optical parameter of each first feature. Thus, the optical parameters of the first features acquired in this step are different from the optical parameters of the first features acquired in the previous step.

In this embodiment, the calculated parameter of the first edge correction feature may include: the compensation value c of each first edge correction feature.

Fig. 8 is a schematic view of a third mask pattern according to an embodiment of the present application. Specific steps of generating the third mask pattern 800 are described below in conjunction with fig. 7 and 8.

In this embodiment, each of the one or more second main patterns 620 includes one or more second features (not shown), and generating the third mask pattern 800 based on the second mask pattern 700 and the trained first neural network includes:

acquiring geometric parameters and optical parameters of each second feature;

inputting the geometric parameters and the optical parameters of each second feature into the trained first neural network to calculate parameters of a second sub-resolution correction feature corresponding to each second feature; and

after the parameters of all the second sub-resolution correction features are calculated, the one or more first sub-resolution auxiliary patterns 610 in the second mask pattern 700 are replaced with one or more second sub-resolution auxiliary patterns 710 based on the parameters of all the second sub-resolution correction features, and the one or more second main patterns 620 and the one or more second sub-resolution auxiliary patterns 710 constitute the third mask pattern 800.

In this embodiment, the geometric parameters of the second feature may include: the critical dimension CD of the second main pattern 620 where each second feature is located; the spacing S between the second main pattern 620 where each second feature is located and the adjacent second main pattern 620; the length L of the second main pattern 620 where each second feature is located; the type T of each second feature; the vertical distance D1 from the midpoint of each second feature to a first end of the second main pattern 620 where it is located; and the vertical distance D2 from the midpoint of each second feature to a second end of the second main pattern 620 where it is located. It should be noted that the type values of the second features in the present embodiment are merely illustrative and not restrictive; those skilled in the art may assign different values to different features as needed.

In this embodiment, the optical parameters of the second feature may include: light intensity; light intensity gradient; light intensity curvature; and convolutions of the light intensity with Gaussian functions at different photoacid molecule diffusion lengths.

In this embodiment, the calculated parameters of the second sub-resolution correction feature may include: the offset o of each second sub-resolution correction feature; the length l of each second sub-resolution correction feature; and the width w of each second sub-resolution correction feature.

Fig. 9 is a schematic view of an intermediate mask pattern according to an embodiment of the present application. Specific steps for generating the intermediate mask pattern 900 are described below in conjunction with figs. 8 and 9.

In this embodiment, generating the intermediate mask pattern 900 based on the third mask pattern 800 and the trained second neural network includes:

acquiring geometric parameters and optical parameters of each second feature;

inputting the geometric parameters and the optical parameters of each second feature into a trained second neural network to calculate parameters of a second edge correction feature corresponding to each second feature; and

after the parameters of all the second edge correction features are calculated, the pattern boundaries of the one or more second main patterns 620 are corrected based on the parameters of all the second edge correction features to form one or more third main patterns 720, and the one or more third main patterns 720 and the one or more second sub-resolution auxiliary patterns 710 constitute the intermediate mask pattern 900.

It should be noted that the presence of the second sub-resolution auxiliary pattern 710 will cause a change in the optical parameter of each second feature. Thus, the optical parameters of the second features acquired in this step are different from the optical parameters of the second features acquired in the previous step.

In this embodiment, the calculated parameter of the second edge correction feature may include: the compensation value c of each second edge correction feature.

It should be understood that the alternating order and the number of alternations of the first neural network and the second neural network described in the above steps are merely illustrative and not restrictive; other orders and numbers that achieve main-pattern boundary correction and sub-resolution auxiliary pattern correction also fall within the scope of the present application.

In step S110, an optical proximity correction model of the lithography pattern to be corrected is established based on the intermediate mask pattern.

In step S112, the photolithography pattern to be corrected is subjected to optical proximity correction based on the optical proximity correction model.

In this embodiment, between step S110 and step S112, the method may further include:

performing simulated exposure on the lithographic pattern 500 to be corrected based on the optical proximity correction model to obtain a simulated exposure pattern;

performing actual exposure on the lithographic pattern 500 to be corrected to obtain an actual exposure pattern;

acquiring the position deviation between the simulated exposure pattern and the actual exposure pattern; and

determining whether the position deviation is outside a threshold range, and if so, adjusting the optical proximity correction model until the position deviation is within the threshold range, thereby obtaining the final mask pattern 1000 shown in fig. 10.

The position deviation between the simulated exposure pattern and the actual exposure pattern may be acquired as follows: measuring the feature dimensions of the simulated exposure pattern to obtain simulated test data of the simulated exposure pattern; measuring the feature dimensions of the actual exposure pattern to obtain actual test data of the actual exposure pattern; and acquiring the position deviation between the simulated test data and the actual test data.

The method of acquiring the position deviation between the simulated exposure pattern and the actual exposure pattern may further include: selecting a plurality of measurement points in the lithography pattern 500 to be corrected, and acquiring the position deviation between the simulated test data and the actual test data corresponding to the measurement points.
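A minimal sketch of the per-measurement-point comparison, assuming both data sets are characteristic dimensions measured at the same points; the list layout and the RMS summary are illustrative additions, not part of the disclosure:

```python
def position_deviations(simulated_cd, actual_cd):
    """Point-wise deviation between simulated and actual test data."""
    return [s - a for s, a in zip(simulated_cd, actual_cd)]

sim = [45.2, 44.8, 46.1, 45.0]  # simulated test data at 4 measurement points
act = [45.0, 45.0, 45.5, 45.1]  # actual test data at the same points
devs = position_deviations(sim, act)
rms = (sum(d * d for d in devs) / len(devs)) ** 0.5  # one-number summary
print(devs, rms)
```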

By employing machine-learning-based optical proximity correction before model-based optical proximity correction, the embodiments of the present application allow the approximation of the ideal pattern to be carried out entirely by the computer, saving the repeated simulated and actual exposures that model-based optical proximity correction would otherwise require to reach a converged result. The time consumed by the whole optical proximity correction process is therefore greatly reduced, for example by a factor of more than 10.

Embodiments of the present application also disclose a computer device comprising a storage device, a processor, and a computer program stored on the storage device and executable on the processor, the processor implementing one or more steps as described in embodiments of the present application when executing the computer program.

The computing device may be a general-purpose computer or a special-purpose computer. For example, the computing device may be a server, a personal computer, a portable computer (such as a notebook computer or a tablet computer), or another electronic device with computing capability. For example, the computing device may include a communication port connected to a network to facilitate data communication. The computing device may also include a processor in the form of one or more processors, such as a central processing unit, for executing program instructions. The computing device may also include an internal communication bus and various forms of program storage media and data storage media, such as magnetic disks, read-only memory, or random access memory, for storing various data files to be processed and/or transmitted. The storage medium may be local to the computing device or shared among multiple computing devices. The computing device may also include program instructions stored in read-only memory, random access memory, and/or other types of non-transitory storage media, to be executed by the processor. The computing device may also include I/O components to support data communication with other computing devices in a distributed computing system, and may also receive programming and data via network communication.

For purposes of illustration only, a single processor is described in the computing device. However, one of ordinary skill in the art will appreciate that the computing device in the present application may also include multiple processors. Thus, methods/steps/operations described herein as being performed by one processor may also be performed jointly or separately by multiple processors. For example, if the processor of the computing device in the present application can perform step A and step B, it should be understood that step A and step B may also be performed by two different processors: a first processor performs step A and a second processor performs step B, or the first processor and the second processor perform steps A and B together.

Having thus described the basic concepts, it will be apparent to those skilled in the art from this detailed disclosure that the foregoing is intended to be presented by way of example only and is not limiting. Various changes, improvements, and modifications may occur to one skilled in the art, although they are not explicitly described herein. For example, the steps in the methods of the present disclosure need not be performed exactly in the order described; they may also be performed in part and/or in other combinations reasonably contemplated by one of ordinary skill in the art. Such changes, improvements, and modifications are suggested by this disclosure and are within the spirit and scope of its exemplary embodiments.

Furthermore, certain terminology has been used to describe embodiments of the disclosure. For example, the terms "an embodiment," "one embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure.

Moreover, those skilled in the art will recognize that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable categories or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware, all of which may generally be referred to herein as a "block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied therein.

Furthermore, the order in which the elements or sequences of a process are recited, or the numbers, letters, or other designations used herein, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. While the foregoing disclosure discusses, by way of various examples, embodiments presently considered useful and illustrative, it is to be understood that such detail is solely for that purpose; the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although an implementation of the various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., as an installation on an existing server or mobile device.

Similarly, it should be appreciated that in the foregoing description of embodiments of the disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive subject matter may lie in fewer than all features of a single foregoing disclosed embodiment.
