Thickness measurement of substrates using color metrology

Document No. 118696, published 2021-10-19

Note: This technology, "Thickness measurement of substrates using color metrology," was created on 2020-02-06 by D. J. Benvegnu and B. A. Swedek. Its main content is as follows: A system for obtaining measurements representative of a thickness of a layer on a substrate includes a support for holding the substrate, an optical assembly for capturing two color images by light striking the substrate at different angles of incidence, and a controller. The controller is configured to store a function that provides a value representative of the thickness as a function of position along a predetermined path in a coordinate space of at least four dimensions. For pixels in the two color images, the controller determines coordinates in the coordinate space from the color data, determines the location of the point on the predetermined path that is closest to those coordinates, and calculates a value representing the thickness from the function and the location of that point.

1. A system for obtaining a measurement indicative of a thickness of a layer on a substrate, comprising:

a support for holding a substrate for integrated circuit fabrication;

an optical assembly to capture a first color image of at least a portion of the substrate held by the support by light striking the substrate at a first incident angle and to capture a second color image of the at least a portion of the substrate held by the support by light striking the substrate at a different second incident angle; and

a controller configured to:

receive the first color image and the second color image from the optical assembly,

store a function that provides a value representing a thickness as a function of position along a predetermined path in a coordinate space of at least four dimensions, the coordinate space including a first color channel and a second color channel from the first color image and a third color channel and a fourth color channel from the second color image,

determine, for a pixel of the first color image and a corresponding pixel in the second color image, coordinates in the coordinate space from color data for the pixel in the first color image and color data for the corresponding pixel in the second color image,

determine a location of a point on the predetermined path that is closest to the coordinates, and

calculate a value representing the thickness from the function and the location of the point on the predetermined path.

2. The system of claim 1, wherein the coordinate space is four-dimensional.

3. The system of claim 1, wherein the coordinate space is six-dimensional.

4. The system of claim 1, wherein the first color channel and the second color channel are selected from a group of color channels comprising hue, saturation, luminance, X, Y, Z, red chroma, green chroma, and blue chroma of the first color image, and the third color channel and the fourth color channel are selected from a group of color channels comprising hue, saturation, luminance, X, Y, Z, red chroma, green chroma, and blue chroma of the second color image.

5. The system of claim 1, wherein the first incident angle and the second incident angle are both between about 20° and 85°.

6. The system of claim 1, wherein the first angle of incidence is at least 5° greater than the second angle of incidence.

7. A computer program product for obtaining a measurement representative of a thickness of a layer on a substrate, the computer program product tangibly embodied in a non-transitory computer-readable medium, comprising instructions for causing a processor to:

receive a first color image of the substrate and a second color image of the substrate from one or more cameras;

store a function that provides a value representing a thickness as a function of position along a predetermined path in a coordinate space of at least four dimensions, the coordinate space including a first color channel and a second color channel from the first color image and a third color channel and a fourth color channel from the second color image;

determine, for a pixel of the first color image and a corresponding pixel in the second color image, coordinates in the coordinate space from color data for the pixel in the first color image and color data for the corresponding pixel in the second color image;

determine a location of a point on the predetermined path that is closest to the coordinates; and

calculate a value representing a thickness of a layer on the substrate from the function and the location of the point on the predetermined path.

8. A method for obtaining a measurement representative of a thickness of a layer on a substrate, comprising:

positioning a substrate for integrated circuit fabrication in a field of view of a color camera;

using one or more color cameras to produce a first color image of the substrate and a second color image of the substrate, the first color image produced by light striking the substrate at a first angle of incidence and the second color image produced by light striking the substrate at a different second angle of incidence;

storing a function that provides a value representing a thickness as a function of position along a predetermined path in a coordinate space of at least four dimensions, the coordinate space including a first color channel and a second color channel from the first color image and a third color channel and a fourth color channel from the second color image;

determining, for a pixel of the first color image and a corresponding pixel in the second color image, coordinates in the coordinate space from color data for the pixel in the first color image and the color data for the corresponding pixel in the second color image;

determining a location of a point on the predetermined path that is closest to the coordinates; and

calculating a value representing a thickness of a layer on the substrate from the function and the location of the point on the predetermined path.

9. A polishing system, comprising:

a polishing station comprising a platen for supporting a polishing pad;

a support for holding a substrate;

an in-line metrology station for measuring the substrate before or after polishing the surface of the substrate in the polishing station, the in-line metrology station comprising

one or more elongated white light sources, each elongated white light source having a longitudinal axis and being configured to direct light at a non-zero angle of incidence onto the substrate to form an illumination area on the substrate, the illumination area extending along a first axis during scanning of the substrate,

a first color line scan camera having detector elements arranged to receive light reflected from the substrate that strikes the substrate at a first angle of incidence, and to form an image portion extending along the first axis during scanning of the substrate,

a second color line scan camera having detector elements arranged to receive light reflected from the substrate that strikes the substrate at a different second angle of incidence, and to form a second image portion extending along the first axis during scanning of the substrate,

a frame supporting the one or more light sources, the first color line scan camera, and the second color line scan camera, and

a motor that causes relative motion between the frame and the support along a second axis perpendicular to the first axis to cause the one or more light sources, the first color line scan camera, and the second color line scan camera to scan over the substrate; and

a controller configured to receive color data from the first and second color line scan cameras, to generate a first two-dimensional color image from the color data from the first color line scan camera and a second two-dimensional color image from the color data from the second color line scan camera, and to control polishing at the polishing station based on the first and second two-dimensional color images.

10. The system of claim 9, comprising one or more diffusers in a light path between the one or more elongated white light sources and the substrate.

11. The system of claim 9, wherein the first incident angle and the second incident angle are both between about 5° and 85°.

12. The system of claim 11, wherein the first incident angle and the second incident angle are both between about 20° and 75°.

13. The system of claim 9, wherein the first angle of incidence is at least 5° greater than the second angle of incidence.

14. The system of claim 9, wherein the first color line scan camera and the second color line scan camera are configured to image coincident regions on the substrate.

15. The system of claim 9, wherein the one or more elongated light sources comprise a first elongated light source for generating the light that strikes the substrate at the first angle of incidence and a second elongated light source for generating the light that strikes the substrate at the second angle of incidence.

Technical Field

The present disclosure relates to optical metrology, for example, to detect the thickness of a layer on a substrate.

Background

Integrated circuits are typically formed on a substrate by the sequential deposition of conductive, semiconductive, or insulative layers onto a silicon wafer. One fabrication step involves depositing a filler layer over a non-planar surface and planarizing the filler layer. For some applications, the filler layer is planarized until the top surface of the patterned layer is exposed. For example, a conductive filler layer can be deposited on a patterned insulating layer to fill trenches or holes in the insulating layer. After planarization, the portions of the metal layer remaining between the raised patterns of the insulating layer form vias, plugs, and wires that provide conductive paths between thin film circuits on the substrate. For other applications, a filler layer is deposited over an underlying topology provided by other layers, and the filler layer is planarized until a predetermined thickness remains. For example, a dielectric filler layer may be deposited over a patterned metal layer and patterned to provide insulation between metal regions and to provide a planar surface for further photolithography.

Chemical Mechanical Polishing (CMP) is one well-established planarization method. This planarization method typically requires that the substrate be mounted on a carrier or polishing head. The exposed surface of the substrate is typically placed against a rotating polishing pad. The carrier head provides a controllable load on the substrate to urge the substrate against the polishing pad. An abrasive polishing slurry is typically supplied to the surface of the polishing pad.

Variations in slurry distribution, polishing pad conditions, relative velocity between the polishing pad and the substrate, and load on the substrate can result in variations in material removal rates. These variations, as well as variations in the initial thickness of the substrate layer, result in variations in the time required to reach the polishing endpoint. Therefore, determining the polishing end point based on only the polishing time may result in over-polishing or under-polishing of the substrate.

Various optical metrology systems (e.g., spectroscopic optical metrology systems or ellipsometric optical metrology systems) may be used to measure the thickness of the substrate layer before and after polishing, for example, at an in-line or separate metrology station. In addition, various in situ monitoring techniques, such as monochromatic optical or eddy current monitoring, can be used to detect the polishing endpoint.

Disclosure of Invention

In one aspect, a system for obtaining a measurement representative of a thickness of a layer on a substrate includes: a support for holding a substrate for integrated circuit fabrication; an optical assembly to capture a first color image of at least a portion of the substrate held by the support by light striking the substrate at a first incident angle and to capture a second color image of the at least a portion of the substrate held by the support by light striking the substrate at a different second incident angle; and a controller. The controller is configured to receive the first color image and the second color image from the optical assembly; to store a function that provides a value representing a thickness as a function of position along a predetermined path in a coordinate space of at least four dimensions, the coordinate space including a first color channel and a second color channel from the first color image and a third color channel and a fourth color channel from the second color image; to determine, for a pixel of the first color image and a corresponding pixel in the second color image, coordinates in the coordinate space from color data for the pixel in the first color image and color data for the corresponding pixel in the second color image; to determine a position of a point on the predetermined path that is closest to the coordinates; and to calculate a value indicative of the thickness from the function and the position of the point on the predetermined path.

In other aspects, a computer program includes instructions for causing a processor to perform the operations of the controller, and a polishing method includes positioning a substrate for integrated circuit fabrication in a field of view of a color camera, generating a color image of the substrate from the color camera, and performing the operations.

Implementations of any of the aspects may include one or more of the following features.

The coordinate space may be four-dimensional or the coordinate space may be six-dimensional. The first color channel and the second color channel may be selected from a group of color channels including hue, saturation, luminance, X, Y, Z, red chroma, green chroma, and blue chroma of the first color image. The third color channel and the fourth color channel may be selected from a group of color channels including hue, saturation, luminance, X, Y, Z, red chromaticity, green chromaticity, and blue chromaticity of the second color image. The first color channel and the third color channel may be a red chromaticity, and the second color channel and the fourth color channel may be a green chromaticity.
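As a sketch, the candidate channels for a single pixel can be computed with common definitions (HSV for hue and saturation, with the HSV value standing in for luminance, and normalized rg-chromaticity); the patent does not itself fix these formulas, and the XYZ tristimulus channels are omitted because they require a calibrated transform, so treat the definitions below as illustrative assumptions:

```python
import colorsys

def color_channels(r, g, b):
    """Candidate color channels for one RGB pixel (components in 0..1).

    Uses the common HSV and normalized-chromaticity definitions; these
    are assumptions for illustration, not formulas fixed by the patent.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    total = (r + g + b) or 1e-9  # guard against division by zero for black
    return {
        "hue": h,
        "saturation": s,
        "luminance": v,            # HSV "value" used as a luminance proxy
        "red_chroma": r / total,   # normalized chromaticity: R/(R+G+B)
        "green_chroma": g / total,
        "blue_chroma": b / total,
    }
```

Note that the three chromaticity channels always sum to one, so only two of them carry independent information; this is why a pair of chromaticity channels per image suffices for the four-dimensional coordinate space.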

Both the first incident angle and the second incident angle may be between about 20° and 85°. The first angle of incidence may be at least 5° greater than the second angle of incidence, e.g., at least 10° greater.

In another aspect, a polishing system includes a polishing station including a platen for supporting a polishing pad; a support for holding a substrate; an in-line metrology station for measuring the substrate before and after polishing the surface of the substrate in the polishing station; and a controller. The inline metrology station includes one or more elongated white light sources, each elongated white light source having a longitudinal axis and configured to direct light at a non-zero angle of incidence toward the substrate to form an illumination region on the substrate, the illumination region extending along a first axis during scanning of the substrate; a first color line scan camera having detector elements arranged to receive light reflected from the substrate that impinges the substrate at a first angle of incidence and form an image portion extending along a first axis during scanning of the substrate; a second color line scan camera having detector elements arranged to receive light reflected from the substrate that impinges the substrate at a different second angle of incidence and form a second image portion extending along the first axis during scanning of the substrate; a frame supporting one or more light sources, a first color line scan camera, and a second color line scan camera; and a motor that causes relative motion between the frame and the support along a second axis perpendicular to the first axis to cause the one or more light sources, the first color line scan camera, and the second color line scan camera to scan over the substrate. 
The controller is configured to receive color data from the first color line scan camera and the second color line scan camera, to generate a first two-dimensional color image from the color data from the first color line scan camera and a second two-dimensional color image from the color data from the second color line scan camera, and to control polishing at the polishing station based on the first two-dimensional color image and the second two-dimensional color image.

In other aspects, a computer program includes instructions for causing a processor to perform the operations of a controller, and a polishing method includes positioning a substrate for integrated circuit fabrication in a field of view of a color camera, generating a color image of the substrate from the color camera, and performing the operations.

Implementations of any of the aspects may include one or more of the following features.

One or more diffusers can be positioned in the path of light between the one or more elongated white light sources and the substrate.

The first incident angle and the second incident angle may both be between about 5° and 85°, for example, both between about 20° and 75°. The first angle of incidence may be at least 5° greater than the second angle of incidence, e.g., at least 10° greater. The first color line scan camera and the second color line scan camera may be configured to image coincident regions on the substrate. The one or more elongated light sources may include a first elongated light source for generating light that strikes the substrate at the first angle of incidence and a second elongated light source for generating light that strikes the substrate at the second angle of incidence. The light from the first light source and the light from the second light source may impinge on an overlapping region, e.g., a coincident region, on the substrate.

The frame may be fixed and the motor may be coupled to the support, and the controller may be configured to cause the motor to move the support while the one or more elongated light sources and the first and second color line scan cameras remain fixed to scan over the substrate.

Implementations may include one or more of the following potential advantages. The accuracy of the thickness measurement can be improved. This information can be used in feed-forward or feedback for controlling polishing parameters, thereby providing improved thickness uniformity. The algorithm for determining the change may be simple and have a low computational load.

The details of one or more implementations are set forth in the accompanying drawings and the description below. Other aspects, features and advantages will be apparent from the description and drawings, and from the claims.

Drawings

FIG. 1A shows a schematic diagram of an example of an inline optical measurement system.

FIG. 1B shows a schematic diagram of an example of an in-situ optical measurement system.

Fig. 1C shows a schematic diagram of an example of a portion of a measurement system.

Fig. 2 is a flow chart of a method of determining layer thicknesses.

Fig. 3 is a schematic top view of a substrate.

Fig. 4 is a schematic view of a mask.

Fig. 5 shows an example graph showing the evolution of the color of light reflected from a substrate in the coordinate space of two color channels.

FIG. 6 illustrates an example graph showing predetermined paths in the coordinate space of two color channels.

Fig. 7 is a flow chart of a method of determining layer thicknesses from color image data.

Fig. 8 shows an example graph showing histograms in the coordinate space of two color channels derived from a color image of a test substrate.

Fig. 9A and 9B show example graphs showing histograms in the coordinate spaces of two color channels before and after color correction, respectively.

Like reference symbols in the various drawings indicate like elements.

Detailed Description

The thickness of a layer on a substrate can be measured optically before or after polishing (e.g., at an in-line or stand-alone metrology station) or during polishing (e.g., by an in-situ monitoring system). However, some optical techniques (such as spectrometry) require expensive spectrometers and computationally burdensome manipulation of the spectral data. Even setting aside the computational load, in some cases the results of such algorithms fail to meet users' ever-increasing accuracy requirements.

One measurement technique is to take a color image of the substrate and analyze the image in color space to determine the thickness of the layer. In particular, the location along the path in the two-dimensional color space may provide information about the current state of polishing, e.g., the amount removed or the amount of material remaining. However, in some cases, it may be difficult to resolve differences between colors in an image. By performing color correction on the image, color contrast can be increased. Accordingly, thickness resolution can be enhanced, and reliability and accuracy of thickness measurement can be improved.
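The color-correction step can be sketched as a linear transform in RGB space. The 3×3 matrix below is a hypothetical example (the patent does not specify particular coefficients), chosen to stretch contrast about mid-gray:

```python
import numpy as np

def color_correct(image, matrix, offset=0.0):
    """Apply a 3x3 color-correction matrix (plus an optional offset)
    to an H x W x 3 float image with channel values in [0, 1]."""
    flat = image.reshape(-1, 3) @ matrix.T + offset
    return np.clip(flat, 0.0, 1.0).reshape(image.shape)

# Hypothetical contrast-stretching correction: scale each channel by 1.5
# around mid-gray (0.5), increasing the separation between similar colors.
gain = 1.5
matrix = gain * np.eye(3)
offset = 0.5 * (1.0 - gain)
```

With this example correction, two pixels that differ by 0.1 in a channel differ by 0.15 after correction, so nearby points on the color path, and hence nearby thicknesses, become easier to distinguish.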

Another problem is that a path in a two-dimensional color space may have degeneracies, i.e., two different thicknesses can map to the same point in the color space. Increasing the dimensionality of the color space reduces the likelihood of such degeneracy. One technique is to use a camera (e.g., a hyperspectral camera) that produces images with four or more (e.g., six to twenty) color channels. Another technique is to use multiple cameras at different angles of incidence; because the length of the light path through a thin-film layer depends on the angle of incidence, each camera sees a different color.
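The pixel-to-thickness mapping can be sketched as follows. This is an illustrative reconstruction, not the patent's reference implementation: it assumes the four coordinates are the red and green chromaticities from each camera, and that the predetermined path is stored as a discrete list of points, each tagged with a thickness value.

```python
import numpy as np

def chromaticity(rgb):
    """Red and green chromaticity of an RGB value: R/(R+G+B), G/(R+G+B)."""
    rgb = np.asarray(rgb, dtype=float)
    total = max(rgb.sum(), 1e-9)  # guard against an all-black pixel
    return rgb[:2] / total

def thickness_from_pixel_pair(pixel_a, pixel_b, path_points, path_thickness):
    """Map one pixel pair (same substrate location, imaged at two
    angles of incidence) to a thickness value.

    path_points:    (N, 4) array, the predetermined path in the 4-D space.
    path_thickness: (N,)   array, thickness associated with each path point.
    """
    # Coordinates in the 4-D space: two chromaticities per camera.
    coord = np.concatenate([chromaticity(pixel_a), chromaticity(pixel_b)])
    # Position of the closest point on the predetermined path.
    nearest = np.argmin(np.linalg.norm(path_points - coord, axis=1))
    # The thickness "function" evaluated at that position.
    return path_thickness[nearest]
```

With a denser path, or with interpolation between the two nearest path points, the thickness estimate becomes correspondingly smoother.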

Referring to FIG. 1A, a polishing apparatus 100 includes an in-line (also referred to as sequential) optical metrology system 160, e.g., a color imaging system.

The polishing apparatus 100 includes one or more carrier heads 126, each configured to carry a substrate 10; one or more polishing stations 106; and a transfer station for loading substrates onto and unloading substrates from the carrier heads. Each polishing station 106 includes a polishing pad 130 supported on a platen 120. The polishing pad 130 may be a two-layer polishing pad with an outer polishing layer and a softer backing layer.

The carrier heads 126 may be suspended from a support 128 and movable between the polishing stations. In some embodiments, the support 128 is an overhead track and each carrier head 126 is coupled to a carriage 108 that is mounted to the track. The overhead track 128 allows each carriage 108 to be selectively positioned over the polishing stations 106 and the transfer station. Alternatively, in some embodiments, the support 128 is a rotatable carousel, and rotation of the carousel moves the carrier heads 126 simultaneously along a circular path.

Each polishing station 106 of the polishing apparatus 100 can include a port (e.g., at the end of an arm 134) to dispense a polishing liquid 136, such as an abrasive slurry, onto the polishing pad 130. Each polishing station 106 can also include a pad conditioning apparatus to abrade the polishing pad 130 in order to maintain the polishing pad 130 in a consistent abrasive state.

Each carrier head 126 is operable to hold a substrate 10 against the polishing pad 130. Each carrier head 126 can have independent control of polishing parameters (e.g., pressure) associated with each respective substrate. In particular, each carrier head 126 can include a retaining ring 142 to retain the substrate 10 below a flexible membrane 144. Each carrier head 126 also includes a plurality of independently controllable pressurizable chambers defined by the membrane (e.g., three chambers 146a-146c) that can apply independently controllable pressures to associated zones on the flexible membrane 144 and thus on the substrate 10. Although only three chambers are illustrated in FIG. 1A for ease of illustration, there could be one or two chambers, or four or more chambers, e.g., five chambers.

Each carrier head 126 is suspended from the support 128 and is connected by a drive shaft 154 to a carrier head rotation motor 156 so that the carrier head can rotate about an axis 127. Optionally, each carrier head 126 can oscillate laterally, e.g., by driving the carriage 108 on the track 128, or by rotational oscillation of the carousel itself. In operation, the platen rotates about its central axis 121, and each carrier head rotates about its central axis 127 and translates laterally across the top surface of the polishing pad. The lateral sweep is in a direction parallel to the polishing surface 212. The lateral sweep can be a linear or arcuate motion.

A controller 190, such as a programmable computer, is connected to each motor to independently control the rotation rates of the platen 120 and the carrier heads 126. For example, each motor can include an encoder that measures the angular position or rotation rate of the associated drive shaft. Similarly, the controller 190 is connected to an actuator in each carriage 108 and/or the rotation motor of the carousel to independently control the lateral motion of each carrier head 126. For example, each actuator can include a linear encoder that measures the position of the carriage 108 along the track 128.

Controller 190 may include a Central Processing Unit (CPU), memory, and support circuits such as input/output circuitry, power supplies, clock circuits, cache, and the like. The memory is connected to the CPU. The memory is a non-transitory computer readable medium and may be one or more of readily available memory, such as Random Access Memory (RAM), Read Only Memory (ROM), floppy disk, hard disk, or other forms of digital storage. Additionally, although illustrated as a single computer, the controller 190 may be a distributed system, e.g., including multiple independently operating processors and memories.

The in-line optical metrology system 160 is positioned within the polishing apparatus 100, but does not perform measurements during the polishing operation; rather, measurements are collected between polishing operations, e.g., while the substrate is being moved from one polishing station to another, or to or from the transfer station.

The in-line optical metrology system 160 includes a sensor assembly 161 supported at a position between two of the polishing stations 106, e.g., between two platens 120. In particular, the sensor assembly 161 is located at a position such that a carrier head 126 supported by the support 128 can position the substrate 10 over the sensor assembly 161.

In embodiments where the polishing apparatus 100 includes three polishing stations and the substrates are sequentially carried from the first polishing station to the second polishing station to the third polishing station, one or more sensor assemblies 161 may be positioned between the transfer station and the first polishing station, between the first polishing station and the second polishing station, between the second polishing station and the third polishing station, and/or between the third polishing station and the transfer station.

The sensor assembly 161 may include a light source 162, a light detector 164, and circuitry 166 for sending and receiving signals between the controller 190 and the light source 162 and light detector 164.

The light source 162 is operable to emit white light. In one embodiment, the white light emitted includes light having wavelengths of 200 to 800 nanometers. A suitable light source is an array of white light-emitting diodes (LEDs), or a xenon-mercury lamp. The light source 162 is oriented to direct light 168 onto the exposed surface of the substrate 10 at a non-zero angle of incidence α. The angle of incidence α can be, for example, about 30° to 75°, e.g., 50°.

The light source can illuminate a substantially linear elongated region that spans the width of the substrate 10. The light source 162 can include optics, e.g., a beam expander, to spread the light from the light source across the elongated region. Alternatively or in addition, the light source 162 can include a linear array of light sources. The light source 162 itself, as well as the region illuminated on the substrate, can be elongated and have a longitudinal axis parallel to the surface of the substrate.

The light 168 from the light source 162 may be partially collimated.

A diffuser 170 may be placed in the path of the light 168 or the light source 162 may include a diffuser to diffuse the light before it reaches the substrate 10.

The detector 164 can be a color camera that is sensitive to light from the light source 162. The detector 164 includes an array of detector elements 178 for each color channel. For example, the detector 164 can include a CCD array for each color channel. In some embodiments, the array is a single row of detector elements 178; for example, the camera can be a line scan camera. The row of detector elements can extend parallel to the longitudinal axis of the elongated region illuminated by the light source 162, or perpendicular to the direction of motion of the illuminated region across the substrate (FIG. 1A schematically illustrates the elements 178, but the elements 178 can be arranged in a line extending out of the plane of the illustration). In some embodiments, the detector is a prism-based color camera: a prism inside the detector 164 splits the beam 168 into three separate beams, each of which is sent to a separate array of detector elements.

Where the light source 162 includes a row of light emitting elements, the row of detector elements may extend along a first axis that is parallel to the longitudinal axis of the light source 162. The row of detector elements may comprise 1024 or more elements.

Whether the row of detector elements is positioned parallel or perpendicular should be determined taking into account any reflection of the light beam, e.g., by a fold mirror or from a face of a prism.

The detector 164 is provided with suitable focusing optics 172 to project a field of view of the substrate onto the array of detector elements 178. The field of view can be long enough to view the entire width of the substrate 10, e.g., 150 mm to 300 mm long. The sensor assembly 161, including the detector 164 and the associated optics 172, can be configured such that individual pixels correspond to a region having a length equal to or less than about 0.5 mm. For example, assuming that the field of view is about 200 mm long and that the detector 164 includes 1024 elements, an image generated by the line scan camera can have pixels about 0.2 mm in length. In general, the length resolution of the image is the length of the field of view (FOV) divided by the number of pixels onto which the FOV is imaged.
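The stated resolution rule works out as a one-line calculation; the 200 mm field of view and 1024-element count below are the example values from the text:

```python
def pixel_length_mm(field_of_view_mm, num_detector_elements):
    """Along-row length resolution: field-of-view length divided by the
    number of detector elements the field of view is imaged onto."""
    return field_of_view_mm / num_detector_elements

# A ~200 mm field of view on a 1024-element line scan camera gives
# pixels roughly 0.2 mm long.
```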

The detector 164 may also be configured such that the pixel width is comparable to the pixel length. For example, an advantage of a line scan camera is its very fast frame rate. The frame rate may be at least 5 kHz. The frame rate may be set to a frequency such that the pixel width is comparable to the pixel length, e.g., equal to or less than about 0.3mm, as the imaging area is scanned over the substrate 10. For example, the pixel width and length may be about 0.1mm to 0.2 mm.
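The cross-scan pixel width follows from the scan speed and the frame rate; the relation below (width equals speed divided by frame rate) is the implied geometry, and the 1000 mm/s scan speed is an assumed example value, not a figure from the text:

```python
def pixel_width_mm(scan_speed_mm_per_s, frame_rate_hz):
    """Cross-row pixel width: the distance the imaged line advances over
    the substrate between successive line captures."""
    return scan_speed_mm_per_s / frame_rate_hz

# At a 5 kHz frame rate, an assumed scan speed of 1000 mm/s yields
# 0.2 mm wide pixels, comparable to the pixel length.
```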

The light source 162 and light detector 164 may be supported on a stage 180. Where the light detector 164 is a line scan camera, the light source 162 and camera 164 may be moved relative to the substrate 10 so that the imaging area may be scanned over the entire length of the substrate. In particular, the relative motion may be in a direction parallel to the surface of the substrate 10 and perpendicular to the rows of detector elements of the line scan camera 164.

In some embodiments, the stage 180 is stationary and the carrier head 126 is moved, for example, by movement of the carriage 108 or by rotational oscillation of a turntable. In some embodiments, the stage 180 may be movable while the carrier head 126 remains stationary for image acquisition. For example, the stage 180 may be movable along a track 184 by a linear actuator 182. In either case, the light source 162 and camera 164 stay in a fixed position relative to each other as the scanned area moves across the substrate 10.

As another example, the substrate may be held by a robot and moved past a fixed sensor assembly 161. For example, in the case of a cassette interface unit or other factory interface unit, the substrate may be held by a robot that is used to transfer the substrate to or from a cassette (rather than being supported on a separate stage). The light detector may be a fixed element in the cassette interface unit (e.g., a line scan camera), and the robot may move the substrate past the light detector to scan the substrate and produce an image.

A possible advantage of having a line scan camera and light source moving together over the substrate is that (e.g. as compared to a conventional 2D camera) the relative angle between the light source and the camera remains constant for different positions on the wafer. Thus, artifacts caused by changes in viewing angle can be reduced or eliminated. In addition, line scan cameras can eliminate perspective distortion, whereas conventional 2D cameras exhibit inherent perspective distortion that subsequently needs to be corrected by image transformation.

The sensor assembly 161 may include a mechanism for adjusting the vertical distance between the substrate 10 and the light source 162 and detector 164. For example, the sensor assembly 161 may include an actuator for adjusting the vertical position of the stage 180.

Optionally, a polarizing filter 174 may be positioned in the optical path, for example, between the substrate 10 and the detector 164. The polarizing filter 174 may be a circular polarizer (CPL). A typical CPL is a combination of a linear polarizer and a quarter-wave plate. Proper orientation of the polarization axis of the polarizing filter 174 can reduce haze in the image and sharpen or enhance desirable visual features.

One or more baffles 188 may be placed near the detector 164 to prevent stray or ambient light from reaching the detector 164 (see fig. 1C). For example, the baffle may be substantially parallel to the beam 168 and extend around the area where the beam enters the detector 164. In addition, the detector 164 may have a narrow acceptance angle, for example, 1 ° to 10 °. These mechanisms can improve image quality by reducing the effects of stray or ambient light.

Assuming that the outermost layer on the substrate is a semi-transparent layer (e.g., a dielectric layer), the color of the light detected at detector 164 depends on, for example, the composition of the substrate surface, the substrate surface smoothness, and/or the amount of interference between light reflected from different interfaces of one or more layers (e.g., dielectric layers) on the substrate.

As described above, the light source 162 and the light detector 164 may be connected to a computing device (e.g., controller 190) operable to control operation of the light source 162 and the light detector 164 and to receive signals from the light source 162 and the light detector 164.

The in-line optical metrology system 160 is positioned within the polishing apparatus 100, but no measurements are performed during the polishing operation; rather, measurements are collected between polishing operations (e.g., while the substrate is moving from one polishing station to another or from or to a transfer station).

The in-line optical metrology system 160 includes a sensor assembly 161 supported at a position between two of the polishing stations 106 (e.g., between two platens 120). Specifically, the sensor assembly 161 is positioned such that a carrier head 126, supported by the support 128, can position the substrate 10 over the sensor assembly 161.

Referring to FIG. 1B, a polishing apparatus 100' includes an in-situ optical monitoring system 160', e.g., a color imaging system. The in-situ optical monitoring system 160' is configured similarly to the in-line optical metrology system 160, but the various optical components of the sensor assembly 161 (e.g., the light source 162, the light detector 164, the diffuser 170, the focusing optics 172, and the polarizing filter 174) may be positioned in a recess 122 in the platen 120. While the substrate 10 contacts and is polished by the polishing pad 130, the beam 168 may pass through a window 132 to impinge on the surface of the substrate 10. Rotation of the platen 120 sweeps the sensor assembly 161, and thus the beam 168, across the substrate 10. As the sensor assembly 161 is swept beneath the substrate 10, a 2D image may be reconstructed from the sequence of line images. The stage 180 is not required because the motion of the sensor assembly 161 is provided by the rotation of the platen 120.

Referring to fig. 2, the controller assembles the individual image lines from the light detector 164 (whether an in-line metrology system or an in-situ monitoring system) into a two-dimensional color image (step 200). As a color camera, the light detector 164 may include separate detector elements for each of the red, blue, and green colors. The two-dimensional color image may include monochrome images 204, 206, 208 for each of the red, blue, and green color channels.

The controller may apply an offset and/or gain adjustment to the intensity values of the image in each color channel (step 210). Each color channel may have a different offset and/or gain.

To set the gains, a reference substrate (e.g., a bare silicon wafer) may be imaged by the system 160, 160'. The gain for each color channel can then be set so that the reference substrate appears gray in the image. For example, the gains may be set such that the red, green, and blue channels all give the same 8-bit value, e.g., RGB (121, 121, 121) or RGB (87, 87, 87). Gain calibration may be performed for multiple systems using the same reference substrate.
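A minimal sketch of this per-channel gain calibration, assuming the reference image is an H×W×3 array; the function names and the gray target of 121 are illustrative, not part of the described system:

```python
import numpy as np

def compute_channel_gains(reference_image, target=121):
    """Per-channel gains chosen so that a reference substrate (e.g., a bare
    silicon wafer) averages to the same gray value in every channel."""
    means = reference_image.reshape(-1, 3).mean(axis=0)
    return target / means

def apply_gains(image, gains):
    # Scale each channel, round, and clip back into the 8-bit range
    return np.clip(np.rint(image * gains), 0, 255).astype(np.uint8)

# A reference wafer that reads slightly reddish before calibration
ref = np.full((4, 4, 3), (130.0, 120.0, 110.0))
gains = compute_channel_gains(ref, target=121)
calibrated = apply_gains(ref, gains)  # every channel now reads 121 (gray)
```

Reusing the same reference substrate across systems makes the gains, and hence the images, comparable between tools.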

Optionally, the image may be normalized (step 220). For example, the difference between the measured image and a standard predefined image may be calculated. For example, the controller may store a background image for each of the red, green, and blue color channels, and may subtract the background image from the measured image for each color channel. Alternatively, the measured image may be divided by the standard predefined image.

The image may be filtered to remove low-frequency spatial variations (step 230). In some implementations, the image is transformed from the red-green-blue (RGB) color space to a hue-saturation-luminance (HSL) color space, a filter is applied in the HSL color space, and the image is then transformed back to the RGB color space. For example, in the HSL color space, the luminance channel may be filtered to remove low-frequency spatial variations while the hue and saturation channels are left unfiltered. In some embodiments, the luminance channel is used to generate a filter that is then applied to the red, green, and blue images.

In some embodiments, the smoothing is performed only along the first axis. For example, the luminance values of the pixels along the direction of travel 186 may be averaged together to provide an average luminance value that is a function of position along the first axis only. Each row of image pixels may then be divided by a corresponding portion of the average luminance value as a function of position along the first axis.
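A minimal NumPy sketch of this scan-axis-only smoothing, assuming the luminance channel is a 2D array whose first axis is the direction of travel (the function name is illustrative):

```python
import numpy as np

def flatten_along_scan_axis(luminance):
    """Average the luminance over each row (one position along the travel
    direction) and divide the row by that average, removing low-frequency
    variation along the scan axis while preserving cross-scan detail."""
    profile = luminance.mean(axis=1, keepdims=True)  # function of axis 0 only
    return luminance / profile

# Rows get uniformly brighter along the scan; the pattern within a row stays
lum = np.outer([1.0, 2.0, 4.0], [0.5, 1.0, 1.5])
flat = flatten_along_scan_axis(lum)  # every row becomes [0.5, 1.0, 1.5]
```

The division removes the slowly varying illumination profile along the travel direction without disturbing the color contrast across the substrate.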

Color correction may be performed to increase color contrast in the image (step 235). Although illustrated as being after the filtering of step 230, color correction may be performed prior to the filtering but after the normalization of step 220. Additionally, color correction may be performed later, e.g., prior to the calculation of the thickness (in step 270).

Color correction may be performed by multiplying values in the color space by a color correction matrix. This can be expressed as the operation I_corrected = I_original × CCM, where I_original is the original uncorrected image, CCM is the color correction matrix, and I_corrected is the corrected image.

More formally, the color correction can be performed as a matrix multiplication represented by:

[I_C1  I_C2  I_C3] = [I_O1  I_O2  I_O3] × [ a11  a12  a13 ]
                                          [ a21  a22  a23 ]
                                          [ a31  a32  a33 ]

where I_O1, I_O2, and I_O3 are the original values of the three color channels from a color space (e.g., the HSL color space, the RGB color space, etc.), a11 through a33 are the values of the color correction matrix, and I_C1, I_C2, and I_C3 are the corrected values of the three color channels in the color space. Instead of a color correction matrix having constant values, a gamma function may be used.
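The per-pixel matrix multiplication described above can be sketched with NumPy; the particular contrast-boosting matrix values below are illustrative only, since a real CCM would be fit from a reference substrate with preselected colors:

```python
import numpy as np

def color_correct(image, ccm):
    """Apply a color correction matrix: each pixel's 1x3 channel vector is
    multiplied by the 3x3 CCM, i.e., I_corrected = I_original x CCM."""
    h, w, c = image.shape
    corrected = image.reshape(-1, c) @ ccm   # (H*W, 3) x (3, 3)
    return corrected.reshape(h, w, c)

# An example matrix that boosts each channel and subtracts a little of the others
ccm = np.array([[ 1.2, -0.1, -0.1],
                [-0.1,  1.2, -0.1],
                [-0.1, -0.1,  1.2]])
img = np.full((2, 2, 3), (100.0, 50.0, 25.0))
out = color_correct(img, ccm)  # each pixel becomes [112.5, 47.5, 15.0]
```

The same function generalizes to a 1 × N channel vector and an N × N matrix for the multi-beam case discussed later.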

As shown by FIGS. 9A and 9B, applying the color correction increases the spread of the histogram. This can make determination of the layer thickness easier, since the larger separation makes it easier to distinguish different points in the histogram. Accordingly, thickness resolution can be enhanced.

A color correction matrix can be generated by making color images of a reference substrate having a plurality of pre-selected colors. The value of each color channel is measured and then the best matrix for transforming the low contrast image into a higher contrast image is calculated.

The controller may analyze the image using image processing techniques to locate a wafer orientation feature 16 (e.g., wafer notch or wafer flat) on the substrate 10 (see fig. 4) (step 240). Image processing techniques may also be used to locate the center 18 of the substrate 10 (see fig. 4).

Based on this data, the image is transformed (e.g., scaled and/or rotated and/or translated) to a standard image coordinate system (step 250). For example, the image may be translated such that the wafer center is located at the center point of the image, and/or the image may be scaled such that the edge of the substrate is located at the edge of the image, and/or the image may be rotated such that there is a 0 ° angle between the x-axis of the image and the radial segment connecting the wafer center and the wafer orientation feature.

Optionally, an image mask may be applied to screen out portions of the image data (step 260). For example, referring to FIG. 3, an exemplary substrate 10 includes a plurality of dies 12, which may be separated by scribe lines 14. For some applications, it can be useful to process only the image data corresponding to the dies. In this case, referring to FIG. 4, the controller may store an image mask having unmasked areas 22 corresponding in spatial position to the dies 12 and masked areas 24 corresponding to the scribe lines 14. Image data corresponding to the masked areas 24 is not processed or not used during the thresholding step. Alternatively, the masked areas 24 may correspond to the dies, such that the unmasked areas correspond to the scribe lines; or the unmasked areas may be only a portion of each die, with the remainder of each die masked; or the unmasked areas may be one or more specific dies, with the remaining dies and the scribe lines masked; or the unmasked areas may be only a portion of the one or more specific dies, with the remainder of each die on the substrate masked. In some embodiments, a user may define the mask using a graphical user interface on the controller 190.
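Applying such a mask can be sketched with boolean indexing, assuming a boolean array that is True for unmasked (die) pixels; the names are illustrative:

```python
import numpy as np

def unmasked_pixels(image, mask):
    """Return only the pixels in unmasked (die) regions as an (N, 3) array;
    `mask` is a boolean array, True where the pixel belongs to a die."""
    return image[mask]

# Toy 4x4 image with a 2x2 "die" in the upper-left corner
image = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
pixels = unmasked_pixels(image, mask)  # only the 4 die pixels remain
```

Subsequent steps (scatter-plot construction, thickness calculation) then operate only on the returned pixel list.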

The value representing the thickness may be calculated using the color data at this stage (step 270). This value may be a thickness, or an amount of material that has been removed, or a value that indicates an amount of progress of the polishing process (e.g., as compared to a reference polishing process). The calculation may be performed for each non-occluded pixel in the image. This value can then be used in a feed-forward or feedback algorithm for controlling the polishing parameters, thereby providing improved thickness uniformity. For example, the value of each pixel may be compared to a target value to produce an error signal image, and this error signal image may be used for feed-forward or feedback control.

Some background helpful in understanding the calculation of the value representing thickness will now be discussed. For any given pixel from a color image, a pair of values corresponding to two color channels can be extracted from the color data of the given pixel. Each pair of values thus defines a coordinate in the coordinate space of a first color channel and a different second color channel. Possible color channels include hue, saturation, luminance, X, Y, Z (e.g., from the CIE 1931 XYZ color space), red chromaticity, green chromaticity, and blue chromaticity. The values for these color channels can be computed from tuples of values from other channels according to known algorithms (e.g., X, Y, and Z can be computed from R, G, and B).

Referring to FIG. 5, for example, when polishing begins, the pair of values (e.g., (V1_0, V2_0)) defines an initial coordinate 502 in the coordinate space 500 of the two color channels. As polishing progresses, the spectrum of the reflected light changes, so the color composition of the light changes and the values (V1, V2) in the two color channels change. Consequently, the coordinate position within the coordinate space of the two color channels changes as polishing progresses, tracing out a path 504 in the coordinate space 500.

Referring to FIGS. 6 and 7, to calculate the value representing thickness, a predetermined path 604 in the coordinate space 500 of the two color channels is stored, e.g., in a memory of the controller 190 (step 710). The predetermined path is generated prior to measurement of the substrate. The path 604 may travel from a start coordinate 602 to an end coordinate 606. The path 604 may represent the entire polishing process, with the start coordinate 602 corresponding to the starting thickness of a layer on the substrate and the end coordinate 606 corresponding to the final thickness of the layer. Alternatively, the path may represent only a portion of the polishing process, e.g., the expected distribution of layer thicknesses across the substrate at the polishing endpoint.

In some embodiments, to create the predetermined path 604, a set-up substrate is polished to approximately the target thickness to be used for device substrates. A color image of the set-up substrate is obtained using the optical metrology system 160 or the optical monitoring system 160'. Because the polishing rate across the substrate is typically not uniform, different locations on the substrate will have different thicknesses, and therefore reflect different colors, and therefore have different coordinates within the coordinate space of the first and second color channels.

Referring to FIG. 8, a two-dimensional (2D) histogram is calculated using the pixels in the unmasked area. That is, using the color-corrected image of the set-up substrate, a scatter plot 800 is generated in the coordinate space of the first and second color channels from the coordinate values of some or all of the pixels in the unmasked portion. Each point 802 in the scatter plot is the pair of values (V1, V2) of the two color channels for a particular pixel. The scatter plot 800 may be displayed on a display of the controller 190 or of another computer.

As described above, possible color channels include hue, saturation, luminance, X, Y, Z (e.g., from the CIE 1931 XYZ color space), red chromaticity, green chromaticity, and blue chromaticity. In some embodiments, the first color channel is the red chromaticity (r) and the second color channel is the green chromaticity (g), which may be defined by r = R/(R+G+B) and g = G/(R+G+B), where R, G, and B are the intensity values of the red, green, and blue color channels of the color image.
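Computing the (r, g) chromaticity coordinates per pixel from the definitions above can be sketched as follows (the zero-sum guard is an added practical detail, not part of the definition):

```python
import numpy as np

def chromaticity(image):
    """Convert RGB intensities to (r, g) chromaticity coordinates:
    r = R/(R+G+B), g = G/(R+G+B). Each pixel yields one (r, g) value pair,
    i.e., the coordinates used for the scatter plot."""
    rgb = image.astype(float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0          # avoid dividing by zero on black pixels
    norm = rgb / total
    return norm[..., 0], norm[..., 1]

img = np.array([[[120, 60, 20], [50, 50, 100]]], dtype=float)
r, g = chromaticity(img)  # pixel 0 -> (0.6, 0.3); pixel 1 -> (0.25, 0.25)
```

Because r and g are normalized by total intensity, they are largely insensitive to overall brightness variations, which is what makes them convenient scatter-plot axes.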

The thickness path 604 may be created manually by a user (e.g., an operator of the semiconductor manufacturing facility) using a graphical user interface in conjunction with a computer (e.g., the controller 190). For example, while the scatter plot is displayed, the user can manually construct a path that follows and overlays the scatter plot, e.g., by using mouse operations to click on selected points displayed in the scatter plot.

Alternatively, the thickness path 604 may be automatically generated using software designed to analyze the set of coordinates in the scatter plot and generate a path that fits the points in the scatter plot 800, for example, using topological skeletonization.
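One crude, NumPy-only stand-in for such automatic path fitting (a real implementation might use topological skeletonization of the 2D histogram, which is not shown here) bins the scatter points along their principal axis and takes a median per bin; all names and parameters are illustrative:

```python
import numpy as np

def fit_path(points, n_vertices=5):
    """Fit a rough polyline through a 2D scatter: project the points onto
    their principal axis, split into bins, and take each bin's median."""
    centered = points - points.mean(axis=0)
    # Principal axis = eigenvector of the covariance with largest eigenvalue
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    axis = vecs[:, -1]
    if axis[0] < 0:
        axis = -axis                         # fix the arbitrary eigenvector sign
    t = centered @ axis                      # position along the main axis
    order = np.argsort(t)
    bins = np.array_split(order, n_vertices)
    return np.array([np.median(points[b], axis=0) for b in bins])

# Noisy points along a diagonal line in (r, g) space
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 200)
pts = np.column_stack([0.2 + 0.5 * t, 0.6 - 0.3 * t]) + rng.normal(0, 0.005, (200, 2))
path = fit_path(pts, n_vertices=5)
```

The resulting vertices can then serve as the polyline representation of the thickness path described below.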

The thickness path 604 may be provided by a variety of functions, for example, a single line, a polyline of multiple segments, one or more circular arcs, one or more Bezier curves, and the like. In some implementations, the thickness path 604 is provided by a polyline, i.e., a set of line segments drawn between discrete vertices in the coordinate space.

Returning to FIG. 6, the function provides a relationship between the location on the predetermined thickness path 604 and the thickness value. For example, the controller 190 may store a first thickness value for a start point 602 of the predetermined thickness path 604, and a second thickness value for an end point 606 of the predetermined thickness path 604. The first and second thickness values may be obtained by measuring the thickness of the substrate layer using a conventional thickness metrology system at locations corresponding to pixels providing points 802 closest to the start point 602 and the end point 606, respectively.

In operation, the controller 190 may calculate the value representing the thickness of a given point 610 on the path 604 by interpolating between the first value and the second value based on the distance along the path 604 from the start point 602 to the given point 610. For example, the controller can calculate the thickness T of a given point 610 according to:

T = T1 + (T2 − T1) × (D / L)

where T1 is the thickness value of the start point 602, T2 is the thickness value of the end point 606, L is the total distance along the path between the start point 602 and the end point 606, and D is the distance along the path between the start point 602 and the given point 610.

As another example, controller 190 may store the thickness value for each vertex on predetermined thickness path 604 and calculate a value representing the thickness for a given point on the path based on an interpolation between the two closest vertices. For this configuration, various values for the vertices may be obtained by measuring the thickness of the substrate layer at locations corresponding to the pixels providing the point 802 closest to the vertex using a conventional thickness metrology system.

Other functions relating position on the path to thickness are possible.

In addition, rather than measuring the thickness of the set-up substrate using a metrology system, the thickness values may be obtained by calculations based on an optical model.

The thickness values may be actual thickness values if they are generated by theoretical simulation or by empirical learning based on a known set-up wafer. Alternatively, the thickness value at a given point on the predetermined thickness path may be a relative value, for example, relative to the degree of polishing of the substrate. This latter value may be scaled in downstream processes to obtain an empirical value, or may simply be used to indicate an increase or decrease in thickness without specifying an absolute thickness value.

Referring to fig. 6 and 7, for a pixel analyzed from an image of a substrate, values of two color channels are extracted from color data of the pixel (step 720). This provides coordinates 620 in the coordinate system 600 for the two color channels.

Next, the point (e.g., point 610) on the predetermined thickness path 604 closest to the coordinates 620 of the pixel is calculated (step 730). In this context, "closest" does not necessarily indicate geometric perfection. The "closest" point may be defined in various ways, and limitations in processing power, the selection of a search function for ease of computation, the presence of multiple local minima in the search function, etc., may prevent determination of the geometric ideal while still providing results that are good enough for use. In some implementations, the closest point is defined as the point on the thickness path 604 at which a vector normal to the path passes through the coordinates 620 of the pixel. In some embodiments, the closest point is calculated by minimizing the Euclidean distance.

Next, a value representing the thickness is calculated from the function based on the position of the point 610 on the path 604, as discussed above (step 740). The closest point is not necessarily one of the vertices of the polyline; in that case, as described above, interpolation may be used to obtain the thickness value (e.g., simple linear interpolation between the nearest vertices of the polyline).
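Steps 730 and 740 can be sketched together for a polyline path: project the pixel's coordinates onto each segment, keep the nearest projection, and interpolate the thickness between that segment's vertices. The vertex coordinates and thickness values below are hypothetical:

```python
import numpy as np

def project_to_path(coord, vertices, thicknesses):
    """Find the closest point on a polyline to `coord` (by Euclidean
    distance) and linearly interpolate the thickness between the two
    vertices bounding that segment."""
    best = (np.inf, None)
    for i in range(len(vertices) - 1):
        a, b = vertices[i], vertices[i + 1]
        seg = b - a
        # Fraction along the segment of the perpendicular foot, clamped to [0, 1]
        f = np.clip(np.dot(coord - a, seg) / np.dot(seg, seg), 0.0, 1.0)
        point = a + f * seg
        d = np.linalg.norm(coord - point)
        if d < best[0]:
            t = thicknesses[i] + f * (thicknesses[i + 1] - thicknesses[i])
            best = (d, t)
    return best[1]

# Hypothetical two-segment path in (r, g) space with a thickness per vertex
verts = np.array([[0.2, 0.6], [0.4, 0.5], [0.7, 0.5]])
thick = [1000.0, 800.0, 500.0]                     # e.g., nanometers
t = project_to_path(np.array([0.3, 0.58]), verts, thick)
```

Because the function only uses dot products and norms, the same code works unchanged for coordinate spaces of three or more dimensions.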

By repeating steps 720 to 740 for some or all of the pixels in the color image, a map of the thickness of the substrate layer may be generated.

For some layer stacks on the substrate, the predetermined thickness path will intersect itself, which results in a condition referred to as degeneracy. A degenerate point (e.g., point 650) on the predetermined thickness path has two or more thickness values associated therewith. Thus, without some additional information, it may not be possible to know which thickness value is the correct value. However, it is possible to analyze the nature of the coordinate clusters associated with pixels from a given physical region on the substrate (e.g., within a given die) and use this additional information to account for degeneracy. For example, it may be assumed that the measurements within a given small area of the substrate do not vary significantly and will therefore occupy a smaller portion along the scatter plot, i.e. not extend along both branches.

As such, the controller can analyze the cluster of coordinates associated with pixels from a given physical region on the substrate that surrounds the pixel for which the degeneracy needs to be resolved. Specifically, the controller may determine the principal axis of the cluster in the coordinate space. The branch of the predetermined thickness path that is most nearly parallel to the principal axis of the cluster can be selected and used to calculate the value representing the thickness.
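A sketch of this branch selection, assuming the two branch directions at the degenerate point are known; the principal axis is taken from the cluster's covariance, and the branch whose direction is most parallel to it (largest |cos|) wins:

```python
import numpy as np

def pick_branch(cluster, branch_dirs):
    """Resolve a degenerate path point: find the cluster's principal axis
    and return the index of the branch direction most parallel to it."""
    centered = cluster - cluster.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    axis = vecs[:, -1]                      # dominant direction of the cluster
    # |cos angle| between the axis and each (unit-normalized) branch direction
    scores = [abs(np.dot(axis, d / np.linalg.norm(d))) for d in branch_dirs]
    return int(np.argmax(scores))

# A cluster elongated along (1, 1); two branches cross at the degenerate point
rng = np.random.default_rng(1)
spread = rng.normal(0, 0.02, 100)
cluster = np.column_stack([0.5 + spread, 0.5 + spread]) + rng.normal(0, 0.002, (100, 2))
branch = pick_branch(cluster, [np.array([1.0, 1.0]), np.array([1.0, -1.0])])
```

Using the absolute cosine makes the choice insensitive to the arbitrary sign of the eigenvector.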

Returning to fig. 2, optionally, a uniformity analysis may be performed for each area of the substrate (e.g., each die) or for the entire image (step 280). For example, the value of each pixel may be compared to a target value, and the total number of "failed" pixels within the die (i.e., pixels that do not meet the target value) may be calculated for the die. This total may be compared to a threshold to determine whether the die is acceptable, e.g., if the total is less than the threshold, the die is marked as acceptable. This gives a pass/fail indication for each die.

As another example, the total number of "failed" pixels within the unmasked area of the substrate may be calculated. This total may be compared to a threshold to determine whether the substrate is acceptable, e.g., if the total is less than the threshold, the substrate is marked as acceptable. The threshold may be set by the user. This gives a pass/fail indication for the substrate.
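The per-region pass/fail tally described in the two paragraphs above can be sketched as follows; the target, tolerance, and failure threshold are hypothetical user settings:

```python
import numpy as np

def region_passes(thickness_map, region_mask, target, tolerance, max_failed):
    """Count pixels in one region (a die, or the unmasked substrate area)
    whose thickness misses the target by more than `tolerance`; the region
    passes if the count stays under `max_failed`."""
    region = thickness_map[region_mask]
    failed = int(np.count_nonzero(np.abs(region - target) > tolerance))
    return failed < max_failed, failed

# Toy thickness map: one die region containing two out-of-spec pixels
tmap = np.full((4, 4), 800.0)
tmap[0, 0] = 700.0
tmap[1, 1] = 920.0
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
ok, n_failed = region_passes(tmap, mask, target=800.0, tolerance=50.0, max_failed=3)
```

Running the same function once per die mask gives the per-die pass/fail indications; running it with the full unmasked-area mask gives the substrate-level indication.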

In the event that a die or wafer is determined to be "failed," controller 190 can generate an alarm or cause polishing system 100 to take corrective action. For example, an audible or visual alarm may be generated, or a data file may be generated indicating that a particular die is not available. As another example, the substrate may be sent back for rework.

In contrast to spectral processing, which typically represents pixels by 1024 or more intensity values, in a color image, a pixel may be represented by only three intensity values (red, green, and blue), and only two color channels are needed for the computation. Therefore, the computational load for processing the color image is significantly reduced.

However, in some embodiments, the light detector 164 is a spectrometer rather than a color camera. For example, the light detector may include a hyperspectral camera. Such a spectral camera may produce intensity values for 30 to 200 (e.g., 100) different wavelengths per pixel. In this case, rather than using value pairs in a two-dimensional color space as described above, the techniques (steps 210-270) are applied to an image having an N-dimensional color space with N color channels, where N is significantly greater than 2, e.g., 10 to 1000. For example, the thickness path 604 may be a path in the N-dimensional color space.

In some embodiments, the number of dimensions of the color space and the number of color channels are not reduced during the subsequent steps; each dimension corresponds to a wavelength at which intensity values are measured by the hyperspectral camera. In some embodiments, the color space is reduced to, for example, 10 to 100 dimensions and channels. The number of channels may be reduced by selecting only certain channels (e.g., certain wavelengths) or by combining channels (e.g., combining, such as averaging, the measured intensity values of multiple wavelengths). In general, a larger number of channels reduces the likelihood of degeneracy in the path, but at greater computer processing cost. The appropriate number of channels may be determined empirically.
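The channel-combining variant (averaging groups of adjacent wavelengths) can be sketched as below; the band counts are illustrative:

```python
import numpy as np

def reduce_channels(spectral_image, n_out):
    """Reduce an H x W x N spectral image to n_out channels by averaging
    consecutive groups of wavelength bands."""
    groups = np.array_split(np.arange(spectral_image.shape[-1]), n_out)
    return np.stack([spectral_image[..., g].mean(axis=-1) for g in groups],
                    axis=-1)

# A 100-wavelength hyperspectral pixel reduced to 10 averaged channels
spec = np.arange(100, dtype=float).reshape(1, 1, 100)
reduced = reduce_channels(spec, 10)  # channel 0 = mean of bands 0..9 = 4.5
```

Selecting specific wavelengths instead of averaging would simply index `spectral_image[..., chosen_bands]`.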

Another technique for increasing the dimensionality of the color image is to use multiple beams with different angles of incidence. Such embodiments may be configured similarly to FIGS. 1A and 1B, except as described below. Referring to FIG. 1C, the sensor assembly 161 (of the in-line metrology system 160 or the in-situ monitoring system 160') may include a plurality of light sources, e.g., two light sources 162a, 162b. Each light source produces a light beam (e.g., light beams 168a and 168b) that is directed toward the substrate 10 at a different angle of incidence. The incident angles of the beams 168a and 168b may be at least 5° apart, e.g., at least 10° apart, e.g., at least 20° apart. As shown in FIG. 1C, the beams 168a, 168b may impinge on the same area of the substrate 10, e.g., coincide on the substrate 10. Alternatively, the beams may impinge on different regions, e.g., partially overlapping but not completely overlapping regions, or non-overlapping regions.

The beams 168a, 168b reflect from the substrate 10, and intensity values for a plurality of colors are measured at a plurality of pixels by two different arrays of detector elements 178a, 178b, respectively. As shown in FIG. 1C, the detector elements 178a, 178b may be provided by different light detectors 164a, 164b. For example, the two detectors 164a, 164b may each be a color line scan camera. However, in some embodiments, there is a single light detector having a two-dimensional array of detector elements, and the beams 168a, 168b strike different areas of the array. For example, the detector may be a 2D color camera.

Using two light beams with different angles of incidence effectively doubles the dimension of the color image. For example, using two light beams 168a, 168b, where each light detector 164a, 164b is a color camera, each detector will output a color image through three color channels (e.g., red, blue, and green color channels, respectively) for a total of six color channels. This provides a larger number of channels and reduces the possibility of degeneracy in the path, but still has manageable processing costs.

Although fig. 1C illustrates each beam 168a, 168b as having its own optical components (e.g., diffuser 170, focusing optics 172, and polarizer 174), it is also possible for the beams to share some components. For example, a single diffuser 170 and/or a single polarizer 174 may be placed in the path of the two beams 168a, 168 b. Similarly, although multiple light sources 162a, 162b are shown, light from a single light source may be split (e.g., by a partially reflective mirror) into multiple beams.

The color correction may be scaled to the number of channels. For the color correction step, rather than I_original being a 1 × 3 matrix and the CCM a 3 × 3 matrix, I_original may be a 1 × N matrix and the CCM an N × N matrix. For example, for embodiments in which two beams are incident at different angles and measured by two color cameras, I_original may be a 1 × 6 matrix and the CCM a 6 × 6 matrix.

In general, data such as a calculated thickness of a layer on the substrate can be used to control one or more operating parameters of the CMP apparatus. Operating parameters include, for example, platen rotational speed, substrate rotational speed, the polishing path of the substrate, the speed of the substrate across the pad, the pressure exerted on the substrate, the slurry composition, the slurry flow rate, and the temperature at the substrate surface. The operating parameters can be controlled in real time and can be adjusted automatically without further human intervention.

As used in this specification, the term substrate can include, for example, a product substrate (e.g., one that includes a plurality of memory or processor dies), a test substrate, a bare substrate, and a gating substrate. The substrate can be at various stages of integrated circuit fabrication, e.g., the substrate can be a bare wafer, or it can include one or more deposited and/or patterned layers. The term substrate can include circular disks and rectangular sheets.

The color image processing techniques described above can be particularly useful in the context of 3D vertical NAND (VNAND) flash memory. In particular, the layer stacks used in VNAND fabrication are so complex that current metrology methods (e.g., Nova spectrum analysis) may not perform with sufficient reliability in detecting regions of improper thickness. In contrast, the color image processing techniques can have superior throughput.

Embodiments of the invention and all of the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural elements disclosed in this specification and structural equivalents thereof, or in combinations of them. Embodiments of the invention can be implemented as one or more computer program products, i.e., one or more computer programs tangibly embodied in a non-transitory machine-readable storage medium for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple processors or computers).

Terms of relative positioning are used to denote the positioning of components of the system relative to each other, not necessarily with respect to gravity; it should be understood that the polishing surface and the substrate can be held in a vertical orientation or some other orientation.

A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made. For example:

Instead of a line scan camera, a camera that images the entire substrate may be used. In this case, no movement of the camera relative to the substrate is required.

The camera may cover less than the entire width of the substrate. In this case, the camera needs to be moved in two perpendicular directions (e.g., supported on an X-Y table) in order to scan the entire substrate.

The light source may illuminate the entire substrate. In this case, the light source does not need to be moved relative to the substrate.

Although coordinates represented by value pairs in a two-dimensional coordinate space are discussed above, the techniques are applicable to coordinate spaces having three or more dimensions defined by three or more color channels.

The sensor assembly need not be an in-line system positioned between the polishing stations or between a polishing station and a transfer station. For example, the sensor assembly could be positioned within the transfer station, in a cassette interface unit, or as a stand-alone system.

The uniformity analysis step is optional. For example, the image produced by applying a threshold transformation may be fed into a feed-forward process to adjust later processing steps for the substrate, or into a feedback process to adjust the processing steps for subsequent substrates.

For in situ measurements, rather than constructing an image, the monitoring system may simply detect the color of the white light beam reflected from a spot on the substrate and use this color data to determine the thickness at this spot using the techniques described above.

Although the description focuses on polishing, the techniques may be applied to other kinds of semiconductor manufacturing processes where layers are added or removed and can be optically monitored, such as etching (e.g., wet or dry etching), deposition (e.g., Chemical Vapor Deposition (CVD), Physical Vapor Deposition (PVD), or Atomic Layer Deposition (ALD)), spin-on dielectrics, or photoresist coatings.

Accordingly, other implementations are within the scope of the following claims.
