Dimension measuring apparatus, dimension measuring method, and semiconductor manufacturing system

Document No.: 789175 | Publication date: 2021-04-09

Description: This technique, "Dimension measuring apparatus, dimension measuring method, and semiconductor manufacturing system," was created by 松田航平 and filed on 2019-08-07. The present disclosure relates to a dimension measuring apparatus that shortens the time required for dimension measurement and eliminates operator-induced errors. A dimension measuring apparatus that measures the dimension of a measurement object using an input image is proposed, wherein: a 1st image in which each region of the input image is labeled region by region is generated by machine learning; an intermediate image containing markers that indicate the regions of the 1st image is generated based on the generated 1st image; a 2nd image in which each region of the input image is labeled region by region is generated based on the input image and the generated intermediate image; the coordinates of the boundary lines between adjacent regions are obtained using the generated 2nd image; the coordinates of the feature points that define the dimension conditions of the measurement object are obtained using the obtained boundary-line coordinates; and the dimension of the measurement object is measured using the obtained feature-point coordinates.

1. A dimension measuring apparatus for measuring a dimension of a measurement object using an input image,

generating, by machine learning, a 1st image in which each region of the input image is labeled region by region,

generating, based on the generated 1st image, an intermediate image containing markers that indicate the regions of the 1st image,

generating, based on the input image and the generated intermediate image, a 2nd image in which each region of the input image is labeled region by region,

obtaining coordinates of the boundary lines between adjacent regions using the generated 2nd image,

obtaining coordinates of feature points that define the dimension conditions of the measurement object using the obtained boundary-line coordinates,

and measuring the dimension of the measurement object using the obtained feature-point coordinates.

2. The dimension measuring apparatus according to claim 1, wherein

the intermediate image is an image showing regions in which each region of the 1st image has been redefined by enlargement processing or reduction processing.

3. The dimension measuring apparatus according to claim 1, wherein

the coordinates of the feature points are obtained based on a feature point detection target defined with respect to the obtained coordinates of the boundary line, a feature point detection range defined with respect to those coordinates, and a feature point detection direction defined with respect to those coordinates.

4. The dimension measuring apparatus according to claim 1, wherein

the machine learning is a semantic segmentation method using deep learning, and

the 2nd image is generated by a semantic segmentation method using luminance information.

5. The dimension measuring apparatus according to claim 4, wherein

the semantic segmentation method using luminance information is a watershed transform algorithm or a graph cut algorithm.

6. The dimension measuring apparatus according to claim 1, wherein

the input image is a cross-sectional scanning electron microscope (SEM) image.

7. The dimension measuring apparatus according to claim 1, wherein

the input image is a transmission electron microscope (TEM) image.

8. A semiconductor manufacturing system in which a processing apparatus that processes a sample, an evaluation apparatus that captures the processing result of the processing apparatus as an image, and a dimension measuring apparatus that measures a dimension of a measurement object using, as the input image, the image captured by the evaluation apparatus are connected via a network,

wherein, in the dimension measuring apparatus,

generating, by machine learning, a 1st image in which each region of the input image is labeled region by region,

generating, based on the generated 1st image, an intermediate image containing markers that indicate the regions of the 1st image,

generating, based on the input image and the generated intermediate image, a 2nd image in which each region of the input image is labeled region by region,

obtaining coordinates of the boundary lines between adjacent regions using the generated 2nd image,

obtaining coordinates of feature points that define the dimension conditions of the measurement object using the obtained boundary-line coordinates,

and measuring the dimension of the measurement object using the obtained feature-point coordinates.

9. The semiconductor manufacturing system according to claim 8, wherein

the processing apparatus is a plasma etching apparatus, and

the evaluation apparatus is a cross-sectional SEM apparatus.

10. A dimension measuring method for measuring a dimension of a measurement object using an input image,

comprising the following steps:

generating, by machine learning, a 1st image in which each region of the input image is labeled region by region;

generating, based on the generated 1st image, an intermediate image containing markers that indicate the regions of the 1st image;

generating, based on the input image and the generated intermediate image, a 2nd image in which each region of the input image is labeled region by region;

obtaining coordinates of the boundary lines between adjacent regions using the generated 2nd image;

obtaining coordinates of feature points that define the dimension conditions of the measurement object using the obtained boundary-line coordinates; and

measuring the dimension of the measurement object using the obtained feature-point coordinates.

Technical Field

The invention relates to a dimension measuring apparatus, a dimension measuring method and a semiconductor manufacturing system.

Background

In recent years, new materials have been introduced into semiconductor devices for the purpose of improving the performance of the semiconductor devices, and the structures of the semiconductor devices have been made three-dimensional and complicated. In addition, in the current processing of advanced semiconductor devices, nanometer-scale precision is required. Therefore, the semiconductor processing apparatus needs to be able to process a plurality of materials into various shapes with extremely high precision. In order to process a plurality of materials with extremely high precision, it is necessary to objectively measure the shape of each material, to convert the shape into numerical values, and to optimize the processing method. On the other hand, in response to the progress of three-dimensional and complicated structures of semiconductor devices, the demand for measurement has also sharply increased, and the demand for performing multipoint measurement with high accuracy in a short time has been increasing.

In measurement for such high-precision machining, an image of a cross section of a processed sample is generally acquired by SEM (Scanning Electron Microscope), TEM (Transmission Electron Microscope), or the like, and regions in the image are measured manually. However, when manual measurement is applied to an advanced process, the structure of the processed sample becomes complicated and the number of measurement points per image increases, so manual dimension extraction is reaching its limit.

In addition, manual measurement introduces operator dependency into the measured values. Further, even in an image of a repeating line/space unit pattern, measurement is performed pattern by pattern, so human error is added to the statistics of the measured values on top of the process variation. Moreover, while optimization of the machining conditions progresses, if a measurement site more important than the originally planned one is found, every image already measured must be re-measured.

In response to these problems, if dimension measurement can be automated, the time required for measurement will be shortened significantly and the machined shape can be grasped more accurately.

Various solutions to these problems have been proposed. For example, Japanese Re-publication of PCT International Publication No. 2018-189877 (patent document 1) describes: "To provide a charged particle beam device capable of calculating the features of a cell more objectively and accurately from an observation image of the cell and evaluating the cell. The charged particle beam device includes: an image acquisition unit 18 that acquires an image of the cell; a contour extraction unit 19 that obtains a contour from the image; a feature value calculation unit 20 that calculates feature values of the shape of the contour and of internal structures, such as cytoplasm, contained in the region inside the contour; and a determination unit 21 that determines the quality and/or functionality of the cell based on the feature values, so that the quality and/or functionality of cells contained in the captured image can be evaluated objectively and accurately."

Further, Japanese Patent Application Laid-Open No. 2012-68138 (patent document 2) describes: "Contour coordinate data of a cross section is extracted by image processing of a pattern cross-section image; coordinate values corresponding to the top and bottom of the pattern are extracted from the contour coordinate data; and the height of the pattern, the coordinate values of the two points of the measurement range, and the height of the measurement range are calculated. A luminance distribution signal in the x direction corresponding to the coordinate values of the two points is acquired, and signal components in the range corresponding to the white-shadow portion peculiar to cross-sectional SEM images are removed from the signal. For these signals, the distance between the two signals is calculated by applying a cross-correlation method, and the sidewall angle is calculated."

Further, Japanese Patent Application Laid-Open No. 2002-350127 (patent document 3) describes: "A step of specifying an edge detection range and the number of edge points for each side of a pattern in a microscope image to set edge detection reference lines; searching in the perpendicular direction from the set edge detection reference lines and extracting luminance change points, i.e., edge points, from the image information; calculating lines approximating each side of the pattern based on the position information of the plurality of edge points; and determining the shape of the pattern from the approximate-line information of each side of the pattern using the intersection of two lines, a specific point calculated from a plurality of intersections, the angle formed by two lines, and the distance between two determined points."

Prior art documents

Patent document

Patent document 1: Japanese Re-publication of PCT International Publication No. 2018-189877

Patent document 2: Japanese Patent Application Laid-Open No. 2012-68138

Patent document 3: Japanese Patent Application Laid-Open No. 2002-350127

Disclosure of Invention

In the above-mentioned patent document 1, it is assumed that the contour of the object in the image can be extracted appropriately. However, in shape measurement of semiconductor device structures, proper extraction of the contour itself is difficult in many cases. For example, in a cross-sectional SEM image, secondary electrons are emitted from the sample surface lying in the depth direction behind the observed cross section, so a white-shadow portion may appear at the interface between the vacuum region and the semiconductor device region, and this white shadow can make contour extraction difficult. Therefore, even if the means described in patent document 1 is applied to shape measurement of semiconductor device structures, good measurement results may not be obtained.

Patent documents 2 and 3 each describe means for specifying edge points and contours of an object using the luminance values of an image, but these involve visual operations, so operator dependency remains. Moreover, working time is required because the operator processes the images one by one. In addition, when a dimension measurement site is to be added or changed at a later stage, re-measurement must be performed visually from the images.

Further, when measuring an object contained in a cross-sectional SEM image (or a cross-sectional TEM image), there are additional problems: the brightness differs from image to image, internal structures not needed for the dimension measurement are captured, and the boundary of the interface between the different materials to be measured is unclear. Therefore, with the methods of patent documents 2 and 3, which rely on edge detection based only on luminance values, operations such as specifying interface positions by visual judgment become necessary.

Therefore, none of patent documents 1 to 3 considers automatically providing highly accurate measurement results for cross-sectional SEM images (or cross-sectional TEM images). A method is thus desired that achieves fully automatic measurement without visual adjustment by applying an image recognition technique using machine learning, in particular deep learning, and that extracts the contour of an object by recognizing the region of each object captured in the image rather than a local brightness distribution.

On the other hand, image recognition techniques using deep learning also present a problem: the measurement results depend on the measurement model, that is, on the learning data set and the learning model. When different learning data sets or different learning models are used, the inference results generally differ even for the same input image, so the results cannot be compared directly. That is, although measurement is performed precisely in order to compare different images quantitatively, for the reasons described above there is a contradiction in that such comparison cannot be performed.

Therefore, an object of the present invention is to provide a measuring means that automatically measures a desired dimension from a cross-sectional SEM image by combining machine learning (for example, an image recognition technique based on deep learning) with a luminance-based image processing technique, thereby shortening the dimension measurement time and eliminating errors caused by the operator or by deep learning.

Means for solving the problems

In order to solve the above problem, a representative dimension measuring apparatus according to the present invention measures a dimension of a measurement object using an input image, and: generates, by machine learning, a 1st image in which each region of the input image is labeled region by region; generates, based on the generated 1st image, an intermediate image containing markers that indicate the regions of the 1st image; generates, based on the input image and the generated intermediate image, a 2nd image in which each region of the input image is labeled region by region; obtains the coordinates of the boundary lines between adjacent regions using the generated 2nd image; obtains the coordinates of feature points that define the dimension conditions of the measurement object using the obtained boundary-line coordinates; and measures the dimension of the measurement object using the obtained feature-point coordinates.

Advantageous Effects of Invention

According to the present invention, it is possible to provide a measuring means that automatically measures a desired dimension from a cross-sectional SEM image by combining machine learning (for example, an image recognition technique based on deep learning) with a luminance-based image processing technique, thereby shortening the dimension measurement time and eliminating errors caused by the operator or by deep learning.

Problems, structures, and effects other than those described above will become apparent from the following description of embodiments for carrying out the invention.

Drawings

Fig. 1 is a block diagram of a computer system for implementing an embodiment of the present invention.

Fig. 2 is a diagram showing a configuration example of a semiconductor manufacturing system according to an embodiment of the present invention.

Fig. 3 is a diagram showing an example of the configuration of a neural network for a semantic segmentation model according to an embodiment of the present invention.

Fig. 4 is a flowchart showing a flow of a dimension measuring method according to embodiment 1 of the present invention.

Fig. 5 is a diagram showing an example of annotation data used for learning data of an object detection model according to embodiment 1 of the present invention.

Fig. 6 is a diagram for explaining the correspondence between the feature points and the dimension measurement sites according to embodiment 1 of the present invention.

Fig. 7 is a table showing the correspondence among label names, label numbers, and colors in the annotation data used for the learning data of the semantic segmentation model according to embodiment 1 of the present invention.

Fig. 8 is a diagram showing an example of a GUI (Graphical User Interface) screen displayed on the input/output device according to embodiment 1 of the present invention.

Fig. 9 is a diagram showing an example of a test image according to an embodiment of the present invention.

Fig. 10 is a flowchart showing a flow of a dimension measuring method according to embodiment 2 of the present invention.

Fig. 11 is a diagram showing an example of a GUI (Graphical User Interface) screen displayed on the input/output device according to embodiment 2 of the present invention.

Detailed Description

Embodiments of the present invention will be described below with reference to the drawings. The present invention is not limited to the embodiment. In the description of the drawings, the same reference numerals are attached to the same parts.

The dimension measuring apparatus according to the present invention includes a 1st region dividing unit, a 2nd region dividing unit, and a dimension measuring unit. The 1st region dividing unit uses an image recognition model that distinguishes each region and the background within the processed structure in an image of the measurement target. The image recognition model is trained using cross-sectional SEM images and annotation images (i.e., teacher data) in which each region of the cross-sectional SEM image is correctly labeled, and thereby learns how to attach labels that distinguish the regions.

The 2nd region dividing unit generates, as an intermediate image, an image containing markers that indicate the regions in the 1st image, based on the labeled image output from the 1st region dividing unit, and then generates a region-by-region labeled image from those markers and the luminance information of the input cross-sectional SEM image.

The dimension measuring unit calculates the coordinates of the boundary lines between adjacent regions from the region-divided image and, for each ROI (Region of Interest), calculates the coordinates of the feature points that define the dimension conditions of the measurement object using the measurement definition, which specifies the detection target, detection range, detection direction, and the like of each feature point. That is, the coordinates of a feature point are obtained from a feature point detection target defined with respect to the boundary-line coordinates, a feature point detection range defined with respect to those coordinates, and a feature point detection direction defined with respect to those coordinates. The dimension of the predetermined site is then measured from the obtained feature-point coordinates.
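To make the boundary-line step concrete, the following is a minimal sketch of how boundary coordinates between two labeled regions might be extracted from a region-divided image; the function name and the use of Python/NumPy are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def boundary_coordinates(label_map, label_a, label_b):
    """Return (x, y) coordinates of pixels of label_a that touch
    label_b with 4-connectivity, i.e. the boundary line between
    the two regions in a region-divided image."""
    a = (label_map == label_a)
    b = (label_map == label_b)
    # 4-neighborhood of label_b pixels (np.roll wraps at the image
    # edges, which is acceptable for interior boundaries in a sketch)
    neighbor = (np.roll(b, 1, axis=0) | np.roll(b, -1, axis=0) |
                np.roll(b, 1, axis=1) | np.roll(b, -1, axis=1))
    ys, xs = np.nonzero(a & neighbor)
    return np.stack([xs, ys], axis=1)
```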

As described above, predetermined dimension values can be measured automatically from a cross-sectional SEM image without visual adjustment by an operator.


First, referring to fig. 1, a computer system 300 for implementing an embodiment of the present disclosure is illustrated. The mechanisms and apparatus of the various embodiments disclosed in this specification can also be applied to any suitable computing system. The main components of the computer system 300 include: one or more processors 302, a memory 304, a terminal interface 312, a storage interface 314, an I/O (input/output) device interface 316, and a network interface 318. These components may also be interconnected via a memory bus 306, an I/O bus 308, a bus interface component 309, and an I/O bus interface component 310.

The computer system 300 may also include one or more general purpose programmable Central Processing Units (CPUs) 302A and 302B, collectively referred to as processors 302. In one embodiment, the computer system 300 may include a plurality of processors, and in another embodiment, the computer system 300 may be a single CPU system. Each processor 302 may also execute commands housed in memory 304 and include on-board cache memory.

In one embodiment, the memory 304 may include a random access semiconductor memory, a storage device, or a storage medium (either volatile or nonvolatile) for storing data and programs. The memory 304 may also contain all or a portion of the programs, modules, and data structures that implement the functions described in this specification. For example, the memory 304 may store a dimension measurement management application 350. In one embodiment, the dimension measurement management application 350 may contain commands or descriptions that perform the functions described below on the processor 302.

In certain embodiments, the dimension measurement management application 350 may also be implemented in hardware via semiconductor devices, chips, logic gates, circuits, circuit cards, and/or other physical hardware devices, instead of or in addition to a processor-based system. In certain embodiments, the dimension measurement management application 350 may contain data other than commands or descriptions. In certain embodiments, a camera, sensor, or other data input device (not shown) may be provided in direct communication with the bus interface component 309, the processor 302, or other hardware of the computer system 300.

The computer system 300 may further comprise: a bus interface component 309 that facilitates communication between the processor 302, the memory 304, the display system 324, and the I/O bus interface component 310. The I/O bus interface component 310 may also be coupled to an I/O bus 308 for transferring data between various I/O devices. The I/O bus interface component 310 may also communicate with a plurality of I/O interface components 312, 314, 316, and 318 known as I/O processors (IOPs) or I/O adapters (IOAs) via the I/O bus 308.

The display system 324 may also include a display controller, a display memory, or both. The display controller can provide data for video, audio, or both to the display device 326. Further, the computer system 300 may further include: one or more sensors or the like configured to collect data and provide the data to the processor 302.

For example, computer system 300 may also include: biosensors that collect heart rate data, mental stress level data, and the like; an environmental sensor that collects humidity data, temperature data, pressure data, and the like; and a motion sensor or the like that collects acceleration data, motion data, and the like. Other types of sensors can be used. The display system 324 may also be coupled to a separate display screen, a television, a tablet computer, or a display device 326 such as a portable device.

The I/O interface components have the functionality to communicate with various storage or I/O devices. For example, the terminal interface component 312 enables the attachment of user I/O devices 320, such as user output devices (video display devices, speakers, televisions, etc.) and user input devices (keyboards, mice, keypads, touch pads, trackballs, buttons, light pens, or other pointing devices). A user operates the user input devices through the user interface to input data or instructions to the user I/O device 320 and the computer system 300 and to receive output data from the computer system 300. The user interface may be displayed on a display device, reproduced through a speaker, or printed through a printer, for example, via the user I/O device 320.

The storage interface 314 enables the attachment of one or more hard disk drives or direct access storage devices 322 (typically magnetic disk drive storage devices, but possibly an array of hard disk drives configured to appear as a single hard disk drive, or other storage devices). In one embodiment, the storage device 322 may be installed as any secondary storage. The contents of the memory 304 may be stored in the storage device 322 and read from the storage device 322 as needed. The I/O device interface 316 provides an interface to other I/O devices such as printers and fax machines. The network interface 318 provides a communication path so that the computer system 300 and other devices can communicate with one another; this communication path is, for example, the network 330.

In one embodiment, the computer system 300 may also be a multi-user mainframe computer system, a single-user system, or a server computer or the like that receives requests from other computer systems (clients) without a direct user interface. In another embodiment, the computer system 300 may also be a desktop computer, a portable computer, a laptop computer, a tablet computer, a pocket computer, a telephone, a smart phone, or any other suitable electronic device.

Example 1

In the present embodiment, an ROI detection algorithm, a semantic segmentation model, and the watershed algorithm are used in combination to extract the coordinates of the boundary lines between the processed structure and the background, and of the boundary lines at the interfaces between different materials, in an image of the measurement target.

Here, the semantic segmentation model is a machine learning model that assigns a class label (e.g., "flower", "person", "road", "sky", "sea", "car") to each pixel of an image. In the learning (i.e., training) step of the semantic segmentation model, cross-sectional SEM images as input data and annotation images, in which colors are distinguished region by region, as output data are given as teacher data, and the shapes of the regions are learned.

In the inference step performed after the learning step, the dimension measuring apparatus according to the present invention detects ROIs (Regions of Interest) in a given input image using the ROI detection algorithm and, for each detected ROI, estimates an image in which colors are distinguished region by region using the learned semantic segmentation model. The dimension measuring apparatus then generates, from the estimation result, an intermediate image containing markers that indicate the regions in the image, and outputs an image in which colors are distinguished region by region by feeding the markers contained in the intermediate image and the detected ROI to the watershed algorithm. Next, in the measurement step, the dimension measuring apparatus automatically measures the dimension of the desired site based on the coordinates of the region boundary lines obtained from the color-distinguished image.

Next, a system of a dimension measuring apparatus according to an embodiment of the present invention will be described with reference to fig. 2.

Fig. 2 is a diagram showing a configuration example of a semiconductor manufacturing system 200 according to an embodiment of the present invention. As shown in fig. 2, the semiconductor manufacturing system 200 is mainly composed of a dimension measuring apparatus 100, an input/output apparatus 201, a processing apparatus 203, and an evaluation apparatus 204. These apparatuses are connected via a communication network (not shown) such as the internet.

The dimension measuring apparatus 100 is mainly composed of a central processing unit 101, a 1st region dividing unit 102, a 2nd region dividing unit 103, a dimension measuring unit 104, and a database 105. The dimension measuring apparatus 100 receives the input information 202, such as definitions of feature points and dimensions, the magnification, and a learning data set, input via the input/output device 201, together with the cross-sectional image 205; measures predetermined dimensions on the cross-sectional image 205 by the processing described later; and outputs the measurement results to the input/output device 201.

As shown in fig. 2, the central processing unit 101 includes a learning unit 206. The learning unit 206 is a functional unit that manages the learning (i.e., training) of the machine learning models of the 1st region dividing unit 102 and the 2nd region dividing unit 103.

The input/output device 201 includes an input/output interface such as a GUI and a storage medium reading device such as a card reader, and inputs the input information 202, such as definitions of feature points and dimensions, the magnification, and a learning data set, to the dimension measuring apparatus 100. The input/output device 201 also receives the cross-sectional image 205 of the measurement target from the evaluation device 204 as an input image and transmits it to the central processing unit 101. The input/output device 201 may be, for example, a keyboard, a mouse, a display, a touch panel, or a storage medium reading device. Alternatively, the input/output device 201 may display the measurement results transmitted from the dimension measuring apparatus 100 to the user; in that case, it may show the results on a display or write them out to a file.

The processing apparatus 203 is an apparatus that processes (e.g., machines) a semiconductor or a semiconductor device containing a semiconductor. The content of the processing performed by the processing apparatus 203 is not particularly limited. For example, the processing apparatus 203 may be a lithography apparatus, a film deposition apparatus, or a pattern processing apparatus. More specifically, lithography apparatuses include, for example, exposure apparatuses, electron beam writing apparatuses, and X-ray writing apparatuses. Film deposition apparatuses include, for example, CVD (Chemical Vapor Deposition) apparatuses, PVD (Physical Vapor Deposition) apparatuses, vapor deposition apparatuses, sputtering apparatuses, and thermal oxidation apparatuses. Pattern processing apparatuses include, for example, wet etching apparatuses, dry etching apparatuses, electron beam processing apparatuses, and laser processing apparatuses. The processing apparatus 203 processes the semiconductor or semiconductor device in accordance with the input processing conditions and transfers it to the evaluation apparatus 204.

The evaluation device 204 photographs a cross section of the semiconductor or semiconductor device processed by the processing device 203, and acquires a cross-sectional image 205 indicating the result of the processing. The evaluation device 204 may be a machining dimension measuring device using, for example, an SEM, a TEM, or an optical monitor. Further, a part of the semiconductor or the semiconductor device processed by the processing apparatus 203 may be taken out as a fragment, and the fragment may be transferred to the evaluation apparatus 204 to be measured. The acquired cross-sectional image 205 is transmitted to the input/output device 201.

Next, a configuration of a neural network for a semantic segmentation model according to an embodiment of the present invention will be described with reference to fig. 3.

Fig. 3 is a diagram showing an example of the configuration of the neural network 106 for the semantic segmentation model according to the embodiment of the present invention. The neural network 106 shown in fig. 3 is used to perform the semantic segmentation used in the 1st region dividing unit described above (e.g., the 1st region dividing unit 102 shown in fig. 2). As shown in fig. 3, the neural network 106 is composed of an input layer 107, an intermediate layer 108, and an output layer 109.

The neural network 106 sequentially passes the pixel information input to the input layer 107 (for example, the pixel information of an input cross-sectional SEM image) through the intermediate layer 108 to the output layer 109, performing calculations, and outputs the label number of the region to which each pixel belongs. The intermediate layer 108 repeats convolutional layers, dropout layers, and the like in a multilayer structure; the specific layer configuration differs depending on the model used. During learning, the parameters of the intermediate layer are adjusted so as to minimize the error between the label of each pixel output by the neural network 106 and the annotation data indicating the correct label.

In this example, a structure using the neural network 106 is described; however, the present invention is not limited to this, and a machine learning model such as a decision tree may also be used.
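As one concrete illustration of such an encoder-decoder, the following PyTorch sketch shows a per-pixel classifier for the three labels used later (background, mask, substrate). The layer sizes, layer count, and framework choice are assumptions for illustration only, not the configuration of the patented model.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder that outputs a class score per pixel."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # downsample by 2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2),                    # dropout layer
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),        # per-pixel class scores
        )

    def forward(self, x):                         # x: (N, 1, H, W)
        return self.decoder(self.encoder(x))

# Learning adjusts the intermediate-layer parameters to minimize the
# per-pixel error against the annotation labels (cross-entropy).
model = TinySegNet()
loss_fn = nn.CrossEntropyLoss()
```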

Next, a dimension measuring method according to example 1 of the present invention will be described with reference to fig. 4.

Fig. 4 is a flowchart showing a flow of a dimension measuring method 400 according to embodiment 1 of the present invention.

First, a learning unit (e.g., the learning unit 206 shown in fig. 2) creates the input information 202, including the learning data set and the like, input to a dimension measuring apparatus (e.g., the dimension measuring apparatus 100 shown in fig. 2). Specifically, in step S100, the learning unit creates a learning data set from cross-sectional SEM images, which are input data received from an evaluation device (e.g., the evaluation device 204 shown in fig. 2) via an input/output device (e.g., the input/output device 201 shown in fig. 2), and the annotation images used by the semantic segmentation model, and stores the learning data set in a database (e.g., the database 105 shown in fig. 2).

Next, in step S101, the learning unit transfers the learning data set and the machine learning model from the database to the 1st region dividing unit (e.g., the 1st region dividing unit 102 shown in fig. 2) and performs learning of the machine learning model. The parameters of the learned model are sent back to, and stored in, the database.

Note that, the "mechanical learning model" here is described by taking a neural network having a convolutional layer, a discrete layer, and the like as an example, but the present invention is not limited thereto, and may be a mechanical learning model such as a decision tree.

Next, in step S102, the 1st region dividing unit receives, from the input/output device, an input image in which the measurement target is captured.

Next, in step S103, the 1st region dividing unit acquires the machine learning model and the learned parameters from the database, and specifies the target regions in the input image using the ROI detection algorithm based on the acquired model and parameters.

Here, an example using template matching as the ROI detection algorithm is described, but the present invention is not limited to this; a deep learning model such as an RPN (Region Proposal Network), or a machine learning model based on feature quantities such as a Haar-like feature classifier, can also be used.
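A minimal sketch of template-matching ROI detection with OpenCV is shown below; the threshold value and grayscale input are assumptions of this illustration, and overlapping hits would need non-maximum suppression in practice.

```python
import cv2
import numpy as np

def detect_rois(image, template, threshold=0.8):
    """Find pattern portions by normalized cross-correlation.
    image, template: 8-bit grayscale arrays.
    Returns a list of (x, y, w, h) candidate boxes."""
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= threshold)
    h, w = template.shape[:2]
    return [(int(x), int(y), w, h) for x, y in zip(xs, ys)]
```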

Next, in step S104, the 1st region dividing unit performs estimation with the semantic segmentation model on each detected ROI.

Next, in step S105, the 2nd region dividing unit (e.g., the 2nd region dividing unit 103 shown in fig. 2) generates, as an intermediate image, an image containing markers that indicate the regions in the input image, based on the estimation result generated in step S104. Here, a "marker" means information indicating, for the watershed algorithm, which regions are confirmed and which are not.

Since the estimation result of the semantic segmentation by the 1st region dividing unit is array information in which a label is attached to each pixel, the 2nd region dividing unit generates a marker for each type of label and thereby generates the intermediate image.

Specifically, when the number of label types is 1, that is, when the semantic segmentation model estimates the same label for all pixels, no interface to be detected exists in the detected ROI, and the process proceeds to step S106.

When the number of label types is 2, the image is determined to be one in which foreground and background are separated. By applying a predetermined region reduction process to the foreground and the background respectively, a boundary band is newly generated near the boundary line between them; the reduced foreground and background are defined as confirmed regions, and the newly generated boundary band is defined as an unconfirmed region. With markers defined this way, the watershed algorithm yields a good region segmentation result.

Here, as the region reduction process, an algorithm that shrinks each region by 10 pixels from its outermost periphery may be used, but the present invention is not limited thereto.
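The marker generation by region reduction can be sketched as follows with OpenCV-style erosion; the 10-pixel shrink follows the text above, while the label numbering and function name are illustrative assumptions.

```python
import cv2
import numpy as np

def make_markers(label_map, shrink_px=10):
    """Build watershed markers from a two-label estimation result.
    Eroding foreground (label 1) and background (label 0) leaves an
    unconfirmed band (marker 0) along the estimated boundary."""
    kernel = np.ones((2 * shrink_px + 1, 2 * shrink_px + 1), np.uint8)
    fg = cv2.erode((label_map == 1).astype(np.uint8), kernel)
    bg = cv2.erode((label_map == 0).astype(np.uint8), kernel)
    markers = np.zeros(label_map.shape, np.int32)  # 0 = unconfirmed band
    markers[bg == 1] = 1   # confirmed background
    markers[fg == 1] = 2   # confirmed foreground
    return markers
```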

Next, in step S106, the 2nd region dividing unit performs region segmentation with the watershed algorithm, based on the image with the specified target regions generated in step S103 and the intermediate image generated in step S105.

Here, when there are 3 or more label types, the 2nd region dividing unit treats the label of interest as foreground and all other labels as background, and repeats the same processing as in the two-label case for each label of interest until all labels have been processed.

In the present embodiment, the watershed algorithm is used as the means for region segmentation, but the present invention is not limited thereto; an algorithm that performs region segmentation based on luminance information, such as the graph cut algorithm, may be used instead.
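Continuing the sketch above, the refinement step could then hand the markers to OpenCV's watershed; treating roi_bgr as an 8-bit 3-channel crop of the detected ROI is an assumption of this illustration.

```python
import cv2

def refine_regions(roi_bgr, markers):
    """Luminance-based refinement of the rough model boundary.
    cv2.watershed expects an 8-bit 3-channel image and int32 markers
    (e.g. from make_markers above); in the returned map, -1 marks
    the watershed boundary lines between regions."""
    return cv2.watershed(roi_bgr, markers.copy())
```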

Next, in step S107, the 2nd region dividing unit determines whether another designated region exists in the input image. If so, it repeats the above processing for the remaining designated regions. Specifically, the 2nd region dividing unit performs the generation of the marker-bearing intermediate image and the region segmentation for each designated region until all designated regions have been processed.

If no other designated region exists in the input image, the process proceeds to step S108.

Next, in step S108, the dimension measuring unit checks whether definitions of the feature points and dimensions that specify the dimension conditions of the measurement object are stored in the database in advance. If this information is not in the database, the dimension measuring unit proceeds to step S109, in which the region label, detection range, and detection direction of each detection target are specified.

When a plurality of sites are to be measured, the dimension measuring unit specifies a definition for each feature point.

Next, in step S110, the dimension measuring unit detects the feature points based on the specified definitions.

Next, in step S111, the dimension measuring unit measures the dimensions of the measurement object based on the obtained feature points, and converts the measured dimension information from pixel units into actual dimensions (for example, in the International System of Units).
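The pixel-to-actual-size conversion amounts to multiplying by a scale factor derived from the image magnification; a minimal sketch, assuming the horizontal field of view is supplied with the input information:

```python
def to_actual_size(length_px, image_width_px, field_of_view_nm):
    """Convert a measured length from pixels to nanometres using
    the scale implied by the image's horizontal field of view."""
    nm_per_pixel = field_of_view_nm / image_width_px
    return length_px * nm_per_pixel

# e.g. a 120 px width on a 1024 px image spanning 500 nm:
# to_actual_size(120, 1024, 500) -> about 58.6 nm
```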

This enables highly accurate measurement results to be generated automatically for cross-sectional SEM images (or cross-sectional TEM images).

Next, annotation data used for learning data of the object detection model according to embodiment 1 of the present invention will be described with reference to fig. 5.

In the following, a case where the processing apparatus is an etching apparatus and the ROI is a pattern portion will be described as an example.

Fig. 5 is a diagram showing an example of the annotation data used for the learning data of the object detection model according to embodiment 1 of the present invention. In more detail, fig. 5 shows a cross-sectional SEM image 551 together with annotation data 560 in which the regions of the cross-sectional SEM image 551 are correctly labeled, and a cross-sectional SEM image 552 together with annotation data 570 in which the regions of the cross-sectional SEM image 552 are correctly labeled.

The difference between the cross-sectional SEM image 551 and the cross-sectional SEM image 552 is the processing method used in the processing apparatus (e.g., the processing apparatus 203 shown in fig. 2). Consequently, even when the same pattern portion is designated as the ROI in images of the same magnification, the size of the ROI differs depending on the input image. To make the ROI size constant, image processing that enlarges or reduces each input data set to a fixed size can be performed before it is input to the semantic segmentation model. The images shown in the present embodiment have therefore been resized to a fixed input size by nearest-neighbor interpolation.

The regions shown in the annotation data 560 and 570 consist of three regions: a background 553, a mask 554, and a substrate 555. The annotation data 560 and 570 may be created manually using dedicated software, or may be created using a learned semantic segmentation model.

Next, the dimension measurement sites and feature points according to embodiment 1 of the present invention will be described with reference to fig. 6.

Fig. 6 is a diagram for explaining the correspondence between the dimension measurement sites and the feature points according to embodiment 1 of the present invention. In the example shown in fig. 6, the dimension measurement sites are set as: (1) L1: width 606 of the mask/substrate interface; (2) L2: width 607 of the narrowest portion of the substrate; (3) L3: height 608 of the mask; and (4) L4: depth 609 of the trench. The six points A to F in the figure are feature points on the boundary lines used in the dimension measurement; only points that can be uniquely defined from the boundary-line data alone are used. For example, A may be defined as the highest point of the upper surface of the mask.

The definitions of the feature points A to F and the correspondence between the dimensions L1 to L4 and the feature points (L1: B, C; L2: D, E; etc.) are input by the user via an input/output device (e.g., the input/output device 201 shown in fig. 2) and stored in a database (e.g., the database 105 shown in fig. 2). To input a definition, the user specifies a region as the detection target via an interface such as a mouse or touch panel, and specifies a detection range and a detection direction within that range. The detection direction specifies whether the uppermost, lowermost, leftmost, or rightmost coordinate within the detection range is taken as the feature point.
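A minimal sketch of this definition scheme: given boundary-line coordinates (for example from a boundary extractor like the one sketched earlier), pick the extremal point inside the detection range along the detection direction. The data layout is an assumption for illustration.

```python
import numpy as np

def find_feature_point(boundary_xy, x_range, direction):
    """boundary_xy: (N, 2) array of (x, y) boundary coordinates.
    x_range: (x0, x1) detection range; direction: 'top', 'bottom',
    'left' or 'right'. Assumes the range contains boundary points."""
    x0, x1 = x_range
    pts = boundary_xy[(boundary_xy[:, 0] >= x0) & (boundary_xy[:, 0] <= x1)]
    idx = {"top":    pts[:, 1].argmin(),   # image y grows downward
           "bottom": pts[:, 1].argmax(),
           "left":   pts[:, 0].argmin(),
           "right":  pts[:, 0].argmax()}[direction]
    return pts[idx]
```

A dimension such as L1 would then be computed as the horizontal distance between the two detected points B and C.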

Alternatively, the definitions may be input as follows: with an arbitrary cross-sectional SEM image displayed on the GUI screen of the input/output device, the user may specify the feature points by clicking on the screen, or may supply a script in which the definitions of the feature points and dimensions are written. The number and positions of the feature points and the measured dimensions may be set as appropriate according to the structure of the measurement object.

Next, referring to fig. 7, the labels in the annotation data used for the learning data of the semantic segmentation model according to embodiment 1 of the present invention will be described.

Fig. 7 is a table showing the correspondence among the label name 710, the label number 720, and the color 730 in the annotation data used for the learning data of the semantic segmentation model according to embodiment 1 of the present invention. The information shown in fig. 7 is stored in a database (e.g., the database 105 shown in fig. 2).

Note that the label number and color to be attached to each label are arbitrary.

As shown in fig. 7, the "background" label name 710 in the image corresponds to the "0" label number 720 and the "black" color 730; the mask corresponds to "1" label number 720 and "gray"; the substrate corresponds to "2" label number 720 and "white". The comment data and the image with the tag distinguished by area are created in accordance with the information of the table shown in fig. 7.

Next, a GUI (Graphical User Interface) screen displayed by the input/output device according to embodiment 1 of the present invention will be described with reference to fig. 8.

Fig. 8 is a diagram showing an example of a GUI screen 800 displayed by the input/output apparatus according to embodiment 1 of the present invention. The GUI screen 800 is mainly composed of an annotation window 401, a model learning window 402, a size definition window 403, and an execution window 404.

In the annotation window 401, the user can select the image file to be displayed using the image selection button 405. The selected image is displayed in the image window 406. The user can then designate an ROI 407 by mouse operation within the displayed image. The ROI 407 designated in the image window 406 is displayed in the image window 408. Annotation data is created based on the image of the ROI 407 displayed in the image window 408 and is displayed in the image window 409. By clicking the image-pair save button 410, the user can attach mutually associable names to the images in the image windows 408 and 409 and save them as a learning data set.

In the model learning window 402, the user can specify the data set used for model learning by clicking the data set selection button 411, and can specify the semantic segmentation model by clicking the model selection button 412. The user can then perform model learning with the specified data set and model by clicking the model learning button 413; during model learning, the learning results are saved as appropriate. The learned model is stored under an identifiable name and can be selected later with the model selection button 414.

In the size definition window 403, the user can specify the inter-region interface to be targeted by clicking the detection target button 415, and can specify the detection range 417 by clicking the detection range button 416. By clicking the detection direction button 418, the user can specify the detection direction, which defines whether the top, bottom, left, or right end of the inter-region interface within the detection range 417 is taken as the feature point. Further, the user can specify how the dimension is calculated from the feature points detected according to these definitions by clicking the size definition button 419, and can store the size definitions as a measurement definition file by clicking the definition save button 420.

In the execution window 404, the user can specify a learned model by clicking the model selection button 414. Further, the user can specify the measurement definition file by clicking the measurement definition selection button 421. Further, the user can specify the image group to be measured by clicking the image group selection button 422. Further, the user can perform measurement on each image in the image group to be measured by clicking the execution button 423 using the specified learned model and the specified measurement definition file. Then, the user can output the measurement result as a measurement result output file to a predetermined location.

Next, a test image according to an embodiment of the present invention will be described with reference to fig. 9.

Fig. 9 is a diagram showing an example of a test image 900 according to an embodiment of the present invention. The test image is, for example, an image captured by the evaluation device described above (e.g., the evaluation device 204 shown in fig. 2) and shows a processed semiconductor device as the measurement target. As shown in fig. 9, an unnecessary contour of an internal structure is captured in this test image 900; this structure should be ignored during dimension measurement.

Therefore, by executing the dimension measuring method shown in fig. 4 with this test image 900 as the input image, highly accurate measurement results can be generated automatically for the cross-sectional SEM image (or cross-sectional TEM image).

Example 2

In embodiment 1, an example was described in which a learning data set is prepared in advance and an intermediate image containing markers that indicate the regions in the target image is generated using the semantic segmentation model; however, marker generation does not necessarily require a semantic segmentation model. Embodiment 2 therefore describes a configuration in which markers are created manually and the newly created region-divided images are added to the learning data set, so that the estimation accuracy of the semantic segmentation model is improved successively.

With this configuration, dimension measurement can be performed even when it is difficult to prepare a sufficient learning data set in advance and the estimation accuracy of the model is insufficient.

Next, a dimension measuring method according to example 2 of the present invention will be described with reference to fig. 10.

Fig. 10 is a flowchart showing a flow of a dimension measuring method 1000 according to embodiment 2 of the present invention.

First, in step S200, the learning unit (e.g., the learning unit 206 shown in fig. 2) checks whether a learning data set exists in the database (e.g., the database 105 shown in fig. 2).

When the learning data set exists, then, in step S201, the learning unit transfers the learning data set and the machine learning model from the database to the 1st region dividing unit (e.g., the 1st region dividing unit 102 shown in fig. 2) and performs learning of the machine learning model. The parameters of the learned model are sent back to, and stored in, the database.

Note that, the "mechanical learning model" here is described by taking a neural network having a convolutional layer, a discrete layer, and the like as an example, but the present invention is not limited thereto, and may be a mechanical learning model such as a decision tree.

Next, in step S202, the 1st region dividing unit receives, from the input/output device, an input image in which the measurement target is captured.

Next, in step S203, the 1st region dividing unit acquires the machine learning model and the learned parameters from the database, and specifies the target regions in the input image using the ROI detection algorithm based on the acquired model and parameters.

Here, an example using template matching as the ROI detection algorithm is described, but the present invention is not limited to this; a deep learning model such as an RPN (Region Proposal Network), or a machine learning model based on feature quantities such as a Haar-like feature classifier, can also be used.

Next, in step S204, the 1st region dividing unit checks whether a learned model exists in the database. If it does, the 1st region dividing unit estimates the ROI using the semantic segmentation model in step S205. If no learned model exists in the database, the 1st region dividing unit skips the model-based estimation and the process proceeds to step S208.

Next, when model-based estimation has been performed, in step S206 the 2nd region dividing unit (e.g., the 2nd region dividing unit 103 shown in fig. 2) generates, as an intermediate image, an image containing markers that indicate the regions in the input image, based on the estimation result generated in step S205. Here, a "marker" means information indicating, for the watershed algorithm, which regions are confirmed and which are not.

Since the estimation result of the semantic segmentation by the 1st region dividing unit is array information in which a label is attached to each pixel, the 2nd region dividing unit generates a marker for each type of label and thereby generates the intermediate image.

Specifically, when the number of label types is 1, that is, when the semantic segmentation model estimates the same label for all pixels, no interface to be detected exists in the detected ROI, and the process proceeds to the next step.

When the number of label types is 2, the image is determined to be one in which foreground and background are separated. By applying a predetermined region reduction process to the foreground and the background respectively, a boundary band is newly generated near the boundary line between them; the reduced foreground and background are defined as confirmed regions, and the newly generated boundary band is defined as an unconfirmed region.

Here, as the region reduction process, an algorithm that shrinks each region by 10 pixels from its outermost periphery may be used, but the present invention is not limited thereto.

Next, in step S207, the 2nd region dividing unit performs region segmentation with the watershed algorithm, based on the image with the specified target regions generated in step S203 and the intermediate image generated in step S206.

Here, when there are 3 or more label types, the 2nd region dividing unit treats the label of interest as foreground and all other labels as background, and repeats the same processing as in the two-label case for each label of interest until all labels have been processed.

Next, when model-based estimation has not been performed, in step S208 the user creates an intermediate image containing markers by mouse operation or the like.

Next, in step S209, the 2nd region dividing unit determines whether another designated region exists in the input image. If so, it repeats the above processing for the remaining designated regions. Specifically, the 2nd region dividing unit performs marker generation and region segmentation for each designated region until all designated regions have been processed.

Next, in step S210, the created region-divided image is added to the learning data set and stored in the database as appropriate. Thus, in subsequent machine learning, the learning unit can train the semantic segmentation model on the updated learning data set, thereby improving the estimation accuracy of the model.

When no other designated region exists in the input image, the process proceeds to step S211.

Next, in step S211, the size measuring unit checks whether or not definitions of the feature points and dimensions that specify the dimensional conditions of the measurement object are stored in the database in advance. If such definitions are not in the database, the size measuring unit proceeds to step S212, in which the region label to be detected, the detection range, and the detection direction are specified.

When a plurality of portions are to be measured, the size measuring unit specifies a definition for each feature point.

Next, in step S213, the size measuring unit detects the feature point based on the specified definition.
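One possible realization of this detection step is sketched below. The representation of a definition, as boundary coordinates, a rectangular detection range, and a direction string, is an assumption for the example, not a data structure prescribed by this disclosure.

```python
# Minimal sketch: pick, among the boundary coordinates that fall inside
# the detection range, the extreme point along the detection direction.
import numpy as np

def detect_feature_point(boundary_xy: np.ndarray, detection_range, direction: str):
    """boundary_xy: (N, 2) array of (x, y) boundary coordinates.
    detection_range: (x_min, y_min, x_max, y_max) in pixels.
    direction: 'up', 'down', 'left', or 'right'."""
    x_min, y_min, x_max, y_max = detection_range
    inside = ((boundary_xy[:, 0] >= x_min) & (boundary_xy[:, 0] <= x_max) &
              (boundary_xy[:, 1] >= y_min) & (boundary_xy[:, 1] <= y_max))
    pts = boundary_xy[inside]
    if pts.size == 0:
        return None  # no boundary point inside the detection range
    # Image y grows downward, so 'up' means the minimal y coordinate.
    if direction == 'up':
        return tuple(pts[np.argmin(pts[:, 1])])
    if direction == 'down':
        return tuple(pts[np.argmax(pts[:, 1])])
    if direction == 'left':
        return tuple(pts[np.argmin(pts[:, 0])])
    return tuple(pts[np.argmax(pts[:, 0])])
```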

Next, in step S214, the size measuring unit measures the dimensions of the measurement object based on the detected feature points, and converts the measured dimension information from pixel units into actual dimensions.
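A minimal sketch of this conversion, assuming the scale (nanometers per pixel) is known from the imaging conditions; the function name and the numbers in the usage line are illustrative.

```python
# Minimal sketch: distance between two feature points, converted from
# pixel units to an actual size in nanometers.
import math

def measure_nm(p1, p2, nm_per_pixel: float) -> float:
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) * nm_per_pixel

# Example: a 150-pixel mask height at an assumed 2.5 nm/pixel -> 375 nm.
height_nm = measure_nm((100, 40), (100, 190), 2.5)
```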

Next, a GUI (Graphical User Interface) screen displayed on the input/output device according to embodiment 2 of the present invention will be described with reference to fig. 11.

Fig. 11 is a diagram showing an example of a GUI screen 1100 displayed on the input/output device according to embodiment 2 of the present invention. As shown in fig. 11, the GUI screen 1100 is composed of a region division window 501, a size definition window 502, and an execution window 503.

In the region division window 501, the user can select an image file to be displayed by clicking the input image selection button 504. The selected image is displayed in the image window 505. When a specific ROI is registered in advance, the ROI is read out from a database (for example, the database 105 shown in fig. 2); when no ROI is registered, or when the user wishes to change it, the user can designate the ROI in the displayed image by a mouse operation and register it by clicking the ROI registration button 506.

Further, the user can detect regions using the ROI detection algorithm by clicking the region designation button 507. The detected ROI 508 is displayed in the image window 505. When the detected ROI 508 is incorrect, the user can update the ROI by clicking the ROI registration button 506. Further, by clicking the region division button 509, the user can perform region division for each detected ROI 508 using a semantic segmentation model selected in advance. When no semantic segmentation model has been selected in advance, or when the user wishes to change the model to be used, the user can select a model by clicking the model selection button 510.

Further, the region division result is displayed in the image window 511. When the region division result is unsatisfactory, the user can update the learned semantic segmentation model by clicking the model update button 512, or can update the markers used in the watershed algorithm by adjusting the reduction amount 513.

In the present embodiment, the watershed algorithm is used as an example, but the present invention is not limited to this; another algorithm that divides regions based on luminance information, such as the graph cut algorithm, may be used instead. When the region division result is satisfactory, the user can add the region-divided image to the database 105 by clicking the data addition button 524.

In the size definition window 502, the user can specify the inter-region interface to be detected by clicking the detection object button 514. Further, the user can specify the detection range 516 by clicking the detection range button 515. Further, by clicking the detection direction button 517, the user can specify the detection direction, which defines which end (upper, lower, left, or right) of the inter-region interface within the detection range 516 is taken as the feature point. Further, the user can specify how the dimensions are calculated from the feature points detected according to these definitions by clicking the size definition button 518. Further, the user can save the size definitions as a measurement definition file by clicking the definition save button 519.

In the execution window 503, the user can specify a learned model by clicking the model selection button 520. Further, the user can specify the measurement definition file by clicking the measurement definition selection button 521. Further, the user can specify the image group to be measured by clicking the image group selection button 522. Further, by clicking the execution button 523, the user can perform measurement on each image in the image group to be measured using the specified learned model and the specified measurement definition file. Then, the user can output the measurement result as a measurement result output file to a predetermined location.

While the embodiments of the present invention have been described above, the present invention is not limited to the above embodiments, and various modifications can be made without departing from the scope of the present invention.

-description of symbols-

100 size measuring device

101 central processing unit

102 1 st region dividing part

103 2 nd region dividing part

104 size measuring part

105 database

106 neural network

201 input/output device

202 input information

203 processing device

204 evaluation device

205 cross-sectional image

401 annotation window

402 model learning window

403 size definition window

404 execution Window

405 image selection button

406 image window

407 ROI

408 image window

409 image window

410 image pair save button

411 data set selection button

412 model select button

413 model learning button

414 model selection button

415 detection object button

416 detection range button

417 detection range

418 detection direction button

419 size definition button

420 definition save button

421 measurement definition selection button

422 image group selection button

423 execution button

501 region division window

502 size definition window

503 execution Window

504 input image selection button

505 image window

506 ROI registration button

507 area designation button

508 ROI

509 area division button

510 model selection button

511 image window

512 model update button

513 reduction amount

514 detection object button

515 detection range button

516 detection range

517 detection direction button

518 size definition button

519 definition save button

520 model selection button

521 measurement definition selection button

522 image group selection button

523 execution button

524 data addition button

551 cross-sectional SEM image

552 cross-sectional SEM image

553 background

554 mask

555 substrate

606 width of mask/substrate interface

607 width of narrowest part of substrate

608 height of mask

609 depth of channel.
