WO2023166748A1 - Profile detection method and profile detection device - Google Patents
- Publication number
- WO2023166748A1 (international application PCT/JP2022/012518; JP2022012518W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- detection
- image
- specific shape
- boundary
- contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B15/00—Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons
- G01B15/04—Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons for measuring contours or curvatures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B15/00—Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/22—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material
- G01N23/225—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material using electron or ion
- G01N23/2251—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material using electron or ion using incident electron beams, e.g. scanning electron microscopy [SEM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B2210/00—Aspects not specifically covered by any group under G01B, e.g. of wheel alignment, caliper-like sensors
- G01B2210/56—Measuring geometric parameters of semiconductor structures, e.g. profile, critical dimensions or trench depth
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2223/00—Investigating materials by wave or particle radiation
- G01N2223/40—Imaging
- G01N2223/401—Imaging image processing
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2223/00—Investigating materials by wave or particle radiation
- G01N2223/40—Imaging
- G01N2223/418—Imaging electron microscope
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2223/00—Investigating materials by wave or particle radiation
- G01N2223/60—Specific applications or type of materials
- G01N2223/611—Specific applications or type of materials patterned objects; electronic devices
- G01N2223/6116—Specific applications or type of materials patterned objects; electronic devices semiconductor wafer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
- G06T2207/10061—Microscopic image from scanning electron microscope
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
Definitions
- the present disclosure relates to a profile detection method and a profile detection device.
- Patent Document 1 discloses a technique of imaging a circuit pattern present at a desired position on a semiconductor device with a scanning electron microscope (SEM) in order to measure or inspect a semiconductor.
- the present disclosure provides a technique for efficiently detecting specific shapes included in detection target images.
- a profile detection method has a detection step and an output step.
- in the detection step, a specific shape included in a detection target image is detected using a model that has learned from a learning image including the specific shape and from information about the specific shape included in that learning image.
- the output step outputs shape information of the detected specific shape.
- FIG. 1 is a diagram illustrating an example of a functional configuration of an information processing apparatus according to an embodiment
- FIG. 2 is a diagram illustrating an example of a cross-sectional image of a substrate according to the embodiment
- FIG. 3 is a diagram illustrating an example of a binarized image according to the embodiment
- FIG. 4 is a diagram illustrating an example of the flow of generating a boundary detection model according to the embodiment
- FIG. 5 is a diagram illustrating an example of the flow of detecting each concave area of an image according to the embodiment.
- FIG. 6 is a diagram illustrating an example of a flow of detecting a boundary of a film included in an image according to the embodiment
- FIG. 7 is a diagram illustrating an example of a detection result of a film boundary according to the embodiment.
- FIG. 8 is a diagram schematically showing an example of the flow of the profile detection method according to the embodiment.
- FIG. 9 is a diagram schematically showing another example of the flow of the profile detection method according to the embodiment.
- a scanning electron microscope is used to image a cross section of a semiconductor device in which concave portions such as trenches and holes are formed. Then, by measuring dimensions such as the CD (Critical Dimension) of the concave portion of the imaged image, the suitability of the manufacturing process recipe is determined.
- a process engineer manually specifies the range of a recess in a captured image and the position of a contour whose dimensions are to be measured. As a result, it takes time to measure dimensions.
- because the measurement work, such as specifying the position of the contour at which dimensions are measured, is done manually, operator-dependent errors can occur in the measured dimensions. Moreover, measuring the dimensions of a large number of recesses takes considerable time and effort.
- FIG. 1 is a diagram showing an example of a functional configuration of an information processing device 10 according to an embodiment.
- the information processing device 10 is a device that provides a function of measuring dimensions of a specific shape included in a captured image.
- the information processing device 10 is, for example, a computer such as a server computer or a personal computer.
- a process engineer uses the information processing device 10 to measure the dimensions of the concave portion of the captured image.
- the information processing device 10 corresponds to the profile detection device of the present disclosure.
- the information processing device 10 has a communication I/F (interface) section 20, a display section 21, an input section 22, a storage section 23, and a control section 24. Note that the information processing apparatus 10 may have other devices included in the computer in addition to the devices described above.
- the communication I/F unit 20 is an interface that controls communication with other devices.
- the communication I/F unit 20 is connected to a network (not shown), and transmits and receives various information to and from other devices via the network.
- the communication I/F unit 20 receives digital image data captured by a scanning electron microscope.
- the display unit 21 is a display device that displays various information.
- Examples of the display unit 21 include display devices such as LCD (Liquid Crystal Display) and CRT (Cathode Ray Tube).
- the display unit 21 displays various information.
- the input unit 22 is an input device for inputting various information.
- the input unit 22 may be an input device such as a mouse or keyboard.
- the input unit 22 receives an operation input from a user such as a process engineer, and inputs operation information indicating the content of the received operation to the control unit 24 .
- the storage unit 23 is a storage device such as a hard disk, SSD (Solid State Drive), or optical disk. Note that the storage unit 23 may be a rewritable semiconductor memory such as RAM (Random Access Memory), flash memory, NVSRAM (Non Volatile Static Random Access Memory).
- the storage unit 23 stores an OS (Operating System) executed by the control unit 24 and various programs including a profile detection program to be described later. Furthermore, the storage unit 23 stores various data used in the programs executed by the control unit 24 . For example, the storage unit 23 stores learning data 30 , image data 31 and model data 32 .
- the learning data 30 is data used for generating a model used for profile detection.
- the learning data 30 includes various data used for model generation.
- the learning data 30 stores data of a learning image including a specific shape to be detected and information about the specific shape included in the learning image.
- the image data 31 is data of a detection target image for profile detection.
- the model data 32 is data storing a model for detecting a specific shape.
- a model of the model data 32 is generated by performing machine learning on the learning data 30 .
- the learning image and the detection target image are images obtained by imaging a cross section of a semiconductor device with a scanning electron microscope.
- a semiconductor device is formed on a substrate such as, for example, a semiconductor wafer.
- a learning image and a detection target image are obtained by imaging a cross-section of a substrate on which a semiconductor device is formed using a scanning electron microscope.
- concave portions such as trenches and holes are detected as specific shapes from the detection target image.
- FIG. 2 is a diagram showing an example of a cross-sectional image of the substrate according to the embodiment.
- FIG. 2 is an image taken by a scanning electron microscope of a cross section of a semiconductor device in which trenches and holes are formed. Let the horizontal direction of the image be the x-direction, and the vertical direction of the image be the y-direction.
- a plurality of concave portions 50 recessed in the y direction are formed side by side in the x direction.
- the recess 50 is, for example, a cross-section of a trench or hole formed in a semiconductor device.
- the learning data 30 stores a plurality of sets of images of the cross-section of the substrate used as learning images and information related to the recesses 50 included in the images.
- the image data 31 stores an image of a cross-section of the substrate that is the profile detection target.
- the control unit 24 is a device that controls the information processing apparatus 10 .
- for the control unit 24, an electronic circuit such as a CPU (Central Processing Unit), MPU (Micro Processing Unit), or GPU (Graphics Processing Unit), or an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array) can be employed.
- the control unit 24 has an internal memory for storing programs defining various processing procedures and control data, and executes various processing using these.
- the control unit 24 functions as various processing units by running various programs.
- the control unit 24 has an operation reception unit 40 , a learning unit 41 , a detection unit 42 , a measurement unit 43 and an output unit 44 .
- the operation reception unit 40 receives various operations. For example, the operation reception unit 40 displays an operation screen on the display unit 21 and receives various operations on the operation screen from the input unit 22 .
- the information processing apparatus 10 performs machine learning to generate the model data 32 and stores the model data 32 in the storage unit 23, thereby enabling profile detection.
- the operation reception unit 40 receives designation of the learning data 30 used for machine learning and an instruction to start model generation from the operation screen. Further, for example, the operation accepting unit 40 accepts designation of the image data 31 as a profile detection target from the operation screen.
- the operation accepting unit 40 reads the specified image data 31 from the storage unit 23 and displays an image of the read image data 31 on the display unit 21 .
- the operation reception unit 40 receives an instruction to start profile detection from the operation screen.
- a process engineer or administrator designates learning data 30 to be used for machine learning and instructs the start of model generation.
- the process engineer designates the cross-sectional image data 31 of the semiconductor device on which the substrate processing of the recipe for which the suitability is to be judged has been performed, from the operation screen. Then, the process engineer instructs the start of profile detection from the operation screen.
- the learning unit 41 performs machine learning on the designated learning data 30 and generates a model for detecting specific shapes included in the image.
- the learning unit 41 generates a model for detecting the concave portion 50 included in the image.
- Any machine learning method may be used as long as it can obtain a model capable of detecting a specific shape.
- Machine learning methods include, for example, a method of performing image segmentation such as U-net.
- the learning unit 41 generates a plurality of models used for detecting specific shapes included in images. For example, the learning unit 41 generates a contour detection model for detecting contours of a specific shape included in the image. In addition, the learning unit 41 generates a boundary detection model that is used for detecting the boundary of the film included in the image.
- the learning data 30 stores an image including the concave portion 50 shown in FIG.
- a binarized image obtained by binarizing the image including the recess 50 is stored.
- FIG. 3 is a diagram showing an example of a binarized image according to the embodiment.
- FIG. 3 is a binarized image obtained from the image of FIG. 2 by assigning a first value (for example, 0) to the portions of the substrate and the film in which the concave portions 50 are formed, and a second value (for example, 1) to the portions of space.
- the first value portion of the binarized image is shown in black, and the second value portion is shown in white.
- the learning data 30 stores a plurality of images including the concave portions 50 and binarized images of the images including the concave portions 50 in association with each other.
- the learning unit 41 reads a plurality of data sets, each associating an image containing the concave portion 50 stored in the learning data 30 with information on the contour of the concave portion 50 contained in that image, and generates a contour detection model by machine learning.
- for example, the learning unit 41 generates the contour detection model using a plurality of data sets in which the image including the concave portion 50 shown in FIG. 2 is associated with the binarized image shown in FIG. 3.
- given an image including the concave portion 50 as input, the generated contour detection model performs calculations and outputs information about the contour of the concave portion 50.
- the contour detection model outputs a binarized image of the image including the recess 50.
- the learning data 30 stores, for each region of a predetermined size in the image including the concave portion 50 shown in FIG. 2, data that associates the image of the region with information as to whether or not the region includes the boundary of the film. For example, as the information on whether the film boundary is included, 1 is stored when the film boundary is included and 0 when it is not.
- the learning unit 41 reads the data associating the images of the regions, obtained by dividing each image containing the concave portion 50 stored in the learning data 30 into regions of a predetermined size, with the information as to whether each region image includes the boundary of the film, and generates a boundary detection model by machine learning.
- FIG. 4 is a diagram showing an example of the flow of generating a boundary detection model according to the embodiment. For example, from each image including the concave portion 50, images of regions of a predetermined size are randomly extracted, and data is created that associates each region image with information as to whether or not it includes the boundary of the film.
- FIG. 4 shows patch images 60, which are images of respective areas obtained by dividing each image including the concave portion 50 into each predetermined size.
- a label 61 indicates whether each patch image 60 includes a film boundary. The label 61 stores 1 when the corresponding patch image 60 includes the film boundary, and 0 when it does not.
- the learning data 30 stores data in which each patch image 60 is associated with information as to whether or not the patch image 60 includes the film boundary.
- the learning unit 41 reads the patch image 60 stored in the learning data 30 and the value of the label 61 corresponding to the patch image 60, and generates a boundary detection model through machine learning.
- the generated boundary detection model takes an image of the predetermined size as input and, by performing calculations, outputs information as to whether or not the image includes a film boundary. For example, the boundary detection model outputs 1 if the image is estimated to contain the film boundary and 0 if it is estimated not to.
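The patch-and-label training data described above can be sketched as follows. This is an illustrative helper, not code from the patent: the function name, the list-of-lists image representation, and the use of known boundary rows to derive the labels are all assumptions for the sake of the example.

```python
import random


def make_patch_dataset(image, boundary_ys, patch_size, n_patches, rng):
    """Randomly crop square patch images (cf. patch image 60) and attach
    a 0/1 label (cf. label 61): 1 if a known film-boundary row falls
    inside the patch, 0 otherwise.  `image` is a 2-D list of pixel
    values; `boundary_ys` lists the y rows of the film boundaries."""
    h, w = len(image), len(image[0])
    patches, labels = [], []
    for _ in range(n_patches):
        # Random top-left corner of the patch inside the image.
        y = rng.randrange(h - patch_size + 1)
        x = rng.randrange(w - patch_size + 1)
        patch = [row[x:x + patch_size] for row in image[y:y + patch_size]]
        patches.append(patch)
        # Label 1 if any known boundary row lies within the patch.
        labels.append(1 if any(y <= by < y + patch_size for by in boundary_ys) else 0)
    return patches, labels
```

A model trained on such pairs then classifies unseen patches as boundary-containing (1) or not (0), as described for the boundary detection model.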
- the learning unit 41 stores the generated model data in the model data 32 .
- the learning unit 41 stores data of the generated contour detection model and boundary detection model in the model data 32 .
- the detection unit 42 uses the model stored in the model data 32 to detect a specific shape from the image of the designated image data 31 .
- the detection unit 42 detects the concave portion 50 from the image of the designated image data 31 using the contour detection model and boundary detection model stored in the model data 32 .
- the detection unit 42 has a contour detection unit 42a, an area detection unit 42b, and a boundary detection unit 42c.
- the contour detection unit 42a uses the contour detection model stored in the model data 32 to detect the contour of the recess 50 included in the image of the designated image data 31. For example, the contour detection unit 42a inputs the image of the specified image data 31 to the contour detection model and performs calculation.
- the contour detection model outputs a binarized image of the input image data 31 . For example, when the contour detection model inputs the image including the concave portion 50 shown in FIG. 2, it outputs the binarized image of the image including the concave portion 50 shown in FIG.
- the contour detection unit 42a detects the contour of the concave portion 50 from the binarized image output from the contour detection model.
- the contour detection unit 42a detects, as a contour, the boundary portion where pixel values change between adjacent pixels in the binarized image. For example, the contour detection unit 42a generates an image in which the black area at the boundary portion is grown or shrunk by one pixel, and then computes a difference image by taking, for each pixel at the corresponding position, the difference between the original binarized image and the grown or shrunk image. Since the boundary portion was changed by exactly one pixel, only the boundary portion remains as a black area in the difference image. The contour detection unit 42a detects this black region of the difference image as the contour of the concave portion 50.
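The grow-by-one-pixel-and-diff step can be sketched in a few lines. This is a minimal illustration under assumptions the patent does not spell out: a 2-D list of 0/1 values (0 = substrate/film, 1 = space), 4-connected neighbourhood, and growing the black region rather than shrinking it; the function name is hypothetical.

```python
def extract_contour(binary):
    """Grow the black (0) region of a binarized image by one pixel,
    then diff against the original; the pixels that changed form a
    one-pixel contour band between black and white regions."""
    h, w = len(binary), len(binary[0])
    grown = [row[:] for row in binary]
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1:
                # Turn a white pixel black if any 4-neighbour is black.
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 0:
                        grown[y][x] = 0
                        break
    # Difference image: 1 only where the pixel changed (the contour).
    return [[binary[y][x] ^ grown[y][x] for x in range(w)] for y in range(h)]
```

For a 5×5 image whose centre 3×3 block is white, the result is the one-pixel ring around the block's interior, matching the idea that only the boundary portion survives the difference.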
- the region detection unit 42b detects the region of each recess 50 in the image of the designated image data 31. For example, the region detection unit 42b detects the regions of the recesses 50 for each recess 50 in the image using the contour detection result obtained by the contour detection unit 42a.
- FIG. 5 is a diagram showing an example of the flow of detecting the area of each concave portion 50 of the image according to the embodiment.
- FIG. 5 shows an image in which the contour of the recess 50 is detected.
- let the horizontal direction of the image be the x-direction and the vertical direction be the y-direction.
- the region detection unit 42b specifies the range in which the contour of the concave portion 50 detected by the contour detection unit 42a exists in the y direction of the image.
- the area detection unit 42b obtains the minimum and maximum y coordinates of the pixels forming the outlines of the recesses 50, and takes the span between them as the range in the y direction that includes the plurality of recesses 50.
- the range including the concave portion 50 is indicated as Y Range in the y direction of the image.
- the area detection unit 42b detects the boundaries of the areas of the recesses 50 in the x direction from the specified range of the image. For example, the region detection unit 42b calculates the average luminance value of each pixel in the y direction for each position in the x direction of the image from the specified range of the image. The region detection unit 42b detects the region of each concave portion 50 from the specified range of the image based on the calculated average value of each position in the x direction.
- the area detection unit 42b extracts the image of the Y Range specified in the y direction and, for each position in the x direction, calculates the average luminance value of the pixels in the y direction within the extracted range.
- the area detection unit 42b arranges the average values at each position in the x direction in order of the position in the x direction to obtain a profile of the average values.
- FIG. 5 shows an average value profile AP in which the average values at each position in the x direction are arranged in the order of the position in the x direction.
- the area detection unit 42b binarizes each value of the profile AP of the average values in the x direction.
- for example, the area detection unit 42b obtains the average of the profile AP values and binarizes each value of the profile AP using that average as a threshold: values equal to or greater than the threshold are set to a first value, and values smaller than the threshold are set to a second value.
- FIG. 5 shows the binarized profile BP, where "0" is set when each value of the profile AP is equal to or greater than the threshold (average value), and "1" is set when it is smaller than the threshold.
- the area detection unit 42b detects the position of the center of each continuous portion where the second value is continuous in the binarized profile BP as the pattern boundary of the concave portion 50 in the x direction.
- the region detection unit 42b detects the position of the center of each continuous portion in which "1" is continuous in the binarized profile BP as the pattern boundary of the concave portion 50 in the x direction.
- the area detection unit 42b detects areas between the detected pattern boundaries as areas of the recesses 50 in the Y Range image. In FIG. 5, the area of each recess 50 detected from the image in which the outline of the recess 50 is detected is indicated by a rectangle S1.
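The profile-averaging and run-centre steps above can be sketched as a single helper. This is an illustrative assumption-laden sketch, not the patent's implementation: the function name is hypothetical, the image is a 2-D list of luminance values already cropped to the Y Range, and the polarity (treating below-average columns as the "1" runs whose centres are pattern boundaries) is assumed.

```python
def detect_pattern_boundaries(rows):
    """Average luminance over y for each x, binarize the resulting
    profile with its own mean as the threshold, and return the centre
    x position of each run of below-threshold values as a pattern
    boundary (cf. profiles AP and BP in FIG. 5)."""
    h, w = len(rows), len(rows[0])
    # Profile AP: average luminance at each x position.
    profile = [sum(rows[y][x] for y in range(h)) / h for x in range(w)]
    threshold = sum(profile) / w
    # Profile BP: 1 where below the threshold, 0 otherwise.
    binarized = [1 if v < threshold else 0 for v in profile]
    boundaries, start = [], None
    for x, v in enumerate(binarized + [0]):  # sentinel closes a final run
        if v == 1 and start is None:
            start = x
        elif v == 0 and start is not None:
            boundaries.append((start + x - 1) // 2)  # centre of the run
            start = None
    return boundaries
```

The regions between successive boundary positions would then be taken as the areas of the individual recesses 50, as in rectangle S1 of FIG. 5.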
- the boundary detection unit 42c uses the boundary detection model stored in the model data 32 to detect the boundary of the film included in the image of the specified image data 31. For example, the boundary detection unit 42c divides the image of the designated image data 31 into regions of the same predetermined size used when generating the boundary detection model, and applies the model to derive, for each region, information as to whether the region includes the film boundary. The boundary detection unit 42c then detects the boundary of the film in the image from the derived per-region information.
- FIG. 6 is a diagram showing an example of the flow of detecting boundaries of films included in an image according to the embodiment.
- FIG. 6 shows an image of designated image data 31 .
- the boundary detection unit 42c divides the image of the specified image data 31 into patch images 60, inputs the divided patch images 60 into the boundary detection model, and performs calculation.
- the boundary detection model outputs information as to whether or not the input patch image 60 contains the film boundary. For example, the boundary detection model outputs 1 if the patch is estimated to contain the film boundary and 0 if it is estimated not to.
- the boundary detection unit 42c calculates, for each position in the y direction of the image of the image data 31, the average value of the output values of the boundary detection model of each patch image 60 at the same position in the y direction.
- the boundary detection unit 42c obtains a profile of the average values by arranging the average values of each position in the y direction in the order of the positions in the y direction.
- the boundary detection unit 42c detects the boundary position of the film based on the average value at each position in the y direction.
- the average value, which can be interpreted as the probability that the row is a boundary, will be close to 1 at y positions containing the film boundary.
- the boundary detection unit 42c detects the position in the y direction where the average value is close to 1 as the film boundary position. For example, the boundary detection unit 42c detects a position where the average value is equal to or greater than a predetermined threshold value (for example, 0.8) as the film boundary position.
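The per-row averaging and thresholding just described reduces to a short helper. As an illustrative sketch (function name and grid-of-scores representation are assumptions), it takes the model's 0/1 output for each patch, arranged by its grid position, and reports the rows whose average score reaches the threshold:

```python
def detect_boundary_rows(patch_scores, threshold=0.8):
    """patch_scores[i][j] is the boundary detection model's 0/1 output
    for the patch at grid row i, column j.  A row whose average score
    is at or above the threshold is reported as a film-boundary row."""
    return [i for i, row in enumerate(patch_scores)
            if sum(row) / len(row) >= threshold]
```

With the default threshold of 0.8, a row where all patches vote 1 is detected, while a row where only two of three patches vote 1 (average ≈ 0.67) is not.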
- FIG. 7 is a diagram showing an example of detection results of film boundaries according to the embodiment.
- FIG. 7 shows an example of a cross-sectional image of the substrate.
- the films forming the sidewalls of the recess 50 are shown in different patterns, and the detected boundaries of the films in the y direction are indicated by lines L1 and L2.
- the film above the image recesses 50 is, for example, a mask.
- in FIG. 7, the boundary detection unit 42c has detected the boundary of the film. Detecting the film boundary in this way can also be used for automatic correction of rotational deviation of the image.
- the boundary detection unit 42c may use the film boundary line L2 to perform rotation correction of the image so that the line L2 is horizontal. As a result, it is possible to correct the rotation deviation of the image, and it is possible to easily grasp the positional relationship and film thickness of the film from the image.
- the information processing apparatus 10 can thus automatically detect the range of the concave portion 50 in the image and the outline of the concave portion 50, thereby improving the efficiency of dimension measurement.
- the measurement unit 43 measures the dimensions.
- the operation reception unit 40 displays on the display unit 21 an image obtained by detecting the contour of the recess 50 by the contour detection unit 42a, and receives from the input unit 22 the designation of the position of the contour of the recess 50 whose dimensions are to be measured.
- the measurement unit 43 measures dimensions such as the CD of the recess 50 at the designated contour position.
- the measurement unit 43 may automatically measure dimensions such as the CD of the concave portion 50 at a predetermined position on the contour without receiving designation of the position.
- the position for measuring the dimension may be set in advance, or may be set based on the detection results of the boundary detection section 42c and the contour detection section 42a.
- for example, the measurement unit 43 measures dimensions such as the CD at the height of the film boundary detected by the boundary detection unit 42c, using the contour of each recess 50 detected by the contour detection unit 42a.
- the measurement unit 43 may automatically measure dimensions such as the CD at predetermined positions of each recess 50, such as the top (TOP) of the recess 50, the center of the side wall (MIDDLE) in the recess 50, and the bottom (BOTTOM) of the recess 50.
- the measurement unit 43 may also measure dimensions such as CD at each position in the y direction from the edge profile of the contour of each recess 50 detected by the contour detection unit 42a.
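Measuring the CD at TOP, MIDDLE, and BOTTOM from the edge profile of a detected contour can be sketched as follows (the helper names and the row-wise edge representation are illustrative choices; the disclosure does not fix a data format):

```python
def measure_cd(left_edge, right_edge, y):
    """CD at height y: the horizontal distance between the left and right
    contour edges of a recess, each given as a list of x positions per y."""
    return right_edge[y] - left_edge[y]

def measure_top_middle_bottom(left_edge, right_edge):
    """CDs at the top, side-wall center (middle), and bottom of the recess."""
    top, bottom = 0, len(left_edge) - 1
    middle = (top + bottom) // 2
    return {name: measure_cd(left_edge, right_edge, y)
            for name, y in (("TOP", top), ("MIDDLE", middle), ("BOTTOM", bottom))}

# A tapered recess: the CD narrows from 20 px at TOP to 10 px at BOTTOM.
left = [40, 41, 42, 43, 44]
right = [60, 59, 58, 56, 54]
cds = measure_top_middle_bottom(left, right)
```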
- the output unit 44 outputs shape information of the specific shape detected by the detection unit 42.
- the output unit 44 displays on the display unit 21 the contour of the recess 50, the area of the recess 50, and the boundary of the film detected by the detection unit 42, along with the image of the designated image data 31.
- the output unit 44 outputs the measurement result measured by the measurement unit 43 from the specific shape detected by the detection unit 42.
- the output unit 44 displays the measured dimension together with the measurement position on the display unit 21.
- the output unit 44 may store the shape information of the specific shape detected by the detection unit 42 and the data of the measurement result in the storage unit 23, or may transmit the data to another device via the communication I/F unit 20.
- the output unit 44 may output only shape information that well represents features of interest by selecting a detection target region from a plurality of regions containing a specific shape on the image.
- the input unit 22 receives selection of a region of interest from the plurality of specific-shape regions displayed on the display unit 21.
- for example, the input unit 22 receives selection of the region of the recess 50 of interest from the plurality of regions of the recesses 50 displayed on the display unit 21.
- the output unit 44 may output only the shape information of the selected region of the recess 50.
- the output unit 44 may output only feature amounts that represent features of the recesses 50 of interest, such as dimensions like the CD of the selected recesses 50. Thereby, the process engineer can efficiently grasp the feature amounts of the recesses 50 of interest.
- the output unit 44 may perform profile selection such as outlier removal and maximum CD selection on the measurement result measured by the measurement unit 43, and output the selected measurement result.
- for example, the output unit 44 may remove abnormal values by applying an outlier detection method such as the 3σ rule to the TOP CDs of all the measured regions of the recesses 50, and output the selected measurement results.
- the output unit 44 may output the shape information and the measurement result of the concave portion 50 having the largest CD.
- the output unit 44 may output the maximum value among the TOP CDs of the concave portions 50 that have not been removed by the above-described outlier detection.
- instead of the maximum value, the output unit 44 may select based on the median value, or on the score of an unsupervised learning model such as the Local Outlier Factor.
- the output unit 44 may not only select and output the shape information and measurement results of a single recess 50, but may also calculate and output the average value or median value of the shape information and measurement results of a plurality of recesses 50.
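The outlier removal and maximum-CD selection can be sketched as follows (a minimal illustration applying the 3σ rule to TOP CDs; the function name and example data are assumptions, not values from the disclosure):

```python
import statistics

def select_profiles(top_cds, n_sigma=3.0):
    """Drop values outside mean +/- n_sigma * stdev (the 3-sigma rule) and
    return the surviving CDs together with their maximum."""
    mean = statistics.mean(top_cds)
    stdev = statistics.pstdev(top_cds)
    kept = [cd for cd in top_cds
            if stdev == 0 or abs(cd - mean) <= n_sigma * stdev]
    return kept, max(kept)

# One grossly abnormal TOP CD (120.0) falls outside 3 sigma and is removed;
# the maximum of the remaining measurements is then reported.
top_cds = [19.8, 20.0, 20.2, 19.9, 20.1, 20.0,
           19.7, 20.3, 20.0, 19.9, 20.1, 120.0]
kept, max_cd = select_profiles(top_cds)
```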
- FIG. 8 is a diagram schematically showing an example of the flow of the profile detection method according to the embodiment.
- the process engineer designates the image data 31 and instructs the start of profile detection.
- the detection unit 42 uses the model stored in the model data 32 to detect a specific shape from the image of the specified image data 31 .
- the contour detection unit 42a uses the contour detection model stored in the model data 32 to detect the contour of the concave portion 50 included in the image of the specified image data 31 (step S10).
- the area detection unit 42b detects the areas of the concave portions 50 for each concave portion 50 of the image using the contour detection result obtained by the contour detection unit 42a (step S11).
- the boundary detection unit 42c uses the boundary detection model stored in the model data 32 to detect the boundary of the film included in the image of the designated image data 31 (step S12).
- the processing of steps S10 and S11 and the processing of step S12 may be reversed in order, or may be performed in parallel.
- the measurement unit 43 measures the dimensions (step S13). For example, the measurement unit 43 measures dimensions such as the CD of the concave portion 50 at a predetermined position on the contour.
- the output unit 44 outputs shape information of the specific shape detected by the detection unit 42 (step S14). For example, the output unit 44 displays on the display unit 21 the contour of the recess 50, the area of the recess 50, and the boundary of the film detected by the detection unit 42, along with the image of the designated image data 31. The output unit 44 also outputs the measurement result of the measurement unit 43. The output unit 44 may perform profile selection, such as outlier removal and maximum CD selection, on the measurement result of the measurement unit 43 and output the selected measurement result.
- the information processing apparatus 10 can measure the dimensions of the concave portion 50 of the image in this way, thereby making the measurement of dimensions more efficient. As a result, the time required for dimension measurement can be shortened. In addition, since the information processing apparatus 10 can detect a contour that serves as a position for measuring dimensions, it is possible to reduce human-dependent errors that occur in the measured dimensions. In addition, the information processing apparatus 10 can efficiently measure the dimensions of many recesses 50 . For example, by automatically measuring the dimensions of each recess 50 included in the image, many measurements can be collected for data analysis. Further, by automatically measuring the dimension of each recess 50 included in the image and analyzing the measured dimension of each recess 50, an abnormal recess 50 can be detected.
- in the above embodiment, the case where the region detection unit 42b detects the region of the recess 50 using the contour detection result by the contour detection unit 42a has been described as an example. However, it is not limited to this.
- the area detection unit 42b may detect the area of the recess 50 without using the contour detection result by the contour detection unit 42a.
- the learning unit 41 generates an area detection model by machine learning from a plurality of data in which an image including the recess 50 is associated with an image showing the area of the recess 50, and stores the area detection model in the model data 32.
- the area detection unit 42b may detect the area of the recess 50 using the area detection model stored in the model data 32.
- the contour detection model described in the above embodiment is an example, and is not limited to this.
- the contour detection model may be any model as long as it can detect contours.
- the learning unit 41 may generate a contour detection model by machine learning from a plurality of data in which an image including the recess 50 is associated with a binarized image obtained by binarizing the contour portion of the image including the recess 50.
- the contour detection unit 42a may input the image of the designated image data 31 to the contour detection model, perform calculations, and detect the contour of the concave portion 50 from the binarized image output from the contour detection model.
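Extracting a contour from such a binarized model output can be sketched as follows (the row-wise left/right edge representation and the helper name are illustrative choices, not specified in the disclosure):

```python
def contour_from_binary(binary):
    """Per-row left/right contour x positions from a binarized model output.

    binary: 2-D list of 0/1 where 1 marks contour pixels (e.g. the model's
    output image thresholded at 0.5). Rows without contour pixels yield None."""
    edges = []
    for row in binary:
        xs = [x for x, v in enumerate(row) if v]
        edges.append((xs[0], xs[-1]) if xs else None)
    return edges

# A 3-row recess whose walls sit at x=1 and x=4, then narrow to x=2 and x=3.
binary = [
    [0, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0, 0],
]
edges = contour_from_binary(binary)
```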
- the boundary detection model and the area detection model are also examples, and are not limited to these.
- the boundary detection model may be any model as long as it can detect the boundary of the film.
- the area detection model may be any model as long as the area of the recess 50 can be detected.
- in the above embodiment, the case where the contour of the recess 50, the region of the recess 50, and the boundary of the film are individually detected has been described as an example. However, it is not limited to this. Any two or all of the contour of the recess 50, the region of the recess 50, and the film boundary may be detected using one model.
- the outline of the recess 50 and the boundary of the film may be detected using one model.
- for example, the learning unit 41 may generate a contour/boundary detection model by machine learning from a plurality of data in which an image including the recess 50 is associated with a binarized image obtained by binarizing the contour portion of the image and the boundary portion of the film.
- the detection unit 42 may input the image of the specified image data 31 to the generated contour/boundary detection model, perform calculation, and detect the contour of the recess 50 and the film boundary from the binarized image output from the contour/boundary detection model.
- FIG. 9 is a diagram schematically showing another example of the flow of the profile detection method according to the embodiment.
- FIG. 9 shows the case where the contour of the recess 50 and the boundary of the film are detected simultaneously.
- instead of steps S10 and S12 in FIG. 8, the detection unit 42 inputs the image of the specified image data 31 to the contour/boundary detection model, performs calculation, and detects the contour of the recess 50 and the boundary of the film from the binarized image output from the contour/boundary detection model (step S20).
- the images of the learning data 30 and the image data 31 may include out-of-focus images. Therefore, the learning unit 41 may perform learning by removing out-of-focus images.
- the detection unit 42 may detect the specific shape by removing out-of-focus images. For example, the learning unit 41 and the detection unit 42 apply a fast Fourier transform (FFT) to the entire image, and determine that the image is out of focus when the magnitude of high-frequency power is equal to or less than a threshold.
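The focus check described above can be sketched as follows (a minimal illustration; the frequency cutoff and the use of total spectral power as the "magnitude of high-frequency power" are assumed details, as the disclosure only names the FFT and a threshold):

```python
import numpy as np

def is_out_of_focus(image, threshold, cutoff=0.25):
    """Judge focus from the high-frequency power of the image's 2-D FFT.

    Frequency bins farther than `cutoff` (as a fraction of Nyquist) from the
    spectrum center count as high; if their total power is at or below
    `threshold`, the image is treated as out of focus."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized distance of each frequency bin from the spectrum center.
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return bool(spectrum[r > cutoff].sum() <= threshold)

# A sharp checkerboard has strong high-frequency content; a completely flat
# image (blurred to uniformity) has essentially none.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 1.0
flat = np.ones((32, 32))
```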
- the profile detection method according to the embodiment has the detection process (steps S10 to S12) and the output process (step S14).
- in the detection process, a specific shape included in the detection target image is detected from the detection target image including the specific shape, using a model that has learned a learning image including the specific shape and information about the specific shape included in the learning image.
- the output step outputs shape information of the detected specific shape.
- the profile detection method according to the embodiment can efficiently detect the specific shape included in the detection target image.
- the profile detection method according to the embodiment can efficiently measure the dimensions of the specific shape.
- the profile detection method according to the embodiment can efficiently detect the concave portion 50 included in the image, and can shorten the time required to measure the dimensions of the concave portion 50.
- the profile detection method according to the embodiment can reduce human-dependent errors that occur in measured dimensions.
- the profile detection method according to the embodiment can efficiently measure the dimensions of a large number of recesses 50.
- the detection process includes a contour detection process (step S10), a boundary detection process (step S12), and an area detection process (step S11).
- in the contour detection step, a contour of the specific shape included in the detection target image is detected from the detection target image.
- the boundary detection step detects the boundary of the film included in the detection target image from the detection target image.
- the region detection step detects a specific-shaped region included in the detection target image from the detection target image.
- At least one of the contour detection process, the area detection process, and the boundary detection process performs detection using a model.
- the profile detection method according to the embodiment can efficiently measure the dimension of the specific shape from the detected contour of the specific shape, the boundary of the film, and the specific shape area.
- the learning image and the detection target image are cross-sectional images of a semiconductor substrate in which a plurality of concave portions 50 representing cross sections of vias or trenches are arranged as specific shapes.
- the profile detection method according to the embodiment can efficiently detect the concave portion 50 included in the detection target image.
- the profile detection method according to the embodiment can efficiently measure the dimensions of the concave portion 50.
- the profile detection method according to the embodiment further has a measurement step (step S13).
- the measuring step measures the dimension of the detected specific shape.
- the profile detection method according to the embodiment can efficiently measure the dimension of the specific shape.
- the model learns information about the learning images and the contours of specific shapes included in the learning images.
- the contour detection step uses the model to detect contours of a specific shape included in the detection target image from the detection target image. Thereby, the profile detection method according to the embodiment can accurately detect the contour of the specific shape.
- the model learns, for each region of a predetermined size in the learning image, the image of the region and information as to whether the image of the region includes a boundary of the film.
- in the boundary detection step, for each region of the predetermined size in the detection target image, the model is used to derive information as to whether the image of the region includes a boundary of the film, and the boundary of the film in the detection target image is detected from the derived per-region information. As a result, the profile detection method according to the embodiment can accurately detect the boundary of the film.
- the area detection step identifies the range of the specific shape in one direction of the detection target image and in a direction crossing the one direction, from the contour of the specific shape detected by the contour detection step, and detects the identified range as the region of the specific shape.
- the profile detection method according to the embodiment can accurately detect a specific-shaped region.
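Identifying the range of a shape in both directions from its detected contour points can be sketched as follows (a hypothetical helper; the point-list contour representation is an assumption):

```python
def region_from_contour(points):
    """Range of the specific shape in one direction (x) and in the crossing
    direction (y), from the (x, y) points of its detected contour."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), max(xs)), (min(ys), max(ys))

# Four contour points of a recess spanning x in [10, 30] and y in [4, 41].
contour = [(10, 5), (12, 40), (30, 41), (28, 4)]
region = region_from_contour(contour)
```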
- the profile detection method can output only shape information representing features of interest by selecting a detection target region.
- the profile detection method according to the embodiment can output only the feature amount of the recess 50 representing the feature of interest by selecting the detection target area.
- in the above embodiment, the case of measuring the dimensions of a recess of a semiconductor device formed on a substrate such as a semiconductor wafer has been described as an example.
- the substrate may be any substrate such as, for example, a glass substrate.
- the profile detection method according to the embodiment may be applied to measurement of dimensions of recesses of any substrate.
- the profile detection method according to the embodiment may be applied to measure the dimensions of recesses formed in a substrate for FPD.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Electromagnetism (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Geometry (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Abstract
Description
An embodiment will be described. In the following, a case where the information processing apparatus 10 measures the dimensions of a specific shape, such as a recess, included in a captured image will be described as an example. FIG. 1 is a diagram showing an example of the functional configuration of the information processing apparatus 10 according to the embodiment. The information processing apparatus 10 is an apparatus that provides a function of measuring the dimensions of a specific shape included in a captured image. The information processing apparatus 10 is, for example, a computer such as a server computer or a personal computer. A process engineer uses the information processing apparatus 10 to measure the dimensions of recesses in a captured image. The information processing apparatus 10 corresponds to the profile detection apparatus of the present disclosure.
Next, the flow of the profile detection method according to the embodiment will be described. The information processing apparatus 10 according to the embodiment carries out the profile detection method by executing a profile detection program. FIG. 8 is a diagram schematically showing an example of the flow of the profile detection method according to the embodiment.
20 communication I/F unit
21 display unit
22 input unit
23 storage unit
24 control unit
30 learning data
31 image data
32 model data
40 operation reception unit
41 learning unit
42 detection unit
42a contour detection unit
42b area detection unit
42c boundary detection unit
43 measurement unit
44 output unit
50 recess
Claims (10)
- A profile detection method comprising: a detection step of detecting, using a model that has learned a learning image including a specific shape and information about the specific shape included in the learning image, the specific shape included in a detection target image from the detection target image including the specific shape; and an output step of outputting shape information of the detected specific shape.
- The profile detection method according to claim 1, wherein the detection step includes: a contour detection step of detecting, from the detection target image, a contour of the specific shape included in the detection target image; a boundary detection step of detecting, from the detection target image, a boundary of a film included in the detection target image; and an area detection step of detecting, from the detection target image, a region of the specific shape included in the detection target image, and at least one of the contour detection step, the area detection step, and the boundary detection step performs detection using the model.
- The profile detection method according to claim 1 or 2, wherein the learning image and the detection target image are images of a cross section of a semiconductor substrate in which a plurality of recesses showing cross sections of vias or trenches are arranged as the specific shape.
- The profile detection method according to any one of claims 1 to 3, further comprising a measurement step of measuring a dimension of the detected specific shape, wherein the output step outputs the measured dimension.
- The profile detection method according to claim 2, wherein the model has learned the learning image and information about the contour of the specific shape included in the learning image, and the contour detection step detects, using the model, the contour of the specific shape included in the detection target image from the detection target image.
- The profile detection method according to claim 2 or 5, wherein the model has learned, for each region of a predetermined size in the learning image, the image of the region and information as to whether the image of the region includes a boundary of a film, and the boundary detection step derives, using the model, for each region of the predetermined size in the detection target image, information as to whether the image of the region includes a boundary of a film, and detects the boundary of the film included in the detection target image from the derived information for each region.
- The profile detection method according to any one of claims 2, 5, and 6, wherein the area detection step identifies a range of the specific shape in one direction of the detection target image and in a direction crossing the one direction from the contour of the specific shape detected by the contour detection step, and detects the identified range as the region of the specific shape.
- The profile detection method according to any one of claims 1 to 7, wherein the output step outputs only shape information representing a feature of interest by selecting a detection target region from a plurality of regions including the specific shape on an image.
- The profile detection method according to any one of claims 1 to 7, wherein the output step outputs only a recess feature amount representing a feature of interest by selecting a detection target region from a plurality of regions including recess shapes on an image.
- A profile detection apparatus comprising: a detection unit configured to detect, using a model that has learned a learning image including a specific shape and information about the specific shape included in the learning image, the specific shape included in a detection target image from the detection target image including the specific shape; and an output unit configured to output shape information of the specific shape detected by the detection unit.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020247029314A KR20240154557A (ko) | 2022-03-03 | 2022-03-18 | 프로파일 검출 방법 및 프로파일 검출 장치 |
| JP2024504340A JPWO2023166748A1 (ja) | 2022-03-03 | 2022-03-18 | |
| US18/820,492 US20240420359A1 (en) | 2022-03-03 | 2024-08-30 | Profile detecting method and profile detecting apparatus |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263316125P | 2022-03-03 | 2022-03-03 | |
| US63/316,125 | 2022-03-03 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/820,492 Continuation US20240420359A1 (en) | 2022-03-03 | 2024-08-30 | Profile detecting method and profile detecting apparatus |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023166748A1 true WO2023166748A1 (ja) | 2023-09-07 |
Family
ID=87883524
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2022/012518 Ceased WO2023166748A1 (ja) | 2022-03-03 | 2022-03-18 | プロファイル検出方法及びプロファイル検出装置 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20240420359A1 (ja) |
| JP (1) | JPWO2023166748A1 (ja) |
| KR (1) | KR20240154557A (ja) |
| WO (1) | WO2023166748A1 (ja) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7642913B1 (ja) | 2024-09-30 | 2025-03-10 | 太陽ホールディングス株式会社 | 輪郭抽出方法、輪郭抽出システム、及び輪郭抽出プログラム |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007129059A (ja) * | 2005-11-04 | 2007-05-24 | Hitachi High-Technologies Corp | 半導体デバイス製造プロセスモニタ装置および方法並びにパターンの断面形状推定方法及びその装置 |
| JP2012068138A (ja) * | 2010-09-24 | 2012-04-05 | Toppan Printing Co Ltd | パターン画像測定方法及びパターン画像測定装置 |
| WO2021024402A1 (ja) * | 2019-08-07 | 2021-02-11 | 株式会社日立ハイテク | 寸法計測装置、寸法計測方法及び半導体製造システム |
| WO2021260765A1 (ja) * | 2020-06-22 | 2021-12-30 | 株式会社日立ハイテク | 寸法計測装置、半導体製造装置及び半導体装置製造システム |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6133603B2 (ja) | 2013-01-21 | 2017-05-24 | 株式会社日立ハイテクノロジーズ | 荷電粒子線装置用の検査データ処理装置 |
- 2022-03-18 JP JP2024504340A patent/JPWO2023166748A1/ja active Pending
- 2022-03-18 WO PCT/JP2022/012518 patent/WO2023166748A1/ja not_active Ceased
- 2022-03-18 KR KR1020247029314A patent/KR20240154557A/ko active Pending
- 2024-08-30 US US18/820,492 patent/US20240420359A1/en active Pending
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007129059A (ja) * | 2005-11-04 | 2007-05-24 | Hitachi High-Technologies Corp | 半導体デバイス製造プロセスモニタ装置および方法並びにパターンの断面形状推定方法及びその装置 |
| JP2012068138A (ja) * | 2010-09-24 | 2012-04-05 | Toppan Printing Co Ltd | パターン画像測定方法及びパターン画像測定装置 |
| WO2021024402A1 (ja) * | 2019-08-07 | 2021-02-11 | 株式会社日立ハイテク | 寸法計測装置、寸法計測方法及び半導体製造システム |
| WO2021260765A1 (ja) * | 2020-06-22 | 2021-12-30 | 株式会社日立ハイテク | 寸法計測装置、半導体製造装置及び半導体装置製造システム |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7642913B1 (ja) | 2024-09-30 | 2025-03-10 | 太陽ホールディングス株式会社 | 輪郭抽出方法、輪郭抽出システム、及び輪郭抽出プログラム |
Also Published As
| Publication number | Publication date |
|---|---|
| US20240420359A1 (en) | 2024-12-19 |
| KR20240154557A (ko) | 2024-10-25 |
| JPWO2023166748A1 (ja) | 2023-09-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7265592B2 (ja) | 多層構造体の層間のオーバレイを測定する技法 | |
| JP4014379B2 (ja) | 欠陥レビュー装置及び方法 | |
| US7786437B2 (en) | Pattern inspection method and pattern inspection system | |
| JP5707423B2 (ja) | パターンマッチング装置、及びコンピュータープログラム | |
| JP5364528B2 (ja) | パターンマッチング方法、パターンマッチングプログラム、電子計算機、電子デバイス検査装置 | |
| JP7427744B2 (ja) | 画像処理プログラム、画像処理装置、画像処理方法および欠陥検出システム | |
| JP2011197120A (ja) | パターン評価方法及びパターン評価装置 | |
| CN114419045A (zh) | 光刻掩模板缺陷检测方法、装置、设备及可读存储介质 | |
| JP7164716B2 (ja) | 寸法計測装置、半導体製造装置及び半導体装置製造システム | |
| US20240420359A1 (en) | Profile detecting method and profile detecting apparatus | |
| JP4230980B2 (ja) | パターンマッチング方法およびプログラム | |
| JP5389456B2 (ja) | 欠陥検査装置および欠陥検査方法 | |
| JP7490094B2 (ja) | 機械学習を用いた半導体オーバーレイ測定 | |
| JP7062563B2 (ja) | 輪郭抽出方法、輪郭抽出装置、及びプログラム | |
| JP7483061B2 (ja) | プロファイル検出方法、プロファイル検出プログラム及び情報処理装置 | |
| JP5758423B2 (ja) | マスクレイアウトの作成方法 | |
| TWI600898B (zh) | 資料修正裝置、描繪裝置、檢查裝置、資料修正方法、描繪方法、檢查方法及記錄有程式之記錄媒體 | |
| JP2015099062A (ja) | パターン外観検査装置 | |
| JP4604582B2 (ja) | パターン画像計測方法 | |
| KR20190088761A (ko) | 웨이퍼 측정 설비, 웨이퍼 측정 시스템 및 이를 이용한 반도체 장치의 제조 방법 | |
| CN119722721B (zh) | 图像处理方法、电子设备及计算机可读存储介质 | |
| JP7608298B2 (ja) | 検査装置及び検査方法 | |
| JP2014021684A (ja) | 測定装置のテンプレート生成装置 | |
| JP2007064884A (ja) | 放射状欠陥の検出装置、検出方法およびコンピュータを当該検出装置として機能させるためのプログラム |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22929892 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2024504340 Country of ref document: JP Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 20247029314 Country of ref document: KR Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 22929892 Country of ref document: EP Kind code of ref document: A1 |