
WO1992011609A1 - Method and apparatus for automatically measuring the location and separation of linear edges - Google Patents

Method and apparatus for automatically measuring the location and separation of linear edges

Info

Publication number
WO1992011609A1
Authority
WO
WIPO (PCT)
Prior art keywords
slope
intercept
edge points
line
edge
Prior art date
Application number
PCT/US1991/009477
Other languages
English (en)
Inventor
James Maples
Original Assignee
Optical Specialities, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Optical Specialities, Inc. filed Critical Optical Specialities, Inc.
Publication of WO1992011609A1 publication Critical patent/WO1992011609A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Definitions

  • This invention relates generally to the field of automated measurement techniques and, more specifically, it relates to a method and apparatus for detecting collinear edge points in a digitized image and measuring the distance between two virtually parallel edges.
  • the measurement of the width of a strip of material on a semiconductor device may require a measurement of the distance between the substantially parallel edges.
  • registration of two layers of a semiconductor product may be accomplished by
  • Automated systems exist for performing these measurements by capturing an image of the device, digitizing the image, and analyzing the digitized image. The analysis of the image requires identifying the points in the image which represent the edges, "fitting" a straight line to each edge, and then determining the separation between the two lines.
  • mapping requires large amounts of computational resources because it requires repeated calculation of transcendental trigonometric functions.
  • the invention provides a method and apparatus for detecting collinear edge points and measuring the distance between two edges in a digitized image without repeated calculation of transcendental functions.
  • An image is captured and digitized.
  • the edge points in the digitized image are identified and processed to select collinear edge points by finding the straight line passing through the maximum number of edge points.
  • the search for this line is performed by starting with an expected slope value and varying the slope by an amount corresponding to a predetermined angular increment.
  • the y-intercepts are calculated for the straight lines passing through each edge point, and the best slope/intercept pair is determined.
  • the points lying on or near the line defined by the best slope/ intercept pair are selected and used to generate a straight line.
  • the process is repeated for two edges and the distance between the two lines is measured.
  • Fig. 1 is a block diagram of an apparatus for practicing the invention
  • Figs. 2A-2B are a flow chart of the method used in the preferred embodiment to process edge data
  • Fig. 3 is a diagram of an edge and the resulting grey scale representation thereof; and Figs. 4A-4C are diagrams of trial slopes applied to edge points and the histograms resulting from these trials.
  • a system 10 for performing measurements according to the invention is shown.
  • a microscope 12 is focused on an area of the subject, which in this application is a semiconductor wafer 14.
  • a camera 16 captures an image from microscope 12 and passes the image in electronic form to an image digitizer 20.
  • Digitizer 20 converts the image to an ordered array of digital picture elements (pixels) and transmits the digital data to computer 22 for processing.
  • each pixel is an 8-bit "grey-scale" representation of a portion of the area being viewed, with a value in the range 0-255.
  • a keyboard 24 and a display screen 26 are coupled to computer 22 for input and output of parameters and commands.
  • Control circuits 26 contain circuitry for controlling the parameters of the image capture system, such as focus, lighting (illumination), magnification, wafer position, etc.
  • the image obtained from microscope 12 and camera 16 is filtered and analyzed to determine the location, size and spacing of various features on semiconductor wafer 14. Many of the features that are analyzed require measuring the distance between two straight edges. This is accomplished by determining where in the image the two edges are, fitting a line to each edge, and then determining the separation between the two lines. The line fitting process results in a mathematical characterization of a line which approximates the true edge.
  • Computer 22 analyzes the digitized image data to determine which pixels in the image correspond to points on an edge.
  • edge-detection techniques are available for this purpose.
  • the pixels in each scan line are examined to determine the amount of contrast between adjacent pixels, i.e., the change in grey scale values.
  • when the grey scale value crosses a predetermined threshold value with a predetermined amount of "rise" (gradient), an edge point is identified.
  • the threshold and rise amount can be calibrated for a particular application in accordance with the other viewing system parameters.
  • the edge points are calculated to sub-pixel accuracy by interpolating between actual pixel coordinates.
  • the resulting floating-point edge coordinates are stored in a list of edge points for further analysis.
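  • As an illustration of this kind of threshold-crossing detector, a minimal C sketch is given below (hypothetical code, not the Appendix A routine; THRESHOLD, MIN_RISE and find_edges are assumed names, and the real values are calibrated per application as noted above):

        /* Sketch: scan one row of grey values, record sub-pixel edge positions. */
        #include <stdio.h>

        #define THRESHOLD 128   /* grey level the edge must cross (assumed value)      */
        #define MIN_RISE   30   /* minimum contrast between adjacent pixels (assumed)  */

        /* Returns the number of edge points found; sub-pixel x positions in edge_x. */
        int find_edges(const unsigned char *row, int width, double *edge_x, int max_edges)
        {
            int n = 0;
            for (int x = 0; x + 1 < width && n < max_edges; x++) {
                int rise = (int)row[x + 1] - (int)row[x];
                /* edge: grey value crosses THRESHOLD with a sufficient gradient */
                if (rise >= MIN_RISE && row[x] < THRESHOLD && row[x + 1] >= THRESHOLD)
                    /* linear interpolation between the two pixels gives sub-pixel accuracy */
                    edge_x[n++] = x + (double)(THRESHOLD - row[x]) / rise;
            }
            return n;
        }

        int main(void)
        {
            unsigned char row[] = { 20, 22, 25, 40, 180, 210, 215 };
            double ex[8];
            int n = find_edges(row, (int)(sizeof row), ex, 8);
            for (int i = 0; i < n; i++)
                printf("edge at x = %.2f\n", ex[i]);
            return 0;
        }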
  • not all of the points that are located will be collinear; there is always some noise in the image.
  • the image can be corrupted by dirt on the wafer, by slight imperfections in the formation of the semiconductor features, or other problems.
  • These non-collinear points can distort any attempt to find a line that corresponds to the "true" edge.
  • the invention provides a method for compensating for imperfections and obtaining reliable, reproducible line fitting results.
  • a flow chart 100 illustrates the method for processing the edge point data to find the line which best represents the true edge.
  • the method illustrated in Figs. 2A-2B is implemented by the computer program supplied herewith as Appendix A, which is executed on the computer system shown and described hereinabove with reference to Fig. 1.
  • the computer program is written in the "C" programming language.
  • the input to the computer program of Appendix A is the floating point edge data, representing each edge point by its x-y coordinates.
  • the verification percentage is a measure of the proportion of the points which are on or close to the line.
  • the data is analyzed in stages, by making a quick initial estimate of the line and determining whether further processing is required. If the initial estimate is not sufficiently accurate, then the edge data is further analyzed to determine which points correspond to true edge points and which points correspond to noise or imperfections. Then a line is fit to the true edge points.
  • At Step 110, the edge points are determined by the method described above.
  • At Step 120, a standard "least squares" algorithm is applied to the edge point data to find the line which is the best fit to the edge points.
  • At Step 130, the RMS error found by the least squares method is tested. If the RMS error is less than .5 pixels, then the edge data is assumed to be good and no further processing is performed. If the RMS error is in an intermediate range (.5 to 1.5 pixels), then at Step 135 a verification percentage is calculated for the least squares line.
  • the verification percentage is a calculation of what percentage of the edge points fall within a given distance (closer than 1.5 pixels) of the fitted line. If the verification percentage is more than 90%, then no further processing is required. Of course, the variables used to determine whether more processing is required (the high and low RMS error, the verification percentage and the verification distance) can be adjusted for best results for a particular application. If the tests applied in Steps 130 and 135 both fail, this indicates that there are some bad points in the edge point data which do not correspond to the true edge. The edge data is further analyzed to determine which edge points correspond to the true edge and which correspond to noise or imperfections.
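  • A minimal sketch of this first-stage fit and decision is shown below (hypothetical code, not the Appendix A routine; it assumes the edge is parameterized as y = m*x + b, i.e. not vertical, and uses the threshold values quoted above):

        /* Sketch: least-squares fit of y = m*x + b, RMS error, and staged decision. */
        #include <math.h>
        #include <stddef.h>

        /* Fit y = m*x + b to n points and return the RMS residual in pixels.
         * Assumes the x values are not all identical (non-vertical parameterization). */
        double fit_line(const double *x, const double *y, size_t n, double *m, double *b)
        {
            double sx = 0, sy = 0, sxx = 0, sxy = 0, sum_sq = 0;
            for (size_t i = 0; i < n; i++) {
                sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
            }
            *m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            *b = (sy - *m * sx) / n;
            for (size_t i = 0; i < n; i++) {
                double r = y[i] - (*m * x[i] + *b);
                sum_sq += r * r;
            }
            return sqrt(sum_sq / n);
        }

        /* Nonzero if the quick fit is good enough and no further processing is needed. */
        int quick_fit_ok(double rms, double verify_pct)
        {
            if (rms < 0.5)                        /* Step 130: clearly good data          */
                return 1;
            if (rms <= 1.5 && verify_pct > 90.0)  /* Step 135: intermediate RMS, verified */
                return 1;
            return 0;                             /* fall through to point selection      */
        }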
  • the selection of "true" (substantially collinear) edge points is performed by finding the slope and intercept of a line which has the maximum number of edge points on it. The line is then used as a basis for selecting the points to be used to fit the final line. Referring to Fig. 2B, the details of this selection method are shown.
  • At Step 140, an initial guess is made at the slope of the line. In many applications, such as registration of layers of semiconductors, it is known in advance what the expected slope of the edge is. This expected slope will be used as the first trial slope.
  • the edge data is converted to integer x-y pixel values to speed up the calculation.
  • An n x 4 array (“EDGE") is created, where n is the number of edge points. The columns of the EDGE array will store the x pixel value, the y pixel value, a y-intercept value and a flag.
  • the trial slope is used to calculate, for each point, the y-intercept of a line having the trial slope and passing through the point.
  • the y-intercept (b) of a line passing through the point (x, y) and having slope m can be calculated as b = y - m * x.
  • the resulting y-intercept is stored in the EDGE array.
  • a histogram is constructed of the y-intercepts of lines passing through each edge point and having slope equal to the trial slope.
  • the y-intercept values are quantized into "bins" one pixel wide, and each bin contains the number of edge points on the corresponding line. The count of edge points in the histogram bin containing the most edge points is selected as the peak value for this trial slope.
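  • A minimal sketch of this intercept-histogram step is shown below (hypothetical code; the corresponding Appendix A routine is match_slope, and NBINS and the bin offset are assumptions):

        /* Sketch: accumulate a histogram of y-intercepts for one trial slope,
         * using one-pixel-wide bins, and return the index of the peak bin. */
        #include <math.h>

        #define NBINS 2048   /* assumed histogram size */

        int intercept_histogram(const int *xs, const int *ys, int n, double m,
                                int *hist /* NBINS entries, zeroed by the caller */)
        {
            int peak = -1;
            for (int i = 0; i < n; i++) {
                double b = ys[i] - m * xs[i];          /* y-intercept of the line through this point */
                int bin = (int)floor(b) + NBINS / 2;   /* offset so negative intercepts fit */
                if (bin < 0 || bin >= NBINS)
                    continue;                          /* intercept out of range: ignore the point */
                hist[bin]++;
                if (peak < 0 || hist[bin] > hist[peak])
                    peak = bin;
            }
            return peak;                               /* -1 if no point fell in range */
        }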
  • a new trial slope is selected.
  • the angle (θ) of the expected slope is calculated using the inverse tangent function.
  • the angle is incremented in a first direction by a predetermined amount (to the left by .5 degrees, in this embodiment).
  • the tangent of the new angle is calculated to determine the next trial slope.
  • the y-intercepts of the lines having the new slope and passing through each point are calculated, a histogram is constructed, and the bin with the most points is determined. If the maximum number of points is greater than the previous maximum, then the search continues in the same direction, with the slope being incremented by an amount corresponding to .5 degrees in that direction.
  • the search then proceeds in the other direction until one of these conditions is met.
  • the trial slope corresponding to the histogram with the highest peak is selected as the best slope.
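  • The search itself can be pictured with the sketch below (hypothetical code, not the Appendix A slope_search; it reuses the intercept_histogram sketch above, takes the arctangent of the expected slope once, and calls the tangent function only once per trial slope):

        /* Sketch: hill-climbing search over trial slopes in 0.5 degree steps. */
        #include <math.h>
        #include <string.h>

        #define DEG2RAD(d)  ((d) * 3.14159265358979323846 / 180.0)
        #define ANG_INC     DEG2RAD(0.5)   /* angular step, .5 degrees as in the text    */
        #define ANG_MAX     DEG2RAD(10.0)  /* assumed maximum excursion from the start   */
        #define LIMIT_TEST  3              /* assumed number of declines before stopping */
        #define NBINS       2048

        int intercept_histogram(const int *xs, const int *ys, int n, double m, int *hist);

        double slope_search(const int *xs, const int *ys, int n, double expected_slope)
        {
            double start_ang = atan(expected_slope);   /* transcendental call made only once */
            double best_slope = expected_slope;
            int best_peak = -1;
            static int hist[NBINS];

            for (int dir = -1; dir <= 1; dir += 2) {   /* search each direction in turn */
                int declines = 0;
                for (int step = 0; declines < LIMIT_TEST && step * ANG_INC <= ANG_MAX; step++) {
                    double m = tan(start_ang + dir * step * ANG_INC);
                    memset(hist, 0, sizeof hist);
                    int peak_bin = intercept_histogram(xs, ys, n, m, hist);
                    int peak = (peak_bin >= 0) ? hist[peak_bin] : 0;
                    if (peak > best_peak) {            /* improving: keep going this way */
                        best_peak = peak;
                        best_slope = m;
                        declines = 0;
                    } else {
                        declines++;                    /* result fell: count toward the stop limit */
                    }
                }
            }
            return best_slope;
        }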
  • a subroutine for performing the search for the best slope (i.e., the slope of the line with the most pixels on it) is given in Appendix A as the function slope_search.
  • the variables ANG_INC, ANG_MAX and LIMIT_TEST determine the angular increment for the search, the maximum angle from the expected slope angle, and the number of times the result must decrease for the search to stop, respectively. These parameters may be varied for the best results in a particular application.
  • the start_ang variable is calculated once from the expected slope using the arctangent function.
  • the slope for each new test is calculated once, using start_ang, the angular increment, and the tangent function, before the call to subroutine match_slope.
  • the invention thus increases computational efficiency by minimizing the use of transcendental functions. This is in contrast to the prior art methods which use angle- radius coordinates, requiring repeated calculation of trigonometric functions.
  • subroutine slope_search calls subroutine match_slope to accumulate the histogram for that slope.
  • Subroutine match_slope computes the y-intercepts and calls subroutine histogram once for each value to enter the value into the histogram.
  • Subroutine match_slope finds the peak bin in the histogram for the trial slope and returns to subroutine slope_search for calculation of the next trial slope.
  • Subroutine slope_search determines the peak value from all histograms and the slope corresponding to this peak value. Subroutine slope_search then makes one final call to subroutine match_slope to rebuild the histogram for the best slope.
  • the histogram for that slope is further examined (Step 175).
  • all edge points whose y-intercept values are identical will lie on the same line. In practice, however, edge points that lie along the same line will not necessarily have identical y-intercepts, although they will be very close. It is likely that some of the good edge points will, because of noise, fall into bins immediately adjacent to the peak bin. Therefore, on the last call to subroutine match_slope, the peak bin and adjacent bins are examined and good edge points are selected and flagged. Starting with the peak bin of the histogram, either the left or right adjacent bin, whichever is higher, is added to the selected points.
  • the selected points are flagged in the fourth column of the EDGE array.
  • an additional scan is made through the edge data and, if any point is not selected, then the adjacent points on both sides of that point are also marked as "bad". This takes care of the common case where edge data is good for a portion of the image and is then corrupted by dirt. The last edge point along the good portion of the line may be perturbed by the presence of dirt, but its data may still be good enough to have been selected. This final scan will throw out all such points that are adjacent to noisy areas, avoiding a small perturbation in the results.
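  • A simplified sketch of this selection pass is shown below (hypothetical code; the Appendix A routine match_slope records the selection in the flag column of the EDGE array, and may widen the selection to further adjacent bins, as described in the parameter notes further down):

        /* Sketch: select points whose intercept bin is the peak bin or its higher
         * neighbour, then drop points adjacent to rejected ones.  Assumes peak_bin
         * is not the first or last bin of hist. */
        #include <stdlib.h>
        #include <string.h>

        void select_points(const int *bin_of_point, int n, int peak_bin,
                           const int *hist, char *selected)
        {
            /* whichever adjacent bin is higher is added to the selection */
            int extra = (hist[peak_bin - 1] >= hist[peak_bin + 1]) ? peak_bin - 1
                                                                   : peak_bin + 1;
            for (int i = 0; i < n; i++)
                selected[i] = (bin_of_point[i] == peak_bin || bin_of_point[i] == extra);

            /* final scan: points next to an unselected point are also dropped, so edge
             * data perturbed by adjacent dirt does not skew the fit.  A snapshot is
             * used so that the rejections do not cascade along the whole edge. */
            char *snap = malloc((size_t)n);
            memcpy(snap, selected, (size_t)n);
            for (int i = 0; i < n; i++) {
                if (!snap[i]) {
                    if (i > 0)     selected[i - 1] = 0;
                    if (i + 1 < n) selected[i + 1] = 0;
                }
            }
            free(snap);
        }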
  • a final least squares line fit is performed, using only the selected points. It will be noted that although the selection of good edge points was performed with integer arithmetic, allowing the program to execute faster, the final line fitting calculation is now performed with floating point edge data.
  • a verification percentage is calculated for the line obtained by the least squares fit.
  • the verification percentage determines if the method was able to extract a valid line from the data. (It is possible, of course, that there is no valid line.)
  • the verification percentage is a calculation of what percentage of the edge points fall within a given distance (1.5 pixels in this embodiment) of the fitted line. If the percentage falls below a limit (which may be set by the user), then the data is rejected as too noisy.
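  • That check can be sketched as follows (hypothetical code; VERIFY_DISTANCE mirrors the 1.5 pixel value quoted above, and perpendicular point-to-line distance is used here, which may differ in detail from the Appendix A calculation):

        /* Sketch: percentage of edge points within VERIFY_DISTANCE of y = m*x + b. */
        #include <math.h>

        #define VERIFY_DISTANCE 1.5   /* pixels, as quoted in the text */

        double verify_percentage(const double *x, const double *y, int n,
                                 double m, double b)
        {
            int close = 0;
            double norm = sqrt(m * m + 1.0);   /* normalizer for perpendicular distance */
            for (int i = 0; i < n; i++) {
                double d = fabs(m * x[i] - y[i] + b) / norm;
                if (d <= VERIFY_DISTANCE)
                    close++;
            }
            return n > 0 ? 100.0 * close / n : 0.0;
        }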
  • the distance between the lines can be measured. If the slopes are not equal, the distance can be measured at the center point on each line.
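  • One plausible reading of that measurement is sketched below (hypothetical code): take the center point (xc, yc) of the segment fitted to the first edge and compute its perpendicular distance to the second fitted line y = m2*x + b2.

        /* Sketch: separation of two nearly parallel lines, measured at a center point. */
        #include <math.h>

        double line_separation(double xc, double yc, double m2, double b2)
        {
            /* perpendicular distance from (xc, yc) to the line y = m2*x + b2 */
            return fabs(m2 * xc - yc + b2) / sqrt(m2 * m2 + 1.0);
        }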
  • the resulting measurement can be displayed, directly or indirectly, on display screen 26 (Fig. 1).
  • a number may be displayed, or a graphical indication of the measurement may be displayed.
  • a vector map is displayed, showing areas that are not properly registered, with a graphical indication of the amount of the error.
  • Figs. 3-4 illustrate some typical results of the above-described method in operation.
  • Fig. 3 illustrates the grey scale scanlines generated from a camera scan of a straight edge with a bump.
  • Fig. 4A illustrates the histogram 40 resulting from calculation of y-intercepts of lines passing through the edge data at the expected vertical slope 42. The bump in the material shows up as a small peak in the histogram at this slope.
  • Fig. 4B illustrates the histogram 44 generated at a typical search slope 46.
  • Fig. 4C illustrates the histogram 48 generated when the search slope 50 corresponds to the actual slope of the edge. A very prominent peak in the histogram is generated when the actual edge slope is used.
  • the bump is reflected in the histogram by small side lobes.
  • the range of angles chosen for search slopes in Figs. 4A-4C has been exaggerated for purposes of illustration.
  • /* VERIFY_DISTANCE is the distance in pixels that an edge point can be from the calculated edge to count as a verification point */
  • /* match_slope() analyzes the histogram to mark edge points for selection. It includes the max bin plus enough adjacent side bins so that SELECT_PCT of the edge points are selected. If some almost empty bins are found before SELECTION_PCT is reached, then the selection process terminates immediately. This avoids including noise points in a very noisy picture. */
  • edge_of_edge_of_edge() Calculate the integer X, Y pixel values of each edge point and stuff it in edge array.
  • inputs:
        coords[][2] — floating point edge data
        count — length of coords[][2] (and edge[][4])
        edge[][4] — dynamically allocated companion array for edge data
    outputs:
        edge[i][0] — X pixel value for coords[i][0]
        edge[i][1] — Y pixel value for coords[i][1]
        edge[i][2] — y-intercept for edge point i
        edge[i][3] — flag word for edge point i
  • *rms = *rms / y_pix_siz; return(0);
  • histogram() accumulates histogram data. Called once for each value entered into the histogram. inputs: value — the value to be accumulated
  • BAD_Y_INT; } } } return(0);

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

Method and apparatus for detecting collinear edge points and measuring the distance between two edges in a digitized image. An image is captured (16) and digitized (20). The pixels corresponding to edge points are identified and processed so as to select collinear edge points by finding the straight line passing through the maximum number of edge points. The search for this line is performed by starting with an expected slope value (14a) and varying the slope by an amount corresponding to a predetermined angular increment. For each trial slope, the y-intercepts of the straight lines passing through each edge point are calculated (15a), and the best slope/intercept pair is determined. The points lying on or near the line defined by the best slope/intercept pair are selected and used to generate a straight line. The process is repeated for two edges and the distance between the two lines is measured.
PCT/US1991/009477 1990-12-19 1991-12-17 Method and apparatus for automatically measuring the location and separation of linear edges WO1992011609A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US63223290A 1990-12-19 1990-12-19
US632,232 1990-12-19

Publications (1)

Publication Number Publication Date
WO1992011609A1 true WO1992011609A1 (fr) 1992-07-09

Family

ID=24534650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/009477 WO1992011609A1 (fr) 1990-12-19 1991-12-17 Method and apparatus for automatically measuring the location and separation of linear edges

Country Status (1)

Country Link
WO (1) WO1992011609A1 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0635804A1 (fr) * 1993-07-22 1995-01-25 Koninklijke Philips Electronics N.V. Méthode de traitement d'image et dispositif d'exécution de cette méthode
FR2708165A1 (fr) * 1993-07-22 1995-01-27 Philips Laboratoire Electroniq Procédé de traitement d'images numérisées en imagerie de rayons X pour détecter le bord d'une région masquée par un volet de champ.
AT1856U1 (de) * 1996-10-16 1997-12-29 Walter Fink Verfahren zur elektronischen unfalldatenaufnahme und vermessung von objekten in digitalisierten unfallbildern
EP1022680A1 (fr) * 1999-01-22 2000-07-26 INTERMEC SCANNER TECHNOLOGY CENTER Société Anonyme Procédé et dispositif de detection de segments de droites dans un flot de données numériques répresentatives d'une image, dans lequel sont identifiés les points contours de ladite image
WO2002023173A3 (fr) * 2000-09-15 2003-03-20 Infineon Technologies Corp Procede permettant de mesurer et de caracteriser des traits paralleles dans des images
US6538753B2 (en) 2001-05-22 2003-03-25 Nikon Precision, Inc. Method and apparatus for dimension measurement of a pattern formed by lithographic exposure tools
EP1031942A3 (fr) * 1999-02-22 2004-10-27 Keyence Corporation Méthode de détection de contours, système d'inspection et moyen d'enregistrement
US6956659B2 (en) 2001-05-22 2005-10-18 Nikon Precision Inc. Measurement of critical dimensions of etched features
US6974653B2 (en) 2002-04-19 2005-12-13 Nikon Precision Inc. Methods for critical dimension and focus mapping using critical dimension test marks
JP2013047874A (ja) * 2011-08-29 2013-03-07 Pfu Ltd 画像処理装置、画像処理方法、画像処理プログラム及び画像処理システム
CN117670845A (zh) * 2023-12-08 2024-03-08 北京长木谷医疗科技股份有限公司 一种基于x光医学图像的脊柱滑脱识别及评估方法、装置

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
J. ILLINGWORTH et al., "The Adaptive Hough Transform", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, Vol. PAMI-9, No. 5, September 1987, see pages 690-698. *
LI, HUNGWEN et al., "Fast Hough Transform: A Hierarchical Approach", COMPUTER VISION, GRAPHICS, AND IMAGE PROCESSING, 36, 1986, see pages 139-161. *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2708165A1 (fr) * 1993-07-22 1995-01-27 Philips Laboratoire Electroniq Procédé de traitement d'images numérisées en imagerie de rayons X pour détecter le bord d'une région masquée par un volet de champ.
EP0635804A1 (fr) * 1993-07-22 1995-01-25 Koninklijke Philips Electronics N.V. Méthode de traitement d'image et dispositif d'exécution de cette méthode
AT1856U1 (de) * 1996-10-16 1997-12-29 Walter Fink Verfahren zur elektronischen unfalldatenaufnahme und vermessung von objekten in digitalisierten unfallbildern
US7035466B2 (en) 1999-01-22 2006-04-25 Intermec Ip Corp. Process and device for detection of straight-line segments in a stream of digital data that are representative of an image in which the contour points of said image are identified
EP1022680A1 (fr) * 1999-01-22 2000-07-26 INTERMEC SCANNER TECHNOLOGY CENTER Société Anonyme Procédé et dispositif de detection de segments de droites dans un flot de données numériques répresentatives d'une image, dans lequel sont identifiés les points contours de ladite image
FR2788873A1 (fr) * 1999-01-22 2000-07-28 Intermec Scanner Technology Ce Procede et dispositif de detection de segments de droites dans un flot de donnees numeriques representatives d'une image, dans lequel sont identifies les points contours de ladite image
US6687403B1 (en) 1999-01-22 2004-02-03 Intermec Ip Corp. Process and device for detection of straight-line segments in a stream of digital data that are representative of an image in which the contour points of said image are identified
US7412097B2 (en) 1999-01-22 2008-08-12 Intermec Ip Corp. Process and device for detection of straight-line segments in a stream of digital data that are representative of an image in which the contour points of said image are identified
EP1031942A3 (fr) * 1999-02-22 2004-10-27 Keyence Corporation Méthode de détection de contours, système d'inspection et moyen d'enregistrement
WO2002023173A3 (fr) * 2000-09-15 2003-03-20 Infineon Technologies Corp Procede permettant de mesurer et de caracteriser des traits paralleles dans des images
US6538753B2 (en) 2001-05-22 2003-03-25 Nikon Precision, Inc. Method and apparatus for dimension measurement of a pattern formed by lithographic exposure tools
US6956659B2 (en) 2001-05-22 2005-10-18 Nikon Precision Inc. Measurement of critical dimensions of etched features
US6974653B2 (en) 2002-04-19 2005-12-13 Nikon Precision Inc. Methods for critical dimension and focus mapping using critical dimension test marks
JP2013047874A (ja) * 2011-08-29 2013-03-07 Pfu Ltd 画像処理装置、画像処理方法、画像処理プログラム及び画像処理システム
CN117670845A (zh) * 2023-12-08 2024-03-08 北京长木谷医疗科技股份有限公司 一种基于x光医学图像的脊柱滑脱识别及评估方法、装置

Similar Documents

Publication Publication Date Title
US5081689A (en) Apparatus and method for extracting edges and lines
CN114820620B (zh) 一种螺栓松动缺陷检测方法、系统及装置
US7155052B2 (en) Method for pattern inspection
US8139117B2 (en) Image quality analysis with test pattern
US7623683B2 (en) Combining multiple exposure images to increase dynamic range
Deb et al. Automatic detection and analysis of discontinuity geometry of rock mass from digital images
WO1992011609A1 (fr) Procede et appareil servant a mesurer automatiquement l'emplacement et la separation de bords lineaires
KR19980070585A (ko) 위치 검출 장치 및 방법
US7085432B2 (en) Edge detection using Hough transformation
CN117037132A (zh) 一种基于机器视觉的船舶水尺读数检测和识别方法
CN112036232A (zh) 一种图像表格结构识别方法、系统、终端以及存储介质
Lassen Automatic high‐precision measurements of the location and width of Kikuchi bands in electron backscatter diffraction patterns
Princen et al. A comparison of Hough transform methods
Chin et al. Skew detection in handwritten scripts
JP2000099727A (ja) 外観画像分類装置およびその方法
Ulrich et al. Empirical performance evaluation of object recognition methods
Ulrich et al. Performance evaluation of 2d object recognition techniques
US20030185432A1 (en) Method and system for image registration based on hierarchical object modeling
Eichel et al. Quantitative analysis of a moment-based edge operator
Thomas et al. The Hough transform for locating cell nuclei
Yeh et al. A rotation-invariant and non-referential approach for ball grid array (BGA) substrate conducting path inspection
JP3111434B2 (ja) 画像処理装置
McIvor et al. Simple surface segmentation
CN117495799A (zh) 一种由边缘成像引起的缺陷检测方法、装置及电子设备
CN119919920A (zh) 指针式仪表的数据读取方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU MC NL SE