
WO2020136697A1 - Defect inspection device - Google Patents

Defect inspection device

Info

Publication number
WO2020136697A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
photoelectric conversion
light
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2018/047448
Other languages
French (fr)
Japanese (ja)
Inventor
Eiji Arima
Toshifumi Honda
Yuta Urano
Shunichi Matsumoto
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi High Tech Corp
Original Assignee
Hitachi High Tech Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi High Tech Corp filed Critical Hitachi High Tech Corp
Priority to PCT/JP2018/047448 priority Critical patent/WO2020136697A1/en
Publication of WO2020136697A1 publication Critical patent/WO2020136697A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956Inspecting patterns on the surface of objects

Definitions

  • the present invention relates to a defect inspection device.
  • Defect inspection used in the manufacturing process of semiconductors, etc. requires detecting minute defects and measuring the dimensions of the detected defects with high accuracy.
  • non-destructive inspection is also required: the sample must not be degraded, and when the same sample is inspected repeatedly, substantially constant inspection results must be obtained regarding, for example, the number, position, size, and type of the detected defects.
  • it is required to inspect a large number of samples within a fixed time.
  • Patent Documents 1 and 2 describe defect inspection used in the manufacturing process of semiconductors and the like.
  • US Pat. No. 6,096,837 describes a configuration in which the full collection NA of a collection subsystem is split into different segments and the scattered light collected in the different segments is directed to separate detectors.
  • Patent Document 2 describes a configuration in which a large number of detection systems having smaller apertures are arranged with respect to the full focusing NA.
  • the size of the image formed on the sensor surface is greatly affected by the position and focal length of the lens array.
  • when the magnifications (sizes) of the divided images formed on the sensor surface differ, image blur occurs when the divided images are integrated, and the detection sensitivity decreases.
  • Patent Documents 1 and 2 mention neither this problem of image blur arising when the divided images are integrated nor a solution to it.
  • an object of the present invention is to prevent detection sensitivity from decreasing due to image blur in a defect inspection apparatus even when divided images are integrated.
  • a defect inspection apparatus according to the present invention includes: an illumination unit that irradiates a sample with light emitted from a light source; a detection unit that detects scattered light generated from the sample; a photoelectric conversion unit that converts the scattered light detected by the detection unit into an electric signal; and a signal processing unit that processes the electric signal converted by the photoelectric conversion unit to detect a defect in the sample, wherein the detection unit divides its aperture.
  • according to this defect inspection apparatus, it is possible to prevent deterioration of detection sensitivity due to image blur even when the divided images are integrated.
  • FIG. 1 is an overall schematic configuration diagram of a defect inspection apparatus of Example 1.
  • It is a diagram showing an example of the realized illumination intensity distribution shape.
  • FIG. 7 is a diagram showing a mechanism for observing a divided image of Example 2 with a two-dimensional camera.
  • FIG. 11 is a diagram showing an example of a GUI for calibrating the size of a divided image according to the second embodiment.
  • It is a diagram showing an example of an arrangement.
  • FIG. 9 is a diagram showing a block diagram of a control unit that calibrates the size and image position of a divided image acquired by the two-dimensional camera of the second embodiment.
  • the defect inspection apparatus includes an illumination unit 101, a detection unit 102, a photoelectric conversion unit 103, a stage 104, a signal processing unit 105, a control unit 53, a display unit 54, and an input unit 55.
  • the stage 104 is configured so that the sample W can be placed on it, and so that actuators can move the sample W in the direction perpendicular to its surface, rotate it within its plane, and translate it in directions parallel to its surface.
  • the illumination unit 101 includes a laser light source 2, an attenuator 3, an emitted light adjusting unit 4, a beam expander 5, a polarization control unit 6, and an illumination intensity distribution control unit 7 as appropriate.
  • the laser beam emitted from the laser light source 2 is adjusted to a desired beam intensity by the attenuator 3, to a desired beam position and traveling direction by the emission light adjusting unit 4, to a desired beam diameter by the beam expander 5, to a desired polarization state by the polarization control unit 6, and to a desired intensity distribution by the illumination intensity distribution control unit 7, and then illuminates the inspection target region of the sample W.
  • the incident angle of the illumination light with respect to the sample surface is determined by the position and angle of the reflection mirror of the emission light adjustment unit 4 arranged in the optical path of the illumination unit 101.
  • the incident angle of the illumination light is set to an angle suitable for detecting a minute defect.
  • the larger the illumination incident angle (that is, the smaller the illumination elevation angle, the angle between the sample surface and the illumination optical axis), the stronger the scattered light from minute foreign matter on the sample surface relative to the scattered light from minute irregularities of the sample surface (called haze), which acts as noise; since the haze weakens, a large incident angle is suitable for detecting minute defects. Therefore, when the scattered light from the minute irregularities of the sample surface hinders the detection of minute defects, the incident angle of the illumination light is preferably set to 75 degrees or more (an elevation angle of 15 degrees or less).
  • the incident angle of the illumination light is preferably set to 60 degrees or more and 75 degrees or less (the elevation angle is 15 degrees or more and 30 degrees or less).
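  • the angle bookkeeping above is simply complementary angles between the incidence angle (measured from the surface normal) and the elevation angle (measured from the surface). A minimal illustrative sketch, not part of the patent; the helper names are hypothetical:

```python
def elevation_deg(incidence_deg: float) -> float:
    """Elevation angle (surface to optical axis) is the complement of the
    incidence angle (surface normal to optical axis)."""
    return 90.0 - incidence_deg

def haze_limited_mode(incidence_deg: float) -> bool:
    """True when the geometry matches the haze-suppressing regime from the
    text: incidence >= 75 deg, i.e. elevation <= 15 deg."""
    return incidence_deg >= 75.0
```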
  • the polarization control by the polarization control unit 6 of the illumination unit 101 sets the illumination polarization to P-polarization, which increases the scattered light from defects on the sample surface compared with other polarizations.
  • when the scattered light from the minute irregularities on the sample surface hinders the detection of minute defects, setting the illumination polarization to S-polarization decreases the scattered light from those irregularities.
  • by changing the illumination optical path, illumination light can also be emitted from a direction substantially perpendicular to the sample surface (vertical illumination).
  • the illumination intensity distribution on the sample surface is controlled by the illumination intensity distribution control unit 7 in the same manner as the oblique incidence illumination.
  • a beam splitter is inserted at the same position as the mirror 21.
  • vertical illumination that is incident substantially perpendicularly to the sample surface is suitable.
  • the laser light source 2 oscillates an ultraviolet or vacuum-ultraviolet laser beam with a short wavelength (355 nm or less), which penetrates the sample poorly, in order to detect minute defects near the sample surface, and a high output of 2 W or more is used.
  • the outgoing beam diameter is about 1 mm.
  • a laser that oscillates a visible or infrared laser beam as a wavelength that easily penetrates into the sample is used.
  • the attenuator 3 appropriately includes a first polarizing plate, a half-wave plate rotatable about the optical axis of illumination light, and a second polarizing plate.
  • the light that has entered the attenuator 3 is converted into linearly polarized light by the first polarizing plate, its polarization direction is rotated to an arbitrary direction according to the slow-axis azimuth angle of the half-wave plate, and it then passes through the second polarizing plate. By controlling the azimuth angle of the half-wave plate, the light intensity is reduced by an arbitrary ratio.
  • the first polarizing plate is not always necessary.
  • as the attenuator 3, one in which the relationship between the input signal and the extinction ratio has been calibrated in advance is used.
  • as the attenuator 3, an ND filter having a gradated density distribution can also be used, or a plurality of ND filters having different densities can be switched.
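  • the half-wave-plate attenuator described above follows Malus's law: the plate rotates the polarization by twice its slow-axis azimuth angle, and the second polarizing plate transmits the squared cosine of that rotation. An illustrative sketch (the function name is hypothetical, not from the patent):

```python
import math

def attenuator_transmission(hwp_angle_deg: float) -> float:
    """Fraction of intensity passed by the second polarizing plate when the
    half-wave plate's slow axis is rotated by hwp_angle_deg from the input
    polarization: the plate rotates the polarization by twice that angle,
    and Malus's law gives cos^2 of the rotation."""
    rotation = math.radians(2.0 * hwp_angle_deg)
    return math.cos(rotation) ** 2
```

For example, a slow-axis azimuth of 22.5 degrees rotates the polarization by 45 degrees and passes half the intensity.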
  • the outgoing light adjusting unit 4 includes a plurality of reflecting mirrors.
  • a three-dimensional Cartesian coordinate system (XYZ coordinates) is tentatively defined, and it is assumed that the incident light on the reflecting mirror travels in the +X direction.
  • the first reflection mirror is installed so as to deflect the incident light in the +Y direction (incident/reflection in the XY plane), and the second reflection mirror deflects the light reflected by the first reflection mirror in the +Z direction. Installed (incident and reflection in YZ plane).
  • the position and traveling direction (angle) of the light emitted from the emission adjusting unit 4 are adjusted by parallel movement and tilt angle adjustment of each reflection mirror.
  • the first reflecting mirror's incident/reflecting surface (XY plane) and the second reflecting mirror's incident/reflecting surface (YZ plane) are arranged so as to be orthogonal to each other. Thereby, the position and angle adjustment in the XZ plane and the position and angle adjustment in the YZ plane of the light emitted from the emission adjustment unit 4 (traveling in the +Z direction) can be performed independently.
  • the beam expander 5 has two or more lens groups and has a function of expanding the diameter of the incident parallel light flux.
  • a Galileo type beam expander including a combination of a concave lens and a convex lens is used.
  • the beam expander 5 is installed on a translation stage having two or more axes, and its position can be adjusted so that its center coincides with a predetermined beam position. Further, a tilt angle adjusting function of the entire beam expander 5 is provided so that the optical axis of the beam expander 5 and a predetermined beam optical axis coincide with each other. By adjusting the distance between the lenses, it is possible to control the enlargement ratio of the luminous flux diameter (zoom mechanism).
  • the diameter of the light beam is expanded and collimation (quasi-parallel light conversion) is performed simultaneously by adjusting the lens interval.
  • the collimation of the light flux may be performed by installing a collimator lens upstream of the beam expander 5 independently of the beam expander 5.
  • the expansion factor of the beam diameter by the beam expander 5 is about 5 to 10 times, and the beam having a beam diameter of 1 mm emitted from the light source is expanded to about 5 mm to 10 mm.
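  • for an afocal Galilean expander of the kind described, the beam-diameter magnification is the ratio of the convex lens focal length to the magnitude of the concave lens focal length. A hedged sketch with hypothetical names and example focal lengths (the patent does not give specific focal lengths):

```python
def expanded_diameter(d_in_mm: float, f_concave_mm: float, f_convex_mm: float) -> float:
    """Afocal Galilean beam expander: the magnification is the ratio of the
    convex focal length to the (absolute) concave focal length, and the
    output diameter is the input diameter scaled by that magnification."""
    magnification = f_convex_mm / abs(f_concave_mm)
    return magnification * d_in_mm
```

With a 1 mm input beam, focal lengths of -20 mm and 160 mm give the 8x expansion that falls inside the 5x to 10x range quoted in the text.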
  • the polarization controller 6 is composed of a half-wave plate and a quarter-wave plate, and controls the polarization state of illumination light to an arbitrary polarization state.
  • the beam monitor 22 measures the states of the light incident on the beam expander 5 and the light incident on the illumination intensity distribution control unit 7.
  • FIGS. 2 to 6 are schematic diagrams showing the positional relationship between the illumination optical axis 120 guided to the sample surface from the illumination unit 101 and the illumination intensity distribution shape.
  • the configuration of the illumination unit 101 in FIGS. 2 to 6 shows a part of the configuration of the illumination unit 101, and the emission light adjustment unit 4, the mirror 21, the beam monitor 22 and the like are omitted.
  • Fig. 2 shows a schematic diagram of the cross section of the incident surface of grazing incidence illumination (the surface including the illumination optical axis and the sample surface normal).
  • the grazing incidence illumination is inclined with respect to the sample surface within the incidence plane.
  • the illumination unit 101 produces a substantially uniform illumination intensity distribution in the incident plane.
  • the length of the portion where the illumination intensity is uniform is about 100 ⁇ m to 4 mm in order to inspect a wide area per unit time.
  • Fig. 3 shows a schematic diagram of a cross section of a plane that includes the sample surface normal and is perpendicular to the incidence plane of the oblique incidence illumination.
  • the illumination intensity distribution on the sample surface is one in which the intensity at the periphery is weaker than at the center. More specifically, it is close to a Gaussian distribution reflecting the intensity distribution of the light incident on the illumination intensity distribution control unit 7, or to an intensity distribution given by a first-order Bessel function of the first kind or a sinc function, reflecting the aperture shape of the illumination intensity distribution control unit 7.
  • to reduce the haze generated from the sample surface, the length of the illumination intensity distribution in this plane is made shorter than the length of the portion of uniform illumination intensity within the incidence plane, and is about 2.5 μm to 20 μm.
  • the illumination intensity distribution controller 7 includes optical elements such as an aspherical lens, a diffractive optical element, a cylindrical lens array, and a light pipe described later. The optical element forming the illumination intensity distribution control unit 7 is installed perpendicularly to the illumination optical axis, as shown in FIGS.
  • the illumination intensity distribution control unit 7 includes an optical element that acts on the phase distribution and intensity distribution of incident light.
  • a diffractive optical element 71 (DOE: Diffractive Optical Element) is used as an optical element forming the illumination intensity distribution control unit 7 (FIG. 7).
  • the diffractive optical element 71 is formed by forming a fine undulation shape having a size equal to or less than the wavelength of light on the surface of a substrate made of a material that transmits incident light.
  • fused quartz is used for ultraviolet light.
  • a lithographic method is used for forming the fine relief shape.
  • the optical element provided in the illumination intensity distribution control unit 7 is given a translation adjusting mechanism with two or more axes and a rotation adjusting mechanism with two or more axes so that its position and angle relative to the optical axis of the incident light can be adjusted. Further, a focus adjusting mechanism that moves the element along the optical axis direction is provided.
  • an aspherical lens, a combination of a cylindrical lens array and a cylindrical lens, or a combination of a light pipe and an imaging lens may be used.
  • in another configuration, the illumination intensity distribution control unit 7 is formed of a plurality of lenses, including a spherical lens and a cylindrical lens, and the beam expander 5 forms an elliptical beam that is long in one direction.
  • a part or all of the spherical lens or the cylindrical lens included in the illumination intensity distribution control unit 7 is installed parallel to the sample surface, so that it is long in one direction on the sample surface and has a narrow width in the direction perpendicular thereto. An illumination intensity distribution is formed.
  • the variation of the illumination intensity distribution on the sample surface due to the variation of the state of the light entering the illumination intensity distribution control unit 7 is small, and the stability of the illumination intensity distribution is high. Further, compared with the case where a diffractive optical element, a microlens array, or the like is used for the illumination intensity distribution controller 7, the light transmittance is high and the efficiency is good.
  • the state of illumination light in the illumination unit 101 is measured by the beam monitor 22.
  • the beam monitor 22 measures and outputs the position and angle (traveling direction) of the illumination light that has passed through the emission light adjusting unit 4, or the position and the wavefront of the illumination light that enters the illumination intensity distribution control unit 7.
  • the position of the illumination light is measured as the position of the centroid of its light intensity, using an optical position sensor (PSD: Position Sensitive Detector) or an image sensor such as a CCD sensor or CMOS sensor.
  • the angle of the illumination light is measured by a similar optical position sensor or image sensor installed at a position farther from the light source than the position measuring means, or at the condensing position of a collimator lens.
  • the illumination light position and angle detected by the sensors are input to the control unit 53 and displayed on the display unit 54; when the position or angle deviates from a predetermined value, the emission light adjusting unit 4 is adjusted so as to return it to that value.
  • the wavefront measurement of the illumination light is performed to measure the parallelism of the light incident on the illumination intensity control unit 7.
  • for this adjustment, a spatial light phase modulator, a type of spatial light modulator (SLM: Spatial Light Modulator), can be used: by inserting an appropriate phase difference at each position of the beam cross-section, the wavefront is brought close to flat, that is, the illumination light is brought close to quasi-parallel light. By the above wavefront accuracy measuring and adjusting means, the wavefront accuracy (deviation from the predetermined wavefront, i.e. the design value or initial state) of the light incident on the illumination intensity distribution control unit 7 is suppressed to λ/10 rms or less.
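  • the λ/10 rms criterion above can be checked numerically. A sketch assuming wavefront deviations sampled across the beam cross-section in units of the wavelength (function names are hypothetical):

```python
import math

def wavefront_rms(phase_errors_waves):
    """RMS deviation of wavefront samples (in units of the wavelength),
    measured about the mean so a constant piston term does not count."""
    n = len(phase_errors_waves)
    mean = sum(phase_errors_waves) / n
    return math.sqrt(sum((p - mean) ** 2 for p in phase_errors_waves) / n)

def meets_spec(phase_errors_waves, limit_waves=0.1):
    """The text's criterion: wavefront error held to lambda/10 rms or less."""
    return wavefront_rms(phase_errors_waves) <= limit_waves
```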
  • the illumination intensity distribution monitor 24 measures the illumination intensity distribution on the sample surface adjusted by the illumination intensity distribution control unit 7. As shown in FIG. 1, even when vertical illumination is used, the illumination intensity distribution monitor 24 similarly measures the illumination intensity distribution on the sample surface adjusted by the illumination intensity distribution control unit 7.
  • the illumination intensity distribution monitor 24 forms an image of the sample surface on an image sensor such as a CCD sensor or a CMOS sensor through a lens and detects it as an image.
  • the image of the illumination intensity distribution detected by the illumination intensity distribution monitor 24 is processed by the control unit 53, which calculates the centroid position of the intensity, the maximum intensity, the maximum-intensity position, and the width and length of the illumination intensity distribution (the extent of the region whose intensity is at or above a predetermined value, or at or above a predetermined ratio of the maximum), and displays them on the display unit 54 together with the contour shape of the illumination intensity distribution, its cross-sectional waveform, and the like.
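  • the quantities computed by the control unit 53 (intensity centroid, peak, and the width of the region above a threshold fraction of the peak) can be sketched for a sampled 1-D cross-section of the distribution as follows (illustrative only; names are hypothetical):

```python
def profile_metrics(positions_um, intensities, frac=0.5):
    """Centroid, peak intensity, peak position, and the width of the region
    whose intensity is at least `frac` of the peak, for a sampled 1-D
    cross-section of the illumination intensity distribution."""
    total = sum(intensities)
    centroid = sum(x * i for x, i in zip(positions_um, intensities)) / total
    peak = max(intensities)
    peak_pos = positions_um[intensities.index(peak)]
    above = [x for x, i in zip(positions_um, intensities) if i >= frac * peak]
    width = max(above) - min(above)
    return centroid, peak, peak_pos, width
```

With `frac=0.5` the returned width is the familiar full width at half maximum of the sampled profile.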
  • the displacement of the height of the sample surface causes the displacement of the position of the illumination intensity distribution and the disturbance of the illumination intensity distribution due to defocusing.
  • the height of the sample surface is measured, and when it deviates, the deviation is corrected by the illumination intensity distribution control unit 7 or by adjusting the height of the stage 104 along the Z axis.
  • the illuminance distribution shape (illumination spot 20) formed on the sample surface by the illumination unit 101 and the sample scanning method will be described with reference to FIGS. 8 and 9.
  • the stage 104 includes a translation stage, a rotation stage, and a Z stage (not shown) for adjusting the height of the sample surface.
  • the illumination spot 20 has an illumination intensity distribution that is long in one direction as described above, the direction is S2, and the direction substantially orthogonal to S2 is S1.
  • the rotary motion of the rotary stage scans in the circumferential direction S1 of a circle about the rotation axis of the rotary stage, and the translational motion of the translation stage scans in the translational direction S2 of the translation stage.
  • by scanning in the translation direction S2 by a distance equal to or less than the longitudinal length of the illumination spot 20 per rotation, the illumination spot draws a spiral locus T on the sample W and scans the entire surface of the sample W.
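  • the spiral locus T can be sketched as a constant-pitch (Archimedean) spiral, assuming the translation stage advances by a fixed pitch per revolution of the rotation stage; this is a simplification of the actual stage control, and the names below are hypothetical:

```python
import math

def spiral_scan_points(pitch_mm: float, n_turns: int, samples_per_turn: int = 8):
    """Sample the spiral locus traced when the rotation stage spins in fixed
    angle steps while the translation stage advances by pitch_mm per
    revolution (pitch chosen at or below the spot's long-axis length so
    adjacent turns overlap)."""
    points = []
    for k in range(n_turns * samples_per_turn):
        theta = 2.0 * math.pi * k / samples_per_turn
        radius = pitch_mm * theta / (2.0 * math.pi)
        points.append((radius * math.cos(theta), radius * math.sin(theta)))
    return points
```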
  • a plurality of detecting units 102 are arranged so as to detect scattered light emitted from the illumination spot 20 in a plurality of directions. An example of the arrangement of the detection unit 102 with respect to the sample W and the illumination spot 20 will be described with reference to FIGS.
  • FIG. 10 shows a side view of the arrangement of the detection unit 102.
  • the angle formed by the detection direction of the detection unit 102 (the central direction of the detection aperture) with respect to the normal line of the sample W is defined as the detection zenith angle.
  • the detection unit 102 is configured by appropriately using a high angle detection unit 102h having a detected zenith angle of 45 degrees or less and a low angle detection unit 102l having a detected zenith angle of 45 degrees or more.
  • the high-angle detector 102h and the low-angle detector 102l each include a plurality of detectors so as to cover scattered light scattered in multiple directions at each detected zenith angle.
  • FIG. 11 shows a plan view of the arrangement of the low angle detection unit 102l.
  • the low-angle detection unit 102l appropriately includes a low-angle front detection unit 102lf, a low-angle side detection unit 102ls, a low-angle rear detection unit 102lb, and, at positions symmetric to these with respect to the illumination incidence plane, a low-angle front detection unit 102lf', a low-angle side detection unit 102ls', and a low-angle rear detection unit 102lb'.
  • the low-angle front detection unit 102lf is installed at a detection azimuth angle of 0 degrees or more and 60 degrees or less, the low-angle side detection unit 102ls at 60 degrees or more and 120 degrees or less, and the low-angle rear detection unit 102lb at 120 degrees or more and 180 degrees or less.
  • FIG. 12 shows a plan view of the arrangement of the high angle detection unit 102h.
  • the high-angle detection unit 102h appropriately includes a high-angle front detection unit 102hf, a high-angle side detection unit 102hs, a high-angle rear detection unit 102hb, and a high-angle side detection unit 102hs' at a position symmetric to 102hs with respect to the illumination incidence plane.
  • the high-angle front detection unit 102hf is installed at a detection azimuth angle of 0 degrees or more and 45 degrees or less, and the high-angle rear detection unit 102hb at 135 degrees or more and 180 degrees or less.
  • the case where there are four high angle detection units 102h and six low angle detection units 102l is shown here, but the number is not limited to this, and the number and position of the detection units may be changed as appropriate.
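  • the low-angle azimuth sectors listed above can be summarized in a small lookup. The sector boundaries in the text overlap at 60 and 120 degrees, so this hypothetical sketch resolves ties toward the front/side sector; angles beyond 180 degrees would use the mirrored units across the incidence plane:

```python
def low_angle_sector(azimuth_deg: float) -> str:
    """Map a detection azimuth (0 deg = illumination travel direction) to
    the low-angle sector names used in the text."""
    if 0.0 <= azimuth_deg <= 60.0:
        return "front (102lf)"
    if azimuth_deg <= 120.0:
        return "side (102ls)"
    if azimuth_deg <= 180.0:
        return "rear (102lb)"
    raise ValueError("mirror across the incidence plane for 180-360 deg")
```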
  • FIG. 13 shows an example of a specific configuration diagram of the detection unit 102 having the image formation unit 102-A1.
  • the scattered light generated from the illumination spot 20 is condensed by the objective lens 1021, and the polarization direction is controlled by the polarization control filter 1022.
  • the polarization control filter 1022 for example, a half-wave plate whose rotation angle can be controlled by a drive mechanism such as a motor is applied.
  • the detection NA of the objective lens 1021 is preferably 0.3 or more.
  • the lower end of the objective lens 1021 is cut away as necessary so that it does not interfere with the sample surface.
  • the imaging lens 1023 forms an image of the illumination spot 20 at the position of the aperture 1024.
  • the aperture 1024 is set so as to transmit only the light in the region, within the image formed of the illumination spot 20, that is detected by the photoelectric conversion unit 103.
  • the aperture 1024 passes only the central portion of the Gaussian distribution where the light intensity is strong in the S2 direction, and blocks a weak light intensity region at the beam end.
  • in the S1 direction, the aperture is set to about the same size as the image formed of the illumination spot 20, which suppresses disturbances such as scattering by the air through which the illumination passes.
  • a condenser lens 1025 is provided, which re-focuses the image formed at the aperture 1024.
  • the polarization beam splitter 1026 splits the light whose polarization direction has been converted by the polarization control filter 1022, according to the polarization direction.
  • the diffuser 1027 absorbs light in the polarization direction that is not used for detection by the photoelectric conversion unit 103.
  • the lens array 1028 forms on the photoelectric conversion unit 103 as many images of the illumination spot 20 as there are lenses in the array.
  • Each lens of the lens array 1028 is a cylindrical lens, and two or more cylindrical lenses are arranged in the curvature direction of the cylindrical lens in a plane perpendicular to the optical axis of the condenser lens 1025.
  • the combination of the half-wave plate 1022 and the polarization beam splitter 1026 causes the photoelectric conversion unit 103 to detect only the light of a specific polarization direction among the lights condensed by the objective lens 1021.
  • the polarization control filter 1022 may be a wire grid polarization plate having a transmittance of 80% or more, and only the light of a desired polarization direction can be extracted without using the polarization beam splitter 1026 and the diffuser 1027.
  • another configuration of the image forming unit 102-A1 of FIG. 13 is shown in FIG. 34A.
  • in FIG. 13, a plurality of images is formed on the photoelectric conversion unit 103 by a single lens array 1028, whereas in FIG. 34A an image is formed using three lens arrays 1028a, 1028b, and 1028c composed of cylindrical lenses.
  • the lens arrays 1028a and 1028b are lens arrays for magnification adjustment
  • the lens array 1028c is a lens array for image formation.
  • magnification here means an optical magnification, which can be obtained from the spread of the intensity distribution imaged on the photoelectric conversion units 1031 to 1034 and the peak position in FIG. 14B described later. Since the optical magnification varies depending on the focal length of the lens, the magnification can be set for each image formed on the photoelectric conversion unit 103 by the lens array 1028a and the lens array 1028b.
  • the lens array 1028a and the lens array 1028b are Kepler-type magnification adjusting mechanisms.
  • FIGS. 34B and 34C show the intensity profiles of an image of a sphere of minute size. It can be seen that the imaging positions of 10424a to 10424c and of 10426a to 10426c coincide.
  • the Keplerian type is used here, the present invention is not limited to this, and another adjusting mechanism such as a Galileo type magnification adjusting mechanism may be used.
  • the angle formed between a light ray incident on the objective lens 1021 and the optical axis is denoted θ1, and the angle formed between the sample W and an axis perpendicular to the optical axis is denoted θ2.
  • a ray at angle θ1 passes through the center of one of the lenses forming the lens array 1028, which is located at the position where the pupil of the objective lens 1021 is relayed.
  • the corresponding angle θ3 on the image side is represented by the following Formula 1.
  • the image formed at each of the positions 10421 to 10423 on the light-receiving surface of the photoelectric conversion unit 103 has a size proportional to sin θ3(i), calculated from the direction θ1(i) of the principal ray incident on the lens i of the lens array 1028 that forms the image.
  • the intensity profiles of the image of a sphere of minute size placed on the sample W are shown in FIGS. 31 to 33.
  • FIG. 31 shows profiles of images formed on 10421, FIG. 32 shows those formed on 10422, and FIG. 33 shows those formed on 10423.
  • 10421a to 10421c correspond to 1041a to 1041c, respectively.
  • 10422a to 10422c and 10423a to 10423c are intensity profiles of images corresponding to 1041a to 1041c. Since the intensity profiles shown in FIGS. 31 to 33 are formed by different lenses of the lens array 1028, θ1(i) differs among them; therefore sin θ3(i), a value proportional to the magnification, changes. As the numerical aperture of the detection unit 102 increases, the change in θ1 within one lens increases, and the change in magnification increases accordingly.
  • when the image formed in this way lands on the photoelectric conversion unit 103 described with reference to FIG. 16, each pixel group is connected to a signal line, for example 1035-a, and the pixels formed in the pixel blocks 1031 to 1034 have a constant pitch, so a change in magnification reduces the resolution. Therefore, the magnification of the individual cylindrical lenses 1028a1 to 1028aN and 1028b1 to 1028bN forming the lens arrays 1028a and 1028b shown in FIG. 34A is set to be inversely proportional to sin θ3(i). This makes it possible to correct the change in magnification.
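As a rough sketch of this correction, assuming hypothetical per-lens values of sin θ3(i) for three lenses of the array, setting each lens magnification inversely proportional to sin θ3(i) makes the sub-image sizes on the sensor equal:

```python
def corrected_magnifications(sin_theta3, target_size=0.25):
    """Set each cylindrical-lens magnification inversely proportional to
    sin(theta3(i)), so that the sub-image size, proportional to
    M(i) * sin(theta3(i)), is the same for every lens."""
    return [target_size / s for s in sin_theta3]

sin_theta3 = [0.20, 0.25, 0.32]            # hypothetical per-lens values
mags = corrected_magnifications(sin_theta3)
sizes = [m * s for m, s in zip(mags, sin_theta3)]  # equal after correction
```

After the correction, every element of `sizes` equals the target size, which is what prevents resolution loss on the constant-pitch pixel blocks.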
  • FIG. 14A shows a schematic view of the illumination spot 20 on the sample W. Further, FIG. 14B shows correspondence with image formation from the lens array 1028 to the photoelectric conversion unit 103.
  • the illumination spot 20 extends long in the S2 direction in FIG. 14A. W0 indicates a defect to be detected.
  • the objective lens 1021 is placed in a direction in which its optical axis is not orthogonal to the S2 direction.
  • the photoelectric conversion unit 103 divides this illumination spot into Wa to Wd and detects it. Although the number of divisions is four here, the number of divisions is not limited to this number, and the present invention can be embodied with an arbitrary number of divisions.
  • the scattered light from the defect W0 to be detected is condensed by the objective lens 1021 and guided to the photoelectric conversion unit 103.
  • the lens array 1028 is a cylindrical lens that forms an image only in one direction. Pixel blocks 1031, 1032, 1033, and 1034 corresponding to the number of lens arrays 1028 are formed in the photoelectric conversion unit 103.
  • the aperture 1024 shields a region where the amount of light is weak and which is not subjected to photoelectric conversion, so that the pixel blocks 1031 to 1034 can be formed close to each other.
  • the lens array 1028 is placed at the position where the pupil of the objective lens is relayed. Since an image is formed for each of the divided pupil regions, the image formed by the lens array 1028 has a narrowed aperture, and the depth of focus is expanded. As a result, it becomes possible to detect the image formation from the direction not orthogonal to S2.
  • the condenser lens 1025 has a large numerical aperture and is usually the same as the numerical aperture of the objective lens 1021.
  • a condenser lens with a large numerical aperture collects light scattered in various directions, which results in a shallow depth of focus.
  • since S2, the longitudinal direction of the illumination, and the optical axis of the objective lens 1021 are arranged so as not to intersect at right angles, the optical distance differs between the center and the edge of the visual field, and defocus occurs in the image formed on the photoelectric conversion unit 103.
  • the lens array 1028 is placed at the pupil position of the condenser lens 1025, in other words, at the relayed pupil position of the objective lens 1021, that is, at the rear focal position of the condenser lens 1025.
  • the condenser lens 1025 is set to have a size equivalent to the pupil diameter so that ideally all the light incident on the aperture diameter of the objective lens 1021 can be imaged.
  • at the position of the lens array 1028, light rays having similar directions of incidence on the condenser lens 1025 are distributed close together. Consequently, placing the lens array 1028 at this position is equivalent to reducing the numerical aperture, and the depth of focus can be increased. In this way, the aperture is divided so that the numerical aperture of each sub-image becomes small, the corresponding images are formed on the photoelectric conversion surface, and an image without defocus is formed, resolving minute defects.
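The effect of dividing the pupil can be illustrated with the common scalar approximation DOF ≈ λ/NA², which is an assumption here (the patent's own relations appear later as Formulas 8 to 11). The wavelength, numerical aperture, and division number below are hypothetical values:

```python
def depth_of_focus(wavelength_nm, na):
    # common scalar approximation: DOF ~ wavelength / NA^2
    return wavelength_nm / na ** 2

wavelength = 355.0   # hypothetical illumination wavelength, nm
na_full = 0.8        # hypothetical numerical aperture of the condenser lens
m = 4                # number of lens-array segments in one direction

dof_full = depth_of_focus(wavelength, na_full)
dof_split = depth_of_focus(wavelength, na_full / m)  # one sub-aperture
```

Under this approximation, dividing the pupil into m segments in one direction multiplies the per-segment depth of focus by m², which is why the tilted field can stay in focus.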
  • Reference numerals 1031a to 1031d are pixel groups formed in the pixel block 1031 and receive the images of light from the sections Wa to Wd of the illumination spot, respectively.
  • Reference numerals 1031a1 to 1031aN are pixels belonging to 1031a, and each pixel outputs a predetermined current when photons are incident. The outputs of the pixels belonging to the same pixel group are electrically connected, and one pixel group outputs the sum of the current outputs of the pixels belonging to the pixel group.
  • the pixel blocks 1032 to 1034 also produce outputs corresponding to Wa to Wd.
  • outputs corresponding to the same section from different pixel groups are electrically connected, and the photoelectric conversion unit 103 produces outputs corresponding to the number of photons detected from each of the sections Wa to Wd.
  • the detection system of FIG. 13 is arranged so that the long axis direction of the image formed by the illumination spot 20 in the photoelectric conversion unit 103 and the direction of S2′ match.
  • S1 and S2 are defined as shown in FIG. 8
  • a vector in the length direction of the illumination spot is expressed as in Equation 2.
  • Equation 3 (See FIG. 15).
  • the two-dimensional plane excluding the optical axis of the objective lens 1021 is divided into two, a vector having a component in the Z direction and a vector not having it (see Formulas 5 and 6).
  • S2' in FIG. 13 is set in the direction rotated from the vector having no Z-direction component represented by Formula 6 by the angle represented by Formula 7.
  • S1′ is set so as to be orthogonal to this.
  • the lens array 1028 and the photoelectric conversion unit 103 are arranged.
  • the difference ⁇ d in the optical distance between the visual field center and the visual field end is expressed by the following expression 8.
  • the depth of focus DOF of the image of each lens array 1028 is expressed by the following equation 9.
  • the resolvable interval in the S2 direction is expressed by the following formula 10 based on the size of the Airy disk.
  • M is set so as to satisfy the following expression 11.
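Since Formulas 8 to 11 themselves are not reproduced in this text, the following is only an illustrative sketch of the selection logic: assuming the optical path difference Δd grows with the field length and the tilt angle, and the per-segment depth of focus grows with the square of the division number M, the smallest M whose depth of focus covers Δd is chosen. All numerical values and both formula forms are assumptions, not the patent's Formulas 8 and 9:

```python
import math

def min_divisions(wavelength_um, na, field_len_um, tilt_rad, m_max=16):
    """Smallest division number M whose per-segment depth of focus covers
    the optical-path difference between field center and field edge."""
    # assumed stand-in for Formula 8: path difference over half the field
    delta_d = (field_len_um / 2.0) * math.sin(tilt_rad)
    for m in range(1, m_max + 1):
        # assumed stand-in for Formula 9: DOF grows as (M / NA)^2
        dof = wavelength_um * (m / na) ** 2
        if dof >= delta_d:
            return m
    return m_max

m = min_divisions(wavelength_um=0.355, na=0.8, field_len_um=200.0,
                  tilt_rad=math.radians(15))
```

With these hypothetical numbers the loop returns the first M whose depth of focus exceeds the tilt-induced defocus across the field.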
  • next, the internal circuit of the photoelectric conversion unit 103 will be described. With reference to FIG. 14, a photoelectric conversion unit producing outputs for the four sections Wa to Wd was described; in FIG. 16, an example in which this is expanded to eight sections will be described.
  • Eight pixel groups are formed in each of the pixel blocks 1031 to 1034.
  • pixel groups 1031a to 1031h are formed in the pixel block 1031, and pixel groups are similarly formed in each of the pixel blocks 1032 to 1034.
  • Reference numeral 1031a5 denotes the fifth pixel of 1031a, and an avalanche photodiode operating in the Geiger mode is connected to the signal line 1035-1a via the quenching resistor 1031a5q.
  • all the pixels belonging to the pixel group 1031a are connected to the signal line 1035-1a, and when photons are incident on the pixel, a current flows through the signal line 1035-1a.
  • the pixel of the pixel group 1032a is connected to the signal line 1035-2a.
  • all the pixel groups are provided with the signal lines to which the pixels belonging to the pixel group are electrically connected.
  • since the pixel groups 1031a, 1032a, ... 1034a detect scattered light from the same position on the sample W, their signal lines are connected to the signal line 1035-a by the connections 1036-1a to 1036-4a. This signal is taken out through the pad 1036-a and transmitted to the signal processing unit 105.
  • the pixels belonging to 1031b to 1034b are connected to the signal line 1035-b, connected by the pad 1036-b, and transmitted to the signal processing unit 105.
  • The equivalent circuit of FIG. 16 is shown in the next figure.
  • the N pixels 1031a1, 1031a2,..., 1031aN belonging to the pixel group 1031a in the pixel block 1031 are avalanche photodiodes and quenching resistors connected thereto.
  • the reverse voltage VR is applied to all the avalanche photodiodes formed in the photoelectric conversion unit 103, so that they operate in the Geiger mode.
  • when a photon is incident, a current flows through the avalanche photodiode, but the reverse bias voltage is lowered by the paired quenching resistor and the current is cut off again. In this way, a constant current pulse flows each time a photon is incident.
  • the N pixels 1034a1 to 1034aN belonging to the pixel group 1034a in the pixel block 1034 are also Geiger mode avalanche photodiodes and a quenching resistor coupled thereto. All the pixels belonging to the pixel groups 1031a and 1034a correspond to the reflected or scattered light from the region Wa of the sample W. All of these signals are electrically coupled and connected to the current/voltage converter 103a. The current-voltage converter 103a outputs the signal 500-a converted into a voltage.
  • the pixels belonging to the pixel group 1031b of the pixel block 1031, 1031b1 to 1031bN, and the pixels belonging to the pixel group 1034b of the pixel block 1034, 1034b1 to 1034bN correspond to the light from the sample surface Wb, and these outputs Are all electrically coupled and connected to the current-voltage converter 103b.
  • 103b outputs the voltage signal 500-b. In this way, signals corresponding to all the areas obtained by dividing the illumination spot 20 are output.
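The behavior of this equivalent circuit can be modeled minimally as follows: each Geiger-mode pixel contributes one fixed current pulse per detected photon, the wired outputs of a pixel group sum those pulses, and the groups viewing the same region (such as 1031a and 1034a for region Wa) are electrically coupled before the current-to-voltage converter. The unit current and transimpedance values are hypothetical:

```python
def group_output_current(photon_hits, unit_current_ua=1.0):
    """Each Geiger-mode pixel emits one fixed current pulse per detected
    photon; the wired outputs of a pixel group sum these pulses."""
    return sum(photon_hits) * unit_current_ua

def region_voltage_mv(photon_hits_per_group, transimpedance_kohm=10.0):
    """Pixel groups viewing the same region are electrically coupled
    before the current-to-voltage converter (e.g. 103a)."""
    total_ua = sum(group_output_current(g) for g in photon_hits_per_group)
    return total_ua * transimpedance_kohm   # uA * kOhm = mV

group_1031a = [1, 0, 2, 1]   # photons detected per pixel (hypothetical)
group_1034a = [0, 1, 1, 0]
v_wa = region_voltage_mv([group_1031a, group_1034a])   # signal 500-a analog
```

The output voltage is thus proportional to the total photon count from region Wa, regardless of which pixel block detected the photons.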
  • FIG. 18 shows the data processing unit 105 when the illumination spot 20 is divided into the sections Wa to Wh.
  • the block 105-lf is a block for processing the signals 500a-lf to 500h-lf obtained by photoelectrically converting the light detected by the low-angle front detector 102-lf.
  • the block 105-hb is a block for processing the signals 500a-hb to 500h-hb obtained by photoelectrically converting the light detected by the high-angle rear detection unit 102-hb.
  • a corresponding block for processing the output signal is likewise provided for each of the other detection units.
  • the outputs of the high-frequency pass filters 1051a to 1051h are accumulated in the signal synthesizing unit 1053 over a plurality of rotations of the rotary stage, and the signals obtained at the same position on the sample W are added together and output as an array stream signal 1055-lf.
  • the signal combining unit 1054 outputs an array stream signal 1056-lf obtained by adding together signals acquired at the same position and combining them.
  • the block 105-hb performs the same calculation as the block 105-lf, and outputs the array stream signal 1055-hb synthesized from the outputs of the high-frequency pass filters and the array stream signal 1056-hb synthesized from the outputs of the low-frequency pass filters 1052a to 1052h.
  • the defect detection unit 1057 performs threshold processing after linearly adding the signals output from the plurality of photoelectric conversion units and subjected to the high frequency pass filter.
  • the low frequency signal integration unit 1058 integrates the low frequency pass filtered signals. The output of the low frequency signal integration unit 1058 is input to the defect detection unit 1057 and is used when determining the threshold value. It is estimated that the noise typically increases in proportion to the square root of the output of the low frequency signal integration unit 1058.
  • therefore, a threshold proportional to the square root of the output of the low-frequency signal integration unit 1058 is set, and a signal in the defect detection unit 1057 that exceeds this threshold is extracted as a defect.
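A minimal sketch of this thresholding, with a hypothetical proportionality constant k, following the stated assumption that noise grows with the square root of the low-frequency (background) level:

```python
import math

def detect_defects(high_freq, low_freq, k=5.0):
    """Threshold each sample at k * sqrt(low-frequency signal); samples of
    the high-frequency signal that exceed it are extracted as defects."""
    defects = []
    for i, (hf, lf) in enumerate(zip(high_freq, low_freq)):
        threshold = k * math.sqrt(max(lf, 0.0))
        if hf > threshold:
            defects.append((i, hf))
    return defects

hf = [1.0, 2.0, 40.0, 3.0]        # hypothetical high-pass-filtered signal
lf = [16.0, 16.0, 16.0, 16.0]     # hypothetical low-pass-filtered background
hits = detect_defects(hf, lf)     # threshold = 5 * sqrt(16) = 20
```

Here only the sample at index 2 exceeds the background-dependent threshold and is reported as a defect.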
  • the defect detected by the defect detection unit 1057 is output to the control unit 53 together with the signal strength and the detection coordinate in W.
  • the signal intensity detected by the low-frequency signal integration unit 1058 is also transmitted to the control unit 53 as roughness information of the sample surface and output to the display unit 54 or the like for the user who operates the apparatus.
  • the size of the image formed on the sensor surface is greatly affected by the position and focal length of the lens array 1028.
  • when the magnification of each of the divided images formed on the sensor surface differs, that is, when the spread of the intensity distribution imaged on the pixel blocks 1031 to 1034 in FIG. 16 differs or when the position of the imaged intensity distribution differs, image blurring occurs when the images are integrated and the detection sensitivity decreases.
  • the control unit 53 of FIG. 1 has a magnification calculation unit 532 that calculates the magnification of the image formed on the photoelectric conversion unit 103, and a calculation processing unit 533 that obtains, based on the magnification calculated by the magnification calculation unit 532, a control amount for changing the image formation state of the image.
  • the image formation state control unit 10212 changes the image formation state of the image formed on the photoelectric conversion unit 103 based on the control amount obtained by the calculation processing unit 533.
  • An example of the image forming unit 102-A1 is shown in FIG. 19A.
  • the aperture 1029 is smaller than the lens array 1028 to be installed, and is made of a material such as metal that can block a part of the light incident on the lens array 1028.
  • in FIG. 19A, light is incident only on the uppermost lens of the lens array, and the light directed to the other lenses is blocked by the aperture.
  • 19B and 19C show examples of apertures.
  • in FIG. 19B, there is an outer frame 1029a, and the metal plate 1029-1 can be moved inside the outer frame 1029a by an electrically controlled motor. By moving the metal plate 1029-1 in the arrow direction as shown in FIG. 19B, the images formed by the individual lenses of the lens array 1028 can be observed independently.
  • the aperture 1029 enters the optical path at the time of adjustment, and the aperture 1029 is completely deviated from the optical path at the time of inspection to detect all the divided images.
  • FIG. 19C shows a method in which the metal plate 1029-2 slides in the direction of the arrow from the side in the outer frame 1029b that is about twice the size of the lens array 1028.
  • the metal plate 1029-2 is slid from the side to block part of the light incident on the lens array 1028.
  • the image formation of each lens of the lens array 1028 can be observed independently.
  • the image forming unit 102-A1 has an image selection mechanism that selects a part of images from a plurality of images formed by dividing the aperture.
  • the magnification calculator 532 calculates the magnification of a part of the images selected by the image selection mechanism.
  • the calculation processing unit 533 obtains a control amount for changing the image formation state of some images based on the magnification of some images calculated by the magnification calculation unit 532.
  • the image selection mechanism is configured by an aperture 1029 that selects a part of the plurality of images by blocking a part of the light that is incident on the front side of the photoelectric conversion unit 103.
  • a changeover switch 1037 is attached to the photoelectric conversion unit 103 as shown in FIG.
  • the operator can electrically switch each of the changeover switches 1037 ON and OFF from a GUI described later, and only a part of each sensor is detected.
  • the signal of the sensor 1031 is detected, but the signals of the sensor 1032, the sensor 1033, and the sensor 1034 are not detected.
  • unlike the aperture 1029, the mechanism of FIG. 20 is not affected by light leaking past the aperture and is therefore highly accurate. Furthermore, since the selection can be switched electrically, which is faster than mechanically moving the aperture 1029, the time required for measurement can be shortened.
  • the image selection mechanism is configured by a changeover switch 1037 that electrically selects ON/OFF to select the partial image from the plurality of images.
  • FIG. 21 shows a mechanism for controlling the external atmospheric pressure of the lens array 1028.
  • the lens array 1028 is inserted into the sealed space 10210-a.
  • the surface through which light enters and exits is made of synthetic quartz or the like so that light is transmitted.
  • An atmospheric pressure sensor 10210-b is attached inside the closed space to measure the atmospheric pressure. While referring to the measured atmospheric pressure data in the signal processing unit 105, the atmospheric pressure in the closed space is controlled using the control box 10210-c.
  • since the focal length of a lens depends on the ambient atmospheric pressure, this mechanism constitutes the image formation state control unit 10212 that changes the image state on the sensor surface.
  • the imaging unit 102-A1 has an atmospheric pressure adjusting mechanism that controls the atmospheric pressure of the closed space (space 10210-a) including the lens array 1028. Then, the imaging state control unit 10212 changes the imaging state by controlling the atmospheric pressure of the closed space (space 10210-a) by the atmospheric pressure adjusting mechanism.
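The dependence of focal length on ambient pressure can be sketched with the lensmaker relation, where the lens power in air scales with (n_glass/n_air − 1) and the refractive index of air is approximated as linear in pressure. The glass index, pressure coefficient, and nominal focal length below are all assumptions:

```python
def focal_length_mm(pressure_pa, f0_mm=100.0, n_glass=1.46,
                    kappa=2.7e-9, p0_pa=101325.0):
    """Rough model: n_air = 1 + kappa * P (kappa assumed), and the lens
    power in air scales with (n_glass / n_air - 1); f0_mm is the focal
    length at the reference pressure p0_pa."""
    def power(p_pa):
        return n_glass / (1.0 + kappa * p_pa) - 1.0
    return f0_mm * power(p0_pa) / power(pressure_pa)

f_ref = focal_length_mm(101325.0)   # reference pressure
f_low = focal_length_mm(95000.0)    # lower ambient pressure
```

Even this rough model shows a measurable focal shift over ordinary barometric swings, which is why regulating the pressure of the sealed space around the lens array stabilizes the image on the sensor.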
  • micrometers 10211a, 10211b, 10211c are attached to the lens arrays 1028a, 1028b, 1028c, respectively.
  • the respective positions of the lens arrays 1028a, 1028b, 1028c are changed in the optical axis direction. This constitutes the image formation state control unit 10212 that changes the image state of the sensor surface.
  • the image forming unit 102-A1 includes the plurality of lens arrays 1028a, 1028b, and 1028c. Then, the image formation state control unit 10212 moves at least one of the plurality of lens arrays 1028a, 1028b, 1028c in the optical axis direction of the detection unit 102 to change the image formation state.
  • the image forming unit 102-A1 has micrometers 10211a, 10211b, 10211c arranged in each of the plurality of lens arrays 1028a, 1028b, 1028c.
  • the imaging state control unit 10212 moves at least one of the plurality of lens arrays 1028a, 1028b, 1028c in the optical axis direction by moving the micrometers 10211a, 10211b, 10211c to change the imaging state.
  • FIG. 23 shows another embodiment of FIGS. 21 and 22.
  • the micrometer 10211d is moved to move the photoelectric conversion unit 103 in the optical axis direction. This constitutes the image formation state control unit 10212 that changes the image state of the sensor surface.
  • the image formation state control unit 10212 changes the image formation state by moving the photoelectric conversion unit 103 in the optical axis direction of the detection unit 102.
  • the image forming unit 102-A1 includes a micrometer 10211d arranged in the photoelectric conversion unit 103. Then, the image formation state control unit 10212 moves the micrometer 10211d to move the photoelectric conversion unit 103 in the optical axis direction to change the image formation state.
  • the image formation state control unit 10212 may change the image formation state by moving the sample W in the direction perpendicular to its surface.
  • 24A to 24C show a GUI for observing a selected part of the divided image displayed on the display unit 54. It is an example in which a light beam divided into four is imaged on four sensors.
  • the image formation state of the sensor 1 is shown in the observation result 541-1.
  • the sensor shown in the observation result 541-1 is selected by the selection button 542-1. This indicates that the sensor 1 displayed in gray is selected.
  • the observation sensor is switched by the mechanism shown in FIGS. 19A to 19C or FIG. 20, and the image formation state of the sensor 2 is shown in the observation result 541-2. A part of the divided images is observed in this way, and the image of each sensor is stored in the memory 531 in the control unit 53.
  • the monitor 54-3 displays an integrated image of the images acquired by the sensors. The integrated measured size is displayed larger than the calibration value, and it can be seen that the integrated image is blurred.
  • the magnification is obtained from the image size on each sensor surface by the magnification calculator 532 in the controller 53 shown in FIG. 25, and the deviation amount from the calibration value specified by the operator is calculated by the calculation processing unit 533.
  • the image on the sensor surface is changed by the image formation state control unit 10212 in any of FIGS. 21, 22, and 23, and the image on each sensor surface is detected again.
  • 24D and 24E show the observation results of the images formed on the sensor 1 and the sensor 2 after changing the image on the sensor surface.
  • the size of the integrated image of the sensor 1, the sensor 2, the sensor 3, and the sensor 4 is approximately equal to the calibration value.
  • the magnification calculation unit 532 obtains the magnification from the size of each image, and if the deviation amount from the designated calibration value is smaller than the allowable value, the wafer inspection is started.
  • one image selected by the image selection mechanism is displayed on the display unit 54 of FIG. 1 (see FIGS. 24A, 24B, 24D, and 24E).
  • the display unit 54 also displays an integrated image of all the images selected by the image selection mechanism (see FIGS. 24C and 24F).
  • FIG. 26 shows a flowchart for starting measurement with equal magnifications for the divided images.
  • the size of the detected image is compared with the reference size (S262). As a result of the comparison, when the difference from the reference value is smaller than the allowable value (S263), the measurement is started (S265).
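The flow of steps S262, S263, and S265 can be sketched as a simple closed loop; the linear adjustment model below is hypothetical:

```python
def calibrate(measure_size, adjust, reference, tolerance, max_iter=20):
    """Measure the divided-image size and compare it with the reference
    (S262); if the difference is within the allowable value (S263), start
    measurement (S265); otherwise change the imaging state and retry."""
    for _ in range(max_iter):
        size = measure_size()
        if abs(size - reference) < tolerance:
            return True          # measurement may start
        adjust(size - reference) # change the image formation state
    return False

# toy imaging model: the adjustment shifts the image size linearly
state = {"size": 12.0}

def adjust(error):
    state["size"] -= 0.5 * error   # hypothetical actuator response

ok = calibrate(lambda: state["size"], adjust, reference=10.0, tolerance=0.05)
```

Each pass halves the size error in this toy model, so the loop converges to within the allowable value after a few iterations and measurement begins.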
  • next, the defect inspection apparatus of the second embodiment will be described. Since the basic structure of the defect inspection apparatus is the same as that of the first embodiment, its description is omitted.
  • FIG. 27 shows an example of the image forming unit 102-A1.
  • a polarization beam splitter 10213 is inserted as an optical path branching element between the lens array 1028 and the photoelectric conversion unit 103-1, and a two-dimensional camera 103-2 is arranged at a position conjugate with the photoelectric conversion unit 103-1.
  • although the polarization beam splitter 10213 is used in this embodiment, a removable mirror 10214 for splitting the light as shown in FIGS. 35A and 35B can also be used.
  • a CMOS camera or a CCD camera is used as the two-dimensional camera.
  • the pixel size of the two-dimensional camera 103-2 is smaller than the size of the image, and the size of the light-receiving surface of the two-dimensional camera 103-2 is a size that allows observation of all divided images.
  • the image formed at the position of the photoelectric conversion unit 103-1 can be observed with high resolution, and the position and size of the image can be measured with high accuracy.
  • the detected divided image 544 is displayed on the two-dimensional camera image display unit 543 in the display unit 54 from the two-dimensional camera 103-2 via the control unit 53.
  • the image state can be changed while observing the image on the sensor surface. Further, the magnification and the imaging position of each divided image, and their deviation from the ideal state, can be obtained.
  • the image forming unit 102-A1 has the polarization beam splitter 10213 that splits off a part of the light incident on the front side of the photoelectric conversion unit 103-1, the two-dimensional camera 103-2 on which the light split by the polarization beam splitter 10213 is incident, and a two-dimensional camera image display unit 543 that displays at least one of the plurality of images captured by the two-dimensional camera 103-2.
  • 28A to 28F show GUIs displayed on the display unit 54 by which an operator observes an image state from a two-dimensional camera image. This is an example in which an image divided into four is detected by the two-dimensional camera 103-2.
  • the monitor 54-7 and the monitor 54-8 display line profiles along the lines 546-1 and 546-2 across the divided image 544 in the two-dimensional camera image display unit 543.
  • the calibration values are displayed on the observation results 545-1 and 545-2.
  • the magnification calculation unit 532 and the image position calculation unit 534 provided in the control unit 53 shown in FIG. 36 can measure the size and position of each divided image and can measure the difference from the calibration value.
  • the divided image integration processing unit 535 provided in the control unit 53 shown in FIG. 36 calculates the integrated image of the individual images and displays the line profile of the integrated image, so that the difference in image size can be seen.
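As an illustration of how the size and position of a divided image might be derived from such a line profile, the centroid and RMS width of a 1-D profile can serve as stand-ins for the quantities computed by the magnification calculation unit 532 and the image position calculation unit 534; the profile values below are hypothetical:

```python
def profile_metrics(profile):
    """Centroid (image position) and RMS width (image size) of a 1-D
    line profile."""
    total = sum(profile)
    centroid = sum(i * v for i, v in enumerate(profile)) / total
    var = sum(v * (i - centroid) ** 2 for i, v in enumerate(profile)) / total
    return centroid, var ** 0.5

# hypothetical line profile across one divided image (e.g. line 546-1)
profile = [0, 1, 4, 9, 4, 1, 0]
pos, width = profile_metrics(profile)
```

Comparing `pos` and `width` across the divided images, and against the calibration value, is enough to quantify the deviation that triggers a readjustment.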
  • by pressing the calibration button, the amount of deviation from the calibration value specified by the operator is measured. Then, as in the first embodiment, when the deviation amount is larger than the allowable value, the image on the sensor surface is changed and the image on each sensor surface is detected again. If the deviation amount is smaller than the allowable value, the wafer inspection is started (see FIG. 26).
  • 28D to 28F show the observation results of the image state of the sensor surface after the image change.
  • the sizes and positions of the divided images can be made equal to each other, blurring of the integrated images can be prevented, and deterioration of detection sensitivity can be prevented.


Abstract

The present invention comprises: an image forming unit that forms a plurality of images, formed by dividing an aperture, on a photoelectric conversion unit at a magnification determined for each image; and a signal processing unit that synthesizes the plurality of images formed on the photoelectric conversion unit to detect defects in a sample.

Description

Defect inspection device

The present invention relates to a defect inspection device.

In manufacturing lines for semiconductor substrates, thin-film substrates, and the like, defects present on the surfaces of the substrates are inspected in order to maintain and improve product yield.

Defect inspection used in manufacturing processes for semiconductors and the like is required to detect minute defects and to measure the dimensions of the detected defects with high accuracy. It is also required to inspect the sample non-destructively, for example without altering the sample, and, when the same sample is inspected again, to yield substantially constant inspection results regarding, for example, the number, positions, dimensions, and types of the detected defects. Furthermore, it is required to inspect a large number of samples within a fixed time.

Patent Documents 1 and 2 describe defect inspection used in manufacturing processes for semiconductors and the like.

Specifically, Patent Document 1 describes a configuration in which the full collection NA of a collection subsystem is divided into different segments and the scattered light collected in the different segments is directed to separate detectors.

Patent Document 2 describes a configuration in which a large number of detection systems each having an aperture smaller than the full collection NA are arranged.

Patent Document 1: International Publication WO2012/082501. Patent Document 2: JP 2013-231631 A.

Incidentally, the size of the image formed on the sensor surface is greatly affected by the position and focal length of the lens array. When the magnifications (sizes) of the divided images formed on the sensor surface differ, image blur occurs when the divided images are integrated, and the detection sensitivity decreases.

Patent Documents 1 and 2 mention neither the problem that image blur occurs when the divided images are integrated nor a means of solving it.

An object of the present invention is to prevent, in a defect inspection apparatus, image blur from occurring and the detection sensitivity from decreasing even when the divided images are integrated.

A defect inspection apparatus according to one aspect of the present invention includes: an illumination unit that irradiates a sample with light emitted from a light source; a detection unit that detects scattered light generated from the sample; a photoelectric conversion unit that converts the scattered light detected by the detection unit into an electric signal; and a signal processing unit that processes the electric signal converted by the photoelectric conversion unit to detect a defect in the sample, wherein the detection unit has an image forming unit that forms a plurality of images, formed by dividing an aperture, on the photoelectric conversion unit at a magnification determined for each image, and the signal processing unit synthesizes the plurality of images formed on the photoelectric conversion unit to detect the defect in the sample.

According to one aspect of the present invention, in a defect inspection apparatus, it is possible to prevent image blur from occurring and the detection sensitivity from decreasing even when the divided images are integrated.

FIG. 1 is an overall schematic configuration diagram of the defect inspection apparatus of Example 1.
Diagrams showing examples of the illumination intensity distribution shape realized by the illumination unit, including the illumination unit according to the present invention.
A diagram showing an example of an optical element provided in the illumination intensity distribution control unit.
A diagram showing an example of the illumination distribution shape on the sample surface and the scanning direction.
A diagram showing an example of the locus of the illumination spot during scanning.
A side view of the arrangement and detection directions of the detection units.
A top view of the arrangement and detection directions of the low-angle detection units.
A top view of the arrangement and detection directions of the high-angle detection units.
A diagram showing an example of the configuration of the detection unit.
Diagrams showing examples of the configuration of the imaging optics leading to the photoelectric conversion unit.
Diagrams showing the coordinate system of the detection unit.
A diagram showing an example of the photoelectric conversion unit.
A diagram showing an example of an equivalent circuit of the components of the photoelectric conversion unit.
A diagram showing an example of a block diagram of the data processing unit.
Diagrams showing examples of a mechanism for detecting a part of a divided image.
Diagrams showing examples of a mechanism for changing the state of a detected image.
Diagrams showing examples of a GUI for calibrating the size of a divided image.
A block diagram of the control unit that calibrates the size of a divided image.
A flowchart for calibrating the size of a divided image.
A diagram showing a mechanism for observing a divided image of Example 2 with a two-dimensional camera.
Diagrams showing examples of a GUI for calibrating the size of a divided image in Example 2.
A diagram showing an example of the arrangement of a lens array.
A diagram showing an example of the configuration of the detection unit.
Diagrams showing the intensity profile of an image formed by the detection unit.
A diagram showing a first example of the configuration of the detection unit.
Diagrams showing the intensity profile of an image formed by the detection unit.
Diagrams showing a mechanism for observing a divided image of Example 2 with a two-dimensional camera.
A block diagram of the control unit that calibrates the size and image position of a divided image acquired with the two-dimensional camera of Example 2.

 Hereinafter, embodiments will be described with reference to the drawings.

 The configuration of the defect inspection apparatus of Example 1 will be described with reference to FIG. 1.

 The defect inspection apparatus of Example 1 includes an illumination unit 101, a detection unit 102, a photoelectric conversion unit 103, a stage 104, a signal processing unit 105, a control unit 53, a display unit 54, and an input unit 55. The stage 104 holds the sample W and is driven by actuators so that it can move the sample W in the direction perpendicular to its surface, rotate it within its surface plane, and translate it in directions parallel to its surface.

 The illumination unit 101 includes, as appropriate, a laser light source 2, an attenuator 3, an emitted-light adjusting unit 4, a beam expander 5, a polarization control unit 6, and an illumination intensity distribution control unit 7. The laser beam emitted from the laser light source 2 is adjusted to a desired intensity by the attenuator 3, to a desired beam position and traveling direction by the emitted-light adjusting unit 4, to a desired beam diameter by the beam expander 5, to a desired polarization state by the polarization control unit 6, and to a desired intensity distribution by the illumination intensity distribution control unit 7, and then illuminates the inspection target region of the sample W.

 The incident angle of the illumination light with respect to the sample surface is determined by the positions and angles of the reflecting mirrors of the emitted-light adjusting unit 4 arranged in the optical path of the illumination unit 101, and is set to an angle suitable for detecting minute defects. The larger the incident angle, that is, the smaller the illumination elevation angle (the angle between the sample surface and the illumination optical axis), the weaker the scattered light (called haze) from microscopic roughness of the sample surface, which acts as noise against the scattered light from minute foreign particles on the surface; a large incident angle is therefore suited to detecting minute defects. For this reason, when scattered light from microscopic surface roughness hinders the detection of minute defects, the incident angle of the illumination light is preferably set to 75 degrees or more (an elevation angle of 15 degrees or less).

 On the other hand, in oblique-incidence illumination, the smaller the incident angle, the larger the absolute amount of light scattered by minute foreign particles. Therefore, when an insufficient amount of scattered light from defects hinders the detection of minute defects, the incident angle of the illumination light is preferably set between 60 and 75 degrees (an elevation angle of 15 to 30 degrees). In oblique-incidence illumination, setting the illumination polarization to P-polarization by means of the polarization control unit 6 of the illumination unit 101 increases the scattered light from defects on the sample surface compared with other polarizations. Conversely, when scattered light from microscopic surface roughness hinders the detection of minute defects, setting the illumination polarization to S-polarization reduces that scattered light compared with other polarizations.

 If necessary, as shown in FIG. 1, the illumination optical path can be changed by inserting the mirror 21 into the optical path of the illumination unit 101 and arranging other mirrors appropriately, so that the illumination light strikes the sample surface from a substantially perpendicular direction (vertical illumination). In this case the illumination intensity distribution on the sample surface is controlled by the illumination intensity distribution control unit 7 in the same manner as for oblique-incidence illumination. A beam splitter may also be inserted at the same position as the mirror 21. Compared with oblique-incidence illumination, vertical illumination, incident substantially perpendicular to the sample surface, is suitable for obtaining scattered light from concave defects on the sample surface (polishing scratches, or crystal defects in crystalline materials).

 As the laser light source 2, in order to detect minute defects near the sample surface, a high-power source with an output of 2 W or more is used that oscillates an ultraviolet or vacuum-ultraviolet laser beam of short wavelength (355 nm or less), which does not readily penetrate into the sample; the emitted beam diameter is about 1 mm. To detect defects inside the sample, a source that oscillates a visible or infrared laser beam, whose wavelength penetrates the sample easily, is used.

 The attenuator 3 includes, as appropriate, a first polarizing plate, a half-wave plate rotatable about the optical axis of the illumination light, and a second polarizing plate. Light entering the attenuator 3 is converted into linearly polarized light by the first polarizing plate, its polarization direction is rotated to an arbitrary direction according to the slow-axis azimuth of the half-wave plate, and it then passes through the second polarizing plate. By controlling the azimuth of the half-wave plate, the light intensity can be attenuated by an arbitrary ratio. If the degree of linear polarization of the light entering the attenuator 3 is sufficiently high, the first polarizing plate is not strictly necessary. An attenuator whose relationship between input signal and attenuation ratio has been calibrated in advance is used. It is also possible to use an ND filter with a gradation density distribution as the attenuator 3, or to switch among a plurality of ND filters of different densities.
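The attenuation ratio of the polarizer / half-wave-plate / polarizer arrangement described above follows from Malus's law: a half-wave plate whose slow axis is rotated by an angle θ from the incoming polarization rotates that polarization by 2θ, so an output polarizer parallel to the input one transmits cos²(2θ) of the intensity. The following sketch shows the ideal, lossless relationship (the function name is illustrative, not from the specification):

```python
import math

def attenuator_transmission(hwp_azimuth_deg: float) -> float:
    """Ideal transmission of a polarizer / half-wave-plate / polarizer
    attenuator (Malus's law). A half-wave plate whose slow axis is at
    angle theta to the incoming polarization rotates the polarization
    by 2*theta, so the second (parallel) polarizer passes cos^2(2*theta)."""
    theta = math.radians(hwp_azimuth_deg)
    return math.cos(2 * theta) ** 2
```

At θ = 22.5° the ideal transmission is 50%, and at θ = 45° the beam is fully extinguished, which is why calibrating the wave-plate azimuth against the measured attenuation ratio, as the text describes, is sufficient in practice.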

 The emitted-light adjusting unit 4 includes a plurality of reflecting mirrors. An embodiment with two reflecting mirrors is described here, but the unit is not limited to this; three or more reflecting mirrors may be used as appropriate. Tentatively define a three-dimensional orthogonal coordinate system (XYZ coordinates) and assume that the light incident on the first mirror travels in the +X direction. The first reflecting mirror is installed so as to deflect the incident light in the +Y direction (incidence and reflection in the XY plane), and the second reflecting mirror deflects the light reflected by the first mirror in the +Z direction (incidence and reflection in the YZ plane). The position and traveling direction (angle) of the light leaving the emitted-light adjusting unit 4 are adjusted by translating each mirror and adjusting its tilt angle.

 As described above, the incidence/reflection plane (XY plane) of the first mirror and the incidence/reflection plane (YZ plane) of the second mirror are arranged orthogonally. This allows the position and angle of the light leaving the emitted-light adjusting unit 4 (traveling in the +Z direction) to be adjusted independently in the XZ plane and in the YZ plane.

 The beam expander 5 has two or more lens groups and expands the diameter of the incident parallel beam. For example, a Galilean beam expander comprising a combination of a concave lens and a convex lens is used. The beam expander 5 is mounted on a translation stage with two or more axes so that its center can be aligned with a prescribed beam position, and a tilt-angle adjustment mechanism for the whole beam expander 5 is provided so that its optical axis coincides with the prescribed beam optical axis. The magnification of the beam diameter can be controlled by adjusting the spacing between the lenses (zoom mechanism).

 If the light entering the beam expander 5 is not collimated, adjusting the lens spacing expands the beam diameter and collimates the beam (converts it into quasi-parallel light) at the same time. Collimation may instead be performed by a collimating lens placed upstream of, and independent of, the beam expander 5. The magnification of the beam expander 5 is about 5x to 10x; a beam of 1 mm diameter emitted from the light source is expanded to about 5 mm to 10 mm.
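For an ideal Galilean expander of the kind mentioned above, the magnification is fixed by the ratio of the two focal lengths. The sketch below uses illustrative names and focal lengths (assumptions, not values from the specification) to show how a 1 mm input beam maps into the 5 mm to 10 mm range quoted in the text:

```python
def galilean_magnification(f_concave_mm: float, f_convex_mm: float) -> float:
    """Ideal Galilean expander: a negative (concave) input lens and a
    positive (convex) output lens sharing a focal point expand a
    collimated beam by |f_convex / f_concave|."""
    return abs(f_convex_mm / f_concave_mm)

def output_diameter_mm(d_in_mm: float, f_concave_mm: float, f_convex_mm: float) -> float:
    """Output beam diameter for a collimated input beam."""
    return d_in_mm * galilean_magnification(f_concave_mm, f_convex_mm)
```

For example, hypothetical focal lengths of -20 mm and +100 mm give a 5x expansion, so the 1 mm source beam leaves at 5 mm.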

 The polarization control unit 6 comprises a half-wave plate and a quarter-wave plate and controls the polarization state of the illumination light to an arbitrary polarization state. In the middle of the optical path of the illumination unit 101, the beam monitor 22 measures the state of the light entering the beam expander 5 and of the light entering the illumination intensity distribution control unit 7.

 FIGS. 2 to 6 schematically show the positional relationship between the illumination optical axis 120 guided from the illumination unit 101 to the sample surface and the illumination intensity distribution shape. Note that FIGS. 2 to 6 show only part of the configuration of the illumination unit 101; the emitted-light adjusting unit 4, the mirror 21, the beam monitor 22, and so on are omitted.

 FIG. 2 schematically shows a cross section in the incidence plane of the oblique-incidence illumination (the plane containing the illumination optical axis and the sample surface normal). The oblique-incidence illumination is inclined with respect to the sample surface within the incidence plane, and the illumination unit 101 produces a substantially uniform illumination intensity distribution in that plane. The length of the uniform-intensity portion is about 100 μm to 4 mm so that a large area can be inspected per unit time.

 FIG. 3 schematically shows a cross section in the plane that contains the sample surface normal and is perpendicular to the incidence plane of the oblique-incidence illumination. In this plane, the illumination intensity distribution on the sample surface is weaker at the periphery than at the center. More specifically, it is a Gaussian distribution reflecting the intensity distribution of the light entering the illumination intensity distribution control unit 7, or an intensity distribution resembling a first-order Bessel function of the first kind or a sinc function, reflecting the aperture shape of the illumination intensity distribution control unit 7. The length of the illumination intensity distribution in this plane (the length of the region whose intensity is at least 13.5% of the maximum) is made shorter than the uniform portion in the incidence plane, about 2.5 μm to 20 μm, in order to reduce the haze generated from the sample surface. The illumination intensity distribution control unit 7 includes optical elements such as an aspherical lens, a diffractive optical element, a cylindrical lens array, or a light pipe, described later. As shown in FIGS. 2 and 3, the optical elements constituting the illumination intensity distribution control unit 7 are installed perpendicular to the illumination optical axis.
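The 13.5% threshold quoted above corresponds to the conventional 1/e² (≈ 0.135) definition of Gaussian beam width. Assuming an ideal Gaussian profile I(x) = exp(-2x²/w₀²) (the symbol w₀ and the function name are illustrative assumptions, not from the specification), the length of the region above any intensity fraction can be computed as:

```python
import math

def width_at_fraction(w0_um: float, fraction: float = math.exp(-2)) -> float:
    """Full width of the region where a Gaussian profile
    I(x) = exp(-2 * x**2 / w0**2) stays above `fraction` of its peak.
    At fraction = 1/e^2 (~13.5%) this reduces to exactly 2 * w0."""
    return 2 * w0_um * math.sqrt(-math.log(fraction) / 2)
```

At the 1/e² fraction the width is exactly 2·w₀, so distributions with w₀ between roughly 1.25 μm and 10 μm yield the 2.5 μm to 20 μm lengths given in the text.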

 The illumination intensity distribution control unit 7 includes an optical element that acts on the phase distribution and intensity distribution of incident light. A diffractive optical element (DOE) 71 is used as such an optical element (FIG. 7). The diffractive optical element 71 is formed by creating fine relief structures, with dimensions equal to or smaller than the wavelength of the light, on the surface of a substrate made of a material that transmits the incident light.

 As the transmitting material, fused silica is used for ultraviolet light. To suppress the attenuation of light passing through the diffractive optical element 71, an element coated with an anti-reflection film should be used. The fine relief structures are formed by lithography. When the light that has become quasi-parallel after passing through the beam expander 5 passes through the diffractive optical element 71, an illumination intensity distribution corresponding to the relief shape of the element is formed on the sample surface. The relief shape of the diffractive optical element 71 is designed and fabricated on the basis of calculations using Fourier optics so that the illumination intensity distribution formed on the sample surface is long and uniform within the incidence plane.

 The optical element of the illumination intensity distribution control unit 7 is provided with translation adjustment mechanisms for two or more axes and rotation adjustment mechanisms for two or more axes so that its position and angle relative to the optical axis of the incident light can be adjusted, and with a focus adjustment mechanism that moves it along the optical axis. As alternative optical elements with the same function as the diffractive optical element 71, an aspherical lens, a combination of a cylindrical lens array and a cylindrical lens, or a combination of a light pipe and an imaging lens may be used.

 A modification of the illumination intensity distribution created on the sample surface by the illumination unit 101 will now be described. As an alternative to the distribution described above, which is long in one direction (linear) with substantially uniform intensity along its length, an illumination intensity distribution that is Gaussian along its length can also be used. Such one-directionally long Gaussian illumination can be formed, for example, by using a spherical lens in the illumination intensity distribution control unit 7 and having the beam expander 5 form an elliptical beam elongated in one direction, or by constructing the illumination intensity distribution control unit 7 from a plurality of lenses including cylindrical lenses. By installing some or all of the spherical or cylindrical lenses of the illumination intensity distribution control unit 7 parallel to the sample surface, an illumination intensity distribution that is long in one direction on the sample surface and narrow in the perpendicular direction is formed.

 Compared with creating a uniform illumination intensity distribution, this arrangement makes the illumination intensity distribution on the sample surface less sensitive to fluctuations in the state of the light entering the illumination intensity distribution control unit 7, so the distribution is more stable. It also has a higher light transmittance, and is therefore more efficient, than configurations that use a diffractive optical element or a microlens array in the illumination intensity distribution control unit 7.

 The state of the illumination light in the illumination unit 101 is measured by the beam monitor 22. The beam monitor 22 measures and outputs the position and angle (traveling direction) of the illumination light that has passed through the emitted-light adjusting unit 4, or the position and wavefront of the illumination light entering the illumination intensity distribution control unit 7. The position of the illumination light is measured by measuring the centroid position of its light intensity.

 As specific position-measuring means, a position-sensitive detector (PSD) or an image sensor such as a CCD or CMOS sensor is used. The angle of the illumination light is measured by an optical position sensor or image sensor installed at a position farther from the light source than the position-measuring means, or at the focal position of a collimating lens. The illumination-light position and angle detected by these sensors are input to the control unit 53 and displayed on the display unit 54. If the position or angle of the illumination light deviates from the prescribed value, the emitted-light adjusting unit 4 is adjusted to bring it back to the prescribed value.

 The wavefront of the illumination light is measured to determine the parallelism of the light entering the illumination intensity distribution control unit 7. If the wavefront measurement shows that this light is diverging or converging rather than quasi-parallel, it can be brought closer to quasi-parallel by displacing the lens groups of the upstream beam expander 5 along the optical axis. If the measurement shows that the wavefront is partially tilted, a spatial light phase modulator, a type of spatial light modulator (SLM), is inserted upstream of the illumination intensity distribution control unit 7 and an appropriate phase difference is applied at each position in the beam cross section so as to flatten the wavefront, that is, to bring the illumination light closer to quasi-parallel light. With these wavefront measurement and adjustment means, the wavefront accuracy of the light entering the illumination intensity distribution control unit 7 (its deviation from a prescribed wavefront, i.e., the design value or initial state) is kept at λ/10 rms or less.
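The λ/10 rms criterion above can be checked directly from a sampled wavefront map. The sketch below is a hypothetical helper (not part of the specification) that expresses the wavefront error in units of λ and compares its rms deviation against the 0.1 λ budget:

```python
import math

def wavefront_rms(errors_waves) -> float:
    """RMS deviation of a sampled wavefront (values in units of the
    wavelength lambda) from its mean plane."""
    mean = sum(errors_waves) / len(errors_waves)
    return math.sqrt(sum((e - mean) ** 2 for e in errors_waves) / len(errors_waves))

def meets_lambda_over_10(errors_waves) -> bool:
    """True if the sampled wavefront satisfies the lambda/10 rms budget."""
    return wavefront_rms(errors_waves) <= 0.1
```

In practice the samples would come from the beam monitor 22; piston (the mean term) is removed before taking the rms, since a constant phase offset does not degrade the illumination distribution.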

 The illumination intensity distribution on the sample surface, adjusted by the illumination intensity distribution control unit 7, is measured by the illumination intensity distribution monitor 24. As shown in FIG. 1, the same measurement is performed when vertical illumination is used. The illumination intensity distribution monitor 24 images the sample surface through a lens onto an image sensor such as a CCD or CMOS sensor and detects it as an image. The image of the illumination intensity distribution detected by the illumination intensity distribution monitor 24 is processed by the control unit 53 to compute the intensity centroid position, the maximum intensity, the position of maximum intensity, and the width and length of the illumination intensity distribution (the width and length of the region whose intensity exceeds a prescribed value or a prescribed ratio of the maximum intensity), which are displayed on the display unit 54 together with the contour shape, cross-sectional waveform, and so on of the distribution.

 When oblique-incidence illumination is used, a height displacement of the sample surface displaces the position of the illumination intensity distribution and disturbs it through defocusing. To suppress this, the height of the sample surface is measured, and any deviation is corrected by the illumination intensity distribution control unit 7 or by adjusting the height with the Z axis of the stage 104.

 The illuminance distribution shape formed on the sample surface by the illumination unit 101 (the illumination spot 20) and the sample scanning method are described with reference to FIGS. 8 and 9.

 A circular semiconductor silicon wafer is assumed as the sample W. The stage 104 includes a translation stage, a rotation stage, and a Z stage for adjusting the sample surface height (none of them shown). The illumination spot 20 has an illumination intensity distribution that is long in one direction, as described above; let that direction be S2 and the direction substantially orthogonal to it be S1. The rotation of the rotation stage scans the spot in the circumferential direction S1 of a circle centered on the rotation axis, and the translation of the translation stage scans it in the translation direction S2. By scanning in direction S2 by a distance no greater than the longitudinal length of the illumination spot 20 while the sample makes one revolution in direction S1, the illumination spot traces a spiral trajectory T on the sample W and the entire surface of the sample W is scanned.
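The spiral trajectory T can be sketched numerically: per revolution in S1 the stage advances in S2 by at most one spot length, so successive turns of the spiral abut or overlap and the whole surface is covered. The code below is an illustrative model only (function name, sampling, and a radial feed of exactly one spot length per revolution are assumptions, not from the specification):

```python
import math

def spiral_scan_points(wafer_radius_mm: float, spot_length_mm: float,
                       samples_per_rev: int = 360):
    """Sample (x, y) positions of the illumination spot center along a
    spiral: the radius grows by one spot length per full revolution,
    starting at the rotation center and stopping at the wafer edge."""
    points = []
    r, angle = 0.0, 0.0
    step = spot_length_mm / samples_per_rev  # radial feed per angular sample
    while r <= wafer_radius_mm:
        points.append((r * math.cos(angle), r * math.sin(angle)))
        angle += 2 * math.pi / samples_per_rev
        r += step
    return points
```

Because the radial feed per revolution equals the spot length, every point of the wafer surface passes under the long axis of the spot exactly once; feeding less than one spot length per revolution would give overlapping coverage.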

 A plurality of detection units 102 are arranged so as to detect scattered light emitted from the illumination spot 20 in a plurality of directions. Examples of the arrangement of the detection units 102 with respect to the sample W and the illumination spot 20 are described with reference to FIGS. 10 to 12.

 FIG. 10 shows a side view of the arrangement of the detection units 102. The angle between the normal of the sample W and the detection direction of a detection unit 102 (the central direction of its detection aperture) is defined as the detection zenith angle. The detection units 102 comprise, as appropriate, high-angle detection units 102h with a detection zenith angle of 45 degrees or less and low-angle detection units 102l with a detection zenith angle of 45 degrees or more. The high-angle detection units 102h and the low-angle detection units 102l each consist of a plurality of detection units so as to cover light scattered in multiple azimuths at each detection zenith angle.

 FIG. 11 shows a plan view of the arrangement of the low-angle detection units 102l. In a plane parallel to the surface of the sample W, the angle between the propagation direction of the oblique-incidence illumination and the detection direction is defined as the detection azimuth angle. The low-angle detection units 102l comprise, as appropriate, a low-angle forward detection unit 102lf, a low-angle side detection unit 102ls, a low-angle backward detection unit 102lb, and their counterparts 102lf', 102ls', and 102lb' at positions symmetric to them with respect to the illumination incidence plane. For example, the low-angle forward detection unit 102lf is installed at a detection azimuth angle of 0 to 60 degrees, the low-angle side detection unit 102ls at 60 to 120 degrees, and the low-angle backward detection unit 102lb at 120 to 180 degrees.

 FIG. 12 shows a plan view of the arrangement of the high-angle detection units 102h. The high-angle detection units 102h comprise, as appropriate, a high-angle forward detection unit 102hf, a high-angle side detection unit 102hs, a high-angle backward detection unit 102hb, and a high-angle side detection unit 102hs' at a position symmetric to 102hs with respect to the illumination incidence plane. For example, the high-angle forward detection unit 102hf is installed at a detection azimuth angle of 0 to 45 degrees, and the high-angle backward detection unit 102hb at 135 to 180 degrees. Although four high-angle detection units 102h and six low-angle detection units 102l are shown here, the number and positions of the detection units may be changed as appropriate.

 FIG. 13 shows an example of a specific configuration of a detection unit 102 having an imaging unit 102-A1.

 Scattered light generated from the illumination spot 20 is collected by the objective lens 1021, and its polarization direction is controlled by the polarization control filter 1022. As the polarization control filter 1022, for example, a half-wave plate whose rotation angle can be controlled by a drive mechanism such as a motor is used. To detect the scattered light efficiently, the detection NA of the objective lens 1021 is preferably 0.3 or more. In the case of a low-angle detection unit, the lower end of the objective lens 1021 is cut away as necessary so that it does not interfere with the surface of the sample W. The imaging lens 1023 forms an image of the illumination spot 20 at the position of the aperture 1024.

 The aperture 1024 is set so that, of the image of the illumination spot 20, only the light from the region to be detected by the photoelectric conversion unit 103 passes through. When the illumination spot 20 has a Gaussian profile in the S2 direction, the aperture 1024 passes only the central portion of the Gaussian distribution, where the light intensity in the S2 direction is strong, and blocks the weak-intensity regions at the beam ends.
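As an illustrative aside (not from the original disclosure), the fraction of a Gaussian beam's power that such a slit passes can be computed with the error function; the slit half-width and beam sigma below are hypothetical.

```python
import math

def gaussian_pass_fraction(half_width, sigma):
    """Fraction of a 1-D Gaussian intensity profile transmitted by a slit
    extending +/- half_width about the beam centre."""
    return math.erf(half_width / (math.sqrt(2.0) * sigma))

# a slit of +/- 1 sigma keeps the strong central part and blocks the tails
f1 = gaussian_pass_fraction(1.0, 1.0)
f2 = gaussian_pass_fraction(2.0, 1.0)
```

Widening the slit from +/- 1 sigma to +/- 2 sigma raises the transmitted fraction from about 68% to about 95%, at the cost of admitting the weak beam ends that the aperture 1024 is meant to block.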

 In the S1 direction, the aperture is sized comparably to the image of the illumination spot 20, which suppresses disturbances such as scattering in air that occur as the illumination propagates through it. The condenser lens 1025 re-collects the light of the image formed at the aperture 1024. The polarizing beam splitter 1026 separates the light whose polarization direction was converted by the polarization control filter 1022 according to polarization direction. The diffuser 1027 absorbs light of the polarization direction not used for detection by the photoelectric conversion unit 103.

 The lens array 1028 forms as many images of the illumination spot 20 on the photoelectric conversion unit 103 as there are lenses in the array. Each lens of the lens array 1028 is a cylindrical lens, and two or more cylindrical lenses are arranged along the direction of curvature of the cylindrical lenses in a plane perpendicular to the optical axis of the condenser lens 1025. In this embodiment, the combination of the half-wave plate 1022 and the polarizing beam splitter 1026 causes the photoelectric conversion unit 103 to detect only light of a specific polarization direction out of the light collected by the objective lens 1021. Alternatively, the polarization control filter 1022 may be, for example, a wire-grid polarizer with a transmittance of 80% or more, so that only light of the desired polarization direction is extracted without using the polarizing beam splitter 1026 and the diffuser 1027.

 FIG. 34 shows another configuration of the imaging unit 102-A1 of FIG. 13.

 In the configuration of FIG. 13, a single lens array 1028 forms the plural images on the photoelectric conversion unit 103, whereas in FIG. 34A the images are formed using three lens arrays 1028a, 1028b, and 1028c together with one cylindrical lens. The lens arrays 1028a and 1028b are magnification-adjustment lens arrays, and the lens array 1028c is an imaging lens array. The individual cylindrical lenses 1028a1 to 1028aN and 1028b1 to 1028bN that make up the lens arrays 1028a and 1028b have mutually different focal lengths. The magnification here refers to the optical magnification, which can be obtained from the spread and peak position of the intensity distribution imaged on the photoelectric conversion units 1031 to 1034 in FIG. 14B described later. Since the optical magnification depends on the focal length of the lens, the magnification can be set for each image formed on the photoelectric conversion unit 103 by the lens arrays 1028a and 1028b. The lens arrays 1028a and 1028b form a Keplerian magnification-adjustment mechanism.

 FIGS. 34B and 34C show intensity profiles of the image of a sphere of minute size. It can be seen that the imaging positions on the photoelectric conversion unit 10424 are the same for 10424a to 10424c and for 10426a to 10426c. Although a Keplerian arrangement is used here, the invention is not limited to this, and another adjusting mechanism, for example a Galilean magnification-adjustment mechanism, may be used.

 In a configuration of the imaging unit 102-A1 without the lens arrays 1028a and 1028b, a magnification error arises in each of the images formed by the lens array 1028. This is explained with reference to FIG. 30.

 Let θ1 be the angle between a ray incident on the objective lens 1021 and the optical axis, and let θ2 be the angle between the sample W and the axis perpendicular to the optical axis. Here the ray at θ1 is assumed to pass through the center of one of the lenses of the lens array 1028, which is placed at a position to which the pupil of the objective lens 1021 is relayed. Denoting by θ3 the angle between this ray and the sample surface, θ3 is given by Equation 1 below.

[Equation 1]

 The image formed at each of the positions 10421 to 10423 on the light-receiving surface of the photoelectric conversion unit 103 has a size proportional to sin θ3(i), calculated from the direction θ1(i) of the chief ray incident on lens i of the lens array 1028 that forms that image. FIGS. 31 to 33 show the intensity profiles of the image of a sphere of minute size placed on the sample W: FIG. 31 shows the profile of the image formed at 10421, FIG. 32 at 10422, and FIG. 33 at 10423.

 Profiles 10421a to 10421c correspond to 1041a to 1041c, respectively. Similarly, 10422a to 10422c and 10423a to 10423c are the intensity profiles of the images corresponding to 1041a to 1041c. Because the intensity profiles shown in FIGS. 31 to 33 are formed by different lenses of the lens array 1028, θ1(i) differs among them, and hence sin θ3(i), a value proportional to the magnification, varies. As the numerical aperture of the detection unit 102 increases, the variation of θ1 within a single lens becomes larger, and the magnification variation increases accordingly.

 When an image formed in this way is projected onto the photoelectric conversion unit 103 described with reference to FIG. 16 and connected to a signal line, for example 1035-a, the resolution of the image degrades if the pitch of the pixels formed in the pixel blocks 1031 to 1034 is constant. Therefore, the magnifications of the individual cylindrical lenses 1028a1 to 1028aN and 1028b1 to 1028bN constituting the lens arrays 1028a and 1028b shown in FIG. 34A are set to a magnification ratio inversely proportional to sin θ3(i). This corrects the magnification variation.
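The inverse-proportional setting can be sketched as follows (illustrative only; the chief-ray angles θ3(i) below are hypothetical inputs, since Equation 1 itself is reproduced only in the patent figures):

```python
import math

def correction_ratios(theta3_deg):
    """Per-lens magnification ratio proportional to 1/sin(theta3(i)),
    normalized so the first (reference) lens has ratio 1.0."""
    inv = [1.0 / math.sin(math.radians(t)) for t in theta3_deg]
    return [v / inv[0] for v in inv]

# hypothetical chief-ray angles for a four-lens array
ratios = correction_ratios([30.0, 35.0, 40.0, 45.0])
```

Multiplying each sub-image's magnification by its ratio equalizes the image sizes, so a constant pixel pitch in the pixel blocks 1031 to 1034 no longer costs resolution.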

 FIG. 14A shows a schematic view of the illumination spot 20 on the sample W, and FIG. 14B shows the corresponding image formation from the lens array 1028 onto the photoelectric conversion unit 103.

 The illumination spot 20 is elongated in the S2 direction of FIG. 8. W0 denotes a defect to be detected. The objective lens 1021 is placed so that its optical axis is not orthogonal to the S2 direction. The photoelectric conversion unit 103 divides this illumination spot into regions W-a to W-d for detection. Although the spot is divided into four here, the invention is not limited to this number, and any integer number of divisions may be used.

 The scattered light from the defect W0 to be detected is collected by the objective lens 1021 and guided to the photoelectric conversion unit 103. The lens array 1028 consists of cylindrical lenses that form images in one direction only. Pixel blocks 1031, 1032, 1033, and 1034, corresponding in number to the lenses of the lens array 1028, are formed in the photoelectric conversion unit 103. Since the aperture 1024 blocks the low-intensity regions that are not photoelectrically converted, the pixel blocks 1031 to 1034 can be formed close to one another. The lens array 1028 is placed at a position to which the pupil of the objective lens is relayed. Because an image is formed for each divided pupil region, each image formed by the lens array 1028 has a narrowed aperture and therefore an expanded depth of focus. This makes imaging detection possible from directions not orthogonal to S2.

 The effect of the lens array 1028 is described in more detail with reference to FIG. 29.

 The condenser lens 1025 has a large numerical aperture, usually equal to that of the objective lens 1021. A condenser lens with a large numerical aperture collects light scattered in various directions, but this makes the depth of focus shallow. When the longitudinal direction S2 of the illumination and the optical axis of the objective lens 1021 are arranged so as not to be orthogonal, the optical distance differs between the center and the edge of the field of view, and the image formed on the photoelectric conversion unit 103 suffers defocus.

 As shown in FIG. 29, the lens array 1028 is placed at the pupil position of the condenser lens 1025, in other words at the relayed pupil position of the objective lens 1021, or again at the back focal position of the condenser lens 1025. The condenser lens 1025 is made comparable in size to the pupil diameter so that, ideally, all the light entering the aperture of the objective lens 1021 can be imaged.

 At the position of the lens array 1028, rays with similar directions of incidence on the condenser lens 1025 are distributed close together. Placing the lens array 1028 at this position is therefore equivalent to reducing the numerical aperture, which makes it possible to expand the depth of focus. In this way, the pupil is divided so that the numerical aperture becomes small, the image corresponding to each division is formed on the photoelectric conversion surface, and an image free of defocus is formed so that minute defects can be resolved.
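The depth-of-focus gain from dividing the pupil can be estimated with the common scalar approximation DOF ≈ λ/NA² (a sketch, not the patent's Equation 9; the wavelength and NA values below are hypothetical):

```python
def depth_of_focus(wavelength_um, na, m=1):
    """Depth-of-focus estimate DOF ~ lambda / (NA/M)**2 when the pupil is
    divided into M sub-apertures, each with effective NA = NA/M."""
    effective_na = na / m
    return wavelength_um / effective_na ** 2

dof_full = depth_of_focus(0.405, 0.6)       # whole pupil
dof_div = depth_of_focus(0.405, 0.6, m=4)   # pupil split four ways
```

Under this approximation an M-fold division expands the depth of focus by M², which is what allows the tilted field to stay in focus from center to edge.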

 Photoelectric elements are formed two-dimensionally in each pixel block. First, the pixel block 1031 is described. 1031a to 1031d are pixel groups formed within the pixel block 1031, and each forms an image of the light from one of the segments W-a to W-d at the illumination spot position. 1031a1 to 1031aN are the pixels belonging to 1031a, and each pixel outputs a predetermined current when a photon is incident. The outputs of the pixels belonging to the same pixel group are electrically connected, and each pixel group outputs the sum of the current outputs of its pixels. Similarly, the pixel blocks 1032 to 1034 produce outputs corresponding to W-a to W-d. Finally, the outputs of the different pixel groups corresponding to the same segment are electrically connected, so that the photoelectric conversion unit 103 outputs a signal corresponding to the number of photons detected from each of the segments W-a to W-d.
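The current summation can be sketched as follows (illustrative; the photon counts and unit current are hypothetical): each pixel contributes a fixed current per photon, and the groups of the four pixel blocks that view the same segment are wired together.

```python
def segment_output(photon_counts_per_block, unit_current=1.0):
    """Combined output for one illumination-spot segment: the sum of the
    photon-driven currents from the corresponding pixel group in each
    pixel block (1031 to 1034), each photon yielding unit_current."""
    return unit_current * sum(photon_counts_per_block)

# hypothetical photon counts seen by the four pixel blocks for segment W-a
out_wa = segment_output([3, 5, 2, 4])
```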

 The detection system of FIG. 13 is arranged so that the long-axis direction of the image of the illumination spot 20 on the photoelectric conversion unit 103 coincides with the direction S2'. With S1 and S2 defined as in FIG. 8, the vector along the length direction of the illumination spot is expressed as Equation 2.

[Equation 2]

 Next, defining the optical axis passing through the center of the objective lens 1021 as making an angle θ with the vertical direction Z of the sample W and an angle φ with S2, the vector representing this optical axis is expressed as Equation 3 (see FIG. 15).

[Equation 3]

 When the illumination spot 20 is imaged through the objective lens 1021, the component parallel to the optical axis vanishes, so this vector is expressed as Equation 4.

[Equation 4]

 The two-dimensional plane perpendicular to the optical axis of the objective lens 1021 is spanned by two vectors, one having a Z-direction component and one without (see Equations 5 and 6).

[Equation 5]

[Equation 6]

 At this time, S2' of FIG. 13 is set in the direction obtained by rotating the vector of Equation 6, which has no Z-direction component, by the angle expressed in Equation 7.

[Equation 7]

 S1'' is set orthogonal to this. The lens array 1028 and the photoelectric conversion unit 103 are arranged in this way. Further, with L denoting the length of the detected field of view, the difference Δd in optical distance between the center and the edge of the field of view is expressed by Equation 8 below.

[Equation 8]

 Now, letting NA be the numerical aperture of the objective lens 1021 and dividing it into M parts with the lens array 1028, the depth of focus DOF of the image formed by each lens of the array is expressed by Equation 9 below.

[Equation 9]

 At this time, the resolvable spacing in the S2 direction is expressed by Equation 10 below, based on the size of the Airy disk.

[Equation 10]

 Increasing M degrades the resolution given by Equation 10 and lowers the defect detection sensitivity. However, if the depth of focus given by Equation 9 is insufficient for the optical distance difference of Equation 8, the resolution at the field edge deteriorates due to lack of depth of focus, and the defect detection sensitivity again decreases. Therefore, M is typically set so as to satisfy the condition of Equation 11 below.

[Equation 11]
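As a hedged sketch of this trade-off (Equations 8 to 11 themselves appear only as figures in the publication, so standard stand-ins are used: DOF ≈ λ(M/NA)², with the field-edge defocus Δd taken as a given input), the smallest adequate M can be found by a simple search:

```python
def choose_divisions(wavelength_um, na, delta_d_um):
    """Smallest pupil-division count M whose per-sub-aperture depth of
    focus, approximated as lambda*(M/NA)**2, covers the optical-distance
    difference delta_d between field centre and field edge."""
    m = 1
    while wavelength_um * (m / na) ** 2 < delta_d_um:
        m += 1
    return m

# hypothetical values: 405 nm light, NA 0.6, 50 um edge defocus
m_needed = choose_divisions(0.405, 0.6, delta_d_um=50.0)
```

Because the resolvable spacing of Equation 10 grows with M, one would stop at this smallest sufficient M rather than dividing further.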

 The internal circuit of the photoelectric conversion unit 103 is described with reference to FIG. 16. While FIG. 14 described a photoelectric conversion means producing outputs for the four segments W-a to W-d, FIG. 16 describes an example extended to eight segments.

 Eight pixel groups are formed in each of the pixel blocks 1031 to 1034. For example, 1031a to 1031h are formed in the pixel block 1031, and the groups of 1032 to 1034 are formed similarly. 1031a5 is the fifth pixel of 1031a; its avalanche photodiode, operating in Geiger mode, is connected to the signal line 1035-1a via the quenching resistor 1031a5q.

 Similarly, all the pixels belonging to the pixel group 1031a are connected to the signal line 1035-1a, and when a photon is incident on a pixel, a current flows through the signal line 1035-1a. The pixels of the pixel group 1032a are connected to the signal line 1035-2a. In this way, every pixel group has a signal line to which its pixels are electrically connected. Since the pixel groups 1031a, 1032a, ..., 1034a each detect scattered light from the same position on the sample W, their signal lines are connected to the signal line 1035-a via 1036-1a to 1036-4a. This signal is brought out at the pad 1036-a and transmitted to the signal processing unit 105. Similarly, the pixels belonging to 1031b to 1034b are connected to the signal line 1035-b, brought out at the pad 1036-b, and transmitted to the signal processing unit 105.

 FIG. 17 shows the equivalent circuit of FIG. 16.

 The N pixels 1031a1, 1031a2, ..., 1031aN belonging to the pixel group 1031a in the pixel block 1031 are avalanche photodiodes, each with a quenching resistor connected to it. A reverse voltage VR is applied to all the avalanche photodiodes formed in the photoelectric conversion unit 103 so that they operate in Geiger mode. When a photon is incident, a current flows through the avalanche photodiode, but the paired quenching resistor lowers the reverse bias voltage and the diode is electrically cut off again. In this way, a fixed current flows for each incident photon.

 The N pixels 1034a1 to 1034aN belonging to the pixel group 1034a in the pixel block 1034 are likewise Geiger-mode avalanche photodiodes with quenching resistors coupled to them. All the pixels belonging to the pixel groups 1031a and 1034a correspond to light reflected or scattered from the region W-a on the sample W. All these signals are electrically combined and connected to the current-voltage conversion unit 103a, which outputs the voltage-converted signal 500-a.

 Similarly, the pixels 1031b1 to 1031bN belonging to the pixel group 1031b of the pixel block 1031 and the pixels 1034b1 to 1034bN belonging to the pixel group 1034b of the pixel block 1034 correspond to the light from the sample region W-b; their outputs are all electrically combined and connected to the current-voltage conversion unit 103b, which outputs the voltage signal 500-b. In this way, signals corresponding to all the regions into which the illumination spot 20 is divided are output.

 FIG. 18 shows the data processing unit 105 for the case where the illumination spot 20 is divided into W-a to W-h.

 The block 105-lf processes the signals 500a-lf to 500h-lf obtained by photoelectric conversion of the light detected by the low-angle forward detection unit 102-lf. The block 105-hb processes the signals 500a-hb to 500h-hb obtained by photoelectric conversion of the light detected by the high-angle backward detection unit 102-hb. In the same way, a processing block is provided for each signal output by each photoelectric conversion unit.

 The outputs of the high-pass filters 1051a to 1051h are accumulated in the signal combining unit 1053 over a plurality of rotations of the rotation stage, and signals acquired at the same position on the sample W are added together to produce a combined array-shaped stream signal 1055-lf.

 Like the signal combining unit 1053, the signal combining unit 1054 adds together signals acquired at the same position and outputs a combined array-shaped stream signal 1056-lf.

 The block 105-hb performs the same operations as the block 105-lf, outputting an array-shaped stream signal 1055-hb combined from the outputs of its high-pass filters and an array-shaped stream signal 1056-hb combined from the outputs of the low-pass filters 1052a to 1052h. The defect detection unit 1057 linearly adds the high-pass-filtered signals output by the plurality of photoelectric conversion units and then applies threshold processing. The low-frequency signal integration unit 1058 integrates the low-pass-filtered signals. The output of the low-frequency signal integration unit 1058 is input to the defect detection unit 1057 and used in determining the threshold. Typically, the noise is estimated to increase in proportion to the square root of the output of the low-frequency signal integration unit 1058.

 Accordingly, after associating the array-shaped stream signal of the defect detection unit 1057 with that of the low-frequency signal integration unit 1058, a threshold proportional to the square root of the signal of the low-frequency signal integration unit 1058 is applied, and any signal of the defect detection unit 1057 exceeding it is extracted as a defect. The defects detected by the defect detection unit 1057 are output to the control unit 53 together with their signal intensities and detection coordinates on the sample W. The signal intensity detected by the low-frequency signal integration unit 1058 is also transmitted to the control unit 53 as roughness information of the sample surface and presented, for example on the display unit 54, to the user operating the apparatus.
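The thresholding rule can be sketched as follows (illustrative; the constant k and the sample signals are hypothetical): the threshold scales with the square root of the low-frequency roughness level, mimicking shot-noise statistics.

```python
import math

def detect_defects(high_pass, low_pass, k=3.0):
    """Indices where the high-pass (defect) signal exceeds a threshold
    proportional to the square root of the low-frequency (roughness)
    signal at the same position."""
    return [i for i, (h, l) in enumerate(zip(high_pass, low_pass))
            if h > k * math.sqrt(max(l, 0.0))]

# hypothetical aligned stream samples: thresholds are 6, 6, 30, 30
hp = [0.5, 9.5, 1.0, 40.0]
lp = [4.0, 4.0, 100.0, 100.0]
hits = detect_defects(hp, lp)
```

On rough regions (large low-frequency level) the bar is raised, so only proportionally stronger high-pass excursions are reported as defects.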

 The size of the image formed on the sensor surface is strongly affected by the position and focal length of the lens array 1028. If the magnifications of the divided images formed on the sensor surface differ, if the spreads of the intensity distributions imaged on the pixel blocks 1031 to 1034 of FIG. 16 differ, or if the positions of the imaged intensity distributions differ, image blur occurs when they are integrated and the detection sensitivity decreases.

 As shown in FIG. 25, the control unit 53 of FIG. 1 has a magnification calculation unit 532 that calculates the magnification of the image formed on the photoelectric conversion unit 103, and a calculation processing unit 533 that obtains, based on the magnification calculated by the magnification calculation unit 532, a control amount for changing the imaging state of the image. The imaging state control unit 10212 changes the imaging state of the image formed on the photoelectric conversion unit 103 based on the control amount obtained by the calculation processing unit 533.

 FIG. 19A shows an example of the imaging unit 102-A1.

 An aperture 1029 is mounted on the photoelectric conversion side of the condenser lens 1025. The aperture 1029 is smaller than the lens array 1028 and is made of a material, such as metal, that can block part of the light incident on the lens array 1028. In FIG. 19A, light enters only the uppermost lens of the lens array; light to the other lenses is blocked by the aperture.

 FIGS. 19B and 19C show examples of the aperture.

 In FIG. 19B, an outer frame 1029a is provided, and a metal plate 1029-1 can be moved within it by an electrically controlled motor. By moving the metal plate 1029-1 in the arrow direction as shown in FIG. 19B, the image formed by each lens of the lens array 1028 can be observed individually. The aperture 1029 is inserted into the optical path during adjustment; during inspection it is withdrawn completely from the optical path so that all of the divided images are detected.

 FIG. 19C shows a scheme in which a metal plate 1029-2 slides sideways in the arrow direction within an outer frame 1029b about twice the size of the lens array 1028.

 In this scheme, the metal plate 1029-2 is slid in from the side to block part of the light incident on the lens array 1028. By switching the light-blocking metal plate 1029-2 as shown in FIG. 19C, the image formed by each lens of the lens array 1028 can be observed individually.

 Thus, the imaging unit 102-A1 has an image selection mechanism that selects some of the plurality of images formed by dividing the aperture. The magnification calculation unit 532 calculates the magnification of the images selected by the image selection mechanism, and the calculation processing unit 533 obtains, based on that magnification, a control amount for changing the imaging state of the selected images.

 In FIG. 19A, the image selection mechanism is constituted by the aperture 1029, which selects some of the plurality of images by blocking part of the light incident in front of the photoelectric conversion unit 103.

 As another embodiment of FIGS. 19A to 19C, a changeover switch 1037 is attached to the photoelectric conversion unit 103 as shown in FIG. 20.

 The operator can electrically switch each changeover switch 1037 ON and OFF from the GUI described later, so that only part of each sensor performs detection. In FIG. 20, the signal of sensor 1031 is detected, while the signals of sensors 1032, 1033, and 1034 are not. Compared with the mechanism of FIG. 19A, the mechanism of FIG. 20 is more accurate because no light leaking past the aperture 1029 is detected. Furthermore, since switching is faster than mechanically moving the aperture 1029, the measurement time can be shortened.

 In FIG. 20, the image selection mechanism is constituted by the changeover switches 1037, which select some of the plurality of images by electrically switching ON and OFF.

 FIG. 21 shows a mechanism for controlling the air pressure around the lens array 1028.

 As shown in FIG. 21, the lens array 1028 is placed in a sealed space 10210-a. The surfaces through which light enters and exits are made of synthetic quartz or the like so that light is transmitted. An air pressure sensor 10210-b is mounted inside the sealed space to measure the pressure. While the signal processing unit 105 references the measured pressure data, the pressure in the sealed space is controlled using a control box 10210-c. Because the focal length of a lens depends on the surrounding air pressure, this mechanism constitutes an imaging state control unit 10212 that changes the image state on the sensor surface.

 Thus, the imaging unit 102-A1 has a pressure adjustment mechanism that controls the air pressure of the closed space (space 10210-a) containing the lens array 1028, and the imaging state control unit 10212 changes the imaging state by controlling the pressure of that closed space with the pressure adjustment mechanism.
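Since the focal length shifts with the surrounding pressure, the pressure adjustment mechanism lends itself to a simple feedback rule. The single proportional step below is a hypothetical sketch only; the function name, the units, and the gain are illustrative assumptions and are not taken from the disclosure.

```python
def pressure_step(current_kpa, size_now, size_target, gain_kpa_per_um):
    """One proportional control step: nudge the sealed-space pressure so the
    measured image size on the sensor surface moves toward its target.
    The gain (kPa per micrometre of size error) would be calibrated on the
    actual optics before use."""
    error_um = size_now - size_target
    return current_kpa - gain_kpa_per_um * error_um
```

Repeating this step while re-measuring the image size corresponds to the adjust-and-redetect cycle described for the imaging state control unit.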

 As shown in FIG. 22, micrometers 10211a, 10211b, and 10211c are attached to the lens arrays 1028a, 1028b, and 1028c, respectively. Moving the micrometers 10211a, 10211b, 10211c changes the position of each of the lens arrays 1028a, 1028b, 1028c in the optical axis direction. This constitutes an imaging state control unit 10212 that changes the image state on the sensor surface.

 Thus, the imaging unit 102-A1 has a plurality of lens arrays 1028a, 1028b, 1028c, and the imaging state control unit 10212 changes the imaging state by moving at least one of the lens arrays 1028a, 1028b, 1028c in the optical axis direction of the detection unit 102.

 The imaging unit 102-A1 also has micrometers 10211a, 10211b, 10211c arranged on the lens arrays 1028a, 1028b, 1028c, respectively. The imaging state control unit 10212 changes the imaging state by moving the micrometers 10211a, 10211b, 10211c so as to move at least one of the lens arrays 1028a, 1028b, 1028c in the optical axis direction.

 FIG. 23 shows another embodiment of FIGS. 21 and 22.

 A micrometer 10211d is attached to the photoelectric conversion unit 103. Moving the micrometer 10211d moves the photoelectric conversion unit 103 in the optical axis direction. This constitutes an imaging state control unit 10212 that changes the image state on the sensor surface.

 Thus, the imaging state control unit 10212 changes the imaging state by moving the photoelectric conversion unit 103 in the optical axis direction of the detection unit 102. The imaging unit 102-A1 has the micrometer 10211d arranged on the photoelectric conversion unit 103, and the imaging state control unit 10212 changes the imaging state by moving the micrometer 10211d to move the photoelectric conversion unit 103 in the optical axis direction. The imaging state control unit 10212 may also change the imaging state by moving the sample W in the direction perpendicular to its surface.

 FIGS. 24A to 24C show a GUI, displayed on the display unit 54, for observing a selected part of the divided images. This is an example in which a light beam divided into four is imaged on four sensors.

 As shown in FIG. 24A, on the monitor 54-1 the imaging state of sensor 1 is shown in observation result 541-1. The sensor shown in observation result 541-1 is chosen with the selection buttons 542-1; the gray highlight indicates that sensor 1 is selected.

 As shown in FIG. 24B, on the monitor 54-2 the observed sensor is switched by the mechanism shown in FIGS. 19A to 19C or FIG. 20, and the imaging state of sensor 2 is shown in observation result 541-2. Part of the divided images is observed, and the image for each sensor is stored in the memory 531 in the control unit 53 shown in FIG. 25.

 As shown in FIG. 24C, the monitor 54-3 displays the integrated image of the images acquired by the sensors. The integrated measurement value is displayed as larger than the calibration value, showing that the integrated image is blurred. When the calibrate button is pressed, the magnification calculation unit 532 in the control unit 53 shown in FIG. 25 obtains the magnification from the size of the image on each sensor surface, and the calculation processing unit 533 measures the deviation from the calibration value specified by the operator.

 If the deviation is larger than the allowable value, the imaging state control unit 10212 of FIG. 21, 22, or 23 changes the image on the sensor surface, and the image on each sensor surface is detected again. FIGS. 24D and 24E show the observation results of the images formed on sensor 1 and sensor 2 after the sensor-surface image has been changed.

 As shown by observation result 541-6 in FIG. 24F, the size of the integrated image of sensors 1, 2, 3, and 4 is now approximately equal to the calibration value. The magnification calculation unit 532 obtains the magnification from the size of each image, and if the deviation from the specified calibration value is smaller than the allowable value, wafer inspection is started.

 In this way, one image selected by the image selection mechanism (see FIGS. 19A to 19C and FIG. 20) is displayed on the display unit 54 of FIG. 1 (see FIGS. 24A, 24B, 24D, and 24E). The display unit 54 also displays an integrated image of all the images selected by the image selection mechanism (see FIGS. 24C and 24F).

 FIG. 26 shows a flowchart for equalizing the magnifications of the divided images and then starting measurement.

 First, part of the divided images is detected (S261).

 Next, the size of the detected image is compared with the reference size (S262). If the difference from the reference value is smaller than the allowable value (S263), measurement is started (S265).

 If the difference from the reference value is larger than the allowable value (S264), the image on the sensor surface is changed (S266).

 In this way, the divided images are brought to approximately equal sizes, preventing blur in the integrated image and the resulting loss of detection sensitivity.
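The S261 to S266 flow of FIG. 26 amounts to a short measure-compare-adjust loop. The sketch below is illustrative only; the callable names and the iteration cap are assumptions, not part of the disclosure.

```python
def calibrate_and_start(measure_image_size, adjust_imaging,
                        reference, tolerance, max_iter=20):
    """Iterate the FIG. 26 loop: detect the divided image and measure its
    size (S261/S262); if the difference from the reference is within
    tolerance (S263), start measurement (S265); otherwise change the
    sensor-plane image (S266) and try again."""
    for _ in range(max_iter):
        size = measure_image_size()           # S261/S262
        if abs(size - reference) < tolerance: # S263
            return True                       # S265: start measurement
        adjust_imaging(size - reference)      # S266: change the image
    return False                              # gave up without converging
```

The adjustment callback would be backed by any of the imaging state control mechanisms of FIGS. 21 to 23.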

 Next, the defect inspection apparatus of the second embodiment will be described. Since its basic configuration is the same as that of the first embodiment, the description is omitted.

 FIG. 27 shows an example of the imaging unit 102-A1. As shown in FIG. 27, a polarization beam splitter 10213 is inserted as an optical path branching element between the lens array 1028 and the photoelectric conversion unit 103-1, and a two-dimensional camera 103-2 is arranged at a position conjugate with the photoelectric conversion unit 103-1. Although the polarization beam splitter 10213 is used in this embodiment, a retractable mirror 10214 that branches the light, as shown in FIGS. 35A and 35B, can also be used. A CMOS camera or a CCD camera is used as the two-dimensional camera.

 The pixel size of the two-dimensional camera 103-2 is smaller than the image, and its light-receiving surface is large enough to observe all the divided images. Using the two-dimensional camera 103-2, the image formed at the position of the photoelectric conversion unit 103-1 can be observed at high resolution, and its position and size can be measured with high accuracy. The detected divided image 544 is displayed on the two-dimensional camera image display unit 543 in the display unit 54 via the control unit 53. Combined with the first embodiment, the image state can be changed while the image on the sensor surface is observed, and the deviation of each divided image's imaging position from the ideal state, as well as its magnification, can be obtained.

 Thus, the imaging unit 102-A1 has the polarization beam splitter 10213 that branches part of the light incident in front of the photoelectric conversion unit 103-1, the two-dimensional camera 103-2 on which the light branched by the polarization beam splitter 10213 is incident, and the two-dimensional camera image display unit 543 that displays at least one of the plurality of images captured by the two-dimensional camera 103-2.

 FIGS. 28A to 28F show a GUI, displayed on the display unit 54, with which the operator observes the image state from the two-dimensional camera image. This is an example in which an image divided into four is detected by the two-dimensional camera 103-2.

 As shown in FIGS. 28A and 28B, the monitors 54-7 and 54-8 display, in observation results 545-1 and 545-2, the line profiles along lines 546-1 and 546-2 of the divided image 544 in the two-dimensional camera image display unit 543, together with the calibration values. The magnification calculation unit 532 and the image position calculation unit 534 provided in the control unit 53 shown in FIG. 36 measure the size and position of each divided image and can measure the difference from the calibration values.
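Estimating size and position from a line profile can be illustrated with a short calculation. Assuming the profile is available as a 1-D array, the FWHM-style width and intensity centroid below are one plausible choice of measures; they are not necessarily the ones used by the magnification calculation unit 532 and image position calculation unit 534.

```python
import numpy as np

def profile_size_and_position(profile):
    """Estimate the width (full width at half maximum, in samples) and the
    centroid position of one divided image from its line profile, for
    comparison against calibration values."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.flatnonzero(profile >= half)        # samples at/above half max
    width = above[-1] - above[0] + 1               # FWHM in samples
    centroid = np.average(np.arange(profile.size), weights=profile)
    return width, centroid
```

Comparing the returned width and centroid between the divided images gives the size and position deviations discussed in the text.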

 As shown in FIG. 28C, on the monitor 54-9 the divided image integration processing unit 535 provided in the control unit 53 shown in FIG. 36 computes the integrated image of the individual images, and the line profile of the integrated image is displayed, so that the difference in image sizes can be confirmed. When the calibrate button is pressed, the deviation from the calibration value specified by the operator is measured. As in the first embodiment, if the deviation is larger than the allowable value, the image on the sensor surface is changed and the image on each sensor surface is detected again; if the deviation is smaller than the allowable value, wafer inspection is started (see FIG. 26).

 FIGS. 28D to 28F show the observed image state of the sensor surface after the image change. In this way, the divided images are brought to equal size and position, preventing blur in the integrated image and the resulting loss of detection sensitivity.

2 light source
5 beam expander
6 polarization control unit
7 illumination intensity distribution control unit
24 illumination intensity distribution monitor
53 control unit
54 display unit
55 input unit
101 illumination unit
102 detection unit
103 photoelectric conversion unit
104 stage unit
105 signal processing unit
1021 objective lens
1022 polarization control filter
1023 polarization control filter
1024 aperture
1025 condenser lens
1026 polarization beam splitter
1027 diffuser
1028 lens array
1029 variable aperture
10210 air pressure adjustment mechanism
10211 micrometer

Claims (14)

1. A defect inspection apparatus comprising: an illumination unit that irradiates a sample with light emitted from a light source; a detection unit that detects scattered light generated from the sample; a photoelectric conversion unit that converts the scattered light detected by the detection unit into an electric signal; and a signal processing unit that processes the electric signal converted by the photoelectric conversion unit to detect a defect of the sample, wherein the detection unit has an imaging unit that forms a plurality of images, formed by dividing an aperture, on the photoelectric conversion unit at a magnification determined for each image, and the signal processing unit detects the defect of the sample by combining the plurality of images formed on the photoelectric conversion unit.

2. The defect inspection apparatus according to claim 1, further comprising an imaging state control unit that changes the imaging state of the image formed on the photoelectric conversion unit.

3. The defect inspection apparatus according to claim 2, wherein the imaging unit has a plurality of lens arrays, and at least one of the plurality of lens arrays is composed of lenses having mutually different focal lengths.

4. The defect inspection apparatus according to claim 2, wherein the imaging unit has a plurality of lens arrays, and the imaging state control unit moves at least one of the plurality of lens arrays in the optical axis direction of the detection unit to change the imaging state.

5. The defect inspection apparatus according to claim 2, wherein the imaging state control unit moves the photoelectric conversion unit in the optical axis direction of the detection unit to change the imaging state.

6. The defect inspection apparatus according to claim 2, wherein the imaging state control unit moves the sample in the direction perpendicular to its surface to change the imaging state.

7. The defect inspection apparatus according to claim 2, wherein the imaging unit has a pressure adjustment mechanism that controls the air pressure of a closed space containing a lens, and the imaging state control unit controls the air pressure of the closed space with the pressure adjustment mechanism to change the imaging state.

8. The defect inspection apparatus according to claim 2, further comprising: a magnification calculation unit that calculates the magnification of the image formed on the photoelectric conversion unit; and a processing unit that obtains, based on the magnification calculated by the magnification calculation unit, a control amount for changing the imaging state of the image, wherein the imaging state control unit changes the imaging state of the image based on the control amount.

9. The defect inspection apparatus according to claim 8, wherein the imaging unit has an image selection mechanism that selects one image from the plurality of images formed by dividing the aperture, the magnification calculation unit calculates the magnification of the one image selected by the image selection mechanism, and the processing unit obtains, based on the magnification of the one image calculated by the magnification calculation unit, the control amount for changing the imaging state of the one image.

10. The defect inspection apparatus according to claim 9, wherein the image selection mechanism is constituted by an aperture that selects the one image from the plurality of images by blocking part of the light incident in front of the photoelectric conversion unit.

11. The defect inspection apparatus according to claim 9, wherein the image selection mechanism is constituted by a changeover switch, arranged on the photoelectric conversion unit, that selects the one image from the plurality of images by electrically switching ON and OFF.

12. The defect inspection apparatus according to claim 9, further comprising a display unit that displays the one image selected by the image selection mechanism.

13. The defect inspection apparatus according to claim 9, wherein the display unit displays an integrated image of all the images selected by the image selection mechanism.

14. The defect inspection apparatus according to claim 1, wherein the imaging unit has: a polarization beam splitter that branches part of the light incident in front of the photoelectric conversion unit; a CCD camera on which the light branched by the polarization beam splitter is incident; and a CCD image display unit that displays at least one image of a plurality of images captured by the CCD camera.
PCT/JP2018/047448 2018-12-25 2018-12-25 Defect inspection device Ceased WO2020136697A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/047448 WO2020136697A1 (en) 2018-12-25 2018-12-25 Defect inspection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/047448 WO2020136697A1 (en) 2018-12-25 2018-12-25 Defect inspection device

Publications (1)

Publication Number Publication Date
WO2020136697A1 true WO2020136697A1 (en) 2020-07-02

Family

ID=71129241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/047448 Ceased WO2020136697A1 (en) 2018-12-25 2018-12-25 Defect inspection device

Country Status (1)

Country Link
WO (1) WO2020136697A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022162881A1 (en) * 2021-01-29 2022-08-04 株式会社日立ハイテク Defect inspection device
US12146840B2 (en) 2020-06-09 2024-11-19 Hitachi High-Tech Corporation Defect inspection device
WO2024257331A1 (en) * 2023-06-16 2024-12-19 株式会社日立ハイテク Defect inspecting device, and defect inspecting method
US12313566B2 (en) 2019-08-02 2025-05-27 Hitachi High-Tech Corporation Defect inspection device and defect inspection method
US12345661B2 (en) 2019-08-14 2025-07-01 Hitachi High-Tech Corporation Defect inspection apparatus and defect inspection method
US12366538B2 (en) 2020-04-02 2025-07-22 Hitachi High-Tech Corporation Defect inspection apparatus and defect inspection method

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5919345A (en) * 1982-07-23 1984-01-31 Hitachi Ltd recognition device
JPS6232613A (en) * 1985-08-05 1987-02-12 Canon Inc projection exposure equipment
JPH0232523A (en) * 1988-07-22 1990-02-02 Mitsubishi Electric Corp Exposure control method
JP2003114195A (en) * 2001-10-04 2003-04-18 Dainippon Screen Mfg Co Ltd Image acquiring deice
US20080074749A1 (en) * 2006-09-07 2008-03-27 Lizotte Todd E Apparatus and methods for the inspection of microvias in printed circuit boards
JP2009283633A (en) * 2008-05-21 2009-12-03 Hitachi High-Technologies Corp Surface inspection device, and surface inspection method
JP2012117898A (en) * 2010-11-30 2012-06-21 Hitachi High-Technologies Corp Defect inspection device, defect information acquisition device and defect inspection method
JP2014163681A (en) * 2013-02-21 2014-09-08 Toppan Printing Co Ltd Periodic pattern irregularity inspection method and irregularity inspection device
JP2014209068A (en) * 2013-04-16 2014-11-06 インスペック株式会社 Pattern inspection device
JP2016035466A (en) * 2015-09-24 2016-03-17 株式会社日立ハイテクノロジーズ Defect inspection method, weak light detection method, and weak light detector
US20180059552A1 (en) * 2016-08-23 2018-03-01 Asml Netherlands B.V. Metrology Apparatus for Measuring a Structure Formed on a Substrate by a Lithographic Process, Lithographic System, and Method of Measuring a Structure Formed on a Substrate by a Lithographic Process
JP2018510320A (en) * 2014-12-09 2018-04-12 ビーエーエスエフ ソシエタス・ヨーロピアBasf Se Optical detector
WO2018216277A1 (en) * 2017-05-22 2018-11-29 株式会社日立ハイテクノロジーズ Defect inspection device and defect inspection method


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12313566B2 (en) 2019-08-02 2025-05-27 Hitachi High-Tech Corporation Defect inspection device and defect inspection method
US12345661B2 (en) 2019-08-14 2025-07-01 Hitachi High-Tech Corporation Defect inspection apparatus and defect inspection method
US12366538B2 (en) 2020-04-02 2025-07-22 Hitachi High-Tech Corporation Defect inspection apparatus and defect inspection method
US12146840B2 (en) 2020-06-09 2024-11-19 Hitachi High-Tech Corporation Defect inspection device
WO2022162881A1 (en) * 2021-01-29 2022-08-04 株式会社日立ハイテク Defect inspection device
US12400889B2 (en) 2021-01-29 2025-08-26 Hitachi High-Tech Corporation Defect inspection device
WO2024257331A1 (en) * 2023-06-16 2024-12-19 株式会社日立ハイテク Defect inspecting device, and defect inspecting method

Similar Documents

Publication Publication Date Title
US11143598B2 (en) Defect inspection apparatus and defect inspection method
KR101478476B1 (en) Defect inspection method, low light detecting method, and low light detector
WO2020136697A1 (en) Defect inspection device
US8922764B2 (en) Defect inspection method and defect inspection apparatus
JP5773939B2 (en) Defect inspection apparatus and defect inspection method
US11366069B2 (en) Simultaneous multi-directional laser wafer inspection
KR102781740B1 (en) Method and device for measuring height on a semiconductor wafer
JP5487196B2 (en) A split field inspection system using a small catadioptric objective.
US12313566B2 (en) Defect inspection device and defect inspection method
TW201604609A (en) Auto-focus system
TW201932828A (en) System for wafer inspection
WO2013077125A1 (en) Defect inspection method and device for same
JP2004264287A (en) Method and apparatus for identifying defect in substrate surface using dithering for reconstructing image of insufficient sampling
JP5815798B2 (en) Defect inspection method and defect inspection apparatus
US11356594B1 (en) Tilted slit confocal system configured for automated focus detection and tracking
US7767982B2 (en) Optical auto focusing system and method for electron beam inspection tool
JP6117305B2 (en) Defect inspection method, weak light detection method, and weak light detector
US12146840B2 (en) Defect inspection device
WO2024257331A1 (en) Defect inspecting device, and defect inspecting method
WO2025013238A1 (en) Defect inspection device and defect inspection method
JPH10221270A (en) Foreign matter inspection device
WO2024257319A1 (en) Defect inspection device and optical system
WO2025169423A1 (en) Defect inspection device and defect inspection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18944131
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18944131
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP