
WO2009096422A1 - Three-dimensional shape measurement device, method, and program - Google Patents


Info

Publication number
WO2009096422A1
Authority
WO
WIPO (PCT)
Prior art keywords
focus
pixel
measure
dimensional shape
focus position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2009/051346
Other languages
French (fr)
Japanese (ja)
Inventor
Masaya Yamaguchi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nikon Corp
Original Assignee
Nikon Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corp filed Critical Nikon Corp
Priority to JP2009551537A priority Critical patent/JP5218429B2/en
Publication of WO2009096422A1 publication Critical patent/WO2009096422A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Definitions

  • the present invention relates to a three-dimensional shape measurement apparatus and method, and a program, and more particularly, to a three-dimensional shape measurement apparatus and method, and a program that improve the measurement accuracy of a three-dimensional shape.
  • an SFF (Shape-From-Focus) method is known as a method for measuring the three-dimensional shape of an object using a two-dimensional image (see, for example, Non-Patent Document 1).
  • the SFF method is an effective method when the surface of the measurement object has a texture.
  • In the SFF method, a plurality of images with different focal positions are captured while moving the imaging device with respect to the measurement object, and a differential operation is performed on each pixel of the obtained images to calculate a focus measure indicating the degree of focus. The three-dimensional shape of the measurement object is then measured based on the focal position at which the focus measure of each pixel is maximized.
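The capture-then-argmax loop of the SFF method can be sketched as follows. This is a minimal illustration, not the patent's implementation: it uses the sum-modified-Laplacian, one common differential focus measure from the SFF literature, on a tiny synthetic focal stack.

```python
# Minimal SFF sketch (illustrative only): for each pixel, a differential
# focus measure is evaluated in every image of a focal stack, and the focal
# index with the maximum measure is taken as that pixel's depth estimate.

def focus_measure(image, x, y):
    """Sum-modified-Laplacian at (x, y): a common SFF focus measure."""
    return (abs(2 * image[y][x] - image[y][x - 1] - image[y][x + 1]) +
            abs(2 * image[y][x] - image[y - 1][x] - image[y + 1][x]))

def depth_from_focus(stack, x, y):
    """Index of the image in the stack where pixel (x, y) is sharpest."""
    measures = [focus_measure(img, x, y) for img in stack]
    return measures.index(max(measures))

# Synthetic stack of 3x3 luminance images: image 1 has the strongest
# local contrast at the centre pixel, i.e. it is the in-focus image.
stack = [
    [[5, 5, 5], [5, 5, 5], [5, 5, 5]],   # strongly defocused: flat
    [[0, 9, 0], [9, 0, 9], [0, 9, 0]],   # in focus: high contrast
    [[4, 5, 4], [5, 4, 5], [4, 5, 4]],   # slightly defocused
]
print(depth_from_focus(stack, 1, 1))  # → 1 (the in-focus image's index)
```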
  • In a captured image, however, a bright part tends to bloom and appear enlarged, and this tendency becomes stronger as the defocus increases. For this reason, at a boundary where the luminance or color of the measurement object changes greatly, an originally dark part may be photographed brightly.
  • As a result, the position at which each pixel is in focus (hereinafter referred to as the in-focus position) may be erroneously detected, and measurement accuracy may be reduced.
  • the present invention has been made in view of such a situation, and is intended to improve the measurement accuracy of a three-dimensional shape.
  • The three-dimensional shape measurement apparatus of the present invention calculates a focus measure indicating the degree of focus at each pixel of a plurality of images captured at different focal positions with respect to a measurement object, and measures the three-dimensional shape of the measurement object by detecting the in-focus position based on the focus measure. The apparatus includes: effective pixel extraction means for extracting, from pixels within a predetermined range in the vicinity of a target pixel used for calculating the focus measure, effective pixels whose determination value (at least one of luminance, hue, and saturation) differs from the determination value of the target pixel by no more than an allowable range; focus measure calculation means for calculating the focus measure at the target pixel using the extracted effective pixels; and in-focus position detection means for detecting, as the in-focus position, the focal position at which the focus measure reaches a peak.
  • The three-dimensional shape measurement method of the present invention, performed by a three-dimensional shape measurement apparatus, calculates a focus measure indicating the degree of focus at each pixel of a plurality of images captured at different focal positions with respect to a measurement object, and measures the three-dimensional shape of the measurement object by detecting the in-focus position based on the focus measure. The method includes: an effective pixel extraction step of extracting, from pixels within a predetermined range in the vicinity of a target pixel used for calculating the focus measure, effective pixels whose determination value (at least one of luminance, hue, and saturation) differs from the determination value of the target pixel by no more than an allowable range; a focus measure calculation step of calculating the focus measure at the target pixel using the extracted effective pixels; and an in-focus position detection step of detecting, as the in-focus position, the focal position at which the focus measure reaches a peak.
  • The program of the present invention causes a computer to execute processing that calculates a focus measure indicating the degree of focus at each pixel of a plurality of images captured at different focal positions with respect to a measurement object, and measures the three-dimensional shape of the measurement object by detecting the in-focus position based on the focus measure. The processing includes: an effective pixel extraction step of extracting, from pixels within a predetermined range in the vicinity of a target pixel used for calculating the focus measure, effective pixels whose determination value (at least one of luminance, hue, and saturation) differs from the determination value of the target pixel by no more than an allowable range; a focus measure calculation step of calculating the focus measure at the target pixel using the extracted effective pixels; and an in-focus position detection step of detecting, as the in-focus position, the focal position at which the focus measure reaches a peak.
  • In the present invention, at least one of luminance, hue, and saturation is used as a determination value; from pixels within a predetermined range in the vicinity of the target pixel used for calculating the focus measure at the target pixel, effective pixels whose determination value differs from that of the target pixel by no more than an allowable range are extracted; the focus measure at the target pixel is calculated using the extracted effective pixels; and the focal position at which the focus measure reaches a peak is detected as the in-focus position.
  • the measurement accuracy of the three-dimensional shape is improved.
  • 1 three-dimensional shape measurement system, 2 measurement object, 11 imaging device, 12 computer, 51 three-dimensional shape measurement unit, 61 imaging control unit, 62 measurement unit, 71 preprocessing unit, 72 effective pixel extraction unit, 73 focus measure calculation unit, 74 in-focus position detection unit, 75 three-dimensional shape data generation unit
  • FIG. 1 is a diagram showing an embodiment of a three-dimensional shape measurement system to which the present invention is applied.
  • the three-dimensional shape measurement system 1 of FIG. 1 is configured to include an imaging device 11, a computer 12, and a display 13.
  • The imaging device 11 and the computer 12 are connected via a cable 14, and the computer 12 and the display 13 are connected via a cable 15.
  • the image pickup apparatus 11 schematically shown in FIG. 1 is configured to include at least an optical lens 21 and a photodetector 22 including an image pickup device such as a CCD (Charge Coupled Device).
  • The imaging device 11 captures a plurality of images with different focal positions of the measurement object 2 installed on the stage 3 of the measurement microscope while changing its height (its position in the Z-axis direction) with respect to the measurement object 2, and the computer 12 measures the three-dimensional shape of the measurement object 2 using the plurality of images.
  • FIG. 2 is a block diagram showing an example of a functional configuration realized when a processor (for example, a CPU (Central Processing Unit), a DSP (Digital Signal Processor), etc.) of the computer 12 executes a predetermined program.
  • the three-dimensional shape measuring unit 51 is realized by the processor of the computer 12 executing a predetermined program.
  • the three-dimensional shape measurement unit 51 includes an imaging control unit 61 and a measurement unit 62.
  • the imaging control unit 61 controls the imaging device 11 to image the measurement object 2 while changing the focal position.
  • the imaging control unit 61 acquires a captured image (hereinafter referred to as an original image) from the imaging device 11 and supplies the acquired image to the preprocessing unit 71 of the measurement unit 62.
  • the imaging control unit 61 notifies the preprocessing unit 71 of the end of imaging when imaging of the measurement object 2 at all focal positions is completed.
  • the measuring unit 62 measures the three-dimensional shape of the measurement object 2 using a plurality of original images having different focal positions.
  • the measurement unit 62 includes a preprocessing unit 71, an effective pixel extraction unit 72, a focus measure calculation unit 73, a focus position detection unit 74, and a three-dimensional shape data generation unit 75.
  • The preprocessing unit 71 performs predetermined preprocessing on the original image and stores the resulting image (hereinafter also referred to as a preprocessed image) in the memory 52. Further, when notified of the end of imaging by the imaging control unit 61, the preprocessing unit 71 forwards the notification to the effective pixel extraction unit 72.
  • From the pixels within a predetermined range (hereinafter referred to as the attention range) in the vicinity of the target pixel for which a focus measure is calculated, the effective pixel extraction unit 72 extracts, using the luminance, hue, or saturation tolerance set by the user, the effective pixels used for calculating the focus measure at the target pixel, and extracts the luminance values of those effective pixels from the preprocessed image.
  • the effective pixel extraction unit 72 supplies information indicating the luminance value of the extracted effective pixels to the focus measure calculation unit 73.
  • The focus measure calculation unit 73 calculates the focus measure of the target pixel using the extracted luminance values of the effective pixels, and supplies the calculated focus measure to the in-focus position detection unit 74.
  • Based on the calculated focus measures, and using the luminance tolerance set by the user, the in-focus position detection unit 74 detects the in-focus position of the target pixel and stores the detected in-focus position in the memory 52. Further, when the in-focus position detection unit 74 has detected the in-focus positions of all the pixels, it notifies the three-dimensional shape data generation unit 75 of the end of in-focus position detection.
  • the 3D shape data generation unit 75 generates 3D shape data based on the in-focus position of each pixel stored in the memory 52, and outputs it to the subsequent stage.
  • In step S1, the effective pixel extraction unit 72 and the in-focus position detection unit 74 acquire various tolerances. Specifically, for example, via an input unit (not shown) of the computer 12, the user sets which of luminance, hue, and saturation to use as the determination value for effective pixel extraction, and inputs to the computer 12 the tolerances of luminance, hue, and saturation used for the determination.
  • the effective pixel extraction unit 72 acquires the input allowable value. Note that any one of luminance, hue, and saturation may be used for the determination value, or two or more types may be used in combination.
  • the user inputs an allowable luminance value used for determination of the maximum value of the focus measure to the computer 12 via an input unit (not shown) of the computer 12, and the focus position detection unit 74 receives the allowable value. Get the value.
  • In step S2, the imaging device 11 captures an image of the measurement object 2 under the control of the imaging control unit 61.
  • the imaging apparatus 11 supplies an original image represented by the three primary colors of RGB obtained as a result of shooting to the shooting control unit 61 via the cable 14.
  • the imaging control unit 61 supplies the acquired original image to the preprocessing unit 71.
  • In step S3, the preprocessing unit 71 performs preprocessing of the image. Specifically, the preprocessing unit 71 applies a Fourier transform to the acquired original image, removes low-frequency components at or below a predetermined frequency and noise components within a predetermined frequency range, and then applies an inverse Fourier transform. As a result, the low-frequency and noise components are removed from the original image, and an RGB image from which the medium-to-high-frequency components are extracted (hereinafter referred to as a preprocessed RGB image) is generated.
  • The preprocessing unit 71 stores the generated preprocessed RGB image in the memory 52. Further, the preprocessing unit 71 converts the generated preprocessed RGB image into a YPbPr image, and stores the converted image (hereinafter referred to as a preprocessed YPbPr image) in the memory 52.
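The stored YPbPr plane can be derived from the preprocessed RGB image with a standard colour-matrix conversion. The sketch below assumes the BT.601 coefficients, which the patent does not specify: Y supplies the luminance used for the focus measure, while Pb and Pr carry the chroma from which hue and saturation can be derived.

```python
# Hypothetical RGB -> YPbPr conversion for the preprocessing step.
# BT.601 coefficients are an assumption; the patent names no matrix.

def rgb_to_ypbpr(r, g, b):
    """Convert one RGB pixel (components in 0..1) to (Y, Pb, Pr)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b   # luminance
    pb = -0.168736 * r - 0.331264 * g + 0.5      * b   # blue-difference chroma
    pr =  0.5      * r - 0.418688 * g - 0.081312 * b   # red-difference chroma
    return y, pb, pr

# White maps to full luminance and (near-)zero chroma.
y, pb, pr = rgb_to_ypbpr(1.0, 1.0, 1.0)
print(round(y, 6), round(pb, 6), round(pr, 6))
```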
  • In step S4, the imaging control unit 61 determines whether imaging has been performed at all focal positions. If it is determined that imaging has not yet been performed at all focal positions, the process proceeds to step S5.
  • In step S5, the imaging control unit 61 changes the focal position. That is, the imaging control unit 61 moves the imaging device 11 in the Z-axis direction so that the focal position takes the next value.
  • Thereafter, the processes of steps S2 to S5 are repeatedly executed until it is determined in step S4 that images have been captured at all the focal positions.
  • As a result, N original images are captured while moving the focal position at a predetermined interval from the lowermost end to the uppermost end of the measurement object 2, and from each original image a preprocessed RGB image and a preprocessed YPbPr image are generated and stored in the memory 52.
  • the vertical axis represents the focal position, and on the vertical axis, an index representing the setting order of the focal position is shown instead of the value of the focal position.
  • FIG. 6 shows an example of an original image taken at the kth focal position in FIG.
  • Each square box in FIG. 6 represents a pixel.
  • The region Rf indicates a region in which the imaging device 11 is in focus, the region Rin indicates a region inside the in-focus region, and the region Rout indicates a region outside the in-focus region. That is, the images in the regions Rin and Rout are so-called defocused images that are out of focus.
  • When the imaging control unit 61 determines in step S4 that imaging has been performed at all the focal positions, it notifies the effective pixel extraction unit 72 of the end of imaging via the preprocessing unit 71, and the process proceeds to step S6.
  • In step S6, the measurement unit 62 selects one of the pixels for which the in-focus position has not yet been obtained and sets it as the target pixel.
  • the pixel of interest is set, for example, in raster scan order.
  • In step S7, the measurement unit 62 selects one of the images for which the focus measure at the current target pixel has not yet been obtained and sets it as the target image.
  • the attention image is set, for example, in the order of shooting (index order).
  • In step S8, the effective pixel extraction unit 72 selects one pixel to be processed from the pixels within the attention range. That is, the effective pixel extraction unit 72 selects, as the pixel to be processed, one of the pixels in the attention range for which it has not yet been determined whether it is an effective pixel.
  • FIG. 7 shows an example of the attention range when the pixel PA1 in FIG. 6 is set as the attention pixel.
  • a 5 ⁇ 5 pixel range RA1 centered on the target pixel PA1 is set as the target range.
  • In step S9, the effective pixel extraction unit 72 determines whether the difference between the determination values of the target pixel and the pixel to be processed is within the allowable range. Specifically, the effective pixel extraction unit 72 obtains the determination values of the target pixel and the pixel to be processed from the preprocessed RGB image and the preprocessed YPbPr image of the target image stored in the memory 52. The effective pixel extraction unit 72 then takes the difference between the determination values of the target pixel and the pixel to be processed, and compares the absolute value of the difference with the allowable value set in step S1.
  • When the absolute value of the difference is at or below the allowable value for every type of determination value used (luminance, hue, and saturation), the effective pixel extraction unit 72 determines that the difference between the determination values of the target pixel and the pixel to be processed is within the allowable range, that is, that the pixel to be processed is an effective pixel, and the process proceeds to step S10.
  • In step S10, the effective pixel extraction unit 72 extracts the luminance value of the pixel to be processed. That is, the effective pixel extraction unit 72 extracts the luminance value of the current pixel to be processed from the preprocessed YPbPr image of the target image stored in the memory 52.
  • If, in step S9, the absolute value of the difference for at least one type of determination value exceeds the allowable value, it is determined that the difference between the determination values of the target pixel and the pixel to be processed exceeds the allowable range; the process of step S10 is skipped and the process proceeds to step S11.
  • In step S11, the effective pixel extraction unit 72 determines whether all the pixels within the attention range have been examined. If it is determined that not all the pixels within the attention range have yet been examined, the process returns to step S8. Thereafter, the processes of steps S8 to S11 are repeatedly performed until it is determined in step S11 that all the pixels in the attention range have been examined, whereby the effective pixels, whose determination values differ from that of the target pixel by no more than the allowable range, are extracted from the attention range, and their luminance values are extracted.
  • When it is determined in step S11 that all the pixels within the attention range have been examined, the process proceeds to step S12.
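Steps S8 to S11 amount to a tolerance test applied to every neighbour in the attention range. A simplified sketch follows, with hypothetical helper names and luminance as the only determination value (the patent also allows hue and saturation):

```python
# Effective-pixel extraction over a (2*half+1) x (2*half+1) attention range.
# A neighbour is "effective" when its determination value (here: luminance)
# differs from the target pixel's by no more than the tolerance `tol`.

def effective_pixels(image, cx, cy, half=2, tol=30):
    """Return the (x, y) coordinates of effective pixels around (cx, cy)."""
    target = image[cy][cx]
    found = []
    for y in range(cy - half, cy + half + 1):
        for x in range(cx - half, cx + half + 1):
            if (x, y) == (cx, cy):
                continue  # skip the target pixel itself
            if abs(image[y][x] - target) <= tol:
                found.append((x, y))
    return found

# 5x5 test image: uniform luminance 100 except a bright right-hand column,
# mimicking a neighbouring region of different brightness (like RA1b).
img = [[100] * 5 for _ in range(5)]
for row in img:
    row[4] = 200
print(len(effective_pixels(img, 2, 2)))  # → 19 (24 neighbours minus the 5 bright ones)
```

Excluding the five bright pixels keeps the focus measure from mixing contributions of two regions at different heights, which is the point of the effective-pixel test.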
  • In step S12, the focus measure calculation unit 73 calculates the focus measure at the target pixel of the target image. Specifically, the effective pixel extraction unit 72 supplies information indicating the luminance values of the extracted effective pixels to the focus measure calculation unit 73, which calculates the focus measure at the target pixel of the target image using the following equation (1).
  • Focus measure = (1/n) Σ |Y(target pixel) − Y(effective pixel i)|  … (1), where the sum runs over the n effective pixels within the attention range and Y denotes the luminance value.
  • the focus measure is an average value of absolute values of difference values between the luminance value of the target pixel and the luminance value of the effective pixel within the target range. Therefore, the value of the focus measure increases as the difference in luminance value between the target pixel and the effective pixel increases, that is, as the contrast between the target pixel and the effective pixel increases.
  • the focus measure calculation unit 73 supplies the calculated focus measure to the focus position detection unit 74.
  • the focus measure calculation unit 73 stores the calculated focus measure, the luminance value of the target pixel of the current target image, and the focal position of the current target image in the memory 52 in association with each other.
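Equation (1) is the mean absolute luminance difference between the target pixel and its effective pixels. A direct sketch (hypothetical function name):

```python
# Focus measure per equation (1): the average of |Y_target - Y_effective|
# over the effective pixels in the attention range. High contrast between
# the target pixel and its effective neighbours yields a large measure.

def focus_measure(target_luma, effective_lumas):
    if not effective_lumas:
        return 0.0  # no effective pixels -> no contrast evidence
    return sum(abs(target_luma - v) for v in effective_lumas) / len(effective_lumas)

print(focus_measure(100, [90, 110, 100, 80]))  # mean of 10, 10, 0, 20 → 10.0
```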
  • If the focus measure were obtained using all the pixels in the attention range RA1, it would be affected by both the ranges RA1a and RA1b, which have different heights, and a focus measure corresponding to a height between that of the range RA1a and that of the range RA1b would be calculated.
  • FIG. 9 schematically shows the state of hue or saturation distribution within the range RA1a.
  • The hue or saturation also differs between the range RA1a and the range RA1b.
  • Therefore, the focus measure at the target pixel PA1 can be calculated more accurately by using only the luminance of pixels within the range RA1a or a range close to it. Note that the effective pixels extracted using luminance, hue, or saturation do not necessarily coincide; using all three types as determination values makes it possible to calculate the focus measure even more accurately.
  • In step S13, the in-focus position detection unit 74 determines whether the focus measure is the maximum. Specifically, the in-focus position detection unit 74 determines that the focus measure is the maximum when the focus measure calculated in step S12 is larger than the maximum value of the focus measures stored so far in the memory 52, and the process proceeds to step S14. Note that when the first focus measure is calculated for the current target pixel, no maximum value is yet stored in the memory 52, so the calculated focus measure is unconditionally determined to be the maximum.
  • In step S14, the in-focus position detection unit 74 determines whether the luminance value of the target pixel is within the allowable range. Specifically, the in-focus position detection unit 74 takes the difference between the luminance value of the target pixel at the current focal position (the luminance value of the target pixel of the current target image) and the luminance value at the focal position where the focus measure has so far been maximal (the luminance value of the target pixel of the image whose focus measure was stored in the memory 52 as the maximum), and compares it with the allowable value set in step S1. If the difference is at or below the allowable value, the in-focus position detection unit 74 determines that the luminance value of the target pixel is within the allowable range, and the process proceeds to step S15.
  • In step S15, the in-focus position detection unit 74 stores the current focus measure as the maximum value. That is, the in-focus position detection unit 74 stores in the memory 52 the focus measure at the target pixel of the current target image as the maximum value of the focus measure, in association with the luminance value of the target pixel of the current target image and the focal position of the current target image.
  • If, in step S14, the calculated difference exceeds the allowable value, it is determined that the luminance value of the target pixel exceeds the allowable range; the process of step S15 is skipped and the process proceeds to step S16. That is, the focus measure at the current focal position is not stored as the maximum value.
  • FIG. 10 is a graph showing an example of the relationship between the focus position and the focus measure in the target pixel PA11 at the boundary portion where the luminance or color of the measurement object changes greatly.
  • the bright portion may expand as the focus shifts, and the originally dark portion may be photographed brightly.
  • Near the true in-focus position of the target pixel PA11, the peak PK11 of the focus measure appears. As the focal position moves away, the luminance of the target pixel PA11 becomes brighter due to blooming of the adjacent bright part, and a second peak PK12 of the focus measure may appear.
  • If the difference between the luminance of the target pixel PA11 at the focal position where the focus measure is at the peak PK11 and its luminance at the focal position where the focus measure is at the peak PK12 is larger than the allowable value, the determination process of step S14 excludes the focal position of the peak PK12 and the focal positions in its vicinity from the detection of the maximum focus measure. This prevents a focus measure peak at an out-of-focus position from being detected, and thus prevents erroneous detection of the in-focus position.
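The maximum tracking of steps S13 to S15, with the luminance guard that rejects bloom-induced peaks such as PK12, can be sketched as follows (simplified, with hypothetical names; the real flow also records the luminance and focal position alongside the maximum, as described above):

```python
# Track the maximum focus measure along the focal axis, but accept a new
# maximum only if the target pixel's luminance has not jumped by more than
# the tolerance since the stored maximum (rejecting bloom-induced peaks).

def update_maximum(best, measure, luma, z, tol=40):
    """best is (measure, luminance, focal_index) or None."""
    if best is None:
        return (measure, luma, z)    # first sample: unconditional maximum
    if measure > best[0] and abs(luma - best[1]) <= tol:
        return (measure, luma, z)    # genuine new maximum
    return best                      # smaller, or luminance jumped too far

# (measure, luminance) along the focal axis; the second, brighter peak at
# index 5 mimics bloom from an adjacent bright region (like peak PK12).
samples = [(2, 100), (8, 105), (3, 110), (1, 150), (6, 220), (9, 230)]
best = None
for z, (m, y) in enumerate(samples):
    best = update_maximum(best, m, y, z)
print(best[2])  # → 1: the bright false peak at index 5 was rejected
```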
  • In step S16, the focus measure calculation unit 73 determines whether all the images have been processed. If it is determined that not all the images have yet been processed, the process returns to step S7. Thereafter, the processes of steps S7 to S16 are repeatedly performed until it is determined in step S16 that all the images have been processed, whereby the focus measures at all the focal positions are obtained for the target pixel and the maximum focus measure is detected.
  • When it is determined in step S16 that all the images have been processed, the process proceeds to step S17.
  • In step S17, the in-focus position detection unit 74 detects the in-focus position. Specifically, for the target pixel, the in-focus position detection unit 74 reads from the memory 52 the focus measure at the focal position where the focus measure is maximal and the focus measures at the focal positions immediately before and after it. For example, when the focus measure is maximal at the (k+1)th focal position, the focus measures at the kth to (k+2)th focal positions are read from the memory 52. From the three read pairs of focal position and focus measure, the in-focus position detection unit 74 interpolates the data between the three points by modeling the relationship between the focal position and the focus measure with a Gaussian function. Based on the interpolated data, the in-focus position detection unit 74 then detects the focal position at which the focus measure reaches its peak as the in-focus position at the target pixel, and stores the detected in-focus position in the memory 52.
  • FIG. 11 shows an example of a graph when the relationship between the focus position and the focus measure at points 1 to N is modeled by a Gaussian function.
  • By modeling the relationship between the focal position and the focus measure with a Gaussian function, the in-focus position at the target pixel can be detected with a resolution finer than the interval between the focal positions at which the images were actually captured.
  • the focal position between k + 1 and k + 2 corresponding to the peak PK21 in FIG. 11 can be detected as the in-focus position.
  • the relationship between the focal position and the focus measure may be modeled by a Gaussian function using data of four or more points. Further, instead of a Gaussian function, data interpolation may be performed by a quadratic function interpolation calculation. Furthermore, when the interval between the focal positions at which an image is taken is sufficiently small, data interpolation may be performed by taking a moving average of the focus measure.
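The three-point Gaussian fit of step S17 has a closed form: since the logarithm of a Gaussian is a parabola in the focal position, three samples bracketing the maximum locate the peak analytically. The sketch below uses the standard log-parabola vertex formula and assumes uniformly spaced focal positions; the patent states only that a Gaussian model is fitted.

```python
import math

# Sub-interval peak location from three (focal position, focus measure)
# samples, the middle one being the largest: fit a Gaussian by taking logs
# (turning it into a parabola) and solve for the parabola's vertex.

def gaussian_peak(z, f):
    """Peak focal position; z, f are 3-element lists, f[1] the maximum."""
    l0, l1, l2 = (math.log(v) for v in f)
    offset = (l0 - l2) / (2.0 * (l0 - 2.0 * l1 + l2))  # vertex in step units
    return z[1] + offset * (z[2] - z[1])               # assumes uniform spacing

# Samples drawn from an exact Gaussian peaked at z = 1.3: the formula
# recovers the peak even though no sample sits on it.
z = [0.0, 1.0, 2.0]
f = [math.exp(-(zi - 1.3) ** 2 / 2.0) for zi in z]
print(round(gaussian_peak(z, f), 6))  # → 1.3
```

The same vertex formula is what a quadratic interpolation (mentioned above as an alternative) applies directly to the focus measures rather than to their logarithms.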
  • In step S18, the in-focus position detection unit 74 determines whether the in-focus positions of all the pixels have been detected. If it is determined that the in-focus positions of all the pixels have not yet been detected, the process returns to step S6, and steps S6 to S18 are repeatedly executed until it is determined in step S18 that the in-focus positions of all the pixels have been detected.
  • When it is determined in step S18 that the in-focus positions of all the pixels have been detected, the in-focus position detection unit 74 notifies the three-dimensional shape data generation unit 75 of the end of in-focus position detection, and the process proceeds to step S19.
  • In step S19, the three-dimensional shape data generation unit 75 generates three-dimensional shape data based on the in-focus position of each pixel stored in the memory 52, and outputs it to the subsequent stage.
  • the three-dimensional shape data is represented by the focus position of each pixel or the distance from a certain reference point to each pixel calculated based on the focus position.
  • An apparatus or processing unit downstream of the three-dimensional shape measurement unit 51 displays, for example, a three-dimensional image of the measurement object 2 on the display 13 based on the three-dimensional shape data. The three-dimensional shape measurement process then ends.
  • By adjusting the allowable value, it is possible to control which of the darker and brighter peaks is selected as the maximum of the focus measure in the processes of steps S14 and S15. If there are three or more focal positions at which the focus measure peaks, the peak to be selected as the maximum can likewise be set based on the luminance value of the target pixel at each focal position. Thus, when there are a plurality of focal positions at which the focus measure peaks, the in-focus position is selected from among them based on the luminance value of the target pixel at each focal position.
  • In the above description, the focus measure is obtained pixel by pixel; however, the focus measure may instead be obtained in units of regions composed of a plurality of pixels.
  • In the above description, the low-frequency and noise components are removed after a Fourier transform and an inverse Fourier transform is then performed; alternatively, a wavelet transform may be used, with the low-frequency and noise components removed before an inverse wavelet transform is performed.
  • Alternatively, the focus measure may be obtained by applying a Laplacian filter to the effective pixels within the attention range.
  • The program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at necessary timing, such as when a call is made.
  • In this specification, the term "system" means an overall apparatus composed of a plurality of apparatuses and means.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A three-dimensional shape measurement device, method, and program capable of improving the precision of three-dimensional shape measurement. A three-dimensional shape measurement unit (51) calculates focus measures for the respective pixels of images captured at different focal positions with respect to an object to be measured, and detects the in-focus position of each pixel to measure the three-dimensional shape of the object. An effective pixel extraction section (72), using at least one of luminance, hue, and saturation as a judgment value, extracts, from pixels within a predetermined range in the vicinity of a target pixel, effective pixels whose judgment values differ from that of the target pixel by no more than an allowable range. A focus measure calculation section (73) calculates the focus measure of the target pixel using the extracted effective pixels. An in-focus position detection section (74) detects the focal position at which the focus measure peaks as the in-focus position of the target pixel. This invention can be applied, for example, to a three-dimensional shape measurement device.

Description

Three-dimensional shape measurement apparatus and method, and program

The present invention relates to a three-dimensional shape measurement apparatus and method, and a program, and more particularly to a three-dimensional shape measurement apparatus, method, and program that improve the measurement accuracy of three-dimensional shape measurement.

Conventionally, the SFF (Shape From Focus) method is known as a method for measuring the three-dimensional shape of an object from two-dimensional images (see, for example, Non-Patent Document 1). The SFF method is effective when the surface of the measurement object has texture. In the SFF method, a plurality of images with different focus positions are captured while moving the imaging device relative to the measurement object, and a differential operation is performed on each pixel of the captured images to calculate a focus measure indicating the degree to which the pixel is in focus. The three-dimensional shape of the measurement object is then determined from the focus position at which each pixel's focus measure is maximized.
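The per-pixel search over a focus stack that the SFF method performs can be sketched as follows. This is a minimal illustration, not the patent's exact procedure: the 4-neighbour Laplacian focus measure and the `stack`/`z_positions` arrays are assumptions made for the example.

```python
import numpy as np

def focus_measure(img):
    """Absolute response of a 4-neighbour Laplacian (borders wrap via
    np.roll, kept simple for illustration)."""
    img = img.astype(float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def depth_from_focus(stack, z_positions):
    """stack: (N, H, W) grayscale images taken at N focus positions.
    Returns an (H, W) depth map: the focus position that maximizes the
    per-pixel focus measure."""
    measures = np.stack([focus_measure(img) for img in stack])
    best = np.argmax(measures, axis=0)      # index of sharpest slice per pixel
    return np.asarray(z_positions)[best]    # map slice index -> focus position
```

A uniform (defocused) slice yields a zero Laplacian response everywhere, so a textured slice dominates the argmax at every pixel it covers.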

It has also been proposed to measure, using the SFF method, the three-dimensional shape of a measurement object whose surface has no texture, by projecting a predetermined texture pattern onto the object (see, for example, Patent Document 1).

Patent Document 1: Japanese Patent No. 3321866
Non-Patent Document 1: S. K. Nayar, "Shape from Focus," Tech. Report CMU-RI-TR-89-27, Robotics Institute, Carnegie Mellon University, November 1989

In regions of the measurement object that are out of focus, bright portions tend to appear dilated in the captured image, and this tendency grows stronger as the defocus increases. Consequently, at boundaries where the luminance or color of the measurement object changes sharply, portions that are actually dark may be captured as bright. In the SFF method, this phenomenon can cause erroneous detection of the position at which each pixel is in focus (hereinafter referred to as the in-focus position), degrading measurement accuracy.

The present invention has been made in view of such a situation, and aims to improve the measurement accuracy of three-dimensional shape measurement.

A three-dimensional shape measurement apparatus according to one aspect of the present invention calculates, for each pixel of a plurality of images captured at different focus positions relative to a measurement object, a focus measure indicating the degree to which the pixel is in focus, and measures the three-dimensional shape of the measurement object by detecting in-focus positions based on the focus measures. The apparatus includes: effective pixel extraction means for extracting, using at least one of luminance, hue, and saturation as a judgment value, effective pixels whose judgment values differ from that of a target pixel by no more than an allowable amount, from the pixels within a predetermined range in the vicinity of the target pixel that are used to calculate the focus measure of the target pixel; focus measure calculation means for calculating the focus measure of the target pixel using the extracted effective pixels; and in-focus position detection means for detecting the focus position at which the focus measure peaks as the in-focus position.

A three-dimensional shape measurement method according to one aspect of the present invention is a method for a three-dimensional shape measurement apparatus that calculates, for each pixel of a plurality of images captured at different focus positions relative to a measurement object, a focus measure indicating the degree to which the pixel is in focus, and measures the three-dimensional shape of the measurement object by detecting in-focus positions based on the focus measures. The method includes: an effective pixel extraction step of extracting, using at least one of luminance, hue, and saturation as a judgment value, effective pixels whose judgment values differ from that of a target pixel by no more than an allowable amount, from the pixels within a predetermined range in the vicinity of the target pixel that are used to calculate the focus measure of the target pixel; a focus measure calculation step of calculating the focus measure of the target pixel using the extracted effective pixels; and an in-focus position detection step of detecting the focus position at which the focus measure peaks as the in-focus position.

A program according to one aspect of the present invention causes a computer to execute a process of calculating, for each pixel of a plurality of images captured at different focus positions relative to a measurement object, a focus measure indicating the degree to which the pixel is in focus, and measuring the three-dimensional shape of the measurement object by detecting in-focus positions based on the focus measures. The process includes: an effective pixel extraction step of extracting, using at least one of luminance, hue, and saturation as a judgment value, effective pixels whose judgment values differ from that of a target pixel by no more than an allowable amount, from the pixels within a predetermined range in the vicinity of the target pixel that are used to calculate the focus measure of the target pixel; a focus measure calculation step of calculating the focus measure of the target pixel using the extracted effective pixels; and an in-focus position detection step of detecting the focus position at which the focus measure peaks as the in-focus position.

In one aspect of the present invention, using at least one of luminance, hue, and saturation as a judgment value, effective pixels whose judgment values differ from that of a target pixel by no more than an allowable amount are extracted from the pixels within a predetermined range in the vicinity of the target pixel that are used to calculate its focus measure; the focus measure of the target pixel is calculated using the extracted effective pixels; and the focus position at which the focus measure peaks is detected as the in-focus position.

According to the present invention, the measurement accuracy of three-dimensional shape measurement is improved.

  • FIG. 1 is a schematic diagram showing an embodiment of a three-dimensional shape measurement system to which the present invention is applied.
  • FIG. 2 is a diagram showing an example of the configuration of functions implemented by a computer.
  • FIG. 3 is a flowchart for explaining the three-dimensional shape measurement process executed by the three-dimensional shape measurement system.
  • FIG. 4 is a flowchart for explaining the three-dimensional shape measurement process executed by the three-dimensional shape measurement system.
  • FIG. 5 is a diagram showing an example of focus positions.
  • FIG. 6 is a diagram showing an example of the focus position in an original image.
  • FIG. 7 is a diagram showing an example of the attention range.
  • FIG. 8 is a diagram showing an example of the luminance distribution within the attention range.
  • FIG. 9 is a diagram showing an example of the hue or saturation distribution within the attention range.
  • FIG. 10 is a graph showing an example of the distribution of the focus measure.
  • FIG. 11 is a graph for explaining interpolation processing of the focus measure.

Explanation of symbols

1 three-dimensional shape measurement system, 2 measurement object, 11 imaging device, 12 computer, 51 three-dimensional shape measurement unit, 61 imaging control unit, 62 measurement unit, 71 preprocessing unit, 72 effective pixel extraction unit, 73 focus measure calculation unit, 74 in-focus position detection unit, 75 three-dimensional shape data generation unit

Embodiments to which the present invention is applied will be described below with reference to the drawings.

FIG. 1 is a diagram showing an embodiment of a three-dimensional shape measurement system to which the present invention is applied. The three-dimensional shape measurement system 1 of FIG. 1 includes an imaging device 11, a computer 12, and a display 13; the imaging device 11 and the computer 12 are connected via a cable 14, and the computer 12 and the display 13 are connected via a cable 15. The imaging device 11, shown schematically in FIG. 1, includes at least an optical lens 21 and a photodetector 22 comprising an image sensor such as a CCD (Charge Coupled Device).

In the three-dimensional shape measurement system 1, as will be described later with reference to FIGS. 3 and 4, the imaging device 11 captures a plurality of images of the measurement object 2 placed on the stage 3 of a measuring microscope at different focus positions while its height relative to the object (its position in the Z-axis direction) is varied, and the computer 12 measures the three-dimensional shape of the measurement object 2 using the plurality of images.

FIG. 2 is a block diagram showing an example of the functional configuration realized when a processor of the computer 12 (for example, a CPU (Central Processing Unit) or a DSP (Digital Signal Processor)) executes a predetermined program. By executing the program, the processor realizes a three-dimensional shape measurement unit 51, which comprises an imaging control unit 61 and a measurement unit 62.

The imaging control unit 61 controls the imaging device 11 to capture images of the measurement object 2 while changing the focus position. The imaging control unit 61 acquires each captured image (hereinafter referred to as an original image) from the imaging device 11 and supplies it to the preprocessing unit 71 of the measurement unit 62. When imaging of the measurement object 2 at all focus positions is complete, the imaging control unit 61 notifies the preprocessing unit 71 that imaging has ended.

The measurement unit 62 measures the three-dimensional shape of the measurement object 2 using the plurality of original images captured at different focus positions. The measurement unit 62 comprises a preprocessing unit 71, an effective pixel extraction unit 72, a focus measure calculation unit 73, an in-focus position detection unit 74, and a three-dimensional shape data generation unit 75.

As will be described later with reference to FIGS. 3 and 4, the preprocessing unit 71 performs predetermined preprocessing on each original image and stores the resulting image (hereinafter also referred to as a preprocessed image) in the memory 52. When notified by the imaging control unit 61 that imaging has ended, the preprocessing unit 71 forwards the notification to the effective pixel extraction unit 72.

As will be described later with reference to FIGS. 3 and 4, the effective pixel extraction unit 72 extracts, from the pixels within a predetermined range in the vicinity of the target pixel whose focus measure is to be calculated (hereinafter referred to as the attention range), the effective pixels to be used in calculating the focus measure of the target pixel, using the allowable values of luminance, hue, or saturation set by the user, and extracts the luminance values of the effective pixels from the preprocessed image. The effective pixel extraction unit 72 supplies information indicating the luminance values of the extracted effective pixels to the focus measure calculation unit 73.

As will be described later with reference to FIGS. 3 and 4, the focus measure calculation unit 73 calculates the focus measure of the target pixel using the luminance values of the extracted effective pixels, and supplies the calculated focus measure to the in-focus position detection unit 74.

As will be described later with reference to FIGS. 3 and 4, the in-focus position detection unit 74 detects the in-focus position of the target pixel based on the calculated focus measures, using an allowable luminance value set by the user, and stores the detected in-focus position in the memory 52. When the in-focus positions of all pixels have been detected, the in-focus position detection unit 74 notifies the three-dimensional shape data generation unit 75 that in-focus position detection is complete.

The three-dimensional shape data generation unit 75 generates three-dimensional shape data based on the in-focus position of each pixel stored in the memory 52, and outputs it to the subsequent stage.

Next, the three-dimensional shape measurement process executed by the three-dimensional shape measurement system 1 will be described with reference to the flowcharts of FIGS. 3 and 4. This process is started, for example, when the user inputs a command to measure the three-dimensional shape of the measurement object 2 on the stage 3 via an input unit (not shown) of the computer 12.

In step S1, the effective pixel extraction unit 72 and the in-focus position detection unit 74 acquire various allowable values. Specifically, for example, the user sets, via an input unit (not shown) of the computer 12, which of luminance, hue, and saturation are to be used as judgment values for effective pixel extraction, and inputs the allowable values of luminance, hue, and saturation used for the judgment. The effective pixel extraction unit 72 acquires the input allowable values. Any one of luminance, hue, and saturation may be used as the judgment value, or two or more may be used in combination.

Also, for example, the user inputs, via an input unit (not shown) of the computer 12, the allowable luminance value used in judging the maximum of the focus measure, and the in-focus position detection unit 74 acquires that allowable value.

In step S2, the imaging device 11 captures an image of the measurement object under the control of the imaging control unit 61. The imaging device 11 supplies the resulting original image, represented in the three primary colors of RGB, to the imaging control unit 61 via the cable 14. The imaging control unit 61 supplies the acquired original image to the preprocessing unit 71.

In step S3, the preprocessing unit 71 preprocesses the image. Specifically, the preprocessing unit 71 applies a Fourier transform to the acquired original image, removes low-frequency components at or below a predetermined frequency and noise components within a predetermined frequency range, and then applies an inverse Fourier transform. This removes the low-frequency and noise components from the original image and yields an RGB image in which the mid-to-high frequency components have been extracted (hereinafter referred to as a preprocessed RGB image). The preprocessing unit 71 stores the generated preprocessed RGB image in the memory 52. The preprocessing unit 71 further converts the preprocessed RGB image into a YPbPr image and stores the converted image (hereinafter referred to as a preprocessed YPbPr image) in the memory 52.
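The band-pass preprocessing of step S3 (Fourier transform, removal of the low-frequency and noise bands, inverse Fourier transform) might be sketched as below for a single grayscale channel; the cutoff radii `low_cut` and `high_cut` are illustrative assumptions, not values from the patent.

```python
import numpy as np

def bandpass_prefilter(img, low_cut=3, high_cut=30):
    """Remove the low-frequency band (radius <= low_cut) and the
    high-frequency noise band (radius > high_cut) of a grayscale image
    in the Fourier domain, keeping the mid-to-high band used for the
    focus measure. Cutoffs are in frequency-bin radii."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2)   # radial frequency from center
    mask = (r > low_cut) & (r <= high_cut)   # annular band-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

Because the DC bin is always removed, a flat (textureless) image maps to zero, and any filtered image has zero mean.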

In step S4, the imaging control unit 61 determines whether images have been captured at all focus positions. If it is determined that images have not yet been captured at all focus positions, the process proceeds to step S5.

In step S5, the imaging control unit 61 changes the focus position. That is, the imaging control unit 61 moves the imaging device 11 in the Z-axis direction so that the focus position takes its next value.

Thereafter, the process returns to step S2, and steps S2 through S5 are repeated until it is determined in step S4 that images have been captured at all focus positions. In this way, as shown for example in FIG. 5, N original images are captured while the focus position is moved at predetermined intervals from the lowermost to the uppermost end of the measurement object 2, and a preprocessed RGB image and a preprocessed YPbPr image are generated from each original image and stored in the memory 52. In FIG. 5, the vertical axis represents the focus position; instead of focus position values, the axis is labeled with an index representing the order in which the focus positions are set.

FIG. 6 shows an example of an original image captured at the k-th focus position in FIG. 5. Each square cell in FIG. 6 represents a pixel. In the measurement object 2 of FIG. 6, region Rf is the region on which the imaging device 11 is focused, region Rin is the region inside the in-focus region, and region Rout is the region outside the in-focus region. That is, the images within regions Rin and Rout are out of focus, i.e., so-called defocused images.

Returning to FIG. 3, if in step S4 the imaging control unit 61 determines that images have been captured at all focus positions, it notifies the effective pixel extraction unit 72, via the preprocessing unit 71, that imaging has ended, and the process proceeds to step S6.

In step S6, the measurement unit 62 selects one of the pixels whose in-focus position has not yet been obtained and sets it as the target pixel. The target pixel is set, for example, in raster scan order.

In step S7, the measurement unit 62 selects one of the images for which the focus measure at the current target pixel has not yet been obtained and sets it as the target image. The target image is set, for example, in the order of capture (index order).

In step S8, the effective pixel extraction unit 72 selects one pixel to be processed from the pixels within the attention range. That is, the effective pixel extraction unit 72 selects, as the pixel to be processed, one of the pixels within the attention range that has not yet been judged as to whether it is an effective pixel.

FIG. 7 shows an example of the attention range when the pixel PA1 in FIG. 6 is set as the target pixel. In the example of FIG. 7, a 5 × 5 pixel range RA1 centered on the target pixel PA1 is set as the attention range.

In step S9, the effective pixel extraction unit 72 determines whether the difference between the judgment values of the target pixel and the pixel being processed is within the allowable range. Specifically, the effective pixel extraction unit 72 obtains the judgment values of the target pixel and the pixel being processed from the preprocessed RGB image and the preprocessed YPbPr image of the target image stored in the memory 52. The effective pixel extraction unit 72 takes the difference between the judgment values of the target pixel and the pixel being processed, and compares the absolute value of the difference with the allowable value set in step S1. If the absolute value of the difference is less than or equal to the allowable value, the effective pixel extraction unit 72 determines that the difference between the judgment values is within the allowable range, that is, that the pixel being processed is an effective pixel, and the process proceeds to step S10.

If two or more types of judgment value are set to be used, the difference between the judgment values of the target pixel and the pixel being processed is determined to be within the allowable range only when the absolute values of the differences for all types of judgment value are within their allowable values. For example, if all three judgment values of luminance, hue, and saturation are used, the difference is judged to be within the allowable range only when, for each of luminance, hue, and saturation, the absolute value of the difference between the target pixel and the pixel being processed is within the corresponding allowable value.
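The effective-pixel judgment of steps S8 through S11 — a neighbouring pixel is effective only when every selected judgment value lies within its allowable range of the target pixel's value — could be vectorized roughly as follows; the window/tolerance data layout is an assumption made for illustration.

```python
import numpy as np

def extract_effective_pixels(window, center, tolerances):
    """window: dict mapping a judgment-value name (e.g. 'luma', 'hue',
    'sat') to its values over the attention range, as a 2-D array.
    center: (row, col) of the target pixel within the window.
    tolerances: allowable absolute difference per judgment value.
    Returns a boolean mask of effective pixels: pixels whose every
    judgment value is within tolerance of the target pixel's value."""
    mask = None
    for name, values in window.items():
        ok = np.abs(values - values[center]) <= tolerances[name]
        mask = ok if mask is None else (mask & ok)  # all judgments must pass
    return mask
```

When two or more judgment values are supplied, the masks are ANDed together, matching the rule that every judgment must fall within its allowable value.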

In step S10, the effective pixel extraction unit 72 extracts the luminance value of the pixel being processed. That is, the effective pixel extraction unit 72 extracts the luminance value of the current pixel being processed from the preprocessed YPbPr image of the target image stored in the memory 52.

If, on the other hand, the absolute value of the difference for at least one type of judgment value exceeds its allowable value in step S9, the difference between the judgment values of the target pixel and the pixel being processed is determined to exceed the allowable range, step S10 is skipped, and the process proceeds to step S11.

In step S11, the effective pixel extraction unit 72 determines whether all pixels within the attention range have been judged. If not all pixels within the attention range have been judged, the process returns to step S8. Steps S8 through S11 are then repeated until it is determined in step S11 that all pixels within the attention range have been judged; in this way, effective pixels whose judgment values differ from those of the target pixel by amounts within the allowable range are extracted from the pixels within the attention range, and their luminance values are extracted.

If it is determined in step S11 that all pixels within the attention range have been judged, the process proceeds to step S12.

In step S12, the focus measure calculation unit 73 calculates the focus measure at the target pixel of the target image. Specifically, the effective pixel extraction unit 72 supplies information indicating the luminance values of the extracted effective pixels to the focus measure calculation unit 73, which calculates the focus measure at the target pixel of the target image using the following equation (1).

Focus measure = (Σ|luminance value of target pixel − luminance value of effective pixel|) ÷ number of effective pixels   (1)

That is, the focus measure is the mean of the absolute differences between the luminance value of the target pixel and the luminance values of the effective pixels within the attention range. The focus measure therefore grows larger as the difference in luminance between the target pixel and the effective pixels grows larger, that is, as the contrast between the target pixel and the effective pixels grows stronger.
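A direct transcription of equation (1) might look like this. Whether the target pixel itself is counted among the effective pixels is not stated in the text, so it is excluded here as an assumption (its own difference would be trivially zero):

```python
import numpy as np

def focus_measure_eq1(luma, center, effective_mask):
    """Equation (1): mean |luma(target) - luma(effective pixel)| over
    the effective pixels of the attention-range window.
    luma: 2-D luminance window; center: (row, col) of the target pixel;
    effective_mask: boolean mask of effective pixels in the window."""
    mask = effective_mask.copy()
    mask[center] = False                 # drop the target pixel itself
    diffs = np.abs(luma[mask] - luma[center])
    return diffs.mean() if diffs.size else 0.0
```

A flat window yields 0, while a high-contrast window yields a large value, matching the behaviour described for equation (1).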

The focus measure calculation unit 73 supplies the calculated focus measure to the in-focus position detection unit 74. The focus measure calculation unit 73 also stores the calculated focus measure, the luminance value of the target pixel of the current target image, and the focus position of the current target image in the memory 52, associated with one another.

For example, as shown in FIG. 8, suppose that within the attention range RA1 the measurement object 2 has a step between the range RA1a containing the target pixel PA1 (hatched toward the lower right) and the range RA1b (hatched toward the lower left); the luminance then differs greatly between ranges RA1a and RA1b. Consequently, if the focus measure were computed using all the pixels in the attention range RA1, it would be affected by both RA1a and RA1b, which have different heights, and a focus measure corresponding to a height between those of RA1a and RA1b would be obtained.

By contrast, extracting effective pixels from the attention range as described above and calculating the focus measure using only the effective pixels means that, in the example of FIG. 8, only the luminance of the pixels within range RA1a is used, so the focus measure at the target pixel PA1 can be calculated more accurately.

 FIG. 9 schematically shows the distribution of hue or saturation within the range RA1a. As FIG. 9 shows, hue or saturation is also likely to differ greatly between the range RA1a and the range RA1b. Therefore, even when hue or saturation is used as the determination value, the focus measure at the target pixel PA1 can be computed more accurately, using only the luminance of the pixels within the range RA1a or a range close to it. Note that the sets of effective pixels extracted using luminance, hue, and saturation do not necessarily coincide, so using all three as determination values makes it possible to compute the focus measure still more accurately.

 In step S13, the focus position detection unit 74 determines whether the focus measure is the maximum so far. Specifically, if the calculated focus measure is larger than the maximum focus measure stored so far in the memory 52, the focus position detection unit 74 determines that the focus measure is the maximum, and the process proceeds to step S14. Note that when the first focus measure is calculated for the current target pixel, no maximum focus measure has yet been stored in the memory 52, so the calculated focus measure is unconditionally determined to be the maximum.

 In step S14, the focus position detection unit 74 determines whether the luminance value of the target pixel is within the allowable range. Specifically, it takes the difference between the luminance value of the target pixel at the current focal position (the luminance value of the target pixel in the current target image) and the luminance value, stored in the memory 52, at the focal position where the focus measure was largest so far (the luminance value of the target pixel in the image whose focus measure was largest so far), and compares that difference with the allowable value set in step S1. If the difference is no greater than the allowable value, the focus position detection unit 74 determines that the luminance value of the target pixel is within the allowable range, and the process proceeds to step S15.

 In step S15, the focus position detection unit 74 stores the current focus measure as the maximum value. That is, it stores the focus measure at the target pixel of the current target image in the memory 52 as the maximum focus measure, in association with the luminance value of the target pixel of the current target image and the focal position of the current target image.

 On the other hand, if the difference calculated in step S14 exceeds the allowable value, the luminance value of the target pixel is determined to be outside the allowable range, step S15 is skipped, and the process proceeds to step S16. That is, the focus measure at the current focal position is not stored as the maximum value.

 FIG. 10 is a graph showing an example of the relationship between focal position and focus measure at a target pixel PA11 located on a boundary where the luminance or color of the measurement object changes sharply. As described above, at such a boundary, the bright region may appear to expand as the image goes out of focus, so that an inherently dark region is captured as bright. Consequently, as shown in FIG. 10, after the peak PK11 of the focus measure appears when the point on the measurement object corresponding to the target pixel PA11 is in focus and the pixel is captured at its true, dark luminance, a second peak PK12 of the focus measure may appear as the focus drifts away from that point and the luminance of the target pixel PA11 becomes brighter.

 In contrast, when the difference between the luminance of the target pixel PA11 at the focal position of the peak PK11 and its luminance at the focal position of the peak PK12 is large, the determination in step S14 excludes the focal position of the peak PK12, and the focal positions near it, from the search for the maximum of the focus measure. This prevents a peak of the focus measure detected at an out-of-focus focal position from being mistaken for the in-focus position.
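The running-maximum update of steps S13 to S15 can be sketched as follows; the tuple representation and function name are assumptions for illustration.

```python
def update_best(best, measure, luminance, z, tol):
    """Keep the running maximum of the focus measure, but reject a new
    candidate whose luminance deviates from the stored maximum's
    luminance by more than the allowable value tol (steps S13-S15)."""
    if best is None:  # first measure: accept unconditionally
        return (measure, luminance, z)
    best_measure, best_luminance, _ = best
    if measure > best_measure and abs(luminance - best_luminance) <= tol:
        return (measure, luminance, z)  # new maximum within tolerance
    return best  # keep the previous maximum
```

Fed the two-peak profile of FIG. 10, the later, brighter peak (PK12) is rejected by the luminance tolerance even though its measure is larger, so the true peak (PK11) is retained.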

 In step S16, the focus measure calculation unit 73 determines whether all the images have been processed. If it is determined that not all the images have been processed yet, the process returns to step S7. Thereafter, steps S7 to S16 are repeated until it is determined in step S16 that all the images have been processed, so that the focus measure at every focal position is obtained for the target pixel and the maximum focus measure is detected.

 On the other hand, if it is determined in step S16 that all the images have been processed, the process proceeds to step S17.

 In step S17, the focus position detection unit 74 detects the in-focus position. Specifically, for the target pixel, it reads from the memory 52 the focus measures at the focal position where the focus measure is maximum and at the focal positions immediately before and after it. For example, if the focus measure is maximum at the (k+1)-th focal position, the focus measures at the k-th to (k+2)-th focal positions are read from the memory 52. From these three focal positions and focus measures, the focus position detection unit 74 models the relationship between focal position and focus measure with a Gaussian function, thereby interpolating the data between the three points. Based on the interpolated data, it then detects the focal position where the focus measure peaks as the in-focus position at the target pixel, and stores the detected in-focus position in the memory 52.

 FIG. 11 shows an example of a graph in which the relationship between focal position and focus measure at points 1 to N is modeled with a Gaussian function. Modeling this relationship with a Gaussian function makes it possible to detect the in-focus position at the target pixel with a resolution finer than the interval between the focal positions at which the images were actually captured; for example, the focal position between the (k+1)-th and (k+2)-th positions corresponding to the peak PK21 in FIG. 11 can be detected as the in-focus position.

 Note that the relationship between focal position and focus measure may instead be modeled with a Gaussian function using four or more data points. Alternatively, the data may be interpolated by quadratic interpolation instead of a Gaussian function. Furthermore, when the interval between the focal positions at which the images are captured is sufficiently small, the data may be interpolated by taking a moving average of the focus measure.
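One common way to realize the three-point Gaussian fit is to note that a Gaussian is a parabola in log space, so the vertex of the parabola through the three log-measures gives the sub-step peak position. The function name and the handling of degenerate data are assumptions in this sketch.

```python
import math

def refine_focus_position(z, m):
    """Sub-step peak of the focus measure from three consecutive,
    evenly spaced focal positions z = (z0, z1, z2) with focus
    measures m = (m0, m1, m2), where m1 is the discrete maximum.
    Fits a Gaussian by taking logs and locating the parabola vertex."""
    y0, y1, y2 = (math.log(v) for v in m)
    step = z[1] - z[0]
    denom = y0 - 2.0 * y1 + y2
    if denom >= 0.0:  # degenerate data: no interior maximum, keep z1
        return z[1]
    return z[1] + 0.5 * step * (y0 - y2) / denom
```

When the underlying measure really is Gaussian in the focal position, this recovers the peak exactly, even when it falls between two sampled focal positions, as with the peak PK21 in FIG. 11.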

 In step S18, the focus position detection unit 74 determines whether the in-focus positions of all the pixels have been detected. If not all of them have been detected yet, the process returns to step S6, and steps S6 to S18 are repeated until it is determined in step S18 that the in-focus positions of all the pixels have been detected.

 On the other hand, if the focus position detection unit 74 determines in step S18 that the in-focus positions of all the pixels have been detected, it notifies the three-dimensional shape data generation unit 75 that the detection of in-focus positions has finished, and the process proceeds to step S19.

 In step S19, the three-dimensional shape data generation unit 75 generates three-dimensional shape data based on the in-focus position of each pixel stored in the memory 52, and outputs it to the subsequent stage. For example, the three-dimensional shape data is represented by the in-focus position of each pixel, or by the distance from a certain reference point to each pixel calculated from the in-focus position. A device or processing unit downstream of the three-dimensional shape measurement unit 51 then, for example, displays a three-dimensional image of the measurement object 2 on the display 13 based on the three-dimensional shape data. The three-dimensional shape measurement process then ends.

 In this way, computing the focus measure more accurately, and preventing a false peak of the focus measure from being detected as the in-focus position, improves the measurement accuracy of the three-dimensional shape.

 Note that in the three-dimensional shape measurement unit 51, adjusting the allowable value makes it possible to set which of the darker and brighter peaks is selected as the maximum focus measure in the processing of steps S14 and S15, and, when there are three or more focal positions at which the focus measure peaks, to set which peak is selected as the maximum focus measure based on the luminance value of the target pixel at each of those focal positions. As a result, when a plurality of focal positions exist at which the focus measure peaks, the in-focus position is selected from among them based on the luminance value of the target pixel at each focal position.

 In the above description, the focus measure is obtained for each pixel, but it may instead be obtained for each region consisting of a plurality of pixels.

 Furthermore, the above description gave an example in which, in the preprocessing of the image, a Fourier transform is performed, low-frequency components and noise components are removed, and an inverse Fourier transform is then performed. Alternatively, for example, a wavelet transform may be performed, the low-frequency components and noise components removed, and an inverse wavelet transform performed.
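The Fourier-domain preprocessing can be sketched as a radial band-pass filter. The cutoff radii, the radial shape of the band, and the function name are assumptions; the patent specifies only that low-frequency components in a predetermined range and noise components are removed before the inverse transform.

```python
import numpy as np

def bandpass_preprocess(img: np.ndarray, low_cut: float, high_cut: float) -> np.ndarray:
    """Fourier-transform the image, zero the low-frequency band
    (illumination/shading) and the highest-frequency band (noise),
    and inverse-transform, keeping the middle/high band used for the
    focus measure.  low_cut and high_cut are radii in normalized
    frequency (0 = DC, 0.5 = Nyquist)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))[None, :]
    radius = np.hypot(fy, fx)
    spectrum[(radius < low_cut) | (radius > high_cut)] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```

A uniform (texture-free) image is suppressed entirely, while mid-frequency texture, which carries the focus information, passes through unchanged.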

 Alternatively, as in the conventional SFF (shape from focus) method, the focus measure may be obtained using the effective pixels within the target range and a Laplacian filter.

 Furthermore, the above description gave an example in which the series of processes is executed by software using the computer 12, but the processes may also be executed by hardware, or by a computer built into dedicated hardware.

 The program executed by the computer may be a program whose processes are performed in time series in the order described in this specification, or a program whose processes are performed in parallel or at required timings, such as when a call is made.

 In this specification, the term "system" means an overall apparatus composed of a plurality of devices, means, and the like.

 The embodiments of the present invention are not limited to those described above, and various modifications are possible without departing from the gist of the present invention.

Claims (5)

1. A three-dimensional shape measurement apparatus that measures the three-dimensional shape of a measurement object by calculating, for each pixel of a plurality of images of the measurement object captured at different focal positions, a focus measure indicating the degree to which the pixel is in focus, and detecting an in-focus position based on the focus measure, the apparatus comprising:
 effective pixel extraction means for using at least one of luminance, hue, and saturation as a determination value, and extracting, from the pixels within a predetermined range in the vicinity of a target pixel that are used to calculate the focus measure at the target pixel, effective pixels whose determination values differ from the determination value of the target pixel by an amount within an allowable range;
 focus measure calculation means for calculating the focus measure at the target pixel using the extracted effective pixels; and
 in-focus position detection means for detecting, as the in-focus position, the focal position at which the focus measure peaks.

2. The three-dimensional shape measurement apparatus according to claim 1, wherein, when there are a plurality of focal positions at which the focus measure peaks, the in-focus position detection means selects the in-focus position from among the plurality of focal positions at which the focus measure peaks, based on the luminance value of the target pixel at each focal position.

3. The three-dimensional shape measurement apparatus according to claim 1, further comprising:
 preprocessing means for generating a preprocessed image in which the middle and high frequency components of the image are extracted by Fourier-transforming the image, removing low-frequency components in a predetermined frequency range and noise components, and then performing an inverse Fourier transform, wherein
 the effective pixel extraction means extracts the effective pixels from the pixels within the predetermined range of the preprocessed image, and
 the focus measure calculation means calculates the focus measure at the target pixel based on differences in luminance value between the target pixel and the effective pixels in the preprocessed image.

4. A three-dimensional shape measurement method for a three-dimensional shape measurement apparatus that measures the three-dimensional shape of a measurement object by calculating, for each pixel of a plurality of images of the measurement object captured at different focal positions, a focus measure indicating the degree to which the pixel is in focus, and detecting an in-focus position based on the focus measure, the method comprising:
 an effective pixel extraction step of using at least one of luminance, hue, and saturation as a determination value, and extracting, from the pixels within a predetermined range in the vicinity of a target pixel that are used to calculate the focus measure at the target pixel, effective pixels whose determination values differ from the determination value of the target pixel by an amount within an allowable range;
 a focus measure calculation step of calculating the focus measure at the target pixel using the extracted effective pixels; and
 an in-focus position detection step of detecting, as the in-focus position, the focal position at which the focus measure peaks.

5. A program for causing a computer to execute a process of measuring the three-dimensional shape of a measurement object by calculating, for each pixel of a plurality of images of the measurement object captured at different focal positions, a focus measure indicating the degree to which the pixel is in focus, and detecting an in-focus position based on the focus measure, the process comprising:
 an effective pixel extraction step of using at least one of luminance, hue, and saturation as a determination value, and extracting, from the pixels within a predetermined range in the vicinity of a target pixel that are used to calculate the focus measure at the target pixel, effective pixels whose determination values differ from the determination value of the target pixel by an amount within an allowable range;
 a focus measure calculation step of calculating the focus measure at the target pixel using the extracted effective pixels; and
 an in-focus position detection step of detecting, as the in-focus position, the focal position at which the focus measure peaks.
PCT/JP2009/051346 2008-01-28 2009-01-28 Three-dimensional shape measurement device, method, and program Ceased WO2009096422A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009551537A JP5218429B2 (en) 2008-01-28 2009-01-28 Three-dimensional shape measuring apparatus and method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008016349 2008-01-28
JP2008-016349 2008-01-28

Publications (1)

Publication Number Publication Date
WO2009096422A1 true WO2009096422A1 (en) 2009-08-06

Family

ID=40912774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/051346 Ceased WO2009096422A1 (en) 2008-01-28 2009-01-28 Three-dimensional shape measurement device, method, and program

Country Status (2)

Country Link
JP (1) JP5218429B2 (en)
WO (1) WO2009096422A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011046115A1 (en) * 2009-10-13 2011-04-21 日立化成工業株式会社 Optical waveguide substrate and method for manufacturing same
CN102331622A (en) * 2010-07-12 2012-01-25 索尼公司 Information handling system, microscope control device and method of operating thereof
WO2012057283A1 (en) 2010-10-27 2012-05-03 株式会社ニコン Shape measuring device, shape measuring method, structure manufacturing method, and program
JP2013257187A (en) * 2012-06-11 2013-12-26 Ricoh Co Ltd Movement information detection device and multicolor image forming device
US10952827B2 (en) 2014-08-15 2021-03-23 Align Technology, Inc. Calibration of an intraoral scanner
TWI828386B (en) * 2021-10-28 2024-01-01 日商尼康股份有限公司 Shape acquisition method, object management method and operation support method, and shape acquisition system and operation support system
US12160663B2 (en) 2022-02-18 2024-12-03 Tokyo Seimitsu Co., Ltd. Image processing device, image processing method, and three-dimensional shape measuring device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7071088B2 (en) * 2017-10-24 2022-05-18 キヤノン株式会社 Distance detector, image pickup device, distance detection method, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0926312A (en) * 1996-08-02 1997-01-28 Hitachi Ltd Three-dimensional shape detection method and apparatus
JP2001066112A (en) * 1999-06-25 2001-03-16 Mitsutoyo Corp Image measuring method and device
JP2001074422A (en) * 1999-08-31 2001-03-23 Hitachi Ltd Three-dimensional shape detection device, inspection device with solder, and methods thereof
JP2006258444A (en) * 2005-03-15 2006-09-28 Opcell Co Ltd Surface shape of object measuring device

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103869411A (en) * 2009-10-13 2014-06-18 日立化成工业株式会社 Optical waveguide substrate, photoelectric mixed-loading substrate and methods for manufacturing same and recess forming apparatus for position alignment
JP2011085647A (en) * 2009-10-13 2011-04-28 Hitachi Chem Co Ltd Optical waveguide substrate and method for manufacturing the same
US8818147B2 (en) 2009-10-13 2014-08-26 Hitachi Chemical Company, Ltd. Optical waveguide substrate and method for manufacturing same
WO2011046115A1 (en) * 2009-10-13 2011-04-21 日立化成工業株式会社 Optical waveguide substrate and method for manufacturing same
JP2012037861A (en) * 2010-07-12 2012-02-23 Sony Corp Microscope control device, image display device, image control server, method for generating focus position information, image display method, image control method, and microscopic image control system
CN102331622A (en) * 2010-07-12 2012-01-25 索尼公司 Information handling system, microscope control device and method of operating thereof
WO2012057283A1 (en) 2010-10-27 2012-05-03 株式会社ニコン Shape measuring device, shape measuring method, structure manufacturing method, and program
US9239219B2 (en) 2010-10-27 2016-01-19 Nikon Corporation Form measuring apparatus, method for measuring form, method for manufacturing structure and non-transitory computer readable medium storing a program for setting measurement area
JP2013257187A (en) * 2012-06-11 2013-12-26 Ricoh Co Ltd Movement information detection device and multicolor image forming device
US10952827B2 (en) 2014-08-15 2021-03-23 Align Technology, Inc. Calibration of an intraoral scanner
TWI828386B (en) * 2021-10-28 2024-01-01 日商尼康股份有限公司 Shape acquisition method, object management method and operation support method, and shape acquisition system and operation support system
TWI856890B (en) * 2021-10-28 2024-09-21 日商尼康股份有限公司 Shape acquisition method, object management method and operation support method, and shape acquisition system
US12160663B2 (en) 2022-02-18 2024-12-03 Tokyo Seimitsu Co., Ltd. Image processing device, image processing method, and three-dimensional shape measuring device

Also Published As

Publication number Publication date
JP5218429B2 (en) 2013-06-26
JPWO2009096422A1 (en) 2011-05-26

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09706627

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2009551537

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09706627

Country of ref document: EP

Kind code of ref document: A1