WO2005095886A1 - Three-dimensional shape detection device, three-dimensional shape detection method, and three-dimensional shape detection program - Google Patents
Three-dimensional shape detection device, three-dimensional shape detection method, and three-dimensional shape detection program
- Publication number
- WO2005095886A1 (PCT/JP2005/005859)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- image
- code
- luminance
- boundary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0007—Image acquisition
Definitions
- Three-dimensional shape detection device, three-dimensional shape detection method, and three-dimensional shape detection program
- The present invention relates to a three-dimensional shape detection device, a three-dimensional shape detection method, and a three-dimensional shape detection program capable of detecting the boundaries of pattern light at high speed with sub-pixel accuracy without increasing the number of captured pattern light images.
- Conventionally, a slit light projection method is known in which a single slit light beam is sequentially projected onto a target object, an image is captured by an imaging unit for each projection direction θ of the light, the direction φ in which the target object is seen from the imaging unit is obtained from the locus of the slit light in the image, and the position of the target object is determined from θ and φ; a three-dimensional shape detection device detects the three-dimensional shape of the target object using this slit light projection method.
- Reference 2 "Three-dimensional image measurement", Kosuke Sato, and one other, Shokodo Co., Ltd., plO 9-117 (hereinafter referred to as Reference 2) can solve such a problem.
- Techniques for detecting dimensional shapes have been disclosed. Specifically, according to the technology disclosed in Reference 2, a positive (positive) Two types of pattern light, a positive pattern light and a negative (negative) pattern light whose brightness is inverted, are projected onto the target object, and the brightness distribution of the captured image indicates that the pattern light is within the boundaries of the pattern light.
- the interpolation and detecting the boundary of the Noturn light with sub-pixel accuracy, the errors included in the boundary coordinates of the pattern light are reduced, and the three-dimensional shape of the target object is detected with high accuracy.
- However, with the technique disclosed in Reference 2, when n types of mask patterns are used, 2n positive/negative pattern light projection images and one non-projection image need to be captured, so the number of pattern light projections and the number of pattern light imagings increase, and there is a problem that measurement takes a long time.
- The present invention has been made to solve the above-described problem, and its purpose is to provide a three-dimensional shape detection device, a three-dimensional shape detection method, and a three-dimensional shape detection program that can detect the boundaries of pattern light at high speed with sub-pixel accuracy without increasing the number of captured pattern light images.
- One aspect of the present invention provides a three-dimensional shape detection device comprising: projecting means for projecting, in time series, each of a plurality of types of pattern light in which light and dark are alternately arranged onto a subject; imaging means for imaging the subject in a state in which each pattern light is projected; luminance image generating means for generating a plurality of luminance images in which the luminance of each pixel is calculated from each captured image; code image generating means for generating a code image in which a predetermined code is allocated to each pixel based on the result of performing threshold processing with a predetermined threshold value on the plurality of luminance images generated by the luminance image generating means; and three-dimensional shape calculating means for calculating the three-dimensional shape of the subject using the code image generated by the code image generating means.
- The three-dimensional shape detection device further comprises: first pixel detecting means for detecting, at a detection position in a direction crossing the pattern light in the code image, a first pixel that is adjacent to the pixel having the code of interest and has a code different from the code of interest; luminance image extracting means for extracting, from among the plurality of luminance images, a luminance image having a boundary between light and dark at the position corresponding to the first pixel detected by the first pixel detecting means; pixel area specifying means for specifying a pixel area composed of pixels in a predetermined area adjacent to the first pixel; approximate expression calculating means for calculating an approximate expression representing the change in luminance at the position of the pixel area specified by the pixel area specifying means in the extracted luminance image; and boundary coordinate detecting means for detecting, from the calculation result, the boundary coordinates of the code as the position at which the approximate expression calculated by the approximate expression calculating means takes a predetermined threshold value of luminance.
- The three-dimensional shape calculating means calculates the three-dimensional shape of the subject using the code image, based on the boundary coordinates detected by the boundary coordinate detecting means.
- According to this configuration, at the detection position in the direction crossing the pattern in the code image, the first pixel, which is adjacent to the pixel having the code of interest and has a code different from the code of interest, is detected on a pixel-by-pixel basis by the first pixel detecting means.
- Then, a luminance image having a boundary between light and dark at the position corresponding to the first pixel is extracted from the plurality of luminance images by the luminance image extracting means.
- Next, an approximate expression representing the change in luminance in the extracted luminance image is calculated by the approximate expression calculating means.
- By calculating, with the boundary coordinate detecting means, the position at which this approximate expression takes the predetermined threshold value of luminance, the boundary position at the detection position can be obtained with sub-pixel accuracy. That is, the boundary coordinates, consisting of the coordinates of the detection position and the coordinates of the position at which the approximate expression takes the predetermined luminance threshold, are calculated, and the three-dimensional shape calculating means calculates the three-dimensional shape of the subject using these boundary coordinates.
- That is, the boundary coordinates between light and dark of the pattern light at a predetermined detection position are detected as the position at which an approximate expression, representing the change in luminance in a predetermined pixel region including those coordinates, takes a predetermined threshold value of luminance. Therefore, the boundary between light and dark at the detection position can be determined more accurately than the boundary between light and dark in the code image, and if the three-dimensional shape is calculated using this result, the three-dimensional shape of the subject can be calculated with high accuracy.
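- As a concrete illustration of this principle, the following sketch fits a straight line to the luminance values in a small pixel region around a detected boundary pixel and solves for the position where the fitted line takes the threshold value; the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def subpixel_boundary(luminance_column, boundary_y, threshold, half_window=2):
    """Estimate the light/dark boundary with sub-pixel accuracy.

    luminance_column: 1-D luminance values along the direction crossing the
    pattern stripes; boundary_y: integer pixel where the code changes;
    threshold: luminance threshold separating light from dark.
    """
    lo = max(boundary_y - half_window, 0)
    hi = min(boundary_y + half_window + 1, len(luminance_column))
    ys = np.arange(lo, hi)
    vals = luminance_column[lo:hi]
    # Approximate expression: model the luminance change as a line v = a*y + b.
    a, b = np.polyfit(ys, vals, 1)
    if a == 0:
        return float(boundary_y)  # flat luminance: fall back to pixel accuracy
    # Position at which the approximate expression takes the threshold value.
    return (threshold - b) / a

# Example: luminance falls from bright (about 200) to dark (about 50) near y = 4.
col = np.array([200, 200, 198, 180, 90, 55, 50, 50], dtype=float)
print(subpixel_boundary(col, boundary_y=4, threshold=125.0))  # about 3.75
```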
- Another aspect of the present invention provides a three-dimensional shape detection method comprising: a projecting step of projecting, in time series, a plurality of types of pattern light in which light and dark are alternately arranged onto a subject; an imaging step of imaging the subject in a state in which each pattern light projected in the projecting step is projected; a luminance image generating step of generating a plurality of luminance images in which the luminance of each pixel is calculated from each captured image captured in the imaging step; a code image generating step of generating a code image in which a predetermined code is assigned to each pixel based on the result of performing threshold processing with a predetermined threshold on the plurality of luminance images generated in the luminance image generating step; and a three-dimensional shape calculating step of calculating the three-dimensional shape of the subject using the generated code image.
- The three-dimensional shape detection method further includes, after the code image generating step: a first pixel detecting step of detecting, at the detection position in the direction crossing the pattern light in the code image, a first pixel that is adjacent to the pixel having the code of interest and has a code different from the code of interest; and a luminance image extracting step, a pixel area specifying step, an approximate expression calculating step, and a boundary coordinate detecting step corresponding to the means of the device described above.
- The three-dimensional shape calculating step calculates the three-dimensional shape of the subject using the code image, based on the boundary coordinates detected in the boundary coordinate detecting step.
- According to this method, at the detection position in the direction crossing the pattern in the code image, the first pixel, which is adjacent to the pixel having the code of interest and has a code different from the code of interest, is detected on a pixel-by-pixel basis in the first pixel detecting step.
- Then, a luminance image having a boundary between light and dark at the position corresponding to the first pixel is extracted from the plurality of luminance images in the luminance image extracting step, and an approximate expression representing the change in luminance in the extracted luminance image is calculated in the approximate expression calculating step.
- By calculating the position at which this approximate expression takes the predetermined threshold value of luminance, the boundary position at the detection position can be obtained with sub-pixel accuracy. That is, the boundary coordinates, consisting of the coordinates of the detection position and the coordinates of the position at which the approximate expression takes the predetermined luminance threshold, are calculated, and the three-dimensional shape of the subject is calculated in the three-dimensional shape calculating step using these boundary coordinates.
- That is, the boundary coordinates between light and dark of the pattern light at a predetermined detection position are detected as the position at which an approximate expression, representing the change in luminance in a predetermined pixel region including those coordinates, takes a predetermined threshold value of luminance. Therefore, the boundary between light and dark at the detection position can be determined more accurately than the boundary between light and dark in the code image, and if the three-dimensional shape is calculated using this result, the three-dimensional shape of the subject can be calculated with high accuracy.
- Still another aspect of the present invention provides a three-dimensional shape detection program comprising: a projecting step of projecting, in time series, a plurality of types of pattern light in which light and dark are alternately arranged onto a subject; an imaging step of imaging the subject in a state in which each pattern light is projected; a luminance image generating step of generating a plurality of luminance images in which the luminance of each pixel is calculated from each captured image captured in the imaging step; a code image generating step of generating a code image in which a predetermined code is assigned to each pixel based on the result of performing threshold processing with a predetermined threshold on the plurality of luminance images generated in the luminance image generating step; and a three-dimensional shape calculating step of calculating the three-dimensional shape of the subject using the generated code image.
- The three-dimensional shape detection program further comprises: a first pixel detecting step of detecting, at the detection position in the direction crossing the pattern light in the code image, a first pixel that is adjacent to the pixel having the code of interest and has a code different from the code of interest; and a luminance image extracting step of extracting, from the plurality of luminance images, a luminance image having a boundary between light and dark at the position corresponding to the first pixel detected in the first pixel detecting step.
- The three-dimensional shape calculating step calculates the three-dimensional shape of the subject using the code image, based on the boundary coordinates detected in the boundary coordinate detecting step.
- According to this program, at the detection position in the direction crossing the pattern in the code image, the first pixel, which is adjacent to the pixel having the code of interest and has a code different from the code of interest, is detected on a pixel-by-pixel basis in the first pixel detecting step.
- Then, a luminance image having a boundary between light and dark at the position corresponding to the first pixel is extracted from the plurality of luminance images in the luminance image extracting step.
- Next, an approximate expression representing the change in luminance in the extracted luminance image is calculated in the approximate expression calculating step.
- By calculating the position at which this approximate expression takes the predetermined threshold value of luminance, the boundary position at the detection position can be obtained with sub-pixel accuracy. That is, the boundary coordinates, consisting of the coordinates of the detection position and the coordinates of the position at which the approximate expression takes the predetermined luminance threshold, are calculated, and the three-dimensional shape of the subject is calculated in the three-dimensional shape calculating step using these boundary coordinates.
- That is, the boundary coordinates between light and dark of the pattern light at a predetermined detection position are detected as the position at which an approximate expression, representing the change in luminance in a predetermined pixel region including those coordinates, takes a predetermined threshold value of luminance. Therefore, the boundary between light and dark at the detection position can be determined more accurately than the boundary between light and dark in the code image, and if the three-dimensional shape is calculated using this result, the three-dimensional shape of the subject can be calculated with high accuracy.
- FIG. 1 is an external perspective view of an image input / output device.
- FIG. 2 is a diagram showing an internal configuration of an imaging head.
- FIG. 3 (a) is an enlarged view of an image projection unit
- FIG. 3 (b) is a plan view of a light source lens
- FIG. 3 (c) is a front view of a projection LCD.
- FIGS. 4(a) to 4(c) are diagrams for explaining the arrangement of the LED array.
- FIG. 5 is an electrical block diagram of the image input / output device.
- FIG. 6 is a flowchart of a main process.
- FIG. 7 is a flowchart of a digital camera process.
- FIG. 8 is a flowchart of a webcam process.
- FIG. 9 is a flowchart of a projection process.
- FIG. 10 is a flowchart of stereoscopic image processing.
- FIG. 11(a) is a diagram for explaining the principle of the spatial code method.
- FIG. 11(b) is a diagram showing a mask pattern (gray code) different from that of FIG. 11(a).
- FIG. 12(a) is a flowchart of the three-dimensional shape detection process.
- FIG. 12(b) is a flowchart of the imaging process.
- FIG. 12(c) is a flowchart of the three-dimensional measurement process.
- Fig. 13 is a diagram for describing an outline of a code boundary coordinate detection process.
- FIG. 14 is a flowchart of a code boundary coordinate detection process.
- FIG. 15 is a flowchart of a process for obtaining code boundary coordinates with sub-pixel accuracy.
- FIG. 16 is a flowchart of a process for calculating a CCDY value of a boundary for a luminance image having a mask pattern number of PatID [i].
- FIGS. 17(a) to 17(c) are diagrams for explaining a lens aberration correction process.
- FIG. 18 (a) and FIG. 18 (b) are diagrams for explaining a method of calculating three-dimensional coordinates in a three-dimensional space from coordinates in a CCD space.
- FIG. 19 is a flowchart of flattened image processing.
- FIGS. 20 (a) to 20 (c) are diagrams for explaining the document attitude calculation processing.
- FIG. 21 is a flowchart of a plane conversion process.
- FIG. 22 (a) is a diagram for explaining the outline of a curvature calculation process
- FIG. 22 (b) is a diagram showing a flattened image flattened by a plane conversion process.
- FIG. 23 (a) is a side view showing another example of the light source lens
- FIG. 23 (b) is a plan view showing the light source lens of FIG. 23 (a).
- FIG. 24 (a) is a perspective view showing a state in which a light source lens is fixed
- FIG. 24 (b) is a partial cross-sectional view thereof.
- FIG. 1 is an external perspective view of the image input/output device 1. The projection device and the three-dimensional shape detection device according to the embodiments of the present invention are included in the image input/output device 1.
- The image input/output device 1 provides various modes:
- a digital camera mode functioning as a digital camera,
- a webcam mode functioning as a web camera,
- a stereoscopic image mode for detecting a three-dimensional shape to obtain a stereoscopic image, and
- a flattened image mode for acquiring a flattened image obtained by flattening a curved original.
- In particular, in the stereoscopic image mode or the flattened image mode, in order to detect the three-dimensional shape of the original P as the subject, striped pattern light in which light and dark are alternately arranged is projected from an image projection unit 13, described later, as shown in FIG. 1.
- the image input / output device 1 has an imaging head 2 formed in a substantially box shape, a pipe-shaped arm member 3 having one end connected to the imaging head 2, and an end connected to the other end of the arm member 3. And a base 4 formed in a substantially L-shape in plan view.
- the imaging head 2 is a case in which an image projection unit 13 and an image imaging unit 14 described below are included.
- On the front of the imaging head 2, a cylindrical lens barrel 5 is disposed at the center,
- a finder 6 is disposed diagonally above the lens barrel 5
- and a flash 7 is arranged on the opposite side of the finder 6. Between the finder 6 and the flash 7 on the front of the imaging head 2, a part of the lens of an imaging optical system 21, which is a part of the image imaging unit 14 described later, is exposed to the outside. An image of the subject is input through this exposed portion of the imaging optical system 21.
- The lens barrel 5 is a cover that protrudes from the front of the imaging head 2 and holds inside it the projection optical system 20, which is a part of the image projection unit 13. Since the projection optical system 20 is held by the lens barrel 5, the entire projection optical system 20 can be moved for focus adjustment, and the projection optical system 20 is protected from damage.
- The finder 6 is configured by optical lenses arranged through the imaging head 2 from its rear face to its front face. When the user looks into the finder 6 from the back of the imaging head 2, a range that substantially matches the range imaged by the imaging optical system 21 onto the CCD 22 can be seen.
- The flash 7 is a light source for supplementing the required amount of light, for example in the digital camera mode, and is configured by a discharge tube filled with xenon gas. It can therefore be used repeatedly by discharging a capacitor (not shown) built into the imaging head 2.
- Further, on the imaging head 2, a release button 8 is arranged on the near side, a mode switching switch 9 is arranged behind the release button 8, and a monitor LCD 10 is arranged on the opposite side of the mode switching switch 9.
- the release button 8 is constituted by a two-stage push-button switch that can be set in two states, a "half-pressed state” and a "fully-pressed state”.
- the state of the release button 8 is managed by a processor 15 described later.
- When the release button 8 is half-pressed, the well-known autofocus (AF) and automatic exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted; when the release button 8 is fully pressed, imaging is performed.
- the mode switching switch 9 is a switch for setting various modes such as a digital camera mode, a webcam mode, a stereoscopic image mode, a flattened image mode, and an off mode.
- The state of the mode switching switch 9 is managed by the processor 15; when the processor 15 detects a change in the state of the mode switching switch 9, the processing of the corresponding mode is executed.
- The monitor LCD 10 is configured by a liquid crystal display, and receives image signals from the processor 15 to display images to the user.
- the monitor LCD 10 displays a captured image in the digital camera mode or the webcam mode, a three-dimensional shape detection result image in the stereoscopic image mode, a flattened image in the flattened image mode, and the like.
- An antenna 11 as an RF (wireless) interface and a connecting member 12 for connecting the imaging head 2 and the arm member 3 are arranged above the side surface of the imaging head 2.
- the antenna 11 is used to transmit captured image data acquired in a digital camera mode or stereoscopic image data acquired in a stereoscopic image mode via an RF driver 24 to be described later to an external interface by wireless communication.
- the connecting member 12 is formed in a ring shape, and a female screw is formed on an inner peripheral surface thereof.
- the connecting member 12 is rotatably fixed to a side surface of the imaging head 2.
- a male screw is formed at one end of the arm member 3.
- the arm member 3 is for holding the imaging head 2 at a predetermined imaging position so as to be changeable, and is formed of a bellows-like pipe that can be bent into an arbitrary shape. Therefore, the imaging head 2 can be directed to an arbitrary position by the arm member 3.
- the base 4 is mounted on a mounting table such as a desk, and supports the imaging head 2 and the arm member 3. Since the base 4 is formed in a substantially L-shape in plan view, the imaging head 2 and the like can be stably supported.
- the base 4 and the arm member 3 are detachably connected to each other, which makes it easy to carry the image input / output device 1 and also allows the image input / output device 1 to be stored in a small space.
- FIG. 2 is a diagram schematically showing the internal configuration of the imaging head 2. Inside the imaging head 2 are arranged an image projection unit 13, an image capturing unit 14, and a processor 15.
- the image projection unit 13 is a unit for projecting an arbitrary projection image on a projection surface.
- The image projection unit 13 includes, along the projection direction, a substrate 16, a plurality of LEDs 17 (hereinafter collectively referred to as the "LED array 17A"), a light source lens 18, a projection LCD 19, and a projection optical system 20.
- the image projection unit 13 will be described later in detail with reference to FIGS. 3 (a) to 3 (c).
- the image capturing section 14 is a unit for capturing an image of a document P as a subject.
- the image capturing section 14 includes an image capturing optical system 21 and a CCD 22 along the light input direction.
- the imaging optical system 21 includes a plurality of lenses.
- The imaging optical system 21 has a well-known autofocus function, and automatically adjusts the focal length and aperture to form an image of light from the outside on the CCD 22.
- the CCD 22 has photoelectric conversion elements such as CCD (Charge Coupled Device) elements arranged in a matrix.
- the CCD 22 generates a signal corresponding to the color and intensity of light of an image formed on the surface of the CCD 22 via the imaging optical system 21, converts the signal into digital data, and outputs the digital data to the processor 15.
- A flash 7, a release button 8, a mode switching switch 9, an external memory 27, and a cache memory 28 are electrically connected to the processor 15. Further, the monitor LCD 10 is connected to the processor 15 via a monitor LCD driver 23, the antenna 11 via an RF driver 24, a battery 26 via a power supply interface 25, the LED array 17A via a light source driver 29, the projection LCD 19 via a projection LCD driver 30, and the CCD 22 via a CCD interface 31.
- The external memory 27 is a detachable flash ROM, and stores the captured images and three-dimensional information acquired in the digital camera mode, webcam mode, and stereoscopic image mode.
- an SD card, a CompactFlash (registered trademark) card, or the like can be used as the external memory 27.
- The cache memory 28 is a high-speed storage device. For example, in the digital camera mode, the captured image is transferred to the cache memory 28 at high speed, processed by the processor 15, and then stored in the external memory 27. Specifically, SDRAM, DDR RAM, or the like can be used.
- The power supply interface 25, the light source driver 29, the projection LCD driver 30, and the CCD interface 31 are each configured by an IC (Integrated Circuit), and control the battery 26, the LED array 17A, the projection LCD 19, and the CCD 22, respectively.
- FIG. 3(a) is an enlarged view of the image projection unit 13, FIG. 3(b) is a plan view of the light source lens 18, and FIG. 3(c) is a diagram showing the arrangement relationship between the projection LCD 19 and the CCD 22.
- the image projection unit 13 includes the substrate 16, the LED array 17A, the light source lens 18, the projection LCD 19, and the projection optical system 20 along the projection direction.
- the substrate 16 is for mounting the LED array 17A and for performing electrical wiring with the LED array 17A.
- As the substrate 16, an aluminum substrate coated with an insulating resin on which a wiring pattern is then formed by electroless plating, or a single-layer or multilayer substrate having a glass epoxy core, can be used.
- the LED array 17 A is a light source that emits radial light toward the projection LCD 19.
- the LED array 17A is composed of a plurality of LEDs 17 (light emitting diodes) arranged in a staggered manner on the substrate 16.
- the plurality of LEDs 17 are bonded to the substrate 16 via a silver paste, and are electrically connected to the substrate 16 via bonding wires.
- By using the LEDs 17 as the light source, the efficiency of converting electricity into light is increased compared with the case where an incandescent bulb, a halogen lamp, or the like is used as the light source.
- generation of infrared rays and ultraviolet rays can be suppressed. Therefore, according to the present embodiment, the light source can be driven with low power consumption, and power saving and long life can be achieved. Further, the temperature rise of the device can be reduced.
- In addition, since the LEDs 17 generate far fewer heat rays than a halogen lamp or the like, resin lenses can be used for the light source lens 18 and the projection optical system 20 described later. Therefore, the light source lens 18 and the projection optical system 20 can be configured more inexpensively and more lightly than when glass lenses are employed.
- Each of the LEDs 17 constituting the LED array 17A emits the same emission color.
- Each LED 17 is made of a four-element Al-In-Ga-P material and emits amber light.
- The LED array 17A consists of 59 LEDs 17, and each LED 17 is driven at 50 mW (20 mA, 2.5 V), so all 59 LEDs 17 are driven with a power consumption of approximately 3 W.
- The brightness, as the luminous flux emitted from each LED 17 and projected from the projection optical system 20 through the light source lens 18 and the projection LCD 19, is set to about 25 ANSI lumens at full illumination.
- the light source lens 18 is a lens as a condensing optical system that condenses light emitted radially from the LED array 17A, and is made of an optical resin represented by acrylic.
- The light source lens 18 has: convex lens portions 18a protruding toward the projection LCD 19 at positions facing the respective LEDs 17 of the LED array 17A; a base portion 18b supporting the lens portions 18a; an epoxy sealing member 18c filling the opening inside the base portion 18b that surrounds the LED array 17A, thereby sealing the LEDs 17 and bonding the substrate 16 to the light source lens 18; and positioning pins 18d projecting from the base portion 18b toward the substrate 16 to connect the light source lens 18 to the substrate 16.
- The light source lens 18 is fixed on the substrate 16 by inserting the positioning pins 18d into elongated holes formed in the substrate 16 while the LED array 17A is enclosed inside the opening.
- With this configuration, the light source lens 18 can be arranged in a small space. In addition, since the substrate 16 has both the function of mounting the LED array 17A and the function of supporting the light source lens 18, there is no need to provide a separate component for supporting the light source lens 18, so the number of parts can be reduced.
- Each lens portion 18a faces a corresponding LED 17 of the LED array 17A in a one-to-one relationship, and the radial light emitted from each LED 17 is efficiently condensed by the facing lens portion 18a and irradiated onto the projection LCD 19 as highly directional radiation light, as shown in FIG. 3(a).
- The directivity is increased in this way because, by making the light incident on the projection LCD 19 substantially perpendicularly, unevenness of the in-plane transmittance can be suppressed. This is also because the projection optical system 20 is telecentric with an entrance NA of about 0.1, so that only light within ±5° of the perpendicular can pass through its internal aperture stop. That is, it is important that the light from the LEDs 17 leave perpendicular to the projection LCD 19 with most of the luminous flux within ±5°, because light entering the projection LCD 19 away from the perpendicular undergoes a transmittance that changes with the incident angle due to the optical rotation of the liquid crystal, which causes transmittance unevenness.
- the projection LCD 19 is a spatial modulation element that spatially modulates light condensed through the light source lens 18 and outputs image signal light to the projection optical system 20.
- the projection LCD 19 is composed of a plate-like liquid crystal display (Liquid Crystal Display) having different vertical and horizontal sizes.
- The pixels constituting the projection LCD 19 are arranged such that pixel columns aligned linearly along the longitudinal direction of the projection LCD 19 alternate, in parallel, with pixel columns shifted by a predetermined distance in the longitudinal direction; that is, the pixels are arranged in a staggered manner.
- In FIG. 3(c), the front side of the drawing corresponds to the front of the imaging head 2, and light is emitted toward the projection LCD 19 from the back side of the drawing. Meanwhile, light from the subject travels from the front side of the drawing toward the back side, and the subject image is formed on the CCD 22.
- Since the pixels constituting the projection LCD 19 are arranged in a staggered manner in the longitudinal direction, the light spatially modulated by the projection LCD 19 can be controlled at a 1/2 pitch in the direction orthogonal to the longitudinal direction (the short direction). When striped pattern light in which light and dark are alternately arranged is projected toward the subject to detect its three-dimensional shape, the boundary between light and dark can therefore be controlled at a 1/2 pitch by matching the direction of the stripes with the short direction of the projection LCD 19, so that the three-dimensional shape can be detected with high accuracy.
- The projection LCD 19 and the CCD 22 are arranged in the relationship shown in FIG. 3(c). More specifically, since the wide surface of the projection LCD 19 and the wide surface of the CCD 22 face in substantially the same direction, when an image projected from the projection LCD 19 onto the projection surface is formed on the CCD 22, the projected image can be formed as it is without bending the optical path with a mirror or the like.
- the CCD 22 is arranged on the longitudinal direction side of the projection LCD 19 (the direction in which the pixel columns extend).
- Thereby, the boundary between light and dark imaged by the CCD 22 can also be controlled at a 1/2 pitch, which makes it possible to detect three-dimensional shapes with high accuracy.
- the projection optical system 20 includes a plurality of lenses that project the image signal light that has passed through the projection LCD 19 toward the projection surface.
- The projection optical system 20 is composed of telecentric lenses made of a combination of glass and resin. Telecentric refers to a configuration in which the principal rays passing through the projection optical system 20 are parallel to the optical axis in the space on the incident side, and the exit pupil is at infinity. By making the system telecentric in this way, only light passing through the projection LCD 19 within ±5° of the perpendicular is projected, as described above, so that the image quality can be improved.
- FIGS. 4A to 4C are views for explaining the arrangement of the LED array 17A.
- FIG. 4(a) is a diagram showing the illuminance distribution, on the surface of the projection LCD 19, of light from a single LED 17 passing through the light source lens 18; FIG. 4(b) is a plan view showing the arrangement of the LED array 17A; and FIG. 4(c) is a diagram showing the combined illuminance distribution on the surface of the projection LCD 19.
- The plurality of LEDs 17 are arranged on the substrate 16 in a staggered pattern. More specifically, the LEDs 17 are arranged in rows at a pitch d, the rows are arranged in parallel at a pitch of (√3/2)d, and every other row is shifted by d/2 in the row direction with respect to the adjacent rows. As a result, the distance between any one LED 17 and each of the LEDs 17 surrounding it is d (that is, the LEDs 17 are arranged in a triangular lattice).
- The length d is determined to be equal to or less than the full width at half maximum (FWHM) of the illuminance distribution formed on the projection LCD 19 by the light emitted from one LED 17.
- As a result, the combined illuminance distribution of the light reaching the surface of the projection LCD 19 through the light source lens 18 becomes substantially flat, containing only small ripples, as shown in FIG. 4(c), and the surface of the projection LCD 19 is illuminated substantially uniformly. Therefore, illuminance unevenness in the projection LCD 19 can be suppressed, and as a result, a high-quality image can be projected. A numerical check of this design rule is sketched below.
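- To see why d ≤ FWHM keeps the combined distribution nearly flat, the following sketch models each LED's illuminance profile on the projection LCD 19 as a Gaussian (an assumption made purely for illustration; the profile shape is not specified here) and compares the residual ripple of a row of LEDs at several pitches.

```python
import numpy as np

fwhm = 1.0                          # FWHM of one LED's profile (arbitrary units)
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
x = np.linspace(-3, 3, 1201)

def combined_illuminance(pitch, n_leds=9):
    """Sum the Gaussian profiles of a row of LEDs spaced at the given pitch."""
    centers = (np.arange(n_leds) - n_leds // 2) * pitch
    return sum(np.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c in centers)

for pitch in (0.5 * fwhm, 1.0 * fwhm, 2.0 * fwhm):
    profile = combined_illuminance(pitch)
    central = profile[np.abs(x) < 1.0]          # interior region of the array
    ripple = (central.max() - central.min()) / central.max()
    print(f"pitch = {pitch:.1f} x FWHM -> ripple = {ripple:.1%}")
# The ripple stays modest while pitch <= FWHM and grows rapidly beyond it.
```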
- FIG. 5 is an electrical block diagram of the image input / output device 1. The description of the configuration already described above is omitted.
- the processor 15 includes a CPU 35, a ROM 36, and a RAM 37.
- the CPU 35 performs various kinds of processing by using the program stored in the ROM 36 and the RAM 37.
- the processing performed under the control of the CPU 35 includes detection of a pressing operation of the release button 8, capture of image data from the CCD 22, transfer and storage of the image data, detection of the state of the mode switch 9, and the like.
- The ROM 36 stores a camera control program 36a, a pattern light photographing program 36b, a luminance image generation program 36c, a code image generation program 36d, a code boundary extraction program 36e, a lens aberration correction program 36f, a triangulation calculation program 36g, a document attitude calculation program 36h, and a plane conversion program 36i.
- the camera control program 36a is a program relating to control of the entire image input / output device 1 including the main processing shown in FIG.
- The pattern light photographing program 36b is a program for imaging, in order to detect the three-dimensional shape of the document P, both the state in which pattern light is projected onto the subject and the state in which no pattern light is projected.
- The luminance image generation program 36c is a program that calculates the difference between the pattern-light-present image, captured with pattern light projected by the pattern light photographing program 36b, and the pattern-light-absent image, captured without pattern light, and thereby generates a luminance image of the projected pattern light. The plurality of types of pattern light are projected in time series and imaged one by one, and the difference between each of the pattern-light-present images and the pattern-light-absent image is calculated, so that a plurality of types of luminance images are generated, as sketched below.
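- A minimal sketch of this difference operation, assuming grayscale images held as NumPy arrays (the function name is illustrative, not that of the program 36c):

```python
import numpy as np

def make_luminance_images(pattern_images, no_pattern_image):
    """Generate one luminance image per projected pattern light.

    pattern_images: images captured with each pattern light projected.
    no_pattern_image: the image captured with no pattern light projected.
    Subtracting the pattern-light-absent image largely cancels ambient
    light and the subject's own reflectance, leaving the projected pattern.
    (Values may be negative where the pattern is dark; signed ints keep them.)
    """
    base = no_pattern_image.astype(np.int32)
    return [img.astype(np.int32) - base for img in pattern_images]
```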
- the code image generation program 36d is a program for superimposing a plurality of luminance images generated by the luminance image generation program 36c and generating a code image in which a predetermined code is assigned to each pixel.
- The code boundary extraction program 36e is a program that obtains the code boundary coordinates with sub-pixel accuracy using the code image generated by the code image generation program 36d and the luminance images generated by the luminance image generation program 36c.
- The lens aberration correction program 36f is a program that corrects, for the aberration of the imaging optical system 21, the code boundary coordinates obtained with sub-pixel accuracy by the code boundary extraction program 36e.
- the triangulation calculation program 36g is a program that calculates three-dimensional coordinates in the real space related to the boundary coordinates from the boundary coordinates of the code corrected by the lens aberration correction program 36f.
- the original posture calculation program 36h is a program for estimating and obtaining the three-dimensional shape of the original P from the three-dimensional coordinates calculated by the triangulation calculation program 36g.
- The plane conversion program 36i is a program that generates a flattened image, as if the document P were imaged from the front, based on the three-dimensional shape of the document P calculated by the document attitude calculation program 36h.
- The RAM 37 has, allocated as storage areas, a pattern-light-present image storage unit 37a, a pattern-light-absent image storage unit 37b, a luminance image storage unit 37c, a code image storage unit 37d, a code boundary coordinate storage unit 37e, an ID storage unit 37f, an aberration correction coordinate storage unit 37g, a three-dimensional coordinate storage unit 37h, a document posture calculation result storage unit 37i, a plane conversion result storage unit 37j, a projection image storage unit 37k, and a working area 37l.
- The pattern-light-present image storage unit 37a stores the pattern-light-present image obtained by imaging the state in which pattern light is projected onto the document P by the pattern light photographing program 36b. The pattern-light-absent image storage unit 37b stores the pattern-light-absent image obtained by imaging the state in which no pattern light is projected onto the document P.
- the luminance image storage unit 37c stores the luminance image generated by the luminance image generation program 36c.
- the code image storage unit 37d stores a code image generated by the code image generation program 36d.
- the code boundary coordinate storage unit 37e stores the boundary coordinates of each code obtained with the sub-pixel accuracy to be extracted by the code boundary extraction program 36e.
- the ID storage unit 37f stores an ID or the like assigned to a luminance image having a change in brightness at a pixel position having a boundary.
- the aberration correction coordinate storage unit 37g stores the boundary coordinates of the code whose aberration has been corrected by the lens aberration correction program 36f.
- the three-dimensional shape coordinate storage unit 37h stores the three-dimensional coordinates of the real space calculated by the triangulation calculation program 36g.
- the document orientation calculation result storage unit 37i stores parameters relating to the three-dimensional shape of the document P calculated by the document orientation calculation program 36h.
- The plane conversion result storage unit 37j stores the plane conversion result generated by the plane conversion program 36i.
- the projection image storage section 37k stores image information projected from the image projection section 13.
- The working area 37l stores data temporarily used for calculations by the CPU 35.
- FIG. 6 is a flowchart of the main process executed under the control of the CPU 35. The digital camera process (S605), webcam process (S607), stereoscopic image process (S609), and flattened image process (S611) in the main process are described in detail later.
- In the main process, first a key scan for determining the state of the mode switching switch 9 is performed (S603), and it is determined whether the mode switching switch 9 is set to the digital camera mode (S604). If so (S604: Yes), the process proceeds to the digital camera process described later (S605).
- If the mode is not the digital camera mode (S604: No), it is similarly determined whether the mode is the webcam mode, the stereoscopic image mode (S608), or the flattened image mode, and the corresponding process (S607, S609, or S611) is executed. If none of these modes is set, it is determined whether the mode switching switch 9 is in the off mode (S612); if not (S612: No), the processing from S603 is repeated, and if it is determined in S612 that the mode is the off mode (S612: Yes), the process ends.
- FIG. 7 is a flowchart of the digital camera process (S605 in FIG. 6).
- the digital camera process is a process of acquiring an image captured by the image capturing unit 14.
- a high resolution setting signal is transmitted to the CCD 22 (S701).
- a high quality captured image can be provided to the user.
- Next, a finder image (an image in the range visible through the finder 6) is displayed on the monitor LCD 10 (S702). The user can therefore confirm the captured image (imaging range) before actual imaging by means of the image displayed on the monitor LCD 10, without looking into the finder 6.
- Next, the release button 8 is scanned (S703a), and it is determined whether the release button 8 is half-pressed (S703b). If it is half-pressed (S703b: Yes), the autofocus (AF) and automatic exposure (AE) functions are activated and the focus, aperture, and shutter speed are adjusted (S703c). If it is not half-pressed (S703b: No), the processing from S703a is repeated.
- the release button 8 is scanned again (S703d), and it is determined whether or not the release button 8 is fully pressed (S703e). If it is fully pressed (S703e: Yes), it is determined whether or not the flash mode is set (S704).
- If the mode is the flash mode (S704: Yes), the flash 7 is fired (S705) and shooting is performed (S706). If the mode is not the flash mode (S704: No), shooting is performed without firing the flash 7 (S706). If it is determined in S703e that the button is not fully pressed (S703e: No), the processing from S703a is repeated.
- the captured image is transferred from the CCD 22 to the cache memory 28 (S707), and the captured image stored in the cache memory 28 is displayed on the monitor LCD 10 (S708).
- the captured image can be displayed on the monitor LCD 10 at a higher speed than when the captured image is transferred to the main memory.
- the captured image is stored in the external memory 27 (S709).
- FIG. 8 is a flowchart of the webcam process (S607 in FIG. 6).
- The webcam process is a process of transmitting a captured image (including still images and moving images) captured by the image capturing unit 14 to an external network. Here, it is assumed that a moving image is transmitted to the external network as the captured image.
- In this process, first, a low-resolution setting signal is transmitted to the CCD 22 (S801), the well-known autofocus and automatic exposure functions are activated to adjust the focus, aperture, and shutter speed (S802), and shooting is started (S803).
- Then, the captured image is displayed on the monitor LCD 10 (S804), the finder image is stored in the projection image storage unit 37k (S805), and the projection process described later is performed (S806), whereby the image stored in the projection image storage unit 37k is projected onto the projection plane.
- the captured image is transferred from the CCD 22 to the cache memory 28 (S807), and the captured image transferred to the cache memory 28 is transmitted to an external network via the RF interface. (S808).
- FIG. 9 is a flowchart of the projection process (S806 in FIG. 8).
- This process is a process of projecting an image stored in the projection image storage unit 37k from the projection image projection unit 13 onto a projection plane.
- In this process, first, it is checked whether an image is stored in the projection image storage unit 37k (S901). If one is stored (S901: Yes), the image stored in the projection image storage unit 37k is transferred to the projection LCD driver 30 (S902), an image signal corresponding to the image is transmitted from the projection LCD driver 30 to the projection LCD 19, and the image is displayed on the projection LCD 19 (S903).
- the light source driver 29 is driven (S904), the LED array 17A is turned on by an electric signal from the light source driver 29 (S905), and the process is terminated.
- When the LED array 17A is turned on, the light emitted from the LED array 17A reaches the projection LCD 19 via the light source lens 18, where it is spatially modulated in accordance with the image signal transmitted from the projection LCD driver 30 and output as image signal light. The image signal light output from the projection LCD 19 is then projected as a projection image onto the projection plane via the projection optical system 20.
- FIG. 10 is a flowchart of the stereoscopic image processing (S609 in FIG. 6).
- the stereoscopic image processing is a process of detecting a three-dimensional shape of a subject, acquiring, displaying, and projecting a three-dimensional shape detection result image as the stereoscopic image.
- a high-resolution setting signal is transmitted to the CCD 22 (S1001), and a finder image is displayed on the monitor LCD10 (S1002).
- the release button 8 is scanned (S1003a), and it is determined whether or not the release button 8 is half-pressed (S1003b). If it is half-pressed (S1003b: Yes), the auto focus (AF) and auto exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S1003c). If not half-pressed (S1003b: No), the process from S1003a is repeated.
- Next, the release button 8 is scanned again (S1003d), and it is determined whether the release button 8 is fully pressed (S1003e). If it is fully pressed (S1003e: Yes), it is determined whether the flash mode is set (S1003f). If the flash mode is set (S1003f: Yes), the flash 7 is fired (S1003g) and shooting is performed (S1003h); if the flash mode is not set (S1003f: No), shooting is performed without firing the flash 7 (S1003h). If it is determined in S1003e that the button is not fully pressed (S1003e: No), the processing from S1003a is repeated.
- the three-dimensional shape detection result in the three-dimensional shape detection processing (S1006) is stored in the external memory 27 (S1007), and the three-dimensional shape detection result is displayed on the monitor LCD 10 (S1008).
- As the three-dimensional shape detection result, a set of three-dimensional coordinates (X, Y, Z) in the real space of each measurement vertex is displayed.
- Then, a three-dimensional shape detection result image, that is, a 3D CG image in which the surface is displayed by connecting the measurement vertices of the detection result with polygons, is stored in the projection image storage unit 37k.
- a projection process similar to the projection process of S806 in FIG. 8 is performed (S1010).
- At this time, by calculating the coordinates on the projection LCD 19 from the obtained three-dimensional coordinates using the inverse function of the equation, described with reference to FIGS. 18(a) and 18(b), that converts coordinates on the projection LCD 19 into three-dimensional space coordinates, the three-dimensional shape result coordinates can be projected onto the projection plane.
- FIG. 11 (a) is a diagram for explaining the principle of the spatial code method used for detecting a three-dimensional shape in the above-described three-dimensional shape detection processing (S1006 in FIG. 10).
- FIG. 11(b) is a diagram showing pattern light different from that of FIG. 11(a). Either of the patterns shown in FIGS. 11(a) and 11(b) may be used as the pattern light, and a gray level code, which is a multi-tone code, may also be used. The details of the spatial code method are disclosed in Kosuke Sato et al., "Distance Image Input by Space-Encoded Range Finder", Transactions of the Institute of Electronics, Information and Communication Engineers, 1985/3, Vol. J68-D, No. 3, pp. 369-375.
- The spatial code method is a kind of method for detecting the three-dimensional shape of a subject based on triangulation between the projected light and the observed image. As shown in FIG. 11(a), the projector L and the observer O are set apart by a distance D, and the space is divided into elongated fan-shaped areas, which are then encoded.
- When the plurality of mask patterns shown in the figure are sequentially projected, each fan-shaped area is coded by the masks as bright "1" or dark "0".
- As a result, a code corresponding to the projection direction θ is assigned to each fan-shaped area, and the boundary of each code can be regarded as one slit light. Therefore, the scene is photographed by a camera as the observation device for each mask, and each bit plane of memory is formed by converting the light/dark pattern into a binary image.
- The contents of the memory at each pixel address then give the code of the projected light, that is, θ.
- The coordinates of the point of interest are determined from this θ and the viewing direction φ given by the pixel position.
- The mask patterns used in this method include the pure binary code mask patterns of FIG. 11(a) and the gray code mask patterns of FIG. 11(b).
- Point Q in FIG. 11(a) indicates the boundary between area 3 (011) and area 4 (100); if mask A shifts by one, the code of area 7 (111) may be generated there. In other words, a large error may occur wherever the Hamming distance between adjacent areas is 2 or more; with the gray code of FIG. 11(b), the Hamming distance between adjacent areas is always 1, so such large errors are avoided.
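- This property of the gray code can be checked with a short sketch (not from the patent): adjacent area numbers always differ in exactly one bit of their gray codes, while pure binary codes can differ in up to three bits.

```python
def gray_encode(n: int) -> int:
    """Convert an area number to its gray code."""
    return n ^ (n >> 1)

n_bits = 3
for area in range(2 ** n_bits - 1):
    pure = bin(area ^ (area + 1)).count("1")
    gray = bin(gray_encode(area) ^ gray_encode(area + 1)).count("1")
    print(f"areas {area}-{area + 1}: Hamming distance pure={pure}, gray={gray}")
# Pure binary coding gives distance 3 between area 3 (011) and area 4 (100),
# the point Q case above, while the gray code always gives distance 1.
```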
- FIG. 12A is a flowchart of the three-dimensional shape detection process (S1006 in FIG. 10).
- In this process, first, an imaging process (S1210) is performed: using the plurality of pure binary code mask patterns shown in FIG. 11(a), striped pattern light in which light and dark are alternately arranged (see FIG. 1) is projected from the image projection unit 13 onto the subject in time series, and pattern-light-present images capturing the states in which the respective pattern lights are projected, as well as a pattern-light-absent image capturing the state in which no pattern light is projected, are acquired.
- Next, a three-dimensional measurement process (S1220) is performed, in which the three-dimensional shape of the subject is actually measured using the pattern-light-present images and the pattern-light-absent image acquired in the imaging process. When the three-dimensional measurement process is completed, the processing ends.
- FIG. 12(b) is a flowchart of the imaging process (S1210 in FIG. 12(a)). This process is executed based on the pattern light photographing program 36b.
- In this process, first, the subject is imaged without pattern light, and the acquired pattern-light-absent image is stored in the pattern-light-absent image storage unit 37b.
- the counter i is initialized (S1212), and it is determined whether or not the value of the counter i is the maximum value imax (S1213).
- If the value of the counter i has not reached imax, the i-th mask pattern among the mask patterns to be used is displayed on the projection LCD 19, the i-th pattern light formed by this mask pattern is projected onto the projection surface (S1214), and the state in which this pattern light is projected is photographed by the image capturing unit 14 (S1215).
- FIG. 12 (c) is a flowchart of the three-dimensional measurement process (S1220 in FIG. 12 (a)). This processing is executed based on the luminance image generation program 36c.
- In this process, first, luminance images are generated (S1221). Here, a luminance image is generated for each pair of pattern-light-present and pattern-light-absent images. The generated luminance images are stored in the luminance image storage unit 37c, and each luminance image is assigned a number corresponding to the number of its pattern light.
- Next, the code image generation program 36d generates a code image, in which each pixel is assigned a code, by combining the generated luminance images using the spatial coding method described above (S1222). The code image can be generated by comparing each pixel of the luminance images stored in the luminance image storage unit 37c with the previously set luminance threshold and combining the results, as sketched below.
- the generated code image is stored in the code image storage unit 37d.
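- A minimal sketch of this combining step, assuming grayscale luminance images ordered from the most significant mask pattern to the least significant one (names are illustrative, not those of the program 36d):

```python
import numpy as np

def make_code_image(luminance_images, threshold):
    """Threshold each luminance image and pack the bits into a code image.

    luminance_images: list of n 2-D arrays, most significant mask first.
    threshold: luminance threshold (scalar or per-pixel 2-D array).
    Returns a 2-D integer array in which each pixel holds an n-bit space code.
    """
    h, w = luminance_images[0].shape
    code = np.zeros((h, w), dtype=np.int32)
    for lum in luminance_images:
        bit = (lum > threshold).astype(np.int32)   # bright -> 1, dark -> 0
        code = (code << 1) | bit                   # append this mask's bit plane
    return code
```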
- Next, the code boundary coordinate detection process (S1223), described later, is performed based on the code boundary extraction program 36e to detect the code boundary coordinates with sub-pixel accuracy, and then lens aberration correction processing is performed by the lens aberration correction program 36f (S1224).
- Next, real-space conversion processing based on the triangulation principle is performed by the triangulation calculation program 36g (S1225). By this process, the aberration-corrected code boundary coordinates in the CCD space are converted into three-dimensional coordinates in the real space, and these three-dimensional coordinates are obtained as the three-dimensional shape detection result.
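- The triangulation underlying this conversion can be pictured with a plain two-dimensional sketch of the FIG. 11(a) geometry, in which the projector L and the observer O are separated by the baseline D; the actual conversion equations, described with reference to FIGS. 18(a) and 18(b), additionally account for the full camera geometry and lens aberration.

```python
import math

def triangulate(theta, phi, D):
    """Intersect the camera ray (angle theta at observer O) with the
    projector ray (angle phi at projector L), where O and L lie on the
    x-axis separated by the baseline D and angles are measured from it.
    Returns the (x, z) coordinates of the object point.
    """
    # Rays: z / x = tan(theta) from O = (0, 0), z / (D - x) = tan(phi) from L = (D, 0).
    z = D * math.tan(theta) * math.tan(phi) / (math.tan(theta) + math.tan(phi))
    x = z / math.tan(theta)
    return x, z

# Example: baseline 100 mm, both rays at 60 degrees from the baseline.
print(triangulate(math.radians(60), math.radians(60), 100.0))  # (50.0, ~86.6)
```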
- FIG. 13 is a diagram for explaining the outline of the code boundary coordinate detection process (S1223 in FIG. 12(c)). In the figure, the actual boundary between light and dark of the pattern light in the CCD space is indicated by the boundary line K, and the boundary between code 1 and the other codes, obtained by coding the pattern light with the spatial code method described above, is indicated by the thick line.
- The purpose of this code boundary coordinate detection process is to detect such code boundary coordinates with sub-pixel accuracy.
- First, the detection position is moved to the left by "2", and at the detection position curCCDX-2, the code image is referenced to search for a pixel at which the code of interest (curCode) changes to another code (a boundary pixel; pixel H at the detection position curCCDX-2), and a predetermined range centered on that pixel (in this embodiment, a range from -3 pixels to +2 pixels in the Y-axis direction) is specified (part of the pixel area specifying means).
- Then, for the luminance image extracted for this range, the position at which the change in luminance takes the luminance threshold bTh is obtained; the luminance threshold bTh may be a fixed value given in advance, or may be calculated from the pixels in a predetermined range (for example, as half the average of their luminances). In this way, the boundary between light and dark can be detected with sub-pixel accuracy.
- Next, the detection position is moved to the right by "1" from curCCDX-2, and the same processing as described above is performed at curCCDX-1 to obtain a representative value at curCCDX-1 (part of the boundary coordinate detection process).
- In this way, the code boundary coordinates can be detected with sub-pixel accuracy, and by performing the real-space conversion processing based on the triangulation principle described above (S1225 in FIG. 12(c)) using these boundary coordinates, the three-dimensional shape of the subject can be detected with high accuracy.
- although the region constituted by the ranges described above has been explained as the pixel region for obtaining the approximation, the extents of this pixel region along the Y axis and the X axis are not limited to these. For example, only a predetermined range in the Y-axis direction around the boundary pixel at the curCCDX detection position may be set as the pixel region.
- FIG. 14 is a flowchart of the code boundary coordinate detection process (S1223 in FIG. 12 (c)). This processing is executed based on the code boundary extraction program 36e. In this process, first, each element of the code boundary coordinate sequence in the CCD space is initialized (S1401), and curCCDX is set to the start coordinate (S1402).
- next, curCode is set to "0" (S1404); that is, curCode is initially set to the minimum value.
- it is then determined whether curCode is smaller than the maximum code (S1405). If curCode is smaller than the maximum code (S1405: Yes), the code image is searched at curCCDX for a pixel of curCode (S1406), and it is determined whether or not a pixel of curCode exists (S1407).
- if a pixel of curCode exists (S1407: Yes), a pixel having a code larger than curCode is searched for (S1408); if such a pixel exists (S1409: Yes), the boundary is provisionally assumed to lie at the position of that pixel, and the processing proceeds on that assumption (S1410).
- if a pixel of curCode does not exist (S1407: No), or if no pixel having a code larger than curCode exists (S1409: No), "1" is added to curCode in order to obtain the boundary coordinates for the next curCode (S1411), and the processing from S1405 is repeated.
- curCCDX is changed in this way, and when curCCDX finally becomes larger than the end coordinate (S1403), that is, when detection from the start coordinate to the end coordinate is completed, the process is terminated.
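- The loop structure of FIG. 14 might be sketched as below; find_cur_code_pixel, has_larger_code, and subpixel_boundary are hypothetical stand-ins for the searches of S1406 and S1409 and the sub-pixel processing of S1410.

```python
def detect_code_boundaries(code_image, start_x, end_x, max_code,
                           find_cur_code_pixel, has_larger_code,
                           subpixel_boundary):
    """Scan detection positions (S1402-S1403) and codes (S1404-S1411),
    collecting sub-pixel boundary coordinates keyed by (code, ccdx)."""
    boundaries = {}
    for cur_ccdx in range(start_x, end_x + 1):
        for cur_code in range(0, max_code):
            y = find_cur_code_pixel(code_image, cur_ccdx, cur_code)  # S1406
            if y is None:                                            # S1407: No
                continue
            if has_larger_code(code_image, cur_ccdx, y, cur_code):   # S1409
                boundaries[(cur_code, cur_ccdx)] = subpixel_boundary(
                    code_image, cur_ccdx, y, cur_code)               # S1410
    return boundaries
```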
- FIG. 15 is a flowchart of the processing (S1410 in FIG. 14) for obtaining the code boundary coordinates with sub-pixel accuracy.
- in this processing, first, from among the luminance images stored in the luminance image storage unit 37c in S1221 of FIG. 12 (c), all luminance images having a change in brightness at the position of the pixel with a code larger than curCode detected in S1409 of FIG. 14 are extracted (S1501).
- the mask pattern numbers of the extracted luminance images are stored in the array PatID [], and the number of extracted luminance images is stored in noPatID (S1502); the array PatID [] and noPatID are stored in the ID storage unit 37f.
- next, the counter i is initialized (S1503), and it is determined whether or not the value of counter i is smaller than noPatID (S1504). If it is smaller (S1504: Yes), the CCDY value of the boundary is calculated for the luminance image having the mask pattern number PatID [i], and the value is stored in fCCDY [i] (S1505).
- the median value of the fCCDY [i] values obtained in the processing of S1505 is calculated and the result is used as the boundary value, or the boundary value is obtained by another statistical calculation.
- the boundary coordinates are represented by the curCCDX coordinate and the weighted average value obtained in S1507; the boundary coordinates are stored in the code boundary coordinate storage unit 37e, and the process ends.
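- A sketch of combining the per-pattern fCCDY values into one boundary value; the optional weights are an assumption, since the text leaves the weighting scheme open alongside the median alternative.

```python
import numpy as np

def combine_boundary_values(f_ccdy, weights=None):
    """Return a weighted average of the per-pattern sub-pixel CCDY values
    when weights are given, else their median."""
    f_ccdy = np.asarray(f_ccdy, dtype=float)
    if weights is None:
        return float(np.median(f_ccdy))
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * f_ccdy) / np.sum(w))
```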
- FIG. 16 is a flowchart of the process for obtaining the CCDY value of the boundary for the luminance image having the mask pattern number PatID [i] (S1505 in FIG. 15).
- the value of "dx" can be set in advance to an appropriate integer including "0"; in the example described with reference to FIG. 13, "dx" is set to "2", so according to that example ccdx is set to "curCCDX-2".
- pixel I is detected as a candidate pixel containing the boundary, and an eCCDY value is obtained at the position of pixel I.
- FIGS. 17 (a) to 17 (c) are diagrams for explaining the lens aberration correction processing (S1224 in FIG. 12 (c)).
- the lens aberration correction processing, as shown in FIG. 17 (a), corrects the position of a pixel that was imaged at a displaced position, owing to the aberration of the imaging optical system 21 relative to an ideal lens, back to the position where the image should originally have been formed.
- this aberration correction is performed, for example, based on aberration data of the optical system calculated over the imaging range of the imaging optical system 21 as a function of the half angle of view Ma, the angle of the incident light, as shown in FIG. 17 (b).
- this aberration correction processing is executed based on the lens aberration correction program 36f on the code boundary coordinates stored in the code boundary coordinate storage unit 37e, and the data subjected to the aberration correction processing are stored in the aberration correction coordinate storage unit 37g.
- specifically, camera calibration approximation equations (1) to (3), which convert arbitrary point coordinates (ccdx, ccdy) in the real image into coordinates (ccdcx, ccdcy) in the ideal camera image, are used for the correction, where the focal length of the imaging optical system 21 is denoted focal length (mm) and the CCD pixel length is denoted pixel length (mm).
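- Equations (1) to (3) themselves are not reproduced in this text; as a stand-in of the same general form, the sketch below uses a one-parameter radial distortion model, with the coefficient k1 and the principal point (cx, cy) being assumptions of the sketch.

```python
def to_ideal_camera_coords(ccdx, ccdy, cx, cy, focal_length, pixel_length, k1):
    """Map real-image coordinates (ccdx, ccdy) to ideal-camera coordinates
    (ccdcx, ccdcy); a stand-in for the calibration equations (1)-(3)."""
    # normalized image coordinates relative to the principal point
    x = (ccdx - cx) * pixel_length / focal_length
    y = (ccdy - cy) * pixel_length / focal_length
    r2 = x * x + y * y
    # undo the radial displacement to first order
    ccdcx = cx + (ccdx - cx) / (1.0 + k1 * r2)
    ccdcy = cy + (ccdy - cy) / (1.0 + k1 * r2)
    return ccdcx, ccdcy
```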
- FIGS. 18 (a) and 18 (b) are diagrams for explaining the method of calculating three-dimensional coordinates in three-dimensional space from coordinates in the CCD space in the real space conversion processing based on the principle of triangulation (S1225 in FIG. 12 (c)).
- in this processing, the three-dimensional coordinates in three-dimensional space of the aberration-corrected code boundary coordinates stored in the aberration correction coordinate storage unit 37g are calculated by the triangulation calculation program 36g.
- the three-dimensional coordinates calculated in this way are stored in the three-dimensional coordinate storage unit 37h.
- in this embodiment, the optical axis direction of the imaging optical system 21 is taken as the Z axis, the point at distance VPZ from the principal point position of the imaging optical system along the Z axis is taken as the origin, the horizontal direction with respect to the image input/output device 1 is taken as the X axis, and the vertical direction as the Y axis.
- here, the projection angle θp from the image projection unit 13 to a point (X, Y, Z) in three-dimensional space is defined, and the distance between the optical axis of the imaging optical system 21 and the optical axis of the image projection unit 13 is D;
- the field of view of the imaging optical system 21 in the Y direction extends from Yftop to Yfbottom and in the X direction from Xfstart to Xfend, the length (height) of the CCD 22 in the Y-axis direction is Hc, and its length (width) in the X-axis direction is Wc.
- the projection angle θp is given based on the code assigned to each pixel.
- the three-dimensional spatial position (X, Y, Z) corresponding to arbitrary coordinates (ccdx, ccdy) on the CCD 22 can be obtained by solving five equations concerning the triangle formed by the point on the imaging plane of the CCD 22, the projection point of the pattern light, and the point of intersection with the Y plane; the first two are:
- (1) Y = -(tan θp)Z + PPZ·tan θp - D + cmp(Xtarget)
- (2) Y = -(Ytarget / VPZ)Z + Ytarget
- similarly, the principal point position of the image projection unit 13 is (0, 0, PPZ), the field of view of the image projection unit 13 in the Y direction extends from Ypftop to Ypfbottom, and its field of view in the X direction from Xpfstart to Xpfend;
- the length (height) of the projection LCD 19 in the Y-axis direction is Hp, and the length (width) in the X-axis direction is Wp.
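- A minimal sketch of solving equations (1) and (2) simultaneously for (Y, Z), assuming the forms reconstructed above; cmp_x stands in for cmp(Xtarget), whose definition is not reproduced here.

```python
import math

def triangulate_y_z(theta_p, ppz, d, y_target, vpz, cmp_x=0.0):
    """Intersect the projector plane of equation (1) with the camera
    viewing ray of equation (2) and return (Y, Z)."""
    t = math.tan(theta_p)
    # (1) Y = -t*Z + PPZ*t - D + cmp_x ; (2) Y = -(Ytarget/VPZ)*Z + Ytarget
    z = (ppz * t - d + cmp_x - y_target) / (t - y_target / vpz)
    y = -(y_target / vpz) * z + y_target
    return y, z
```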
- FIG. 19 is a flowchart of the flattened image processing (S611 in FIG. 6).
- the flattened image processing is a process executed, for example, when an image of a document P is captured in a curved state as shown in FIG. 1, or when a rectangular document is captured obliquely (so that the captured image is trapezoidal); it acquires and displays a flattened image that appears as if the document were not curved and had been captured from directly in front.
- a high resolution setting signal is transmitted to the CCD 22 (S1901), and a finder image is displayed on the monitor LCD10 (S1902).
- next, the release button 8 is scanned (S1903a), and it is determined whether or not the release button 8 is half-pressed (S1903b). If it is half-pressed (S1903b: Yes), the auto focus (AF) and auto exposure (AE) functions are activated to adjust the focus, aperture, and shutter speed (S1903c). If it is not half-pressed (S1903b: No), the process returns to S1903a.
- the release button 8 is then scanned again (S1903d), and it is determined whether or not the release button 8 is fully pressed (S1903e). If it is fully pressed (S1903e: Yes), it is determined whether or not the flash mode is set (S1903f).
- a three-dimensional shape detection process which is the same process as the above-described three-dimensional shape detection process (S1006 in FIG. 10), is performed to detect the three-dimensional shape of the subject (S1906).
- a document posture calculation process of calculating the posture of the document P is performed (S1907).
- in this document posture calculation process, the position L, the angle θ, and the curvature φ(X) of the document P with respect to the image input/output device 1 are calculated as the posture parameters of the document P.
- FIGS. 20 (a) to 20 (c) are diagrams for explaining the document orientation calculation process (S1907 in FIG. 19). It is assumed that the curvature of the document P is uniform in the y direction as a precondition for a document such as a book.
- in the document posture calculation process, first, as shown in FIG. 20 (a), two regression curves approximating the points arranged in two columns in three-dimensional space are obtained from the coordinate data of the code boundaries stored in the three-dimensional coordinate storage unit 37h.
- next, the document P is rotationally transformed in the opposite direction by the previously obtained inclination θ about the X axis; that is, the document P is assumed to be parallel to the X-Y plane.
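- A simplified sketch of recovering θ and L from the two regression curves; reducing each curve to its centroid is a simplification made by this sketch, not the method prescribed by the text.

```python
import numpy as np

def document_posture(curve_a, curve_b):
    """Estimate the inclination theta about the X axis and the distance L
    from two boundary curves given as arrays of (X, Y, Z) points."""
    ca = np.asarray(curve_a, dtype=float).mean(axis=0)   # centroid, curve 1
    cb = np.asarray(curve_b, dtype=float).mean(axis=0)   # centroid, curve 2
    theta = np.arctan2(cb[2] - ca[2], cb[1] - ca[1])     # tilt about X
    L = 0.5 * (ca[2] + cb[2])                            # mean Z distance
    return theta, L
```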
- FIG. 21 is a flowchart of the plane conversion process (S1908 in FIG. 19).
- in this plane conversion processing, first, the four corner points of the pattern-light-absent image stored in the pattern-light-absent image storage unit 37b are moved by L in the Z direction, rotated by θ about the X axis, and then inversely transformed with respect to the curvature φ(X) (the inverse of the "curvature processing" described later) to set a rectangular area (that is, a rectangular area in which the surface of the document P on which characters and the like are written is observed from a substantially orthogonal direction), and the number of pixels a included in this rectangular area is obtained (S2102).
- next, it is determined whether or not the counter b has reached the number of pixels a (S2103). If the counter b has not reached the number of pixels a (S2103: No), one pixel constituting the rectangular area is subjected to curvature processing, being rotated by the curvature φ(X) about the Y axis (S2104), rotated by the inclination θ about the X axis (S2105), and shifted by the distance L in the Z-axis direction (S2106).
- next, from the three-dimensional spatial position thus obtained, the coordinates (ccdcx, ccdcy) on the CCD image of an ideal camera are found by the inverse function of the triangulation described above (S2107); the coordinates (ccdx, ccdy) on the CCD image of the actual camera are then found by the inverse function of the camera calibration, according to the aberration characteristics of the imaging optical system 21 (S2108); and the state of the pixel of the pattern-light-absent image corresponding to this position is obtained and stored in the working area 371 of the RAM 37 (S2109).
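- Steps S2104 to S2109 can be sketched per pixel as below; inv_triangulation and inv_calibration are hypothetical callables standing in for the inverse functions named above, and treating the curvature step as a Z displacement by φ(x) is a simplification of the rotation about the Y axis.

```python
import numpy as np

def rotate_about_x(p, theta):
    """Rotate point p = (x, y, z) about the X axis by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    x, y, z = p
    return np.array([x, c * y - s * z, s * y + c * z])

def flatten_pixel(x, y, L, theta, phi, inv_triangulation, inv_calibration, src):
    """Map one pixel of the flattened rectangle back into the captured
    pattern-light-absent image src and return its value."""
    p = np.array([float(x), float(y), float(phi(x))])    # S2104 (simplified)
    p = rotate_about_x(p, theta)                         # S2105
    p[2] += L                                            # S2106
    ccdcx, ccdcy = inv_triangulation(p)                  # S2107
    ccdx, ccdy = inv_calibration(ccdcx, ccdcy)           # S2108
    return src[int(round(ccdy)), int(round(ccdx))]       # S2109
```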
- FIG. 22 (a) is a diagram for explaining the outline of the curvature processing (S2104 in FIG. 21), and FIG. 22 (b) shows the document P flattened by the plane conversion processing (S1908 in FIG. 19). The details of this curvature processing are disclosed in IEICE Transactions D-II, Vol. J86-D2, No. 3, p. 409, on imaging curved documents with an eye scanner.
- the curvature φ(X) is obtained from the cross-sectional shape formed by cutting the three-dimensional shape composed of the obtained code boundary coordinate sequence (in real space) with a plane parallel to the X-Z plane at an arbitrary Y value, and is represented by a polynomial approximation computed with the least squares method.
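- A sketch of obtaining φ(X) by least squares, assuming the code boundary points are available as rows of (X, Y, Z); the polynomial degree and the slab tolerance are assumptions of the sketch.

```python
import numpy as np

def fit_curvature(boundary_points, y_value, y_tol=1.0, degree=4):
    """Cut the boundary point cloud with a plane parallel to X-Z at
    y_value and least-squares fit Z as a polynomial phi(X)."""
    pts = np.asarray(boundary_points, dtype=float)       # rows of (X, Y, Z)
    slab = pts[np.abs(pts[:, 1] - y_value) < y_tol]      # points near the cut
    coeffs = np.polyfit(slab[:, 0], slab[:, 2], degree)
    return np.poly1d(coeffs)                             # phi as callable in X
```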
- FIG. 23 (a) and FIG. 23 (b) are views for explaining a group of light source lenses 50 as another example of the light source lens 18 in the above-described embodiment; FIG. 23 (a) is a side view showing the light source lenses 50, and FIG. 23 (b) is a plan view showing them.
- the same members as the components in the above-described embodiment are denoted by the same reference numerals, and description thereof will be omitted.
- whereas the light source lens 18 in the above-described embodiment has convex aspherical lens portions 18a, one for each LED 17, arranged integrally on the base 18b, in the example shown in FIGS. 23 (a) and 23 (b) each light source lens 50 is a separately formed resin lens molded in a shell shape enclosing its LED 17.
- since the position of each LED 17 and its corresponding light source lens 50 is determined on a one-to-one basis, the relative positional accuracy can be improved and the light-emitting directions are aligned.
- accordingly, the surface of the projection LCD 19 is irradiated with light from the LEDs 17 whose incident direction is aligned perpendicular to that surface, so the light can pass uniformly through the stop of the projection optical system 20. Illuminance unevenness of the projected image can therefore be suppressed, and as a result a high-quality image can be projected.
- the LED 17 included in the light source lens 50 is mounted on the substrate 16 via electrodes 51 serving as leads and reflectors.
- a frame-shaped elastic fixing member 52 that bundles and regulates the light source lenses 50 in a predetermined direction is arranged on the outer peripheral surface of the group of light source lenses 50.
- the fixing member 52 is made of a resin material such as rubber or plastic.
- since each light source lens 50 is formed separately from its LED 17, it is difficult to install the lenses by themselves so that the optical axis formed by the convex tip of each light source lens 50 faces the projection LCD 19 at the correct angle.
- therefore, the fixing member 52 surrounds the group of light source lenses 50 and brings the outer peripheral surfaces of the light source lenses 50 into contact with one another, thereby regulating the position of each light source lens 50 so that its optical axis faces the projection LCD 19 at the correct angle.
- alternatively, the fixing member 52 may be made of a rigid material formed to a predetermined size, or of a material having sufficient elasticity, so that the position of each light source lens 50 is restricted to its predetermined position by that elasticity.
- FIGS. 24 (a) and 24 (b) are diagrams for explaining a fixing member 60, another example of the fixing member 52 described with reference to FIGS. 23 (a) and 23 (b) for fixing the light source lenses 50 at predetermined positions. FIG. 24 (a) is a perspective view showing a state where the light source lenses 50 are fixed, and FIG. 24 (b) is a partial sectional view thereof.
- the same members as described above are denoted by the same reference numerals, and description thereof will be omitted.
- the fixing member 60 is formed as a plate having conical through holes 60a whose sectional profile follows the outer peripheral surface of each light source lens 50; each light source lens 50 is inserted into and fixed in its through hole 60a.
- an elastic urging plate 61 is interposed between the fixing member 60 and the substrate 16, and an annular elastic O-ring 62 is arranged between the urging plate 61 and the lower surface of each light source lens 50 so as to surround the electrodes 51.
- the LED 17 enclosed in each light source lens 50 is mounted on the substrate 16 via the urging plate 61 and the electrodes 51, which penetrate through holes formed in the substrate 16.
- according to this configuration, each light source lens 50 is fixed by insertion through a through hole 60a whose sectional profile follows the lens's outer peripheral surface, so the optical axis of each light source lens 50 can be fixed facing the projection LCD 19 at the correct angle even more reliably.
- the LED 17 can be urged to a correct position by the urging force of the O-ring 62 and fixed.
- moreover, an impact that may occur when transporting the device 1 is absorbed by the elasticity of the O-ring 62, preventing the problem that the light source lens 50 is displaced by the impact and can no longer irradiate the projection LCD 19 perpendicularly.
- In the above embodiment, the processes of S1211 and S1215 in FIG. 12 (b) correspond to the imaging means, the imaging process, or the imaging step.
- The processing of S1221 in FIG. 12 (c) corresponds to the luminance image generating means, the luminance image generating process, or the luminance image generating step.
- The processing of S1222 in FIG. 12 (c) corresponds to the code image generating means, the code image generating process, or the code image generating step.
- The processing of S1006 in FIG. 10 corresponds to the three-dimensional shape calculating means, the three-dimensional shape calculating process, or the three-dimensional shape calculating step.
- The process of S1408 in FIG. 14 corresponds to the first pixel detecting means, the first pixel detecting process, or the first pixel detecting step.
- The processing of S1501 in FIG. 15 corresponds to the luminance image extracting means, the luminance image extracting process, or the luminance image extracting step.
- Part of the processing in S1603 and S1604 in FIG. 16 corresponds to the pixel area specifying means, the pixel area specifying process, or the pixel area specifying step.
- Part of the processing of S1604 in FIG. 16 corresponds to the approximate expression calculating means, the approximate expression calculating process, or the approximate expression calculating step.
- The processing of S1505 and S1507 in FIG. 15 corresponds to the boundary coordinate calculating means, the boundary coordinate calculating process, or the boundary coordinate calculating step.
- the process of acquiring and displaying a flattened image has been described as the flattened image mode.
- for example, a well-known OCR function may be provided, and the flattened image may be read by that OCR function.
- in this case, the text written on the document can be read with higher accuracy than when a curved document is read directly by the OCR function.
- in the above embodiment, the values are averaged using the approximate polynomial efCCDY [j], but the method of combining the values is not limited to this. For example, it is possible to take a simple average of the values, to use the median of the values, to calculate an approximate expression from the values and use the detection position in that expression as the boundary coordinate, or to obtain the value by a statistical operation.
- the boundary coordinates of the code of interest detected by the boundary coordinate detection means may be calculated with a higher resolution than the resolution of the imaging means.
- according to this configuration, the boundary coordinates of the code of interest detected by the boundary coordinate detection means or the boundary coordinate detection step are calculated with a resolution higher than that of the imaging means or the imaging step (that is, with sub-pixel accuracy), so the three-dimensional shape of the subject can be calculated with high accuracy.
- the pixel area specifying means may refer to the code image and specify the pixel area from the pixel column including the first pixel and, separately, from at least one or more pixel columns arranged continuously from the pixel column including the first pixel.
- the approximate expression calculating means may calculate an approximate expression for each pixel column in the pixel area.
- the boundary coordinate detecting means may calculate a position having a predetermined threshold value for luminance for each of the approximate expressions, and detect the boundary coordinates of the code of interest based on the calculation result.
- the luminance image extracting means may extract at least one or more extracted luminance images.
- the pixel region specifying means may specify a pixel region for each extracted luminance image by referring to a code image.
- the approximate expression calculating means may calculate an approximate expression for each pixel column in the pixel region detected from each extracted luminance image.
- the boundary coordinate detecting means may calculate a position having a predetermined threshold value for luminance for each of the approximate expressions, and detect the boundary coordinates of the code of interest based on the calculation result.
- according to this configuration, the accuracy of detecting the boundary coordinates can be improved compared with the case where the boundary coordinates are obtained using a single approximate expression; therefore, the three-dimensional shape of the subject can be calculated with higher accuracy.
- the luminance image extracting means may extract at least one or more extracted luminance images.
- the pixel region specifying means may, for each extracted luminance image, refer to the code image and specify a pixel area from the pixel column including the first pixel and, separately, from at least one or more pixel columns arranged continuously from the pixel column including the first pixel.
- the approximate expression calculating means may calculate an approximate expression for each pixel row in the pixel region for each extracted luminance image.
- the boundary coordinate detecting means may calculate, for each extracted luminance image, a position having the predetermined luminance threshold for each approximate expression, and detect the boundary coordinates of the code of interest based on the calculation result.
- according to this configuration, a position having the predetermined threshold is calculated for each of at least four or more approximate expressions calculated based on at least two or more pixel columns, so the accuracy of detecting the boundary coordinates can be improved compared with the case where the boundary coordinates are obtained using a single approximate expression. Therefore, the three-dimensional shape of the subject can be calculated with even higher accuracy.
- the boundary coordinate detecting means may calculate a weighted average value or a median value of the positions calculated for each approximate expression, and detect the boundary coordinates of the code of interest based on the calculation result.
- in this case, the boundary coordinate detecting means or the boundary coordinate detecting step calculates the weighted average value or the median value of the positions calculated for each approximate expression and detects the boundary coordinates of the code of interest from that result; since only a simple calculation is added, the load on the arithmetic unit is kept low and the processing can be sped up.
- alternatively, the boundary coordinate detecting means may calculate an approximate expression of the positions calculated for each approximate expression, and detect the boundary coordinates of the code of interest based on the calculation result.
- in this case, the boundary coordinate detecting means or the boundary coordinate detecting step calculates the approximate expression of the positions calculated for each approximate expression and detects the boundary coordinates of the code of interest from that result; the three-dimensional shape can therefore be calculated with higher accuracy than when a weighted average or median is used.
- further, the boundary coordinate detecting means may calculate, for each extracted luminance image, an approximate expression of the positions calculated for each approximate expression, and calculate the boundary coordinates of the code of interest for that extracted luminance image based on the calculation result;
- a weighted average or median may then be calculated over the boundary coordinates of the code of interest calculated for each extracted luminance image unit, and the boundary coordinates of the code of interest may be detected based on that result.
- in this case, the boundary coordinate detecting means calculates the approximate expression of the positions calculated for each approximate expression in units of extracted luminance images, calculates the boundary coordinates of the code of interest based on the calculation result, then calculates a weighted average or median over the boundary coordinates calculated for each extracted luminance image unit, and detects the boundary coordinates of the code of interest based on that result. Therefore, the load on the arithmetic unit is kept low, requiring only a simple additional calculation, and the processing can be sped up.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/536,340 US7672505B2 (en) | 2004-03-31 | 2006-09-28 | Apparatus, method and program for three-dimensional-shape detection |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2004105426A JP2005293075A (ja) | 2004-03-31 | 2004-03-31 | 3次元形状検出装置、3次元形状検出方法、3次元形状検出プログラム |
| JP2004-105426 | 2004-03-31 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US11/536,340 Continuation-In-Part US7672505B2 (en) | 2004-03-31 | 2006-09-28 | Apparatus, method and program for three-dimensional-shape detection |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2005095886A1 true WO2005095886A1 (ja) | 2005-10-13 |
Family
ID=35063876
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2005/005859 Ceased WO2005095886A1 (ja) | 2004-03-31 | 2005-03-29 | 3次元形状検出装置、3次元形状検出方法、3次元形状検出プログラム |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US7672505B2 (ja) |
| JP (1) | JP2005293075A (ja) |
| WO (1) | WO2005095886A1 (ja) |
Families Citing this family (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7756327B2 (en) * | 2002-07-26 | 2010-07-13 | Olympus Corporation | Image processing system having multiple imaging modes |
| JP4198114B2 (ja) * | 2002-07-26 | 2008-12-17 | オリンパス株式会社 | 撮影装置、画像処理システム |
| CN101701849A (zh) * | 2004-01-23 | 2010-05-05 | 奥林巴斯株式会社 | 图像处理系统以及照相机 |
| GB2410794A (en) * | 2004-02-05 | 2005-08-10 | Univ Sheffield Hallam | Apparatus and methods for three dimensional scanning |
| US7957007B2 (en) * | 2006-05-17 | 2011-06-07 | Mitsubishi Electric Research Laboratories, Inc. | Apparatus and method for illuminating a scene with multiplexed illumination for motion capture |
| US8672225B2 (en) | 2012-01-31 | 2014-03-18 | Ncr Corporation | Convertible barcode reader |
| CA2707246C (en) | 2009-07-07 | 2015-12-29 | Certusview Technologies, Llc | Automatic assessment of a productivity and/or a competence of a locate technician with respect to a locate and marking operation |
| US8532342B2 (en) * | 2008-02-12 | 2013-09-10 | Certusview Technologies, Llc | Electronic manifest of underground facility locate marks |
| US8290204B2 (en) | 2008-02-12 | 2012-10-16 | Certusview Technologies, Llc | Searchable electronic records of underground facility locate marking operations |
| US8330771B2 (en) * | 2008-09-10 | 2012-12-11 | Kabushiki Kaisha Toshiba | Projection display device and control method thereof |
| US8572193B2 (en) | 2009-02-10 | 2013-10-29 | Certusview Technologies, Llc | Methods, apparatus, and systems for providing an enhanced positive response in underground facility locate and marking operations |
| US8902251B2 (en) | 2009-02-10 | 2014-12-02 | Certusview Technologies, Llc | Methods, apparatus and systems for generating limited access files for searchable electronic records of underground facility locate and/or marking operations |
| CA2897462A1 (en) * | 2009-02-11 | 2010-05-04 | Certusview Technologies, Llc | Management system, and associated methods and apparatus, for providing automatic assessment of a locate operation |
| WO2010098954A2 (en) * | 2009-02-27 | 2010-09-02 | Body Surface Translations, Inc. | Estimating physical parameters using three dimensional representations |
| WO2012033602A1 (en) | 2010-08-11 | 2012-03-15 | Steven Nielsen | Methods, apparatus and systems for facilitating generation and assessment of engineering plans |
| WO2012023256A2 (en) * | 2010-08-19 | 2012-02-23 | Canon Kabushiki Kaisha | Three-dimensional measurement apparatus, method for three-dimensional measurement, and computer program |
| JP5984409B2 (ja) * | 2012-02-03 | 2016-09-06 | キヤノン株式会社 | 三次元計測システム及び方法 |
| JP6041513B2 (ja) | 2012-04-03 | 2016-12-07 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
| US9188433B2 (en) | 2012-05-24 | 2015-11-17 | Qualcomm Incorporated | Code in affine-invariant spatial mask |
| US9229580B2 (en) * | 2012-08-03 | 2016-01-05 | Technokey Company Limited | System and method for detecting object in three-dimensional space using infrared sensors |
| US10368053B2 (en) * | 2012-11-14 | 2019-07-30 | Qualcomm Incorporated | Structured light active depth sensing systems combining multiple images to compensate for differences in reflectivity and/or absorption |
| JP6320051B2 (ja) * | 2014-01-17 | 2018-05-09 | キヤノン株式会社 | 三次元形状計測装置、三次元形状計測方法 |
| US10032279B2 (en) * | 2015-02-23 | 2018-07-24 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
| DE102015212910A1 (de) | 2015-07-09 | 2017-01-12 | Sac Sirius Advanced Cybernetics Gmbh | Vorrichtung zur Beleuchtung von Gegenständen |
| US9754182B2 (en) | 2015-09-02 | 2017-09-05 | Apple Inc. | Detecting keypoints in image data |
| CN110672036B (zh) * | 2018-07-03 | 2021-09-28 | 杭州海康机器人技术有限公司 | 确定投影区域的方法及装置 |
| JP7219034B2 (ja) * | 2018-09-14 | 2023-02-07 | 株式会社ミツトヨ | 三次元形状測定装置及び三次元形状測定方法 |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5646733A (en) * | 1996-01-29 | 1997-07-08 | Medar, Inc. | Scanning phase measuring method and system for an object at a vision station |
| EP1067362A1 (en) * | 1999-07-09 | 2001-01-10 | Hewlett-Packard Company | Document imaging system |
| JP3519698B2 (ja) * | 2001-04-20 | 2004-04-19 | 照明 與語 | 3次元形状測定方法 |
| US7391523B1 (en) * | 2003-06-02 | 2008-06-24 | K-Space Associates, Inc. | Curvature/tilt metrology tool with closed loop feedback control |
| WO2005031252A1 (ja) * | 2003-09-25 | 2005-04-07 | Brother Kogyo Kabushiki Kaisha | 3次元形状検出装置、3次元形状検出システム、及び、3次元形状検出プログラム |
- 2004-03-31 JP JP2004105426A patent/JP2005293075A/ja not_active Withdrawn
- 2005-03-29 WO PCT/JP2005/005859 patent/WO2005095886A1/ja not_active Ceased
- 2006-09-28 US US11/536,340 patent/US7672505B2/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS60152903A (ja) * | 1984-01-21 | 1985-08-12 | Kosuke Sato | 位置計測方法 |
| JPH05332737A (ja) * | 1991-03-15 | 1993-12-14 | Yukio Sato | 形状計測装置 |
| JP2000055636A (ja) * | 1998-08-06 | 2000-02-25 | Nekusuta:Kk | 三次元形状計測装置及びパターン光投影装置 |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7630088B2 (en) | 2005-04-15 | 2009-12-08 | Brother Kogyo Kabushiki Kaisha | Apparatus for measurement of 3-D shape of subject using transformable holder with stable and repeatable positioning of the same subject |
| US7576845B2 (en) | 2006-03-30 | 2009-08-18 | Brother Kogyo Kabushiki Kaisha | Three-dimensional color and shape measuring device |
| CN108664534A (zh) * | 2017-04-02 | 2018-10-16 | 田雪松 | 应用服务数据的获取方法及系统 |
| CN112580382A (zh) * | 2020-12-28 | 2021-03-30 | 哈尔滨工程大学 | 基于目标检测二维码定位方法 |
| CN112580382B (zh) * | 2020-12-28 | 2022-06-17 | 哈尔滨工程大学 | 基于目标检测二维码定位方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| US7672505B2 (en) | 2010-03-02 |
| JP2005293075A (ja) | 2005-10-20 |
| US20070031029A1 (en) | 2007-02-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2005095886A1 (ja) | 3次元形状検出装置、3次元形状検出方法、3次元形状検出プログラム | |
| JP2005293075A5 (ja) | ||
| JP4734843B2 (ja) | 3次元形状検出装置 | |
| WO2006035736A1 (ja) | 3次元情報取得方法および3次元情報取得装置 | |
| JP2007271395A (ja) | 3次元色形状計測装置 | |
| TWI668997B (zh) | 產生全景深度影像的影像裝置及相關影像裝置 | |
| CN106127745B (zh) | 结构光3d视觉系统与线阵相机的联合标定方法及装置 | |
| JP2005291839A5 (ja) | ||
| CN106537252B (zh) | 转光三维成像装置和投射装置及其应用 | |
| JP2004132829A (ja) | 3次元撮影装置と3次元撮影方法及びステレオアダプタ | |
| JP4552485B2 (ja) | 画像入出力装置 | |
| US20220124253A1 (en) | Compensation of three-dimensional measuring instrument having an autofocus camera | |
| US20210131798A1 (en) | Structured light projection optical system for obtaining 3d data of object surface | |
| US20220358678A1 (en) | Compensation of three-dimensional measuring instrument having an autofocus camera | |
| JP2005352835A (ja) | 画像入出力装置 | |
| JP4552484B2 (ja) | 画像入出力装置 | |
| JP2005293290A5 (ja) | ||
| JP2006031506A (ja) | 画像入出力装置 | |
| WO2006112297A1 (ja) | 3次元形状測定装置 | |
| CN212779132U (zh) | 深度数据测量设备和结构光投射装置 | |
| WO2005122553A1 (ja) | 画像入出力装置 | |
| JP2006267031A (ja) | 3次元入力装置および3次元入力方法 | |
| WO2007119396A1 (ja) | 三次元情報測定装置 | |
| JP2006004010A (ja) | 画像入出力装置 | |
| JP2527159B2 (ja) | 焦点検出装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AK | Designated states | Kind code of ref document: A1. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| | AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | WWE | Wipo information: entry into national phase | Ref document number: 11536340. Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWW | Wipo information: withdrawn in national office | Country of ref document: DE |
| | WWP | Wipo information: published in national office | Ref document number: 11536340. Country of ref document: US |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 05727546. Country of ref document: EP. Kind code of ref document: A1 |