
WO2013155379A2 - Orthographic image capture system - Google Patents


Info

Publication number
WO2013155379A2
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
orthographic
active illumination
pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2013/036314
Other languages
French (fr)
Other versions
WO2013155379A3 (en)
Inventor
Kari Myllykoski
Shaun Lamont
Kevin CAIN
Jim GROTELUESCHEN
Mark Freeman
Dejan Jovanovic
Keith BEARDMORE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SMART PICTURE TECHNOLOGIES Inc
Original Assignee
SMART PICTURE TECHNOLOGIES Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SMART PICTURE TECHNOLOGIES Inc filed Critical SMART PICTURE TECHNOLOGIES Inc
Publication of WO2013155379A2 publication Critical patent/WO2013155379A2/en
Publication of WO2013155379A3 publication Critical patent/WO2013155379A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume

Definitions

  • the calibration is accomplished by capturing multiple known Image Board and Distance data images.
  • the Camera(s), Active Illumination device(s), and Software may be integrated with the computer, software, and software controllers within a single electromechanical device such as a laptop, tablet, phone, or PDA.
  • the Active Illumination device(s) may be an additional module, added as clamps, shells, sleeves, or any similar modification to a device that already has a camera, computer, and software to which the orthographic image capture system software can be added.
  • the Camera(s) and Active Illumination device(s) may have overlapping optical paths with common fields of view, and this may be modified by multiple assemblies of Camera or Active Illumination combined in a fixed array. This provides a means to capture enough information to make corrections to the image based on distortions to the image caused by the optics of the camera, for example to correct the pincushion or barrel distortion of a telephoto, wide angle, or fish eye lens, as well as other optical aberrations such as astigmatism and coma; a sketch of such a correction follows.
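As a sketch of the lens-distortion correction mentioned above, assuming OpenCV's standard radial/tangential distortion model; the intrinsics and coefficients are hypothetical and would normally come from the calibration process described herein:

    import cv2
    import numpy as np

    # Hypothetical camera intrinsics (focal lengths and principal point, pixels).
    K = np.array([[1400.0, 0.0, 960.0],
                  [0.0, 1400.0, 540.0],
                  [0.0, 0.0, 1.0]])
    # Hypothetical distortion coefficients: k1, k2, p1, p2, k3.
    dist = np.array([-0.28, 0.11, 0.0, 0.0, 0.0])

    img = cv2.imread("captured.jpg")
    undistorted = cv2.undistort(img, K, dist)  # removes barrel/pincushion warp
    cv2.imwrite("undistorted.jpg", undistorted)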
  • the triggering of the Active illumination may be synchronized with the panoramic view image capturing to capture multiple planar surfaces in a panoramic scene such as all of the walls of a room.
  • Lens Systems and Filter Systems, and Active Illumination devices with different diffractive optical elements, can be added to or substituted for the existing optics on the Camera(s) and Active Illumination device(s) to provide for different operable ranges and usage environments.
  • Computer is electronically linked to Camera and Active Illumination with: Electrical And Command To Camera and Electrical And Command To Active Illumination.
  • Power for the Camera and Active Illumination may be supplied and controlled by the Computer and Software.
  • the user has an assembled or integrated Orthographic Image Capture System, consisting of all Camera, Active Illumination, Computer and Software elements and sub-elements.
  • the Active Illumination pattern is non-dynamic, fixed in geometry, and matches the pattern and geometry configuration used during the calibration process with the Calibration System, Image Board, optional Distance Tool, and Calibration Map.
  • the Calibration System generates a unique Calibration Data file, which is stored with the Software.
  • the user aims the Orthographic Image Capture System in a pose that allows the Camera and Active Illumination device to cover the same physical space upon a selected, predominantly planar surface that is to be imaged.
  • Computer and Software are then triggered by a software or hardware trigger that sends instructions to Timing To Camera and Timing To Active Illumination, via Electrical And Command To Camera and Electrical And Command To Active Illumination, which then emits radiation that is focused, split or diffracted by the Active Illumination Lens System in a fixed geometric manner.
  • the Camera may have a Filter System, added or integral, which enables a more effective capture of the data emitted by the Active Illumination and Lens System, by reducing the background radiation or limiting the radiation wavelengths that are captured by the Camera, improving the signal to noise ratio for Software processing.
  • the data capture procedure delivers information for processing into Raw Data.
  • the Raw Data is integrated with Calibration Data with Calibration Processing, to generate Export Data and Display Data.
  • the Export Data and Display Data is a common file format image file: either the image displayed in corrected world coordinates, where each pixel has a known dimension and aspect ratio; the untransformed image of the scene with selected dimensional information transformed into corrected world coordinates; or the image integrated with other similarly corrected images in a fashion that forms natural relative scalar qualities in 2 dimensions.
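As one illustration of such descriptive notation, a sidecar file can carry the per-pixel scale at the point of planarity; the format and field names below are hypothetical, not prescribed by the patent:

    import json

    # Hypothetical per-pixel scale for the rectified, exported image.
    notation = {
        "image": "orthographic.png",
        "mm_per_pixel_x": 2.5,      # horizontal size of one pixel at the plane
        "mm_per_pixel_y": 2.5,      # vertical size of one pixel at the plane
        "pixel_aspect_ratio": 1.0,  # aspect at the point of planarity
        "plane_distance_mm": 3200,  # range to the imaged plane
    }
    with open("orthographic.json", "w") as f:
        json.dump(notation, f, indent=2)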
  • the Orthographic Image Capture System may consist of a plurality of Cameras, and Active Illumination elements that are mounted in an array that is calibrated under a Calibration System.
  • Figure 20 illustrates a flow chart 400 of major data processing steps for the software and hardware of an orthographic image capture system.
  • the first step illustrated is a synchronized triggering of the active imaging device onto the planar object 402.
  • the next step is capturing of the digital image containing the active imaging pattern 404.
  • the next step is processing the image data to extract the position of characteristic elements of the active imaging pattern 406.
  • the software then calculates a transformation matrix and the non-orthographic orientation and position of the camera relative to the plane of the object 408 and 410. These are calculable based on determining the distortions and position shift to the pattern imaged and determining corrections that would restore the geometric ratios of the active illumination pattern. Information about the distance to the imaged surface is also contained in the imaged pattern.
  • the software creates a transformed image of the object as though the picture was taken from a virtual orthographic viewing angle on the object and presents the view to the user 412 and 414.
  • the user is then provided with an opportunity to select key points of dimensional interest in the image using a mouse and keyboard and/or any other similar means such as a touch screen 416.
  • the software processes these points and provides the user with the actual dimensional information based on the dimensional points of interest selected by the user 418.
  • An example of the last two steps is illustrated in Figure 22, where the user has selected the area of the wall 450 minus the three windows 452, 454, 456 and is provided with an answer of 114 square feet.
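Steps 406 through 410 amount to fitting a plane-to-plane mapping between the pattern points detected in the image and the pattern's known geometry. A minimal sketch, assuming OpenCV; the point values are hypothetical:

    import cv2
    import numpy as np

    # Pattern points detected in the captured image (step 406), in pixels.
    detected = np.float32([[512, 384], [700, 380], [515, 560],
                           [705, 565], [608, 470]])

    # Known undistorted geometry of the projected pattern (cf. Figure 4),
    # in pattern units; hypothetical values.
    reference = np.float32([[0, 0], [100, 0], [0, 100],
                            [100, 100], [50, 50]])

    # Transformation matrix (steps 408/410); the corrections that restore the
    # pattern's geometric ratios also encode the camera's orientation.
    H, _ = cv2.findHomography(detected, reference)
    print(H)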
  • Figure 21 illustrates an alternative embodiment of the data processing flow of a software implementation of an orthographic image capture system.
  • in the first path, the user is shown the raw image and selects key dimensional points of interest 504.
  • the second path is that a separate routine automatically identifies key dimensional locations in the image 506.
  • the software analyzes the image to locate key geometric points of interest in the active illumination pattern projected on the imaged object 508.
  • the software determines a transformation matrix and scene geometry 510 and 512.
  • the software applies the transformation matrix to the key points of dimensional interest that were automatically determined and/or input by the user 514, and then presents the user with the dimensional information requested or automatically selected in step 506.
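To illustrate the measurement step, the selected pixel points can be mapped through the transformation matrix and the enclosed polygon area computed in world units. A sketch, assuming OpenCV and a homography H as above; the scale factor is hypothetical:

    import cv2
    import numpy as np

    def world_area(points_px, H, units_per_px=1.0):
        # Map selected pixel points through homography H and return the
        # polygon area in world units (shoelace formula).
        pts = cv2.perspectiveTransform(
            np.float32(points_px).reshape(-1, 1, 2), H).reshape(-1, 2)
        pts = pts * units_per_px
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    # As in Figure 22 (values hypothetical): wall area minus its windows.
    # area = world_area(wall_pts, H) - sum(world_area(w, H) for w in window_pts)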
  • Figure 23 illustrates the same undistorted pattern illustrated in Figure 4.
  • Figure 24 and Figure 25 illustrate examples of distortion of the pattern in an orthographic image capture system employing the fixed relationship of the camera and an active illumination device described in "A Simple Method for Range Finding via Laser Triangulation" by Hoa G. Nguyen and Michael R. Blackburn.
  • the distortion(s) illustrated in Figure 25 reflect a camera angle similar to the angle illustrated in Figure 1: of a wall taken from the left, angled to the right (horizontal pan right), but with the camera lowered and looking up at the wall (i.e., vertical tilt up). Note that the points in the pattern 502, 504, 506, 508 move along line segments 512, 514, 516 and 518 respectively.
  • the filtering of the image for the active illumination pattern (step 406 in Figure 20 and step 508 in Figure 21) can be limited to a search for pixels proximate to the line segments 552, 554, 556, 558, and 560 illustrated in Figure 26.
  • This limited search area greatly speeds up the pattern filtering step(s).
  • the horizontal x axis represents the horizontal camera pixels
  • the vertical y axis represents the vertical camera pixels
  • the line segments 552, 554, 556, 558 represent the coordinates along which the laser points may be found, and thus the areas proximate to these line segments are where the search for laser points can be concentrated.
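A sketch of such a restricted search, assuming the calibrated segment endpoints are known; the endpoints and tolerance are hypothetical:

    import numpy as np

    def near_segment(p, a, b, tol=4.0):
        # True if pixel p lies within tol pixels of segment a-b.
        a, b, p = np.float32(a), np.float32(b), np.float32(p)
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return float(np.linalg.norm(ap - t * ab)) <= tol

    # Calibrated shift segments (cf. 552-560 in Figure 26); hypothetical.
    segments = [((820, 400), (900, 430)), ((1100, 400), (1180, 430))]

    # Only candidate pixels near these segments need be tested for laser
    # spots, rather than scanning the whole frame.
    candidates = [(830, 404), (1500, 900)]
    hits = [p for p in candidates
            if any(near_segment(p, a, b) for a, b in segments)]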
  • the fixed projection axis of the active illuminator is slightly offset from the optical axis of the camera, which is useful in obtaining range information as described in Appendix A.
  • the direction of the projection axis of the active illuminator relative to the camera axis has been chosen based on the particular pattern of active illumination such that, as the images of the active illumination dots shift on the camera sensor over the distance range of the orthographic image capture system, the lines of pixels on the camera sensor over which they shift do not intersect.
  • the line segments 512, 514, 516, and 518 do not intersect. This decreases the chance of ambiguity, i.e., of confusing one spot for another in the active illumination pattern. This may be particularly helpful where the active illuminator is a laser fitted with a DOE, which is prone to produce "ghost images".
  • Figure 1 shows the configuration for triangulation ranging (Everett, 1995):
  • Figure 1 Configuration for triangulation ranging.
  • PI and P2 represent two reference points (e.g., camera and laser source), while P3 is a target point.
  • the range B can be determined from the knowledge of the baseline separation A and the angles θ and φ using the law of sines: B = A · sin(θ) / sin(θ + φ)
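A direct transcription of that relation; the baseline and angles below are hypothetical:

    import math

    def triangulate_range(A, theta, phi):
        # Range from baseline A and the angles at the two reference points,
        # via the law of sines: B = A * sin(theta) / sin(theta + phi).
        return A * math.sin(theta) / math.sin(theta + phi)

    # A 7 cm baseline with nearly parallel sight lines gives a range of
    # roughly 1.6 m for these hypothetical angles.
    B = triangulate_range(0.07, math.radians(88.0), math.radians(89.5))
    print(B)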
  • Figure 2 diagrams the setup of our laser and camera.
  • the camera is represented by the image plane, focal point, and optical axis.
  • the laser is directly above the camera, although its exact position is unimportant (we will only deal with the beam of light, represented by line CE in the diagram).
  • the laser is positioned so that the path of the laser and the optical axis form a vertical plane.
  • Point P is the target of interest.
  • x is the projection of point P on the optical axis
  • u is the (vertical) projection of point P on the image plane (the scan line in the image on which the spot is detected).
  • E is the point where the path of the laser intersects the optical axis. We angle the laser such that point E is at the center of the range of interest.
  • this technique also works with the laser path parallel to the optical axis. There is no particular need for accurate determination or setting of the axes, beyond a concern for precision to be discussed later. There is also no need to know the baseline distance between camera and laser, nor the focal length (f) of the camera.
  • Figure 2 Setup of laser and camera.
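Under a pinhole model, similar triangles make the spot's scan line u a linear function of inverse range, u = a + b/x, which is why neither the baseline nor the focal length needs to be known: two ground-truth ranges suffice to fit the constants. A sketch under that assumption, with hypothetical numbers:

    def fit_ranging(u1, x1, u2, x2):
        # Fit u = a + b/x from two ground-truth (scan line, range) pairs.
        b = (u1 - u2) / (1.0 / x1 - 1.0 / x2)
        a = u1 - b / x1
        return a, b

    def range_from_u(u, a, b):
        # Invert u = a + b/x to recover the target range x.
        return b / (u - a)

    # Hypothetical calibration at 0.5 m and 2.0 m:
    a, b = fit_ranging(412.0, 0.5, 247.0, 2.0)
    print(range_from_u(310.0, a, b))  # about 0.93 m for a spot on line 310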
  • y1 and y2 are also difficult to measure. They are the offsets perpendicular to the optical axis, not the height from the ground. Also, the optical axis does not necessarily pass through the center of the image, but varies from camera to camera.
  • Figure 3 Laser and camera combination. To assess target range, the target is acquired and the optical axis automatically placed at its center using the pan-and-tilt unit. The laser illuminates a spot on the target, and the vertical position of the spot in the image is used for range calculations. To accommodate small targets, the distance v (separation between laser spot and optical axis) must be kept small. This in turn means that the baseline separation between the laser source and the camera cannot be large, and the angle of the laser path must be small (fairly parallel to the optical axis). We chose to place the laser approximately 7 cm above the camera with a slight downward tilt (E ≈ 1.2 m). The distances we were interested in were between 0.5 m and 2 m.
  • Figure 4 shows the robot directing a remote manipulator arm to reach a cup being suspended as a target (see [Blackburn and Nguyen, 1994]).
  • Figure 4 Robot directing remote manipulator arm.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)

Description

ORTHOGRAPHIC IMAGE CAPTURE SYSTEM
RELATED APPLICATION
This application is a utility application claiming priority of United States provisional applications Serial No. 61/623,178 filed on 12 April 2012 and Serial No. 61/732,636 filed on 3 December 2012.
TECHNICAL FIELD OF THE INVENTION
[0001] The present invention generally relates to optical systems, more specifically to optical systems for changing the view of a photograph from one viewing angle to a virtual viewing angle, more specifically to changing the view of a photograph to a dimensionally correct orthographic view, and more specifically to extracting correct dimensions of objects from photographic images.
BACKGROUND OF THE INVENTION
[0002] The present invention relates generally, and more specifically, to an image data capture and processing system consisting of a digital imaging device, active illumination source, computer, and software that generates 2 dimensional data sets from which real world coordinate information with planarity, scale, aspect, and innate dimensional qualities can be extracted from the captured image, in order to transform the image data into other geometric perspectives and to extract real dimensional data from the imaged objects. The image transformations may be homographic transformations, orthographic transformations, perspective transformations, or other transformations that take into account distortions in the captured image caused by the camera angle. [0003] In the following specification, we use the name Orthographic Image Capture System to refer to a system that extracts real world coordinate accurate dimensional data from imaged objects. Although the Orthographic transformation is one specific type of transformation that might be used, there are a number of similar geometric
transformations that can also be used without changing the design and layout of the Orthographic Image Capture System.
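For illustration, a rectification of this kind can be expressed as a planar homography between the captured image and the target view. The following is a minimal sketch, assuming OpenCV; the point coordinates and file names are hypothetical, not taken from the patent:

    import cv2
    import numpy as np

    # Four reference points detected in the captured (perspective) image, in
    # pixels. In practice these come from the active illumination pattern.
    image_pts = np.float32([[412, 300], [898, 341], [884, 705], [401, 752]])

    # Where those points should land in the rectified (orthographic) view,
    # based on the known geometry of the projected pattern.
    world_pts = np.float32([[0, 0], [500, 0], [500, 500], [0, 500]])

    H = cv2.getPerspectiveTransform(image_pts, world_pts)
    raw = cv2.imread("captured.jpg")
    ortho = cv2.warpPerspective(raw, H, (500, 500))  # virtual orthographic view
    cv2.imwrite("orthographic.jpg", ortho)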
[0004] There is a need for an improved optical system for changing the view of an image from an actual viewing angle to a virtual viewing angle. There is a need for using such a system to create dimensionally correct views of an image from an image taken from a non-orthographic viewing angle. There is a need to be able to extract dimensional information of an object from images taken from a non-orthographic viewing angle.
BRIEF SUMMARY OF THE INVENTION
[0005] The invention generally relates to 2 dimensional textures with applied transforms, produced by a system which includes a digital imaging sensor, an active illumination device, a calibration system, a computing device, and software to process the digital imaging data.
[0006] There has thus been outlined, rather broadly, some of the features of the invention in order that the detailed description thereof may be better understood, and in order that the present contribution to the art may be better appreciated. There are additional features of the invention that will be described hereinafter.
[0007] In this respect, before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction or to the arrangements of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting.
[0008] An object is to provide an orthographic image capture system for an image data capture and processing system, consisting of a digital imaging device, active illumination source, computer and software that generates 2d orthographic data sets, with planarity, scale, aspect, and innate dimensional qualities.
[0009] Another object is to provide an Orthographic Image Capture System that allows a digital camera or imager data to be optically corrected, by using a software system, for a variety of lens distortions.
[0010] Another object is to provide an Orthographic Image Capture System that has an active illumination device mounted to the digital imaging device in a secure and consistent manner, with both devices emitting and capturing data within a common field of view.
[0011] Another object is to provide an Orthographic Image Capture System that has a computer and software system that triggers the digital imager to capture an image, or series of images in which the active illumination data is also present.
[0012] Another object is to provide an Orthographic Image Capture System that has a computer and software system that integrates digital imager data with active illumination data, synthesizing and creating a 2 dimensional image with corrected planarity and orthographically rectified information.
[0013] Another object is to provide an Orthographic Image Capture System that has a computer and software system that integrates digital imager data with active illumination data, synthesizing and creating a 2 dimensional image with a scalar information, aspect ratio and dimensional qualities of pixels within scene at the distance point of planarity during image capture.
[0014] Another object is to provide an Orthographic Image Capture System that has a software system that integrates the planarity, scalar, and aspect information, to create a corrected data set, that can be exported in a variety of common file formats.
[0015] Another object is to provide an Orthographic Image Capture System that has a software system that creates additional descriptive notation in or with the common file format, to describe the image pixel scalar, dimension and aspect values, at a point of planarity.
[0016] Another object is to provide an Orthographic Image Capture System that has a software system that displays the corrected image.
[0017] Another object is to provide an Orthographic Image Capture System that has a software system that can export the corrected data set and additional descriptive notation.
[0018] Other objects and advantages of the present invention will become obvious to the reader and it is intended that these objects and advantages are within the scope of the present invention. To the accomplishment of the above and related objects, this invention may be embodied in the form illustrated in the accompanying drawings, attention being called to the fact, however, that the drawings are illustrative only, and that changes may be made in the specific construction illustrated and described within the scope of this application.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings in which like reference numerals indicate like features and wherein:
[0020] FIGURE 1 illustrates a top down view of an orthographic image capture system capturing an orthographic image of a wall with three windows;
[0021] FIGURE 2 illustrates a captured image taken from a non-orthographic viewing angle;
[0022] FIGURE 3 illustrates a virtual orthographic image of the wall created from the image captured from a non-orthographic camera angle;
[0023] FIGURE 4 illustrates in greater scale the illumination pattern shown in Figure 2 and Figure 3;
[0024] FIGURE 5 illustrates an alternative illumination pattern;
[0025] FIGURE 6 illustrates an alternative illumination pattern;
[0026] FIGURE 7 illustrates an alternative illumination pattern;
[0027] FIGURE 8 illustrates an alternative illumination pattern;
[0028] FIGURE 9 illustrates an alternative illumination pattern;
[0029] FIGURE 10 illustrates a diffractive optical element (DOE) for generating the pattern of Figure 4;
[0030] FIGURE 11 illustrates the five point pattern generated by the DOE of Figure 10;
[0031] FIGURE 12 illustrates an upper perspective view of an embodiment of a system with a single Camera and single Active Illumination configured in a common housing;
[0032] FIGURE 13 illustrates an upper perspective view of an embodiment of a system with a single Camera and dual Active Illumination configured in a common housing;
[0033] FIGURE 14 illustrates an upper perspective view of an embodiment of a system with a single Camera and Active Illumination configured in individual housings, with adaptor to fix the relative relationship of the housings;
[0034] FIGURE 15 illustrates an upper perspective view of an embodiment of a system with a single Camera and dual Active Illumination configured in individual housings, with adaptor to fix relative relationship of the housings;
[0035] FIGURE 16 illustrates an upper perspective view of an embodiment of a system with dual Cameras and dual Active Illumination configured in individual housings, with adaptor to fix relative relationship of the housings in a horizontal arrangement;
[0036] FIGURE 17 illustrates an upper perspective view of an embodiment of a system with dual Cameras and dual Active Illumination configured in individual housings, with adaptor to fix relative relationship of the housings in vertical arrangement;
[0037] FIGURE 18 illustrates an upper perspective view of an embodiment of a system with a single Camera and dual Active Illumination configured in individual housings, with adaptor to fix relative relationship in vertical arrangement;
[0038] FIGURE 19 illustrates an upper perspective view of an embodiment of a system with dual Cameras and Active Illumination configured in individual housings, with adaptor to fix relative relationship of the housings in a vertical arrangement;
[0039] FIGURE 20 illustrates an embodiment of data processing flow for generating the desired transformed image from the non-transformed raw image;
[0040] FIGURE 21 illustrates an embodiment of data processing flow for generating correct world coordinate dimensions from a non-transformed raw image;
[0041] FIGURE 22 illustrates an embodiment with an example of dimensional data which can be extracted from the digital image;
[0042] FIGURE 23 illustrates the undistorted active illumination pattern of Figure 4;
[0043] FIGURE 24 illustrates the distorted active illumination pattern of Figure 4 for a camera angle like the angle illustrated in Figure 1;
[0044] FIGURE 25 illustrates the distorted active illumination pattern of Figure 4 for a camera angle like the angle illustrated in Figure 1 but lowered so that it was looking up at the wall; and
[0045] FIGURE 26 illustrates the pixel mapping of the distortion ranges of the pattern illustrated in Figure 4 and Figure 23.
DETAILED DESCRIPTION OF THE INVENTION
[0046] Preferred embodiments of the present invention are illustrated in the FIGURES, like numerals being used to refer to like and corresponding parts of the various drawings.
[0047] The present invention generally relates to an improved optical system for changing the view of an image from an actual viewing angle to a virtual viewing angle. The system creates orthographically correct views of an image as well as remapping the image coordinates into a set of geometrically correct world coordinates from an image taken from an arbitrary viewing angle. The system also extracts dimensional information of the object imaged from images of the object taken from an arbitrary viewing angle.
[0048] A. Overview
[0049] Figure 1 illustrates an object (a wall 120 with windows 122, 124, 126) being captured 100 in photographic form by an orthographic image capture system 110. Figure 1 also illustrates two images 130 and 140 of the object 120 generated by the orthographic image capture system. The first image 130 is a conventional photographic image of the object 120 taken from a non-orthographic arbitrary viewing angle 112. The second image 140 is a view of the object 120 as would be seen from a virtual viewing angle 152. In this case the virtual viewing angle 152 is an orthographic viewing angle of the object as would be seen from a virtual camera 150. In view 130 the object (wall 120 with windows 122, 124, 126) is seen in a perspective view as wall 132, and windows 134, 136, and 138: the farthest window 138 appears smallest. In the orthographic view 140, the object (wall 120 with windows 122, 124, 126) is seen in an orthographic perspective as wall 132, and windows 134, 136, and 138: the windows, which are the same size, appear to be the same size in this image.
[0050] The components of the orthographic image capture system 110 illustrated in Figure 1 include the housing 114, a digital imaging optics and sensor (camera 116), and an active illumination device 118. The calibration system, computing device, and software to process the image data are discussed below.
[0051] B. Camera
[0052] The camera 116 is an optical data capture device, with output preferably having multiple color fields in a pattern or array; it is commonly known as a digital camera. The camera's function is to capture the color image data within a scene, including the active illumination data. In other embodiments a black and white camera would work almost as well, as well, or in some cases better than a color camera. In some embodiments of the orthographic image capture system, it may be desirable to employ a filter on the camera that enhances the image projected by the active illumination device for the optical data capture device.
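Where a physical filter is not used, a software approximation is to isolate the illumination by color before locating the pattern. The following is a minimal sketch for a red laser pattern, assuming OpenCV; the threshold and file name are hypothetical:

    import cv2
    import numpy as np

    img = cv2.imread("captured.jpg")            # OpenCV loads as BGR
    b, g, r = cv2.split(img.astype(np.int16))   # widen type to avoid underflow

    # Laser spots are far redder than ambient content, so emphasize red excess.
    red_excess = np.clip(r - (g + b) // 2, 0, 255).astype(np.uint8)
    _, mask = cv2.threshold(red_excess, 80, 255, cv2.THRESH_BINARY)

    # Centroids of the bright regions give candidate pattern spot locations.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    print(centroids[1:])  # label 0 is the background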
[0053] The camera 116 is preferably a digital device that directly records and stores photographic images in digital form. Capture is usually accomplished by use of camera optics (not shown) which capture incoming light, and a photosensor (not shown) which transforms the light amplitude and frequency into colors. The photosensors are typically constructed in an array that allows multiple individual pixels to be generated, with each pixel having a unique area of light capture. The data from the array of photosensors is then stored as an image. These stored images can be uploaded to a computer immediately, stored in the camera, or stored in a memory module. [0054] The camera may be a digital camera that stores images to memory, transmits images, or otherwise makes image data available to a computing device. In some embodiments, the camera shares a housing with the computing device. In some embodiments, the camera includes a computer that performs preprocessing of data to generate and embed information about the image that can later be used by the onboard computer and/or an external computer to which the image data is transmitted or otherwise made available.
[0055] C. Active Illumination
[0056] The active illumination device in one of several embodiments is an optical radiation emission device. The emitted radiation shall have some form of beam focusing to enable precision beam emission, such as light beams generated by a laser. Its function is to emit a beam, or series of beams, at a specific color and angle relative to the camera element. The active illumination has fixed geometric properties that remain static in operation.
[0057] However, in other embodiments, the active illumination can be any source that can generate a beam, or series of beams, that can be captured with the camera, provided that the source can produce a fixed illumination pattern that, once manufactured, installed, and calibrated, does not alter, move, modulate, or change geometry in any way. The fixed pattern of the illumination may be a random or fixed geometric pattern that is of known and predefined structure. The illumination pattern does not need to be visible to the naked eye provided that it can be captured by the camera for the software to detect its location in the image, as further described below.
[0058] The illumination pattern generated by the active illumination device 118 is not illustrated in Figure 1. However, an embodiment of the pattern is illustrated in Figure 2 and Figure 3, which show the images 130 and 140 respectively from Figure 1 in greater detail, including the patterns 162 and 160 respectively projected by the active illumination device 118. The pattern shown in greater detail in Figure 4 is the same pattern projected in Figure 2 and Figure 3. Figure 2 illustrates how the camera sees the pattern 162, while Figure 3 illustrates how the pattern looks (ideally as projected) when the orthographic imaging system creates a virtual orthographic view of the object from the non-orthographic image, with the image coordinates transformed into dimensionally corrected and oriented world coordinates.
[0059] As previously mentioned, Figure 4 illustrates an embodiment of a projection pattern. This pattern is good for capturing orthographic images of a two-dimensional object, such as the wall 120 in Figure 1. Note that the non-orthographic view angle is primarily non-orthographic in one dimension: the pan angle of the camera. In other uses of the system the tilt angle, or both the pan and tilt angles, of the camera may be non-orthographic. The pattern shown in Figure 4 provides enough information in all three non-orthographic conditions: pan off angle, tilt off angle, or both pan and tilt off angle.
[0060] Figure 5, Figure 6, Figure 7, Figure 8, and Figure 9 also illustrate examples of the limitless patterns that can be used. However, in embodiments that also make orthographic corrections to an image captured by a camera based on the distortions caused by the camera's optic system, patterns with more data points, such as Figure 5 and particularly Figure 6, may be more desirable.
[0061] The illumination source 118 may utilize a lens system to allow for precision beam focus and guidance, a diffraction grating, beam splitter, or some other beam separation tool, for generation of multi path beams. A laser is a device that emits light (electromagnetic radiation) through a process of optical amplification based on the stimulated emission of photons. The emitted laser light is notable for its high degree of spatial and temporal coherence, unattainable using other technologies. A focused LED, halogen, or other radiation source may be utilized as the active illumination source.
[0062] Figure 10 and Figure 11 illustrate in greater detail the creation of the pattern illustrated in Figure 4. In a typical embodiment of the systems described herein, the pattern is generated by placing a diffraction grating in front of a laser diode. Figure 10 illustrates a Diffractive Optical Element (DOE) for generating the desired pattern. In an embodiment of the active illumination system 118, the DOE 180 has an active diffraction area 188 diameter of about 5mm, a physical size of about 7mm, and a thickness between 0.5 and 1mm. The DOE is placed before a red laser diode with a nominal wavelength of 635nm with an expected range of 630-640nm. The pattern generated is the five points 191, 192, 193, 194, 195 illustrated in Figure 11. It is critical that at least the ratio of distances between the five points remain constant. If the size of the pattern changes based on the distance between the object and the active illumination device, it may become necessary to be able to detect the distance from the object. In one embodiment of the DOE design described above, the θV 206 and 208 and θH 202 and 204 values are fifteen degrees (15.0°). In another design these angles were 11 degrees (11°) rather than 15. In other embodiments a 530nm green laser was employed. It should be appreciated that these are just two of many possible options.
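The fan-out angle fixes the physical size of the pattern as a function of range: each outer point sits a lateral distance of d · tan(θ) from the central beam at range d, while the ratios between the point spacings stay constant. A small worked sketch:

    import math

    def spot_offset(distance_m, angle_deg=15.0):
        # Lateral offset of an outer pattern point from the central beam
        # at the given range, for a DOE with the stated fan-out angle.
        return distance_m * math.tan(math.radians(angle_deg))

    # For the 15.0 degree design, at 2 m the outer points sit about
    # 0.54 m from the central point; at 1 m, about 0.27 m.
    print(spot_offset(2.0), spot_offset(1.0))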
[0063] Other major components of the orthographic image capture system 110 are a computer and computer instruction sets (software) which perform processing of the image data collected by the camera 116. In the embodiment illustrated in Figure 12, the computer is located in the same housing as the camera 116 and active illumination system 118. In this embodiment the housing also contains a power supply and supporting circuitry for powering the device and connection(s) 212 for charging the power supply. The system 110 also includes communications circuitry 220 to communicate over a wired connection 222 with other electronic devices 224, or wirelessly 228. The system 110 also includes memory(s) for storing instructions and picture data and supporting other functions of the system 110. The system 110 also includes circuitry 230 for supporting the active illumination system 118 and circuitry 240 for supporting the digital camera.
[0064] In the embodiment shown, all of the processing is handled by the CPU (not shown) in the on-board computer 200. However, in other embodiments the processing tasks may be partially or totally performed by firmware programmed processors. In other embodiments, the onboard processors may perform some tasks and outside processors may perform other tasks. For example, the onboard processors may identify the locations of the illumination pattern in the picture, calculate corrections due to the non-orthographic image, save the information, and send it to another computer or data processor to complete other data processing tasks.
[0065] D. Computer
The orthographic image capture system 110 requires that data processing tasks be performed; regardless of the location of the data processing components or how the tasks are divided, the data processing tasks must be accomplished. In the embodiment shown, with an onboard computer 200, no external processing is required. However, the data can be exported to another digital device 224 which can perform the same or additional data processing tasks. For these purposes, a computer is a programmable machine designed to automatically carry out a sequence of arithmetic or logical operations. The particular sequence of operations can be changed readily, allowing the computer to solve more than one kind of problem.
[0066] E. Software
[0067] This is a process system, that allows for information or data to be manipulated in a desired fashion, via a programmable interface, with inputs, and results. The software system controls calibration, operation, timing, camera and active illumination control, data capture processing, data display and export.
[0068] Computer software, or just software, is a collection of computer programs and related data that provides the instructions telling a computer what to do and how to do it.
[0069] F. Calibration System
[0070] This is an item used to provide a sensor system with ground truth information, which serves as a reference data point for information acquired by the sensor system. Integration and processing of calibration data and operation data forms corrected output data.
[0071] One embodiment of a suitable calibration system employs a specific physical item (Image Board) that is of a predetermined size and shape, has a specifically patterned or textured surface, and has known geometric properties. The Active Illumination system emits radiation in a known pattern with fixed geometric properties upon the Image Board, or upon a scene that contains the Image Board. In conjunction with information provided by an optional Distance Tool, and using multiple pose and distance configurations, a Calibration Map is processed and defined for the imaging system.
[0072] The calibration board may be a flat surface containing a superimposed image; a complex manifold surface containing a superimposed image; an image displayed via a computer monitor, television, or other image projection device; or a physical object that has a pattern of features or physical attributes with known geometric properties. The calibration board may be any item that has a unique geometry or textured surface with a matching digital model.
[0073] In another embodiment, only the Distance Tool is used. The camera and active illumination system is positioned perpendicular to the plane surface to be measured; in other words, it is positioned to directly photograph an orthographic image. The Distance Tool is then used to provide the ground truth range to the surface. Data is taken in this manner for multiple distances from the surface and a Calibration Table is compiled.
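By way of illustration only, the Calibration Table compiled in this embodiment might take the following form (a minimal Python/NumPy sketch; the range and pixel-spacing values are hypothetical):

```python
import numpy as np

# Hypothetical calibration records: ground-truth range from the Distance
# Tool paired with the laser-dot spacing observed in the image (pixels).
ranges_m = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
spacing_px = np.array([412.0, 208.0, 139.0, 105.0, 70.0])

def range_from_spacing(observed_px: float) -> float:
    """Interpolate range from the table; spacing falls as range grows,
    so interpolate over the reversed (ascending) arrays."""
    return float(np.interp(observed_px, spacing_px[::-1], ranges_m[::-1]))

print(range_from_spacing(150.0))  # about 1.4 m for these sample values
```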
[0074] G. Connections of Main Elements and Sub-Elements of Invention
[0075] In the orthographic image capture system, the Camera(s) must be mechanically linked to the Active Illumination device(s). In the embodiment 110 illustrated in Figure 1 and Figure 12, the mechanical linkage is based on both the camera 116 and active illumination device 118 being in the same housing 114. This is also true of embodiment 310 illustrated in Figure 13, where the Camera 116 is mechanically linked to the two active illumination devices 118 and 318 by their common housing 114. This would also be true in other embodiments having any other combination of cameras and/or active illumination devices. Figure 14, Figure 15 and Figure 16 have cameras and active illumination devices in separate housings 114 and 314 which are rigidly connected by adaptors 320, which fix the respective cameras 116, 316 and active illumination devices 118 and 318 relative to each other so that the Camera and Active Illumination devices have overlapping fields of view throughout the usable range of the orthographic image capture system. Figure 17, Figure 18 and Figure 19 illustrate embodiments where the mechanical linkage 322 is to housings which are horizontally configured.
[0076] In addition to being mechanically linked, it is preferable though not essential that the Camera and Active Illumination devices be electrically linked. In the embodiment illustrated in Figure 13, the two types of devices (camera(s) and active illumination device(s)) are linked through their respective support circuitry 230 and 240 via the computer 200. Where the devices are in separate housings, there may be a data linkage (not shown) in addition to the mechanical linkage 320 or 322. These linkages are desirable in order to coordinate the active illumination and camera image capture functions in a synchronous manner.
[0077] The calibration is accomplished by capturing multiple images with known Image Board and Distance data.
[0078] H. Further Embodiments of the orthographic image capture system
[0079] The Camera(s), Active Illumination device(s), and Software may be integrated with the computer, software, and software controllers within a single electromechanical device such as a laptop, tablet, phone, or PDA.
[0080] The Active Illumination device(s) may be an additional module, added as clamps, shells, sleeves, or any similar modification to a device that already has a camera, computer, and software, to which the orthographic image capture system software can be added.
[0081] The Camera(s) and Active Illumination device(s) may have overlapping optical paths with common fields of view, and this may be modified by multiple assemblies of Camera or Active Illumination combined in a fixed array. This provides a means to capture enough information to make corrections to the image based on distortions caused by the optics of the camera, for example to correct the pincushion or barrel distortion of a telephoto, wide angle, or fisheye lens, as well as other optical aberrations such as astigmatism and coma.
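By way of illustration only, such lens-distortion corrections could be applied with a standard radial distortion model (a minimal Python/OpenCV sketch; the camera matrix, distortion coefficients and file names are hypothetical and would come from calibration):

```python
import numpy as np
import cv2

# Hypothetical intrinsics and distortion coefficients for the camera.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # k1 < 0: barrel distortion

img = cv2.imread("wall.jpg")
undistorted = cv2.undistort(img, K, dist)  # remove radial distortion
cv2.imwrite("wall_undistorted.jpg", undistorted)
```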
[0082] The triggering of the Active illumination may be synchronized with the panoramic view image capturing to capture multiple planar surfaces in a panoramic scene such as all of the walls of a room.
[0084] Lens systems, filter systems, and Active Illumination devices with different diffractive optical elements can be added to, or substituted for, the existing optics on the Camera(s) and Active Illumination devices to provide for different operable ranges and usage environments.
[0085] The Computer is electronically linked to the Camera and Active Illumination through the Electrical And Command To Camera and Electrical And Command To Active Illumination connections. Power for the Camera and Active Illumination may be supplied and controlled by the Computer and Software.
[0086] I. Operation of Preferred Embodiment
[0087] The user has an assembled or integrated Orthographic Image Capture System, consisting of all Camera, Active Illumination, Computer and Software elements and sub-elements. The Active Illumination pattern is non-dynamic, fixed in geometry, and matches the pattern and geometry configuration used during the calibration process with the Calibration System, Image Board, optional Distance Tool and Calibration Map.
The Calibration System generates a unique Calibration Data file, which is stored with the Software. The user aims the Orthographic Image Capture System in a pose that directs the Camera and Active Illumination device at the same physical region of a selected, predominantly planar surface that is to be imaged. The Computer and Software are then triggered by a software or hardware trigger, which sends instructions to Timing To Camera and Timing To Active Illumination, via Electrical And Command To Camera and Electrical And Command To Active Illumination; the Active Illumination then emits radiation that is focused, split or diffracted by the Active Illumination Lens System in a fixed geometric manner. The Camera may have a Filter System, added or integral, which enables a more effective capture of the radiation emitted by the Active Illumination and Lens System by reducing the background radiation, or by limiting the radiation wavelengths that are captured by the Camera, improving the signal-to-noise ratio for Software processing. The data capture procedure delivers information for processing into Raw Data. The Raw Data is integrated with the Calibration Data by Calibration Processing to generate Export Data and Display Data. The Export Data and Display Data are a common-file-format image file that is displayed in corrected world coordinates, where each pixel has a known dimension and aspect ratio; or the untransformed image of the scene with selected dimensional information that has been transformed into corrected world coordinates; or an image integrated with other similarly corrected images in a fashion that forms natural relative scalar qualities in two dimensions.
[0089] The Orthographic Image Capture System may consist of a plurality of Cameras and Active Illumination elements mounted in an array that is calibrated under a Calibration System.
[0090] What has been described and illustrated herein is a preferred embodiment of the invention along with some of its variations. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations. Those skilled in the art will recognize that many variations are possible within the spirit and scope of the invention in which all terms are meant in their broadest, reasonable sense unless otherwise indicated. Any headings utilized within the description are for convenience only and have no legal or limiting effect.
[0091] Figure 20 illustrates a flow chart 400 of major data processing steps for the software and hardware of an orthographic image capture system. The first step illustrated is a synchronized triggering of the active imaging device onto the planar object 402. The next step is the capture of the digital image containing the active imaging pattern 404. The next step is processing the image data to extract the positions of characteristic elements of the active imaging pattern 406. The software then calculates a transformation matrix and the non-orthographic orientation and position of the camera relative to the plane of the object 408 and 410. These are calculable by determining the distortions and position shift of the imaged pattern and determining the corrections that would restore the geometric ratios of the active illumination pattern. Information about the distance to the imaged surface is also contained in the imaged pattern. In this embodiment the software creates a transformed image of the object as though the picture were taken from a virtual orthographic viewing angle and presents the view to the user 412 and 414. The user is then provided with an opportunity to select key points of dimensional interest in the image using a mouse and keyboard and/or any other similar means such as a touch screen 416. The software processes these points and provides the user with actual dimensional information based on the dimensional points of interest selected by the user 418. [0092] An example of the last two steps is illustrated in Figure 22, where the user has selected the area of the wall 450 minus the three windows 452, 454, 456 and is provided with an answer of 114 square feet.
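By way of illustration only, steps 404 through 418 could be realized along the following lines (a Python/OpenCV sketch; the dot coordinates, pattern size, output resolution and file names are hypothetical, and a planar homography stands in for the transformation matrix of steps 408 and 410):

```python
import numpy as np
import cv2

# Step 406 (hypothetical result): image coordinates of the four outer
# laser dots, which the DOE projects onto a square on the planar object.
img_pts = np.array([[612, 298], [1322, 251], [1355, 990], [585, 1043]],
                   dtype=np.float32)

side_px = 1000   # output pixels spanned by the square
span_m = 0.5     # hypothetical real-world size of the projected square
ortho_pts = np.array([[0, 0], [side_px, 0], [side_px, side_px],
                      [0, side_px]], dtype=np.float32)

# Steps 408/410: the homography encodes the non-orthographic camera pose
# relative to the plane of the object.
H, _ = cv2.findHomography(img_pts, ortho_pts)

# Steps 412/414: render the virtual orthographic view for the user.
img = cv2.imread("wall.jpg")
ortho = cv2.warpPerspective(img, H, (side_px, side_px))

# Steps 416/418: map two user-selected points (original-photo pixels)
# through H and report the metric distance between them.
user = np.array([[[700.0, 400.0], [700.0, 900.0]]], dtype=np.float32)
mapped = cv2.perspectiveTransform(user, H)[0] * (span_m / side_px)
print(f"distance: {np.linalg.norm(mapped[0] - mapped[1]):.3f} m")
```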
[0093] Figure 21 illustrates an alternative embodiment of the data processing flow of a software implementation of an orthographic image capture system. First the active image pattern projection is triggered and the image is captured. The flow can then proceed down one or both of two paths. In the first path, the user is shown the raw image and selects key dimensional points of interest 504. In the second path, a separate routine automatically identifies key dimensional locations in the image 506. Meanwhile the software analyzes the image to locate key geometric points of interest in the active illumination pattern projected on the imaged object 508. The software then determines a transformation matrix and the scene geometry 510 and 512. The software then applies the transformation matrix to the key points of dimensional interest that were automatically determined and/or input by the user 514, and presents the user with the dimensional information requested or automatically selected in step 506.
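By way of illustration only, and continuing from the sketch above (H, span_m and side_px as defined there; the key points are hypothetical), the Figure 21 flow can skip the image warp entirely and apply the transformation matrix only to the key points of dimensional interest:

```python
# Steps 510-514: transform only the key points, not the whole image.
key_pts = np.array([[[640.0, 310.0], [1300.0, 310.0]]], dtype=np.float32)
metric = cv2.perspectiveTransform(key_pts, H)[0] * (span_m / side_px)
print(f"width: {np.linalg.norm(metric[0] - metric[1]):.3f} m")
```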
[0094] Figure 23 illustrates the same undistorted pattern illustrated in Figure 4. Figure 24 and Figure 25 illustrate examples of distortion of the pattern in an embodiment of the orthographic image capture system employing the fixed relationship of the camera and an active illumination device described in A Simple Method for Range Finding via Laser Triangulation by Hoa G. Nguyen and Michael R. Blackburn, Technical Document 2734, dated January 1995, published by the United States Naval Command, Control and Ocean Surveillance Center, RDT&E Division (NRaD), attached hereto as Appendix A. [0095] The distortion(s) illustrated in Figure 24 reflect a camera angle similar to the angle illustrated in Figure 1: of a wall, taken from the left and angled to the right (horizontal pan right), and horizontal to the wall (i.e., no vertical tilt up or down).
[0096] The distortion(s) illustrated in Figure 25 reflect a camera angle similar to the angle illustrated in Figure 1: of a wall, taken from the left and angled to the right (horizontal pan right), but with the camera lowered and looking up at the wall (i.e., vertical tilt up). Note that the points in the pattern 502, 504, 506, 508 move along line segments 512, 514, 516 and 518 respectively.
[0097] In a further embodiment of the embodiment illustrated in Figure 24 and Figure 25, the step of filtering the image for the active illumination pattern (step 406 in Figure 20 and step 508 in Figure 21) can be limited to a search for pixels proximate to the line segments 552, 554, 556, 558, and 560 illustrated in Figure 26. This limited search area greatly speeds up the pattern filtering step(s). In Figure 26, the horizontal x axis represents the horizontal camera pixels and the vertical y axis represents the vertical camera pixels; the line segments 552, 554, 556, 558 represent the coordinates along which the laser points may be found, and thus the areas proximate to these line segments are where the search for laser points can be concentrated.
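By way of illustration only, the restricted search can be implemented as a narrow mask around each line segment (a Python/OpenCV sketch; the segment endpoints, image size, band width and file name are hypothetical):

```python
import numpy as np
import cv2

# Hypothetical endpoints of one line segment along which a laser dot can
# move over the working range of the device (camera pixel coordinates).
seg_start, seg_end = (640, 200), (700, 980)

# Search only a narrow band around the segment, not the whole frame.
mask = np.zeros((1080, 1920), dtype=np.uint8)
cv2.line(mask, seg_start, seg_end, color=255, thickness=15)

img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
band = cv2.bitwise_and(img, img, mask=mask)

# The brightest pixel in the band is the candidate laser point.
_, _, _, dot_xy = cv2.minMaxLoc(band)
print("laser point near", dot_xy)
```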
[0098] In the embodiment shown in Figure 24, Figure 25 and Figure 26, the fixed projection axis of the active illuminator is slightly offset from the optical axis of the camera, which is useful in obtaining range information as described in Appendix A. Furthermore, the direction of the projection axis of the active illuminator relative to the camera axis has been chosen, based on the particular pattern of active illumination, such that as the images of the active illumination dots shift on the camera sensor over the distance range of the orthographic image capture system, the lines of pixels on the camera sensor over which they shift do not intersect. In this particular example, the line segments 512, 514, 516 and 518 do not intersect. This decreases the chance of ambiguity, i.e., of confusing one spot for another in the active illumination pattern. This may be particularly helpful where the active illuminator is a laser fitted with a DOE, which is prone to producing "ghost images".
[0099] While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments may be devised which do not depart from the scope of the disclosure as disclosed herein. Although the disclosure has been described in detail, it should be understood that various changes, substitutions and alterations can be made hereto without departing from the spirit and scope of the disclosure.
APPENDIX A
Technical Document 2734
January 1995
A Simple Method for Range Finding via Laser Triangulation
Approved for public use; distribution is unlimited.
Technical Document 2734
January 1995
A Simple Method for Range Finding via Laser Triangulation
Hoa G. Nguyen
Michael R. Blackburn
NAVAL COMMAND, CONTROL AND
OCEAN SURVEILLANCE CENTER
RDT&E DIVISION
San Diego, California 92152-5001
K. E. Evans, CAPT, USN, Commanding Officer; R. T. Shearer, Executive Director
ADMINISTRATIVE INFORMATION
This work was performed as part of a project funded by the Advanced Research Projects Agency and the Office of Naval Research.
Released by D. E. Demuth, LCDR, Head, Adaptive Systems Branch. Under authority of D. W. Murphy, Head, Advanced Systems Division.
ACKNOWLEDGMENTS
The authors would like to thank Bart Everett for the use of the robot and his critiques of this report and Steve Timmer for mechanical fabrication support.
CONTENTS

INTRODUCTION
PROCEDURE
IMPLEMENTATION
PERFORMANCE
REFERENCES

Figures

1. Configuration for triangulation ranging
2. Setup of laser and camera
3. Laser and camera combination
4. Robot directing remote manipulator arm
5. Precision versus range
INTRODUCTION
In theory, determining range via triangulation uses the baseline distance between source and sensor, as well as the sensor and source angles. Figure 1 shows the configuration for triangulation ranging (Everett, 1995):
Figure 1. Configuration for triangulation ranging.
P1 and P2 represent two reference points (e.g., camera and laser source), while P3 is a target point. The range B can be determined from knowledge of the baseline separation A and the angles θ and φ using the law of sines:

B = A·sin θ / sin α = A·sin θ / sin(θ + φ)   (1)
In practice this is difficult to achieve because the baseline separation and angles are difficult to measure accurately. We have demonstrated a technique for obtaining range information via laser triangulation without the need to know A, φ, and Θ. This technique was successfully implemented on a laser range-finding system on the NRaD ModBot (Modular Robot) test bed.
PROCEDURE
Figure 2 diagrams the setup of our laser and camera. The camera is represented by the image plane, focal point, and optical axis. The laser is directly above the camera, although its exact position is unimportant (we will only deal with the beam of light, represented by line CE in the diagram).
The laser is positioned so that the path of the laser and the optical axis form a vertical plane. Point P is the target of interest. We wish to find x, the projection of point P on the optical axis; u is the (vertical) projection of point P on the image plane (the scan line in the image on which the spot is detected). P1 and P2 are two points used in the calibration of the system; x1, x2, u1, and u2 are known.
E is the point where the path of the laser intersects the optical axis. We angle the laser such that point E is at the center of the range of interest. However, this technique also works with the laser path parallel to the optical axis. There is no particular need for accurate determination or setting of the axes, beyond a concern for precision to be discussed later. There is also no need to know the baseline distance between camera and laser, nor the focal length (f) of the camera.
Figure 2. Setup of laser and camera.
Determination of x is achieved as follows:

From the geometry of similar triangles, we have

y1 = -u1·x1/f,  y2 = -u2·x2/f   (2)

We place the origin of the coordinate system at the focal point, without loss of generality. The slope (m) of the laser path and the y-intercept (c, the height of point C) are:

m = (y1 - y2)/(x1 - x2),  c = y1 - m·x1   (3)

Substituting equation 2 into equation 3 to eliminate y1 and y2, we have:

m = (u2·x2 - u1·x1)/(f·(x1 - x2)),  c = (u1 - u2)·x1·x2/(f·(x1 - x2))   (4)
We note that it is difficult to find the exact "focal point" of a given camera. However, point C in figure 2, which is directly above this focal point, can be found given measurements of y1 and y2 (equation 3) or knowledge of the focal length, f (equation 4). This point can be used as the location of a "virtual" laser source, and the length OC becomes the "virtual" baseline distance. We can then proceed with the law-of-sines approach for range determination using these parameters.
However, f is hard to determine accurately for some lenses (e.g., zoom lenses), and y1 and y2 are also difficult to measure. They are the offsets perpendicular to the optical axis, not the heights from the ground. Also, the optical axis does not necessarily pass through the center of the image, but varies from camera to camera.
We used a simpler method that does not require knowledge of y1, y2, or f. We note that the line uP passing through O is represented by:

y = -(u/f)·x   (5)
and the laser path is of the form:
y = mx + c (6)
Solving for x from equations 5 and 6, and simplifying using equation 4, we get:

x = N/(u·d - k)   (7)

where N, d, and k are obtained after a simple calibration process, and

d = x2 - x1,  k = u2·x2 - u1·x1,  N = (u1 - u2)·x1·x2   (8)
During calibration, we put targets at distances x1 and x2 from the camera, record the heights u1 and u2 at which the laser spot striking the targets appears in the image, and compute d, k, and N using equation 8. Then, during range-finding operations, we simply note the height u of the laser spot in the image and use equation 7 to compute range. We can accomplish this without knowing the baseline separation or angles between the camera and laser source. Furthermore, equations 7 and 8 are insensitive to errors in the optical axis (i.e., u' = a + u, u1' = a + u1, u2' = a + u2 will give the same results).
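For illustration, the calibration and range computations of equations 7 and 8 amount to the following (a minimal Python sketch; the target ranges and spot heights are hypothetical):

```python
def calibrate(x1: float, u1: float, x2: float, u2: float):
    """Equation 8: constants from two calibration targets at known
    ranges x1, x2 whose laser spots appear at image heights u1, u2."""
    d = x2 - x1
    k = u2 * x2 - u1 * x1
    N = (u1 - u2) * x1 * x2
    return N, d, k

def laser_range(u: float, N: float, d: float, k: float) -> float:
    """Equation 7: range from the laser spot's image height u."""
    return N / (u * d - k)

# Hypothetical calibration: spot at row 420 for a 0.5-m target and at
# row 250 for a 2.0-m target.
N, d, k = calibrate(0.5, 420.0, 2.0, 250.0)
print(laser_range(420.0, N, d, k))  # 0.5 (recovers the calibration point)
print(laser_range(300.0, N, d, k))  # 1.0625, an intermediate range
```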
IMPLEMENTATION
We used this laser triangulation technique in a project studying adaptive sensor-motor transformations (Blackburn and Nguyen, 1994). We needed depth information, but only a single video camera was available. The camera provided both visual information about the scene and the range to target via detection of the laser spot in the image. We attached a 5-mW solid-state diode laser on top of the charge-coupled device (CCD) camera, and used a red filter on the lens to increase sensitivity to the laser spot. The laser and camera combination, mounted on a pan-and-tilt unit on a mobile robot (ModBot), is shown in figure 3.

Figure 3. Laser and camera combination.

To assess target range, the target is acquired and the optical axis automatically placed at its center using the pan-and-tilt unit. The laser illuminates a spot on the target, and the vertical position of the spot in the image is used for range calculations. To accommodate small targets, the distance v (separation between laser spot and optical axis) must be kept small. This in turn means that the baseline separation between the laser source and the camera cannot be large, and the angle of the laser path must be small (fairly parallel to the optical axis). We chose to place the laser approximately 7 cm above the camera with a slight downward tilt (E ≈ 1.2 m). The distances we were interested in were between 0.5 m and 2 m. Figure 4 shows the robot directing a remote manipulator arm to reach a cup suspended as a target (see Blackburn and Nguyen, 1994).
Figure 4. Robot directing remote manipulator arm.
PERFORMANCE
Our laser triangulation method is subject to a limiting factor common to all triangulation systems: reduced precision with increasing range. With the setup described above, our precision decreases from 3 mm at 30 cm to 8 cm at 1.5 m (see figure 5).
Increasing the separation distance and the laser angle will improve precision. However, in our case these are constrained by the need to keep v small. By keeping v small, we ensure that both the optical axis and the laser spot fall on the same target object (analogous to minimizing the "missing parts" problem [Everett, 1995]). Due to our somewhat unique application (i.e., a motion-driven saccade mechanism), the range to target we desire is actually the distance x, and not the length OP (refer to figure 2), as is usually the case in most triangulation applications. But as a byproduct of keeping v small, x ≈ OP.
An alternate approach that would yield slightly higher precision is to use a lookup table that stores the predetermined range for every pixel height (see the Quantic Ranging System [Everett, 1995]). This would account for imperfections in the camera lens. But this approach is not appropriate for a research robot such as ModBot. ModBot's laser rangefinder is used in many applications. Each application requires the laser to be re-aimed to get the crossover point, E, at the middle of the range of interest (e.g., 1 m for manipulation tasks and 3 m for navigational tasks), and every change would require repeating a much more time-consuming calibration.

Figure 5. Precision versus range (plotted against distance in meters).
Another problem often associated with laser rangefinders is specular reflection and absorption on different surfaces, decreasing detectability. We have noticed this on several instances in our application. We found that a red filter helped in the detection of the laser spot in most instances. Using a pulsed laser coupled with frame subtraction would also increase sensitivity. However, the tradeoff is that twice as many image frames would have to be digitized and transferred from the frame grabber to the processor board, and the current speed bottleneck in most real-time vision systems (including ours) is this frame-grabbing and transferring activity.
REFERENCES
1. Blackburn, Michael R., and Hoa G. Nguyen. "Robotic Sensor-Motor Transformations," 1994 Image Understanding Workshop Proc. (pp. 209-214). November 13-16, Monterey, CA.
2. Everett, H. R. 1995 (in press). Sensors for Mobile Robots: Theory and Application. A K Peters, Wellesley, MA.
REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

Approved for public use; distribution is unlimited.

We present the design and implementation of a simple range finder for robotic applications via laser triangulation. The technique does not require knowledge of the baseline separation and angles from camera and laser to the target. The system has been successfully implemented on a mobile robot, providing range information for controlling a remote manipulator.

UNCLASSIFIED

Hoa G. Nguyen (619) 553-1871 Code 531

INITIAL DISTRIBUTION
Code 0012 Patent Counsel (1)
Code 0271 Archive/Stock (6)
Code 0274 Library (2)
Code 50 H. O. Porter (1)
Code 53 D. W. Murphy (1)
Code 531 LCDR D. E. Demuth (1)
Code 531 H. G. Nguyen (50)
Code 531 M. R. Blackburn (50)
Defense Technical Information Center
Alexandria, VA 22304-6145 (4)
NCCOSC Washington Liaison Office
Washington, DC 20363-5100
Center for Naval Analyses
Alexandria, VA 22302-0268
Navy Acquisition, Research and Development
Information Center (NARDIC)
Arlington, VA 22244-5114
GIDEP Operations Center
Corona, CA 91718-8000

Claims
1. An image capturing device comprising: an active illuminator projecting a pattern on an object to be imaged; a camera for capturing an image of the object and the pattern projected by the active illuminator; and a data processor that processes the captured image to calculate distortions in the projected pattern, and uses the calculated distortions to create transformations of coordinates of the captured image into real world coordinates.
PCT/US2013/036314 2012-04-12 2013-04-12 Orthographic image capture system Ceased WO2013155379A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201261623178P 2012-04-12 2012-04-12
US61/623,178 2012-04-12
US201261732636P 2012-12-03 2012-12-03
US61/732,636 2012-12-03

Publications (2)

Publication Number Publication Date
WO2013155379A2 true WO2013155379A2 (en) 2013-10-17
WO2013155379A3 WO2013155379A3 (en) 2014-01-03

Family ID=48670752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/036314 Ceased WO2013155379A2 (en) 2012-04-12 2013-04-12 Orthographic image capture system

Country Status (1)

Country Link
WO (1) WO2013155379A2 (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060017720A1 (en) * 2004-07-15 2006-01-26 Li You F System and method for 3D measurement and surface reconstruction
WO2006084385A1 (en) * 2005-02-11 2006-08-17 Macdonald Dettwiler & Associates Inc. 3d imaging system
US8922647B2 (en) * 2011-08-03 2014-12-30 The Boeing Company Projection aided feature measurement using uncalibrated camera

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party

Title

Blackburn, Michael R., and Hoa G. Nguyen. "Robotic Sensor-Motor Transformations," 1994 Image Understanding Workshop Proc., 13-16 November 1994, Monterey, CA, pp. 209-214.

Everett, H. R. Sensors for Mobile Robots: Theory and Application. A K Peters, Wellesley, MA, 1995.

Nguyen, Hoa G., and Michael R. Blackburn. "A Simple Method for Range Finding via Laser Triangulation," Technical Document 2734, United States Naval Command, Control and Ocean Surveillance Center, January 1995.

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10196850B2 (en) 2013-01-07 2019-02-05 WexEnergy LLC Frameless supplemental window for fenestration
US11970900B2 (en) 2013-01-07 2024-04-30 WexEnergy LLC Frameless supplemental window for fenestration
US9842397B2 (en) 2013-01-07 2017-12-12 Wexenergy Innovations Llc Method of providing adjustment feedback for aligning an image capture device and devices thereof
US10501981B2 (en) 2013-01-07 2019-12-10 WexEnergy LLC Frameless supplemental window for fenestration
US10346999B2 (en) 2013-01-07 2019-07-09 Wexenergy Innovations Llc System and method of measuring distances related to an object utilizing ancillary objects
WO2015073590A3 (en) * 2013-11-12 2015-07-09 Smart Picture Technology, Inc. Multiple template improved 3d modeling of imaged objects using camera position and pose to obtain accuracy
US10068344B2 (en) 2014-03-05 2018-09-04 Smart Picture Technologies Inc. Method and system for 3D capture based on structure from motion with simplified pose detection
US8923656B1 (en) 2014-05-09 2014-12-30 Silhouette America, Inc. Correction of acquired images for cutting pattern creation
US9396517B2 (en) 2014-05-09 2016-07-19 Silhouette America, Inc. Correction of acquired images for cutting pattern creation
US10083522B2 (en) 2015-06-19 2018-09-25 Smart Picture Technologies, Inc. Image based measurement system
US10533364B2 (en) 2017-05-30 2020-01-14 WexEnergy LLC Frameless supplemental window for fenestration
US10304254B2 (en) 2017-08-08 2019-05-28 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US10679424B2 (en) 2017-08-08 2020-06-09 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US11164387B2 (en) 2017-08-08 2021-11-02 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US11682177B2 (en) 2017-08-08 2023-06-20 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US11138757B2 (en) 2019-05-10 2021-10-05 Smart Picture Technologies, Inc. Methods and systems for measuring and modeling spaces using markerless photo-based augmented reality process
US11527009B2 (en) 2019-05-10 2022-12-13 Smart Picture Technologies, Inc. Methods and systems for measuring and modeling spaces using markerless photo-based augmented reality process

Also Published As

Publication number Publication date
WO2013155379A3 (en) 2014-01-03

Similar Documents

Publication Publication Date Title
WO2013155379A2 (en) Orthographic image capture system
US20140307100A1 (en) Orthographic image capture system
US12299907B2 (en) Methods and systems for imaging a scene, such as a medical scene, and tracking objects within the scene
CN113532329B (en) Calibration method with projected light spot as calibration point
US10401143B2 (en) Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device
US10088296B2 (en) Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device
US20150369593A1 (en) Orthographic image capture system
CN103959012B (en) 6 degrees of freedom position and orientation determination
EP1792282B1 (en) A method for automated 3d imaging
EP1364226A1 (en) Apparatus and method for obtaining three-dimensional positional data from a two-dimensional captured image
JP2009508122A (en) Method for supplying survey data using surveying equipment
EP3069100B1 (en) 3d mapping device
US11481917B2 (en) Compensation of three-dimensional measuring instrument having an autofocus camera
US12299932B2 (en) Compensation of three-dimensional measuring instrument having an autofocus camera
CN111445529A (en) Calibration equipment and method based on multi-laser ranging
WO2016040271A1 (en) Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device
Wenzel et al. High-resolution surface reconstruction from imagery for close range cultural Heritage applications
JP2019133112A (en) Imaging device and method for controlling imaging device
Ariff et al. Near-infrared camera for night surveillance applications
AU2005279700B2 (en) A method for automated 3D imaging

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13730667; Country of ref document: EP; Kind code of ref document: A2)

122 Ep: pct application non-entry in european phase (Ref document number: 13730667; Country of ref document: EP; Kind code of ref document: A2)