
WO2014100950A1 - Three-dimensional imaging system and handheld scanning device for three-dimensional imaging - Google Patents


Info

Publication number
WO2014100950A1
WO2014100950A1 (PCT/CN2012/087331)
Authority
WO
WIPO (PCT)
Prior art keywords
scanning device
handheld scanning
handheld
data processing
placement information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2012/087331
Other languages
French (fr)
Inventor
Yannick Glinec
Weifeng GU
Qinran Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Carestream Health Inc
Original Assignee
Carestream Health Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Carestream Health Inc filed Critical Carestream Health Inc
Priority to PCT/CN2012/087331
Publication of WO2014100950A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0082 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
    • A61B 5/0088 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for oral or dental tissue
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0062 Arrangements for scanning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0073 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by tomography, i.e. reconstruction of 3D images from 2D projections
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/06 Devices, other than using radiation, for detecting or locating foreign bodies; Determining position of diagnostic devices within or on the body of the patient
    • A61B 5/065 Determining position of the probe employing exclusively positioning means located on or in the probe, e.g. using position sensors arranged on the probe
    • A61B 5/067 Determining position of the probe employing exclusively positioning means located on or in the probe, e.g. using position sensors arranged on the probe, using accelerometers or gyroscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 2560/00 Constructional details of operational features of apparatus; Accessories for medical measuring apparatus
    • A61B 2560/04 Constructional details of apparatus
    • A61B 2560/0431 Portable apparatus, e.g. comprising a handle or case
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B 5/1077 Measuring of profiles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B 5/1079 Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means

Definitions

  • a global surface of the object such as the jaw, tooth or gum of a patient, an implant or a preparation in dental application, would be shown to the user on display 30 for further analysis.
  • Available prior information from the assistant sensing unit helps reduce the computation and speed up the display of a more accurate result.
  • Fig.5 illustrates the merging results with and without the device placement information according to one embodiment of the invention. It is to be noted that the example in Fig.5 is only shown in 2D for the sake of simplicity, which is still enough to provide a clear vision of the benefits from the present invention.
  • the example illustrated in Fig.5 is the application of the 3D imaging system in dental field, where the goal is to reconstruct tooth surface from images acquired by the handheld scanning device operated by the dentist.
  • the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.


Abstract

A three-dimensional (3D) imaging system, comprising: a data processing unit; a handheld scanning device for acquiring a plurality of two-dimensional (2D) image sets of an object from a plurality of perspectives, one 2D image set corresponding to one perspective; wherein the handheld scanning device is equipped with an assistant sensing unit which is configured to provide placement information of the handheld scanning device, and wherein said placement information is used by the data processing unit to control the acquisition of 2D image sets and the stitching of 3D views converted from each 2D image set so as to obtain a global surface of the object. A handheld scanning device for 3D imaging is also provided.

Description

THREE-DIMENSIONAL IMAGING SYSTEM AND HANDHELD SCANNING DEVICE FOR THREE-DIMENSIONAL IMAGING
Technical Field
This invention relates to three-dimensional (3D) imaging of physical objects. More specifically, it relates to 3D imaging with a handheld camera device.
Background
3D imaging technology has been playing an increasingly important role for a wide range of applications, including the production of movies and video games, computer-aided industrial design, orthotics and prosthetics, reverse engineering and prototyping, quality control and inspection, documentation of cultural artifacts, and the like. To facilitate the extensive use of 3D imaging techniques by ordinary people, electronic handheld 3D camera products configured with imaging optics have been developed. Due to their portability, such camera devices can be easily handled by the user to analyze a real-world object or environment as needed.
Generally speaking, a 3D camera product is designed to collect data on the shape of the object and possibly its appearance, which is then recorded as data points within three-dimensional space. Once a point cloud of geometric samples on the surface of the subject has been obtained, these points can be used to extrapolate the shape of the subject (a process called reconstruction), for example by being converted into a triangulated mesh and then a computer-aided design model. In most situations, multiple scans from many different directions are required to produce a complete model of the object. These scans have to be brought into a common reference system, a process usually called alignment or registration, and then merged to create a complete model.
For the handheld 3D camera products which allow the user to operate freely, i.e. to collect two-dimensional images of the object from substantially random orientations or positions relative to the object, particular difficulties may arise during the alignment or registration of the images collected for 3D reconstruction. Without information about the true position and orientation at which each image is acquired, it is often difficult to guess the relative placement of the camera device, so surface matching on the computer needs to search a large space, which may require considerable computing time while the results might be inaccurate.
Brief Summary of the Invention
It is accordingly an object of the invention to provide a design of handheld 3D camera product which overcomes the above-mentioned disadvantages of the known devices of this general type and which enables more efficient and accurate 3D reconstructions.
With the foregoing and other objects in view, there is provided a three-dimensional (3D) imaging system, comprising: a data processing unit; a handheld scanning device for acquiring a plurality of two-dimensional (2D) image sets of an object from a plurality of perspectives, one 2D image set corresponding to one perspective; wherein the handheld scanning device is equipped with an assistant sensing unit which is configured to provide placement information of the handheld scanning device, and wherein said placement information is used by the data processing unit to control the acquisition of 2D image sets and the stitching of 3D views converted from each 2D image set so as to obtain a global surface of the object.
In accordance with one embodiment of the present invention, said assistant sensing unit comprises at least one sensor adapted to provide signals related to at least one degree of freedom of the handheld scanning device when each 2D image set is acquired; and a sensor controller adapted to generate said placement information from the signals.
In accordance with one embodiment of the present invention, the placement information of the handheld scanning device is generated as the position and orientation of the image sensing unit provided inside the handheld scanning device when each 2D image set is acquired.
In accordance with one embodiment of the present invention, said at least one sensor is selected from the group consisting of accelerometer, gyroscope, magnetometer and GPS.
In accordance with one embodiment of the present invention, the 2D image sets are acquired by the image sensing unit with structured light projected onto the object by the illuminating unit provided inside the handheld scanning device.
In accordance with one embodiment of the present invention, the structured light is of sinusoidal pattern or multi-line pattern.
In accordance with one embodiment of the present invention, the data processing unit is configured to use said placement information to determine the arrangement of one 3D view relative to another for the stitching, wherein the two 3D views are successively acquired.
In accordance with one embodiment of the present invention, when an iterative algorithm is used by the data processing unit to refine the arrangement of a new 3D view relative to an already stitched 3D view, the data processing unit is configured to use said placement information to determine the initial arrangement of said new 3D view.
In accordance with one embodiment of the present invention, the plurality of 3D views is stitched sequentially or simultaneously.
In accordance with one embodiment of the present invention, the data processing unit is configured to determine whether the handheld device is stable based on said placement information and trigger the acquisition of 2D image sets when the handheld device is determined as stable.
In accordance with one embodiment of the present invention, the data processing unit is configured to detect the unstable motions of the handheld device based on said placement information and discard the 2D image sets acquired during those unstable motions.
In accordance with one embodiment of the present invention, the handheld scanning device is an intra-oral camera.
In accordance with one embodiment of the present invention, the object is the jaw, tooth or gum of a patient, an implant or a preparation.
In accordance with one embodiment of the present invention, the image sensing unit is a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) unit.
In another aspect of the present invention, there is also provided a handheld scanning device for 3D imaging, wherein the handheld scanning device is configured to acquire a plurality of two-dimensional (2D) image sets of an object from a plurality of perspectives, one 2D image set corresponding to one perspective, and wherein the handheld scanning device is equipped with an assistant sensing unit which is configured to provide placement information of the handheld scanning device, said placement information being used to control the acquisition of 2D image sets and the stitching of 3D views converted from each 2D image set so as to obtain a global surface of the object.
This invention introduces the use of additional sensors in the handheld 3D camera product to provide orientation and/or position information of the image acquisition device relative to the object. With knowledge of the device placement, the 3D imaging system according to the present invention may significantly increase the efficiency and reliability of generating the stereo vision of the object surface through the matching and merging of images taken from random views. This way, more accurate results can be obtained with less computational effort.
Other features of the invention, its nature and various advantages will be apparent from the accompanying drawings and the following detailed description of certain preferred embodiments.
Brief Description of the Drawings
The foregoing and other objects, features, and advantages of the invention will be apparent from the following more particular description of preferred embodiments as illustrated in the accompanying drawings, in which:
Fig. 1 illustrates the functional block diagram of the 3D imaging system according to one embodiment of the invention.
Fig.2 illustrates the relationship between the world coordinate system and the handpiece coordinate system and the six parameters describing the 3D transform.
Fig.3 provides schematic illustrations of various placement sensors and the relationship between the sensor output and placement information.
Fig.4 illustrates an example workflow of stitching between two views.
Fig.5 illustrates a comparison of the merging results with and without the device placement information according to one embodiment of the invention.
Detailed Description
"While the invention covers various modifications and alternative constructions, embodiments of the invention are shown in the drawings and will hereinafter be described in detail. However it should be understood that the specific description and drawings are not intended to limit the invention to the specific forms disclosed. On the contrary, it is intended that the scope of the claimed invention includes all modifications and alternative constructions thereof falling within the scope of the invention as expressed in the appended claims.
Unless otherwise defined in the context of the present description, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Moreover, by way of a non-limiting example, the methods and arrangements of the present disclosure may be illustrated by their use in dental implant imaging. However, it should be understood that the teaching of the present disclosure can be applied to any 3D imaging product with a similar architecture, where the generation of the object surface in three dimensions is based on a sequence of images acquired at arbitrary orientations relative to the object.
Fig. 1 illustrates the functional block diagram of a 3D imaging system according to one embodiment of the invention. The 3D imaging system provided in the present invention may be used to analyze a real-world object or environment to collect data on its shape and possibly its appearance. The 3D imaging system may comprise a handheld scanning device 10 for acquiring a plurality of two-dimensional (2D) image sets of an object from a plurality of perspectives. This kind of handheld scanning device, also termed a handpiece herein for simplicity, could be configured into any shape, as long as it provides the user with sufficient convenience in moving and operating it. Generally, in order to obtain a view of the global surface of an object, more than one 2D image set is required, taken from different perspectives; in most cases the scanning device can be operated freely while taking pictures of the object, and there are generally no rules for the sequence in which the images are taken.
The various 2D image sets could be acquired using structured light projected onto the object by the illuminating unit 103 provided inside the handheld scanning device. Projecting a narrow band of light onto a three-dimensionally shaped surface may produce a line of illumination that appears distorted from perspectives other than that of the projector, and this distortion can be used for an exact geometric reconstruction of the surface shape. The reflected light is received by the image sensing unit 102, which for example could be implemented as a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor. In the case of a CCD, an image is projected through a lens onto the capacitor array within the image sensing unit 102, causing the accumulation of an electric charge proportional to the light intensity at each location. A two-dimensional array, as used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensing unit.
For example, the structured light could be of sinusoidal pattern or multi-line pattern. Seen from different viewpoints, the pattern appears geometrically distorted due to the surface shape of the object. Each image set may be acquired at an arbitrary orientation relative to the object and contains a fraction of the surface information to be constructed.
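As a rough illustration of how a projected pattern encodes shape, the sketch below (Python with NumPy conventions) converts the lateral shift of a projected line on the sensor into a surface height using a simplified similar-triangles geometry. The function name, the calibration constants and the geometry itself are illustrative assumptions, not the device's actual optical model.
```python
def height_from_line_shift(observed_px, reference_px, pixel_size_mm,
                           baseline_mm, standoff_mm):
    """Convert the lateral shift of a projected line on the sensor
    (relative to its position on a flat reference plane) into a surface
    height, using a hypothetical projector/camera pair separated by
    baseline_mm and focused at standoff_mm from the device."""
    shift_mm = (observed_px - reference_px) * pixel_size_mm
    return shift_mm * standoff_mm / (baseline_mm + shift_mm)

# Example: a 12-pixel shift on a sensor with 5 um pixel pitch.
print(height_from_line_shift(observed_px=112, reference_px=100,
                             pixel_size_mm=0.005, baseline_mm=20.0,
                             standoff_mm=15.0))
```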
The various captured 2D image sets will then be transported to the data processing unit 20 for further conversion. The data processing unit 20 may be implemented as, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. It is to be understood that though the data processing unit 20 is illustrated or described as separate from the handheld scanning device, this is not necessary. In another embodiment of the present invention, for example, this processing unit could be integrated into the handheld scanning device. In yet another embodiment of the present invention, part of the functions as described with respect to the data processing unit 20 could be carried out by an additional processor within the handheld part.
Typically, the data contained within each 2D image set is converted into a 3D data structure in the data processing unit 20, forming a 3D view from one particular perspective, and the data processing unit 20 is further configured to match and merge these 3D views in one common reference coordinate system so as to provide a global surface for the object of interest. For example, the reference coordinate system could be a Cartesian coordinate system with origin and axis directions fixed relative to the space in which the object, the user and the imaging system are located. The orientation of this coordinate system could be defined relative to the first set of images being captured.
The process of matching and merging is jointly referred to as "stitching" herein. As mentioned above, each 3D view is acquired by placing the handheld device at a different position and orientation. On one hand, the arrangement of each 3D view relative to the others can be determined by using reference features on the surface being scanned. On the other hand, the assistant sensing unit 101 within the handheld scanning device 10 is configured to record placement information of device 10 when each 2D image set is acquired. With the help of this information about the perspective of each 3D view, a huge amount of computational burden can be removed from the data processing unit 20 and the speed at which results are generated can be increased.
For instance, the sensor controller could be the ITG-3200 from InvenSense Inc., which is a single-chip, digital-output, 3-axis microelectromechanical systems (MEMS) gyroscope integrated circuit. In such a case, the assistant sensing unit 101 would be the gyroscope component on the integrated circuit.
For instance, the image sensing unit 102 could be composed of imaging optics (lenses) and a VITA 1300 from ON Semiconductor, which is a 1.3-megapixel, 150 fps global-shutter CMOS image sensor.
For instance, the illuminating unit 103 could be composed of red, green and blue LEDs combined with dielectric mirrors and concentrated with a micro-lens array onto a DMD array, followed by imaging optics (lenses) for imaging onto the tooth surface.
In one embodiment of the present invention, the assistant sensing unit comprises at least one sensor adapted to provide signals related to at least one degree of freedom of the handheld scanning device 10 when each 2D image set is acquired; and a sensor controller adapted to generate the placement information from the signals.
The sensors incorporated in the assistant sensing unit could be pieces of hardware embedded in the handheld scanning device, which deliver a signal amplitude, such as a voltage or intensity, when a set of 2D images is acquired. This signal can be converted by the sensor controller into placement information. In other embodiments of the present invention, the signals from the sensors could also be communicated, together with the image data, to the data processing unit or any other processing module within the whole system for this conversion.
In practice, placement of the handheld scanning device 10 is fully determined by 6 parameters: 3 rotations and 3 translations. One sensor might not provide all the placement information. Several sensors might be combined, where one or more position sensors are used to acquire the position data in three translational degrees of freedom, and one or more orientation sensors acquire the angular data in three rotational degrees of freedom. Thus, sufficient data (even redundant in some cases) would be obtained to determine the placement of the handheld device in a 3D space.
Fig.2 shows a typical definition of the 6 parameters which describe the position of the handpiece relative to the world coordinate system. The world coordinate system is an arbitrary reference system which does not undergo acceleration during the experiment (it is also called an inertial coordinate system or Lorentz coordinate system). It is not unique, and the world coordinate system labeled X, Y, Z has been arbitrarily chosen. The 6 parameters which control the position of the handpiece relative to the world coordinate system X, Y, Z are the three coordinates (x, y, z) defining the system translation plus the roll, pitch and yaw defining the system rotation. The handpiece in the "reference position" gives a placement example when all 6 parameters are set to 0. This location is defined at the factory during system calibration and on system startup. The handpiece in the "actual position" gives an arbitrary placement example when the user holds the system.
For example, a three-axis micro-electromechanical accelerometer may be utilized as the position sensor. By integrating acceleration twice over time, it is possible to obtain relative 3D position information during the acquisition sequence. A three-axis gyroscope may be employed as the orientation sensor; it measures orientation relative to an internal element that spins fast to give it inertia. System orientation can also be obtained using a magnetometer, which measures the direction of the Earth's magnetic field, a vector that remains constant during the acquisition sequence.
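The following sketch, assuming gravity-compensated acceleration samples expressed in the world frame and small rotations between samples, illustrates the double integration and rate integration mentioned above. It is a bare illustration; a practical device would apply drift correction and sensor fusion such as the filter cited below.
```python
import numpy as np

def integrate_position(accel_world, dt):
    """Double-integrate gravity-compensated acceleration samples (Nx3,
    world frame) into a relative position track.  Simple rectangular
    integration; drift accumulates quickly without correction."""
    velocity = np.cumsum(accel_world * dt, axis=0)
    position = np.cumsum(velocity * dt, axis=0)
    return position

def integrate_orientation(gyro_rates, dt):
    """Integrate angular rates (Nx3, rad/s) into roll/pitch/yaw angles.
    Valid only for small angles between samples."""
    return np.cumsum(gyro_rates * dt, axis=0)
```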
Schematic drawings of these sensors are depicted in Fig.3: respectively a mass-spring system, a Hall probe and a Foucault gyroscope. It must be noted that there exist many variants of these systems which are known to those skilled in the art familiar with the technology used in MEMS integrated circuits. An example implementation of combining the output from several sensors to track placement in three dimensions can be found in "An efficient orientation filter for inertial and inertial/magnetic sensor arrays", Technical report, Department of Mechanical Engineering, University of Bristol, by S.O.H. Madgwick (2010), which describes tracking by combining the output from a three-axis gyroscope and accelerometer, with or without a magnetometer, and is hereby incorporated by reference.
In one embodiment of the present invention, a GPS system could be incorporated to provide position information. In traditional terrain mapping, radar images are captured from airplanes which use the GPS navigation system to locate the plane and track it. This location could be similarly used when assembling all 3D views into a surface map.
Combining both orientation and position systems inside the equipment, it is possible to obtain all 6 parameters simultaneously, relative to a world coordinate system, i.e. a non-accelerating coordinate system. The sensor controller inside the assistant sensing unit may extract the position data and orientation data from the sensed position and orientation signals respectively, and derive coordinates from geometric transformations of the position data and orientation data. We assume here that the object being scanned does not move during the image acquisition; the system could, however, tolerate object movement of up to 10 degrees around its axis of symmetry.
It is to be understood by those skilled in the art that any number of sensors or any types of sensors could be embedded into the handheld scanning device of the 3D imaging system according to the present invention.
Preferably, the sensors embedded for obtaining the placement information of the handheld scanning device could be arranged close to the image sensing unit 102. It is easily understood that the placement of the handheld scanning device when the images are captured is most precisely determined at the image sensing unit 102, where the images are actually sensed. The position of the image sensing unit 102 relative to the assistant sensing unit 101 could be predetermined at manufacture and stored. When calculating the position and orientation at which the images are taken based on the signals from the assistant sensing unit, it is advantageous to take this stored position of the image sensing unit 102 into consideration, which helps to generate a more accurate result. Without this placement information, surface matching may need to search a large space of 6 degrees of freedom, which requires computing time and might be inaccurate. The placement information of the scanning device at the time the 2D images are captured may then be stored in association with each corresponding 3D view for use in their matching and merging.
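A minimal sketch of how the stored sensor-to-camera relation might be applied, assuming both poses are kept as 4x4 homogeneous matrices; the constant offset values are hypothetical placeholders, not factory calibration data.
```python
import numpy as np

# Hypothetical factory calibration: pose of the image sensing unit 102
# expressed in the frame of the assistant sensing unit 101 (4x4, rigid).
SENSOR_TO_CAMERA = np.eye(4)
SENSOR_TO_CAMERA[:3, 3] = [0.0, 12.5, 40.0]   # assumed offset in mm

def camera_pose(sensor_pose_world):
    """Given the 4x4 pose of the assistant sensing unit in world
    coordinates, return the pose of the image sensing unit, i.e. the
    placement actually associated with each captured 2D image set."""
    return sensor_pose_world @ SENSOR_TO_CAMERA
```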
Fig.4 provides an example of a workflow for stitching two views together with the placement information provided by the assistant sensing unit. Generally, each set of 2D images is converted into a 3D view, while the corresponding handpiece placement information is stored as a 4x4 matrix structure, the internal structure of which is usually M = [sR T; O3 1], where s is a scalar defining the scale, R is a 3x3 matrix with determinant 1 (i.e. a direct rotation) which depends on the roll, pitch and yaw from Fig.4, T is a 3x1 translation vector corresponding to the translation (x, y, z) from Fig.4, O3 is a 1x3 row vector of zeros, and 1 is the unit scalar.
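The sketch below assembles such a placement matrix from the six parameters. The Z-Y-X (yaw-pitch-roll) rotation convention is an assumption, since the text does not fix one.
```python
import numpy as np

def placement_matrix(roll, pitch, yaw, x, y, z, s=1.0):
    """Build M = [sR T; O3 1] from the six placement parameters.
    Angles in radians; Z-Y-X (yaw-pitch-roll) convention assumed."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                      # determinant 1, a direct rotation
    M = np.eye(4)
    M[:3, :3] = s * R                     # sR block
    M[:3, 3] = [x, y, z]                  # T, the 3x1 translation
    return M                              # bottom row stays [0 0 0 1]
```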
The generation of candidate relative transforms T1, T2, T3 as illustrated in Fig.4, i.e. different ways of combining the two 3D views (v1 and v2 in Fig.4), may use standard computer vision techniques as follows: (i) computation of keypoint locations, (ii) computation of a feature descriptor at each keypoint location, (iii) creation of a list of corresponding features between both views, (iv) computation of transforms using the list of corresponding features. A keypoint is a position defining a region of interest as detected by an algorithm. A feature descriptor is a vector describing the neighborhood of the keypoint. The feature descriptor is a compact, local representation of the object which is ideally unique and invariant to any 3D transform (in terms of translation, rotation, scaling and the like) for unambiguous matching. Examples of algorithms to compute keypoints and feature descriptors in 3D are the Normal Aligned Radial Feature (NARF), Point Feature Histogram (PFH) and Fast PFH (FPFH) detectors. The computation of feature correspondences between views is usually based on a search for the N closest features in the other view for each feature in the first view, the second view or both views together, where N denotes a non-zero natural number. The computation of transforms from the list of correspondences is frequently based on RANdom SAmple Consensus (RANSAC), which is an iterative random search for the largest group of correspondences describing the same transformation.
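A compact illustration of step (iv): estimating candidate rigid transforms from putative correspondences with a RANSAC loop around a least-squares (Kabsch/Umeyama-style) solver. The sample size and inlier threshold are illustrative, and a real pipeline would operate on the NARF/PFH/FPFH correspondences produced in steps (i) to (iii).
```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (no scale) mapping src onto dst;
    both are Nx3 arrays of corresponding points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cd - R @ cs
    return T

def ransac_transform(src, dst, iters=500, inlier_mm=0.5, rng=None):
    """RANSAC over putative correspondences (src[i] <-> dst[i]): keep the
    transform supported by the largest number of correspondences."""
    rng = rng or np.random.default_rng(0)
    best_T, best_inliers = np.eye(4), 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        T = rigid_from_pairs(src[idx], dst[idx])
        moved = src @ T[:3, :3].T + T[:3, 3]
        inliers = np.sum(np.linalg.norm(moved - dst, axis=1) < inlier_mm)
        if inliers > best_inliers:
            best_T, best_inliers = T, inliers
    return best_T
```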
Because RANSAC detects transforms satisfying the largest number of correspondences, it will frequently select transforms which merely place both surfaces on top of each other, regardless of the local curvature. This happens because there exist several incorrect correspondences generated from non-unique feature descriptors (i.e. incorrect pairings of features). In contrast, the correct transform usually involves only a small overlap with the other views, thus limiting the number of feature correspondences which comply with the correct transform. Incorrect transforms should be removed or ranked low quickly to avoid spending computational resources on them. Therefore, to distinguish between the generated transforms, the use of external data from position sensors is helpful. If V1 and V2 are the two transformation matrices describing the placement of the handpiece as measured by the assistant sensing unit for views v1 and v2 respectively, the relative transformation from v1 to v2 is written as V = V2·V1^-1. This transform V can be compared to candidate transforms from the RANSAC algorithm to reject or reduce the rank of incompatible transforms, as described below.
There exist various ways to compare the information stored in matrix V and a candidate transform T calculated from the algorithms; the choice depends on the available placement sensors. Let us define M = V^-1·T, which should be close to identity if both transforms are identical. We will refer to the matrix decomposition (sR, T) as defined above for M. For instance, if 3D orientation (rotation) data is available, the rotation matrix has the property trace(R) = 1 + 2cos(θ), where θ is the rotation angle for this matrix. The closer θ is to 0, the closer V and T are. Therefore the ranking for transform T should decrease as angle θ increases. If 3D translation information is available, then the remaining displacement in matrix M should be small, i.e. the ranking should decrease as the norm of the translation vector of M increases.
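The comparison just described can be sketched as follows: compute M = V^-1·T, recover the rotation angle from the trace and the residual translation norm, and drop or down-rank candidates that exceed thresholds. The thresholds and the combined score below are illustrative assumptions.
```python
import numpy as np

def transform_discrepancy(V, T):
    """Compare a sensor-derived relative transform V with a candidate T.
    Returns (rotation angle in radians, residual translation norm) of
    M = V^-1 T; both should be near zero when the transforms agree."""
    M = np.linalg.inv(V) @ T
    R, t = M[:3, :3], M[:3, 3]
    # trace(R) = 1 + 2 cos(theta); clip for numerical safety
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta), np.linalg.norm(t)

def rank_candidates(V, candidates, max_angle=np.deg2rad(15), max_shift=5.0):
    """Rank candidate transforms, dropping those incompatible with the
    placement measurement (thresholds are illustrative)."""
    scored = []
    for T in candidates:
        angle, shift = transform_discrepancy(V, T)
        if angle <= max_angle and shift <= max_shift:
            scored.append((angle + shift, T))      # simple combined score
    return [T for _, T in sorted(scored, key=lambda p: p[0])]
```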
The transform evaluation from Fig.4 usually involves the Iterative Closest Point (ICP) algorithm, which is sensitive to the initial candidate transform. ICP iteratively performs the following operations until a convergence criterion is met: (i) selection of a subset of points, (ii) computation of point correspondences on the other 3D view, (iii) correspondence weighting and outlier rejection, (iv) assignment of an error metric, (v) minimization of the error for that set of correspondences until a second convergence criterion is met. Because of its iterative nature, the ICP algorithm is the most time-consuming part of the stitching process, especially if the initial candidate is wrong, i.e. more iterations will be performed than for the correct initial candidate.
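A minimal point-to-point ICP sketch following these steps, using brute-force nearest neighbours and omitting the weighting and outlier-rejection stages for brevity; a practical implementation would add those and use a k-d tree for the correspondence search.
```python
import numpy as np

def _best_rigid(src, dst):
    """Least-squares rigid transform mapping src onto dst (both Nx3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cd - R @ cs
    return T

def icp(src, dst, init=None, max_iters=30, tol=1e-6):
    """Point-to-point ICP refining a candidate transform that maps src
    (Nx3) onto dst (Mx3)."""
    T = np.eye(4) if init is None else init.copy()
    prev_err = np.inf
    for _ in range(max_iters):
        moved = src @ T[:3, :3].T + T[:3, 3]
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        nn = d2.argmin(axis=1)                       # (ii) correspondences
        err = np.sqrt(d2[np.arange(len(src)), nn]).mean()
        if abs(prev_err - err) < tol:                # convergence criterion
            return T
        prev_err = err
        T = _best_rigid(src, dst[nn])                # (iv)-(v) minimise error
    return T
```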
In the case where all 6 degrees of freedom of the handpiece placement are measured precisely, the workflow of Fig.4 is not even needed. Matrix V then completely determines the rigid transform and can be used directly as the candidate transform.
Typically, the plurality of 3D views may be stitched sequentially or simultaneously. In the case where the 3D views are combined one by one, the data processing unit 20 may be configured to use said placement information to determine the arrangement of one 3D view relative to another where the two 3D views are successively acquired. Sequential stitching means that a newly acquired view is matched only against the previously acquired views (not against future views). Simultaneous stitching would be performed after all views have been acquired, to increase the likelihood of a view being stitched successfully; each view would be tested against the entire dataset. Simultaneous stitching can therefore be expected to have a higher stitching success rate than sequential stitching, but it is less interactive. Both strategies are sketched below.
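The sketch below builds on the icp() and cKDTree helpers above. It assumes placements[k] is the 4x4 placement matrix of view k reported by the assistant sensing unit (the exact composition of the relative placement depends on the sensor convention), and the overlap score used to pick the best match, as well as the greedy treatment of the simultaneous case, are illustrative simplifications.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_score(src, dst, T, tol=0.5):
    """Fraction of transformed src points having a close neighbour in dst."""
    pts = src @ T[:3, :3].T + T[:3, 3]
    dist, _ = cKDTree(dst).query(pts)
    return float((dist < tol).mean())

def stitch_sequential(views, placements):
    """Match each newly acquired view only against the previously acquired view."""
    poses = [np.eye(4)]
    for i in range(1, len(views)):
        V = np.linalg.inv(placements[i - 1]) @ placements[i]   # sensed relative placement
        T_rel = icp(views[i], views[i - 1], init=V)
        poses.append(poses[i - 1] @ T_rel)
    return poses

def stitch_simultaneous(views, placements):
    """After acquisition, test each view against every already-posed view, keep the best fit."""
    poses = [np.eye(4)]
    for i in range(1, len(views)):
        candidates = []
        for j in range(i):
            V = np.linalg.inv(placements[j]) @ placements[i]
            T_rel = icp(views[i], views[j], init=V)
            candidates.append((overlap_score(views[i], views[j], T_rel), poses[j] @ T_rel))
        poses.append(max(candidates, key=lambda c: c[0])[1])
    return poses
```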
In another embodiment where a new 3D view needs to be added to an already stitched result, an iterative algorithm may be used by the data processing unit to refine the arrangement of the new 3D view relative to the already stitched 3D view. For example, the iterative closest point (ICP) algorithm is often used to reconstruct 2D or 3D geometries from different scans, as described above. Without an appropriate starting point, this kind of algorithm is prone to accumulated errors, which can lead to failure of the mapping algorithm and significant computation time. Using the 3D imaging system according to the present invention, the data processing unit 20 may be configured to use said placement information to determine the initial arrangement of said new 3D view for those iterative algorithms, without introducing undesired errors at the very beginning.
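For example, the initial arrangement might be derived from the two sensed placements as in the short sketch below, which reuses the icp() helper above. V_stitched, V_new, new_view_points and stitched_surface_points are hypothetical variable names for the sensed placement matrices and the two point sets, and the composition order depends on the sensor convention.

```python
import numpy as np

# Placement-derived initial arrangement of the new view relative to the stitched view,
# so the iterative refinement starts near the correct pose instead of from the identity.
init_guess = np.linalg.inv(V_stitched) @ V_new
T_refined = icp(new_view_points, stitched_surface_points, init=init_guess)
```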
In yet another embodiment of the present invention, the data processing unit may be further configured to determine whether the handheld device is stable based on said placement information and to trigger the acquisition of 2D image sets automatically when the handheld device is determined to be stable. In practice, many commercially available handheld scanning devices are configured with a "capture" button for the user to start the operation. However, in many cases, the user is not actually ready when the start button is pushed, leaving the device in an unstable state. In other cases, it may even be intended to trigger the capture automatically. In this case, the assistant sensing unit may start to collect the placement information of the handheld device at a certain interval in response to an internal command. The information provided by the assistant sensing unit is very helpful in determining when the system is stable and in automatically triggering the capture accordingly.
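A minimal sketch of such a stability check follows, assuming the assistant sensing unit delivers placement samples as fixed-length vectors (translation plus orientation components); the window length, the threshold and the read_placement()/start_capture() hooks in the commented loop are hypothetical.

```python
import numpy as np

def is_stable(samples, max_std=0.02):
    """True when every degree of freedom varied less than max_std over the sample window."""
    return bool(np.all(np.std(np.asarray(samples, dtype=float), axis=0) < max_std))

# Auto-trigger loop (hypothetical hooks read_placement() and start_capture()):
#
#   window = collections.deque(maxlen=20)
#   while True:
#       window.append(read_placement())        # poll the assistant sensing unit
#       if len(window) == window.maxlen and is_stable(window):
#           start_capture()                    # handpiece was still for the whole window
#           break
```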
In still another embodiment of the present invention, the data processing unit may be configured to detect the unstable motions of the handheld device based on said placement information and to discard the 2D image sets acquired during those unstable motions. In many situations, even if the system was stable at the beginning, the operator might move his hand during the capture of a set of 2D images from one perspective, which may have a negative impact on the generation of a 3D view. Based on the information provided by the assistant sensing unit, the data processing unit can detect whether jitter or a larger movement occurred during a capture and discard these data automatically, which avoids much unnecessary effort spent computing on corrupted data.
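The same kind of check can flag a capture for rejection; the sketch below reuses the illustrative is_stable() helper from the previous example, where placement_logs[i] is assumed to hold the placement samples recorded while the i-th 2D image set was being acquired.

```python
def keep_image_set(placement_log, max_std=0.02):
    """True if the placement samples recorded during one capture show no significant motion."""
    return is_stable(placement_log, max_std=max_std)

# Discard every 2D image set whose capture was disturbed by jitter or a larger movement:
# image_sets = [s for s, log in zip(image_sets, placement_logs) if keep_image_set(log)]
```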
After the stitching of 3D views is completed, a global surface of the object, such as the jaw, tooth or gum of a patient, an implant or a preparation in dental applications, is shown to the user on display 30 for further analysis. Available prior information from the assistant sensing unit helps reduce the computation and speeds up the display of a more accurate result.
Fig.5 illustrates the merging results with and without the device placement information according to one embodiment of the invention. It is to be noted that the example in Fig.5 is only shown in 2D for the sake of simplicity, which is still sufficient to provide a clear view of the benefits of the present invention. The example illustrated in Fig.5 is the application of the 3D imaging system in the dental field, where the goal is to reconstruct the tooth surface from images acquired by the handheld scanning device operated by the dentist.
As can be seen from the left part of Fig.5, without placement information of the camera device, i.e., without knowing from which perspective the image is taken, the estimation of the proper pose of the respective image can easily generate incorrect results which are difficult to discriminate from the correct one. Even if the results eventually turn out to be correct, notable resources and time may already have been consumed. As mentioned above, there are a total of 6 degrees of freedom in 3D space for camera device placement. With the help of the placement information from the assistant sensing unit, the alignment among those separately obtained 3D views becomes much easier and more accurate.
Although embodiments of the invention are not limited in this regard, the terms "plurality" and "a plurality" as used herein may include, for example, "multiple" or "two or more". The terms "plurality" or "a plurality" may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
It should be noted that the aforesaid embodiments are illustrative of this invention rather than restrictive of it; alternative embodiments may be designed by those skilled in the art without departing from the scope of the claims below. Words such as "include", "including", "comprise" and "comprising" do not exclude elements or steps which are present but not listed in the description and the claims. It shall also be noted that, as used herein and in the appended claims, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. This invention can be achieved by means of hardware comprising several different elements or by means of a suitably programmed computer. In the device claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any order; these words may simply be interpreted as names.

Claims
1. A three-dimensional (3D) imaging system, comprising:
a data processing unit,
a handheld scanning device for acquiring a plurality of two-dimensional (2D) image sets of an object from a plurality of perspectives, one 2D image set corresponding to one perspective;
wherein the handheld scanning device is equipped with an assistant sensing unit which is configured to provide placement information of the handheld scanning device, and
wherein said placement information is used by the data processing unit to control the acquisition of 2D image sets and the stitching of 3D views converted from each 2D image set so as to obtain a global surface of the object.
2. The 3D imaging system according to claim 1, wherein said assistant sensing unit comprises at least one sensor adapted to provide signals related to at least one degree of freedom of the handheld scanning device when each 2D image set is acquired; and a sensor controller adapted to generate said placement information from the signals.
3. The 3D imaging system according to claim 2, wherein the placement information of the handheld scanning device is generated as the position and orientation of the image sensing unit provided inside the handheld scanning device when each 2D image set is acquired.
4. The 3D imaging system according to claim 2, wherein said at least one sensor is selected from the group consisting of accelerometer, gyroscope, magnetometer and GPS.
5. The 3D imaging system according to claim 1, wherein the 2D image sets are acquired by the image sensing unit with structured light projected onto the object by the illuminating unit provided inside the handheld scanning device.
6. The 3D imaging system according to claim 5, wherein the structured light is of sinusoidal pattern or multi-line pattern.
7. The 3D imaging system according to claim 1, wherein the data processing unit is configured to use said placement information to determine the arrangement of one 3D view relative to another for the stitching, wherein the two 3D views are successively acquired.
8. The 3D imaging system according to claim 1, wherein when an iterative algorithm is used by the data processing unit to refine the arrangement of a new 3D view relative to an already stitched 3D view, the data processing unit is configured to use said placement information to determine the initial arrangement of said new 3D view.
9. The 3D imaging system according to claim 1, wherein the plurality of 3D views is stitched sequentially or simultaneously.
10. The 3D imaging system according to claim 1, wherein the data processing unit is configured to determine whether the handheld device is stable based on said placement information and trigger the acquisition of 2D image sets when the handheld device is determined as stable.
11. The 3D imaging system according to claim 1, wherein the data processing unit is configured to detect the unstable motions of the handheld device based on said placement information and discard the 2D image sets acquired during those unstable motions.
12. The 3D imaging system according to claim 1, wherein the handheld scanning device is an intra-oral camera.
13. The 3D imaging system according to claim 1, wherein the object is the jaw, tooth or gum of a patient, an implant or a preparation.
14. The 3D imaging system according to claim 3, wherein the image sensing unit is a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) unit.
15. A handheld scanning device for three-dimensional (3D) imaging, wherein the handheld scanning device is configured to acquire a plurality of two-dimensional (2D) image sets of an object from a plurality of perspectives, one 2D image set corresponding to one perspective, and
wherein the handheld scanning device is equipped with an assistant sensing unit which is configured to provide placement information of the handheld scanning device,
said placement information is used to control the acquisition of 2D image sets and the stitching of 3D views converted from each 2D image set so as to obtain a global surface of the object.
16. The handheld scanning device according to claim 15, wherein said assistant sensing unit comprises at least one sensor adapted to provide signals related to at least one degree of freedom of the handheld scanning device when each 2D image set is acquired; and a sensor controller adapted to generate said placement information from the signals.
17. The handheld scanning device according to claim 16, wherein the placement information of the handheld scanning device is generated as the position and orientation of the image sensing unit provided inside the handheld scanning device when each 2D image set is acquired.
18. The handheld scanning device according to claim 16, wherein said at least one sensor is selected from the group consisting of accelerometer, gyroscope, magnetometer and GPS.
19. The handheld scanning device according to claim 15, wherein the 2D image sets are acquired by the image sensing unit with structured light projected onto the object by the illuminating unit provided inside the handheld scanning device.
20. The handheld scanning device according to claim 19, wherein the structured light is of sinusoidal pattern or multi-line pattern.
21. The handheld scanning device according to claim 15, wherein the handheld scanning device further comprises a data processing unit and the data processing unit is configured to determine the arrangement of one 3D view relative to another for the stitching based on said placement information, wherein the two 3D views are successively acquired.
22. The handheld scanning device according to claim 15, wherein the handheld scanning device further comprises a data processing unit, and when an iterative algorithm is used to refine the arrangement of a new 3D view relative to an already stitched 3D view, the data processing unit is configured to use said placement information to determine the initial arrangement of said new 3D view.
23. The handheld scanning device according to claim 15, wherein the plurality of 3D views is stitched sequentially or simultaneously.
24. The handheld scanning device according to claim 15, wherein the handheld scanning device further comprises a data processing unit and the data processing unit is configured to determine whether the handheld device is stable based on said placement information and trigger the acquisition of 2D image sets when the handheld device is determined as stable.
25. The handheld scanning device according to claim 15, wherein the handheld scanning device further comprises a data processing unit, and the data processing unit is configured to detect the unstable motions of the handheld device based on said placement information and discard the 2D image sets acquired during those unstable motions.
26. The handheld scanning device according to claim 15, wherein the handheld scanning device is an intra-oral camera.
27. The handheld scanning device according to claim 15, wherein the object is the jaw, tooth or gum of a patient, an implant or a preparation.
28. The handheld scanning device according to claim 17, wherein the image sensing unit is a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) unit.