
WO2023113823A1 - Generation of a three-dimensional representation of an object - Google Patents

Generation of a three-dimensional representation of an object

Info

Publication number
WO2023113823A1
Authority
WO
WIPO (PCT)
Prior art keywords
scan
target object
dimensional representation
scan data
optical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2021/064120
Other languages
English (en)
Inventor
Stephen Bernard Pollard
Fraser John Dickin
Guy De Warrenne Bruce Adams
Markus Ernst RILK
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to PCT/US2021/064120 priority Critical patent/WO2023113823A1/fr
Publication of WO2023113823A1 publication Critical patent/WO2023113823A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts

Definitions

  • Three-dimensional (3D) representations of objects can be used for a range of uses, including obtaining a model for 3D printing, computer aided design applications, production of movies and video games, or virtual reality environments.
  • Highly accurate 3D models of real-world objects are desirable where a manually designed 3D model using computer aided design software may not provide enough accuracy, or where the time or effort to produce an accurate model this way may be prohibitive.
  • 3D optical scanning systems can be used to generate a 3D representation of a target object without the need to manually create a 3D model using a computer aided design software or similar, or can determine how a 3D printed model deviates from an intended design.
  • Figures 1a and 1b illustrate an example scan environment
  • Figures 2a and 2b illustrate an example of scans taken from multiple views of a scan environment including a target object
  • Figure 2c illustrates an example of a scan of a scan environment in the absence of a target object
  • Figure 3 is a flowchart illustrating an example method of generating a three-dimensional representation of a target object
  • Figure 4 is a flowchart illustrating a further example method of generating a three-dimensional representation of a target object
  • Figure 5 is an example of a device comprising a computer-readable storage medium coupled to a processor.
  • Three-dimensional (3D) optical scanning systems, such as a structured light scanning system, can be used to generate a 3D representation of a target object.
  • a 3D representation of the target object may comprise a mathematical coordinate-based model of a surface of the target object.
  • the 3D representation may comprise a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, polygons, or similar, for example to represent a physical body.
  • the 3D representation may be generated using one or more photogrammetry techniques.
  • Example file formats for a 3D representation may include stereolithography format (.stl), Wavefront OBJ format (. obj), 3D Manufacturing Format (.3mf), or similar.
  • the 3D representation may comprise a Computer Aided Design (CAD) model.
  • the 3D representation may be represented using polygonal modelling, curve modelling, or digital sculpting techniques.
  • The collection of points may be converted into a polygon mesh by connecting each point to its respective nearest neighbours using straight lines. The density of these points can vary, with a larger number of points improving the ability to reconstruct fine features of a target object.
  • a file for the 3D representation may comprise a 3D position for each of a plurality of vertices, a normal direction for each of the vertices, and for each face a list of vertices that form said face.
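As an illustration of the file layout just described (per-vertex positions and normals, plus per-face vertex lists), the following minimal sketch shows one way such a mesh could be held in memory and written out in Wavefront OBJ format. The `Mesh` class and its writer are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Mesh:
    vertices: np.ndarray  # (V, 3) vertex positions
    normals: np.ndarray   # (V, 3) unit normals, one per vertex
    faces: np.ndarray     # (F, 3) integer indices into vertices

    def save_obj(self, path: str) -> None:
        # Wavefront OBJ is 1-indexed: one line per vertex, per normal,
        # and per face, mirroring the layout described above.
        with open(path, "w") as f:
            for v in self.vertices:
                f.write(f"v {v[0]} {v[1]} {v[2]}\n")
            for n in self.normals:
                f.write(f"vn {n[0]} {n[1]} {n[2]}\n")
            for a, b, c in self.faces + 1:
                f.write(f"f {a}//{a} {b}//{b} {c}//{c}\n")
```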
  • a structured light scanning system uses a projected light pattern and a camera, or similar imaging device, in order to obtain 3D data relating to a target object.
  • the projected light pattern may use visible or non-visible light.
  • Alternative methods of 3D scanning may include modulated light scanning, laser scanning, or similar 3D imaging techniques.
  • Some 3D scanning systems may use a plurality of scans comprising multiple views of a target object to obtain data relating to, for example, both a front face and a back face of the target object, in order to accurately measure a dimension of the target object, for example a thickness of a part.
  • the multiple views may be transformed to a common coordinate system using registration or alignment techniques, for example the multiple views may be registered by using the overlapping 3D data as a key, by identifying common features within one or more of the multiple views, or by registering data using photogrammetry techniques from a 2D image to a generated 3D data set. Therefore, many views of an object, with sufficient overlap for registering common features, may be used to obtain an accurate 360-degree representation of the target object.
  • Additional objects may be added to the scan environment, in the field of view of the optical imaging device, which move in conjunction with the scanned object.
  • the scanned object and the additional objects may be stationary whilst the imaging device moves about the target object.
  • These additional objects can help with the registration of a plurality of scans comprising multiple views of the target object.
  • two- or three-dimensional fiducial markers can be used in order to facilitate registration of multiple views of a target object.
  • These fiducial markers can be attached to the background of the scan environment or to the object itself.
  • the visible fiducial markers in each overlapping view of the target object can be used to constrain and register these views.
  • a 2D fiducial marker may comprise a marker placed on a surface and identified independently in each image.
  • a 3D fiducial marker may comprise a geometric construct such as a sphere that may be identified and accurately located (i.e. its centre) in 3D mesh data. In each case the fiducial marker may be representable by a single coordinate position in the 3D space.
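To illustrate how a spherical 3D fiducial reduces to a single coordinate, its centre can be recovered from segmented mesh vertices with an algebraic least-squares fit. This is a minimal sketch assuming the points of one sphere have already been isolated; the function name is hypothetical.

```python
import numpy as np

def fit_sphere_centre(points: np.ndarray) -> np.ndarray:
    """Least-squares centre of a spherical fiducial.

    points: (N, 3) mesh vertices pre-segmented as one sphere.
    Returns the (3,) centre used as the marker's coordinate.
    """
    # |p - c|^2 = r^2 rearranges to 2 p.c + (r^2 - |c|^2) = |p|^2,
    # which is linear in the unknowns c and d = r^2 - |c|^2.
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]
```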
  • Attaching fiducial markers to the target object can be impractical or a burden to the ease of use of the scanning system. Furthermore, the scanned target object may obscure the number of fiducial markers visible in each view, which can compromise the subsequent accuracy of the registration of the available fiducial markers, leading to measurements with reduced accuracy.
  • a method of improving the accuracy of a scan of a target object may be provided, by generating a 3D representation of the scan environment in the absence of a target object, for example based on first optical scan data.
  • This 3D representation of the scan environment can then be used as a reference against which scan data including the target object is registered which may improve the registration accuracy whilst substantially reducing the number of scan views of the target object used to generate the 3D representation.
  • fiducial markers or other identifiable features in the target object scan data may be registered against corresponding fiducial markers or other identifiable features in the 3D representation of the scan environment.
  • a first plurality of scans is obtained, the first plurality of scans comprising multiple views of the scan environment in the absence of a target object.
  • the scan environment may include any of 2D or 3D fiducial markers, support structures, platforms, or similar.
  • the 3D representation of the scan environment may then be generated based on the first plurality of scans by identifying and registering a common feature(s) between multiple views of the scan environment. For example, registration may be based on 2D or 3D fiducials, 3D scan data of a support structure, an extracted feature (for example an edge or vertex) of a support structure, etc., or a combination thereof.
  • the 3D representation of the scan environment can then be generated by combining the aligned multiple views to obtain an accurate representation of the scan environment.
  • Registration and combination of the first plurality of scans may be done by compositing the scan data from the aligned views.
  • the scan data from the aligned views may be fused into a single mesh.
  • Positions of fiducial markers that are aligned may also be combined to generate a more accurate location. Combination can be achieved by simply averaging the positions of the aligned fiducial markers or other common feature(s), or by using a more complex statistical process taking into account the error distribution of the individual measurements.
  • This process may be performed iteratively, wherein each set of fiducial markers for each view is aligned to a current best combined estimate of the fiducial marker positions. From this alignment, a new best combined estimate may be derived before further rounds of alignment and combination.
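A minimal sketch of this iterative combine-and-realign loop follows, under two simplifying assumptions: every view observes the same fiducials matched by identity, and plain averaging stands in for the error-weighted statistical combination mentioned above. The closed-form SVD estimator used here is one of the techniques surveyed in the Lorusso et al. reference cited later (in the discussion of Figure 3).

```python
import numpy as np

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Closed-form (SVD/Kabsch) R, t minimising |R @ src_i + t - dst_i|."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def combine_fiducials(views: list, iterations: int = 5) -> np.ndarray:
    """Iteratively refine a combined estimate of fiducial positions.

    views: per-scan (N, 3) arrays of fiducial centres, assumed matched
    by identity across views. Each round aligns every view to the
    current best estimate, then re-averages, as described above.
    """
    estimate = views[0].copy()
    for _ in range(iterations):
        aligned = []
        for v in views:
            R, t = best_rigid_transform(v, estimate)
            aligned.append(v @ R.T + t)
        # Plain averaging; a weighted statistical combination could
        # account for per-measurement error distributions instead.
        estimate = np.mean(aligned, axis=0)
    return estimate
```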
  • a single scan of the scan environment may be used to generate the 3D representation of the scan environment.
  • generation of a 3D representation of the target object is achieved by registering a second plurality of scans with the 3D representation of the scan environment, the second plurality of scans comprising multiple views of the target object within the scan environment.
  • Registering the second plurality of scans with the 3D representation of the scan environment may comprise identifying a feature(s) in a scan of the second plurality of scans and registering said feature with a corresponding feature in the 3D representation of the scan environment.
  • the registration may be based on 2D or 3D fiducials, an edge or vertex of a support structure, or a combination thereof.
  • the 3D representation of the target object may then be generated based on the registered second plurality of scans, by combining each registered scan of the registered second plurality of scans to generate a composite 3D representation of the target object.
  • the registered second plurality of scans may be used to generate an intermediate 3D representation of the scan environment including the target object.
  • Objects identified in the 3D representation of the scan environment may then be removed from the intermediate 3D representation in order to generate a 3D representation of the target object that does not include those objects or features present in the scan environment and not related to the target object. Removal of these objects may be done by subtraction of the 3D representation of the scan environment from the intermediate 3D representation. For example, subtracting the 3D representation of the scan environment from the intermediate 3D representation of the target object may remove any fiducial markers or support structures from the intermediate 3D representation, resulting in a 3D representation of the target object which includes the target object without other objects or features which are not of interest.
  • removing a support structure from the intermediate 3D representation may comprise defining a region around the support structure, and, once the second plurality of scans have been registered or aligned, deleting any objects within the defined region from the intermediate 3D representation.
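A sketch of this region-based removal, assuming the defined region is an axis-aligned bounding box around the support structure in the common coordinate system; the function and box representation are illustrative only.

```python
import numpy as np

def remove_support_region(points: np.ndarray,
                          region_min: np.ndarray,
                          region_max: np.ndarray) -> np.ndarray:
    """Delete everything inside the box defined around the support.

    points: (N, 3) registered scan points in the common coordinate system.
    region_min, region_max: (3,) opposite corners of the box, defined
    once against the 3D representation of the scan environment.
    Returns only the points outside the box.
    """
    inside = np.all((points >= region_min) & (points <= region_max), axis=1)
    return points[~inside]
```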
  • generation of a 3D representation of the target object is achieved by registering a first scan of the second plurality of scans with a second scan of the second plurality of scans, in order to generate an intermediate 3D representation of the target object based on the second plurality of scans.
  • This intermediate 3D representation may be generated based on the second plurality of scans by identifying and aligning a common feature(s) between multiple views of the target object.
  • the 3D representation of the scan environment may then be subtracted from the intermediate 3D representation of the target object in order to generate a 3D representation of the target object that does not include objects or features relating to the scan environment itself, i.e., without other objects or features which are not of interest.
  • Scanning the scan environment in the absence of the target object, and generating a 3D representation of the scan environment based on said scan(s), may allow for fewer scan views of the target object to be used whilst maintaining accurate registration of the target object within the scan environment, and accurate registration between multiple scan views of the target object.
  • By registering the target object scan data to the 3D representation of the scan environment, multiple views of a target object may be accurately registered even when there is little or no overlap between the multiple views. Accordingly, accurate registration may be possible with fewer overall scan views of the target object. Additionally, registration of the target object scans to the 3D representation of the scan environment may be less computationally demanding compared to registration of the target object scans to other scans within the set of target object scans.
  • a single set of first scans may be used to generate a 3D representation of the scan environment which can then be used as a reference against which to register scan data relating to multiple different target objects. Accordingly, through reuse of a common scan environment representation, the overall number of scan views for sequentially scanning multiple target objects may be reduced.
  • Figures 1a and 1b show example scan environments 100, 101 comprising a target object 102.
  • Figure 1a shows an example scan environment 100 comprising a target object 102, a platform 104, and one or more fiducial markers 106a-106h.
  • the target object 102 is a simple cuboid object, but it will be appreciated that the method herein is also applicable to more complex shapes.
  • the fiducial markers 106 of Figure 1a are illustrated as 2D fiducial markers, but examples may use 3D fiducial markers or a combination of 2D and 3D fiducial markers.
  • the fiducial markers 106 may be positioned randomly within the scan environment.
  • Figure 1b shows the example scan environment 101 including a target object 102 and a support structure 108.
  • the support structure 108 may be any of a platform, a turntable, a clamp, a robotic arm, a pillar, or any similar structure suitable for supporting the target object in the scan environment within the field of view of an optical imaging device.
  • Figures 2a and 2b show an example of scans of a scan environment 200 including a target object 202 taken from multiple views.
  • the example of Figures 2a and 2b shows approximately 180 degrees of rotation in the plane of platform 204 between the view of Figure 2a and the view of Figure 2b.
  • a plurality of fiducial markers 206a-206h are positioned within scan environment 200.
  • Fiducial markers 206a, 206b, 206c and 206d are visible in both the view of Figure 2a and the view of Figure 2b.
  • Fiducial markers 206e and 206f are visible in the view of Figure 2a, but are visually obstructed by the target object 202 in the view of Figure 2b.
  • fiducial markers 206g and 206h are visible in the view of Figure 2b, but are visually obstructed by the target object 202 in the view of Figure 2a.
  • Figure 2c shows an example of a 3D representation of a scan of the scan environment 200 in the absence of target object 202.
  • the example of Figure 2c is a view from the same angle as Figure 2a, but with target object 202 not present. With the target object 202 absent, fiducial markers 206g and 206h, which were previously obscured by target object 202, are now visible to the imaging device.
  • the 3D representation of Figure 2c can be used to provide a baseline or reference point against which to compare or register the scans of the scan environment 200 including the target object 202.
  • the scan(s) of the environment in the absence of a target object may be taken from a different view to any of the scans of the scan environment including the target object.
  • the fiducial markers visible in both views of Figures 2a and 2b may be used to align the views including the target object 202 with each other.
  • a 3D representation of the scan environment 200 will include all fiducial markers, as none of the fiducial markers will be visually obstructed by the target object 202, because the target object 202 is not present in the 3D representation of the scan environment 200.
  • all fiducial markers visible in a scan comprising the target object 202 may be used to register the target object scan data with the 3D representation of the scan environment, even if they are visible in one view of the scan environment 200 including the target object 202 but not in other views.
  • By aligning based on more fiducial markers, as opposed to just those common to the target object scan views, the accuracy of registration and alignment of the target object (second) scan data may be improved.
  • Figure 3 shows a flowchart of an example method 300 of generating a three-dimensional representation of a target object, for example target object 102 of Figure 1a or 1b, or target object 202 of Figure 2a or 2b.
  • first scan data is obtained, the first scan data corresponding to the scan environment without the target object included.
  • a 3D representation of the scan environment is generated based on the first scan data.
  • second scan data is obtained, the second scan data corresponding to the scan environment with the target object included.
  • a 3D representation of the target object is generated based on the first and second scan data.
  • a 3D representation of the target object may be generated by compositing aligned scans.
  • this alignment can be achieved based on the 3D positions of fiducial markers which may be used to calculate a transformation that best aligns the fiducial markers between multiple views.
  • the alignment can be based on calculating an alignment which best aligns 3D mesh data (or a point cloud extracted from the 3D mesh data).
  • alignment can be based on a combination of fiducial marker and mesh or point cloud data.
  • Accurate alignment of 3D mesh data or point clouds may comprise roughly aligning the 3D mesh data or point clouds, and then applying a refinement process based on an iterative closest point (ICP) approach to improve the accuracy of the alignment.
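A compact sketch of such a refinement, assuming the clouds are already roughly aligned: each iteration pairs every source point with its current nearest neighbour in the reference cloud (here via scipy's cKDTree) and applies a closed-form rigid update. Convergence checks and outlier rejection, which a practical ICP would add, are omitted for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src: np.ndarray, ref: np.ndarray, iterations: int = 20) -> np.ndarray:
    """Refine a rough alignment of src onto ref (point-to-point ICP).

    src: (N, 3) points, already roughly aligned, e.g. via fiducials.
    ref: (M, 3) reference points. Returns the refined copy of src.
    """
    tree = cKDTree(ref)   # reference cloud is fixed, so build once
    moved = src.copy()
    for _ in range(iterations):
        # Pair each point with its current nearest neighbour in ref.
        _, idx = tree.query(moved)
        matched = ref[idx]
        # Closed-form rigid update (SVD/Kabsch, as in the earlier sketch).
        cs, cd = moved.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - cs).T @ (matched - cd))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        moved = (moved - cs) @ R.T + cd
    return moved
```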
  • Example techniques for alignment of mesh data or point clouds are described in Winkelbach, S., Molkenstruck, S., and Wahl, F. M. (2006), ‘Low-cost laser range scanner and fast surface registration approach’, Pattern Recognition, pages 718-728; and in Azhar, F., Pollard, S., and Adams, G.
  • Example techniques for computing a 3D transformation based on a set of corresponding points are described in Lorusso, A., Eggert, D., and Fisher, R. (1995), ‘A comparison of four algorithms for estimating 3-D rigid transformations’, BMVC, which describes techniques using a singular value decomposition of a matrix, orthonormal matrices, unit quaternions, and dual quaternions.
  • the composited scan data may be combined in order to reduce multiple overlapping meshes to a single mesh.
  • Techniques for combining the composited scan data are described in Kazhdan, M., Hoppe, H. (2012), ‘Screened Poisson Surface Reconstruction’, ACM Transactions on Graphics (ToG) 32, no. 3, pages 1-13, which describes techniques to explicitly incorporate oriented point sets as interpolation constraints. This combination may also comprise applying smoothing, hole filling, or similar techniques to the 3D mesh data.
  • the first scan data may be obtained prior to the second scan data.
  • the second scan data may be obtained prior to the first scan data.
  • Figure 4 shows a flowchart of an example method 400 of generating a three-dimensional representation of a target object, for example target object 102 of Figure 1a or 1b, or target object 202 of Figure 2a or 2b.
  • first scan data is obtained, the first scan data corresponding to the scan environment without the target object included.
  • a 3D representation of the scan environment is generated based on the first scan data.
  • second scan data is obtained, the second scan data corresponding to the scan environment with the target object included.
  • the second scan data is registered with the 3D representation of the scan environment to create registered second scan data.
  • a 3D representation of the target object is generated based on the registered second scan data.
  • Figure 5 shows an example 500 of a device comprising a computer-readable storage medium 530 coupled to a processor 520.
  • Processors suitable for the execution of computer program code include, by way of example, both general and special purpose microprocessors, application specific integrated circuits (ASIC) or field programmable gate arrays (FPGA) operable to retrieve and act on instructions and/or data from the computer-readable storage medium 530.
  • the computer-readable storage medium 530 may be any media that can contain, store, or maintain programs and data for use by or in connection with an instruction execution system (e.g., non-transitory computer readable media).
  • Computer-readable media can comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable machine-readable media include, but are not limited to, a hard drive, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory, or a portable disc.
  • the computer-readable storage medium comprises program code to, when executed on a computing device: obtain 502 first scan data of a scan environment in the absence of a target object, generate 504 a 3D representation of the scan environment based on the first scan data, obtain 506 second scan data of the scan environment within which the target object is present, and generate 508 a 3D representation of the target object based on the first and second scan data.
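Read as a pipeline, blocks 502-508 might be glued together as in the sketch below. The `scanner` interface and all helper functions here are assumed names composing the earlier sketches, not the patent's implementation.

```python
# Hypothetical driver for blocks 502-508. `scanner.capture`,
# `register_views`, `fuse`, `register_to_reference`, and `subtract`
# are assumed interfaces, not part of the disclosure.
def generate_target_representation(scanner, n_env_views, n_obj_views):
    # 502: first scan data -- the environment without the target object.
    env_scans = [scanner.capture() for _ in range(n_env_views)]
    # 504: register the environment views to each other and fuse them
    # into a single reference 3D representation of the scan environment.
    environment = fuse(register_views(env_scans))
    # 506: second scan data -- environment plus target object; fewer
    # views may suffice, since each registers against the reference.
    obj_scans = [scanner.capture() for _ in range(n_obj_views)]
    # 508: register each object view against the environment reference,
    # fuse, then subtract the environment to leave only the target.
    registered = [register_to_reference(s, environment) for s in obj_scans]
    return subtract(fuse(registered), environment)
```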
  • the computer-readable storage medium 530 may comprise program code to perform any of the methods, or parts thereof, illustrated in Figures 3 and 4, and discussed above.
  • All of the features disclosed in this specification may be combined in any combination, except combinations where some of such features are mutually exclusive.
  • Each feature disclosed in this specification, including any accompanying claims, abstract, and drawings may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
  • each feature disclosed is one example of a generic series of equivalent or similar features.
  • a method of generating a three-dimensional representation of a target object comprising: obtaining first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generating a three-dimensional representation of the scan environment based on the first optical scan data; obtaining second optical scan data of the scan volume comprising the scan environment and the target object; and generating a three-dimensional representation of the target object based on the first and second optical scan data.
  • generating the three-dimensional representation of the target object may comprise: registering the second optical scan data with the three-dimensional representation of the scan environment; and generating a three-dimensional representation of the target object based on the registered second optical scan data.
  • the first optical scan data may comprise a first plurality of scans, wherein each scan of the first plurality of scans is from a different view of the scan volume.
  • the second optical scan data may comprise a second plurality of scans, wherein each scan of the second plurality of scans is from a different view of the scan volume.
  • the second plurality of scans may comprise fewer scans than the first plurality of scans.
  • generating the three-dimensional representation of the scan environment may comprise identifying and aligning a feature in the first optical scan data.
  • the feature in the first optical scan data may comprise a fiducial marker, a support structure, or a combination thereof.
  • registering the second optical scan data may comprise: identifying a feature in the second optical scan data; and aligning the identified feature with a corresponding feature in the three-dimensional representation of the scan environment.
  • the feature in the second optical scan data may comprise a fiducial marker, a support structure, or a combination thereof.
  • generating the three-dimensional representation of the target object may comprise: generating an intermediate three-dimensional representation based on the second optical scan data; and subtracting the three-dimensional representation of the scan environment from the intermediate three-dimensional representation based on the second optical scan data.
  • generating the three-dimensional representation of the target object may further comprise: generating an intermediate three-dimensional representation based on the registered second optical scan data; and subtracting the three-dimensional representation of the scan environment from the intermediate three-dimensional representation based on the registered second optical scan data.
  • a non-transitory computer-readable storage medium comprising instructions that when executed cause a processor of a computing device to: obtain first optical scan data of a scan volume comprising a scan environment in the absence of a target object; generate a three-dimensional representation of the scan environment based on the first optical scan data; obtain second optical scan data of the scan volume comprising the scan environment and the target object; and generate a three-dimensional representation of the target object based on the first and second optical scan data.
  • generating a three-dimensional representation of the target object may comprise: registering the second optical scan data with the three-dimensional representation of the scan environment; and generating a three-dimensional representation of the target object based on the registered second optical scan data.
  • the first optical scan data may comprise a first plurality of images
  • the second optical scan data may comprise a second plurality of images, the second plurality of images comprising fewer images than the first plurality of images.
  • registering the second optical scan data may comprise: identifying a feature in the second optical scan data; and aligning the identified feature with a corresponding feature in the three-dimensional representation of the scan environment.
  • a system for generating a three-dimensional representation of a target object comprising: an optical imaging device; a memory; and a processor, the processor programmed to: receive, from the optical imaging device, first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generate, based on the first optical scan data, a three-dimensional representation of the scan environment; receive, from the optical imaging device, second optical scan data of the scan volume comprising the scan environment and the target object; and generate, based on the first and second optical scan data, a three-dimensional representation of the target object.
  • the optical imaging device may be a three-dimensional capture device.
  • the optical imaging device may further comprise a projector arranged to project a structured light pattern on the scan environment.
  • the optical imaging device may comprise a camera.


Abstract

According to aspects of the present disclosure, a method of generating a three-dimensional representation of a target object is provided, comprising: obtaining first optical scan data of a scan volume comprising a scan environment in the absence of the target object; generating a three-dimensional representation of the scan environment based on the first optical scan data; obtaining second optical scan data of the scan volume comprising the scan environment and the target object; and generating a three-dimensional representation of the target object based on the first and second optical scan data.
PCT/US2021/064120 2021-12-17 2021-12-17 Generation of a three-dimensional representation of an object Ceased WO2023113823A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2021/064120 WO2023113823A1 (fr) 2021-12-17 2021-12-17 Generation of a three-dimensional representation of an object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2021/064120 WO2023113823A1 (fr) 2021-12-17 2021-12-17 Generation of a three-dimensional representation of an object

Publications (1)

Publication Number Publication Date
WO2023113823A1 (fr)

Family

ID=86773277

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/064120 Ceased WO2023113823A1 (fr) 2021-12-17 2021-12-17 Génération d'une représentation tridimensionnelle d'un objet

Country Status (1)

Country Link
WO (1) WO2023113823A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160284104A1 (en) * 2013-11-27 2016-09-29 Hewlett-Packard Development Company, Lp. Determine the Shape of a Representation of an Object
US20180197328A1 (en) * 2015-09-30 2018-07-12 Hewlett-Packard Development Company, L.P. Three-dimensional model generation
CN110268449A (zh) * 2017-04-26 2019-09-20 Hewlett-Packard Development Company, L.P. Locating a region of interest on an object
US20210093414A1 (en) * 2018-06-19 2021-04-01 Tornier, Inc. Mixed-reality surgical system with physical markers for registration of virtual models
WO2020222781A1 (fr) * 2019-04-30 2020-11-05 Hewlett-Packard Development Company, L.P. Compensations géométriques

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117893696A (zh) * 2024-03-15 2024-04-16 Zhejiang Lab Three-dimensional human body data generation method and apparatus, storage medium, and electronic device
CN117893696B (zh) * 2024-03-15 2024-05-28 Zhejiang Lab Three-dimensional human body data generation method and apparatus, storage medium, and electronic device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21968362

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21968362

Country of ref document: EP

Kind code of ref document: A1