WO2016209276A1 - Three dimensional scanning - Google Patents
Three dimensional scanning
- Publication number
- WO2016209276A1 (PCT/US2015/038056)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- support frame
- representation
- pose
- scanning
- view
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
Definitions
- Three dimensional (3D) representations of objects may be stored in files on a non-transitory machine readable storage medium.
- the representation of an object includes information about features of the object recorded from a plurality of view-points and integrated into a single three dimensional coordinate frame.
- the 3D representation of the object may allow reconstruction of part, or all, of the object in 3D such that it can be displayed by a monitor, projector or television, or reproduced by 3D printing or milling etc.
- a display of a computing device may rotate the 3D representation so that it can be seen from different sides.
- the 3D representation may include 3D information about the shape, surface and structure of the object and may include color information, or may be in monochrome or grey scale.
- a 3D scanner is a scanner that may be used, alone or in combination with other scanners, to gather scanning data suitable for generating a 3D representation of an object.
- a turntable is used to rotate an object in front of a 3D scanner, or the 3D scanner is rotated about an object, so as to scan the object from a plurality of different view-points and thus gather information about a plurality of different sides of the object.
- Another approach is to use a plurality of stationary 3D scanners arranged around the object and combine the information from the plurality of 3D scanners to generate a 3D representation of the object.
- Figure 1 shows an example system including a three dimensional (3D) scanner and a support frame for an object;
- Figure 2 is a flow chart showing an example method of 3D scanning according to the present disclosure
- Figure 3A shows an example of an object inside a support frame
- Figure 3B shows the example object and support frame of Figure 3A in a second pose
- Figure 3C shows the example object inside a different example support frame; wherein the object and support frame are in a first pose;
- Figure 3D shows the example object and support frame of Figure 3C in a second pose
- Figure 3E shows a view from above of the example support frame of Figure 3C in a plurality of different orientations
- Figure 4 is a flow chart showing an example method of generating a 3D representation of an object according to the present disclosure
- Figure 5 shows an example method of generating a 3D representation of an object based on scanning data
- Figure 6A shows an example of a 2D image of an object and support frame from a single view-point
- Figure 6B shows an example of a 3D image of an object and support frame from a single view-point
- Figure 6C shows an example of the 3D image of Figure 6B after the support frame has been removed
- Figure 6D shows an example of a plurality of 3D images, each corresponding to a different view-point of an object
- Figure 6E shows an example 3D representation of an object
- Figure 7 shows another example method of generating a 3D representation of an object based on scanning data
- Figure 8 shows yet another example method of generating a 3D representation of an object based on scanning data
- Figure 9 is a schematic diagram, showing an example system for generating a 3D representation of an object.
- the present disclosure discusses three dimensional (3D) scanning.
- One consideration for 3D scanning is that an object has a plurality of sides, but not all of the sides are visible from a single view-point.
- a view-point is a direction from which the object is viewed. For instance, if an object is viewed from above then details of the top surface of the object can be seen, but details of the bottom surface of the object may not be seen.
- the present disclosure also discusses generating a 3D representation of an object based on scanning data.
- the 3D representation may also be referred to as a 3D model of the object.
- the 3D representation may include information about the three dimensional spatial relationship between various features of the object and may include information gathered from a plurality of different view-points and integrated to the same coordinate system.
- the 3D representation may be of part of the object, e.g. as seen from a few sides, or may be of the whole object, e.g. as seen from every side.
- the object may be scanned from a plurality of view-points and scanning data from the plurality of view-points may be combined to generate a 3D representation of the object.
- One way of doing this is to have a plurality of 3D scanners positioned around the object and use each 3D scanner to scan the object from a different view point.
- this may be relatively expensive and cumbersome to set up.
- Another approach is to rotate a 3D scanner around the object, however this involves carefully controlling and monitoring of the path of the 3D scanner as it moves around and scans the object.
- a turntable can be used to rotate the object in front of the scanner, but in that case the scanning is limited by the pose of the object and the angle at which the scanner views the turntable. For instance, with a top-down scanner and turntable rotating about a vertical axis, it may be difficult to scan the bottom and other sides of the object.
- the present disclosure proposes a support frame for the object which is to be scanned.
- the scanning data may then be processed to generate a 3D representation of the object based on determined poses of the support frame in the scanning data.
- Figure 1 shows an example system for 3D imaging.
- the system includes a 3D scanner 10 and a polyhedral support frame 20.
- the 3D scanner is to scan an object that may be placed inside the support frame 20.
- the system also includes a computing device 1, which may include a main body 2 and a display 3.
- the main body 2 may house a processor, memory, hard disk, I/O interfaces and other computing device components.
- the display 3 is integrated into the main body 2, but in other examples the display and main body may be separate.
- the display 3 may be supported in a generally upright position by the main body 2, or by a stand or other support member.
- the 3D scanner 10 may be a "fixed" scanner that is fixed to, or integrated into, the computing device 1 such that it is stationary and able to scan an object from a single view point.
- the 3D scanner is a fixed top-down scanner that is to scan from above objects placed below it.
- the fixing may be temporary or permanent, for instance the 3D scanner may have a clip to temporarily fix it to a computer display or main body.
- the present disclosure is not limited thereto and in other examples, the 3D scanner may be fixed to a different location, so as to scan the object from below or the side.
- the 3D scanner may be separate from the computing device, for instance having its own stand and being positioned separately.
- the 3D scanner may not be fixed and may be movable relative to the computing device.
- a mat 5 may be provided in front of the display 3.
- the mat 5 may act as a known surface on which to place objects to be scanned and may also provide I/O functionality, or a user interface, such as a touch sensitive surface.
- the system may not include a mat and objects to be scanned may be placed on any surface in the line of sight of the 3D scanner.
- the 3D scanner 10 is able to scan an object placed in its line of sight in order to gather 3D scanning data.
- the 3D scanner may include a camera and a light projector. The 3D scanner may then project various light patterns onto an object to be scanned and the camera may capture images of the object and light patterns projected onto the object.
- the 3D scanner may include a structured light source, such as a laser, that projects a structured light pattern onto the object and a sensor to sense structured light reflected back to the scanner. This pattern may for example be a 1D point, a 2D line, or a 2D pseudo-random structured pattern.
- the pattern can either be fixed and stationary, or panned across the object.
- a temporally modulated light source and appropriate time sampled sensor may be used to measure phase differences.
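- to make the geometry behind such scanners concrete, here is a minimal sketch of how a structured-light scanner may recover depth by triangulation from the observed shift of a projected feature, assuming a calibrated camera-projector pair; the function name and the numbers are illustrative, not taken from the disclosure.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Triangulation for a calibrated camera-projector pair: the depth of
    # a projected feature is inversely proportional to its pixel shift.
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a 140 px shift, 1400 px focal length, 8 cm baseline
print(depth_from_disparity(140.0, 1400.0, 0.08))  # 0.8 (metres)
```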
- the 3D scanner scans the object and collects scanning data that may be used to build up a 3D representation of the object.
- an object that is to be scanned is secured to the support frame.
- an object 30 may be placed in a volume of space inside the support frame 20 and secured to the support frame.
- the support frame 20 is rotated through a plurality of poses and scanned to capture scanning data of the object and support frame in each pose.
- a pose is a stable position of the support frame.
- the object rotates together with the support frame, as it is secured to the support frame.
- two possible poses of the support frame are shown in Figures 3A and 3B.
- the 3D scanner 10 may scan the support frame and object in each pose.
- the pose of the object corresponds to the pose of the support frame.
- scanning the support frame and object in the first pose shown in Figure 3A may generate a first set of scanning data corresponding to a first pose.
- Scanning the support frame and object in the second pose shown in Figure 3B may generate a second set of scanning data corresponding to a second pose. While just two poses are shown in Figures 3A and 3B, the support frame may be rotated to further poses and scanned in each further pose.
- the scanning data of the plurality of poses of the object and support frame is processed to generate a 3D representation of the object. Examples of how the scanning data may be processed to generate a 3D representation of the object are discussed in more detail later.
- the support frame 20 may have a polyhedral shape with a plurality of stable sides.
- the support frame thus makes it relatively simple to stably position the object in a plurality of different poses.
- the object 30 in the example of Figures 3A and 3B would not, by itself, have a stable pose in the position shown in Figure 3B.
- if it were not for the support frame, it could be difficult for a fixed top-down 3D scanner to scan the lower surface of the object.
- with the support frame, however, a fixed top-down scanner can scan the lower surface of the object by rotating the support frame to the pose shown in Figure 3B.
- the support frame and object may be rotated through as many poses as there are stable poses of the support frame and scanned in each pose.
- a polyhedral support frame may be rotated to rest on each face in turn.
- for a dodecahedral support frame, there are potentially at least twelve stable poses, e.g. a pose corresponding to each face of the dodecahedron.
- the support frame and object may be rotated to and scanned in all stable poses, or some but not all of the stable poses. For instance, in the case of a dodecahedral support frame, instead of scanning in twelve different poses, the support frame and object may be rotated to and scanned in nine poses, or just six poses.
- Scanning the support frame and object in each pose generates a plurality of sets of scanning data, each set of scanning data corresponding to a respective pose of the support frame.
- FIG. 3A shows a support frame having a dodecahedral shape
- FIG. 3C and 3D show a support frame having the shape of a cube.
- the present disclosure is not limited to this and in other examples the support frame may have a different polyhedral shape such as another regular or non-regular polyhedron, a hexahedron other than a cube, an octahedron etc.
- the faces of the support frame are hollow and the edges are formed by elongate members. That makes placing the object inside the support frame relatively easy and helps to keep the surface area of the support frame relatively small, so as to minimize obscuring of parts of the object by the support frame.
- the faces may be solid and for example formed of transparent material, but that makes placement of the object inside the frame more difficult and may introduce more optical complexity to the scanning.
- an object 30 is secured inside the support frame 20.
- the object 30 may be secured inside the support frame by any suitable method.
- the object may be secured using support members, screws, string, adhesive, vacuum pads etc.
- a plurality of support members 40 extend inwardly from edges or vertices of the support frame and the object is secured to the support members.
- the object 30 is secured to a single support member 40 extending from the support frame.
- the purpose of securing the object 30 to the support frame 20 is to prevent it from moving relative to the support frame, as the support frame is rotated through different poses. Otherwise it would be difficult to reconcile the scans of the object taken in different poses of the support frame.
- the pose of the object may be inferred from the pose of the support frame. For instance, even though the support frame and object have been rotated through 180 degrees between Figures 3A and 3B, the relative position of the object and support frame to each other remains the same.
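- this rigid-attachment property can be expressed as one constant transform composed with the frame's pose. The sketch below uses hypothetical 4x4 homogeneous matrices and is an illustration of the idea, not part of the disclosure.

```python
import numpy as np

# Because the object is rigidly secured to the frame, its world pose is
# the frame's world pose composed with one constant object-in-frame
# transform (hypothetical 4x4 homogeneous poses).
T_OBJECT_IN_FRAME = np.eye(4)  # measured once; never changes between poses

def object_pose(T_frame_in_world):
    return T_frame_in_world @ T_OBJECT_IN_FRAME

# A 180-degree rotation of the frame (as between Figures 3A and 3B)
# rotates the object by exactly the same amount.
flip_180_about_x = np.diag([1.0, -1.0, -1.0, 1.0])
print(object_pose(flip_180_about_x))
```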
- securing the object to the support frame and rotating the object through a plurality of poses may, for instance, be carried out by a human.
- Scanning the object in each pose may, for instance, be carried out by the 3D scanner, under control of the user and/or by computer software or hardware.
- Processing the scanning data, to generate a 3D representation of the object may be carried out by computer software or hardware.
- a computer system may re-construct part, or all, of the object based on the 3D representation by display on a display apparatus, or in some cases by 3D printing or milling etc.
- in some cases the 3D representation may allow reconstruction of the whole object as seen from any side. For instance, if an object is placed in the dodecahedral support frame of Figure 3A and scanned in a pose corresponding to the support frame resting on each of its twelve faces, then other intermediate view-points can be interpolated from the 3D representation. The same is true for the hexahedral support frame of Figure 3C if poses corresponding to all six faces are scanned, although the reconstruction may be less accurate or complete.
- the term 3D representation also includes representations which combine data from a more limited number of view-points. For instance, a 3D representation generated from scans of just two adjoining faces of the support frame may allow part, but not all of the object, to be reconstructed.
- the object is scanned in a plurality of poses, including at least a first pose in which the support frame rests on a first face and a second pose in which the support frame rests on a second face, wherein the first and second faces are separated from each other by at least 30 degrees.
- Figure 4 is a flow diagram of the example method of generating a 3D representation of the object by a computer system which is to process the scanning data.
- it may be implemented by machine readable instructions stored on a non-transitory storage medium and executed by a processor of the computing system 1.
- each set of scanning data corresponds to a respective view-point of the object in the support frame.
- a view-point of the object is the object as seen from a particular angle. For instance a scan carried out on the object in the position shown in Figure 3A will give a view of the top of the object, while a scan carried out on the object in the position shown in Figure 3B will give a view of the bottom of the object.
- each set of scanning data corresponds to a respective view-point of the object in the support frame and each view-point corresponds to a respective pose of the support frame.
- a pose of the support frame in each set of scanning data is determined. Determining a pose of the support frame may be based on knowledge of the three-dimensional shape and structure of the support frame, or based on special fiducial markers of the support frame, or both.
- the system may recognize the support frame in the scanning data and determine a pose of the scanning frame based on known characteristics of the support frame. For example the system may store a 3D model of the support frame and compare the model with the scanning data to recognize the support frame and determine its pose. This may include recognizing edges and/or corners of the support frame and inferring a pose of the support frame from the position of the edges and/or corners of the support frame in the scanning data.
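- as a sketch of this model-matching step: if corresponding model and detected corner positions are available, a least-squares rigid fit (the Kabsch algorithm) yields the frame's rotation and translation. The disclosure does not prescribe this particular algorithm; it is one conventional way to implement the comparison, with assumed point correspondences.

```python
import numpy as np

def kabsch(model_pts, observed_pts):
    """Least-squares rigid fit of matched 3D corner points (Nx3 arrays);
    returns R, t such that observed ~= R @ model + t."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, oc - R @ mc
```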
- the pose of the support frame may also be determined from fiducial markers: in some examples the support frame 20 has fiducial markers that are detectable by an imaging device and the pose of the support frame can be determined based on the position and/or orientation of the fiducial markers.
- the imaging device that detects the fiducial markers may be the 3D scanner, or may be another imaging device. Examples of fiducial markers will be explained later below.
- the pose of the support frame may be determined based on a combination of a known shape of the support frame and fiducial markers of the support frame.
- a pose of the object in each set of scanning data is determined based on the determined poses of the support frame. As explained above, as the object does not move relative to the support frame as the support frame is rotated, the pose of the support frame corresponds directly to the pose of the object in each view-point.
- a 3D representation of the object is generated based on the sets of scanning data and determined poses of the object in each set. This may include fusing together, i.e. combining, the scanning data from different view-points of the object based on a relationship between the determined poses of the object in each view-point.
- a fiducial marker is a marker that, by itself or in combination with other markers, uniquely identifies a face of the support frame. Fiducial markers may also be used to help determine an orientation of the support frame. By recognizing a fiducial marker, or several fiducial markers, imaging software can determine the pose of the support frame. For instance, if each face was marked with a number, then the number would identify the face of the support frame and an orientation of the number may identify the orientation of the support frame.
- the fiducial markers are on either the edges, or vertices, of the frame. This helps to minimize obscuring of the object by the fiducial markers.
- Figures 3A and 3B show an example in which the fiducial markers are on vertices of the support frame 20.
- the support frame 20 includes a plurality of edges 21 that connect at vertices 22 and define the faces 23, 24 of the polyhedron.
- Each vertex has a respective fiducial marker. This enables each face to be distinguished from other faces by the combination of fiducial markers at its vertices. For instance, each face may have a combination of fiducial markers at its vertices which uniquely distinguishes it from other faces of the support frame.
- the fiducial markers may be color coded, marked with bar codes, or have another recognizable identifying feature detectable by the 3D scanner or another imaging device.
- in some examples each fiducial marker is unique, while in other examples the fiducial markers are not unique, but are distinguishable from at least some of the other fiducial markers.
- the combination of fiducial markers should be sufficient to distinguish each face of the support frame from other faces of the support frame.
- the support frame of Figures 3C and 3D has a hexahedral shape and thus has eight vertices 22A to 22H.
- a respective fiducial marker is provided on each of the vertices.
- the vertex 22A has a red marker
- the vertex 22B has a green marker
- the vertex 22C has a yellow marker
- the vertex 22D has a blue marker.
- the face 24 can be recognized based on the four vertices 22A, 22B, 22C and 22D of that face having respectively red, green, yellow and blue markers.
- the other faces do not have that particular combination of fiducial markers and therefore the face 24 can be recognized, based on the fiducial markers of its vertices. Meanwhile, the face 23 may be recognized based on its vertices 22E, 22F, 22G and 22H having red, blue, red and yellow markers respectively.
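- in code, this face lookup can be as simple as a table keyed by the multiset of vertex colors. The sketch below uses only the two faces described above; the color assignments of the remaining faces are not given in the text, so the table is deliberately incomplete.

```python
from collections import Counter

def colour_key(colours):
    # Order-independent key: the multiset of marker colours on one face.
    return frozenset(Counter(colours).items())

FACES = {
    colour_key(["red", "green", "yellow", "blue"]): "face 24",
    colour_key(["red", "blue", "red", "yellow"]): "face 23",
}

def identify_face(seen_colours):
    return FACES.get(colour_key(seen_colours))  # None if unknown

print(identify_face(["yellow", "red", "blue", "green"]))  # face 24
```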
- a top-down scanner scanning the support frame and object in the pose shown in Figure 3C may determine, based on the colors or other indicia of the fiducial markers at the vertices 22A to 22D, that the support frame is in a pose in which face 24 is facing upwards towards the 3D scanner.
- in the pose shown in Figure 3D, the same top-down scanner may determine, based on the fiducial markers at the vertices 22E to 22H, that the support frame is in a pose in which face 23 is facing upwards towards the 3D scanner. In this way, by determining the poses of the support frame, the spatial relationship between the different sets of scanning data may be determined.
- the system may also determine an orientation of the support frame in each pose.
- the support frame 20 may have many different orientations while resting on face 23 with face 24 facing the 3D scanner.
- Figure 3E is a schematic view of the support frame as seen from above.
- the orientation of the support frame in each pose may be determined from the outline of the support frame and/or from the fiducial markers. If a fiducial marker is not symmetric then the orientation of the fiducial marker may indicate the orientation of the support frame. Otherwise the orientation may be determined from the relative position of a plurality of fiducial markers. In the example of Figure 3E, the orientation may be determined from the relative position of the fiducial markers at the vertices.
- the orientation of the support frame in each pose forms part of the spatial relationship between the sets of scanning data corresponding to each view point.
- the system may take this into account when fusing together the scanning data from the different view-points to generate a 3D representation of the object.
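- a minimal sketch of the in-plane part of this computation, assuming the pixel positions of two top-face markers have already been detected: the angle of the edge between them, compared with the same edge in the model, gives the rotation about the vertical axis. Names and coordinates are illustrative.

```python
import math

def in_plane_angle(marker_a_xy, marker_b_xy):
    # Angle (degrees) of the edge between two detected markers as seen
    # from above; the difference from the model edge's angle is the
    # frame's in-plane rotation.
    dx = marker_b_xy[0] - marker_a_xy[0]
    dy = marker_b_xy[1] - marker_a_xy[1]
    return math.degrees(math.atan2(dy, dx))

print(in_plane_angle((0.0, 0.0), (1.0, 1.0)))  # 45.0
```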
- the fiducial markers may be on edges of the support frame. For instance, some, or all, edges of the support frame may be color coded, or marked with a bar code or other identifiable feature. In that way a face and orientation of the support frame may be determined based on the combination of fiducial markers at its edges.
- scanning data may be processed to generate a 3D representation of the object.
- One example method will now be described. Scanning the object and support frame in a plurality of poses results in plural sets of scanning data. Each set of scanning data relates to a particular pose and thus a particular view-point of the object and support frame.
- scanning data is used to mean any data derived from scanning by the 3D scanner.
- scanning data may be used to refer both to raw data from the initial scanning and processed data at intermediate stages of the processing up until the 3D representation of the object is generated.
- Each set of scanning data may start with raw data including a combination of data relating to 2D images of the object, such as data gathered by a camera of the 3D scanner, and data from which 3D information may be derived, such as patterns of light projected onto the object, or time of flight data etc.
- the raw data including the 2D image data of a view-point, may be transformed, by imaging hardware or software, into 3D image data of the viewpoint.
- 3D image data is data that is capable of providing a 3D image of the object from a particular view-point.
- 3D image data is distinct from a true 3D representation, because 3D image data includes just one view-point of the object, e.g. as seen from above, and does not provide information about the object from other viewpoints. So, for example, 3D image data of a view-point from one side of the object, will not include data about the object as seen from other sides.
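- one common form of this 2D-to-3D transform is back-projecting a per-pixel depth map through a pinhole camera model, giving a single-view point cloud. The disclosure does not mandate this representation; the sketch below assumes calibrated intrinsics (fx, fy, cx, cy).

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project an HxW depth map (metres) into a view-point point
    # cloud; pixels with no depth reading (zero) are dropped.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```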
- the 3D image data for each view-point may then be fused together to form a 3D representation of the object.
- this fusing is based on the determined spatial relationship between the plurality of view-points. This spatial relationship may be determined based on the pose and orientation of the support frame in each set of scanning data.
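- expressed as code, the core of the fusion step maps every view's points into one shared coordinate frame using the pose determined for that view, then combines them. This is a sketch under the assumption that poses are available as 4x4 camera-to-world matrices; refinement steps such as fine registration are omitted.

```python
import numpy as np

def fuse_views(view_clouds, view_poses):
    # view_clouds: list of Nx3 point arrays, one per view-point
    # view_poses:  matching list of 4x4 camera-to-world pose matrices
    fused = []
    for pts, T in zip(view_clouds, view_poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        fused.append((homo @ T.T)[:, :3])  # into the shared frame
    return np.vstack(fused)
```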
- the support frame may be removed from the scanning data. There are various ways in which this may be done. For example, the support frame may be removed at various stages in the process. Some examples are given below with reference to Figures 5, 7 and 8.
- the support frame is removed from the scanning data after the 2D image data is transformed to 3D image data.
- scanning data including 2D image data of a plurality of view-points, is received.
- the scanning data may include a plurality of sets of scanning data, each set of scanning data corresponding to a different view-point of the object and support frame and including 2D image data.
- Figure 6A is a visual representation of one such set of scanning data, which includes two-dimensional (2D) image data of one view-point of an object and support frame. While only one view-point is shown in Figure 6A, at this point in the process there are a plurality of sets of scanning data, each including 2D image data of a respective view-point of the object and support frame.
- Figure 6B is a visual representation of 3D image data of one view-point of the object and support frame. While only one view-point is shown in Figure 6B, at this point in the process there are a plurality of sets of scanning data, each including 3D image data of a respective view-point of the object and support frame.
- the pose of the support frame in each set of scanning data may be determined, by using any of the methods described above.
- the pose of the support frame in each set of scanning data may be determined either after block 510, or after block 520 for example.
- the support frame is removed from the scanning data.
- This removal of the support frame may be based on the determined pose of the support frame and/or by recognizing the support frame in the 2D image data, or 3D image data.
- the support frame may be recognized based on a known shape of the support frame.
- the support frame may have a particular color, or other optical properties to facilitate easy recognition and removal from the scanning data.
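- as an illustration of color-based removal: with per-point colors, points close to the frame's known color can simply be discarded. This is a sketch of one option only; a real implementation might instead, or additionally, mask out points near the frame's known geometry in its determined pose.

```python
import numpy as np

def remove_frame_by_colour(points, colours, frame_rgb, tol=30.0):
    # Keep only points whose RGB colour differs from the frame's known
    # colour by more than `tol` (Euclidean distance in RGB space).
    diff = colours.astype(float) - np.asarray(frame_rgb, dtype=float)
    keep = np.linalg.norm(diff, axis=1) > tol
    return points[keep], colours[keep]
```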
- FIG. 6C is a visual representation of 3D image data, corresponding to one view-point of the object, after the support frame has been removed.
- Figure 6D is a visual representation of the plurality of sets of 3D image data, each corresponding to a respective view-point of the object, after the support frame has been removed.
- the 3D image data of each view-point is fused together to generate a 3D representation of the object.
- Figure 6E shows the 3D representation of the object.
- the 3D representation may be in the form of a file and includes information about the visual characteristics of every side of the object. Based on the 3D representation, the object may be displayed from any view-point, for example it may be displayed as a rotating 3D image on a display. In one example the 3D representation may be a point cloud.
- scanning data including 2D image data of a plurality of view-points is received, the same as for block 510 of Figure 5.
- a pose of the support frame in each set of scanning data (i.e. each view-point), is determined.
- the pose of the object in each set of scanning data may be determined based on the pose of the support frame.
- the support frame is removed from the 2D image data of each view-point. This is similar to block 530 of Figure 5, except that the support frame is removed from 2D image data, rather than 3D image data. This removal may, for example, be based on a determined pose of the support frame, calculating a projection of the known shape of the support frame onto the 2D image data, and/or based on a color of the support frame, or otherwise. A sketch of such a projection is shown below.
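- a sketch of the projection just mentioned, assuming a pinhole camera and a known frame pose: the frame-model points are mapped to pixel coordinates, which can then seed a 2D removal mask. The signature is illustrative, not from the disclosure.

```python
import numpy as np

def project_frame_points(pts_world, T_world_to_cam, fx, fy, cx, cy):
    # Project known frame-model points (Nx3, assumed in front of the
    # camera) into the image; the resulting pixel positions can seed a
    # 2D mask marking where the frame appears.
    homo = np.hstack([pts_world, np.ones((len(pts_world), 1))])
    cam = (homo @ T_world_to_cam.T)[:, :3]
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```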
- the 2D image data of each view-point is converted to 3D image data of each view-point. This is similar to block 520 of Figure 5, except that the support frame has already been removed from the scanning data.
- Figure 8 shows an example method which is similar to the method of Figure 5, except that the support frame is removed from the scanning data after the 3D images have been fused together.
- scanning data including 2D image data of a plurality of view-points is received, the same as for block 510 of Figure 5.
- the pose of the support frame, and thus the pose of the object, in each view-point is determined either after block 810, or after block 820.
- the 3D image data of each view-point is fused together to generate data capable of providing a 3D representation of the object together with the support frame.
- the support frame in each view-point may obscure part of the object.
- the methods may compensate for this by generating image data to fill in the missing parts which were obscured by the support frame. This compensation may be carried out at any appropriate stage, for example after removing the support frame, or as a refinement after an initial 3D representation of the object has been generated.
- the image data to compensate for the missing parts may be based on extrapolation, derived from image data of the obscured parts in other view-points, or a combination of both. In some cases, a part of the object which is obscured by the support frame in one view-point, may not be obscured in other view-points, thus enabling different view-points to compensate for each other.
- the 3D representation may be stored in a file on a machine readable storage medium. Part, or all, of the object may be re-constructed, based on the 3D representation, by at least one of display on a display apparatus, 3D printing or milling. Reconstruction of the object, or part of the object, may include interpolation between data points in the 3D representation where appropriate.
- the methods of Figures 4, 5, 7 and 8 may be performed by imaging hardware and/or software in combination with appropriate hardware such as a processor.
- the methods are computer executed methods implemented on a computing system, such as that shown in Figure 1.
- FIG. 9 is a schematic diagram of a computing system 900 that is capable of implementing the above methods.
- the computing system includes a processor 910 and a non-transitory storage medium 920.
- the non-transitory storage medium may for example be a memory, hard disk, CD ROM etc.
- the computing system further includes an input/output (I/O) interface 913 to facilitate communication with external interfaces such as a display, keyboard, user device, 3D scanner etc.
- the system may receive scanning data from a 3D scanner, from an external storage medium, an external device or a network connection.
- the system is to process scanning data in the manner described in the description above, for example with reference to any of Figures 4 to 8.
- the non-transitory storage medium stores modules of machine readable instructions that are executable by the processor. Included in the modules of machine readable instructions are a pose determining module 922, a 2D to 3D module, a support frame removing module 926 and a fusing module 928. These are just examples and the non-transitory storage medium may store just some of these modules, or may store other modules as well.
- the pose determining module 922 is to determine a pose of the support frame in received scanning data.
- the pose determining module 922 may also determine a pose of the object based on a pose of the support frame.
- the pose determining module 922 may further determine an orientation of the support frame in each pose.
- the 2D to 3D module is to transform 2D image data to 3D image data as described above.
- the support frame removing module 926 is to remove the support frame from the scanning data as described above.
- the fusing module 928 is to fuse together image data from the plurality of sets of scanning data, based on determined poses of the support frame in each set of scanning data, in order to generate a 3D representation of the object.
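- one way the four modules might be chained, following the block order of Figure 5 (determine poses, convert to 3D, remove the frame, fuse); the glue function and its signature are hypothetical, not part of the disclosure.

```python
def generate_3d_representation(scan_sets, pose_module, to_3d_module,
                               frame_remover, fuser):
    # scan_sets: one set of scanning data per pose of the support frame
    poses = [pose_module(s) for s in scan_sets]        # module 922
    clouds = [to_3d_module(s) for s in scan_sets]      # 2D to 3D module
    clouds = [frame_remover(c, p) for c, p in zip(clouds, poses)]  # 926
    return fuser(clouds, poses)                        # module 928
```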
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
In one example, an object and a support frame are scanned in three dimensions (3D) in a plurality of poses. A 3D representation of the object may be generated based on the scanning data.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2015/038056 WO2016209276A1 (fr) | 2015-06-26 | 2015-06-26 | Three dimensional scanning |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2015/038056 WO2016209276A1 (fr) | 2015-06-26 | 2015-06-26 | Three dimensional scanning |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2016209276A1 (fr) | 2016-12-29 |
Family
ID=57586606
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2015/038056 Ceased WO2016209276A1 (fr) | Three dimensional scanning | 2015-06-26 | 2015-06-26 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2016209276A1 (fr) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114964039A (zh) * | 2022-05-10 | 2022-08-30 | 深圳市纵维立方科技有限公司 | Scanning method and apparatus, and electronic device |
| EP4379668A1 (fr) * | 2022-11-29 | 2024-06-05 | Bandai Co., Ltd. | Generation of realistic virtual reality video from a captured target object |
| EP4617619A1 (fr) * | 2024-03-11 | 2025-09-17 | The Boeing Company | Apparatuses and methods for large-area wireless fuselage dent inspection |
| TWI903262B (zh) | 2022-11-29 | 2025-11-01 | 日商萬代股份有限公司 | Image processing method, information processing device and computer program |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6591512B2 (en) * | 1998-02-02 | 2003-07-15 | Daimlerchrysler | Device for use as a navigation link when measuring objects |
| US20040252811A1 (en) * | 2003-06-10 | 2004-12-16 | Hisanori Morita | Radiographic apparatus |
| US20080084589A1 (en) * | 2006-10-10 | 2008-04-10 | Thomas Malzbender | Acquiring three-dimensional structure using two-dimensional scanner |
| US20110007071A1 (en) * | 2009-07-08 | 2011-01-13 | Marcus Pfister | Method for Supporting Puncture Planning in a Puncture of an Examination Object |
| US20140111621A1 (en) * | 2011-04-29 | 2014-04-24 | Thrombogenics Nv | Stereo-vision system |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6781618B2 (en) | Hand-held 3D vision system | |
| US20100328308A1 (en) | Three Dimensional Mesh Modeling | |
| KR20180003535A | Lidar stereo fusion live-action 3D model virtual reality video | |
| CN106526605B | Data fusion method and system for lidar and depth camera | |
| US8917317B1 (en) | System and method for camera calibration | |
| CN104335005A | 3D scanning and positioning system | |
| CN111292239B | Three-dimensional model splicing device and method | |
| WO2012129252A1 | Digital 3D camera using periodic illumination | |
| JP2001503514A | Three-dimensional color scanning method and apparatus | |
| JP3524147B2 | Three-dimensional image display device | |
| CN107346040B | Method and apparatus for determining grating parameters of a naked-eye 3D display device, and electronic device | |
| JP7657308B2 | Method, apparatus and system for generating a three-dimensional model of a scene | |
| CN109559349A | Method and apparatus for calibration | |
| KR101785202B1 | Automatic calibration system and method for fusion of a thermal imaging sensor and an RGB-D sensor | |
| JP2013024608A | Three-dimensional shape acquisition device, processing method and program | |
| JP2005195335A | Three-dimensional image capturing device and method | |
| WO2016209276A1 | Three dimensional scanning | |
| KR20090000777A | Augmented reality system using tangible objects and method for providing augmented reality | |
| JP2022022133A | Method for 3D scanning of a real object | |
| CN111340959B | Seamless texture mapping method for three-dimensional models based on histogram matching | |
| JP7398819B2 | Three-dimensional reconstruction method and apparatus | |
| JP4599500B2 | Coordinate information collection system and three-dimensional shape estimation system | |
| US11302073B2 (en) | Method for texturing a 3D model | |
| JP5732424B2 | Three-dimensional shape measuring device and calibration method therefor | |
| JP5642561B2 | House change interpretation support device, house change interpretation support method, and house change interpretation support program | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15896562; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 15896562; Country of ref document: EP; Kind code of ref document: A1 |