WO2016209276A1 - Three dimensional scanning - Google Patents
Three dimensional scanning
- Publication number: WO2016209276A1 (PCT/US2015/038056)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- support frame
- representation
- pose
- scanning
- view
- Legal status: Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T15/00—3D [Three Dimensional] image rendering
Definitions
- The scanning data may be processed in various ways to generate a 3D representation of the object. Scanning the object and support frame in a plurality of poses results in plural sets of scanning data; each set relates to a particular pose, and thus a particular view-point, of the object and support frame.
- The term "scanning data" means any data derived from scanning by the 3D scanner. It may refer both to raw data from the initial scanning and to processed data at intermediate stages of the processing, up until the 3D representation of the object is generated.
- Each set of scanning data may start with raw data including a combination of data relating to 2D images of the object, such as data gathered by a camera of the 3D scanner, and data from which 3D information may be derived, such as patterns of light projected onto the object, or time of flight data etc.
- the raw data, including the 2D image data of a view-point, may be transformed, by imaging hardware or software, into 3D image data of the view-point.
- 3D image data is data capable of providing a 3D image of the object from a particular view-point.
- 3D image data is distinct from a true 3D representation, because 3D image data includes just one view-point of the object, e.g. as seen from above, and does not provide information about the object from other view-points. So, for example, 3D image data of a view-point from one side of the object will not include data about the object as seen from other sides.
- the 3D image data for each view-point may then be fused together to form a 3D representation of the object.
- this fusing is based on the determined spatial relationship between the plurality of view-points. This spatial relationship may be determined based on the pose and orientation of the support frame in each set of scanning data.
- the support frame may be removed from the scanning data. There are various ways in which this may be done. For example, the support frame may be removed at various stages in the process. Some examples are given below with reference to Figures 5, 7 and 8.
- the support frame is removed from the scanning data after the 2D image data is transformed to 3D image data.
- scanning data including 2D image data of a plurality of view-points is received.
- the scanning data may include a plurality of sets of scanning data, each set of scanning data corresponding to a different view-point of the object and support frame and including 2D image data.
- Figure 6A is a visual representation of one such set of scanning data, which includes two-dimensional (2D) image data of one view-point of an object and support frame. While only one view-point is shown in Figure 6A, at this point in the process there are a plurality of sets of scanning data, each including 2D image data of a respective view-point of the object and support frame.
- Figure 6B is a visual representation of 3D image data of one view-point of the object and support frame. While only one view-point is shown in Figure 6B, at this point in the process there are a plurality of sets of scanning data, each including 3D image data of a respective view-point of the object and support frame.
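As one illustration of the 2D-to-3D transformation of block 520, the sketch below back-projects a per-pixel depth map into a point cloud using a pinhole camera model. This is a minimal sketch under assumed conditions: the disclosure does not prescribe this method, and the function name, NumPy usage and intrinsic parameters (fx, fy, cx, cy) are illustrative assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters, 0 = no reading) into an
    N x 3 point cloud using an assumed pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx      # lateral offset from the optical axis
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with a valid depth
```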
- the pose of the support frame in each set of scanning data may be determined by using any of the methods described above.
- the pose of the support frame in each set of scanning data may be determined either after block 510 or after block 520, for example.
- the support frame is removed from the scanning data.
- This removal of the support frame may be based on the determined pose of the support frame and/or by recognizing the support frame in the 2D image data, or 3D image data.
- the support frame may be recognized based on a known shape of the support frame.
- the support frame may have a particular color, or other optical properties to facilitate easy recognition and removal from the scanning data.
- Figure 6C is a visual representation of 3D image data, corresponding to one view-point of the object, after the support frame has been removed.
- Figure 6D is a visual representation of the plurality of sets of 3D image data, each corresponding to a respective view-point of the object, after the support frame has been removed.
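One way the removal of block 530 could work for a hollow frame of known shape and determined pose is to discard any scanned point lying close to the frame's edge segments. The following is a hedged sketch, not the disclosure's prescribed implementation; the tolerance, the segment representation and the function name are assumptions.

```python
import numpy as np

def remove_support_frame(points, frame_edges, tol=0.005):
    """Drop points within `tol` meters of any frame edge. Each edge is
    a pair (a, b) of 3D endpoints, placed in scanner coordinates using
    the support frame's determined pose."""
    keep = np.ones(len(points), dtype=bool)
    for a, b in frame_edges:
        ab = b - a
        # parameter of the closest point on the segment, clamped to [0, 1]
        t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)
        nearest = a + t[:, None] * ab
        keep &= np.linalg.norm(points - nearest, axis=1) > tol
    return points[keep]
```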
- the 3D image data of each view-point is fused together to generate a 3D representation of the object.
- Figure 6E shows the 3D representation of the object.
- the 3D representation may be in the form of a file and includes information about the visual characteristics of every side of the object. Based on the 3D representation, the object may be displayed from any view-point; for example, it may be displayed as a rotating 3D image on a display. In one example the 3D representation may be a point cloud.
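A minimal sketch of the fusing step just described follows, assuming each view's 3D image data is a point cloud and that a rigid transform (R, t) into a common object frame has been recovered from the support frame's determined pose and orientation in that view. Real fusion pipelines typically also deduplicate overlapping points and mesh the result; that refinement is omitted here.

```python
import numpy as np

def fuse_views(view_clouds, view_poses):
    """Concatenate per-view point clouds in one shared coordinate frame.
    view_poses[i] = (R, t): an assumed rigid transform taking view i's
    scanner coordinates into the common frame, derived from the support
    frame's pose and orientation in that view."""
    fused = [cloud @ R.T + t for cloud, (R, t) in zip(view_clouds, view_poses)]
    return np.vstack(fused)
```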
- scanning data including 2D image data of a plurality of view-points is received, the same as for block 510 of Figure 5.
- a pose of the support frame in each set of scanning data (i.e. each view-point) is determined.
- the pose of the object in each set of scanning data may be determined based on the pose of the support frame.
- the support frame is removed from the 2D image data of each view-point. This is similar to block 530 of Figure 5, except that the support frame is removed from 2D image data, rather than 3D image data. This removal may, for example, be based on a determined pose of the support frame and a calculated projection of the known shape of the support frame onto the 2D image data, and/or based on a color of the support frame, or otherwise.
- the 2D image data of each view-point is converted to 3D image data of each view-point. This is similar to block 520 of Figure 5, except that the support frame has already been removed from the scanning data.
- Figure 8 shows an example method which is similar to the method of Figure 5, except that the support frame is removed from the scanning data after the 3D images have been fused together.
- scanning data including 2D image data of a plurality of view-points is received, the same as for block 510 of Figure 5.
- the pose of the support frame, and thus the pose of the object, in each view-point is determined either after block 810 or after block 820.
- the 3D image data of each view-point is fused together to generate data capable of providing a 3D representation of the object together with the support frame.
- the support frame in each view-point may obscure part of the object.
- the methods may compensate for this by generating image data to fill in the missing parts which were obscured by the support frame. This compensation may be carried out at any appropriate stage, for example after removing the support frame, or as a refinement after an initial 3D representation of the object has been generated.
- the image data to compensate for the missing parts may be based on extrapolation, derived from image data of the obscured parts in other view-points, or a combination of both. In some cases, a part of the object which is obscured by the support frame in one view-point, may not be obscured in other view-points, thus enabling different view-points to compensate for each other.
- the 3D representation may be stored in a file on a machine readable storage medium. Part, or all, of the object may be re-constructed based on the 3D representation, by at least one of display on a display apparatus, 3D printing or milling. Reconstruction of the object, or part of the object, may include interpolation between data points in the 3D representation where appropriate.
- the methods of Figures 4, 5, 7 and 8 may be performed by imaging hardware and/or software in combination with appropriate hardware such as a processor.
- the methods are computer executed methods implemented on a computing system, such as that shown in Figure 1.
- Figure 9 is a schematic diagram of a computing system 900 that is capable of implementing the above methods.
- the computing system includes a processor 910 and a non-transitory storage medium 920.
- the non-transitory storage medium may for example be a memory, hard disk, CD ROM etc.
- the computing system further includes an input/output (I/O) interface 913 to facilitate communication with external interfaces such as a display, keyboard, user device, 3D scanner etc.
- the system may receive scanning data from a 3D scanner, from an external storage medium, from an external device or over a network connection.
- the system is to process scanning data in the manner described in the description above, for example with reference to any of Figures 4 to 8.
- the non-transitory storage medium stores modules of machine readable instructions that are executable by the processor. Included in the modules of machine readable instructions are a pose determining module 922, a 2D to 3D module, a support frame removing module 926 and a fusing module 928. These are just examples and the non-transitory storage medium may store just some of these modules, or may store other modules as well.
- the pose determining module 922 is to determine a pose of the support frame in received scanning data.
- the pose determining module 922 may also determine a pose of the object based on a pose of the support frame.
- the pose determining module 922 may further determine an orientation of the support frame in each pose.
- the 2D to 3D module is to transform 2D image data to 3D image data as described above.
- the support frame removing module 926 is to remove the support frame from the scanning data as described above.
- the fusing module 928 is to fuse together image data from the plurality of sets of scanning data, based on determined poses of the support frame in each set of scanning data, in order to generate a 3D representation of the object.
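To show how the modules of Figure 9 might be chained in the Figure 5 ordering (convert each view to 3D, then remove the frame, then fuse), here is a hedged sketch in which each module is stood in for by a plain callable; the function and parameter names are assumptions, not names taken from the disclosure.

```python
def generate_3d_representation(scan_sets, determine_pose, to_3d,
                               remove_frame, fuse):
    """Chain the modules of Figure 9 over the sets of scanning data,
    one set per view-point, in the ordering of Figure 5."""
    views = []
    for scan in scan_sets:
        pose = determine_pose(scan)        # pose determining module 922
        cloud = to_3d(scan)                # 2D to 3D module
        cloud = remove_frame(cloud, pose)  # support frame removing module 926
        views.append((cloud, pose))
    return fuse(views)                     # fusing module 928
```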
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
In an example, an object and support frame are three dimensionally (3D) scanned in a plurality of poses. A 3D representation of the object may be generated based on the scanning data.
Description
THREE DIMENSIONAL SCANNING
BACKGROUND
[0001] Three dimensional (3D) representations of objects may be stored in files on a non-transitory machine readable storage medium. A 3D representation of an object includes information about features of the object recorded from a plurality of view-points and integrated to a single three dimensional coordinate frame. The 3D representation of the object may allow reconstruction of part, or all, of the object in 3D such that it can be displayed by a monitor, projector or television, or reproduced by 3D printing or milling etc. In some cases a display of a computing device may rotate the 3D representation so that it can be seen from different sides. Depending on the recording process the 3D representation may include 3D information about the shape, surface and structure of the object and may include color information, or may be in monochrome or grey scale.
[0002] A 3D scanner is a scanner that may be used, alone or in combination with other scanners, to gather scanning data suitable for generating a 3D representation of an object. In one known approach, a turntable is used to rotate an object in front of a 3D scanner, or the 3D scanner is rotated about an object, so as to scan the object from a plurality of different view-points and thus gather information about a plurality of different sides of the object. Another approach is to use a plurality of stationary 3D scanners arranged around the object and combine the information from the plurality of 3D scanners to generate a 3D representation of the object.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Examples will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Figure 1 shows an example system including a three dimensional (3D) scanner and a support frame for an object;
Figure 2 is a flow chart showing an example method of 3D scanning according to the present disclosure;
Figure 3A shows an example of an object inside a support frame, wherein the object and support frame are in a first pose;
Figure 3B shows the example object and support frame of Figure 3A in a second pose;
Figure 3C shows the example object inside a different example support frame; wherein the object and support frame are in a first pose;
Figure 3D shows the example object and support frame of Figure 3C in a second pose;
Figure 3E shows a view from above of the example support frame of Figure 3C in a plurality of different orientations;
Figure 4 is a flow chart showing an example method of generating a 3D representation of an object according to the present disclosure;
Figure 5 shows an example method of generating a 3D representation of an object based on scanning data;
Figure 6A shows an example of a 2D image of an object and support frame from a single view-point;
Figure 6B shows an example of a 3D image of an object and support frame from a single view-point;
Figure 6C shows an example of the 3D image of Figure 6B after the support frame has been removed;
Figure 6D shows an example of a plurality of 3D images, each corresponding to a different view-point of an object;
Figure 6E shows an example 3D representation of an object;
Figure 7 shows another example method of generating a 3D representation of an object based on scanning data;
Figure 8 shows yet another example method of generating a 3D representation of an object based on scanning data; and
Figure 9 is a schematic diagram, showing an example system for generating a 3D representation of an object.
DETAILED DESCRIPTION
[0004] For simplicity and illustrative purposes, the present disclosure is described by referring mainly to an example thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be readily apparent, however, that the present disclosure may be practiced without limitation to these specific details. In other instances, some methods and structures have not been described in detail so as not to unnecessarily obscure the present disclosure. Throughout the present disclosure, the terms "a" and "an" are intended to denote at least one of a particular element. As used herein, the term "includes" means includes but not limited to, and the term "including" means including but not limited to. The term "based on" means based at least in part on.
[0005] The present disclosure discusses three dimensional (3D) scanning. One consideration for 3D scanning is that an object has a plurality of sides, but not all of the sides are visible from a single view-point. A view-point is a direction from which the object is viewed. For instance, if an object is viewed from above then details of the top surface of the object can be seen, but details of the bottom surface of the object may not be seen.
[0006] The present disclosure also discusses generating a 3D representation of an object based on scanning data. The 3D representation may also be referred to as a 3D model of the object. The 3D representation may include information about the three dimensional spatial relationship between various features of the object and may include information gathered from a plurality of different view-points and integrated to the same coordinate system. The 3D representation may be of part of the object, e.g. as seen from a few sides, or may be of the whole object, e.g. as seen from every side.
[0007] In order to generate a 3D representation, the object may be scanned from a plurality of view-points and scanning data from the plurality of view-points may be combined to generate a 3D representation of the object. One way of doing this is to have a plurality of 3D scanners positioned around the object and use each 3D scanner to scan the object from a different view-point. However, this may be relatively expensive and cumbersome to set up. Another approach is to rotate a 3D scanner around the object; however, this involves careful control and monitoring of the path of the 3D scanner as it moves around and scans the object. In another approach a turntable can be used to rotate the object in front of the scanner, but in that case the scanning is limited by the pose of the object and the angle at which the scanner views the turntable. For instance, with a top-down scanner and turntable rotating about a vertical axis, it may be difficult to scan the bottom and other sides of the object.
[0008] The present disclosure proposes a support frame for the object which is to be scanned. By placing the object inside a support frame it is possible to rotate the support frame and object through a plurality of poses and scan the object and support frame in each pose. Each pose may correspond to a stable position of the support frame. This approach to scanning may be relatively simple and intuitive for the person carrying out the scanning. The scanning data may then be processed to generate a 3D representation of the object based on determined poses of the support frame in the scanning data.
[0009] Figure 1 shows an example system for 3D imaging. The system includes a 3D scanner 10 and a polyhedral support frame 20. The 3D scanner is to scan an object that may be placed inside the support frame 20. Also shown is a computing device 1, which may include a main body 2 and a display 3. The main body 2 may house a processor, memory, hard disk, I/O interfaces and other computing device components. In this example the display 3 is integrated into the main body 2, but in other examples the display and main body may be separate. The display 3 may be supported in a generally upright position by the main body 2, or by a stand or other support member.
[0010] The 3D scanner 10 may be a "fixed" scanner that is fixed to, or integrated into, the computing device 1 such that it is stationary and able to scan an object from a single view-point. In the illustrated example, the 3D scanner is a fixed top-down scanner that is to scan from above objects placed below it. The fixing may be temporary or permanent; for instance, the 3D scanner may have a clip to temporarily fix it to a computer display or main body. However, the present disclosure is not limited thereto and in other examples the 3D scanner may be fixed to a different location, so as to scan the object from below or the side. In still other examples the 3D scanner may be separate from the computing device, for instance having its own stand and being positioned separately. In still other examples the 3D scanner may not be fixed and may be movable relative to the computing device.
[0011] A mat 5 may be provided in front of the display 3. The mat 5 may act as a known surface on which to place objects to be scanned and may also provide I/O functionality, or a user interface, such as a touch sensitive surface. In other examples the system may not include a mat and objects to be scanned may be placed on any surface in the line of sight of the 3D scanner.
[0012] The 3D scanner 10 is able to scan an object placed in its line of sight in order to gather 3D scanning data. There are various types of 3D scanner and the present disclosure is not limited to any particular type. The 3D scanner may include a camera and a light projector. The 3D scanner may then project various light patterns onto an object to be scanned and the camera may capture images of the object and light patterns projected onto the object. In other examples, the 3D scanner may include a structured light source, such as a laser, that projects a structured light pattern onto the object and a sensor to sense structured light reflected back to the scanner. This pattern may for example be a 1D point, a 2D line, or a 2D pseudo-random structured pattern. The pattern can either be fixed and stationary, or panned across the object. In another approach, a temporally modulated light source and appropriate time-sampled sensor may be used to measure phase differences. In any case, the 3D scanner scans the object and collects scanning data that may be used to build up a 3D representation of the object.
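For the temporally modulated (time-of-flight) variant mentioned above, a measured phase difference maps to depth in a standard way: a round trip of distance 2d at modulation frequency f produces a phase shift of 2*pi*f*(2d/c). A minimal sketch follows; the formula is the standard continuous-wave time-of-flight relation, not something the disclosure itself spells out.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_depth(phase_shift_rad, mod_freq_hz):
    """One-way depth from a continuous-wave time-of-flight phase shift:
    d = c * delta_phi / (4 * pi * f). Unambiguous only while the round
    trip stays within one modulation period."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# e.g. at 30 MHz, a phase shift of pi radians corresponds to ~2.5 m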
[0013] An example method of 3D scanning and generating a 3D representation of an object, which may be carried out by the apparatus of Figure 1, will now be briefly described with reference to Figures 2, 3A and 3B.
[0014] At block 210 of Figure 2, an object that is to be scanned is secured to the support frame. For instance, as shown in Figure 3A, an object 30 may be placed in a volume of space inside the support frame 20 and secured to the support frame.
[0015] At block 220, the support frame 20 is rotated through a plurality of poses and scanned to capture scanning data of the object and support frame in each pose. A pose is a stable position of the support frame. As the support frame is rotated through a plurality of poses, the object rotates together with the support frame, as it is secured to the support frame. For instance, two possible poses of the support frame are shown in Figures 3A and 3B. The 3D scanner 10 may scan the support frame and object in each pose.
[0016] As the object 30 is secured to the support frame 20, the pose of the object corresponds to the pose of the support frame. Thus scanning the support frame and object in the first pose shown in Figure 3A may generate a first set of scanning data corresponding to a first pose. Scanning the support frame and object in the second pose shown in Figure 3B may generate a second set of scanning data corresponding to a second pose. While just two poses are shown in Figures 3A and 3B, the support frame may be rotated to further poses and scanned in each further pose.
[0017] At block 230 of Figure 2, the scanning data of the plurality of poses of the object and support frame is processed to generate a 3D representation of the object. Examples of how the scanning data may be processed to generate a 3D representation of the object are discussed in more detail later.
[0018] The support frame 20 may have a polyhedral shape with a plurality of stable sides. The support frame thus makes it relatively simple to stably position the object in a plurality of different poses. For example, it can be clearly seen that, if it were not for the support frame 20, the object 30 in the example of Figures 3A and 3B would not have a stable pose in the position shown in Figure 3B. Thus, if it were not for the support frame, it could be difficult for a fixed top-down 3D scanner to scan the lower surface of the object. However, when the object is supported by the support frame it has a stable pose in the position shown in Figure 3B and a fixed top-down scanner can scan the lower surface of the object by rotating the support frame to the pose shown in Figure 3B.
[0019] In general, if a support frame was not used, then for many objects it may be difficult to rotate the object through a plurality of poses and scan each pose. Most objects have a limited number of stable poses and scanning these may not provide sufficient information about the object to generate a good 3D representation. Further, even if the poses did give sufficient information, it may be hard to reconcile the scanning data from each pose without a support frame, as the number of stable poses of an object and the relationship between the stable poses will be different for each object. Similar difficulties may arise with a user using their hands to support the object in a plurality of different poses.
[0020] The support frame and object may be rotated through as many poses as there are stable poses of the support frame and scanned in each pose. For example a polyhedral support frame may be rotated to rest on each face in turn. In the case of a dodecahedral support frame there are potentially at least twelve stable poses, e.g. a pose corresponding to each face of the dodecahedron. The support frame and object may be rotated to and scanned in all stable poses, or some but not all of the stable poses. For instance, in the case of a dodecahedral support frame, instead of scanning in twelve different poses, the support frame and object may be rotated to and scanned in nine poses, or just six poses. Scanning the support frame and object in each pose generates a plurality of sets of scanning data, each set of scanning data corresponding to a respective pose of the support frame.
[0021] Examples of the support frame 20 will be described in more detail with reference to Figures 3A to 3D. The example of Figures 3A and 3B shows a support frame having a dodecahedral shape, while the example of Figures 3C and 3D shows a support frame having the shape of a cube. However, the present disclosure is not limited to this and in other examples the support frame may have a different polyhedral shape such as another regular or non-regular polyhedron, a hexahedron other than a cube, an octahedron etc.
[0022] In the examples of Figures 3A to 3D, the faces of the support frame are hollow and the edges are formed by elongate members. That makes placing of the object inside the support frame relatively easy and helps to keep the area of the support frame relatively small, so as to minimize obscuring of parts of the object by the support frame. In other examples the faces may be solid and for example formed of transparent material, but that makes placement of the object inside the frame more difficult and may introduce more optical complexity to the scanning.
[0023] As mentioned above, an object 30 is secured inside the support frame 20. Of course, the particular object shown in Figures 3A to 3D is just an example, and any object desired to be scanned may be placed inside the support frame. The object 30 may be secured inside the support frame by any suitable method. For example the object may be secured using support members, screws, string, adhesive, vacuum pads etc. In the example of Figures 3A and 3B a plurality of support members 40 extend inwardly from edges or vertices of the support frame and the object is secured to the support members. In the example of Figures 3C and 3D the object 30 is secured to a single support member 40 extending from the support frame.
[0024] The purpose of securing the object 30 to the support frame 20 is to prevent it from moving relative to the support frame, as the support frame is rotated through different poses. Otherwise it would be difficult to reconcile the scans of the object taken in different poses of the support frame. However, if the relative position of the support frame and the object remains the same as they are rotated, then in each set of scanning data, the pose of the object may be inferred from the pose of the support frame. For instance, even though the support frame and object have been rotated through 180 degrees between Figures 3A and 3B, the relative position of the object and support frame to each other remains the same.
[0025] In the method described above, securing the object to the support frame and rotating the object through a plurality of poses may, for instance, be carried out by a human. Scanning the object in each pose may, for instance, be carried out by the 3D scanner, under control of the user and/or by computer software or hardware. Processing the scanning data to generate a 3D representation of the object may be carried out by computer software or hardware.
[0026] As mentioned above, a computer system may re-construct part, or all, of the object based on the 3D representation by display on a display apparatus, or in some cases by 3D printing or milling etc. In some cases the 3D representation may allow reconstruction of the whole object as seen from any side. For instance if an object is placed in the dodecahedral support frame of Figure 3A and scanned in a pose corresponding to the support frame resting on each of its twelve faces, then other intermediate view-points can be interpolated from the 3D representation. The same is true for the hexahedral support frame of Figure 3C if poses corresponding to all six faces are scanned, although the reconstruction may be less accurate or complete. In the context of this disclosure, the term 3D representation also includes representations which combine data from a more limited number of view-points. For instance, a 3D representation generated from scans of just two adjoining faces of the support frame may allow part, but not all, of the object to be reconstructed. In one example the object is scanned in a plurality of poses, including at least a first pose in which the support frame rests on a first face and a second pose in which the support frame rests on a second face, wherein the first and second faces are separated from each other by at least 30 degrees.
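The 30-degree separation just mentioned can be checked from the face normals. A small illustrative sketch, not taken from the disclosure (the normals and threshold are whatever the frame geometry dictates; adjacent cube faces, for instance, are separated by 90 degrees):

```python
import numpy as np

def face_separation_deg(n1, n2):
    """Angle in degrees between two face normals of the support frame."""
    cosang = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Adjacent faces of a cube: face_separation_deg([0, 0, 1], [1, 0, 0]) -> 90.0
```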
[0027] Figure 4 is a flow diagram of an example method of generating a 3D representation of the object by a computer system which is to process the scanning data. For example, it may be implemented by machine readable instructions stored on a non-transitory storage medium and executed by a processor of the computing system 1.
[0028] At block 410 the system receives a plurality of sets of scanning data, each set of scanning data corresponding to a respective view-point of the object in the support frame. A view-point of the object is the object as seen from a particular angle. For instance a scan carried out on the object in the position shown in Figure 3A will give a view of the top of the object, while a scan carried out on the object in the position shown in Figure 3B will give a view of the bottom of the object. In the context of this disclosure each set of scanning data corresponds to a respective view-point of the object in the support frame and each view-point corresponds to a respective pose of the support frame.
[0029] At block 420 a pose of the support frame in each set of scanning data is determined. Determining a pose of the support frame may be based on knowledge of the three-dimensional shape and structure of the support frame, or based on special fiducial markers of the support frame, or both.
[0030] For instance, the system may recognize the support frame in the scanning data and determine a pose of the support frame based on known characteristics of the support frame. For example the system may store a 3D model of the support frame and compare the model with the scanning data to recognize the support frame and determine its pose. This may include recognizing edges and/or corners of the support frame and inferring a pose of the support frame from the position of the edges and/or corners of the support frame in the scanning data.
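As a concrete example of recovering a pose from recognized corners, one standard approach is the Kabsch (orthogonal Procrustes) method: given the stored model's vertex coordinates and the matching detected vertex positions, it recovers the rigid transform directly. This is a sketch of that well-known method under the assumption that vertex correspondences are already established (e.g. via fiducial markers); it is not taken from the disclosure itself.

```python
import numpy as np

def estimate_frame_pose(model_vertices, scanned_vertices):
    """Rigid transform (R, t) with scanned ~= R @ model + t, recovered
    by the Kabsch method from matched N x 3 vertex coordinates."""
    mc = model_vertices.mean(axis=0)
    sc = scanned_vertices.mean(axis=0)
    H = (model_vertices - mc).T @ (scanned_vertices - sc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflection
    t = sc - R @ mc
    return R, t
```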
[0031] In another example, the pose of the support frame may be determined based on fiducial markers of the support frame. In this case the support frame 20 has fiducial markers that are detectable by an imaging device and the pose of the support frame can be determined based on the position and/or orientation of the fiducial markers. The imaging device that detects the fiducial markers may be the 3D scanner, or may be another imaging device. Examples of fiducial markers will be explained later below.
[0032] In still another example, the pose of the support frame may be determined based on a combination of a known shape of the support frame and fiducial markers of the support frame.
[0033] At block 430 a pose of the object in each set of scanning data is determined based on the determined poses of the support frame. As explained above, as the object does not move relative to the support frame as the support frame is rotated, the pose of the support frame corresponds directly to the pose of the object in each view-point.
[0034] At block 440 a 3D representation of the object is generated based on the sets of scanning data and determined poses of the object in each set. This may include fusing together, i.e. combining, the scanning data from different view-points of the object based on a relationship between the determined poses of the object in each view-point.
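The fusion of block 440 can exploit the fact that the object is rigid with respect to the frame: mapping every view's data into the support frame's own coordinate system puts all views into one shared frame, after which they can be combined directly. A hedged sketch, assuming each view's frame pose (R, t) maps frame coordinates to scanner coordinates (e.g. as returned by a routine like the estimate_frame_pose sketch above):

```python
import numpy as np

def to_frame_coordinates(points_scanner, R, t):
    """Invert the frame pose: if scanner = R @ frame + t, then
    frame = R.T @ (scanner - t). Applying this per view expresses all
    scanned points in one shared coordinate system, ready to combine."""
    return (points_scanner - t) @ R
```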
[0035] Examples of fiducial markers will now be described in more detail. A fiducial marker is a marker that by itself, or in combination with other markers, uniquely identifies a face of the support frame. Fiducial markers may also be used to help determine an orientation of the support frame. By recognizing a fiducial marker, or several fiducial markers, imaging software can determine the pose of the support frame. For instance if each face was marked with a number, then the number would identify the face of the support frame and an orientation of the number may identify the orientation of the support frame.
[0036] In one example, the fiducial markers are on either the edges, or vertices, of the frame. This helps to minimize obscuring of the object by the fiducial markers. Figures 3A and 3B show an example in which the fiducial markers are on vertices of the support frame 20. As shown in these figures, the support frame 20 includes a plurality of edges 21 that connect at vertices 22 and define the faces 23, 24 of the polyhedron. Each vertex has a respective fiducial marker. This enables each face to be distinguished from other faces by the combination of fiducial markers at its vertices. For instance each face may have a combination of fiducial markers at its vertices which uniquely distinguishes it from other faces of the support frame.
[0037] The fiducial markers may be color coded, be marked with bar codes or have another recognizable identifying feature detectable by the 3D scanner, or another imaging device. In one example each fiducial marker is unique, while in other examples the fiducial markers are not unique, but are distinguishable from at least some of the other fiducial markers. The combination of fiducial markers should be sufficient to distinguish each face of the support frame from other faces of the support frame.
[0038] This will now be explained, by way of example, with reference to Figures 3C and 3D. The support frame of Figures 3C and 3D has a hexahedral shape and thus has eight vertices 22A to 22H. A respective fiducial marker is provided on each of the vertices. The vertex 22A has a red marker, the vertex 22B has a green marker, the vertex 22C has a yellow marker and the vertex 22D has a blue marker. Thus the face 24 can be recognized based on the four vertices 22A, 22B, 22C and 22D of that face having respectively red, green, yellow and blue markers. The other faces do not have that particular combination of fiducial markers and therefore the face 24 can be recognized based on the fiducial markers of its vertices. Meanwhile, the face 23 may be recognized based on its vertices 22E, 22F, 22G and 22H having red, blue, red and yellow markers respectively.
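As a minimal illustration of this lookup, the multiset of marker colours detected at a face's vertices can serve as a key into a table of face signatures. The two entries below follow the colour assignments of Figures 3C and 3D; the remaining entries, and the data structure itself, are assumptions made purely for illustration.

```python
# Hypothetical sketch: identify a face of the support frame from the
# multiset of marker colours detected at its four vertices.
FACE_SIGNATURES = {
    # face 24: vertices 22A-22D carry red, green, yellow and blue markers
    ("blue", "green", "red", "yellow"): "face 24",
    # face 23: vertices 22E-22H carry red, blue, red and yellow markers
    ("blue", "red", "red", "yellow"): "face 23",
    # ... signatures for the four remaining faces of the hexahedron
}

def identify_face(detected_colours):
    """Map four detected vertex-marker colours to a face label.

    Sorting makes the lookup independent of detection order; duplicate
    colours (as on face 23) are preserved by using a tuple, not a set.
    """
    key = tuple(sorted(detected_colours))
    return FACE_SIGNATURES.get(key)  # None if the combination is unknown
```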
[0039] For instance, a top-down scanner scanning the support frame and object in the pose shown in Figure 3C may determine, based on the colors or other indicia of the fiducial markers at the vertices 22A to 22D, that the support frame is in a pose in which face 24 is facing upwards towards the 3D scanner. When scanning the object and support frame in the pose of Figure 3D, the same top-down scanner may determine, based on the fiducial markers at the vertices 22E to 22H, that the support frame is in a pose in which face 23 is facing upwards towards the 3D scanner. In this way, by determining the poses of the support frame, the spatial relationship between the different sets of scanning data may be determined.
[0040] In addition to determining a pose of the support frame, the system may also determine an orientation of the support frame in each pose. For example, as shown in Figure 3E, the support frame 20 may have many different orientations while resting on face 23 with face 24 facing the 3D scanner. Figure 3E is a schematic view of the support frame as seen from above. The orientation of the support frame in each pose may be determined from the outline of the support frame and/or from the fiducial markers. If a fiducial marker is not symmetric, then the orientation of the fiducial marker may indicate the orientation of the support frame. Otherwise the orientation may be determined from the relative position of a plurality of fiducial markers; in the example of Figure 3E, the orientation may be determined from the relative position of the fiducial markers at the vertices. The orientation of the support frame in each pose forms part of the spatial relationship between the sets of scanning data corresponding to each view-point. The system may take this into account when fusing together the scanning data from the different view-points to generate a 3D representation of the object.
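As a hedged sketch of the marker-based variant, the in-plane orientation of the upward-facing side can be taken from the image positions of two identified vertex markers as seen by a top-down scanner; the function and its inputs are illustrative assumptions.

```python
# Illustrative sketch: in-plane orientation from two identified markers.
import math

def face_orientation(marker_a_xy, marker_b_xy):
    """Angle in radians of the edge joining two identified vertex markers,
    measured in the image plane of a top-down scanner; comparing this
    angle across poses gives the frame's rotation about the vertical axis."""
    dx = marker_b_xy[0] - marker_a_xy[0]
    dy = marker_b_xy[1] - marker_a_xy[1]
    return math.atan2(dy, dx)
```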
[0041] In other examples, the fiducial markers may be on edges of the support frame. For instance, some, or all, edges of the support frame may be color coded, or marked with a bar code or other identifiable feature. In that way a face and orientation of the support frame may be determined based on the combination of fiducial markers at its edges.
[0042] There are various ways in which the scanning data may be processed to generate a 3D representation of the object. One example method will now be described. Scanning the object and support frame in a plurality of poses results in plural sets of scanning data. Each set of scanning data relates to a particular pose and thus a particular view-point of the object and support frame.
[0043] In the context of this disclosure the term "scanning data" is used to mean any data derived from scanning by the 3D scanner. Thus the term "scanning data" may refer both to raw data from the initial scanning and to processed data at intermediate stages of the processing, up until the 3D representation of the object is generated.
[0044] Each set of scanning data may start with raw data including a combination of data relating to 2D images of the object, such as data gathered by a camera of the 3D scanner, and data from which 3D information may be derived, such as patterns of light projected onto the object, or time of flight data etc.
[0045] The raw data, including the 2D image data of a view-point, may be transformed, by imaging hardware or software, into 3D image data of the view-point. 3D image data is data capable of providing a 3D image of the object from a particular view-point. It is distinct from a true 3D representation, because 3D image data includes just one view-point of the object, e.g. as seen from above, and does not provide information about the object from other view-points. So, for example, 3D image data of a view-point from one side of the object will not include data about the object as seen from other sides.
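As a hedged illustration of this transformation, the sketch below back-projects a per-view depth map into 3D points under an assumed pinhole camera model; the intrinsics (fx, fy, cx, cy) and the function itself are assumptions for illustration, not part of the disclosure.

```python
# Illustrative sketch: one view's raw data (2D image plus depth map, e.g.
# from structured light or time of flight) converted to 3D image data.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a HxW depth map into an Nx3 array of 3D points in the
    view-point's own camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids, HxW each
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```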
[0046] The 3D image data for each view-point may then be fused together to form a 3D representation of the object. As discussed above, this fusing is based on the determined spatial relationship between the plurality of view-points. This
spatial relationship may be determined based on the pose and orientation of the support frame in each set of scanning data.
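A minimal sketch of such fusing, assuming each view's pose has already been determined as a 4x4 homogeneous matrix mapping view coordinates into a common frame-centred coordinate system, might look as follows; the simple point-cloud union shown is one possible fusing strategy, not the only one.

```python
# Illustrative sketch: fuse per-view 3D image data using determined poses.
import numpy as np

def fuse_views(view_points, view_poses):
    """Transform each view's Nx3 points by its 4x4 pose and concatenate
    them into one point cloud in a common coordinate system."""
    fused = []
    for pts, pose in zip(view_points, view_poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # Nx4 homogeneous
        fused.append((homo @ pose.T)[:, :3])             # apply the pose
    return np.vstack(fused)  # simple union of all views' points
```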
[0047] In the process of generating the 3D representation of the object, the support frame may be removed from the scanning data. This may be done in various ways and at various stages in the process. Some examples are given below with reference to Figures 5, 7 and 8.
[0048] In the method of Figure 5, the support frame is removed from the scanning data after the 2D image data is transformed to 3D image data.
[0049] At block 510 scanning data, including 2D image data of a plurality of view-points, is received.
[0050] For example, the scanning data may include a plurality of sets of scanning data, each set corresponding to a different view-point of the object and support frame and including 2D image data. Figure 6A is a visual representation of one such set of scanning data, which includes two-dimensional (2D) image data of one view-point of an object and support frame. While only one view-point is shown in Figure 6A, at this point in the process there are a plurality of sets of scanning data, each including 2D image data of a respective view-point of the object and support frame.
[0051] At block 520 the 2D image data of each view-point is converted to 3D image data of each view-point. Figure 6B is a visual representation of 3D image data of one view-point of the object and support frame. While only one view-point is shown in Figure 6B, at this point in the process there are a plurality of sets of scanning data, each including 3D image data of a respective view-point of the object and support frame.
[0052] The pose of the support frame in each set of scanning data may be determined, by using any of the methods described above. The pose of the support frame in each set of scanning data may be determined either after block 510, or after block 520 for example.
[0053] At block 530 the support frame is removed from the scanning data. This removal may be based on the determined pose of the support frame and/or on recognition of the support frame in the 2D image data or 3D image data. For instance, the support frame may be recognized based on a known shape of the support frame. In some examples the support frame may have a particular color, or other optical properties, to facilitate easy recognition and removal from the scanning data.
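For the colour-based variant, a simple threshold against the frame's known colour can suffice to drop frame points from a coloured per-view point cloud. The sketch below is illustrative only; the frame colour and tolerance are assumed values, not taken from the disclosure.

```python
# Illustrative sketch: remove support-frame points by colour threshold.
import numpy as np

FRAME_RGB = np.array([255, 140, 0])  # assumed distinctive frame colour

def remove_frame_points(points, colours, tolerance=40.0):
    """Keep only points whose colour is not close to the frame colour.

    points: Nx3 array of 3D points; colours: Nx3 array of RGB values.
    """
    distance = np.linalg.norm(colours.astype(float) - FRAME_RGB, axis=1)
    keep = distance > tolerance
    return points[keep], colours[keep]
```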
[0054] Thus after block 530, there are a plurality of sets of scanning data, each set including 3D image data of the object without the support frame. Figure 6C is a visual representation of 3D image data, corresponding to one view-point of the object, after the support frame has been removed. Figure 6D is a visual representation of the plurality of sets of 3D image data, each corresponding to a respective view-point of the object, after the support frame has been removed.
[0055] At block 540 the 3D image data of each view-point is fused together to generate a 3D representation of the object. Figure 6E shows the 3D representation of the object. The 3D representation may be in the form of a file and includes information about the visual characteristics of every side of the object. Based on the 3D representation the object may be displayed from any view-point, for example as a rotating 3D image on a display. In one example the 3D representation may be a point cloud.
[0056] The example method of Figure 7 is similar to that of Figure 5, except that the support frame is removed from the 2D image data instead of the 3D image data.
[0057] Thus at block 710 scanning data, including 2D image data of a plurality of view-points, is received, the same as for block 510 of Figure 5.
[0058] Not shown in Figure 7, but between blocks 710 and 720, a pose of the support frame in each set of scanning data (i.e. each view-point) is determined. The pose of the object in each set of scanning data may be determined based on the pose of the support frame.
[0059] At block 720 the support frame is removed from the 2D image data of each view-point. This is similar to block 530 of Figure 5, except that the support frame is removed from 2D image data rather than 3D image data. This removal may, for example, be based on a determined pose of the support frame, on calculating a projection of the known shape of the support frame onto the 2D image data, on a color of the support frame, or otherwise.
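An illustrative sketch of this projection-based removal, assuming the OpenCV library, a known Nx3 model of the frame (frame_vertices_3d) and a pose already estimated as rvec/tvec, is given below; painting over the projected vertices is one assumed masking strategy among several possible.

```python
# Illustrative sketch: project the known frame model into the 2D image
# and mask out the frame before the 2D to 3D conversion.
import numpy as np
import cv2

def mask_out_frame(image, frame_vertices_3d, rvec, tvec,
                   camera_matrix, dist_coeffs, radius=6):
    """Zero out image pixels around the projected frame vertices so the
    downstream 2D-to-3D conversion ignores the support frame."""
    pts2d, _ = cv2.projectPoints(frame_vertices_3d, rvec, tvec,
                                 camera_matrix, dist_coeffs)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for (x, y) in pts2d.reshape(-1, 2):
        cv2.circle(mask, (int(x), int(y)), radius, 255, thickness=-1)
    masked = image.copy()
    masked[mask.astype(bool)] = 0  # masked pixels treated as invalid
    return masked
```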
[0060] At block 730 the 2D image data of each view-point is converted to 3D image data of each view-point. This is similar to block 520 of Figure 5, except that the support frame has already been removed from the scanning data.
[0061] At block 740 the 3D image data of each view-point is fused together to generate a 3D representation of the object. This is the same as block 540 of Figure 5.
[0062] Figure 8 shows an example method which is similar to the method of Figure 5, except that the support frame is removed from the scanning data after the 3D images have been fused together.
[0063] Thus at block 810, scanning data, including 2D image data of a plurality of view-points, is received, the same as for block 510 of Figure 5.
[0064] At block 820 the 2D image data of each view-point is converted to 3D image data of each view-point, the same as block 520 of Figure 5.
[0065] The pose of the support frame, and thus pose of the object, in each view-point is determined either after block 810, or after block 820.
[0066] At block 830 the 3D image data of each view-point is fused together to generate data capable of providing a 3D representation of the object together with the support frame.
[0067] At block 840 the support frame is removed, so that the result is a 3D representation of the object without the support frame.
[0068] In each of the methods of Figures 5, 7 and 8, in each view-point the support frame may obscure part of the object. The methods may compensate for this by generating image data to fill in the missing parts which were obscured by the support frame. This compensation may be carried out at any appropriate stage, for example after removing the support frame, or as a refinement after an initial 3D representation of the object has been generated. The image data to compensate for the missing parts may be based on extrapolation, derived from image data of the obscured parts in other view-points, or a combination of both. In some cases, a part of the object which is obscured by the support frame in one view-point, may not be obscured in other view-points, thus enabling different view-points to compensate for each other.
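One way to see the cross-view compensation is to ask, for each region the frame obscured in some view, whether any other view already supplies points there; only the remaining uncovered regions need extrapolated data. The sketch below, using an assumed set of candidate gap positions and a nearest-neighbour query, illustrates that check; it is an assumption for illustration, not taken from the disclosure.

```python
# Illustrative sketch: find occluded regions not covered by any view.
import numpy as np
from scipy.spatial import cKDTree

def uncovered_gaps(fused_points, gap_centres, radius=0.005):
    """Return those candidate gap positions (Nx3, e.g. sampled where the
    frame obscured the object) that no view's points fall within `radius`
    of; only these still require extrapolated surface data."""
    tree = cKDTree(fused_points)
    dists, _ = tree.query(gap_centres)
    return gap_centres[dists > radius]
```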
[0069] The 3D representation may be stored in a file on a machine readable storage medium. Part, or all, of the object may be reconstructed by at least one of display on a display apparatus, 3D printing, or milling. Reconstruction of the object, or part of the object, may include interpolation between data points in the 3D representation where appropriate.
[0070] The methods of Figures 4, 5, 7 and 8 may be performed by imaging hardware and/or software in combination with appropriate hardware such as a processor. In one example the methods are computer-executed methods implemented on a computing system, such as that shown in Figure 1.
[0071] Figure 9 is a schematic diagram of a computing system 900 that is capable of implementing the above methods. The computing system includes a processor 910 and a non-transitory storage medium 920. The non-transitory storage medium may for example be a memory, hard disk, CD ROM etc. The computing system further includes an input/output (I/O) interface 913 to facilitate communication with external interfaces such as a display, keyboard, user device, 3D scanner etc. The system may receive scanning data from a 3D scanner, from an external storage medium, an external device or a network connection. The system is to process scanning data in the manner described above, for example with reference to any of Figures 4 to 8.
[0072] The non-transitory storage medium stores modules of machine readable instructions that are executable by the processor. Included in the modules of machine readable instructions are a pose determining module 922, a 2D to 3D module, a support frame removing module 926 and a fusing module 928. These are just examples and the non-transitory storage medium may store just some of these modules, or may store other modules as well.
[0073] The pose determining module 922 is to determine a pose of the support frame in received scanning data. The pose determining module 922 may also determine a pose of the object based on a pose of the support frame. The pose determining module 922 may further determine an orientation of the support frame in each pose.
[0074] The 2D to 3D module is to transform 2D image data to 3D image data as described above. The support frame removing module 926 is to remove the
support frame from the scanning data as described above. The fusing module 928 is to fuse together image data from the plurality of sets of scanning data, based on determined poses of the support frame in each set of scanning data, in order to generate a 3D representation of the object.
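Taken together, the modules might be orchestrated roughly as follows; treating each module as a callable, and the order of composition, are assumptions made purely for illustration.

```python
# Illustrative sketch: one possible orchestration of the Figure 9 modules.
def generate_3d_representation(scan_sets, pose_module, to3d_module,
                               removal_module, fusing_module):
    """Run the pipeline over a list of per-view scanning data sets."""
    poses = [pose_module(s) for s in scan_sets]       # pose determining
    views = [to3d_module(s) for s in scan_sets]       # 2D to 3D
    views = [removal_module(v, p)                     # frame removal
             for v, p in zip(views, poses)]
    return fusing_module(views, poses)                # fuse into one model
```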
[0075] All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the blocks of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or blocks are mutually exclusive.
[0076] Each feature disclosed in this specification (including any
accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
Claims
1. A method of three-dimensional (3D) scanning, comprising:-
securing an object inside a support frame having a plurality of stable sides;
rotating the support frame through a plurality of poses, wherein in each pose the support frame rests on one of said stable sides;
scanning the object in each pose to capture scanning data of the object in each pose; and
processing the scanning data of the plurality of poses to generate a 3D representation of the object.
2. The method of claim 1 wherein the processing includes removing the support frame from the scanning data so that the support frame does not appear in the 3D representation of the object.
3. The method of claim 1 comprising detecting fiducial markers of the support frame and determining poses of the object in the scanning data based on fiducial markers of the support frame.
4. An apparatus for supporting an object during three dimensional (3D) scanning, the apparatus including:-
a support frame defining a polyhedron having a plurality of faces;
a plurality of markers that are detectable by an imaging device, whereby the markers distinguish each face of the support frame from other faces of the support frame and provide an indication of the orientation of the support frame;
a volume of space inside the support frame to contain an object to be imaged; and
a support to support an object inside the support frame.
5. The apparatus of claim 4 wherein the support frame has a plurality of vertices and the markers are on vertices of the support frame, each marker to distinguish the vertex it is on from at least some of the other vertices.
6. The apparatus of claim 4 wherein the support frame has a plurality of edges and at least some edges include one of said plurality of markers to distinguish the edge from at least some of the other edges.
7. The apparatus of claim 4 wherein the markers are color coded.
8. The apparatus of claim 4 in combination with a three dimensional (3D) scanner.
9. A non-transitory machine readable storage medium, storing machine readable instructions that are executable by a processor to:
receive a plurality of sets of scanning data, each set of scanning data corresponding to a respective view-point of the object in a support frame;
determine a pose of the support frame in each set of scanning data based on markers of the support frame;
determine a pose of the object in each set of scanning data based on the pose of the support frame;
and generate a three dimensional (3D) representation of the object based on the scanning data and determined poses.
10. The non-transitory machine readable storage medium of claim 9 wherein the instructions to generate a 3D representation of the object include instructions to remove the support frame, such that the support frame does not appear in the 3D representation of the object.
11. The non-transitory machine readable storage medium of claim 9 wherein the instructions to generate a 3D representation of the object include instructions to compensate for parts of the object obscured by the support frame.
12. The non-transitory machine readable storage medium of claim 9 including instructions to determine an orientation of the support frame in each pose based on fiducial markers of the support frame.
13. The non-transitory machine readable storage medium of claim 9 wherein the instructions to generate a 3D representation of the object include
instructions to fuse together the scanning data from the plurality of view-points, based on a relationship between the determined poses of the object in each view-point, so as to form a 3D representation of the object.
14. The non-transitory machine readable storage medium of claim 9 wherein the instructions to generate a 3D representation of the object include
instructions to convert 2D image data of each view-point to 3D image data of each view-point and fuse together the 3D image data of the plurality of the viewpoints to form a 3D representation of the object.
15. The non-transitory machine readable storage medium of claim 14 wherein the instructions to generate a 3D representation of the object include instructions to remove the support frame either before or after converting the 2D image data of each view-point to 3D image data of each view-point.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2015/038056 WO2016209276A1 (en) | 2015-06-26 | 2015-06-26 | Three dimensional scanning |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2015/038056 WO2016209276A1 (en) | 2015-06-26 | 2015-06-26 | Three dimensional scanning |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2016209276A1 (en) | 2016-12-29 |
Family
ID=57586606
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2015/038056 Ceased WO2016209276A1 (en) | 2015-06-26 | 2015-06-26 | Three dimensional scanning |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2016209276A1 (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114964039A (en) * | 2022-05-10 | 2022-08-30 | 深圳市纵维立方科技有限公司 | Scanning method and device and electronic equipment |
| EP4379668A1 (en) * | 2022-11-29 | 2024-06-05 | Bandai Co., Ltd. | Generation of realistic vr video from captured target object |
| EP4617619A1 (en) * | 2024-03-11 | 2025-09-17 | The Boeing Company | Apparatuses and methods for large area wireless fuselage dent inspection |
| TWI903262B (en) | 2022-11-29 | 2025-11-01 | 日商萬代股份有限公司 | Image processing methods, information processing devices and computer programs |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6591512B2 (en) * | 1998-02-02 | 2003-07-15 | Daimlerchrysler | Device for use as a navigation link when measuring objects |
| US20040252811A1 (en) * | 2003-06-10 | 2004-12-16 | Hisanori Morita | Radiographic apparatus |
| US20080084589A1 (en) * | 2006-10-10 | 2008-04-10 | Thomas Malzbender | Acquiring three-dimensional structure using two-dimensional scanner |
| US20110007071A1 (en) * | 2009-07-08 | 2011-01-13 | Marcus Pfister | Method for Supporting Puncture Planning in a Puncture of an Examination Object |
| US20140111621A1 (en) * | 2011-04-29 | 2014-04-24 | Thrombogenics Nv | Stereo-vision system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15896562; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 15896562; Country of ref document: EP; Kind code of ref document: A1 |