US20160012629A1 - Apparatus and method for applying a two-dimensional image on a three-dimensional model - Google Patents
Apparatus and method for applying a two-dimensional image on a three-dimensional model
- Publication number
- US20160012629A1 (application US 14/771,553 / US201414771553A)
- Authority
- US
- United States
- Prior art keywords
- triangle
- dimensional image
- point
- vertices
- dimensional model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/21—Indexing scheme for image data processing or generation, in general involving computational photography
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/52—Parallel processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/016—Exploded view
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
A method is provided for applying a two-dimensional image to a three-dimensional model composed of a polygonal mesh. The method comprises generating an adjacency structure for all triangles within the mesh; identifying the triangle within the membrane that contains the desired centre point; calculating spatial distances between that triangle's three vertices and the desired centre point; checking each edge of the triangle to see whether the calculated distances indicate an intersection with the two-dimensional image and, if a collision is detected, adding the neighbouring triangle to a list of triangles to process; iteratively processing the triangles until the list is empty, for each triangle calculating the spatial data of its single unknown vertex and checking the triangle's two remaining edges for intersections, adding any intersected neighbour to the list; transforming the points and spatial data into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
Description
- The present application claims priority to PCT International Application No. PCT/GB2014/050694 filed on Oct. 3, 2014, which claims priority to British Patent Application No. GB1304321.1 filed Mar. 11, 2013, the entirety of the disclosures of which are expressly incorporated herein by reference.
- Not Applicable
- This invention relates to an apparatus and method for applying a two-dimensional image on a three-dimensional model. More specifically, but not exclusively, this invention relates to applying a label on a three-dimensional model.
- In product development, the visual appearance of the product is an important part of the marketing strategy. Accordingly, developers typically use computer editing suites to design their packaging. In some industries, containers for products are manufactured separately to the brand material, and the brand materials are subsequently fixed to the container. This is common practice in, for example, the drinks industry, in which bottles are manufactured to contain the product, and a label is produced separately and applied to the bottle.
- Separate computer editing suites exist for creating the label and creating a model of the bottle. To visualize the label on the bottle, a UV map of the model is created and the label is applied as a texture. This is computationally expensive and also requires a lot of user intervention to ensure the label is applied correctly. This leads to a significant delay for the computer to render the label onto the model. Accordingly, the label cannot be placed on the bottle and moved in real-time, as the processing requirements outweigh the processing power available.
- It is therefore desirable to alleviate some or all of the above problems.
- According to a first aspect of the invention, there is provided a method of applying a two-dimensional image to a three-dimensional model, the three-dimensional model composed of a polygonal mesh having a plurality of vertices, the method comprising the steps of: identifying a first point corresponding to a vertex of the plurality of vertices; identifying a second point corresponding to a vertex of the plurality of vertices and proximal to the first point; calculating spatial data between the second point and the first point; iteratively identifying successive points, wherein each successive point corresponds to a vertex of the plurality of vertices and is proximal to a previously identified point, and calculating spatial data between each successive point and the previously identified point, until a stop point is identified, the stop point being a point outside the boundary of the two dimensional image; transforming the points and spatial data into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
- The present invention provides a method which may extract UV-coordinates for a particular area of the three-dimensional model, corresponding to the area on which the two-dimensional image is to be applied. Accordingly, the computational power required to apply the two-dimensional image is significantly reduced, as the conventional step of UV-mapping the entire model is not carried out.
- On a computing apparatus adapted to carry out the method of the present invention, the user may therefore select a first point in an area on the three-dimensional model to apply the two-dimensional image, and the processor extracts spatial data from that area until it identifies a point on the model being a greater distance from the first point than the distance from the centre to the boundary of the two-dimensional image. As the processing power required has been reduced significantly, the two-dimensional image may therefore be applied to the three-dimensional model and displayed on the computing apparatus' display device in real time. This also allows the user to ‘drag’ the two-dimensional image across the three-dimensional model, i.e. selecting successive first points on the model, and the processor may apply the two-dimensional image to the model on the fly.
- The method may further comprise the step of generating a single three-dimensional model from a plurality of three-dimensional models to form a membrane, wherein the first point is on the membrane. The present invention may therefore apply the two-dimensional image to the membrane enveloping the models. The membrane may contain most or all the vertices of the three-dimensional model, or in the case of a plurality of three-dimensional models, each representing various parts of a complex product, the membrane may contain most or all the vertices of each three-dimensional model.
- The method may further comprise the step of applying a smoothing technique to the membrane. Thus, any sudden changes in curvature on the membrane (e.g. due to a gap in the membrane) may be smoothed over. Accordingly, the two-dimensional image may then be applied in a real-life manner. For example, in the case of a label being applied to a three-dimensional model, the label may be applied to the model such that it passes over the gap, in a similar manner to real-life.
- A computer program product comprising computer executable code which when executed on a computer may cause the computer to perform the method of the first aspect of the invention.
- According to a second aspect of the invention, there is provided a computing apparatus comprising a processor arranged to apply a two-dimensional image to a three-dimensional model, the three-dimensional model composed of a polygonal mesh having a plurality of vertices, the processor configured for: identifying a first point corresponding to a vertex of the plurality of vertices; identifying a second point corresponding to a vertex of the plurality of vertices and proximal to the first point; calculating spatial data between the second point and the first point; iteratively identifying successive points, wherein each successive point corresponds to a vertex of the plurality of vertices and is proximal to a previously identified point, and calculating spatial data between each successive point and the previously identified point, until a stop point is identified, the stop point being a greater distance from the first point than the distance from an outer edge to the boundary of the two-dimensional image; transforming the points and spatial data into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
- The computing apparatus may further comprise a display device, wherein the processor is configured to cause the display device to display the three-dimensional model including the applied two-dimensional image.
- These and other features and advantages of the various embodiments disclosed herein will be better understood with respect to the following description and drawings, in which like numbers refer to like parts throughout, and in which:
- FIG. 1 is a flow diagram representing an embodiment of a method of the present invention; and
- FIG. 2 is a schematic diagram illustrating a computing apparatus configured to carry out the steps of the method of FIG. 1.
- An embodiment of a method of applying a two-dimensional image to a three-dimensional model in a virtual environment will now be described with reference to FIG. 1. The embodiment will detail an example of placing a label (the two-dimensional image) onto a bottle (the three-dimensional model), but, on reviewing the following description, the skilled person will understand that the method may be applied to any type of two-dimensional image being placed on a three-dimensional model.
- As an initial step, a first image file representing a label is created. The first image file may be created in a suitable graphics editing computer program, such as Adobe Illustrator, and consists of a two-dimensional image representing a label (and shall hereinafter be referred to as the “label”). In this embodiment, the label is rectangular and includes a graphical portion enveloped by cutting lines.
- As a further initial step, a first model file representing a bottle is created. The first model file may be created in any suitable three-dimensional modelling software, such as 3DS Max or AutoCAD. The first model file consists of a three-dimensional representation of a bottle (and shall hereinafter be referred to as the “bottle”), defined by a triangular mesh. The skilled person will understand that the triangular mesh represents an outer surface of the bottle, and is a non-UV mapped model.
- The triangular mesh is used to create a set of data which records the adjacency information of all the triangles within the mesh; in one embodiment, a structure such as a half-edge data structure could be used. This allows traversal of the mesh starting from a single point on the mesh. The result is labelled the membrane, which consists of a set of triangles with adjacency information.
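- As a rough illustration only (not taken from the patent), the sketch below builds the kind of shared-edge adjacency record such a membrane relies on: for each triangle it lists the triangles that share an edge with it, which is enough to traverse the mesh outward from a starting triangle. A full half-edge structure would additionally store per-edge twin and next pointers; the function name and data layout here are this example's own.

```python
from collections import defaultdict

def build_adjacency(triangles):
    """Map each triangle index to the indices of triangles sharing an edge with it.

    `triangles` is a list of (i, j, k) vertex-index tuples. Two triangles are
    neighbours if they share an edge (an unordered vertex pair).
    """
    edge_to_tris = defaultdict(list)
    for t, (i, j, k) in enumerate(triangles):
        for a, b in ((i, j), (j, k), (k, i)):
            edge_to_tris[frozenset((a, b))].append(t)

    neighbours = {t: set() for t in range(len(triangles))}
    for tris in edge_to_tris.values():
        for t in tris:
            neighbours[t].update(u for u in tris if u != t)
    return neighbours

# Two triangles sharing the edge (1, 2):
# build_adjacency([(0, 1, 2), (1, 3, 2)]) -> {0: {1}, 1: {0}}
```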
- As explained in more detail below, a plurality of triangular meshes may be merged together (such as one representing a bottle and another representing a bottle cap).
- In this embodiment, a single membrane which includes the form of both the cap and bottle is created. This is achieved by starting with a simple shape which encompasses both cap and bottle, and shrinking it to the combined form of the bottle and cap. This single mesh is then used to generate a membrane, which is used to place the 2D label on both bottle and cap.
- In this embodiment, a ‘smoothing’ operation is applied to the membrane. Various smoothing techniques may be used, such as Laplacian polygon smoothing. Smoothing ensures that the membrane closely follows the real-life surface that the label will be applied to. For example, if the bottle contained a small gap, the membrane may be smoothed such that it passes over the gap without any change in curvature, closely representing how a real-life label would simply be pasted over the gap.
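- As a hedged sketch of one such technique, the snippet below implements the simplest form of Laplacian smoothing: each pass moves every vertex part of the way towards the average of its neighbours. The patent does not prescribe this particular formulation; the parameter names and neighbour representation are illustrative.

```python
import numpy as np

def laplacian_smooth(vertices, vertex_neighbours, iterations=10, lam=0.5):
    """Basic Laplacian smoothing of a mesh's vertex positions.

    `vertices` is an (N, 3) array of positions, `vertex_neighbours[i]` is a
    list of vertex indices adjacent to vertex i, and `lam` in (0, 1] controls
    how far each vertex moves towards its neighbours' average per iteration.
    """
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        averages = np.empty_like(v)
        for i, nbrs in enumerate(vertex_neighbours):
            averages[i] = v[nbrs].mean(axis=0) if len(nbrs) else v[i]
        v = v + lam * (averages - v)
    return v
```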
- The label is then applied to the membrane using the following algorithm. As a first step, an initial point of reference is taken as the centre of the label. A collision routine is used to correlate the centre of the label with an initial hit point on the triangle mesh. The initial hit point may be any arbitrary point on the triangle mesh.
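- The patent does not specify the collision routine. One common choice, assumed here purely for illustration, is a ray/triangle intersection test such as Möller–Trumbore: a ray is cast from the viewpoint (or cursor) through the scene, and the nearest triangle hit gives the initial hit point.

```python
import numpy as np

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection.

    All arguments are 3-component NumPy arrays. Returns the hit point, or
    None if the ray misses the triangle or points away from it.
    """
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray is parallel to the triangle plane
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return origin + t * direction if t >= 0.0 else None
```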
- In a next step of the algorithm (as shown in FIG. 1), the triangle containing the starting point on the membrane is chosen. The starting point is determined as the point on the membrane which is closest to the initial hit point on the triangular mesh. The distance (horizontal and vertical) between this point and each of the three vertices of the starting triangle is calculated and stored, and each of the three neighbouring triangles is then added to a set of triangles to be processed. An iterative process is then employed in the following manner. For each of the triangles in the triangle list, calculate the distance between the vertex for which the distance to the starting point is known and the vertex whose distance is still to be calculated, and store this distance. This distance is calculated using geodesic techniques. Check the remaining two edges of the triangle to see if the 2D coordinates of the triangle intersect the boundary rectangle of the two-dimensional label. If an edge does intersect, then add the triangle neighbour which uses that edge to the list of triangles to process. The process is run in an iterative manner until there are no more triangles in the list to process.
- If the 2D label is likely to overlap itself, the process can be split so that the label is processed as two separate labels; the processing order defines the overlap direction.
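- The outline below is a minimal sketch of the iterative traversal just described, assuming the adjacency and label-rectangle conventions introduced above. The geodesic step the patent leaves abstract is represented by an `unfold_vertex` callback that returns the 2D offset of the one vertex whose distance is still unknown; all names and the data layout are this example's own.

```python
from collections import deque

def collect_label_region(start_tri, start_offsets, tri_vertices,
                         neighbours, unfold_vertex, half_w, half_h):
    """Breadth-first walk over the membrane triangles covered by the label.

    `start_offsets` maps the starting triangle's three vertices to their 2D
    (horizontal, vertical) offsets from the label centre. Neighbouring
    triangles are queued only across edges that still touch the label
    rectangle [-half_w, half_w] x [-half_h, half_h].
    """
    offsets = dict(start_offsets)            # vertex index -> (x, y) offset
    queue = deque(neighbours[start_tri])
    seen = {start_tri, *neighbours[start_tri]}

    def edge_touches_label(a, b):
        (ax, ay), (bx, by) = offsets[a], offsets[b]
        # conservative bounding-box test on the edge's 2D endpoints
        return not (max(ax, bx) < -half_w or min(ax, bx) > half_w or
                    max(ay, by) < -half_h or min(ay, by) > half_h)

    while queue:
        tri = queue.popleft()
        for v in tri_vertices[tri]:
            if v not in offsets:              # at most one unknown vertex
                offsets[v] = unfold_vertex(tri, v, offsets)
        i, j, k = tri_vertices[tri]
        for a, b in ((i, j), (j, k), (k, i)):
            if edge_touches_label(a, b):
                for nbr in neighbours[tri]:
                    if nbr not in seen and {a, b} <= set(tri_vertices[nbr]):
                        seen.add(nbr)
                        queue.append(nbr)
    return offsets
```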
- In one embodiment, an additional 2D rotation can be applied to all the calculated distances, which would have the effect of rotating the label around the centre point.
- The skilled person will understand that this process results in a data set containing a set of points on the membrane and a set of distances (horizontal and vertical) between each of those points and the starting point. The data set may then be converted into normalized UV coordinates using a transform. For example, the distances are stored as vectors from the starting point to each point, and are converted to UV coordinates by a translation of 0.5 and scaling to the particular height and width of the label.
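- The following is one plausible reading of that transform (the exact ordering is not spelled out in the text): each offset is normalised by the label's width and height and then translated by 0.5 so that the label centre lands at UV (0.5, 0.5). Points whose UVs fall outside [0, 1] lie beyond the label's boundary.

```python
def offsets_to_uv(offsets, label_width, label_height):
    """Convert per-vertex (horizontal, vertical) offsets from the label centre
    into normalised UV coordinates, with the centre mapping to (0.5, 0.5)."""
    return {vertex: (0.5 + dx / label_width, 0.5 + dy / label_height)
            for vertex, (dx, dy) in offsets.items()}
```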
- The normalized UV coordinates may then be used to apply the label to the bottle. That is, the label may be treated as a texture which may then be correlated with the UV coordinates. Accordingly, a plurality of points within the label may be correlated with particular UV coordinates, such that the label may then be placed on the bottle in the virtual environment.
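- Purely as an illustration of how the extracted UVs are consumed, the helper below performs a nearest-neighbour lookup of the label image at a UV pair; a renderer would normally do this per fragment with proper filtering, and UVs outside [0, 1] are treated as "no label here" so the bottle surface shows through.

```python
def sample_label(label_pixels, uv):
    """Nearest-neighbour lookup of the label texture.

    `label_pixels` is an (H, W, 3) array (e.g. a NumPy image); `uv` is a
    normalised (u, v) pair. Returns a pixel, or None outside the label.
    """
    u, v = uv
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None
    h, w = label_pixels.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return label_pixels[y, x]
```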
- The skilled person will understand that the process is suitable for applying a two-dimensional image onto any three-dimensional model. The method extracts UV coordinates for only the part of the three-dimensional model to which the label is to be applied, and thus reduces the computational power required compared with prior art techniques which involve a fully UV-mapped model. The two-dimensional image may be applied to any portion of the three-dimensional model; the algorithm detailed above maps the centre of the label to a starting point on the membrane in that portion and extracts the UV coordinates from the membrane of the surrounding area.
- The skilled person will also realise that by reducing the computational power required to apply the two-dimensional image to the three-dimensional model, the image may be applied in real-time. That is, the user may slide the image across the membrane of the three-dimensional model and the computational power required is reduced to the point that the image may be applied to the three-dimensional model “on-the-fly”.
- As noted above, a plurality of triangular meshes may be merged together. In this manner, a label may be applied to a complex product (i.e. an object having multiple constituent parts, such as a bottle and bottle cap). Next, a smoothing operation may be applied to the membrane to smooth any gaps or grooves between the parts of the complex product to ensure the label is applied appropriately.
- In the above embodiments, the three-dimensional model consists of one or more triangular meshes for which a single membrane is generated. The skilled person will understand that any form of polygonal meshed model would be appropriate for the present invention. Furthermore, the skilled person will understand that the use of a membrane is preferable as it allows the user to apply a smoothing technique such that the two-dimensional image is applied in a real-life manner. However, the smoothing step is non-essential. For example, the algorithm to extract the UV coordinates may be based on the vertices of the triangular mesh rather than points of a smoothed membrane.
- In the above embodiment, a label is applied to a model of a bottle. However, the skilled person will understand that the method of the present invention is suitable for applying any two-dimensional image to any three-dimensional model. For example, the method would be suitable for applying graphics to the bodywork of a vehicle in a virtual environment.
- The method of the present invention described above may be implemented on a computing apparatus 1, such as a personal computer or mobile computing device (e.g. a tablet). As shown in FIG. 2, the computing device 1 includes a processor 3 configured for implementing the method of the present invention, and a display device 5 configured for displaying the three-dimensional model including the applied two-dimensional image.
- The skilled person will understand that any combination of elements is possible within the scope of the invention, as claimed.
Claims (9)
1. A method of applying a two-dimensional image to a three-dimensional model, the three-dimensional model including a polygon mesh having a plurality of vertices, the method comprising the steps of:
creating a membrane by generating an initial adjacency structure for all triangles within the polygon mesh;
identifying a triangle having three vertices within the membrane which contains a desired centre point of the two-dimensional image;
calculating spatial distances between the three vertices of the triangle and desired centre point of the two-dimensional image;
checking each triangle edge of the identified triangle to see if the calculated distances show an intersection on the two-dimensional image, and if a collision is detected adding a neighbouring triangle to a list of triangles to process;
iteratively processing all triangles in the list until the list is empty;
calculating spatial data of the single unknown vertex within the triangle;
checking two edges of the triangle to see if the calculated distances show an intersection with the two-dimensional image, and if an intersection occurs adding this new triangle to the triangle list;
transforming points corresponding to the plurality of vertices and spatial data into UV-coordinates; and
applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
2. A method as claimed in claim 1 , comprising the step of applying a smoothing technique to the membrane.
3. A method as claimed in claim 1 , wherein the three-dimensional model is composed of a plurality of polygonal meshes and the membrane is applied to the plurality of polygonal meshes.
4. A computer program product comprising computer executable code which when executed on a computer causes the computer to perform the method of claim 1 .
5. A computing apparatus comprising a processor arranged to apply a two-dimensional image to a three-dimensional model, the three-dimensional model including a polygon mesh having a plurality of vertices, the processor being configured for:
creating a membrane by generating an initial adjacency structure for all triangles within the polygon mesh;
identifying a triangle having three vertices within the membrane which contains a desired centre point of the two-dimensional image;
calculating spatial distances between the three vertices of the triangle and desired centre point of the two-dimensional image;
checking each triangle edge of the identified triangle to see if the calculated distances show an intersection on the two-dimensional image, and if a collision is detected adding a neighbouring triangle to a list of triangles to process;
iteratively processing all the triangles in the list until the list is empty;
calculating the spatial data of a single unknown vertex within the triangle;
checking the two edges of the triangle to see if the calculated distances show an intersection with the two-dimensional image, and if an intersection occurs adding this new triangle to the triangle list;
transforming points corresponding to the plurality of vertices and spatial data into UV-coordinates; and
applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
6. A computing apparatus as claimed in claim 5 , further configured for applying a smoothing technique to the membrane.
7. A computing apparatus as claimed in claim 5 , wherein the three-dimensional model is composed on a plurality of polygonal meshes to form the membrane.
8. A computing apparatus as claimed in claim 5 , further comprising a display device, wherein the processor is configured to cause the display device to display the three-dimensional model including the applied two-dimensional image.
9. A method of applying a two-dimensional image to a three-dimensional model, the three-dimensional model including a polygonal mesh having a plurality of vertices, the method comprising the steps of:
identifying a first point corresponding to a vertex of the plurality of vertices;
identifying a second point corresponding to a vertex of the plurality of vertices and proximal to the first point;
calculating spatial data between the second point and the first point;
iteratively identifying successive points, wherein each successive point corresponds to a vertex of the plurality of vertices and is proximal to a previously identified point, and calculating spatial data between each successive point and the previously identified point, until a stop point is identified, the stop point being a point outside the boundary of the two-dimensional image;
transforming the points and spatial data into UV-coordinates; and
applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/645,935 US10275929B2 (en) | 2013-03-11 | 2017-07-10 | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GBGB1304321.1A GB201304321D0 (en) | 2013-03-11 | 2013-03-11 | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
| GB1304321.1 | 2013-03-11 | ||
| PCT/GB2014/050694 WO2014140540A1 (en) | 2013-03-11 | 2014-03-10 | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/GB2014/050694 A-371-Of-International WO2014140540A1 (en) | 2013-03-11 | 2014-03-10 | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/645,935 Continuation US10275929B2 (en) | 2013-03-11 | 2017-07-10 | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20160012629A1 true US20160012629A1 (en) | 2016-01-14 |
Family
ID=48189693
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/771,553 Abandoned US20160012629A1 (en) | 2013-03-11 | 2014-10-03 | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
| US15/645,935 Active US10275929B2 (en) | 2013-03-11 | 2017-07-10 | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/645,935 Active US10275929B2 (en) | 2013-03-11 | 2017-07-10 | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
Country Status (5)
| Country | Link |
|---|---|
| US (2) | US20160012629A1 (en) |
| EP (1) | EP2973421B1 (en) |
| CN (1) | CN105190702B (en) |
| GB (1) | GB201304321D0 (en) |
| WO (1) | WO2014140540A1 (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108738336B (en) | 2016-01-28 | 2021-08-10 | 西门子医疗保健诊断公司 | Method and apparatus for multi-view characterization |
| GB201715952D0 (en) | 2017-10-02 | 2017-11-15 | Creative Edge Software Llc | 3D computer modelling method |
| EP3667623A1 (en) * | 2018-12-12 | 2020-06-17 | Twikit NV | A system for optimizing a 3d mesh |
| CN112652033B (en) * | 2019-10-10 | 2025-02-14 | 中科星图股份有限公司 | A two-dimensional and three-dimensional integrated polygonal graphics generation method, device and storage medium |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5903270A (en) * | 1997-04-15 | 1999-05-11 | Modacad, Inc. | Method and apparatus for mapping a two-dimensional texture onto a three-dimensional surface |
| US7728848B2 (en) * | 2000-03-28 | 2010-06-01 | DG FastChannel, Inc. | Tools for 3D mesh and texture manipulation |
| US7065242B2 (en) * | 2000-03-28 | 2006-06-20 | Viewpoint Corporation | System and method of three-dimensional image capture and modeling |
| US7538764B2 (en) * | 2001-01-05 | 2009-05-26 | Interuniversitair Micro-Elektronica Centrum (Imec) | System and method to obtain surface structures of multi-dimensional objects, and to represent those surface structures for animation, transmission and display |
| WO2006088429A1 (en) * | 2005-02-17 | 2006-08-24 | Agency For Science, Technology And Research | Method and apparatus for editing three-dimensional images |
| JP2009042811A (en) * | 2007-08-06 | 2009-02-26 | Univ Of Tokyo | Three-dimensional shape development apparatus, three-dimensional shape development method, and program for three-dimensional shape development |
| US9305390B2 (en) * | 2009-02-24 | 2016-04-05 | Textron Innovations Inc. | System and method for mapping two-dimensional image data to a three-dimensional faceted model |
| WO2012037157A2 (en) * | 2010-09-13 | 2012-03-22 | Alt Software (Us) Llc | System and method for displaying data having spatial coordinates |
| CN102521852B (en) * | 2011-11-24 | 2015-03-25 | 中国船舶重工集团公司第七0九研究所 | Showing method for target label independent of three-dimensional scene space |
| GB201304321D0 (en) | 2013-03-11 | 2013-04-24 | Creative Edge Software Llc | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
- 2013
  - 2013-03-11: GB GBGB1304321.1A patent/GB201304321D0/en not_active Ceased
- 2014
  - 2014-03-10: WO PCT/GB2014/050694 patent/WO2014140540A1/en active Application Filing
  - 2014-03-10: EP EP14713886.1A patent/EP2973421B1/en active Active
  - 2014-03-10: CN CN201480025336.1A patent/CN105190702B/en active Active
  - 2014-10-03: US US14/771,553 patent/US20160012629A1/en not_active Abandoned
- 2017
  - 2017-07-10: US US15/645,935 patent/US10275929B2/en active Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6124858A (en) * | 1997-04-14 | 2000-09-26 | Adobe Systems Incorporated | Raster image mapping |
| US7027050B1 (en) * | 1999-02-04 | 2006-04-11 | Canon Kabushiki Kaisha | 3D computer graphics processing apparatus and method |
| US7280106B2 (en) * | 2002-10-21 | 2007-10-09 | Canon Europa N.V. | Apparatus and method for generating texture maps for use in 3D computer graphics |
| US7511718B2 (en) * | 2003-10-23 | 2009-03-31 | Microsoft Corporation | Media integration layer |
| US8654121B1 (en) * | 2009-10-02 | 2014-02-18 | Pixar | Structured polygonal mesh retesselation |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10275929B2 (en) | 2013-03-11 | 2019-04-30 | Creative Edge Software Llc | Apparatus and method for applying a two-dimensional image on a three-dimensional model |
| US20180158241A1 (en) * | 2016-12-07 | 2018-06-07 | Samsung Electronics Co., Ltd. | Methods of and devices for reducing structure noise through self-structure analysis |
| US10521959B2 (en) * | 2016-12-07 | 2019-12-31 | Samsung Electronics Co., Ltd. | Methods of and devices for reducing structure noise through self-structure analysis |
| US10521970B2 (en) * | 2018-02-21 | 2019-12-31 | Adobe Inc. | Refining local parameterizations for applying two-dimensional images to three-dimensional models |
| US20220203736A1 (en) * | 2019-09-20 | 2022-06-30 | Hewlett-Packard Development Company, L.P. | Media modification marks based on image content |
| US20220035970A1 (en) * | 2020-07-29 | 2022-02-03 | The Procter & Gamble Company | Three-Dimensional (3D) Modeling Systems and Methods for Automatically Generating Photorealistic, Virtual 3D Package and Product Models from 3D and Two-Dimensional (2D) Imaging Assets |
| US12204828B2 (en) * | 2020-07-29 | 2025-01-21 | The Procter & Gamble Company | Three-dimensional (3D) modeling systems and methods for automatically generating photorealistic, virtual 3D package and product models from 3D and two-dimensional (2D) imaging assets |
| CN112614046A (en) * | 2020-12-17 | 2021-04-06 | 武汉达梦数据技术有限公司 | Method and device for drawing three-dimensional model on two-dimensional plane |
Also Published As
| Publication number | Publication date |
|---|---|
| EP2973421B1 (en) | 2021-02-24 |
| US20170309058A1 (en) | 2017-10-26 |
| WO2014140540A1 (en) | 2014-09-18 |
| US10275929B2 (en) | 2019-04-30 |
| GB201304321D0 (en) | 2013-04-24 |
| CN105190702A (en) | 2015-12-23 |
| CN105190702B (en) | 2018-06-26 |
| EP2973421A1 (en) | 2016-01-20 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: CREATIVE EDGE SOFTWARE LLC, DELAWARE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: JENNINGS, AMY; JENNINGS, STUART; REEL/FRAME: 037880/0425. Effective date: 20160127 |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |