
NZ736107B2 - Virtual trying-on experience - Google Patents

Virtual trying-on experience

Info

Publication number
NZ736107B2
Authority
NZ
New Zealand
Prior art keywords
user
model
models
face
item
Prior art date
Application number
NZ736107A
Other versions
NZ736107A (en)
Inventor
Jerome Boisson
David Mark Groves
Original Assignee
Specsavers Optical Group Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB1503831.8A external-priority patent/GB2536060B/en
Application filed by Specsavers Optical Group Limited filed Critical Specsavers Optical Group Limited
Publication of NZ736107A publication Critical patent/NZ736107A/en
Publication of NZ736107B2 publication Critical patent/NZ736107B2/en


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping
    • G06Q30/0643 Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping graphically representing goods, e.g. 3D product representation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/16 Cloth
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models

Abstract

There is provided a method of providing a virtual trying-on experience to a user comprising extracting at least one image from a video including a plurality of video frames of a user in different orientations to provide at least one extracted image; acquiring 3D models of an item to be tried on the user and a generic representation of a human; combining the acquired 3D models and at least one extracted image as the background; and generating an output image representative of the virtual trying-on experience. There is also provided apparatus to carry out the methods.

Description

(12) Granted patent specificaon (19) NZ (11) 736107 (13) B2 (47) Publicaon date: 2021.12.24 (54) VIRTUAL TRYING-ON EXPERIENCE (51) Internaonal Patent Classificaon(s): G06T 19/00 (22) Filing date: (73) Owner(s): 2016.03.07 SPECSAVERS OPTICAL GROUP LIMITED (23) Complete caon filing date: (74) Contact: 2016.03.07 AJ PARK (30) Internaonal Priority Data: (72) Inventor(s): GB 1.8 2015.03.06 GROVES, David Mark BOISSON, Jerome (86) Internaonal Applicaon No.: 2016/050596 (87) Internaonal Publicaon number: WO/2016/142668 (57) Abstract: There is provided a method of providing a virtual trying on experience to a user comprising extracng at least one image from a video including a plurality of video frames of a user in different aons to provide at least one extracted imageacquiring 3D models of an item to be tried on the user and a generic representaon of a human, combining the acquired 3D models and at least one extracted image as the background, and ng an output image entave of the virtual trying-on experience. There is also provided apparatus to carry out the methods.
NZ 736107 B2 Virtual Trying-On Experience
Field of the invention
Embodiments of the invention relate to a computer implemented method for providing a visual representation of an item being tried on a user.
Summary
The present invention provides a method of providing a virtual trying on experience to a user as described in the accompanying claims.
In a particular aspect, the present invention provides a method of providing a virtual trying on experience to a user comprising: extracting at least one image from a video including a plurality of video frames of a user in different orientations to provide at least one extracted image; acquiring a 3D model of an item to be tried on the user and a 3D model of a generic representation of a human; and combining the acquired 3D models with at least one extracted image, having the at least one extracted image as a background, to generate an output image representative of the virtual trying-on experience; wherein each of the 3D models comprises an origin point, and the combining the acquired 3D models with the at least one extracted image comprises aligning the origin points of each of the 3D models in 3D space.
In another particular aspect, the present invention provides a method of providing a virtual trying on experience for a user, comprising: receiving a plurality of video frames of a user's head in different orientations to provide captured oriented user images; identifying an origin point on a 3D model of a generic user; identifying an origin point on a 3D model of a user-selected item to be tried on; aligning the origin point of the 3D model of the generic user and the 3D model of an item to be tried on; combining each captured oriented user image as a background with a generated representation of the user-selected item to be tried on based on the aligned 3D model of the generic user and the 3D model of the item to provide a series of combined images representative of the virtual trying on experience; and displaying the series of combined images.
Specific examples of the invention are set forth in the dependent claims.
These and other aspects of the invention will be apparent from and elucidated with reference to the examples described hereinafter.
Brief description of the drawings
Further details, aspects and embodiments of the invention will be described, by way of example only, with reference to the drawings. In the drawings, like reference numbers are used to identify like or functionally similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
Figure 1 shows an example method of providing a virtual trying on experience to a user according to an example embodiment of the invention;
Figure 2 shows first and second more detailed portions of the method of Figure 1, according to an example embodiment of the invention;
Figure 3 shows a third more detailed portion of the method of Figure 1, according to an example embodiment of the invention;
Figure 4 shows a high level diagram of the face tracking method, according to an example embodiment of the invention;
Figure 5 shows how the method retrieves faces in video sequences, according to an example embodiment of the invention;
Figure 6 shows a detected face, according to an example embodiment of the invention;
Figure 7 shows detected features of a face, according to an example embodiment of the invention;
Figure 8 shows a pre-processing phase of the method that has the objective to find the most reliable frame containing a face from the video sequence, according to an example embodiment of the invention;
Figure 9 shows an optional face model building phase of the method that serves to construct a suitable face model representation, according to an example embodiment of the invention;
Figure 10 shows a processed video frame along with its corresponding (e.g. generic) 3D model of a head, according to an example embodiment of the invention;
Figure 11 shows a sequential face tracking portion of the disclosed method, according to an example embodiment of the invention;
Figure 12 shows an exemplary embodiment of computer hardware on which the disclosed method may be run;
Figure 13 shows another exemplary embodiment of computer hardware on which the disclosed method may be run.
Detailed description
Because the illustrated embodiments of the present invention may for the most part be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary to illustrate the invention to a person skilled in the relevant art. This is done for the understanding and appreciation of the underlying concepts of the present invention, without unduly obfuscating or distracting from the teachings of the present invention.
Examples provide a method, apparatus and system for generating "a virtual try-on experience" of an item on a user, such as a pair of spectacles/glasses being tried on a user’s head.
The virtual try-on experience may be displayed on a computer display, for example on a smartphone or tablet screen. Examples also provide a computer program (or "app") comprising instructions, which when executed by one or more processors, carry out the disclosed methods. The disclosed virtual try on experience methods and apparatuses allow a user to see what a selected item would look like on their person, typically their head. Whilst the following has been cast in terms of trying on glasses on a human head, similar methods may also be used to virtually try on any other readily 3D model-able items that may be worn or attached to another object, typically a human object, including, but not limited to: earrings, tattoos, shoes, makeup, and the like.
Examples may use one or more generic 3D models of a human head, together with one or more 3D models of the item(s) to be tried on, for example models of selected pairs of glasses. The one or more generic 3D models of a human head may include a female generic head and a male generic head. In some embodiments, different body shape generic head 3D models may be provided and selected between, to be used in the generation of the "virtual try-on experience". For example, the different body shape generic heads may comprise different widths and/or shapes of heads, or hat sizes.
According to some examples, the 3D models (of both the generic human heads and/or the items to be placed on the head) may be placed into a 3D space by reference to an origin. The origin of the 3D models may be defined as a location in the 3D space from which the coordinates of each 3D model are to be referenced, in order to locate any given portion of the 3D model. The origin of each model may correspond to one another, and to a specified nominally universal location, such as the location of a bridge of the nose. Thus, the origins of the 3D models may be readily co-located in the 3D space, together with a corresponding location of the item to be virtually tried on, so that they may be naturally/suitably aligned. There may also be provided one or more attachment points for the item being tried on to the 3D model of a generic human head. In the trying on of glasses example, these may be, for example, where the arms of the glasses rest on a human ear.
The origin is not in itself a point in the model. It is merely a convention by which points in the 3D models (both of the generic human head, but also of any item being tried on, such as glasses) may be referenced and suitably aligned. This is to say, examples may place both 3D models (i.e. the selected generic human head and the item being tried on) into the same 3D space in a suitable (i.e. realistic) alignment by reference to the respective origins. The 3D model of the head may not be made visible, but only used for occlusion or other calculations of the 3D model of the glasses. The combined generic head (invisible) and glasses 3D models (suitably occluded) can then be placed on a background comprising an extracted image of the user taken from a video, so that the overall combination of the rendered 3D model of the glasses and the extracted video gives the impression of the glasses being worn by the user. This combination process, as well as the occlusion calculations using the "invisible" generic human head, may be repeated for a number of extracted images at different nominal rotations.
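By way of illustration only, a minimal sketch of this origin-based alignment is given below. The Model3D container, its vertex array and the example coordinates are assumptions made for the sketch; the method itself does not prescribe any particular data layout.

```python
import numpy as np

# A 3D model here is simply a set of vertices plus an origin: the reference
# location (e.g. the bridge of the nose) from which coordinates are measured.
class Model3D:
    def __init__(self, vertices, origin):
        self.vertices = np.asarray(vertices, dtype=float)  # N x 3 vertex array
        self.origin = np.asarray(origin, dtype=float)      # 3-vector

def place_in_scene(model, scene_origin):
    """Translate a model so that its origin lands on scene_origin.

    Aligning the head model and the glasses model is then simply a matter of
    placing both with the same scene_origin.
    """
    offset = scene_origin - model.origin
    return model.vertices + offset

# Hypothetical example: co-locate a generic head and a glasses model so that
# both nose-bridge origins coincide in the shared 3D space.
head = Model3D(vertices=[[0, 0, 0], [10, 0, 0]], origin=[5, 0, 0])
glasses = Model3D(vertices=[[0, 0, 0], [6, 0, 0]], origin=[3, 0, 0])
scene_origin = np.array([0.0, 0.0, 0.0])
head_world = place_in_scene(head, scene_origin)
glasses_world = place_in_scene(glasses, scene_origin)
```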
By using pre-defined generic 3D head models, examples do not need to generate a 3D model of a user's head, and therefore reduce the processing overhead requirements. However, the utility of the examples is not materially affected, as key issues pertaining to the virtual try on experience are maintained, such as occlusion of portions of the glasses by head extremities (e.g. eyes, nose, etc.) during rotation, as discussed in more detail below.
Examples map the 3D models in the 3D space onto suitably captured and arranged images of the actual user of the system. This mapping process may include trying to find images of a user's head having pre-defined angles of view matching predetermined angles. This mapping may comprise determining, for a captured head rotation video, a predetermined number of angles of head between the two maximum angles of head rotation contained within the captured head rotation video. In such a way, examples enable use of the specific captured head rotation video, regardless of whether or not a pre-determined preferable maximum of head rotation has occurred (i.e. these examples would not require the user to re-capture a new video because the user had not turned their head sufficiently in the original capturing of their head rotation). Thus, examples are more efficient than the prior art that requires a minimum head rotation.
In examples, by establishing angles of images based on the maximum angle of user head rotation in a captured video (and therefore under the direct control of the user), the viewing angle(s) may be user-determined. This enables the system to portray the generated 3D try on experience in a way particularly desirable to the user, as opposed to only being portrayed in a generic, pre-determined manner that the user must abide by in order for the system to work. Thus, examples are more "natural" to use than the prior art.
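As a rough illustration of how such user-determined viewing angles could be derived, the sketch below evenly splits whatever sweep was actually captured into a fixed number of target angles. The nine-angle count and the sign convention (negative meaning left) are assumptions of the sketch, not requirements of the method.

```python
import numpy as np

def target_angles(max_left_deg, max_right_deg, num_views=9):
    """Split the measured head sweep into a fixed number of view angles.

    max_left_deg is the furthest rotation to the left (given as a negative
    angle), max_right_deg the furthest to the right (positive). The sweep does
    not need to be symmetric or to reach any pre-set maximum.
    """
    return np.linspace(max_left_deg, max_right_deg, num_views)

# E.g. a user who only turned 35 degrees left and 45 degrees right:
print(target_angles(-35.0, 45.0))  # nine evenly spaced angles over that sweep
```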
There now follows a detailed description of an exemplary embodiment of the present invention, in particular an embodiment in the form of a software application (often simply referred to as an "app") used on a smartphone device. The example software application is in the form of a virtualized method for a human user to try on glasses, including a face tracking portion described in more detail below, where face tracking is used in an application according to examples to 'recognize' a user's face (i.e. compute a user's head/face pose).
Examples of the disclosed method may comprise extracting a still image(s) of a user (or just the user's head portion) from a captured video of the user. A movement and/or orientation of the user's head, i.e. position and viewing direction, may be determined from the extracted still image(s). The image of the user may be used as a background image for a 3D space including 3D models of the item, such as glasses, to be virtually tried on, thereby creating the appearance of the item being tried on the user's actual captured head. A 3D model of a generic head, i.e. not of the actual user, may also be placed into the 3D space, overlying the background image of the user. In this way, the generic human head model may be used as a mask, to allow suitable occlusion culling (i.e. hidden surface determination) to be carried out on the 3D model of the item being tried on, in relation to the user's head. Use of a generic human head model provides higher processing efficiency/speed, without significantly reducing accuracy of the end result.
An origin of the 3D model of a generic human head may be located at a pre-determined point in the model, for example, corresponding to a bridge of a nose in the model. Other locations and numbers of reference points may be used instead. A position at which the 3D model is located within the 3D space may also be set with reference to the origin of the model, i.e. by specifying the location of the origin of the 3D model within the 3D space. The orientation of the 3D model may correspond to the determined viewing direction of the user.
A 3D model of the selected item to be tried on, for example the selected pair of glasses, may be placed into the 3D space. An orientation of the glasses model may correspond to the viewing direction of the user. An origin of the 3D glasses model may be defined and located at a point corresponding to the same point as the 3D model of the generic human head, for example also being at a bridge of a nose in the glasses model. A position at which the 3D glasses model is located within the 3D space may be set with reference to the origin of the glasses 3D model, i.e. by specifying the location of the origin of the 3D model within the 3D space. The origin of the 3D model of the glasses may be set so that the glasses substantially align to the normal wearing position on the 3D model of the human head.
An image of the glasses located on the user's head may then be generated based on the 3D models of the glasses and generic head (which may be used to mask portions of the glasses model which should not be visible and to generate shadow) and the background image of the user.
The position of the glasses relative to the head may be adjusted by moving the location of the 3D glasses model in the 3D space, i.e. by setting a different location of an origin of the model, or by moving the origin of the 3D glasses model out of alignment with the origin of the 3D model of a generic human head.
The example application also may include video capture, which may refer to capturing a video of the user's head and splitting that video up into a plurality of video frames. In some examples, the video capture may occur outside of the device displaying the visualization. Each video frame may therefore comprise an image extracted from a video capture device or a video sequence captured by that or another video capture device. Examples may include one or more 3D models, where a 3D model is a 3D representation of an object. In specific examples, the 3D models may be of a generic human head and of an item to be visualized upon the head, such as a pair of glasses. A 3D model as used herein may comprise a data set including one or more of: a set of locations in a 3D space defining the item being modelled, a set of data representing a texture or material of the item (or portion thereof) in the model, a mesh of data points defining the object, an origin or reference point for the model, and other data useful in defining the physical item to which the 3D model relates. Examples may also use a scene, where the scene may contain one or more models, including, for example, all the meshes for the 3D models used to visualize the glasses on a user's head. Other data sets that may also be used in some examples include: a material data set describing how a 3D model should be rendered, often based upon textures; a mesh data set that may be the technical 3D representation of the 3D model; and a texture data set that may include a graphic file that may be applied to a 3D model in order to give it a texture and/or a color.
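For illustration, such a 3D model data set might be held in a structure like the one below; the field names and types are purely illustrative and are not taken from the specification.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Item3DModel:
    """Illustrative container for the data sets a 3D model may comprise."""
    name: str                                     # e.g. a glasses frame identifier
    mesh: List[Tuple[float, float, float]]        # vertex locations in 3D space
    origin: Tuple[float, float, float]            # reference point for alignment
    texture_file: str = ""                        # graphic file giving texture/colour
    material: dict = field(default_factory=dict)  # how the model should be rendered

@dataclass
class Scene:
    """A scene simply groups the models needed to visualise the try-on."""
    models: List[Item3DModel] = field(default_factory=list)
```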
Data sets that may be used in some embodiments may include CSV (for Comma Separated Values), which is an exchange format used in software such as Excel™; JSON (for JavaScript Object Notation), which is an exchange format used mainly on the Web; and metrics, which are a way to record, for example, the usage of the application. Other data sets are also envisaged for use in examples, and the invention is not so limited.
Example embodiments may comprise code portions or software modules including, but not limited to: code portions provided by or through a Software Development Kit (SDK) of the target Operating System (OS), operable to enable execution of the application on that target OS, for example portions provided in the iOS SDK environment, XCode®; 3D model rendering, lighting and shadowing code portions (for example, for applying the glasses on the user's face); face tracking code portions; and metric provision code portions.
The software application comprises three core actions: video recording of the user's face with face-tracking; 3D model download and interpretation/representation of the 3D models (of the generic user head and glasses being visualized on the user's head); and display of the combination of the 3D models and recorded video imagery. Examples may also include cloud / web enabled services catalog handling, thereby enabling onward use of the visualization to the user, for example for providing the selected glasses to the user for real-world trying on and/or sale.
Figure 1 shows an example method 100 of providing a virtual try on experience for glasses on a user's head.
The method starts by capturing video 110 of the user's head rotating. However, due to the beneficial aspects of the disclosed examples (in particular, the freedom to use any extent of head rotation), a previously captured video may be used instead.
The method then extracts images 120, for later processing, as disclosed in more detail below.
From the extracted images, the method determines the object (in this example, the user's head) movement in the extracted images 130. Next, 3D models of the items (i.e. glasses) to be placed, and a 3D model of a generic human head on which to place the item models, are acquired 140, either from local storage (e.g. in the case of the generic human head model) or from a remote data repository (e.g. in the case of the item/glasses, as this may be a new model). More detailed descriptions of these processes 130 and 140 are disclosed below with reference to Figure 2.
The 3D models are combined with one another and the extracted images (as background) at step 150. Then, an image of the visual representation of the object (user's head) with the item (glasses) thereon can be generated 160. This is described in more detail with respect to Figure 3, below.
Optionally, the location of the items with respect to the object may be adjusted 170, typically according to user input. This step may occur after display of the image, as a result of the user desiring a slightly different output image.
Figure 2 shows a more detailed view 200 of a portion of the method, in particular, the object movement determination step 130 and 3D model acquisition step 140.
The object movement determination step 130 may be broken down into sub-steps in which a maximum rotation of the object (i.e. head) in a first direction (e.g. to the left) is determined 132; then the maximum rotation in the second direction (e.g. to the right) may be determined 134; finally, for this portion of the method, output values may be provided 136 indicative of the maximum rotation of the head in both first and second directions, for use in the subsequent processing of the extracted images and/or 3D models for placement within the 3D space relating to each extracted image. In some examples, the different steps noted above in respect of the object movement determination may be optional.
The 3D model acquisition step 140 may be broken down into sub-steps in which a 3D model of a generic head is acquired 142, or optionally, to include a selection step 144 of a one 3D model of a generic human head out of a number of provided 3D generic models of a human head (e.g. choosing between a male or female generic head 3D model). The choice of generic head model may be under direct user control, or by automated selection, as described in more detail below. Next, the 3D models of the item(s) to be placed on the head, e.g. glasses, may then be acquired 146. Whilst the two acquisition steps 142 and 146 may be carried out either way round, it is advantageous to choose the generic human head in use first, because this may allow the choice of 3D models of the items to be placed to be filtered so that only applicable models are available for subsequent acquisition. For example, choosing a female generic human head 3D model can filter out all male glasses.
Figure 3 shows a more detailed view 300 of the image generation step 160 of Figure 1.
The image generation step 160 may start by applying an extracted image as the background 162 to the visual representation of the item being tried on the user's head. Then, the face tracking data (i.e. detected movement, such as the extent of rotation values discussed above, at step 136) may be used to align the 3D models of the generic human head and the 3D model of the glasses to the extracted image used as background 164 (the 3D models may already have been aligned to one another, for example using their origins, or that alignment can be carried out at this point as well, instead).
Hidden surface detection calculations (i.e. occlusion calculations) 166 may be carried out on the 3D model of the glasses, using the 3D model of the generic head, so that any parts of the glasses that should not be visible in the context of the particular extracted image in use at this point in time may be left out of the overall end 3D rendering of the combined scene (comprising the extracted image background, and the 3D model of the glasses "on top"). The combined scene may then be output as a rendered image 168. The process may repeat for a number of different extracted images, each depicting a different rotation of the user's head in space.
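One plausible way to realise the occlusion step is a per-pixel depth comparison between the invisible generic head and the glasses, keeping the background video frame wherever the glasses are hidden. The sketch below assumes the renderer has already produced a depth map for each 3D model and an RGBA rendering of the glasses; those helper inputs are assumptions of the sketch rather than features defined here.

```python
import numpy as np

def composite_frame(background_rgb, glasses_rgba, glasses_depth, head_depth):
    """Composite rendered glasses over a video frame with head-based occlusion.

    background_rgb : H x W x 3 extracted video frame (the user's face).
    glasses_rgba   : H x W x 4 rendering of the glasses (alpha = coverage).
    glasses_depth  : H x W depth of the glasses, +inf where not drawn.
    head_depth     : H x W depth of the invisible generic head, +inf elsewhere.
    """
    # A glasses pixel is kept only where it is closer to the camera than the
    # generic head (e.g. the arms passing behind the ears are masked out).
    visible = (glasses_depth < head_depth) & (glasses_rgba[..., 3] > 0)
    alpha = np.where(visible, glasses_rgba[..., 3] / 255.0, 0.0)[..., None]
    out = background_rgb * (1.0 - alpha) + glasses_rgba[..., :3] * alpha
    return out.astype(np.uint8)
```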
The extracted images used above may be taken from a video recording of the user's face, which may be carried out with a face tracking portion of the example method. This allows the user to record a video of themselves, so that the virtual glasses can be shown as they would look on their actual person. This is achieved in multiple steps. First the application records a video capture of the user's head. Then the application will intelligently split this video into frames and send these to the face tracking library module. The face tracking library module may then return the location results for each frame (i.e. where the user's face is in the frame and/or 3D space/world, related to a Coordinate System (CS) that is linked to the camera). These results may be used to position the 3D glasses on the user's face virtually.
The face recording may be approximately 8 seconds long, and may be captured in high resolution video.
There is now described in more detail an exemplary chain of production describing how the virtual glasses are suitably rendered on the captured video of the user's head.
Video recording and face-tracking
When starting the application, the application may prompt the user to record a video of their head turning in a non-predefined, i.e. user-controllable, substantially horizontal sweep of the user's head. The camera is typically located dead-ahead of the user's face, when the user's head is at the central point of the overall sweep, such that the entirety of the user's head is visible in the frame of the video. However, in other examples, the camera may not be so aligned. The user has to move his head left and right to give the best results possible. The location of the head in the sweep may be detected by the face tracking module prior to capture of the video for use in the method, such that the user may be prompted to re-align their head before capture. In this way, the user may be suitably prompted so that only a single video capture is necessary, which ultimately provides a better user experience. However, in some examples, the method captures the video as is provided by the user, and carries on without requiring a second video capture.
When the video is recorded (and, optionally, the user is happy with it), the video may then be processed through the following steps.
Video split
The captured video is to be interpreted by the face-tracking process carried out by the face-tracking module. However, to aid this, the captured video of the user's head may be sampled, so that only a sub-set of the captured video images are used in the later processing steps. This may result in faster and/or more efficient processing, which in turn may also allow the example application to be performed by lesser processing resources or at greater energy efficiency.
One exemplary way to provide this sampling of the captured video images is to split the video into comprehensible frames. Initially, this splitting action may involve the video being recorded at a higher initial capture rate (e.g. 30 frames per second, at 8 seconds total length, which gives a total of 240 video frames), but only selecting or further processing a pre-determined or user definable number of those frames. For example, the splitting process may select every third frame of the originally captured video, which in the above example provides 80 output frames for subsequent processing, at a rate of 10 frames per second. Thus the processing load is now approximately 33% of the original processing load.
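A minimal sketch of this subsampling using OpenCV is shown below; the step of three and the file name are assumptions taken from the worked numbers above.

```python
import cv2

def subsample_video(path, step=3):
    """Keep every `step`-th frame of a recording (e.g. 240 frames -> 80)."""
    capture = cv2.VideoCapture(path)
    kept, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            kept.append(frame)
        index += 1
    capture.release()
    return kept

# Hypothetical 8 s, 30 fps capture of the user's head turning:
frames = subsample_video("head_turn.mp4")
print(len(frames), "frames retained for face tracking")
```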
Face-tracking process
The sub-selected 80 video frames (i.e. 80 distinct images) are then sent to the Face-tracking module for analysis, as described in more detail below with respect to figures 3 to 10. By the end of the Face-tracking process, the application may have 80 sets of data: one for each sub-selected video frame. These sets of data contain, for each video frame, the position and orientation of the face.
Face-tracking data selection
It may be unnecessary for the application to process all 80 sets of data at this point, so the application may include a step of selecting a pre-defined number of best frames offered by the results returned by the face-tracking module. For example, the 9 best frames may be selected, based upon the face orientation, thereby covering all the angles of the face as it turns from left to right (or vice versa).
The selection may be made as follows: for frame 1 (left most), the face may be turned 35 degrees to the left; for frame 2, the face may be turned 28 degrees to the left; for frame 3, the face may be turned 20 degrees to the left; for frame 4, the face may be turned 10 degrees to the left; for frame 5, the face may be centered; for frame 6, the face may be turned 10 degrees to the right; for frame 7, the face may be turned 20 degrees to the right; for frame 8, the face may be turned 28 degrees to the right; for frame 9, the face may be turned 35 degrees to the right. Other specific angles may be used for each of the selected best frames, and may also be defined by the user instead.
In some examples, non-linear/non-contiguous capture of images/frames of the head in the 3D space may be used. This is to say, in these alternative examples, the user's head may pass through any given target angle more than once during a recording. For example, if one degree left of centre were a target angle and the recording starts from a straight ahead position, then the head being recorded passes through this one degree left of centre angle twice: once en route *to* the left-most position and once more after rebound *from* the left-most position. Thus, in these examples, the method has the option to decide which of the different instances is the best version of the angle to use for actual display to the user. Thus, the images actually used to display to the user may not all be contiguous/sequential in time.
In an alternative example, instead of selecting best frames for further processing according to pre-defined angles (which assumes a pre-defined head sweep, e.g. a 180 degree sweep, with a 90 degree (left and right) maximum turn from a central dead-ahead position), the method may instead use any arbitrary user provided turn of head, determine the actual maximum turn in each direction, and then split that determined actual head turn into a discrete number of 'best frames'. This process may also take into account a lack of symmetry of the overall head turn (i.e. more turn to the left than right, or vice versa). For example, the actual head turn may be, in actual fact, 35 degrees left and 45 degrees right: a total of 80 degrees, which may then be split into 9 frames at approximately 8.9 degrees each (or, reflecting the asymmetry, 4 on the left, one central, and 4 on the right, with wider spacing on the right).
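A sketch of this adaptive selection is given below; it derives the target angles from the sweep the user actually performed and then picks the captured frame closest to each target. The scoring (nearest yaw only) is an assumption for the sketch, and a real implementation might also weigh image sharpness when an angle is passed through more than once.

```python
def select_best_frames(frame_yaws, num_views=9):
    """Pick the frames whose yaw is closest to evenly spread target angles.

    frame_yaws is a list of (frame_index, yaw_degrees) pairs returned by the
    face tracker; negative yaw = turned left. The targets are derived from the
    sweep the user actually performed, so an asymmetric turn (say 35 degrees
    left and 45 degrees right) is handled without re-recording.
    """
    yaws = [yaw for _, yaw in frame_yaws]
    left, right = min(yaws), max(yaws)
    step = (right - left) / (num_views - 1)
    targets = [left + i * step for i in range(num_views)]
    chosen = []
    for target in targets:
        # If the head passes through a target angle more than once, this keeps
        # the instance whose yaw is closest to the target.
        best = min(frame_yaws, key=lambda fy: abs(fy[1] - target))
        chosen.append(best[0])
    return chosen
```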
By the end of the face-tracking data selection portion of the overall method, the application may have selected 9 frames and associated sets of data. In some examples, if the application was not able to select a suitable number of "best" frames, the user's video may be rejected, and the user may be kindly asked to take a new head turning video. For example, if the leftmost frame does not offer a face turned at least 20 degrees to the left, or the rightmost frame does not offer a face turned at least 20 degrees to the right, the user's video will be rejected.
Face-tracking process end
When the application has the requisite number (e.g. 9) of best frames, the respective best frame images and data sets are saved within the application data storage location. These may then be used at a later stage, with the 3D models, which may also be stored in the application data storage location, or another memory location in the device carrying out the example application, or even in a networked location, such as a central cloud storage repository.
3D chain of production and process
All the best frames are produced within the application, following 3D modelling techniques known in the art. For example, the application may start from the captured High Definition, high polygon models (e.g. of the glasses (or other product) to be tried on). Since the application has to run on mobile devices, these 3D models may be reworked in order to adapt to the low calculation power and low memory offered by the mobile devices, for example to reduce the number of polygons in each of the models.
Then, the application can work on the textures. The textures may be images and, if not reworked, may overflow the device memory and lead to application crashes. For this application, there may be two sets of textures generated each time: one for a first type of device (e.g. a mobile device such as a smartphone, using iOS, where the textures used may be smaller, and hence more suited for a 3G connection) and one for a second type of device, such as a portable device like a tablet (i.e. using textures that may be more suited for a physically larger screen, and/or a higher rate wifi connection). Once the number of polygons has been reduced and/or the textures have been treated according to the applicable advantages of the target execution environment, the final 3D models may be exported, for example in a mesh format. The 3D models may be exported in any suitable 3D model data format, and the invention is not so limited. An example of a suitable data format is the Ogre3D format.
3D models in the cloud
The 3D models may be located in a central data repository, e.g. on a server, and may be optionally compressed, for example, archived in a ZIP format. When compression is used to store the 3D model data, in order to reduce data storage and transmission requirements, then the application may include respective decompression modules.
In order to get the 3D models of the glasses (and generic heads), the application may download them from the server and unzip them. When that is done, the application can pass the 3D models to the rendering engine.
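A minimal sketch of that download-and-unzip step is shown below; the URL, timeout and archive layout are placeholders, since the specification only states that models may be stored compressed (e.g. as ZIP archives) in a central repository.

```python
import io
import zipfile
import requests

def fetch_model_archive(url, destination="models"):
    """Download a zipped 3D model bundle from the server and unpack it."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
        archive.extractall(destination)  # decompression before rendering
    return destination

# Hypothetical server location for one glasses model:
# fetch_model_archive("https://example.com/catalog/designer_frame_001.zip")
```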
3D Rendering Engine
The 3D rendering engine used in this application gets a 3D model, and it will pass all the rendered files along with the face tracking data sets and the respective video frames from the video to the graphics/display engine. The 3D graphics engine may render the end image according to the steps as described in relation to Figure 3.
Thus, in the example discussed above, using 9 selected images, the rendering engine may carry out the following steps to create an image of the user wearing the virtual glasses: 1) open the 3D files and interpret them to create a 3D representation (e.g. the 3D glasses); 2) for each of the 9 frames used in the app, apply the video frame in the background (so the user's face is in the background) and then display the 3D glasses in front of the background; using the face tracking data set (face position and orientation), the engine will position the 3D models exactly on the user's face; 3) take a "screenshot" of the 3D frames placed on the background; 4) the 9 screenshots are then displayed to the user.
3D Rendering Process end
Using inbuilt swipe gestures of the target OS, the user may now "browse" through the rendered screenshots for each angle, giving the illusion that the rendered glasses are on the user's face.
Web services, cloud and catalogs
The catalog containing all the frames is downloaded by the application from a static URL on the server. The catalog will allow the application to know where to look for 3D glasses and when to display them. This catalog will for example describe all the frames for the "Designer" category, so the application can fetch the corresponding 3D files. The catalog may use a CSV format for the data storage.
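For illustration, reading such a CSV catalog might look like the sketch below; the column names (frame_id, category, model_url) are hypothetical, as the specification only states that a CSV format may be used.

```python
import csv

def load_catalog(path):
    """Read a frames catalog stored as CSV into a list of row dictionaries."""
    with open(path, newline="") as handle:
        return [row for row in csv.DictReader(handle)]

# Hypothetical usage: find where to fetch the "Designer" category 3D files.
catalog = load_catalog("catalog.csv")
designer_frames = [row for row in catalog if row["category"] == "Designer"]
```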
As described above, example applications include processes to: carry out video recording, processing and face tracking data extraction; download 3D models from a server; and interpret and adjust those models according to face-tracking data. The downloading of the 3D models may comprise downloading a catalog of different useable 3D models of the items to be shown (e.g. glasses), or different generic human head 3D models.
Face tracking analysis process
The following describes the face-tracking algorithms, as used in offline (i.e. non real-time) application scenarios, such as when the disclosed example methods, apparatuses and devices detect and track human faces on a pre-recorded video sequence. This example discloses use of the following terms/notations: a frame is an image extracted from a video captured by a video capture device or a previously captured input video sequence; a face model is a 3D mesh that represents a face; a key point (also named interest point) is a point that corresponds to an interesting location in the image because of its neighborhood variations; a pose is a vector composed of a rotation and a translation to describe rigid affine transformations in space.
Figure 4 shows a high level diagram of the face tracking method. A set of input images 402 are used by the face tracking module 410 to provide an output set of vectors 402, which may be referred to as "pose vectors".
Figure 5 shows how the software retrieves faces in video sequences. The face-tracking process may include a face-tracking engine that may be decomposed into three main phases: (1) pre-processing the (pre-recorded) video sequence 510, in order to find the frame containing the most "reliable" face 520; (2) optionally the method may include building a 2.5D face model corresponding to the current user's face, or choosing a most applicable generic model of a human head to the captured user head image 530; (3) tracking the face model sequentially using the (part or whole) video sequence 540.
(1) Pre-processing phase
Figure 8 shows a pre-processing phase of the method 800 that has the objective to find the most reliable frame containing a face from the video sequence. This phase is decomposed into 3 main sub-steps:
- (a) Face detection step 810 (and figure 6), which includes detecting the presence of a face in each video frame. When a face is found, its position is estimated.
- (b) Non-rigid face detection step 830 (and figure 7), which includes discovering face features positions (e.g. eyes, nose, mouth, etc.).
- (c) Retrieving the video frame containing the most reliable face image 870 out of a number of candidates 850.
The face detection step (a) 810 may discover faces in the video frames using a sliding windows technique. This technique includes comparing each part of the frame using pyramidal image techniques and finding if a part of the frame is similar to a face signature. Face signature(s) is stored in a file or a data structure and is named a classifier. To learn the classifier, thousands of previously known face images may have been processed. The face detection reiterates 820 until a suitable face is found.
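The specification does not name a particular detector; as an illustration, OpenCV's pre-trained Haar cascade classifier follows the same pattern of a learned face signature applied over image pyramids and sliding windows, and could be used as in the sketch below.

```python
import cv2

# Pre-trained classifier learned offline from many labelled face images.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_bgr):
    """Return the bounding box (x, y, w, h) of the most prominent face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection as the candidate face for this frame.
    return max(faces, key=lambda box: box[2] * box[3])
```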
The non-rigid face detection step (b) is more complex since it tries to detect elements of the face (also called face features, or landmarks). This non-rigid face detection step may take advantage of the fact that a face has been correctly detected in step (a). Then face detection is refined to detect face elements, for example using face detection techniques known in the art. As in (a), a signature of face elements has been learnt using hundreds of face representations. This step (b) is then able to compute a 2D shape that corresponds to the face features (see an illustration in figure 7).
Steps (a) and (b) may be repeated on all or on a subset of the captured frames that comprise the video sequence being assessed. The number of frames processed depends on the total number of frames of the video sequence, or the sub-selection of video frames used. These may be based upon, for example, the processing capacity of the system (e.g. processor, memory, etc.), or on the time the user is (or is deemed to be) willing to wait before results appear.
(c) If steps (a) and (b) have succeeded for at least one frame, then step (c) is processed to find the frame in the video sequence that contains the most reliable face. The notion of a reliable face can be defined as follows:
- find the candidate frames with facing orientation, i.e. faces that look toward the camera, using a threshold value on the angle (e.g. less than a few radians), and
- amongst these candidate frames, find the frame containing a face not too far and not too close to the camera, using two threshold values as well.
Once the frame(s) with the most reliable face is found in the video sequence, the face-tracking algorithm changes state and tries to construct a face model representation, or choose a most appropriate generic head model for use, or simply uses a standard generic model without any selection thereof 890.
(2) Building 3D face model phase
Figure 9 shows the optional face model building phase of the method 900 that serves to construct a suitable face model representation, i.e. building an approximate geometry of the face along with a textured signature of the face and corresponding keypoints. In some examples, this textured 3D model is referred to as a keyframe. The approximate geometry of the face may instead be taken from a pre-determined generic 3D model of a human face.
The keyframe may be constructed using the most reliable frame of the video sequence. This phase is decomposed into the following steps:
- (a) Creating a 3D model/mesh of the face 910 using the position of the face and the non-rigid face shape built during phase (1).
- (b) Finding keypoints on the face image 920 and re-projecting them on the 3D mesh to find their 3D positions.
- (c) Saving a 2D image of the face by cropping the face available in the most reliable frame.
In respect of step (a), the position of the face elements may be used to create the 3D model of the face. These face elements may give essential information about the orientation of the face.
A mean (i.e. average) 3D face model, available statically, is then adjusted using these 2D face elements. This face model may then be positioned and oriented according to the camera position.
This may be done by optimizing an energy function that is expressed using the image position of face elements and their corresponding 3D position on the model.
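That optimisation is, in effect, a 2D-3D registration; as a hedged illustration, OpenCV's solvePnP performs a comparable minimisation and is used as a stand-in below. The pinhole camera intrinsics are assumptions of the sketch, not values given in the specification.

```python
import numpy as np
import cv2

def estimate_face_pose(model_points_3d, image_points_2d, frame_width, frame_height):
    """Estimate face rotation and translation from matched 2D/3D face elements.

    model_points_3d: landmark positions on the (generic) 3D face model.
    image_points_2d: the same landmarks found in the video frame.
    """
    focal = frame_width  # crude pinhole approximation, no calibration assumed
    camera_matrix = np.array([[focal, 0, frame_width / 2],
                              [0, focal, frame_height / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```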
In respect of step (b), keypoints (sometimes referred to as interest points or corner points) may be computed on the face image using the most reliable frame. In some examples, a keypoint can be detected at a specific image location if the neighboring pixel intensities are varying substantially in both horizontal and vertical directions.
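Shi-Tomasi corner detection looks for exactly this kind of two-directional intensity variation, so OpenCV's goodFeaturesToTrack can serve as an illustrative keypoint detector here; the parameter values are arbitrary.

```python
import cv2

def face_keypoints(face_image_bgr, max_points=200):
    """Detect corner-like keypoints inside the cropped face image."""
    gray = cv2.cvtColor(face_image_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_points, qualityLevel=0.01, minDistance=5)
    # Returned as an N x 1 x 2 array of (x, y) image locations, or None.
    return [] if corners is None else corners.reshape(-1, 2)
```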
In respect of step (c), along with the 3D model and keypoints, the face representation (an image of the face) may also be memorized (i.e. saved) so that the process can match its appearance in the remaining frames of the video sequence.
Steps (a), (b) and (c) aim to construct a keyframe of the face. This keyframe is used to track the face of the user in the remaining video frames.
(3) Tracking the face sequentially phase (see Figure 11)
Once the face model of the user has been reconstructed, or a generic model chosen, the remaining video frames may be processed with the objective to track the face sequentially.
Assuming that the face's appearance in contiguous video frames is similar helps the described method track the face frame after frame. This is because the portion of the image around each keypoint does not change too much from one frame to another, therefore comparing/matching keypoint(s) (in fact, neighbouring image appearance) is easier. Any suitable technique to track the face sequentially known in the art may be used, for example as described in "Stable Real-Time 3D Tracking using Online and Offline Information" by L. Vacchetti, V. Lepetit and P. Fua, where the keyframe may be used to match keypoints computed in the earlier described face model building phase and keypoints computed in each video frame. The pose of the face (i.e. its position and orientation) may then be computed for each new frame using an optimization technique.
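A simplified frame-to-frame tracking loop in that spirit is sketched below, using pyramidal Lucas-Kanade optical flow to carry the keyframe keypoints into each new frame and re-estimating the pose with a 2D-3D solver. This is an illustrative stand-in, not the specific technique of the cited paper, and the six-point cut-off is an arbitrary assumption.

```python
import numpy as np
import cv2

def track_sequence(keyframe_gray, key_pts_2d, key_pts_3d, frames_gray,
                   camera_matrix, dist_coeffs):
    """Track keyframe keypoints through the remaining frames and return poses.

    key_pts_2d and key_pts_3d must be matching lists of the same length.
    """
    poses = []
    prev_gray = keyframe_gray
    prev_pts = np.asarray(key_pts_2d, dtype=np.float32).reshape(-1, 1, 2)
    pts_3d = np.asarray(key_pts_3d, dtype=np.float64)
    for gray in frames_gray:
        # Keypoint neighbourhoods change little between contiguous frames,
        # so pyramidal Lucas-Kanade optical flow can follow them frame to frame.
        next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                          prev_pts, None)
        good = status.ravel() == 1
        prev_pts = next_pts[good].reshape(-1, 1, 2)
        pts_3d = pts_3d[good]  # keep 2D and 3D correspondences paired
        prev_gray = gray
        if len(pts_3d) < 6:
            poses.append(None)  # track lost: too few correspondences remain
            break
        # Recover the face pose (rotation and translation) for this frame.
        ok, rvec, tvec = cv2.solvePnP(pts_3d,
                                      prev_pts.reshape(-1, 2).astype(np.float64),
                                      camera_matrix, dist_coeffs)
        poses.append((rvec, tvec) if ok else None)
    return poses
```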
Figure 10 shows a processed video frame along with its corresponding (e.g. generic) 3D model of a head.
When the video sequence is completed, face poses (and, in some examples, the corresponding generic human face model) are sent to the 3D rendering engine, so that the rendering module can use this information to display virtual objects on top of the video sequence. This process is shown in Figure 11, and includes tracking the face model sequentially using the keyframe 1110, and returning face poses when available 1130, via iterative process 1120 whilst frames are available for processing, until no more frames are available for processing.
The invention may be implemented as a computer program for running on a computer system, said computer system comprising at least one processor, where the computer program includes executable code portions for execution by the said at least one processor, in order for the computer system to perform any method according to the described examples. The computer system may be a programmable apparatus, such as, but not limited to, a personal computer, tablet or smartphone apparatus.
Figure 12 shows an exemplary generic embodiment of such a computer system 1200 comprising one or more processor(s) 1240, system control logic 1220 coupled with at least one of the processor(s) 1240, system memory 1210 coupled with system control logic 1220, non-volatile memory (NVM)/storage 1230 coupled with system control logic 1220, and a network interface 1260 coupled with system control logic 1220. The system control logic 1220 may also be coupled to Input/Output devices 1250.
Processor(s) 1240 may include one or more single-core or multi-core processors.
Processor(s) 1240 may include any combination of general-purpose processors and dedicated processors (e.g., graphics processors, application processors, etc.). Processors 1240 may be operable to carry out the above described methods, using suitable instructions or programs (i.e. operate via use of processor, or other logic, instructions). The instructions may be stored in system memory 1210, as glasses visualisation application 1205, or additionally or alternatively may be stored in (NVM)/storage 1230, as NVM glasses visualisation application portion 1235, to thereby instruct the one or more processors 1240 to carry out the virtual trying on experience methods described herein. The system memory 1210 may also include 3D model data 1215, whilst NVM storage 1230 may include 3D model data 1237. These may serve to store 3D models of the items to be placed, such as glasses, and one or more generic 3D models of a human head.
System control logic 1220 for one embodiment may include any suitable interface controllers to provide for any suitable interface to at least one of the processor(s) 1240 and/or to any suitable device or component in communication with system control logic 1220.
System control logic 1220 for one embodiment may include one or more memory controller(s) (not shown) to provide an interface to system memory 1210. System memory 1210 may be used to load and store data and/or instructions, for example, for system 1200. System memory 1210 for one embodiment may include any suitable volatile memory, such as suitable dynamic random access memory (DRAM), for example.
NVM/storage 1230 may include one or more tangible, non-transitory computer-readable media used to store data and/or instructions, for example. NVM/storage 1230 may include any suitable non-volatile memory, such as flash memory, for example, and/or may include any suitable non-volatile storage device(s), such as one or more hard disk drive(s) (HDD(s)), one or more compact disk (CD) drive(s), and/or one or more digital versatile disk (DVD) drive(s), for example.
The NVM/storage 1230 may include a storage resource physically part of a device on which the system 1200 is installed or it may be accessible by, but not necessarily a part of, the device.
For example, the NVM/storage 1230 may be accessed over a network via the network interface 1260.
W0 2016/142668 System memory 1210 and orage 1230 may tively include, in particular, temporal and tent copies of, for example, the instructions memory portions holding the glasses visualisation application 1205 and 1235, respectively.
Network interface 1260 may provide a radio interface for system 1200 to communicate over one or more network(s) (e.g. wireless communication network) and/or with any other suitable device.
Figure 13 shows a more specific example device to carry out the disclosed virtual trying on experience method, in particular a smartphone embodiment 1300, where the method is carried out by an "app" downloaded to the smartphone 1300 via antenna 1310, to be run on a computer system 1200 (as per figure 12) within the smartphone 1300. The smartphone 1300 further includes a display and/or touch screen display 1320 for displaying the virtual try-on experience image formed according to the above described examples. The smartphone 1300 may optionally also include a set of dedicated input devices, such as keyboard 1320, particularly when a touchscreen display is not provided.
A computer program may be formed of a list of executable instructions such as a particular application program and/or an operating system. The computer program may for example include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application ("app"), an applet, a servlet, a source code portion, an object code portion, a shared library/dynamic load library and/or any other sequence of instructions designed for execution on a suitable computer system.
The computer program may be stored internally on a computer readable storage medium or transmitted to the computer system via a computer readable transmission medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to the programmable apparatus, such as an information processing system. The computer readable media may include, for example and without limitation, any one or more of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g. CD-R, Blu-Ray®, etc.), digital video disk storage media (DVD, DVD-R, DVD-RW, etc.) or high density optical media (e.g. Blu-Ray®, etc.); non-volatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, DRAM, DDR RAM etc.; and data transmission media including computer networks, point-to-point telecommunication equipment, and carrier wave transmission media, and the like. Embodiments of the invention may include tangible and non-tangible embodiments, transitory and non-transitory embodiments and are not limited to any specific form of computer readable media used.
A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.
The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices. When executing the computer program, the computer system processes information according to the computer program and produces resultant output information via I/O devices.
In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader scope of the invention as set forth in the appended claims.
Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality.
Any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality.
Furthermore, those skilled in the art will recognize that boundaries between the above described operations are merely illustrative. The multiple operations may be combined into a single operation, a single operation may be distributed in additional operations and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.
Also for example, the examples, or portions thereof, may be implemented as soft or code representations of physical circuitry or of logical representations convertible into physical circuitry, such as in a hardware description language of any appropriate type.
W0 2016/142668 Also, the invention is not limited to physical devices or units implemented in non- programmable hardware but can also be applied in programmable devices or units able to perform the desired device functions by operating in accordance with suitable program code, such as mainframes, mputers, servers, workstations, personal computers, notepads, personal digital assistants, electronic games, automotive and other ed s, cell phones and various other ss devices, commonly denoted in this application as ‘computer systems’.
However, other modifications, variations and alternatives are also possible. The specification and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an." The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.
Examples include a method of providing a virtual trying on experience to a user comprising extracting at least one image from a video including a plurality of video frames of a user in different orientations to provide at least one extracted image, determining user movement in the at least one extracted image, acquiring 3D models of an item to be tried on the user and a generic representation of a human, combining the acquired 3D models and at least one extracted image as the background, and generating an output image representative of the virtual trying-on experience.
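As an illustration of this flow only, the sketch below extracts frames from a user video and alpha-blends a pre-rendered, transparent image of the item over each one, with the extracted frame as the background. It is a minimal sketch assuming OpenCV and NumPy are available and that the item rendering (an RGBA image produced elsewhere from the combined 3D models) and its placement are supplied; the file names and parameters are hypothetical, and this is not the claimed implementation.

```python
import cv2          # assumed available for video decoding and image handling
import numpy as np

def extract_frames(video_path, every_nth=5):
    """Pull every Nth frame from the user's video (the sampling rate is an assumption)."""
    frames, index = [], 0
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_nth == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

def composite(background, item_rgba, top_left):
    """Alpha-blend a pre-rendered RGBA item image onto an extracted frame."""
    out = background.copy()
    h, w = item_rgba.shape[:2]
    y, x = top_left
    roi = out[y:y + h, x:x + w].astype(np.float32)
    rgb = item_rgba[:, :, :3].astype(np.float32)
    alpha = item_rgba[:, :, 3:4].astype(np.float32) / 255.0
    out[y:y + h, x:x + w] = (alpha * rgb + (1.0 - alpha) * roi).astype(np.uint8)
    return out

# Hypothetical usage: 'glasses_render.png' stands in for an item rendering
# generated from the aligned 3D models for the current head orientation.
frames = extract_frames("user_turning_head.mp4")
item = cv2.imread("glasses_render.png", cv2.IMREAD_UNCHANGED)  # 4-channel image
outputs = [composite(f, item, top_left=(100, 150)) for f in frames]
```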
In some examples, the determining user movement in the at least one extracted image further comprises determining a maximum angle of rotation of the user in a first direction.
In some examples, the determining user movement in the at least one extracted image further comprises determining a maximum angle of rotation of the user in a second direction.
In some examples, the determining user movement in the at least one extracted image further comprises outputting a value indicative of the determined maximum angle of rotation of the user in the first or second directions.
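One plausible way to obtain such values, sketched below under the assumption that six 2D facial landmarks per extracted image come from a separate detector, is to estimate a head pose with OpenCV's solvePnP against a generic 3D face template and then report the largest yaw (first direction) and pitch (second direction) seen. The template coordinates and the Euler-angle convention are assumptions for illustration, not values taken from the document.

```python
import numpy as np
import cv2

# Generic 3D face template (millimetres, nose tip at the origin) - an assumption.
MODEL_POINTS = np.array([
    (0.0,     0.0,    0.0),    # nose tip
    (0.0,  -330.0,  -65.0),    # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0,  170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0,  -150.0, -125.0),  # right mouth corner
], dtype=np.float64)

def head_angles(image_points, frame_size):
    """Estimate (yaw, pitch) in degrees from six 2D landmarks (float64, shape (6, 2))."""
    h, w = frame_size
    camera = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, camera,
                               np.zeros((4, 1)), flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)
    yaw = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return yaw, pitch

def maximum_rotation(per_frame_angles):
    """Output values indicative of the maximum rotation in the first and second directions."""
    yaws, pitches = zip(*per_frame_angles)
    return max(abs(a) for a in yaws), max(abs(a) for a in pitches)
```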
In some examples, the acquiring 3D models of an item to be tried on the user and a generic representation of a human further comprises selecting a one of a plurality of 3D models of available generic humans.
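A simple selection rule, shown below purely as an assumption (the document does not specify the criterion), would be to keep several generic head models keyed by a measurement such as face width and pick the closest match to the user; the catalogue, paths and measurements are hypothetical.

```python
# Hypothetical catalogue of generic head models keyed by face width in millimetres.
GENERIC_MODELS = {
    "narrow": {"face_width_mm": 125, "mesh_path": "heads/narrow.obj"},
    "medium": {"face_width_mm": 140, "mesh_path": "heads/medium.obj"},
    "wide":   {"face_width_mm": 155, "mesh_path": "heads/wide.obj"},
}

def select_generic_model(measured_face_width_mm):
    """Pick the generic human model whose face width is closest to the user's measurement."""
    name = min(GENERIC_MODELS,
               key=lambda k: abs(GENERIC_MODELS[k]["face_width_mm"] - measured_face_width_mm))
    return GENERIC_MODELS[name]
```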
In some examples, the method further comprises determining an origin point in each of the 3D models used, wherein the respective origin point in each 3D model is placed to allow alignment of the 3D models with one another.
In some examples, the method further comprises determining an orientation of the user in the at least one extracted image and corresponding the orientation of the 3D models in a 3D space according to the determined orientation of the user.
In some examples, the method further comprises adjusting an origin of at least one 3D model.
In some examples, the method further comprises aligning the origins ofthe 3D models.
In some examples, the method further comprises dividing the maximum rotation of the user in first and second directions into a predetermined number of set angles, and extracting as many images as the predetermined number of set angles.
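For instance, as a sketch under the assumption that a per-frame yaw estimate is available, the maximum rotation can be divided into evenly spaced set angles and, for each set angle, the frame whose estimated angle is closest can be extracted:

```python
import numpy as np

def frames_at_set_angles(frame_yaws_deg, max_yaw_deg, num_angles=7):
    """Divide [-max_yaw, +max_yaw] into set angles and pick the nearest frame for each."""
    targets = np.linspace(-max_yaw_deg, max_yaw_deg, num_angles)
    yaws = np.asarray(frame_yaws_deg, dtype=np.float64)
    # Index of the frame whose estimated yaw is closest to each target angle.
    return [int(np.argmin(np.abs(yaws - t))) for t in targets]

# Assumed example: per-frame yaw estimates produced by a head-pose step.
indices = frames_at_set_angles([-28, -15, -4, 2, 11, 19, 27, 30],
                               max_yaw_deg=30, num_angles=5)
```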
In some examples, the method further comprises adjusting respective positions of the 3D models and the background according to user input.
In some examples, the method further comprises capturing the rotation of the user using a video capture device.
In some examples, determining user movement comprises determining movement of a user's head.
There is also provided a method of providing a virtual trying on experience for a user, comprising receiving a plurality of video frames of a user's head in different orientations to provide captured oriented user images, identifying an origin reference point on the captured oriented user images, identifying an origin on a 3D model of a generic user, identifying an origin reference point on a 3D model of a user-selected item to be tried on, aligning the reference points of the selected captured oriented user images, the 3D model of a generic user and the 3D model of an item to be tried on, combining the captured oriented user images with a generated representation of the user-selected item to be tried on to provide a combined image and displaying the combined image.
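A sketch of the combination step is given below: once the item and generic-head models are aligned, the item's 3D points can be projected into each captured oriented user image using the pose recovered for that image, drawn over it, and the resulting series displayed. The camera parameters, poses and point sets are assumptions for illustration; this is not the patented renderer.

```python
import numpy as np
import cv2

def overlay_item(frame, item_points_3d, rvec, tvec, camera_matrix):
    """Project aligned 3D item points into one captured oriented user image and mark them."""
    pts_2d, _ = cv2.projectPoints(item_points_3d, rvec, tvec,
                                  camera_matrix, np.zeros((4, 1)))
    out = frame.copy()
    for (x, y) in pts_2d.reshape(-1, 2):
        cv2.circle(out, (int(round(x)), int(round(y))), 1, (0, 255, 0), -1)
    return out

def show_combined_series(frames, poses, item_points_3d, camera_matrix):
    """Combine each captured image with the item representation and display the series."""
    for frame, (rvec, tvec) in zip(frames, poses):
        combined = overlay_item(frame, item_points_3d, rvec, tvec, camera_matrix)
        cv2.imshow("virtual try-on", combined)
        cv2.waitKey(100)   # roughly 10 frames per second playback
    cv2.destroyAllWindows()
```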
In some examples, the receiving a plurality of video frames of a user's head in different orientations to provide captured oriented user images further comprises selecting only a subset of all the captured video frames to use in the subsequent processing of the captured oriented user images.
In some examples, the selecting only a subset is a pre-determined subset, or user-selectable.
In some examples, the method further comprises identifying one or more attachment points of the item to the user.
In some examples, the method further comprises rotating or translating the attachment points in the 3D space to re-align the item to the user in a user-specified way.
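Re-aligning about an attachment point can be sketched as rotating the item's vertices around that point and then translating them, as below; the attachment point, angle and offset values are placeholders for whatever the user specifies, and the mesh data is a stand-in.

```python
import numpy as np

def adjust_item(vertices, attachment_point, angle_deg=0.0, offset=(0.0, 0.0, 0.0)):
    """Rotate the item about an attachment point (around the vertical axis) and translate it."""
    theta = np.radians(angle_deg)
    rot_y = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
    p = np.asarray(attachment_point, dtype=np.float64)
    # Rotate about the pivot, then apply the user-specified translation.
    return (vertices - p) @ rot_y.T + p + np.asarray(offset, dtype=np.float64)

# Assumed example: nudge a glasses mesh 2 mm down the nose and turn it by 3 degrees.
glasses = np.random.rand(200, 3)     # stand-in for the item mesh vertices
bridge = glasses.mean(axis=0)        # stand-in attachment point
adjusted = adjust_item(glasses, bridge, angle_deg=3.0, offset=(0.0, -2.0, 0.0))
```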
In some examples, the providing a virtual trying on experience for a user comprises generating a visual representation of a user trying on an item, and wherein the trying on of an item on a user comprises trying on an item on a user's head. In some examples, the item being tried on is a pair of glasses.
There is also provided a method of providing a virtual trying on experience for a user comprising generating a visual representation of a user trying on an item from at least one 3D model of an item to be tried on, at least one 3D generic model of a human head and at least one extracted image of the user's head.
Unless otherwise stated as incompatible, or the physics or otherwise of the embodiments prevent such a combination, the features of the following claims may be integrated together in any suitable and beneficial arrangement. This is to say that the combination of features is not limited by the claims' specific form, particularly the form of the dependent claims, such as claim numbering and the like.

Claims (22)

Claims
1. A method of providing a virtual trying on experience to a user comprising:
extracting at least one image from a video including a plurality of video frames of a user in different orientations to provide at least one extracted image;
acquiring a 3D model of an item to be tried on the user and a 3D model of a generic representation of a human; and
combining the acquired 3D models with at least one extracted image, having the at least one extracted image as a background, to generate an output image representative of the virtual trying-on experience;
wherein each of the 3D models comprises an origin point, and the combining the acquired 3D models with the at least one extracted image comprises aligning the origin points of each of the 3D models in 3D space.
2. The method of claim 1, comprising determining user movement in the at least one extracted image.
3. The method of claim 2, wherein determining user movement in the at least one extracted image further comprises determining a maximum angle of rotation of the user in a first direction.
4. The method of claim 2 or 3, wherein determining user movement in the at least one extracted image further comprises determining a maximum angle of rotation of the user in a second direction.
5. The method of claim 3 or 4, wherein determining user movement in the at least one extracted image further comprises outputting a value indicative of the determined maximum angle of rotation of the user in the first direction when dependent on claim 3 or the second direction when dependent on claim 4.
6. The method of any one of the preceding claims, wherein the acquiring the 3D model of an item to be tried on the user and the 3D model of a generic representation of a human further comprises selecting a one of a plurality of 3D models of generic humans.
7. The method of any one of the preceding claims, further comprising determining an orientation of the user in the at least one extracted image and corresponding the orientation of the 3D models in a 3D space according to the determined orientation of the user.
8. The method of claim 1, further comprising adjusting the origin point of at least one of the 3D models.
9. The method of any one of claims 3 to 5 or any one of the claims dependent thereon, further comprising dividing the maximum rotation of the user in the first or the second direction into a predetermined number of set angles, and extracting as many images as the predetermined number of set angles.
10. The method of any one of the preceding claims, further comprising adjusting respective positions of the 3D models and the background according to user input.
11. The method of any one of the preceding claims, further comprising capturing the rotation of the user using a video capture device.
12. The method of any one of the preceding claims, wherein determining user movement comprises determining movement of a user's head.
13. A method of providing a virtual trying on experience for a user, comprising:
receiving a plurality of video frames of a user's head in different orientations to provide captured oriented user images;
identifying an origin point on a 3D model of a generic user;
identifying an origin point on a 3D model of a user-selected item to be tried on;
aligning the origin point of the 3D model of the generic user and the 3D model of an item to be tried on;
combining each captured oriented user image as a background with a generated representation of the user-selected item to be tried on based on the aligned 3D model of the generic user and the 3D model of the item to provide a series of combined images representative of the virtual trying on experience; and
displaying the series of combined images.
14. The method of claim 13, wherein receiving a plurality of video frames of a user's head in different orientations to provide captured oriented user images further comprises selecting only a subset of all the captured video frames to use in the subsequent processing of the captured oriented user images.
15. The method of claim 14, wherein the selecting only a subset is a pre-determined subset, or user-selectable.
16. The method of any one of claims 13 to 15, further comprising identifying one or more attachment points of the item to the user.
17. The method of claim 16, wherein the method further comprises rotating or translating the attachment points in the 3D space to re-align the item to the user in a user-specified way.
18. The method of any one of claims 13 to 17, wherein providing a virtual trying on experience for a user comprises generating a visual representation of a user trying on an item, and wherein the trying on of an item on a user comprises trying on an item on a user's head.
19. The method of claim 18 wherein the item is a pair of glasses.
20. A computer readable medium comprising instructions, which, when executed by one or more processors, result in the one or more processors carrying out the method of any one of the preceding claims.
21. A computer system arranged to carry out any one of the preceding method claims or provide instructions to carry out any one of the preceding method claims.
22. The method of claim 1 or 13, substantially as herein described with reference to any one of the Examples and/or
NZ736107A 2015-03-06 2016-03-07 Virtual trying-on experience NZ736107B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB1503831.8 2015-03-06
GB1503831.8A GB2536060B (en) 2015-03-06 2015-03-06 Virtual trying-on experience
PCT/GB2016/050596 WO2016142668A1 (en) 2015-03-06 2016-03-07 Virtual trying-on experience

Publications (2)

Publication Number Publication Date
NZ736107A NZ736107A (en) 2021-08-27
NZ736107B2 true NZ736107B2 (en) 2021-11-30


Similar Documents

Publication Publication Date Title
KR102697772B1 (en) Augmented reality content generators that include 3D data within messaging systems
KR102650051B1 (en) Method and appartus for learning-based generating 3d model
US11481869B2 (en) Cross-domain image translation
US11138306B2 (en) Physics-based CAPTCHA
JP7556839B2 (en) DEVICE AND METHOD FOR GENERATING DYNAMIC VIRTUAL CONTENT IN MIXED REALITY - Patent application
KR102867215B1 (en) Interactive augmented reality content including facial synthesis
US20180276882A1 (en) Systems and methods for augmented reality art creation
US11276238B2 (en) Method, apparatus and electronic device for generating a three-dimensional effect based on a face
WO2020029554A1 (en) Augmented reality multi-plane model animation interaction method and device, apparatus, and storage medium
KR20230162987A (en) Facial compositing in augmented reality content for third-party applications
KR20230162107A (en) Facial synthesis for head rotations in augmented reality content
KR20230162096A (en) Facial compositing in content for online communities using selection of facial expressions
KR20230162972A (en) Face compositing in augmented reality content for advertising
WO2018122167A1 (en) Device and method for generating flexible dynamic virtual contents in mixed reality
KR20230162971A (en) Face compositing in overlaid augmented reality content
US20160110909A1 (en) Method and apparatus for creating texture map and method of creating database
AU2016230943B2 (en) Virtual trying-on experience
Mahmud et al. Atr harmonisar: A system for enhancing victim detection in robot-assisted disaster scenarios
US10825258B1 (en) Systems and methods for graph-based design of augmented-reality effects
CN115984343A (en) Utilize heightmaps to generate shadows for digital objects within digital images
US11308669B1 (en) Shader for graphical objects
NZ736107B2 (en) Virtual trying-on experience
US12373995B2 (en) Methods and systems for using compact object image data to construct a machine learning model for pose estimation of an object
US20240257449A1 (en) Generating hard object shadows for general shadow receivers within digital images utilizing height maps
CN110827411A (en) Self-adaptive environment augmented reality model display method, device, equipment and storage medium