
WO2015172229A1 - Virtual mirror systems and associated methods - Google Patents

Virtual mirror systems and associated methods

Info

Publication number
WO2015172229A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
item
model
image
predetermined portion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CA2015/000312
Other languages
English (en)
Inventor
Tiberiu POPA
Sudhir MUDUR
Alex CONSOL
Krisztian G. BIRKAS
Siyu QUAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Valorbec LP
Original Assignee
Valorbec LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Valorbec LP filed Critical Valorbec LP
Publication of WO2015172229A1
Anticipated expiration
Legal status: Ceased (current)


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/06 - Buying, selling or leasing transactions
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2004 - Aligning objects, relative positioning of parts

Definitions

  • This invention relates to virtual mirrors and more particularly to virtually fitting an item to an individual in real-time based upon a physically-based reconstructed map of the user and fitting the item to the physically-based reconstructed map with the physical constraints of the item.
  • an image acquisition system providing location and depth information relating to a predetermined portion of a user's body forming a predetermined portion of the image
  • the item model comprising at least one of a wireframe map and a contour map and a plurality of metrics relating to physical characteristics of the item;
  • an image acquisition system for acquiring an image providing location and depth information relating to a predetermined portion of a user's body forming a predetermined portion of the image
  • a first microprocessor forming part of a first electronic device for processing the location and depth information to generate a user model of the predetermined portion of a user, the user model comprising at least one of a wireframe map and a contour map and a plurality of metrics relating to the predetermined portion of a user from the location and depth information;
  • a user interface forming part of a second electronic device for receiving from the user identification of an item
  • a modelling module forming part of a third electronic device, the modelling module for: retrieving an item model of the item, the item model comprising at least one of a wireframe map and a contour map and a plurality of metrics relating to physical characteristics of the item;
  • executable software stored upon a non-transient physical medium, wherein the executable software when executed provides a user with a virtual mirror through a series of modules, the series of modules including
  • a first module for acquiring image data relating to an image from an image acquisition system, the image data including location and depth information relating to a predetermined portion of a user's body forming a predetermined portion of the image; a second module providing for boundary detection based upon at least one of the acquired image data and the image, the boundary detection establishing a predetermined number of boundary points relating to the predetermined portion of the user's body; a third module providing for feature extraction based upon the boundary points established by the second module, the feature extraction comprising the generation of predetermined geometric shapes based upon first subsets of the boundary points and dimensions established upon second subsets of the boundary points;
  • a classification module for determining a type of the predetermined portion of the user's body in dependence upon the features extracted by the third module and a plurality of datasets, each of the datasets established from feature extraction performed upon a training set of images relating to a defined type of a plurality of defined types of the predetermined portion of the user's body;
  • a recommendation engine for recommending an item to the user based upon the determined type of the predetermined portion of the user's body and the results of a survey executed in respect of the plurality of defined types of the predetermined portion of the user's body relating to the type of item;
  • a modelling engine for retrieving an item model of the item comprising at least one of a wireframe map and a contour map and a plurality of metrics relating to physical characteristics of the item and for generating a deployable model of the item in dependence upon at least the item model and the dimensions established by the third module and positioning the deployable model relative to the predetermined portion of the user's body; and a rendering engine for rendering the deployable model as part of virtual mirror image for presentation to the user comprising the rendered deployable model in the determined position overlaid to the image acquired by the first module.
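  • Purely as an illustration of the feature-extraction module described above, the sketch below derives a handful of dimensions and ratios from a set of facial boundary points; the band fractions, the feature names and the use of NumPy are assumptions made for illustration rather than details taken from the specification.

      import numpy as np

      def extract_face_features(contour: np.ndarray) -> dict:
          """contour: (N, 2) array of (x, y) facial boundary points, y increasing downwards."""
          contour = np.asarray(contour, dtype=float)
          xs, ys = contour[:, 0], contour[:, 1]
          face_length = ys.max() - ys.min()          # top of the facial boundary to the chin
          face_width = xs.max() - xs.min()           # widest horizontal extent

          def band_width(lo: float, hi: float) -> float:
              # Horizontal extent of the contour between two fractions of the face length.
              band = contour[(ys >= ys.min() + lo * face_length) &
                             (ys <= ys.min() + hi * face_length)]
              return float(band[:, 0].max() - band[:, 0].min()) if len(band) else 0.0

          def ratio(a: float, b: float) -> float:
              return a / b if b else 0.0

          forehead_width = band_width(0.10, 0.30)    # upper part of the face
          cheekbone_width = band_width(0.40, 0.60)   # mid face
          jaw_width = band_width(0.75, 0.95)         # lower part of the face

          return {
              "length_to_width": ratio(face_length, face_width),
              "jaw_to_cheekbone": ratio(jaw_width, cheekbone_width),
              "forehead_to_jaw": ratio(forehead_width, jaw_width),
          }

      if __name__ == "__main__":
          t = np.linspace(0, 2 * np.pi, 200)         # synthetic elliptical "face" boundary
          demo = np.stack([90 * np.cos(t) + 320, 120 * np.sin(t) + 240], axis=1)
          print(extract_face_features(demo))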
  • Figure 1 depicts a network environment within which embodiments of the invention may be employed
  • Figure 2 depicts a wireless portable electronic device supporting communications to a network such as depicted in Figure 1 and as supporting embodiments of the invention
  • Figure 3 depicts facial scanning and facial mapping system according to the prior art
  • Figure 4 depicts an exemplary process flow for providing a virtual mirror to a user according to an embodiment of the invention
  • Figure 5 depicts an exemplary process flow for providing product virtualization to a user via a virtual mirror according to an embodiment of the invention
  • Figure 6 depicts an exemplary process flow for providing a virtual mirror to a user according to an embodiment of the invention
  • Figure 7 depicts an exemplary process flow for providing user virtualization to a user via a virtual mirror according to an embodiment of the invention
  • Figure 8 depicts an exemplary process flow for providing product virtualization to a user via a virtual mirror according to an embodiment of the invention
  • Figure 9 depicts schematically an eyewear recommendation system according to an embodiment of the invention.
  • Figure 10 depicts a Face Shape Recognition System (FSRS) process flow for a recommendation system according to an embodiment of the invention
  • Figure 11 depicts a sample output for a detected face boundary step within an FSRS process flow for a recommendation system according to an embodiment of the invention
  • Figure 12 depicts the facial contour points for a detected face boundary step within an FSRS process flow for a recommendation system according to an embodiment of the invention
  • Figure 13 depicts six different facial types employed within a Face Shape Recognition System (FSRS) for a recommendation system according to an embodiment of the invention.
  • Figure 14 depicts an exemplary scenario of CBR for a FSRS according to an embodiment of the invention.
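  • As a companion to the feature-extraction sketch above, the following is a rough, hypothetical illustration of how a case-based reasoning (CBR) step in a Face Shape Recognition System might match extracted feature ratios against stored, labelled cases; the case base, the six face-type labels and the distance weights are invented for illustration and are not taken from the specification.

      import math

      # Each stored case: (feature dict, face type). All values here are invented examples.
      CASE_BASE = [
          ({"length_to_width": 1.00, "jaw_to_cheekbone": 0.95}, "round"),
          ({"length_to_width": 1.30, "jaw_to_cheekbone": 0.95}, "oval"),
          ({"length_to_width": 1.05, "jaw_to_cheekbone": 1.00}, "square"),
          ({"length_to_width": 1.45, "jaw_to_cheekbone": 0.90}, "oblong"),
          ({"length_to_width": 1.25, "jaw_to_cheekbone": 0.75}, "heart"),
          ({"length_to_width": 1.20, "jaw_to_cheekbone": 0.80}, "diamond"),
      ]
      WEIGHTS = {"length_to_width": 1.0, "jaw_to_cheekbone": 1.0}

      def classify_face(features: dict) -> str:
          """Return the face type of the closest stored case (weighted Euclidean distance)."""
          def distance(case: dict) -> float:
              return math.sqrt(sum(WEIGHTS[k] * (features[k] - case[k]) ** 2 for k in WEIGHTS))
          _, label = min(CASE_BASE, key=lambda c: distance(c[0]))
          return label

      print(classify_face({"length_to_width": 1.32, "jaw_to_cheekbone": 0.93}))  # "oval" for this case base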
  • the present invention is directed to virtual mirrors and more particularly to virtually fitting an item to an individual in real-time based upon a physically-based reconstructed map of the user and fitting the item to the physically-based reconstructed map with the physical constraints of the item.
  • a "portable electronic device” refers to a wireless device used for communications and other applications that requires a battery or other independent form of energy for power. This includes devices, but is not limited to, such as a cellular telephone, smartphone, personal digital assistant (PDA), portable computer, pager, portable multimedia player, portable gaming console, laptop computer, tablet computer, and an electronic reader.
  • a "fixed electronic device” refers to a wireless and /or wired device used for communications and other applications that requires connection to a fixed interface to obtain power. This includes, but is not limited to, a laptop computer, a personal computer, a computer server, a kiosk, a gaming console, a digital set-top box, an analog set-top box, an Internet enabled appliance, an Internet enabled television, and a multimedia player.
  • An "application” (commonly referred to as an “app") as used herein may refer to, but is not limited to, a "software application", an element of a “software suite”, a computer program designed to allow an individual to perform an activity, a computer program designed to allow an electronic device to perform an activity, and a computer program designed to communicate with local and / or remote electronic devices.
  • An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and programming tools (with which computer programs are created).
  • an application is generally presented in respect of software permanently and / or temporarily installed upon a PED and / or FED.
  • a "social network" or "social networking service" as used herein may refer to, but is not limited to, a platform to build social networks or social relations among people who may, for example, share interests, activities, backgrounds, or real-life connections. This includes, but is not limited to, U.S.-based services such as Facebook, Google+, Tumblr and Twitter; as well as Nexopia, Badoo, Bebo, VKontakte, Delphi, Hi5, Hyves, iWiW, Nasza-Klasa, Soup, Glocals, Skyrock, The Sphere, StudiVZ, Tagged, Tuenti, XING, Orkut, Mxit, Cyworld, Mixi, renren, weibo and Wretch.
  • "Social media" or "social media services" as used herein may refer to, but is not limited to, a means of interaction among people in which they create, share, and / or exchange information and ideas in virtual communities and networks. This includes, but is not limited to, social media services relating to magazines, Internet forums, weblogs, social blogs, microblogging, wikis, social networks, podcasts, photographs or pictures, video, rating and social bookmarking as well as those exploiting blogging, picture-sharing, video logs, wall-posting, music-sharing, crowdsourcing and voice over IP, to name a few.
  • Social media services may be classified, for example, as collaborative projects (for example, Wikipedia); blogs and microblogs (for example, TwitterTM); content communities (for example, YouTube and DailyMotion); social networking sites (for example, FacebookTM); virtual game-worlds (e.g., World of WarcraftTM); and virtual social worlds (e.g. Second LifeTM).
  • An "enterprise” as used herein may refer to, but is not limited to, a provider of a service and / or a product to a user, customer, or consumer. This includes, but is not limited to, a retail outlet, a store, a market, an online marketplace, a manufacturer, an online retailer, a charity, a utility, and a service provider. Such enterprises may be directly owned and controlled by a company or may be owned and operated by a franchisee under the direction and management of a franchiser.
  • a "service provider” as used herein may refer to, but is not limited to, a third party provider of a service and / or a product to an enterprise and / or individual and / or group of individuals and / or a device comprising a microprocessor. This includes, but is not limited to, a retail outlet, a store, a market, an online marketplace, a manufacturer, an online retailer, a utility, an own brand provider, and a service provider wherein the service and / or product is at least one of marketed, sold, offered, and distributed by the enterprise solely or in addition to the service provider.
  • a 'third party' or “third party provider” as used herein may refer to, but is not limited to, a so-called “arm's length” provider of a service and / or a product to an enterprise and / or individual and / or group of individuals and / or a device comprising a microprocessor wherein the consumer and / or customer engages the third party but the actual service and / or product that they are interested in and / or purchase and / or receive is provided through an enterprise and / or service provider.
  • a "user” as used herein may refer to, but is not limited to, an individual or group of individuals whose biometric data may be, but not limited to, monitored, acquired, stored, transmitted, processed and analysed either locally or remotely to the user wherein by their engagement with a service provider, third party provider, enterprise, social network, social media etc. via a dashboard, web service, website, software plug-in, software application, graphical user interface acquires, for example, electronic content.
  • the user may further include, but not be limited to, mechanical systems, robotic systems, android systems, etc. that may be characterised by a portion of the body being identifiable to a human as a face.
  • User information may refer to, but is not limited to, user behavior information and / or user profile information. It may also include a user's biometric information, an estimation of the user's biometric information, or a projection / prediction of a user's biometric information derived from current and / or historical biometric information.
  • a “wearable device” or “wearable sensor” relates to miniature electronic devices that are worn by the user including those under, within, with or on top of clothing and are part of a broader general class of wearable technology which includes “wearable computers” which in contrast are directed to general or special purpose information technologies and media development.
  • Such wearable devices and / or wearable sensors may include, but not be limited to, smartphones, smart watches, e-textiles, smart shirts, activity trackers, smart glasses, sensors, drug delivery systems, medical testing and diagnosis devices, and motion sensors.
  • Electronic content (also referred to as “content” or “digital content”) as used herein may refer to, but is not limited to, any type of content that exists in the form of digital data as stored, transmitted, received and / or converted wherein one or more of these steps may be analog although generally these steps will be digital.
  • Digital content includes, but is not limited to, information that is digitally broadcast, streamed or contained in discrete files.
  • types of digital content include popular media types such as MP3, JPG, AVI, TIFF, AAC, TXT, RTF, HTML, XHTML, PDF, XLS, SVG, WMA, MP4, FLV, and PPT, for example, as well as others; see for example http://en.wikipedia.org/wiki/List_of_file_formats.
  • digital content may include any type of digital information, e.g. digitally updated weather forecast, a GPS map, an eBook, a photograph, a video, a VineTM, a blog posting, a FacebookTM posting, a TwitterTM tweet, online TV, etc.
  • the digital content may be any digital data that is at least one of generated, selected, created, modified, and transmitted in response to a user request, said request may be a query, a search, a trigger, an alarm, and a message for example.
  • FIG. 1 there is depicted a network environment 100 within which embodiments of the invention may be employed supporting virtual mirror systems and virtual mirror applications / platforms (VMSVMAPs) according to embodiments of the invention, for example supporting multiple channels and dynamic content.
  • first and second user groups 100A and 100B respectively interface to a telecommunications network 100.
  • a remote central exchange 180 communicates with the remainder of a telecommunication service provider's network via the network 100 which may include for example long-haul OC-48 / OC-192 backbone elements, an OC-48 wide area network (WAN), a Passive Optical Network, and a Wireless Link.
  • the central exchange 180 is connected via the network 100 to local, regional, and international exchanges (not shown for clarity) and therein through network 100 to first and second cellular APs 195A and 195B respectively which provide Wi-Fi cells for first and second user groups 100A and 100B respectively.
  • first and second Wi-Fi nodes 110A and 110B are also connected to the network 100.
  • Second Wi-Fi node 110B is associated with Enterprise 160, e.g. LuxotticaTM, within which other first and second user groups 100A and 100B are disposed.
  • Second user group 100B may also be connected to the network 100 via wired interfaces including, but not limited to, DSL, Dial-Up, DOCSIS, Ethernet, G.hn, ISDN, MoCA, PON, and Power line communication (PLC) which may or may not be routed through a router such as router 105.
  • first group of users 100A may employ a variety of PEDs including for example, laptop computer 155, portable gaming console 135, tablet computer 140, smartphone 150, cellular telephone 145 as well as portable multimedia player 130.
  • second group of users 100B which may employ a variety of FEDs including for example gaming console 125, personal computer 115 and wireless / Internet enabled television 120 as well as cable modem 105.
  • First and second cellular APs 195A and 195B respectively provide, for example, cellular GSM (Global System for Mobile Communications) telephony services as well as 3G and 4G evolved services with enhanced data transport support.
  • Second cellular AP 195B provides coverage in the exemplary embodiment to first and second user groups 100A and 100B.
  • first and second user groups 100A and 100B may be geographically disparate and access the network 100 through multiple APs, not shown for clarity, distributed geographically by the network operator or operators.
  • First cellular AP 195A as shown provides coverage to first user group 100A and environment 170, which comprises second user group 100B as well as first user group 100A.
  • the first and second user groups 100A and 100B may according to their particular communications interfaces communicate to the network 100 through one or more wireless communications standards such as, for example, IEEE 802.11, IEEE 802.15, IEEE 802.16, IEEE 802.20, UMTS, GSM 850, GSM 900, GSM 1800, GSM 1900, GPRS, ITU-R 5.138, ITU-R 5.150, ITU-R 5.280, and IMT-1000.
  • Accordingly, a user may employ GSM services such as telephony and SMS as well as Wi-Fi / WiMAX data transmission, VOIP and Internet access.
  • portable electronic devices within first user group 100A may form associations through standards such as IEEE 802.15 and Bluetooth, as well as in an ad hoc manner.
  • Also connected to the network 100 are Social Networks (SOCNETS) 165, first and second online communities 170A and 170B respectively, e.g. FacebookTM and LinkedInTM, first and second retailers 175A and 175B respectively, e.g. WalMartTM and AmazonTM, first and second online retailers 175C and 175D respectively, e.g. Warby ParkerTM and AvonTM, as well as first and second servers 190A and 190B together with others, not shown for clarity.
  • First and second servers 190A and 190B may host according to embodiments of the invention multiple services associated with a provider of virtual mirror systems and virtual mirror applications / platforms (VMSVMAPs); a provider of a SOCNET or Social Media (SOME) exploiting VMSVMAP features; a provider of a SOCNET and / or SOME not exploiting VMSVMAP features; a provider of services to PEDS and / or FEDS; a provider of one or more aspects of wired and / or wireless communications; an Enterprise 160 exploiting VMSVMAP features; license databases; content databases; image databases; content libraries; customer databases; websites; and software applications for download to or access by FEDs and / or PEDs exploiting and / or hosting VMSVMAP features.
  • First and second primary content servers 190A and 190B may also host for example other Internet services such as a search engine, financial services, third party applications and other Internet based services.
  • a user may exploit a PED and / or FED within an Enterprise 160, for example, and access one of the first or second primary content servers 190A and 190B respectively to perform an operation such as accessing / downloading an application which provides VMSVMAP features according to embodiments of the invention; execute an application already installed providing VMSVMAP features; execute a web based application providing VMSVMAP features; or access content.
  • a user may undertake such actions or others exploiting embodiments of the invention exploiting a PED or FED within first and second user groups 100A and 100B respectively via one of first and second cellular APs 195A and 195B respectively and first Wi-Fi node 110A.
  • Electronic device 204 may, for example, be a PED and / or FED and may include additional elements above and beyond those described and depicted.
  • the protocol architecture is depicted for a system that includes an electronic device 204, such as a smartphone 155, an access point (AP) 206, such as first AP 110, and one or more network devices 207, such as communication servers, streaming media servers, and routers, for example such as first and second servers 190A and 190B respectively.
  • Network devices 207 may be coupled to AP 206 via any combination of networks, wired, wireless and/or optical communication links such as discussed above in respect of Figure 1 as well as directly as indicated.
  • Network devices 207 are coupled to network 100 and therein Social Networks (SOCNETS) 165, first and second online communities 170A and 170B respectively, e.g. FacebookTM and LinkedInTM, first and second retailers 175A and 175B respectively, e.g. WalMartTM and AmazonTM, first and second online retailers 175C and 175D respectively, e.g. Warby ParkerTM and AvonTM.
  • the electronic device 204 includes one or more processors 210 and a memory 212 coupled to processor(s) 210.
  • AP 206 also includes one or more processors 211 and a memory 213 coupled to processor(s) 211.
  • processors 210 and 211 may include a central processing unit (CPU), a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC) and the like.
  • processors 210 and 211 may be part of application specific integrated circuits (ASICs) or may be a part of application specific standard products (ASSPs).
  • memories 212 and 213 may include any combination of semiconductor devices such as registers, latches, ROM, EEPROM, flash memory devices, non-volatile random access memory devices (NVRAM), SDRAM, DRAM, double data rate (DDR) memory devices, SRAM, universal serial bus (USB) removable memory, and the like.
  • Electronic device 204 may include an audio input element 214, for example a microphone, and an audio output element 216, for example, a speaker, coupled to any of processors 210.
  • Electronic device 204 may include a video input element 218, for example, a video camera or camera, and a video output element 220, for example an LCD display, coupled to any of processors 210.
  • Electronic device 204 also includes a keyboard 215 and touchpad 217 which may for example be a physical keyboard and touchpad allowing the user to enter content or select functions within one or more applications 222. Alternatively the keyboard 215 and touchpad 217 may be predetermined regions of a touch sensitive element forming part of the display within the electronic device 204.
  • The one or more applications 222 are typically stored in memory 212 and are executable by any combination of processors 210.
  • Electronic device 204 also includes accelerometer 260 providing three-dimensional motion input to the processor 210 and GPS 262 which provides geographical location information to processor 210.
  • Electronic device 204 includes a protocol stack 224 and AP 206 includes a communication stack 225.
  • protocol stack 224 is shown as an IEEE 802.11 protocol stack but alternatively may exploit other protocol stacks such as an Internet Engineering Task Force (IETF) multimedia protocol stack for example.
  • AP stack 225 exploits a protocol stack but is not expanded for clarity. Elements of protocol stack 224 and AP stack 225 may be implemented in any combination of software, firmware and/or hardware.
  • Protocol stack 224 includes an IEEE 802.11-compatible PHY module 226 that is coupled to one or more Front-End Tx/Rx & Antenna 228, an IEEE 802.11-compatible MAC module 230 coupled to an IEEE 802.2-compatible LLC module 232.
  • Protocol stack 224 includes a network layer IP module 234, a transport layer User Datagram Protocol (UDP) module 236 and a transport layer Transmission Control Protocol (TCP) module 238.
  • Protocol stack 224 also includes a session layer Real Time Transport Protocol (RTP) module 240, a Session Announcement Protocol (SAP) module 242, a Session Initiation Protocol (SIP) module 244 and a Real Time Streaming Protocol (RTSP) module 246.
  • Protocol stack 224 includes a presentation layer media negotiation module 248, a call control module 250, one or more audio codecs 252 and one or more video codecs 254.
  • Applications 222 may be able to create, maintain and / or terminate communication sessions with any of devices 207 by way of AP 206. Typically, applications 222 may activate any of the SAP, SIP, RTSP, media negotiation and call control modules for that purpose.
  • information may propagate from the SAP, SIP, RTSP, media negotiation and call control modules to PHY module 226 through TCP module 238, IP module 234, LLC module 232 and MAC module 230.
  • elements of the electronic device 204 may also be implemented within the AP 206 including but not limited to one or more elements of the protocol stack 224, including for example an IEEE 802.11-compatible PHY module, an IEEE 802.11-compatible MAC module, and an IEEE 802.2-compatible LLC module 232.
  • the AP 206 may additionally include a network layer IP module, a transport layer User Datagram Protocol (UDP) module and a transport layer Transmission Control Protocol (TCP) module as well as a session layer Real Time Transport Protocol (RTP) module, a Session Announcement Protocol (SAP) module, a Session Initiation Protocol (SIP) module and a Real Time Streaming Protocol (RTSP) module, media negotiation module, and a call control module.
  • Portable and fixed electronic devices represented by electronic device 204 may include one or more additional wireless or wired interfaces in addition to the depicted IEEE 802.11 interface which may be selected from the group comprising IEEE 802.15, IEEE 802.16, IEEE 802.20, UMTS, GSM 850, GSM 900, GSM 1800, GSM 1900, GPRS, ITU-R 5.138, ITU-R 5.150, ITU-R 5.280, IMT-1000, DSL, Dial-Up, DOCSIS, Ethernet, G.hn, ISDN, MoCA, PON, and Power line communication (PLC).
  • Also depicted is Motion Capture (MC) device 205 comprising a Motion Capture (MC) Stack 275 coupled to Antenna 278 supporting bidirectional communications with the Electronic Device 204.
  • MC 205 may exploit the same protocol as communications between the Electronic Device 204 and AP 206 or it may exploit an alternate protocol wherein Electronic Device 204 may comprise a second Protocol Stack other than Protocol Stack 224 which is not shown for clarity.
  • the MC Stack 275 is in communication with MC Microprocessor (MC μP) 271 which communicates with Memory 272, Light 273, and first and second Video In 274 and 275.
  • Light 273 may, for example, be a visible light source, an infrared light source, or a light source switchable between visible and infrared. These may be continuous and / or pulsed optical sources.
  • first and second Video In 274 and 275 may be continuously or periodically capturing images within the visible and / or infrared regions of the electromagnetic spectrum.
  • Examples of such MC 205 devices include, but are not limited to, MicrosoftTM KinectTM, PLAYSTATION EyeTM, PlayStationTM Camera, an infrared motion sensing device, a laser scanning device, a time-of-flight scanning device, an optical motion sensing device, a plurality of cameras, and combinations thereof.
  • the MicrosoftTM KinectTM device provides a depth sensor by combining an infrared laser projector with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions.
  • MC 205 transmits image data to the Electronic Device 204 including, but not limited to, image data in a standard picture and / or video file format and image depth data.
  • the sensing range of the depth sensor may be adjusted under control of MC μP 271 of MC 205 or Processor 210 of Electronic Device 204.
  • MC 205 may alternatively be connected to AP 206 rather than Electronic Device 204 and therein to Electronic Device 204, or to the network 100 and thereafter to an Electronic Device 204 via another AP 206, such that the MC 205 and Electronic Device 204 are remote to one another.
  • the MC 205 is connected to a remote server such as first and second servers 190A and 190B such that the acquired data is transmitted via Network 100 to the remote server from MC 205 and then processed data from the remote server is transmitted to the Electronic Device 204.
  • FIG. 3 there is depicted a facial scanning and facial mapping system according to the prior art wherein a user 310 has their image captured and processed by a MicrosoftTM KinectTM gaming controller (MKGC) 320 wherein the captured data is then processed in step 330 to generate:
  • As depicted in Figure 3, an example of the bounding box and rendered wireframe mesh, generated exploiting a software development kit (SDK) provided by MicrosoftTM, is shown in image 340 of the user.
  • processing decisions may be made as to the identity of the user using previously acquired data from the MKGC 320 on registered account holders and to provide input to gaming software.
  • processing of acquired data from the MKGC 320 of the user's body can be used to provide gesture recognition, body posture identification, etc. to similarly provide input to software such as that in execution upon Electronic Device 204, for example, in Figure 2 which would typically be a MicrosoftTM XboxTM gaming console within the prior art.
  • first IAS 410 captures a single image or a sequence of images (video frames) which can be captured live or not, stored locally or transferred through a network such as the Internet or Network 100.
  • the wireframe and / or other data may be generated in real time, within a predetermined latency, or offline.
  • the wireframe and / or other data may be generated fully per acquired image or partially per acquired image based upon prior image wireframe and / or other data.
  • a denser 3D mesh (dense contour map) may be generated in combination with or in isolation from a facial template for each image in dense mapping process 425 yielding, if presented to the user, third image 430.
  • these meshes, contour maps etc. and their associated tracking of facial orientation, facial features, etc. can be tracked locally and / or remotely and in real time or offline.
  • an anatomically correct and dense generic 3D template mesh can be fitted to the user facial features such that the selected features match although if the IAS 410 provided sufficiently high resolution the process may be simplified as no fitting and / or interpolation / extrapolation from a first fitted wireframe (mesh) to a denser mesh may be required.
  • the denser 3D mesh generated in dense mapping process 425 and depicted in third image 430 has been scaled and rigidly transformed to roughly fit the facial features obtained from the initial tracking step, mapping process 415 and depicted in second image 420.
  • For example, this may be achieved through singular value decomposition of the covariance matrix obtained from the feature points' correspondence.
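  • A minimal sketch of this rigid alignment step, assuming the classic SVD-based (Kabsch / Procrustes) solution with an optional uniform scale, is given below; it illustrates the general technique rather than the patent's implementation.

      import numpy as np

      def rigid_fit(source: np.ndarray, target: np.ndarray):
          """Return scale s, rotation R and translation t such that s * R @ p + t ~ q."""
          src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
          src0, tgt0 = source - src_c, target - tgt_c
          H = src0.T @ tgt0                           # covariance of the point correspondences
          U, S, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
          D = np.diag([1.0, 1.0, d])
          R = Vt.T @ D @ U.T
          s = np.trace(np.diag(S) @ D) / (src0 ** 2).sum()   # optimal uniform scale
          t = tgt_c - s * R @ src_c
          return s, R, t

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          template = rng.normal(size=(10, 3))                 # template feature points
          true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthonormal matrix
          if np.linalg.det(true_R) < 0:
              true_R[:, 0] *= -1                              # keep it a proper rotation
          tracked = 1.7 * template @ true_R.T + np.array([0.1, -0.3, 2.0])
          s, R, t = rigid_fit(template, tracked)
          print(np.allclose(s * template @ R.T + t, tracked))  # True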
  • an offline, delayed and / or real-time deformation technique is applied to deform the 3D template mesh generated in dense mapping process 425 to accurately match the face of the user, depicted in first image 405.
  • This deformed facial mask is depicted in fourth image 450 together with a 3D model of an eyeglasses frame selected by the user within a parallel item selection process step 435 wherein upon selection of an item within a database 430 a 3D model of the selected item is extracted and fitted to the deformed facial mask based upon the core facial dimensions of the user and dimensions of the selected item.
  • Generation of the deformed facial mask of the user may, for example, be performed in a manner such that every vertex of the dense 3D template mesh is deformed using a Laplacian operator and constraints provided by the IAS 410.
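  • The following is a simplified sketch of Laplacian-based deformation of a template mesh under positional constraints; the uniform Laplacian weights, the soft constraints and the least-squares solve are simplifying assumptions and do not reproduce the exact operator or constraints used with the IAS 410.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lsqr

      def laplacian_deform(vertices, edges, constraints, constraint_weight=10.0):
          """
          vertices: (n, 3) template vertex positions.
          edges: iterable of (i, j) vertex index pairs.
          constraints: {vertex index: target (3,) position} taken from the tracked face.
          """
          vertices = np.asarray(vertices, dtype=float)
          n = len(vertices)
          # Uniform graph Laplacian L = D - A built from the mesh edges.
          rows, cols, vals = [], [], []
          deg = np.zeros(n)
          for i, j in edges:
              rows += [i, j]; cols += [j, i]; vals += [-1.0, -1.0]
              deg[i] += 1; deg[j] += 1
          rows += list(range(n)); cols += list(range(n)); vals += list(deg)
          L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

          delta = L @ vertices                        # differential (Laplacian) coordinates
          # Soft positional constraints appended as extra weighted rows.
          C = sp.csr_matrix((np.full(len(constraints), constraint_weight),
                             (list(range(len(constraints))), list(constraints.keys()))),
                            shape=(len(constraints), n))
          rhs_c = constraint_weight * np.array([constraints[k] for k in constraints])

          A = sp.vstack([L, C]).tocsr()
          deformed = np.zeros_like(vertices)
          for axis in range(3):                       # solve each coordinate independently
              b = np.concatenate([delta[:, axis], rhs_c[:, axis]])
              deformed[:, axis] = lsqr(A, b)[0]
          return deformed

      if __name__ == "__main__":
          # Tiny demo: a 4-vertex chain; pin vertex 0 in place and pull vertex 3 upwards.
          verts = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
          print(laplacian_deform(verts, [(0, 1), (1, 2), (2, 3)],
                                 {0: verts[0], 3: np.array([3.0, 1.0, 0.0])}))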
  • facial orientation information employed during the generation of the deformed facial mask, with or without mapping of a selected item, may be obtained from a second IAS 440 such that the output of the dense mapping process 425 is stored after generation and extracted subsequently.
  • the 3D model of the selected item, e.g. eyewear frame, may be retrieved with associated physical properties such as weight, options, dimensional tolerances, dimensional variations supported, friction coefficients, material properties, etc.
  • Where additional properties are not provided, these may be estimated.
  • Within virtual mirror process 455 the deformed facial mask is removed and the selected item rendered such that the user is provided with an image, fifth image 460, of themselves with the rendered selected item.
  • the placement of the selected item upon the user's deformed facial mask may be established through a two-stage process wherein the first stage of the process comprises roughly positioning the selected item in a pre-processing stage with respect to the 3D deformed facial mask.
  • This pre-processing stage may, for example, be manually placing the item by the user and / or using an automatic method based on the position of the facial features.
  • the automated method may itself be a two-step process wherein initially a generic mesh template is employed followed by the deformed facial mask.
  • the automated method may be a two-step process wherein initially a mesh / contour template generated in mapping process 415 or dense mapping process 425 is employed followed by the deformed facial mask.
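  • A toy sketch of such a pre-positioning step for an eyewear frame is shown below: the frame is scaled to the user's interpupillary distance and its bridge translated onto the tracked nose-bridge point. The landmark names and the frame's reference points are assumptions made purely for illustration.

      import numpy as np

      def pre_position_frame(frame_vertices, frame_left_lens, frame_right_lens,
                             left_eye, right_eye, nose_bridge):
          """Roughly place a glasses model: scale to the interpupillary distance, then move the bridge."""
          frame_vertices = np.asarray(frame_vertices, dtype=float)
          left_eye, right_eye = np.asarray(left_eye, float), np.asarray(right_eye, float)
          frame_left_lens = np.asarray(frame_left_lens, float)
          frame_right_lens = np.asarray(frame_right_lens, float)

          scale = np.linalg.norm(right_eye - left_eye) / np.linalg.norm(frame_right_lens - frame_left_lens)
          frame_bridge = 0.5 * (frame_left_lens + frame_right_lens)
          # Scale about the frame's bridge point, then translate that point onto the nose bridge.
          return scale * (frame_vertices - frame_bridge) + np.asarray(nose_bridge, float)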
  • the second stage of the process comprises a physics-based simulation of the selected item to determine its accurate position either in a single image or in every video frame of a series of images such that the user can visualize the item from a plurality of perspectives.
  • the physics based simulation may, for example, exploit rigid body motion and gravitational forces, friction forces and collision forces.
  • an earring may be simulated to hang from the user's ear and its position relative to the user's ear, neck, etc. depicted including, for long dangling earrings its touching the user's body at certain head positions.
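  • As a stand-in for such a physics-based simulation, the sketch below models a dangling earring as a damped pendulum pinned at the tracked ear-lobe point and integrated once per video frame; the length, damping and time-step values are illustrative assumptions.

      import math

      def step_earring(theta, omega, dt=1 / 30, length=0.04, damping=2.0, g=9.81):
          """Advance the swing angle theta (rad) and angular velocity omega by one video frame."""
          alpha = -(g / length) * math.sin(theta) - damping * omega
          omega += alpha * dt                   # semi-implicit Euler keeps the integration stable
          theta += omega * dt
          return theta, omega

      def earring_tip(ear_lobe, theta, length=0.04):
          """World-space position of the earring tip given the anchor point and swing angle."""
          x, y, z = ear_lobe
          return (x + length * math.sin(theta), y - length * math.cos(theta), z)

      theta, omega = 0.6, 0.0                   # start swung out, e.g. after a quick head turn
      for _ in range(90):                       # roughly three seconds at 30 fps
          theta, omega = step_earring(theta, omega)
      print(earring_tip((0.07, 1.62, 0.0), theta))   # the tip has settled back near vertical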
  • the user may in some embodiments fine-tune the position of the selected item, e.g. an eyewear frame on their nose, using a user interface (keyboard, mouse or a hand gesture recognition interface) as minor adjustments may have limited impact on the user's vision but substantial differences visually.
  • In the event of a tracking failure, the process may recover through a mechanism of iterating back to a previous stage within the sequence as discussed with respect to Figure 4.
  • This iterating back may be, for example, to a predetermined step in dependence upon the length of the failed tracking or sequentially iterating stages and regressing upon determination of an error established, for example, as a measure of distance between the user's face and the generated contour.
  • the VMSVMAP may provide the user with information relating to an estimated comfort level and / or fit level relating to the selected item, where appropriate.
  • an estimated comfort fit may be established by applying a skin deformation model to the digitally mapped and rendered nose using real or estimated measurements for the weight of the glasses as well as the elasticity of the user's skin, again real or estimated based upon demographic data such as age, race, etc.
  • the comfort level can be conveyed to the user using visual feedback (i.e. color highlighting on the rendering of the skin) or using a numerical scale.
  • a mismatch between the width of the user's head across their temples, derived from the measurements extracted for the user's head from the initial IAS scan, and the width of the selected eyewear frame may imply pressure to the temple area of the user where the frames are too narrow, and / or deformation of the eyewear frame over time and hence a variation in the visual aesthetic as the eyewear frame becomes loose.
  • an estimated fit level for the eyewear frame may be derived by calculations of dimensions of the selected item relative to the mapped and rendered body. In instances where the selected item lacks substantial physical structure, e.g. an item of apparel versus the eyewear frame, then physical modelling may be provided to simulate the hang or lay of the selected item.
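  • An illustrative, greatly simplified version of such a fit calculation for an eyewear frame is sketched below, comparing the frame's overall width with the user's temple-to-temple width from the scan; the tolerance bands are assumptions.

      def eyewear_fit(frame_width_mm: float, temple_width_mm: float) -> str:
          """Coarse fit label comparing the frame's overall width with the user's head width."""
          ratio = frame_width_mm / temple_width_mm
          if ratio < 0.97:
              return "too narrow: likely pressure at the temples"
          if ratio > 1.05:
              return "too wide: the frame may sit loose and slip"
          return "good fit"

      print(eyewear_fit(132, 142))   # too narrow: likely pressure at the temples
      print(eyewear_fit(142, 140))   # good fit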
  • The VMSVMAP may, as discussed below in respect of Figures 5 and 8 for example, provide recommendations based upon the user's physical dimensions and features as well as, optionally, accumulated user taste data and user social network recommendations.
  • Examples of facial fitting and recommendations with respect to eyewear frames are presented in Figures 9 to 18 although similar processes exploiting user data capture, ratio generation, reference aesthetic data sets etc. may be applied to other regions and / or portions of the user's body without departing from the scope of the invention.
  • a single image and / or video of the user view within the virtual mirror using a VMSVMAP may be posted to one or more SOCNETs by the user for their friends to provide feedback, comments, etc. and perhaps even vote on their favorite item of a range of selected items the user is selecting between.
  • the generated single image and / or video of the user may be posted from their PED / FED directly or from a remote server according to the VMSVMAP the user is accessing.
  • the user may make a series of initial selections which are rendered at a first resolution and all or a selected subset of these may then be submitted for rendering offline and / or remotely for further processing to a higher resolution.
  • FIG. 5 there is depicted an exemplary process flow for providing product virtualization to a user via a virtual mirror according to an embodiment of the invention.
  • a user 505 employs an image acquisition system (IAS) 515, e.g. MicrosoftTM KinectTM connected to a FED 510, e.g. MicrosoftTM XboxTM One console, which is coupled to VMSVMAP 580 via Network 100.
  • the user 505 may execute a process such as described above in respect of Figure 4 and may select via the VMSVMAP 580 to view a series of items, e.g. eyewear frames, from an enterprise, e.g. from Warby Parker 175C in Figure 1.
  • The user selects items from a first database 525, these being depicted as first to fourth sunglasses 570A to 570D respectively.
  • the VMSVMAP 580 accesses a corresponding second database 520 which stores wireframe models of the items within the first database 525, depicted as first to fourth wireframes 560A to 560D which each correspond to one of the first to fourth retail items 570A to 570D respectively.
  • the user has selected three items in series as depicted with first to third virtual mirror (VM) images 530, 540, and 550 respectively wherein the user is depicted with first to third selected items 535, 545, and 555 respectively which have been presented using wireframe models to try and focus the user's 505 attention to the overall style / design rather than aesthetic elements such as frame colour etc.
  • First to third selected items 535, 545, and 555 are based upon first to third sunglasses 560A to 560C respectively, associated with first to third retail items 570A to 570C respectively.
  • the user selects to view a fully rendered selected item, e.g.
  • FIG. 6 there is depicted an exemplary process flow for a VMSVMAP providing a virtual mirror to a user according to an embodiment of the invention.
  • the process begins at step 605 with a user initiating a virtual mirror process upon a local and / or remote VMSVMAP and then progresses to step 610 wherein the user determines whether a new rendering process should be initiated or a prior rendered image retrieved for use.
  • If a new rendering process is to be initiated the process proceeds to steps 620 to 650, which comprise:
  • Step 620 wherein the user sits to establish the scan via an image acquisition system (AIS) under control of the VMSVMAP wherein the VMSVMAP knowing the characteristics of the AIS provides information to the user with respect to the distance that they should be away from the AIS to allow the AIS to define the physical dimensions of the user's body as acquired;
  • Step 630 wherein the captured image(s) are processed to generate coarse contour map with core facial dimensions that may be referenced to the known scale of the image;
  • Step 635 wherein the VMSVMAP generates an overlay to the image presented to the user with the facial recognition template and dense contour map generated by the VMSVMAP in step 640;
  • Step 645 wherein the facial type of the user is defined and key metrics are established for subsequent use in recommendation processes and / or manipulating selected items for overlay to the user's image;
  • Step 650 wherein the facial type, coarse and dense contour maps, and key metrics are stored within a database against user credentials allowing the user to subsequently retrieve and exploit them.
  • the AIS may ask the user to hold an object of defined dimensions in order to define the magnification of the acquired scan(s); an illustrative calibration sketch follows this paragraph. From step 650 the process proceeds to step 615. If the user in step 610 elects to retrieve and employ a previously stored mapping then the process also progresses to step 615 from step 610, wherein the user is provided with a series of options for displaying and exploiting their stored facial profile. These options are depicted through first to third rendering flows 655, 665, and 675, which are associated with the user options Controller, Preview and Full respectively. Within first rendering flow 655, Controller, the user may employ a controller 656 separate from or integrated with the PED / FED upon which they are accessing the VMSVMAP. Accordingly, they can through the controller 656 perform tilts, turns, rolls, etc. of the facial rendering based upon their stored profile information. In this manner the user may exploit their rendered profile without requiring access to an image acquisition system every time they wish to employ the virtual mirror.
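  • A minimal sketch of the optional scale calibration mentioned above is given below, assuming a reference object of known width (a standard card is used purely as an example) imaged at roughly the same depth as the user.

      def mm_per_pixel(reference_width_mm: float, reference_width_px: float) -> float:
          """Scale factor from a reference object of known width held at roughly the user's depth."""
          return reference_width_mm / reference_width_px

      def to_millimetres(distance_px: float, scale_mm_per_px: float) -> float:
          return distance_px * scale_mm_per_px

      scale = mm_per_pixel(85.60, 412.0)             # e.g. a standard card measured as 412 px wide
      print(round(to_millimetres(702.0, scale), 1))  # a 702 px span converts to roughly 145.9 mm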
  • second rendering flow 665 the user when exploiting the virtual mirror through the VMSVMAP upon their PED / FED is presented with images, including first to third images 670A to 670C, that comprise a view of the user with recognition frame and dense wireframe together with rendered facial images overlaid to the image.
  • third rendering flow 675 the user when exploiting the virtual mirror through the VMSVMAP upon their PED / FED is presented with images, including fourth to sixth images 680A to 680C, that comprise a view of the user together with rendered facial images overlaid to the image. It would be evident that the user may also be presented with a further rendering flow, not shown for clarity, wherein the only images presented to the user are rendered facial images overlaid to the captured image.
  • Such a rendering flow in combination with first and second rendering flows 665 and 675 may be performed upon a single image, a short video file, a video file, and / or a live video camera stream according to the selection of the user.
  • second and third rendering flows 665 and 675 may provide a series of still images on the left hand side and a raw video flow on the right hand side, or vice versa. As such the still images may be automatically selected at limits of user head motion or selected by the user, with the raw processed video feed also presented.
  • Figure 7 depicts an exemplary process flow for providing user virtualization to a user via a virtual mirror provided by a VMSVMAP according to an embodiment of the invention.
  • the process begins at step 705 with a user initiating a virtual mirror process upon a local and / or remote VMSVMAP and then progresses to step 710 wherein the user determines whether a new rendering process should be initiated or a prior rendered image retrieved for use.
  • the process proceeds to sub-flow 720 comprising, for example, steps 620 to 650 of Figure 6 above resulting in rendered user 725 before the process flows to step 715.
  • If the user in step 710 elects to retrieve and employ a previously stored mapping then the process also progresses to step 715 from step 710, wherein the user is provided with a series of options for displaying and exploiting their stored facial profile.
  • These options are depicted through first to third rendering flows 730A, 730B, and 730C respectively, which are associated with the user options Controller and Retrieve, wherein two different users, User A and User B, respectively exploit the VMSVMAP.
  • First rendering flow 730A may, for example, be one of the first to third rendering flows 655, 665, and 675 in Figure 6.
  • second rendering flow 730B relates to first user 735, User A, wherein the user applies two different processes from a menu of processing options 740 with respect to their stored profile.
  • Within first process 745 the user has selected a styling option wherein they wish to see what they look like bald and accordingly facial profile information relating to their facial shape is employed to render the upper surface of their head.
  • second process 750 the user has elected to adjust their hair colour wherein facial profile information relating to their facial shape may be used to aid derivation of the hair line.
  • Third rendering flow 730C relates to a second user 755, User B, wherein stages in the provisioning of virtual mirror images to the second user 755 are depicted. These stages are a result of the user's selection of options from a menu of processing options 760 with respect to their stored profile.
  • a template 765 is retrieved before being processed to generate rendered profile 770.
  • the user's profile is retrieved for a process wherein the user has selected a styling option wherein they wish to see what they look like with different hair styles and accordingly facial profile information relating to their facial shape is employed on the template to define the upper surface of their head with modified template 775 which is then rendered as modified profile 780.
  • the user may then select a hair style which is rendered onto their modified profile 780.
  • FIG. 8 there is depicted an exemplary process flow for providing product virtualization to a user via a virtual mirror generated using a VMSVMAP according to an embodiment of the invention.
  • the process flow in Figure 8 is similar to that depicted and described with respect to Figure 6 except that rather than the user profile being facial it is their full body. Accordingly, as depicted the process begins at step 805 with a user initiating a virtual mirror process upon a local and / or remote VMSVMAP and then progresses to step 810 wherein the user determines whether a new rendering process should be initiated or a prior rendered image retrieved for use.
  • Step 825 wherein the user stands to establish the scan via an image acquisition system (AIS) under control of the VMSVMAP wherein the VMSVMAP knowing the characteristics of the AIS provides information to the user with respect to the distance that they should be away from the AIS to allow the AIS to define the physical dimensions of the user's body as acquired.
  • the AIS captures images with and without the user as depicted by first and second images 825A and 825B respectively.
  • the first image 825A captures the user's body profile whilst the second image 825B captures the scene without the user;
  • Step 830 wherein the captured first and second image(s) are processed to generate an isolated image 830A of the user;
  • Step 835 wherein the VMSVMAP generates from the isolated body image 830A of the user a coarse contour map with core body dimension ratios and thereafter a dense contour map 835A;
  • Step 840 wherein based upon the characteristics of the AIS and the dense contour map 835A a body type of the user is defined and key metrics are established for subsequent use in recommendation processes and / or manipulating selected items for overlay to the user's image;
  • Step 845 wherein the body type, coarse and dense contour maps, isolated image and key metrics are stored within a database against user credentials allowing the user to subsequently retrieve and exploit them.
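  • One plausible, simplified implementation of the isolation performed in step 830 is sketched below using background differencing with OpenCV; the threshold and morphology parameters are assumptions, and a depth-based segmentation from the AIS could equally be used.

      import cv2
      import numpy as np

      def isolate_user(with_user_bgr: np.ndarray, background_bgr: np.ndarray) -> np.ndarray:
          """Return an 8-bit mask (255 = user) from the captures with and without the user."""
          diff = cv2.absdiff(with_user_bgr, background_bgr)
          gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
          _, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
          kernel = np.ones((5, 5), np.uint8)
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # drop speckle noise
          mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
          return mask

      # Usage with hypothetical file names:
      # mask = isolate_user(cv2.imread("scene_with_user.png"), cv2.imread("scene_empty.png"))
      # cv2.imwrite("isolated_user_mask.png", mask)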
  • If the user in step 810 elects to retrieve and employ a previously stored mapping then the process also progresses to step 815 from step 810, wherein the user is provided with a series of options for displaying and exploiting their stored body profile. These options are depicted through first to third rendering flows 820A, 820B and 820C respectively. First and second rendering flows 820A and 820B are not depicted in detail and end at step 875A. Third rendering flow 820C proceeds with the user selecting an option from a series of options in step 880 and selecting one or more retail items from one or more enterprises through processes known within the prior art.
  • the retail items presented to the user are those for which wireframe models exist allowing the VMSVMAP to position and align the items selected by the user upon the wireframe mapping of the user and once rendered depict these atop the image of the user.
  • the user may select from a plurality of databases including, for example, first to third databases 850A to 850C respectively. Based upon the selected items these are rendered to the user's wireframe(s), aligned / positioned / fitted to their wireframe according to the key metrics and therein presented to the user.
  • Exemplary presented images to the user are depicted in first to fourth images 855 to 870 respectively before the process ends at step 875B.
  • the placement of the selected item upon the user's rendered wireframe contour map may be established through a two-stage process wherein the first stage of the process comprises roughly positioning the selected item in a pre-processing stage with respect to the 3D body wireframe contour map.
  • This pre-processing stage may, for example, be manually placing the item by the user and / or using an automatic method based on the position of the user's body / physical features.
  • the automated method may itself be a two-step process wherein initially a generic mesh template is employed followed by the body wireframe contour map.
  • the automated method may be a two-step process wherein initially a mesh / contour template generated in mapping process 415 or dense mapping process 425 is employed followed by the body wireframe contour map.
  • the 3D model of the selected item may be retrieved with associated physical properties such as weight, options, dimensional tolerances, dimensional variations supported, friction coefficients, material properties, etc.
  • Where these are not provided, they may be estimated.
  • a further stage of the process may comprise a physics-based simulation of the selected item to determine its accurate position either in a single image or in every video frame of a series of images such that the user can visualize the item from a plurality of perspectives.
  • the physics based simulation may, for example, exploit rigid body motion and gravitational forces, friction forces and collision forces.
  • the user may in some embodiments, fine-tune the position of the selected item on their body using a user interface (keyboard, mouse or a hand gesture recognition interface) as minor adjustments may have limited impact on the user's vision but substantial differences visually.
  • the VMSVMAP may provide the user with information relating to an estimated comfort level and / or fit level relating to the selected item, where appropriate.
  • an estimated comfort fit may be established by applying a deformation model to the elasticated item of apparel's base dimensions and the user's body metrics to establish the pressure applied to the user's body.
  • the comfort level can be conveyed to the user using visual feedback (i.e. color highlighting on the rendering of the skin) or using a numerical scale.
  • the visual feedback may include the location of the localized pressure.
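  • A minimal sketch of such a comfort estimate is given below; it assumes a purely elastic (Hooke-like) deformation model with a single effective stiffness per garment, which is a simplification of the deformation model referred to above, and the thresholds mapping strain to a colour are illustrative only.

    def comfort_feedback(body_circumference_cm, garment_rest_circumference_cm, stiffness_kpa=12.0):
        """Estimate localized pressure from the stretch of an elasticated garment and
        convey it as a comfort colour plus a numerical scale (0 = loose, 10 = very tight)."""
        stretch = max(0.0, body_circumference_cm - garment_rest_circumference_cm)
        strain = stretch / garment_rest_circumference_cm         # relative elongation
        pressure_kpa = stiffness_kpa * strain                    # Hooke-like pressure estimate

        if strain < 0.05:
            colour = "green"     # comfortable
        elif strain < 0.15:
            colour = "yellow"    # snug
        else:
            colour = "red"       # likely uncomfortable, high localized pressure

        score = min(10.0, round(strain * 50.0, 1))               # simple numerical scale
        return pressure_kpa, colour, score

    # Example: a 70 cm waistband worn on a 78 cm waist
    print(comfort_feedback(body_circumference_cm=78.0, garment_rest_circumference_cm=70.0))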
  • the selected item lacks substantial physical structure, e.g.
  • the VMSVMAP may in addition to eyewear and jewelry allow virtual mirrors to be provided for a range of items including, but not limited to, underwear, lingerie, shirts, blouses, skirts, trousers, pants, jackets, hats, sports equipment, and shoes.
  • the user may establish a short video sequence, e.g. walking towards the IAS, turning around, sitting and standing, etc. In this manner the user's body is isolated and wireframed frame by frame, allowing the virtual mirroring process within the VMSVMAP to map the selected item(s) onto the user's wireframe and render them as part of the virtual mirror.
  • users of embodiments of the invention may feel comfortable providing to a remote server some categories of images relating to portions of the user's body and uncomfortable with respect to other categories. For example, an image of the user's head may be acceptable but an image of the user standing naked or in underwear / lingerie unacceptable. Accordingly, embodiments of the invention may provide for user-based selection of local and / or remote processing of the images in order to provide the virtual mirror to the user. It would be evident that acquired, stored, and processed images together with their associated wireframes, contours, characteristics, dimensions, etc., as well as images rendered with items selected and / or recommended to the user, may be stored encrypted within some embodiments of the invention, unencrypted within other embodiments of the invention, and protected by security measures based upon user credentials, device identity, etc. as known within the prior art.
  • a VMSVMAP generates one or more contour maps / wireframes from acquired image data with an image acquisition system.
  • Embodiments of the invention may employ, for example, the Canny Edge Detection algorithm in order to provide an "optimal" edge detector, by which the inventors mean one that has a low error rate, meaning good detection of only existent edges; good localization, such that the distance between detected edge pixels and real edge pixels is minimized; and minimal response, such that there is only one detected response per edge.
  • to satisfy these requirements Canny employed the calculus of variations, which finds the function that optimizes a given functional.
  • the optimal function in Canny's detector is described by the sum of four exponential terms, but it can be approximated by the first derivative of a Gaussian. Accordingly, Canny edge detection is a four-step process:
  • the image is smoothed with a Gaussian filter to reduce noise
  • a gradient operator is applied for obtaining the gradients' intensity and direction
  • non-maximum suppression determines whether a pixel is a better candidate for an edge than its neighbours
  • hysteresis thresholding, using upper and lower thresholds, retains strong edges and those weak edges connected to them
  • embodiments of the invention may generate contours, i.e. curves joining all the continuous points (along the boundary) having the same color or intensity. Such contours are useful tools for shape analysis, object detection, and object recognition.
  • the method given by Equation (4) calculates the contours of an input array, e.g. the image processed by the Canny edge detection algorithm.
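  • The following sketch illustrates this edge-plus-contour pipeline using OpenCV; the particular thresholds, the Gaussian blur kernel, and the use of cv2.findContours as the contour-extraction routine (standing in for the method of Equation (4), which is not reproduced in this text) are assumptions of the example.

    import cv2

    def extract_contours(image_path, low_threshold=50, high_threshold=150):
        """Run Canny edge detection on an image and extract contours from the edge map."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            raise FileNotFoundError(image_path)

        blurred = cv2.GaussianBlur(img, (5, 5), 1.4)       # noise reduction before Canny
        edges = cv2.Canny(blurred, low_threshold, high_threshold)

        # Curves joining continuous boundary points of the same intensity
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return edges, contours

    # Example usage with a hypothetical captured frame:
    # edges, contours = extract_contours("user_face.png")
    # print(len(contours), "contours found")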
  • detection of edges, e.g. face edge, body edge, neck edge, etc., is a necessary step within the establishment of wireframes, contour maps, and rendered surfaces for VMSVMAPs.
  • the GED Algorithm uses 5 neighbouring pixels to determine the local gradient and predict the current pixel value. These 5 adjacent pixels are A and D in the same row, E and B in the same column, and C which is a diagonal pixel.
  • the GED Algorithm uses a fixed single direction, e.g. from left to right or from top to bottom, to analyze the image, but this may not predict the proper pixel values. Accordingly, a multi-directional template of the GED Algorithm was employed with the image divided into 4 parts, each part being processed individually. This is because the central part of the image covers most of the information about the picture, and the regional characteristics and local gradient directions are mainly considered.
  • Prediction Pixel Value is calculated using the following routine:
  • T is the best segmentation threshold and edges of the image are identified as below:
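  • The prediction routine and the threshold rule themselves are given in the figures and are not reproduced here; the fragment below is therefore only an illustrative stand-in. It uses the well-known median edge (MED) predictor over the left, upper, and upper-left neighbours and marks a pixel as an edge when the prediction error exceeds a segmentation threshold T; the actual five-pixel GED predictor, the quadrant-wise multi-directional processing, and the choice of T within the embodiments may differ.

    import numpy as np

    def predictive_edge_map(gray, T=24):
        """Mark pixels whose value deviates from a neighbour-based prediction by more
        than threshold T (MED predictor used here purely as an illustrative stand-in)."""
        gray = gray.astype(np.int32)
        h, w = gray.shape
        edges = np.zeros((h, w), dtype=np.uint8)
        for y in range(1, h):
            for x in range(1, w):
                a = gray[y, x - 1]      # left neighbour
                b = gray[y - 1, x]      # upper neighbour
                c = gray[y - 1, x - 1]  # diagonal (upper-left) neighbour
                if c >= max(a, b):
                    pred = min(a, b)
                elif c <= min(a, b):
                    pred = max(a, b)
                else:
                    pred = a + b - c
                if abs(int(gray[y, x]) - int(pred)) > T:
                    edges[y, x] = 255   # prediction error above T marks an edge pixel
        return edges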
  • the edge image pixel matrix (from the previous step) contains multi-pixel-wide edges. It has to be skeletonised to single-pixel-wide edges so that the contour points of the face can be determined. Table 1 below provides the terms used within the Edge Detection mechanism.
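  • A brief sketch of this thinning step, using scikit-image's skeletonize as one possible implementation (the embodiments do not prescribe a particular skeletonisation algorithm), is:

    import numpy as np
    from skimage.morphology import skeletonize

    def thin_edges(edge_map):
        """Reduce a multi-pixel-wide binary edge map (values 0 / 255) to single-pixel-wide
        edges so that the contour points of the face can be traced along them."""
        binary = edge_map > 0                 # boolean image expected by skeletonize
        skeleton = skeletonize(binary)
        return skeleton.astype(np.uint8) * 255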
  • embodiments of the invention exploit a state-of-the-art edge tracker which tracks edges in accordance with all of the above rules and definitions.
  • the output of the tracker algorithm then provides the input to subsequent steps of VMSVMAP algorithms.
  • a VMSVMAP is described as presenting virtual mirror images to the user which include rendered images of items selected by the user.
  • a user may be faced with a bewildering array of options and the VMSVMAP may incorporate a recommendation engine providing options to the user or options may be filtered based upon a classification of the user's body, a region of the user's body, face, etc.
  • embodiments of the invention exploit classification methods of faces and frame types into different categories.
  • the inventors selected the "Case-Based Reasoning" (CBR) classification method to classify face and frame shapes.
  • the recommendation engine forming part of VMSVMAPs exploits a two-step process wherein in the first part the user's body (in part or in its entirety) is classified by a classification procedure and in the second part the recommendation system provides recommendations to the user.
  • referring to Figure 9 there is depicted schematically an overview of the overall system for such a recommendation engine with respect to eyewear frames and the user's facial geometry.
  • the processes and procedures described may be applied to other item / body combinations such as for example, apparel necklines and user's shoulders / neck, earrings and user's ears, and lower body apparel such as skirts, dresses, and trousers and user's hips / legs.
  • the recommendation engine comprises a first portion 900A, which relates to the data collection pipeline, and a second portion 900B, which relates to the recommendation pipeline.
  • the system undertakes Face Shape Recognition and Frame shape Recognition.
  • Each of these algorithms contains pre-processing followed by feature extraction and classification phases.
  • an image database 910 of subjects wearing eyeglasses is presented to the system as input.
  • the geometric features 920 are extracted from the faces and the frame shapes 930 are automatically extracted based on these processes.
  • facial classification 940 and frame shape classification 950 are executed and the results stored within a first database 970A.
  • such a survey or surveys may be executed by pulling combinations of faces and frames from the first database 970A and obtaining users' votes through a variety of mechanisms including crowdsourcing, social media, dedicated web marketing, dedicated surveys, etc. Accordingly, the ratings of these images from the survey(s) are stored together with the face and eyeframe classifications within the second database 970B, from which the most popular combinations can be extracted. It would be evident that surveys and their results may be generated for a range of scenarios such that popular combinations are generated and stored for a variety of demographic and / or socioeconomic and / or geographical scenarios. Accordingly, the voting process 960 may be performed in different parts of a single country or region to capture variations within the country and / or region (e.g. California versus Texas, Bavaria versus Schleswig-Holstein, 18-25 year old Italian men, 55-70 year old North American women, etc.).
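  • By way of example only, the aggregation of such survey votes into the most popular frame shapes per face shape could be sketched as follows; the record format, the rating scale, and the shape names are hypothetical.

    from collections import Counter, defaultdict

    def most_popular_frames(votes, top_n=3):
        """votes: iterable of (face_shape, frame_shape, rating) tuples gathered from surveys,
        crowdsourcing, social media, etc. Returns the top-rated frame shapes per face shape."""
        totals = defaultdict(Counter)
        for face_shape, frame_shape, rating in votes:
            totals[face_shape][frame_shape] += rating
        return {face: counter.most_common(top_n) for face, counter in totals.items()}

    # Example survey records (face shape, frame shape, rating out of 5)
    sample_votes = [
        ("oval", "rectangular", 5), ("oval", "round", 3),
        ("round", "rectangular", 4), ("round", "cat-eye", 5),
        ("oval", "rectangular", 4),
    ]
    print(most_popular_frames(sample_votes))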
  • Second portion 900B now addresses the eyeframe recommendation system.
  • face shape detection is performed on the image and based upon the results from this a recommendation is made from different types of the glasses available in the system 985.
  • the user is presented with a frame of the recommended type overlaid upon their face, wherein, as their facial profile has been analysed, the frame can be aligned and / or scaled relative to their facial image or vice-versa.
  • the user may be presented with a couple of recommendations.
  • the Face Shape Recognition System (FSRS) employs a typical sequence of phases for pattern recognition wherein data sets are required for building categories and for comparing similarities between the test data and each category. Accordingly, input data passes through a pre-processing stage of the stored raw data wherein a sequence of data pre-processing operations is applied to the images in order to put them into a suitable format ready for feature extraction.
  • each raw data item within the data sets is transformed into a set of features, and the classifier is mainly trained on these feature representations.
  • similar data pre-processing is carried out, followed by the same sequence of operations, and the resulting features are then fed into the trained classifier.
  • the output of the classifier will be the optimal class label (sometimes with the classification accuracy) or a rejection note (return to manual classification).
  • a method employed by the inventors utilizes an analysis of different face shapes based on the boundary points across the edges of the face.
  • the inventors have established that such an algorithm yields good results using a smaller subset of feature points than many other typical applications.
  • the image database used for these experiments was an adequate number of images retrieved from online resources although it would be evident that other databases may be employed such as social media profiles, Government photo-identity databases, etc.
  • a subset of these images were chosen to reduce problems associated with lighting, facial expressions and facial details although it would be evident that embodiments of the invention may be extended to exploit image processing / filtering techniques as described supra with respect to removing background images etc.
  • the image files used were in .PNG format and an initial training data set of over 300 different face shapes was employed, these having been recognized by domain experts, i.e. Fashion Designers, Hair Stylists, etc. For the testing data the inventors took 100 new face shapes and 50 randomly selected from the training data.
  • Embodiments of the invention implemented by the inventors for face shape recognition, as a method to identify a person through a photograph of their face, employ four modules.
  • the first module in face recognition processing is known as face detection which provides an initial estimate of the location and scale of specified boundary points on the face.
  • the second module in the face recognition processing is face point creation wherein accurate localization is required.
  • the third module for processing is feature extraction, where we seek to extract effective information that will potentially distinguish faces of different persons wherein it is important in this module to obtain stable geometrical representations of the extracted features.
  • in feature matching, the extracted feature vectors of a particular image are compared to those of the face images stored in the database to determine the classification to apply to the face in the image. Referring to Figure 10, this process flow of an FSRS system according to an embodiment of the invention is outlined wherein the steps presented are:
  • Step 1020 Face boundary detection
  • the classification step 1040 employs a Case Based Reasoning (CBR) 1060 classification methodology comprising steps of retrieve 1080A, re-use 1080B, revise 1080C, and retain 1080D wherein classifications 1070 for the output are defined as being Oval, Round, Square, Oblong, Diamond and Heart. It would be evident other classifications may be employed and that the number of classifications may also be adjusted within other embodiments of the invention.
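  • A compact sketch of the four CBR phases, as they might be applied to face shape labels, is given below; the case-base structure is an assumption of the example and the feature-vector distance is deliberately left generic here (a weighted form consistent with Equation (3) is sketched later).

    def cbr_classify(query_features, case_base, distance, revise=None):
        """Case-Based Reasoning classification:
        retrieve the most similar stored case, re-use its label, optionally revise it
        (e.g. by expert correction), and retain the solved query as a new case."""
        # Retrieve: nearest stored case under the supplied distance function
        best_case = min(case_base, key=lambda case: distance(query_features, case["features"]))

        # Re-use: adopt the retrieved case's face shape label
        label = best_case["label"]

        # Revise: allow an optional correction step (e.g. manual / expert review)
        if revise is not None:
            label = revise(query_features, label)

        # Retain: store the solved problem for future retrievals
        case_base.append({"features": query_features, "label": label})
        return label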
  • Referring to Figure 11 there is depicted an image of a face after processing in first image 1100A.
  • Second image 1100B has been digitally processed using a graphics engine to increase contrast and overlaid with markers allowing the feature points 1110 to be viewed more clearly, along with the symmetric boundary 1120 which is depicted with a blue curved line.
  • Figure 12 shows the resulting feature points 1110 and boundary 1120 absent the image, indicating their numbering and position.
  • Referring to Figure 13 there are depicted representative images of the six facial types employed within the classifications of the FSRS processes according to embodiments of the invention.
  • This classification procedure represents the third step 1040 in FSRS process flow depicted in Figure 10.
  • first to sixth images 1310A to 1360A representing oval, round, square, oblong, diamond, and heart face types with their associated extracted geometric feature points as discussed supra and as would be generated by a feature extraction process representing the second step 1030 in FSRS process flow depicted in Figure 10.
  • seventh to twelfth images 1310B to 1360B representing the same 6 individuals but with their images now overlaid with generated geometric features exploiting the 17 extracted feature points depicted in first to sixth images 1310A to 1360A respectively.
  • geometric features are depicted, these being:
  • First line being diagonal lines on lower face
  • Second line being jaw line
  • [00111] Distances. The distances of the facial points from the ellipses drawn in the previous step are calculated from the respective ellipse in order to recognise which type of ellipse covers the maximum number of points. Moreover, a threshold value is set for the distances during the training phase. For example, consider the ellipse that covers the chin points. If the chin is pointed, then the distances of the chin points from the ellipse will be more than the threshold value such that these types of faces can be categorised as heart or diamond face shapes. However, if the distances are less than the threshold value, then the face may be categorized as a round or oval face shape.
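  • One simple way to approximate the distance of a facial point from a fitted ellipse, and hence to count how many boundary points a candidate ellipse covers within the threshold, is sketched below; the normalised radial approximation used here is an assumption, not the exact measure employed within the embodiments.

    import math

    def ellipse_distance(point, centre, semi_axis_a, semi_axis_b):
        """Approximate distance of a point from an axis-aligned ellipse as the deviation
        of its normalised radius from 1, rescaled by the smaller semi-axis."""
        nx = (point[0] - centre[0]) / semi_axis_a
        ny = (point[1] - centre[1]) / semi_axis_b
        radial = math.hypot(nx, ny)              # equals 1.0 exactly on the ellipse
        return abs(radial - 1.0) * min(semi_axis_a, semi_axis_b)

    def covered_points(points, centre, a, b, threshold):
        """Count how many facial boundary points lie within `threshold` of the ellipse."""
        return sum(1 for p in points if ellipse_distance(p, centre, a, b) <= threshold)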
  • Eye Line Length is calculated by joining P1 and P17. This length is compared with the cheek bone line and jaw line, helping to determine whether these lengths are approximately equal, as for oval or round faces, or differ substantially, as with heart and diamond face shapes.
  • Jaw Line Length. Calculation of the jaw line may not be accurate because some face shapes, such as round or oval, lack explicitly defined jaw lines. Therefore, the jaw line is defined by averaging 3 different lines that join face points P6-P12, P7-P11, and P8-P10.
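  • Given the 17 extracted boundary points as 2D coordinates, these two lengths can be computed as sketched below; the point dictionary layout is an assumption of the example.

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def eye_line_length(pts):
        """Length of the line joining boundary points P1 and P17."""
        return dist(pts["P1"], pts["P17"])

    def jaw_line_length(pts):
        """Average of the three lines P6-P12, P7-P11 and P8-P10, since a single
        explicit jaw line is poorly defined for round or oval faces."""
        pairs = [("P6", "P12"), ("P7", "P11"), ("P8", "P10")]
        return sum(dist(pts[a], pts[b]) for a, b in pairs) / len(pairs)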
  • Classification. Within the FSRS according to embodiments of the invention six different types of face shapes are considered for identification. However, in other embodiments of the invention more or fewer types of face shapes may be employed within the identification process. Within the current embodiment these are Heart, Oblong, Oval, Diamond, Square and Round as depicted in Figure 13. Facial properties for each shape are summarized in Table 3. The variables employed within these determinations are listed in Table 4.
  • the next step is to perform classification based on the retrieved features.
  • the main purpose of the classification is to assign each point in the space with a class label.
  • the term "classifier" refers to an algorithm that implements a classification, and a variety of classification methods can be applied depending on the problem definition.
  • the inventors selected CBR as the classification method to classify face shapes into different categories. It would be evident that within other embodiments of the invention other classification methods may be employed. Referring to Figure 14 there is depicted an exemplary scenario of CBR for FSRS.
  • the inventors defined the distance within the CBR as that given by Equation (3), where T is the feature vector extracted from the test image, F_k is the feature vector of the k-th element of the training image set, the subscript i denotes the components of the 8-dimensional feature vector, and w_i is a weight associated with each of the features in order to compensate for the relative magnitude and importance of each feature. These weights were determined experimentally and were set only once, being (1.2, 1.0, 1.0, 1.2, 1.1, 1.2, 1.1, 1.1)^T. [00120] However, in the training only the dominant face shape was considered. Within the training process the inventors employed a set of 100 images extracted from the Internet and evaluated the accuracy of the classification process upon a different set of 300 pictures.
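  • Equation (3) itself is not reproduced in this text; as a sketch only, a weighted Euclidean form consistent with the description (a per-feature weight compensating for the relative magnitude and importance of each feature) could be implemented as below, using the experimentally determined weight vector quoted above.

    import numpy as np

    WEIGHTS = np.array([1.2, 1.0, 1.0, 1.2, 1.1, 1.2, 1.1, 1.1])

    def weighted_distance(test_features, train_features, weights=WEIGHTS):
        """Weighted distance between an 8-dimensional test feature vector T and a
        training feature vector F_k (a weighted Euclidean form is assumed here)."""
        diff = np.asarray(test_features, float) - np.asarray(train_features, float)
        return float(np.sqrt(np.sum(weights * diff ** 2)))

    def nearest_face_shape(test_features, training_set):
        """training_set: list of (feature_vector, face_shape_label) pairs. Returns the
        label of the nearest training example (the CBR retrieve / re-use steps)."""
        return min(training_set, key=lambda item: weighted_distance(test_features, item[0]))[1]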
  • a user may classify their facial type and determine appropriate recommendations with respect to eyewear.
  • additional recommendation engines may exploit the core facial shape determination engine and leverage this to other aspects of the user's appearance and / or purchases.
  • recommendation engines fed from a facial shape determination engine may include those for hairstyles, hair accessories, wigs, sunglasses, hats, earrings, necklaces, beards, moustaches, tattoos, piercings, makeup, etc.
  • the facial shape determination, recommendation engine etc. may be provided to a user locally upon a PED, FED, terminal, etc. or that the user may exploit such engines remotely via a network or the Internet through remote server based provisioning of the required engines, classifications, etc.
  • surveys as presented and discussed supra may be supported and enabled through networks such as the Internet either as managed and controlled by manufacturers, providers, third party providers, etc. or as driven through other network based activities such as crowd sourcing, social media, social networks, etc.
  • Such online surveys may rapidly establish results across geographical boundaries, across wide demographics, etc.
  • a user may submit an image, receive a recommendation, and post that to their social network.
  • Their social network may provide feedback that is added to the survey results such that over time the survey results are augmented / expanded.
  • the recommendations / simulations of the user with an item of apparel / style etc., e.g. eyewear may include the appropriately dimensioned augmentation of their uploaded / captured image rather than the crude inaccurate overlays within the prior art.
  • the user may, for example, acquire an image upon their PED, transfer this to a remote server via a network for processing, and receive back a recommendation. In this manner the classification engine etc. need not be hosted upon the user's device.
  • the overall system may be contextually aware such that the user's location, for example, within an eyewear retailer leads to recommendations driven by the available eyewear frames, for example, available from that retailer.
  • the user may establish a profile with a provider of classification based recommendation services such that upon subsequent online visits the user does not need to provide an image for facial type analysis as the results of their previous visit(s) are stored against their unique account credentials.
  • a granular hierarchy of recommendations may be presented to the user such that upon picking, for example, oblong eyeframes they are then presented with recommendations with respect to other aspects of the eyewear frame such as visible frame, invisible frame, colour, arms attaching at the middle of the frame or high on the frame, etc. It would be evident that some finer granularity recommendations may require additional characterization of the user's facial profile, e.g. large nose, low nose, wide bridge, high ears, low ears, etc.
  • Implementation of the techniques, blocks, steps and means described above may be done in various ways. For example, these techniques, blocks, steps and means may be implemented in hardware, software, or a combination thereof.
  • the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described above and/or a combination thereof.
  • the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed, but could have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
  • embodiments may be implemented by hardware, software, scripting languages, firmware, middleware, microcode, hardware description languages and/or any combination thereof.
  • the program code or code segments to perform the necessary tasks may be stored in a machine readable medium, such as a storage medium.
  • a code segment or machine-executable instruction may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a script, a class, or any combination of instructions, data structures and/or program statements.
  • a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters and/or memory content. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
  • the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein.
  • Any machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein.
  • software codes may be stored in a memory.
  • Memory may be implemented within the processor or external to the processor and may vary in implementation where the memory is employed in storing software codes for subsequent execution to that when the memory is employed in executing the software codes.
  • the term "memory” refers to any type of long term, short term, volatile, nonvolatile, or other storage medium and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
  • the term “storage medium” may represent one or more devices for storing data, including read only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine readable mediums for storing information.
  • machine-readable medium includes, but is not limited to portable or fixed storage devices, optical storage devices, wireless channels and/or various other mediums capable of storing, containing or carrying instruction(s) and/or data.
  • the methodologies described herein are, in one or more embodiments, performable by a machine which includes one or more processors that accept code segments containing instructions. For any of the methods described herein, when the instructions are executed by the machine, the machine performs the method. Any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine is included.
  • a typical machine may be exemplified by a typical processing system that includes one or more processors.
  • Each processor may include one or more of a CPU, a graphics-processing unit, and a programmable DSP unit.
  • the processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM.
  • a bus subsystem may be included for communicating between the components. If the processing system requires a display, such a display may be included, e.g., a liquid crystal display (LCD). If manual data entry is required, the processing system also includes an input device such as one or more of an alphanumeric input unit such as a keyboard, a pointing control device such as a mouse, and so forth.
  • the memory includes machine-readable code segments (e.g. software or software code) including instructions for performing, when executed by the processing system, one or more of the methods described herein.
  • the software may reside entirely in the memory, or may also reside, completely or at least partially, within the RAM and/or within the processor during execution thereof by the computer system.
  • the memory and the processor also constitute a system comprising machine-readable code.
  • the machine may operate as a standalone device or may be connected, e.g. networked, to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer or distributed network environment.
  • the machine may be, for example, a computer, a server, a cluster of servers, a cluster of computers, a web appliance, a distributed computing environment, a cloud computing environment, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the term "machine” may also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

According to the prior art, online purchasing or e-commerce of items relating to a user's appearance involves the user viewing images of the items offered online by suppliers / retailers, where these images may include mannequins or be simple images of the item, sometimes from several different viewpoints. However, the user has no means of visualizing these items upon themselves prior to ordering, nor of receiving item recommendations based upon an assessment of the predetermined portion of the user's body to which the item relates. Accordingly, embodiments of the invention provide the user with virtual mirror solutions allowing a rendering of the item fitted to the predetermined portion of the user's body to be visualized. For example, by means of a virtual mirror according to the invention, a user may obtain a rendering of an eyeglass frame upon their face in real time or view recommendations based upon a classification of the shape of the user's head.
PCT/CA2015/000312 2014-05-13 2015-05-13 Systèmes de miroir virtuel et procédés associés Ceased WO2015172229A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461992291P 2014-05-13 2014-05-13
US61/992,291 2014-05-13

Publications (1)

Publication Number Publication Date
WO2015172229A1 true WO2015172229A1 (fr) 2015-11-19

Family

ID=54479073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2015/000312 Ceased WO2015172229A1 (fr) 2014-05-13 2015-05-13 Systèmes de miroir virtuel et procédés associés

Country Status (1)

Country Link
WO (1) WO2015172229A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6144388A (en) * 1998-03-06 2000-11-07 Bornstein; Raanan Process for displaying articles of clothing on an image of a person
US20050162419A1 (en) * 2002-03-26 2005-07-28 Kim So W. System and method for 3-dimension simulation of glasses
WO2012110828A1 (fr) * 2011-02-17 2012-08-23 Metail Limited Procédés et systèmes mis en œuvre par ordinateur pour créer des modèles corporels virtuels pour visualisation de l'ajustement d'un vêtement
US20130088490A1 (en) * 2011-04-04 2013-04-11 Aaron Rasmussen Method for eyewear fitting, recommendation, and customization using collision detection
US20130182005A1 (en) * 2012-01-12 2013-07-18 Cisco Technology, Inc. Virtual fashion mirror system
WO2013177464A1 (fr) * 2012-05-23 2013-11-28 1-800 Contacts, Inc. Systèmes et procédés pour générer un modèle 3d d'un produit d'essayage virtuel
US20140016823A1 (en) * 2012-07-12 2014-01-16 Cywee Group Limited Method of virtual makeup achieved by facial tracking

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3435278A1 (fr) * 2017-07-25 2019-01-30 Cal-Comp Big Data, Inc. Appareil d'analyse d'informations corporelles pouvant indiquer des zones d'ombrage
US20190034699A1 (en) * 2017-07-25 2019-01-31 Cal-Comp Big Data, Inc. Body information analysis apparatus capable of indicating shading-areas
CN109288233A (zh) * 2017-07-25 2019-02-01 丽宝大数据股份有限公司 可标示修容区域的身体信息分析装置
JP2019025288A (ja) * 2017-07-25 2019-02-21 麗寶大數據股▲フン▼有限公司 リペア領域を標示可能な生体情報解析装置
US10521647B2 (en) 2017-07-25 2019-12-31 Cal-Comp Big Data, Inc. Body information analysis apparatus capable of indicating shading-areas
CN109508581A (zh) * 2017-09-15 2019-03-22 丽宝大数据股份有限公司 身体信息分析装置及其腮红分析方法
WO2019220208A1 (fr) * 2018-05-16 2019-11-21 Matthewman Richard John Systèmes et procédés permettant de fournir une recommandation de style
CN114222995A (zh) * 2019-10-25 2022-03-22 深圳市欢太科技有限公司 图像处理方法、装置以及电子设备
US11630566B2 (en) 2020-06-05 2023-04-18 Maria Tashjian Technologies for virtually trying-on items

Similar Documents

Publication Publication Date Title
US20180268458A1 (en) Automated recommendation and virtualization systems and methods for e-commerce
EP3479296B1 (fr) Système d'habillage virtuel utilisant un traitement d'image, un apprentissage automatique et une vision artificielle
CN111787242B (zh) 用于虚拟试衣的方法和装置
US11783557B2 (en) Virtual try-on systems and methods for spectacles
US11010896B2 (en) Methods and systems for generating 3D datasets to train deep learning networks for measurements estimation
US10489683B1 (en) Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks
US9245499B1 (en) Displaying glasses with recorded images
KR102293008B1 (ko) 정보 디스플레이 방법, 디바이스, 및 시스템
US9254081B2 (en) Fitting glasses frames to a user
CN110609617B (zh) 虚拟镜子的装置、系统和方法
US11854069B2 (en) Personalized try-on ads
JP2020522285A (ja) 全身測定値抽出のためのシステムおよび方法
US20220188897A1 (en) Methods and systems for determining body measurements and providing clothing size recommendations
Singh et al. AVATRY: virtual fitting room solution
US10685457B2 (en) Systems and methods for visualizing eyewear on a user
WO2015172229A1 (fr) Systèmes de miroir virtuel et procédés associés
JP2023539159A (ja) バーチャルフィッティングサービス提供方法、装置およびそのシステム
KR20200025291A (ko) 사용자 단말기를 이용한 퍼스널 컬러진단을 통한 쇼핑서비스 제공방법 및 그 시스템
Marelli et al. Faithful fit, markerless, 3d eyeglasses virtual try-on
CN116703507A (zh) 图像处理方法、显示方法及计算设备
US10922579B2 (en) Frame recognition system and method
CN113011932A (zh) 试衣镜系统、图像处理方法、装置及设备
CN114339434A (zh) 货品试穿效果的展示方法及装置
CN111429213A (zh) 用于衣物模拟试穿的方法及装置、设备
US12387445B2 (en) Three-dimensional models of users wearing clothing items

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15793256

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.02.2017)

122 Ep: pct application non-entry in european phase

Ref document number: 15793256

Country of ref document: EP

Kind code of ref document: A1