WO2015017687A2 - Systems and methods for producing predictive images - Google Patents
- Publication number
- WO2015017687A2 (PCT/US2014/049216)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- cosmetic
- data set
- subject
- digital image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
- A61B5/004—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
Definitions
- the present invention relates in general to systems and methods for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment.
- the present teachings include a system for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment, the system comprising a controller, an optional image input device, an optional image output device, and a computer program disposed in the controller for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and a predictive analysis function for generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
- the first digital image can be provided to the controller via a camera.
- the camera can include a webcam or a computation device integrated camera.
- the controller is comprised by a mobile device including, but not limited to, a tablet computer, laptop computer, and smartphone.
- the first digital image can be provided to the controller using two-dimensional coordinates or using three-dimensional coordinates.
- the second data set includes a parameters-based medical guideline, parameters-based surgical guideline, and/or a clinical trial summary.
- a system for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment, the system comprising a controller comprising an algorithm for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
- a method for generating a predictive image of a subject resulting from a cosmetic or medical treatment comprising acquiring a first digital image of the subject on a computer system prior to cosmetic or medical treatment, querying the subject to select at least one of an individual cosmetic or medical treatment from a listing of known cosmetic and medical treatments, and an anatomical feature of the subject depicted by the first digital image, collecting a first data set of anatomical measurements of the subject anatomical features obtained by evaluation of the first digital image, the first data set stored on the computer system, comparing the first data set to a second data set comprising known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment or anatomical feature of the subject depicted by the first digital image, wherein the second data set includes, but is not limited to, a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary related to the selected cosmetic or medical treatment, and generating a second digital image predictive of the result of the cosmetic or medical treatment in the subject based on the comparison of the first data set and the second data set.
- the method further includes the act of visualizing the predictive image on an image displaying device.
- the method further includes the act of transmitting the predictive image to a physician skilled in the cosmetic or medical treatment.
- the method further includes the act of comparing the selected cosmetic or medical treatment to a third data set comprising a listing of physicians skilled in the cosmetic or medical treatment, and generating a subset of the listing based on a predetermined value, including the geographical location of the subject.
- the first digital image is acquired using an image capturing device.
- the first digital image can be two-dimensional or three-dimensional.
- the first digital image can be captured by a camera, including a webcam or a computation device integrated camera.
- the computation device can be a mobile device including, but not limited to, a tablet computer, laptop computer, and smartphone.
- the digital image can be a digital file stored in the computer system memory.
- the digital image can be an image of at least one anatomical feature of the subject body, including the subject face.
- the image capturing device and image displaying device are the same device.
- the anatomical measurements can be two-dimensional coordinates of the anatomical feature or three-dimensional coordinates of the anatomical feature.
- a method for producing an image predictive of cosmetic or medical treatment outcome in a subject comprising acquiring a first digital image of the subject prior to cosmetic or medical treatment, calculating a first data set of anatomical measurements for the subject, comparing the first data set to a second data set comprising previously existing anatomical measurements obtained from prior cosmetic or medical treatment outcomes, and modifying the first digital image based upon the comparison to the second data set and said first data set to provide a second digital image predictive of the subject following the cosmetic or medical treatment.
- the second data set includes, but is not limited to, a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary.
- a segment of the digital image can be selected by a user.
- the anatomical measurements are two-dimensional coordinates of the anatomical feature or three-dimensional coordinates of the anatomical feature.
- One embodiment is a method for determining medical treatment outcomes that: (a) acquires an image of a user through an image file upload or through a built-in camera in a computation device, such as a smartphone, or a webcam connected with a computer; (b) displays an image of a user, also referred to herein as a prospective patient (the image could be an image of: a face, a body, a part of the body, or one or more anatomical features of the body); and (c) enables an anatomical area of the image and specific features in the area to be identified, which anatomical area has been a subject of cosmetic, medical, or plastic surgery treatment.
- the method further includes: (a) allowing the user to point to ("touch") the anatomical areas that they wish to change, without knowing the specific name of the anatomical area; (b) determining a level of "severity" of a defect in the anatomical area of the image in light of data or a meta-data set from the one or more clinically validated studies or trials relating to medical treatments; and (c) allowing the user to visualize different potential treatment outcomes by selecting different treatment methods and/or clinical procedures.
- determining a level of severity comprises correlating the image of the identified anatomical area to a clinically validated scale of severity in the data or the meta data set from the clinically validated studies or trials.
- determining likely treatment outcomes comprises providing a morphed image of the identified anatomical area based on the likely treatment outcome wherein the morphed image is provided using data or a meta data set from the one or more clinically validated studies or trials.
- Another embodiment is a method for determining medical treatment outcomes that: (a) acquires an image of a user through an image file upload or through a built-in camera in a computation device, such as an IPHONE® or IPAD®, or a webcam connected with a computer; (b) displays an image of a user, also referred to herein as a prospective patient (the image could be an image of a face, a body, a part of the body, or one or more anatomical features of the body); and (c) enables an anatomical area of the image and specific features in the area to be identified, which anatomical area has been a subject of cosmetic, medical, or plastic surgery treatment.
- the method further includes: (a) determining a level of "severity" of a defect in the identified anatomical area of the image in light of data or a meta-data set from the one or more clinically validated studies or trials using the approved drug, biologic, medical device or combination product, and (b) determining one or more likely treatment outcomes for the prospective patient.
- determining a level of severity comprises correlating the image of the identified anatomical area to a clinically validated scale of severity in the data or the meta data set from the clinically validated studies or trials.
- determining likely treatment outcomes comprises providing a morphed image of the identified anatomical area based on the likely treatment outcome wherein the morphed image is provided using data or a meta data set from the one or more clinically validated studies or trials.
- Figure 1 is a conceptual diagram illustrating an example computing device for obtaining a cosmetic or medical or surgical treatment simulation.
- Figure 2 is a conceptual diagram illustrating an example computing device with 3D image headset for obtaining a cosmetic or medical or surgical treatment simulation.
- Figure 3 is a flowchart illustrating details of an example user workflow for obtaining a cosmetic or medical or surgical treatment simulation.
- Figure 4 is a flowchart illustrating details of the simulation process for an example user for obtaining a cosmetic or medical or surgical treatment simulation.
- Figure 5 is a depiction of severity scores for a nasolabial fold.
- Figure 6 is a before and after representation of a surgical procedure.
- Figure 7 is a conceptual diagram illustrating an example facial image that includes example facial anatomical areas that are subject to cosmetic, medical, and plastic surgery treatment.
- Figure 8 is a block diagram illustrating details of an example computing device assembly for capturing human images, detecting and displaying treatment areas, and generating treatment outcome simulations.
- Figure 9 is a flowchart illustrating a method for determining medical treatment outcomes in accordance with one or more embodiments.
- Figure 10 shows published data on the duration of an effect of a volumizing treatment.
- Figure 11 is a detailed block diagram illustrating details of an example computing device for capturing human images, detecting and displaying treatment areas, and generating treatment outcome simulations.
- Figure 12 is a comparison between (a) Luxand FaceSDK feature points and (b) Stasm feature points.
- Figure 13 is an example of located clean skin patches.
- Figure 14 is a comparison between (a) wrinkle region example and (b) result image obtained by inpainting.
- Figure 15 is a comparison between (a) original image and (b) simulation of lateral brow lift procedure.
- Figure 16 is a comparison between (a) original image and (b) simulation of forehead lift procedure.
- Figure 17 is a comparison between (a) original image and (b) simulation of lower blepharoplasty procedure.
- Figure 18 is a comparison between (a) original image and (b) simulation of glabella procedure.
- Figure 19 is a comparison between (a) original image and (b) simulation of marionette lines procedure.
- Figure 20 is a comparison between (a) original image and (b) simulation of nasolabial folds procedure.
- Figure 21 is a comparison between (a) original image and (b) simulation of laser resurfacing procedure.
- Cosmetic Treatment means any treatment performed and/or prescribed by a physician for the purpose of improving the patient's appearance, including but not limited to, treatment for acne, rosacea, scars, age spots, uneven skin tone, spider veins, unwanted hair, moles, birthmarks, wrinkles, dark coloration, and undesirable facial and body contours.
- Medical Treatment includes any surgical procedure which modifies or improves the appearance of a physical feature, irregularity, or defect.
- Controller refers to a computer system capable of executing application software including a processor, computer-readable memory, and data storage.
- the controller is an operator-programmable microcomputer that can connect to other computers wirelessly and to which system visual monitors, visual detectors (including cameras, such as a webcam or a smartphone integrated camera), and optical sensors (including MICROSOFT® KINECT® and three-dimensional headsets) are connected.
- the controller can be programmed either locally or remotely. In another configuration, it can be controlled in real time by an operator.
- An example of a controller is the "computer device" provided in Fig. 8 below.
- Smartphone, as used herein, is broadly defined to include mobile and consumer devices which optionally communicate with the Internet, and which are programmable and configurable to provide the predictive images herein, such as IPHONE®, IPAD®, KINDLE®, FIRE®, and ANDROID® devices such as NEXUS® and GALAXY®.
- Predictive Image means a morphed image of an identified anatomical area of a subject based on the likely cosmetic or medical treatment outcome.
- the morphed image is generated based on the comparison between a first data set of original anatomical coordinates, and a second data set comprising one or more clinically validated studies (e.g., a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary) related to a selected cosmetic or medical treatment.
- the morphed image can also be correlated with a severity score based on the association of data or a meta data set from the one or more clinically validated studies.
- Systems and methods are provided for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment. Such systems and methods provide a subject the ability to share the predictive images with their friends and family and doctors as well as to exchange information with other individuals regarding the best options for them and the experience after their treatments.
- a system 100 is provided in Figs. 1 and 2, more fully described below, for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment.
- the system comprises various components which operate together to generate a predictive image.
- a system for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment.
- the system can comprise a controller, an optional image input device, and an optional image output device.
- the controller can be programmed for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and a predictive analysis function for generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
- the first digital image can be provided to the controller via a camera.
- the camera can include a webcam or a computation device integrated camera.
- the camera can be a three-dimensional sensor, e.g. a MICROSOFT® KINECT® or other three-dimensional sensor or display known in the art can be integrated in the system.
- Fig. 1 depicts an example of the system 150 illustrating various components that may be utilized by a user to obtain predictive images of various cosmetic treatments and plastic surgery procedures.
- System 150 could include a smartphone, another mobile device, or another computation device, connected to the Internet through a wireless or wired connection.
- Component 152 is a built-in camera in this embodiment. It can be a standalone camera or other digital image-capturing device. It can also be omitted if the software application runs exclusively on stored image files, whether held in the cloud or stored on the computation device 150.
- Controller 154 is a computation device capable of either running the simulation software by itself, or containing web browsers such as Internet Explorer, Google Chrome, and/or Firefox, so that device 154, through a website, can send the digital image to a remote server, receive and process instructions from the server, and display the resulting webpages to the user.
- Device 150 has a user interface so that users can select specific anatomic areas on the digital image and send instructions to a server.
- Fig. 2 is a variation of the example system 150 where a 3D camera system 156 replaces the traditional 2D camera system as a way to capture the original digital image.
- the controller 154 of the system can be comprised by a mobile device including a tablet computer, laptop computer, and smartphone.
- the first digital image of the system can therefore be provided to the controller using two-dimensional coordinates or using three-dimensional coordinates.
- the system can produce images predictive of a subject's appearance resulting from cosmetic or medical treatment.
- the controller can comprise an algorithm for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
- Fig. 3 is a flow chart illustrating the use of the system.
- Process 300 may begin when a subject interested in cosmetic treatment and/or plastic surgery wishes to improve his or her appearance.
- the subject may first start by uploading a digital image of the subject, as illustrated in 302, taking a photograph with a built-in camera or three-dimensional sensor from a computation device, as illustrated in 304, or using a previously created image of the subject saved in computer memory, as illustrated in 306.
- the captured image is pre-processed in 310.
- the image may be corrected for insufficient quality, low lighting conditions, blur, wrong orientation, partial capture, and other deficiencies.
- the system may also display an error message to the user pointing out the deficiencies and suggest ways to correct the deficiencies, or it may display a message to the user to upload another image, or re-take another photograph.
- the system will also make adjustments to the size of the digital image to prepare the image for further analysis and manipulation.
- the system in 320 will process the image and display the anatomic areas that can be targeted for cosmetic or medical treatment.
- the specific anatomic areas, such as those of the subject's face, can be identified by the user in step 330 through a touch screen, a mouse, or other devices which may interact with the displayed image on the screen. Based on the user selection, the system will generate the likely treatment outcome simulation and display it side-by-side with the original image in process 340.
- the detailed process and architecture of step 340 is described in detail in Fig. 4.
- the subject may proceed to choose a physician, as illustrated in process 350.
- the physicians may be provided by the system based on variables associated with the subject and the physician. Such variables can include geographical location, the cosmetic or medical procedure, the subject's budget, the subject's ethnicity, the subject's sex, the subject's age, the subject's health, and so on.
- Prior to selecting a physician practitioner, the subject may desire to conduct more research on the treatment itself, as provided by the system and as illustrated in process 352, and/or on the specific medical products used for the selected procedures, as illustrated in process 354.
- the user may also choose to save the original and simulation images and related information in an electronic album, which may be an Internet commerce subroutine, as illustrated in process 356.
- the subject may send a request for office visit through process 360. They may also conduct additional research on the physician practitioner using the system, as illustrated in step 362, before sending a request for office visit. This process may be integrated with the physician's online calendar and enable the user to schedule the office visit directly.
- Fig. 4 is a flow chart illustrating the predictive visualization simulation aspect of the system.
- through this simulation, the system produces a medically realistic prediction for the individual subject.
- the system obtains a qualified image from process 310.
- the system further identifies the potential medical treatment areas of the image in process 320.
- a non-limiting example of the treatment areas is illustrated in Fig. 5.
- Process 334 defines the specific cosmetic or medical procedures which have an impact on the target anatomic area, and then the system queries a medical and/or clinical database to define the predictive image simulation algorithms.
- the simulation outcome is displayed in step 340, side-by-side with the original image.
- a Python interface is provided so that the web front end (the subject) can invoke image processing operations on the web server.
- the main procedures that can be invoked include loading an image, preprocessing and processing it, and specifying which cosmetic surgery procedures should be simulated (see the sketch below).
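- The patent does not reproduce the interface itself, so the following is a minimal, hypothetical sketch of such a Python dispatch layer; every function and procedure name here is illustrative, not the actual implementation.

```python
from typing import Callable, Dict

import numpy as np


def load_image(path: str) -> np.ndarray:
    """Stand-in loader; a real server would decode the uploaded file."""
    return np.zeros((480, 640, 3), dtype=np.uint8)


def preprocess(image: np.ndarray) -> np.ndarray:
    """Stand-in for the quality and size adjustments of process 310."""
    return image


def simulate_brow_lift(image: np.ndarray) -> np.ndarray:
    return image.copy()   # placeholder for the warping described below


def simulate_wrinkle_removal(image: np.ndarray) -> np.ndarray:
    return image.copy()   # placeholder for the inpainting described below


# Procedures the web front end may request by name.
PROCEDURES: Dict[str, Callable[[np.ndarray], np.ndarray]] = {
    "brow_lift": simulate_brow_lift,
    "wrinkle_removal": simulate_wrinkle_removal,
}


def invoke(procedure: str, image_path: str) -> np.ndarray:
    """Entry point the web front end calls with the user's selection."""
    return PROCEDURES[procedure](preprocess(load_image(image_path)))


result = invoke("brow_lift", "subject.jpg")
```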
- All modules and platforms above may be cross-platform (e.g., WINDOWS®, LINUX®, IOS®, OSX®, and ANDROID®) and can be programmed in any suitable programming language (e.g., C++, JAVA, PHP, SQL, PERL, PASCAL, SWIFT, and PYTHON).
- a library is provided that implements all functionality of the system, i.e., all cosmetic surgery procedures. It implements high-level functionalities such as loading, preprocessing, and processing of images, and specifying which cosmetic surgery procedures should be simulated. Exemplary procedures are provided in the Examples below.
- Tables 1 through 6 provide non-limiting examples of parameters-based clinical trial summaries.
- the co-primary efficacy endpoints were the investigator's rating of glabellar line severity at maximum frown and the subject's global assessment of change in appearance of glabellar lines, both at Day 30 post-injection.
- a responder was defined as having a severity grade of 0 or 1.
- Fig. 5 provides an example of the severity scores applied to nasolabial folds, and predictive images produced visualizing such severity scores.
- a library is also provided that serves as a gateway between the software modules above and third party facial feature detection (FFD) libraries. It provides a universal interface for accessing the points of, e.g., the eyes, eyebrows, nose, and other facial features, no matter which third party FFD library is used. In addition, it enables dynamic creation of a third party FFD library instance and testing whether a face is found in the image, as sketched below.
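- A minimal sketch of such a gateway, assuming hypothetical wrapper classes for the two third party FFD libraries discussed below (Luxand FaceSDK and Stasm); the detection outputs are stubbed.

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Tuple

Point = Tuple[float, float]


class FFDBackend(ABC):
    """Universal interface: named facial feature contours as point lists."""

    @abstractmethod
    def detect(self, image) -> Dict[str, List[Point]]:
        ...

    def face_found(self, image) -> bool:
        return bool(self.detect(image))


class LuxandBackend(FFDBackend):   # would wrap Luxand FaceSDK (66 points)
    def detect(self, image):
        return {"left_eyebrow": [(100.0, 80.0)]}   # stubbed output


class StasmBackend(FFDBackend):    # would wrap Stasm (77 points)
    def detect(self, image):
        return {"left_eyebrow": [(101.0, 79.0)]}   # stubbed output


def create_backend(name: str) -> FFDBackend:
    """Dynamic creation of a third party FFD library instance by name."""
    return {"luxand": LuxandBackend, "stasm": StasmBackend}[name]()


eyebrow = create_backend("stasm").detect(None)["left_eyebrow"]
```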
- Another library can be provided that performs eyebrow lifting functionality with the use of image warping algorithms known to those of skill in the art. It can include two image warping algorithms: Poisson coordinates and cubic mean value coordinates.
- Another library can be provided that implements inpainting functionality which is mainly used for the removal of wrinkles. It includes two inpainting algorithms: Poisson blending and healing.
- Another library can be provided that performs filtering of images with the purpose of reducing fine wrinkles, age spots, etc.
- Another library can be provided with utilities that are commonly needed in the other libraries, such as working with strings and parsing command line parameters. It can also contain geometry algorithms for working with points, lines, and angles. The library can also extend the functionality of other libraries for working with polygons.
- a variety of third party libraries known to those of skill in the art can be used in the system. These include OpenCV, Luxand Face SDK, Stasm, Clipper, and OpenBlas.
- Facial Feature Detection The purpose of facial feature detection (FFD) algorithms is to find the coordinates of facial feature points in an image. This usually includes eyes, eye contours, eyebrows, lip contours, nose tip, and so on.
- Two third party libraries can be used to detect facial features: Luxand FaceSDK (Luxand Inc., Alexandria VA) and Stasm (See Active Shape Models with SIFT Descriptors and MARS. Milborrow, Stephen and Nicolls, Fred. 2, 2014, VISAPP, Vol. 1).
- Stasm is an open source library which uses active shape model (ASM; See Active Shape Models - 'Smart Snakes'. T.F.Cootes, C.J.Taylor. 1992. British Machine Vision Conference), to detect facial features.
- the ASM starts the search for landmarks from the mean training face shape aligned to the position and size of the image face determined by a global face detector. It then repeats the following two steps until convergence: (1) suggest a tentative shape by adjusting the landmark locations through template matching of the image texture around each point, and (2) conform the tentative shape to the global shape model.
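- A structural sketch of that search loop follows; the texture matching and the shape model are stubbed with toy operations, since real implementations such as Stasm learn both from annotated training faces.

```python
import numpy as np


def asm_search(mean_shape: np.ndarray, max_iters: int = 50,
               tol: float = 0.5) -> np.ndarray:
    shape = mean_shape.copy()              # start from the aligned mean shape
    for _ in range(max_iters):
        # Step 1: tentative shape from local texture matching around each
        # landmark (stubbed here as a small random adjustment).
        tentative = shape + np.random.uniform(-1.0, 1.0, shape.shape)
        # Step 2: conform the tentative shape to the global shape model
        # (stubbed here as a pull back toward the mean shape).
        conformed = 0.7 * tentative + 0.3 * mean_shape
        if np.abs(conformed - shape).max() < tol:
            return conformed               # converged
        shape = conformed
    return shape


landmarks = asm_search(np.zeros((77, 2)))  # 77 points, as Stasm returns
```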
- Luxand FaceSDK returns the coordinates of 66 facial feature points, while Stasm returns 77 points, as can be seen in Fig. 12.
- the main differences between Luxand FaceSDK and Stasm are that (1) Stasm returns points around the eyebrow, while Luxand FaceSDK returns only points in the middle of the eyebrow, (2) Luxand FaceSDK returns locations of smile lines (nasolabial folds), while Stasm does not, and (3) Stasm returns more points from the head contour, while Luxand FaceSDK returns points only from the chin.
- Point locations returned by Luxand FaceSDK and Stasm are only slightly different.
- One example is when points are on the eye contour. With Stasm, the points are closer to the eye center than with Luxand. Different offsets can be used, however, depending on which FFD algorithm is currently used.
- Contours around facial areas that are targets for cosmetic and medical procedures can be provided. Those of skill in the art will recognize that other anatomical areas can be contoured similarly. See, e.g., Fig. 7.
- Image warping is the process of digitally manipulating an image such that shapes portrayed in the image have been distorted.
- warping means that original pixels are mapped to new coordinates without changing the colors. This can be achieved by defining an interpolation function that maps original coordinates to new coordinates.
- An image warping operation can be used to simulate cosmetic and medical procedures, such as the eyebrow lift among other procedures. This can be accomplished by defining a closed-form interpolation that produces natural-looking functions while allowing flexible control of boundary constraints. In such a way, the warping affects only the area close to the target (eyebrows in an eyebrow lift procedure, for example) and not other facial features.
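- The sketch below is a simplified, hypothetical stand-in for such an interpolation: a Gaussian-falloff displacement field applied with OpenCV's remap, so that only pixels near the chosen brow point move while the rest of the face stays fixed. The cage-based schemes discussed next would replace the Gaussian falloff in a production implementation.

```python
import cv2
import numpy as np


def lift_region(image: np.ndarray, center: tuple,
                lift_px: float, sigma: float) -> np.ndarray:
    h, w = image.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Displacement decays with distance from the target, leaving other
    # facial features untouched.
    dist2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    falloff = np.exp(-dist2 / (2.0 * sigma ** 2))
    # Sampling from below a pixel moves its content upward (a "lift").
    map_y = (ys + lift_px * falloff).astype(np.float32)
    return cv2.remap(image, xs, map_y, interpolation=cv2.INTER_LINEAR)


face = np.full((300, 300, 3), 200, dtype=np.uint8)
lifted = lift_region(face, center=(150, 100), lift_px=8.0, sigma=25.0)
```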
- a variety of image warping algorithms can be used including cubic mean value coordinates (See Cubic mean value coordinates. Li, Xian-Ying, Ju, Tao and Hu, Shi-Min. 4, 2013, ACM Transactions on Graphics (TOG) - SIGGRAPH 2013 Conference, Vol. 32) and Poisson Coordinates (See Poisson Coordinates. Li, Xian-Ying and Hu, Shi-Min. 2, 2013, IEEE Transactions on Visualization and Computer Graphics, Vol. 19, pp. 344-352).
- CMV coordinates allow shape deformation using curved cage networks.
- CMV interpolation was first introduced by Michael S. Floater and Christian Schulz (See Pointwise radial minimization: Hermite interpolation on arbitrary domains. Floater, Michael S and Schulz, Christian. 2008. Computer graphics forum). This method was later extended (See Li, Xian-Ying. Cubic Mean Value Coordinates. 2013).
- a boundary constraint may have an impact on distant locations when interpolating with CMV coordinates in a non-convex polygon.
- biharmonic coordinates have the advantage of producing more "shape-aware" functions, since the impact of a boundary constraint is propagated only through the interior of the domain.
- Poisson coordinates are a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values.
- Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. Explicit formulae for Poisson coordinates in both continuous and 2D discrete forms can be provided.
- Poisson coordinates are pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls).
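- For background, the plain mean value coordinates that both schemes extend can be computed directly from Floater's construction; the sketch below also checks the linear precision property noted above (the coordinates reproduce the query point itself).

```python
import numpy as np


def mean_value_coords(x: np.ndarray, poly: np.ndarray) -> np.ndarray:
    """Mean value coordinates of interior point x w.r.t. polygon vertices."""
    n = len(poly)
    w = np.zeros(n)

    def half_tan(a: np.ndarray, b: np.ndarray) -> float:
        # tan(angle(a, x, b) / 2), the angle subtended at x.
        ca = np.dot(a - x, b - x) / (np.linalg.norm(a - x) * np.linalg.norm(b - x))
        return np.tan(np.arccos(np.clip(ca, -1.0, 1.0)) / 2.0)

    for i in range(n):
        prev, cur, nxt = poly[(i - 1) % n], poly[i], poly[(i + 1) % n]
        w[i] = (half_tan(prev, cur) + half_tan(cur, nxt)) / np.linalg.norm(cur - x)
    return w / w.sum()


square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
lam = mean_value_coords(np.array([0.25, 0.25]), square)
print(lam @ square)   # linear precision: reproduces ~[0.25, 0.25]
```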
- Step 1: Detection of clean skin patches within the face region. See Fig. 13.
- the system can detect rectangular regions on the forehead, chin, and cheeks where the skin does not contain any wrinkles or hair.
- an edge detector sensitive to small edges is used (described in Edge extraction and enhancement with coordinate logic filters using block represented images. Mertzios, Basil G., et al. 1995. Proceedings of the 12th European Conference on Circuit Theory and Design (ECCTD '95)), and the maximal rectangles which do not contain any edges are then selected; an integral image (summed area table) approach is used to speed up the rectangle selection (see, e.g., Integral Images).
- a matrix is created corresponding to the edge mask, where a cell is set to a small negative value if the corresponding pixel in the edge mask is not 0; otherwise the cell value is set to 0.
- the rectangle whose cell values sum to zero in the integral image of the matrix and which has the longest perimeter corresponds to a region that does not contain any edges, as sketched below.
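- A brute-force sketch of that zero-sum rectangle search over a summed area table (illustrative only; a real implementation would prune the O(H²W²) enumeration):

```python
import numpy as np


def best_clean_rectangle(edge_mask: np.ndarray):
    """Largest-perimeter (x, y, w, h) rectangle containing no edge pixels."""
    penalty = np.where(edge_mask > 0, -1.0, 0.0)   # negative where edges are
    sat = penalty.cumsum(0).cumsum(1)              # summed area table
    sat = np.pad(sat, ((1, 0), (1, 0)))            # zero row/col simplifies sums
    H, W = edge_mask.shape
    best, best_perim = None, -1
    for y0 in range(H):
        for y1 in range(y0 + 1, H + 1):
            for x0 in range(W):
                for x1 in range(x0 + 1, W + 1):
                    # Any rectangle sum is four table lookups.
                    s = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
                    perim = 2 * ((y1 - y0) + (x1 - x0))
                    if s == 0 and perim > best_perim:
                        best, best_perim = (x0, y0, x1 - x0, y1 - y0), perim
    return best


mask = np.zeros((8, 8))
mask[3, 3] = 1                                     # a single "edge" pixel
print(best_clean_rectangle(mask))                  # widest edge-free region
```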
- An example of detected clean skin patches is illustrated in Fig. 13.
- Step 2: Blending.
- for each wrinkle region we define a source region S, a mask M, and an overlay image O.
- the overlay image O is obtained from Step 1.
- the blending mask M is obtained from the wrinkle mask according to the wrinkle type.
- the source region S is a rectangular region of the source image corresponding to the bounding rectangle of mask M.
- a healing algorithm can be used as described by Xian-Ying Li (Mixed-Domain Edge-Aware Image Manipulation. IEEE Transactions on Image Processing, 2013, 22(5): 1915-1925; http://cg.cs.tsinghua.edu.cn/people/~xianying/Papers/MixedDomain/index.html) to blend the overlay image O onto the source image S using mask M (a sketch using OpenCV's built-in Poisson blending is given below).
- the procedure defined in Step 2.1 can be used.
- Step 2.1: Skin patch enlargement. Clean skin patches can be enlarged using a tiling technique with random rotations to avoid undesired pattern effects. The same healing algorithm as described in Step 2 can be used for tiling.
- Fig. 14 is an example of the wrinkle region and result image obtained by the described algorithm (before and after in panels (a) and (b) respectively).
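- As an approximation of the blending in Step 2, OpenCV's built-in Poisson blending can stand in for the cited healing algorithm; the arrays below are synthetic stand-ins for the source region S, mask M, and overlay O.

```python
import cv2
import numpy as np

S = np.full((100, 100, 3), 150, dtype=np.uint8)   # source region with wrinkle
S[45:55, 20:80] = 90                              # a dark "wrinkle" band
O = np.full((100, 100, 3), 150, dtype=np.uint8)   # clean skin overlay (Step 1)
M = np.zeros((100, 100), dtype=np.uint8)
M[40:60, 15:85] = 255                             # blend mask over the wrinkle

center = (50, 50)                                 # where O's masked region lands
healed = cv2.seamlessClone(O, S, M, center, cv2.NORMAL_CLONE)
```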
- a mixed domain filter can be used for laser resurfacing procedure simulation to produce predictive images.
- the filter can be based on the Xian-Ying Li algorithm and its implementation (see citations above).
- the algorithm can perform edge-aware processing (processing of edge regions to decrease blurring effect) instead of ordinary blurring.
- the general algorithm can be implemented to perform recursive procedures in both the time and frequency domains within different image resolutions, and suitable algorithm parameters can be chosen to produce a high-quality laser resurfacing simulation.
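- As a hedged stand-in for that mixed-domain filter, an off-the-shelf edge-aware smoother such as OpenCV's bilateral filter produces a comparable blur-without-edge-loss effect, though it is not the cited algorithm.

```python
import cv2
import numpy as np

face = np.random.randint(120, 180, (200, 200, 3), dtype=np.uint8)  # noisy skin
# d: neighborhood diameter; sigmaColor: how dissimilar tones may mix;
# sigmaSpace: spatial reach. Larger values give a stronger "resurfacing".
resurfaced = cv2.bilateralFilter(face, d=9, sigmaColor=40, sigmaSpace=7)
```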
- a method for generating a predictive image of a subject resulting from a cosmetic or medical treatment comprising acquiring a first digital image of the subject on a computer system prior to cosmetic or medical treatment, querying the subject to select at least one of an individual cosmetic or medical treatment from a listing of known cosmetic and medical treatments, and an anatomical feature of the subject depicted by the first digital image, collecting a first data set of anatomical measurements of the subject anatomical features obtained by evaluation of the first digital image, the first data set stored on the computer system, comparing the first data set to a second data set comprising known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment or anatomical feature of the subject depicted by the first digital image, wherein the second data set includes, but is not limited to, a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary related to the selected cosmetic or medical treatment, and generating a second digital image predictive of the result of the cosmetic or medical treatment in the subject based on the comparison of the first data set and the second data set.
- the method further includes the act of visualizing the predictive image on an image displaying device.
- the method further includes the act of transmitting the predictive image to a physician skilled in the cosmetic or medical treatment.
- the method further includes the act of comparing the selected cosmetic or medical treatment to a third data set comprising a listing of physicians skilled in the cosmetic or medical treatment, and generating a subset of the listing based on a predetermined value, including the geographical location of the subject.
- the first digital image is acquired using an image capturing device.
- the first digital image can be two-dimensional or three-dimensional.
- the first digital image can be captured by a camera, including a webcam or a computation device integrated camera.
- the computation device can be a mobile device including, but not limited to, a tablet computer, laptop computer, and smartphone.
- the digital image can be a digital file stored in the computer system memory.
- the digital image can be an image of at least one anatomical feature of the subject body, including the subject face.
- the image capturing device and image displaying device are the same device.
- the anatomical measurements can be two-dimensional coordinates of the anatomical feature or three-dimensional coordinates of the anatomical feature.
- a method for producing an image predictive of cosmetic or medical treatment outcome in a subject comprising acquiring a first digital image of the subject prior to cosmetic or medical treatment, calculating a first data set of anatomical measurements for the subject, comparing the first data set to a second data set comprising previously existing anatomical measurements obtained from prior cosmetic or medical treatment outcomes, and modifying the first digital image based upon the comparison to the second data set and said first data set to provide a second digital image predictive of the subject following the cosmetic or medical treatment.
- the second data set includes, but is not limited to, a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary.
- a segment of the digital image can be selected by a user.
- the anatomical measurements are two-dimensional coordinates of the anatomical feature or three-dimensional coordinates of the anatomical feature.
- the method may be enabled as a web application operating, for example and without limitation: (a) in the cloud; (b) as an application operating on computing devices such as, for example and without limitation, desktop computers, laptop computers, hand-held devices such as tablets, and so forth; or (c) as a mobile application operating on a mobile device such as, for example and without limitation, a smart phone such as an Android device.
- a visual representation of a treatment outcome is provided as a postprocedure image.
- the method is driven by clinically validated "severity" rating scales and clinically validated trial data or clinically validated meta data.
- resulting post-procedure images are medically credible.
- one or more further embodiments relate to methods for generating post-surgical procedure outcome images which indicate the duration of a post-procedure clinical effect along a time-course, for example and without limitation, along a time course set forth in one or more clinical trials.
- Fig. 8 shows computer device 810 used to provide one or more embodiments of apparatus 800.
- Computer 810 may be a desktop computer, a laptop computer, a hand-held device such as, for example and without limitation, a tablet, and a mobile device such as, for example and without limitation, a smart phone such as an Android device.
- Computer device 810 includes user input device 820, and display 830.
- User input device 820 may include a keyboard and/or a mouse and/or a touch screen for entering information.
- Display 830 may be a display such as a computer display or laptop display or a mobile device screen and so forth.
- computer device 810 may display images on a remote display, which remote display may be accessed by computer device 810 over a network such as, for example, the Internet using communications protocols that are well known to those of ordinary skill in the art.
- computer device 810 includes a memory for storing information and programs and a processing unit for executing software in the form of programs or applications.
- computer device 810 may include optional communications device(s) 840 to enable it to input data from various devices such as, for example and without limitation, disk drives, memory devices such as, for example and without limitation, “thumb drives,” and devices that enable communication over networks such as, for example, and without limitation, the Internet and hence, for example and without limitation, the "cloud.”
- computer device 810 may include image capture device 850 such as, for example and without limitation, a camera.
- computer device 810 may be a server in the "cloud" and user input device 820, display 830 and image capture device 850 may be contained on a web-enabled user device (i.e., a device that can interact with a server over the Internet) such as, for example and without limitation, a laptop computer, a tablet, a smartphone and so forth.
- Fig. 9 is a flowchart of a method for determining medical treatment outcomes in accordance with one or more embodiments.
- a prospective patient uses user input device 820 to input personal data such as, for example and without limitation, name, address and email address.
- computing device 810 of apparatus 800 acquires an image of a body part, for example and without limitation, a facial image of a prospective patient.
- the image may be acquired: (a) by requesting the prospective patient (i) to position his/herself appropriately with respect to image capture device 850, (ii) to use user input device 820 to cause the image capture device, for example and without limitation, a camera, to take a picture, for example and without limitation, of the prospective patient's face, and (iii) to upload the image using communications device(s) 840; or (b) by requesting the prospective patient to upload the image using communications device(s) 840.
- computer 810 of apparatus 800 executes software that adjusts the image.
- the software includes algorithms to center, position and adjust the lighting of the image to provide a standard size and appearance, which adjustments may be useful in facilitating identification of specific treatment areas for plastic and/or reconstructive surgeries and/or cosmetic dermatological treatments.
- the standard appearance is provided by, for example and without limitation, scaling the distance between the eyes to a predetermined dimension.
- Software which includes algorithms to center, position and adjust the lighting of the displayed image is publicly available as open source software.
- An example of such open source software is GIMP software ("Gnu Image Manipulation Program" which is available at http://www.gimp.org/).
- computer device 810 causes the adjusted image to be displayed on display 830 in accordance with any one of a number of methods that are well known to those of ordinary skill in the art.
- computer device 810 executes software which includes facial recognition algorithms that identify and locate anatomical features.
- anatomical metrics such as, for example and without limitation, one or more of: (a) distance and angle between anatomical features; (b) feature contours (for example, lines); (c) pixilation (i.e., shading); (d) feature texture (i.e., smoothness of lines and shading across a given area); (e) feature coloration (for example, coloration using a red, green and blue scale); and so forth.
- Software which includes one or more facial recognition algorithms is publicly available as open source software.
- An example of one such open source software is Matlab software (available at
- the nasolabial fold may be identified as follows. First, one maps an area surrounding a straight line between the nose and the corner of the mouth. In one embodiment, the map is a pixel map of an area expected to contain the nasolabial fold. Next, one uses contrast in texture and shading in the area to identify the nasolabial fold, for example and without limitation, by comparing dark pixels in the cheek area in the pixel map (a sketch follows below).
- One of ordinary skill in the art can provide software to carry out these steps routinely and without undue experimentation. Further, in light of the above, it should be clear to one of ordinary skill in the art how to provide software to identify and locate areas which may be of interest for plastic and reconstructive surgery and for cosmetic dermatological treatments, such as the hairline, eyebrows, eyes, nose, corners of the mouth, wrinkles, cheeks, lips, chin, ears, neck, breasts, upper arms, lower arms, hands, abdomen, waist, buttocks, thighs, lower legs, feet, cellulite, scars, and skin, routinely and without undue experimentation.
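- A minimal sketch of that nasolabial fold mapping, with hypothetical feature coordinates and a synthetic image; pixels well below the local cheek tone within the mapped rectangle are flagged as fold candidates.

```python
import numpy as np


def fold_pixels(gray: np.ndarray, nose: tuple, mouth_corner: tuple,
                margin: int = 10) -> np.ndarray:
    # Map a rectangle around the straight line from nose to mouth corner.
    x0 = max(min(nose[0], mouth_corner[0]) - margin, 0)
    x1 = min(max(nose[0], mouth_corner[0]) + margin, gray.shape[1])
    y0 = max(min(nose[1], mouth_corner[1]) - margin, 0)
    y1 = min(max(nose[1], mouth_corner[1]) + margin, gray.shape[0])
    region = gray[y0:y1, x0:x1].astype(float)
    # Pixels markedly darker than the region's median shading.
    return region < (np.median(region) - 15)


gray = np.full((300, 300), 160, dtype=np.uint8)
gray[140:180, 120:125] = 110                       # synthetic dark fold
mask = fold_pixels(gray, nose=(115, 130), mouth_corner=(135, 190))
```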
- computer device 810 executes software that causes a menu of names and/or visual icons of areas of interest (each of which areas of interest have been a subject of one or more clinically validated studies or trials relating to medical treatments) to be displayed using any one of a number of methods that are well known to those of ordinary skill in the art.
- the prospective patient selects (or specifies) an anatomical area for prospective treatment.
- computer device 810 executes software that causes the selected or specified anatomical area in the patient's image to be indicated on display 830 (for example and without limitation, using arrows or dotted lines and so forth), which software is well known and readily available to those of ordinary skill in the art.
- a clinically validated trial or study includes, for example and without limitation, an FDA-approved clinical study, summary clinical data from an FDA-approved Instruction for Use and a clinical study approved by local authorities such as, for example and without limitation, the European Union and Health Canada.
- the nasolabial fold of a prospective patient is compared with a Visual Analog Scale (VAS).
- The VAS is an FDA-approved wrinkle severity scale; Table 1 above and Fig. 5 show a clinically validated Visual Analog Scale (VAS) of the nasolabial fold.
- the VAS comprises five (5) images showing the nasolabial fold, and each image has a "severity" score ranging from 0 - 4 corresponding to the appearance of the nasolabial fold in the image.
- the software first compares the shading and texture of the nasolabial fold with surrounding tissues in the images from the VAS, and creates a VAS "scale library" of images of the nasolabial fold using linear expansion of the VAS images, which scale library may contain as many as fifty (50) graphic models (i.e., images) of a wrinkle, where each graphic model, i.e., image, has its own severity score (for example, VAS severity scores 0.1, 0.2, ...3.8, 3.9, 4.0).
- the software produces the VAS scale library of images by interpolating the images in the VAS using well known linear expansion methods to provide interpolated (i.e., morphed or simulated) images.
- the software compares the area of the nasolabial fold in the image input by the prospective patient with the graphic models, i.e., images in VAS scale library, to determine the "best-match.”
- the severity score of the "best match” image from the VAS scale library will be the "best match” severity score of the input image.
- software for comparison includes, for example and without limitation, "best match" algorithms, proportional matching algorithms, and other matching algorithms that are well known to those of ordinary skill in the art.
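- A toy sketch of the scale library and best-match step, using one-dimensional arrays as stand-ins for the five VAS reference images; real images would be matched over the fold region the same way.

```python
import numpy as np

# Five stand-in "images" for VAS grades 0..4 (darker fold per grade).
vas_images = [np.full(64, 160.0) - 12.0 * g for g in range(5)]


def scale_library(step: float = 0.1):
    """Interpolated (morphed) images for severity scores 0.0, 0.1, ..., 4.0."""
    lib = {}
    for score in np.arange(0.0, 4.0 + 1e-9, step):
        lo = int(min(score, 3.0))          # lower VAS grade (so lo + 1 exists)
        t = score - lo
        lib[round(float(score), 1)] = (1 - t) * vas_images[lo] + t * vas_images[lo + 1]
    return lib


def best_match(patient_region: np.ndarray, lib) -> float:
    """Severity score whose library image differs least from the patient."""
    return min(lib, key=lambda s: np.linalg.norm(lib[s] - patient_region))


lib = scale_library()
patient = np.full(64, 160.0) - 12.0 * 2.6          # simulated 2.6-grade fold
print(best_match(patient, lib))                    # -> 2.6
```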
- in step 280, computer device 810 executes software that causes a menu of names and/or visual icons of medical treatments for the specified anatomical area to be displayed using any one of a number of methods that are well known to those of ordinary skill in the art.
- in step 290, utilizing user input device 820, the prospective patient chooses a prospective medical treatment from a number of such treatments displayed on display 830.
- computer device 810 executes software that compares the "best match" severity score with one or more clinical trial summaries established in the clinically validated (for example and without limitation, FDA- approved) trial(s) to determine a "likely” or “most likely” treatment outcome and its duration, where the treatment outcome is based on the clinically validated trial data or "meta-data” (see the description of meta-data below).
- Table 1 above shows outcome data for an FDA approved injectable dermal filler (trade name "Juvederm 30HV”) to treat facial wrinkles and folds, for example, the nasolabial fold.
- the baseline VAS severity score of the patient is 2.6 (in the VAS severity scale, "0" indicates little or no wrinkle, and "4" indicates a severe wrinkle).
- the expected average VAS severity score is 0.5. This represents an improvement of 2.1 in the VAS severity scale.
- the proportional improvement is used to provide an image which displays the effects of the improvement achieved by this treatment. For example, if the input image of a prospective patient has a "best match" severity score of 3.0, the treatment is expected to provide an 81% improvement.
- Meta-data is data derived from clinically validated trial or study data. For example, if there are multiple products (i.e., treatments) that are approved to treat nasolabial folds, and if the prospective patient does not specify which product (treatment) he/she wants to use, the software will average clinical trial data from all the approved products, or from the most commonly used products, available for treatment of the specified area. For example, if product 1 results in an 80% improvement at week 2, product 2 results in an 83% improvement at the same time point, and product 3 results in an 84% improvement at the same point, the embodiment will use the average improvement score from these three products to simulate the treatment outcome.
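- The arithmetic of the two preceding paragraphs as a runnable check: the trial's baseline of 2.6 improving to 0.5 is applied proportionally to a patient scoring 3.0, and the meta-data case averages the three product improvements.

```python
trial_baseline, trial_post = 2.6, 0.5
improvement = (trial_baseline - trial_post) / trial_baseline   # ~0.81 (81%)

patient_score = 3.0
predicted = patient_score * (1 - improvement)                  # ~0.58

# Meta-data: average improvement across approved products when the
# prospective patient does not specify one.
meta_improvement = (0.80 + 0.83 + 0.84) / 3                    # ~0.823

print(round(improvement, 2), round(predicted, 2), round(meta_improvement, 3))
```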
- computer device 810 executes software that utilizes the "likely" or "most likely" treatment outcome as source data to produce a visual representation of the treatment outcome and to cause the visual representation to be displayed on display 830.
- the visual representation of the "most likely” treatment outcome is simulated by replacing the specified anatomical area (for example, the nasolabial fold (wrinkle) region) in the input image with the corresponding region of an image from the scale library having the severity score corresponding to the severity score of "most likely” treatment outcome.
- one embodiment simulates the duration of the effect of the treatment over time. In accordance with one such embodiment, this is provided as a series of images or as an animation showing changes to the input image.
- Fig. 10 shows data which reflects the duration of an effect of a volumizing treatment over time. The data displayed on the curve represents percentage improvement of the effect as a function of time.
- Computer device 810 executes software that uses this data to compute severity scores for the specified anatomical area after treatment over time.
- computer device 810 executes software that uses these scores to create images in the same manner described above for showing the "likely" outcome of a treatment, which images can be displayed, for example and without limitation, on display 830 as a slide show or as an animation and so forth.
- one embodiment displays images of the effect of treatment, for various treatment procedures (sometimes referred as products) side-by-side on display 830 so the prospective patient can see any differences that might occur (such side-by-side comparisons can also be over time).
- software will first create a best-fit curve (this can be created, for example and without limitation, using MS Excel Tools) to provide more time points, and then the duration effect is simulated as described above.
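- A sketch of that best-fit step using numpy in place of MS Excel Tools: fit a low-order polynomial to the published time points, then read off improvement, and hence severity, at intermediate weeks. The time points below are illustrative, not the data of Fig. 10.

```python
import numpy as np

weeks = np.array([2, 12, 24, 52, 78])
improvement = np.array([0.82, 0.75, 0.60, 0.35, 0.15])    # fraction improved

curve = np.poly1d(np.polyfit(weeks, improvement, deg=2))  # best-fit curve

baseline_score = 3.0
for w in range(2, 80, 8):
    severity = baseline_score * (1 - np.clip(curve(w), 0.0, 1.0))
    print(f"week {w}: severity {severity:.2f}")   # drives one animation frame
```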
- computer device 810 executes software that accesses a database to retrieve and display the names and contact information of, for example and without limitation, physicians who are board-certified to perform specific procedure(s) at the specified treatment area(s). Further, using user input device 820, the prospective patient selects a physician.
- computer device 810 executes software that sends the selected physician the "before-and-after images," and the list of selected treatment(s), for example and without limitation, by e-mail.
- the clinical trial or study data would be stored in a database accessible to computer device 810.
- the database may be a disk or other random access storage device connected to computer device 810, or it might be a database stored in remote storage (for example and without limitation, in the cloud) which is accessible by computer device 810 over a network such as, for example and without limitation, the Internet, or the database may be a combination of these.
- the medical treatment data would be stored in a manner similar to that described above.
- post-treatment images created for prospective patients may be stored in patient accounts in a patient database in a manner similar to that described above.
- Example 1 Selection of Cosmetic Process
- An alternative to Example A is where a clinical data set is not available, but outcomes can be measured from a physician database.
- Lower blepharoplasty, as illustrated in Fig. 6, is an example where a clinical dataset is not available from Regulatory Agencies such as the FDA, but we have obtained a large library of clinical data from physician libraries and from public sources, where we can curate the data and, in essence, repeat the steps in Example A, i.e., develop a scale of severity and measure the average improvement after treatment.
- Example B illustrates the synthesis step in process 335 of Fig. 4: a user "touches" or "clicks" on the brow to indicate his desire to see the result of a brow lift (such as the brow region provided in Fig. 7).
- the algorithm database, which includes physician before-and-after photos, indicates that the following areas are affected by a brow lift:
- the brows are lifted by an average of less than 0.5 inch, and the forehead wrinkles are smoothed out.
- the glabella wrinkles are smoothed out, and the Crows Feet wrinkles are smoothed out on the upper portion.
- Image warping is achieved by selecting the source polygon to be equal to the eyebrow contour returned by an FFD algorithm described above.
- The destination polygon is the same as the source polygon in the inner part, while in the outer part (the lateral side of the brow) it is gradually lifted up. This choice of source and destination polygons results in the lateral side of the brow being lifted up.
- glabella and crow's feet polygons returned by a FFD library described above were used.
- the outer border (usually the edge between the glabella area and the hair, or between the glabella area and the image background) was determined by detecting edges and then finding the most significant edge. Inpainting is then performed in these areas with a blending ratio against the original image of 0.9, which smooths the wrinkles away by 90%.
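The 0.9 blending step can be sketched as follows; cv2.inpaint is used here as a stand-in for the healing algorithm described later in this document, and the mask layout is assumed.

```python
# A minimal sketch of blending an inpainted (wrinkle-free) result back onto
# the original with a 0.9 ratio, so wrinkles are smoothed by roughly 90%.
# cv2.inpaint stands in for the healing algorithm described in the text.
import cv2
import numpy as np

def smooth_wrinkles(image_bgr, wrinkle_mask, ratio=0.9):
    """wrinkle_mask: uint8 mask, 255 inside the wrinkle region."""
    clean = cv2.inpaint(image_bgr, wrinkle_mask, 3, cv2.INPAINT_TELEA)
    blended = cv2.addWeighted(clean, ratio, image_bgr, 1.0 - ratio, 0)
    # Apply the blend only inside the wrinkle region.
    out = image_bgr.copy()
    out[wrinkle_mask > 0] = blended[wrinkle_mask > 0]
    return out
```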
- An example of the performed lateral brow lift procedure is given in Fig. 15.
- the system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match” severity score of the selected or specified anatomical area of the prospective patient's original image.
- a predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
- the implementation again combines image warping and inpainting techniques.
- Image warping is achieved by selecting the source polygon to be equal to the eyebrow contour returned by the FFD algorithm, while the destination polygon has the same shape as the source polygon but is shifted upward. This choice of source and destination polygons results in the eyebrows being lifted up.
- the system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match” severity score of the selected or specified anatomical area of the prospective patient's original image.
- a predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
- Glabella
- This visual effect uses the same technique as the lateral brow lift procedure, i.e., the glabella area polygon is computed in the FFD library and inpainting of the resulting area is performed with a clean skin patch.
- An example of a glabella procedure is given in Fig. 18.
- the system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match” severity score of the selected or specified anatomical area of the prospective patient's original image.
- a predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
- Marionette lines: this procedure is performed by injecting the filler material superficially at the marionette lines.
- the main effect of this is a reduction of wrinkles in the affected area.
- This procedure was simulated by computing the marionette area polygon in the FFD library and performing inpainting of this polygon with a clean skin patch.
- An example of a marionette lines procedure can be seen in Fig. 19.
- the system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match” severity score of the selected or specified anatomical area of the prospective patient's original image.
- a predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
- Nasolabial fold wrinkles are the wrinkles that "bracket" the mouth when one smiles. This procedure is performed by injecting the filler material superficially and/or sub-dermally at the nasolabial folds. This reduces the nasolabial fold wrinkles by 80%.
- the system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match” severity score of the selected or specified anatomical area of the prospective patient's original image.
- a predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
- This procedure reduces wrinkles in the crow's feet area.
- the procedure has already been described and illustrated, e.g., as part of the lateral brow lift procedure. The only difference is that this procedure is isolated to affect only the crow's feet area; nothing else should change.
- the system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image.
- a predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
- Laser Resurfacing causes a global change of the skin texture, color, elasticity, and tightness. This does not change the underlying bone and tissue structure.
- Results of this procedure are: all fine wrinkles are smoothed out by 90%; under-eye bags are tightened by 50%; skin color turns slightly pink; acne scars and other small scars are reduced by 90%; and brown age spots are reduced by 90%.
- the system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match” severity score of the selected or specified anatomical area of the prospective patient's original image.
- a predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
Abstract
A system is provided for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment, the system comprising a controller, an optional image input device, an optional image output device, and a computer program including a predictive analysis function for generating a digital image predictive of the subject based upon comparison of a first data set and a second data set. Methods are also provided for generating a predictive image of a subject resulting from a cosmetic or medical treatment.
Description
SYSTEMS AND METHODS FOR PRODUCING PREDICTIVE IMAGES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority from U.S. Provisional Application Serial No. 61/860,661 filed on July 31, 2013, which is incorporated herein by reference in its entirety.
FIELD
[0002] The present invention relates in general to systems and methods for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment.
INTRODUCTION
[0003] Many individuals all over the world seek to improve their physical appearance through the use of cosmetic treatment, dermatological treatment, and plastic surgery. From the patient's perspective, lack of knowledge about what medical aesthetic treatment can offer limits the individual's options, and lack of graphic representation limits the individual's ability to communicate with the physicians. Individuals have the desire to "design" their own face and body, under the guidance of medical principles and guidelines.
[0004] However, physicians may share before-and-after pictures of other patients with a prospective patient to enable him/her to get a sense of what the surgical outcome will be. Many prospective patients find such information to be unhelpful because the other patients look different from the prospective patient, and the surgical outcome for other patients may be irrelevant to the surgical outcome for the prospective patient.
[0005] In addition, many computer programs on the market provide facial and other anatomy recognition functions and "morph" or "warp" a digital image into another digital image based on predetermined criteria. Such software, however, does not provide a user an image predictive of the result of a cosmetic or medical treatment. It is based on computing algorithms without any clinical data to support clinical feasibility. In addition, individuals all over the world desire to make their own choices based on the personalized benefit to their appearance, the cost of the treatment, the potential downtime, and the risk of adverse events. Current morphing and warping software does not associate that information with the resulting digital image.
[0006] Therefore, what is needed are systems and methods for producing images predictive of a subject's appearance resulting from a specific cosmetic or medical treatment.
In addition, what is needed are systems and methods of associating clinical outcome data with the predictive image.
SUMMARY
[0007] The present teachings include a system for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment, the system comprising a controller, an optional image input device, an optional image output device, and a computer program disposed in the controller for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and a predictive analysis function for generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
[0008] In one aspect, the first digital image can be provided to the controller via a camera. In various other aspects, the camera can include a webcam or a computation device integrated camera.
[0009] In accordance with a further aspect, the controller is comprised by a mobile device including, but not limited to, a tablet computer, laptop computer, and smartphone. The first digital image can be provided to the controller using two-dimensional coordinates or using three-dimensional coordinates.
[0010] In another aspect, the second data set includes a parameters-based medical guideline, parameters-based surgical guideline, and/or a clinical trial summary.
[0011] In another embodiment, a system is provided for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment, the system comprising a controller comprising an algorithm for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
[0012] In yet another embodiment, a method is provided for generating a predictive image of a subject resulting from a cosmetic or medical treatment, the method comprising
acquiring a first digital image of the subject on a computer system prior to cosmetic or medical treatment, querying the subject to select at least one of an individual cosmetic or medical treatment from a listing of known cosmetic and medical treatments, and an anatomical feature of the subject depicted by the first digital image, collecting a first data set of anatomical measurements of the subject anatomical features obtained by evaluation of the first digital image, the first data set stored on the computer system, comparing the first data set to a second data set comprising known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment or anatomical feature of the subject depicted by the first digital image, wherein the second data set includes, but is not limited to, a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary related to the selected cosmetic or medical treatment, and generating a second digital image predictive of the result of the cosmetic or medical treatment in the subject based on the comparison between the first data set and second data set.
[0013] In certain aspects, the method further includes the act of visualizing the predictive image on an image displaying device. In another aspect, the method further includes the act of transmitting the predictive image to a physician skilled in the cosmetic or medical treatment. In another aspect, the method further includes the act of comparing the selected cosmetic or medical treatment to a third data set comprising a listing of physicians skilled in the cosmetic or medical treatment, and generating a subset of the listing based on a predetermined value, including the geographical location of the subject.
[0014] In yet another aspect, the first digital image is acquired using an image capturing device. The first digital image can be two-dimensional or three-dimensional. In another aspect, the first digital image can be captured by a camera, including a webcam or a computation device integrated camera. The computation device can be a mobile device including, but not limited to, a tablet computer, laptop computer, and smartphone. In a further aspect, the digital image can be a digital file stored in the computer system memory.
[0015] In another aspect, the digital image can be an image of at least one anatomical feature of the subject body, including the subject face. In further aspects, the image capturing device and image displaying device are the same device. In other aspects, the anatomical measurements can be two-dimensional coordinates of the anatomical feature or three-dimensional coordinates of the anatomical feature.
[0016] In another embodiment, a method is provided for producing an image predictive of cosmetic or medical treatment outcome in a subject, the method comprising acquiring a first digital image of the subject prior to cosmetic or medical treatment,
calculating a first data set of anatomical measurements for the subject, comparing the first data set to a second data set comprising previously existing anatomical measurements obtained from prior cosmetic or medical treatment outcomes, and modifying the first digital image based upon the comparison to the second data set and said first data set to provide a second digital image predictive of the subject following the cosmetic or medical treatment.
[0017] In various aspects, the second data set includes, but is not limited to, a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary. In further aspects, a segment of the digital image can be selected by a user. In yet other aspects, the anatomical measurements are two-dimensional coordinates of the anatomical feature or three-dimensional coordinates of the anatomical feature.
[0018] One embodiment is a method for determining medical treatment outcomes that: (a) acquires an image of a user through an image file upload or through a built-in camera in a computation device, such as a smartphone, or webcams connected to a computer; (b) displays an image of a user, also referred to herein as a prospective patient (the image could be an image of: a face, a body, a part of the body, or one or more anatomical features of the body); and (c) enables an anatomical area of the image and specific features in the area to be identified, which anatomical area has been a subject of cosmetic, dermatological, and plastic surgery treatment, and/or one or more clinically validated studies or trials relating to medical treatments. The method further includes: (a) allowing the user to point to ("touch") the anatomical areas that they wish to change, without knowing the specific name of the anatomical area; (b) determining a level of "severity" of a defect in the anatomical area of the image in light of data or a meta-data set from the one or more clinically validated studies or trials relating to medical treatments; and (c) allowing the user to visualize different potential treatment outcomes by selecting different treatment methods and/or clinical procedures. In accordance with one or more such embodiments, determining a level of severity comprises correlating the image of the identified anatomical area to a clinically validated scale of severity in the data or the meta-data set from the clinically validated studies or trials. In addition, and in accordance with one or more such
embodiments, determining likely treatment outcomes comprises providing a morphed image of the identified anatomical area based on the likely treatment outcome wherein the morphed image is provided using data or a meta data set from the one or more clinically validated studies or trials.
[0019] Another embodiment is a method for determining medical treatment outcomes that: (a) acquires an image of a user through an image file upload or through a built-in camera in a computation device, such as an IPHONE® or IPAD®, or webcams connected to a computer; (b) displays an image of a user, also referred to herein as a prospective patient (the image could be an image of a face, a body, a part of the body, or one or more anatomical features of the body); and (c) enables an anatomical area of the image and specific features in the area to be identified, which anatomical area has been a subject of cosmetic, dermatological, and plastic surgery treatment, and/or one or more clinically validated studies or trials relating to a treatment target of an approved drug, biologic, medical device or combination product. The method further includes: (a) determining a level of "severity" of a defect in the identified anatomical area of the image in light of data or a meta-data set from the one or more clinically validated studies or trials using the approved drug, biologic, medical device or combination product, and (b) determining one or more likely treatment outcomes for the prospective patient. In accordance with one or more such embodiments, determining a level of severity comprises correlating the image of the identified anatomical area to a clinically validated scale of severity in the data or the meta-data set from the clinically validated studies or trials. In addition, and in accordance with one or more such embodiments, determining likely treatment outcomes comprises providing a morphed image of the identified anatomical area based on the likely treatment outcome wherein the morphed image is provided using data or a meta-data set from the one or more clinically validated studies or trials.
[0020] These and other features, aspects and advantages of the present teachings will become better understood with reference to the following description, examples and appended claims.
DRAWINGS
[0021] Those of skill in the art will understand that the drawings, described below, are for illustrative purposes only. The drawings are not intended to limit the scope of the present teachings in any way.
[0022] Figure 1 is a conceptual diagram illustrating an example computing device for obtaining a cosmetic or medical or surgical treatment simulation.
[0023] Figure 2 is a conceptual diagram illustrating an example computing device with 3D image headset for obtaining a cosmetic or medical or surgical treatment simulation.
[0024] Figure 3 is a flowchart illustrating details of an example user for obtaining a cosmetic or medical or surgical treatment simulation.
[0025] Figure 4 is a flowchart illustrating details of the simulation process for an
example user for obtaining a cosmetic or medical or surgical treatment simulation.
[0026] Figure 5 is a depiction of severity scores for a nasolabial fold.
[0027] Figure 6 is a before and after representation of a surgical procedure.
[0028] Figure 7 is a conceptual diagram illustrating an example facial image that includes example facial anatomical areas that are subject to cosmetic, medical, and plastic surgery treatment.
[0029] Figure 8 is a block diagram illustrating details of an example computing device assembly for capturing human images, detecting and displaying treatment areas, and generating treatment outcome simulations.
[0030] Figure 9 is a flowchart illustrating a method for determining medical treatment outcomes in accordance with one or more embodiments.
[0031] Figure 10 shows published data on the duration of an effect of a volumizing treatment.
[0032] Figure 11 is a detailed block diagram illustrating details of an example computing device for capturing human images, detecting and displaying treatment areas, and generating treatment outcome simulations.
[0033] Figure 12 is a comparison between (a) Luxand FaceSDK feature points and (b) Stasm feature points.
[0034] Figure 13 is an example of located clean skin patches.
[0035] Figure 14 is a comparison between (a) wrinkle region example and (b) result image obtained by inpainting.
[0036] Figure 15 is a comparison between (a) original image and (b) simulation of lateral brow lift procedure.
[0037] Figure 16 is a comparison between (a) original image and (b) simulation of forehead lift procedure.
[0038] Figure 17 is a comparison between (a) original image and (b) simulation of lower blepharoplasty procedure.
[0039] Figure 18 is a comparison between (a) original image and (b) simulation of glabella procedure.
[0040] Figure 19 is a comparison between (a) original image and (b) simulation of marionette lines procedure.
[0041] Figure 20 is a comparison between (a) original image and (b) simulation of nasolabial folds procedure.
[0042] Figure 21 is a comparison between (a) original image and (b) simulation of
laser resurfacing procedure.
DETAILED DESCRIPTION
[0043] Abbreviations and Definitions
[0044] To facilitate understanding of the invention, a number of terms and abbreviations as used herein are defined below as follows:
[0045] Cosmetic Treatment: The term "cosmetic treatments" means any treatment performed and/or prescribed by a physician for the purpose of improving the patient's appearance, including but not limited to, treatment for acne, rosacea, scars, age spots, uneven skin tone, spider veins, unwanted hair, moles, birthmarks, wrinkles, dark coloration, and undesirable facial and body contours.
[0046] Medical Treatment: The term "medical treatment", which may also be referred to using the term "plastic surgery", includes any surgical procedure which modifies or improves the appearance of a physical feature, irregularity, or defect.
[0047] Controller: As used herein, the term "controller" refers to a computer system capable of executing application software including a processor, computer-readable memory, and data storage. In one configuration, the controller is an operator-programmable microcomputer that can connect to other computers wirelessly and to which system visual monitors, visual detectors (including cameras, such as a webcam or a smartphone integrated camera), and optical sensors (including MICROSOFT® KINECT® and three-dimensional headsets) are connected. The controller can be programmed either locally or remotely. In another configuration, it can be controlled in real time by an operator. An example of a controller is the "computer device" provided in Fig. 8 below.
[0048] Smartphone: As used herein, the term "smartphone" is broadly defined to include mobile and consumer devices which optionally communicate with the Internet, and which are programmable and configurable to provide the predictive images herein, such as IPHONE®, IPAD®, KINDLE®, FIRE®, and ANDROID® devices such as NEXUS® and GALAXY®.
[0049] Predictive Image: As used herein, the term "predictive image" means a morphed image of an identified anatomical area of a subject based on the likely cosmetic or medical treatment outcome. The morphed image is generated based on the comparison between a first data set of original anatomical coordinates, and a second data set comprising one or more clinically validated studies (e.g., a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary) related to a selected
cosmetic or medical treatment. The morphed image can also be correlated with a severity score based on the association of data or a meta data set from the one or more clinically validated studies.
[0050] Unless otherwise defined, all technical and scientific terms used herein have the common meaning understood by skilled artisans in the field to which this invention pertains. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments and is not intended to be limiting of the invention.
[0051] Systems and Methods for Predictive Visualization
[0052] Systems and methods are provided for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment. Such systems and methods provide a subject the ability to share the predictive images with their friends and family and doctors as well as to exchange information with other individuals regarding the best options for them and the experience after their treatments.
[0053] A system 100 is provided in Figs. 1 and 2, more fully described below, for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment. The system comprises various components which operate together to generate a predictive image.
[0054] Therefore, a system is provided for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment. The system can comprise a controller, an optional image input device, and an optional image output device.
[0055] The controller can be programmed for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and a predictive analysis function for generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
[0056] Optionally, the first digital image can be provided to the controller via a camera. In various aspects, the camera can include a webcam or a computation device integrated camera. In other aspects, the camera can be a three-dimensional sensor, e.g. a MICROSOFT® KINECT® or other three-dimensional sensor or display known in the art can be integrated in the system.
[0057] Fig. 1 depicts an example of the system 150 illustrating various components that may be utilized by a user to obtain predictive images of various cosmetic treatments and plastic surgery procedures. System 150 could include a smartphone, other mobile devices, or other computation devices, connected to the Internet through a wireless or wired connection. Component 152 is a built-in camera in this embodiment. It can be a standalone camera or other digital image-capturing device. It can also be removed if the software application is run exclusively on stored image files, either in the cloud or stored in the computation device 150. Controller 154 is a computation device capable of either running the simulation software by itself, or containing web browsers such as Internet Explorer, Google Chrome, and/or Firefox, so that device 154, through a website, can send the digital image to a remote server, receive and process instructions from the server, and display the resulting webpages to the user. Device 150 has a user interface so that users can select specific anatomic areas on the digital image and send instructions to a server.
[0058] Fig. 2 is a variation of the example system 150 where a 3D camera system 156 replaces the traditional 2D camera system as a way to capture the original digital image. The controller 154 of the system can be comprised by a mobile device including a tablet computer, laptop computer, and smartphone. The first digital image of the system can therefore be provided to the controller using two-dimensional coordinates or using three- dimensional coordinates.
[0059] The system can produce images predictive of a subject's appearance resulting from cosmetic or medical treatment. The controller can comprise an algorithm for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
[0060] Fig. 3 is a flow chart illustrating the use of the system. Process 300 may begin when a subject interested in cosmetic treatment and/or plastic surgery seeks to improve his or her appearance. The subject may first start by uploading a digital image of the subject as illustrated in 302, taking a photograph with a built-in camera or three-dimensional sensor of a computation device, as illustrated in 304, or using a previously created image of the subject saved in computer memory, as illustrated in 306.
[0061] The captured image is pre-processed in 310. The image may be corrected for insufficient quality, low lighting, blur, wrong orientation, partial framing, and other deficiencies. The system may also display an error message to the user pointing out the deficiencies and suggesting ways to correct them, or it may display a message asking the user to upload another image or re-take the photograph. In the pre-processing step 310, the system will also make adjustments to the size of the digital image to prepare the image for further analysis and manipulation.
[0062] Once the image is accepted according to pre-programmed parameters, the system in 320 will process the image and display the anatomic areas that can be targeted for cosmetic or medical treatment. The specific anatomic areas, such as the subject's face, can be identified by the user in step 330 through a touch screen, a mouse, or other devices which may interact with the displayed image on the screen. Based on the user selection, the system will generate the likely treatment outcome simulation and display it side-by-side with the original image in process 340. The detailed process and architecture of step 340 is described in detail in Fig. 4.
[0063] Once the subject has selected the predictive simulation of the selected cosmetic and plastic surgery procedures, the subject may proceed to choose a physician, as illustrated in process 350. The physicians may be provided by the system based on variables associated with the subject and the physician. Such variables can include geographical location, the cosmetic or medical procedure, the subject's budget, the subject's ethnicity, the subject's sex, the subject's age, the subject's health, and so on. Prior to selecting a physician practitioner, the subject may desire to conduct more research on the treatment itself, as provided by the system, and as illustrated in process 352, and/or on the specific medical products used for the selected procedures, as illustrated in process 354. The user may also choose to save the original and simulation images and related information in an electronic album, which may be an Internet commerce subroutine, as illustrated in process 356.
[0064] If the subject selects a specific physician practitioner, the subject may send a request for office visit through process 360. They may also conduct additional research on the physician practitioner using the system, as illustrated in step 362, before sending a request for office visit. This process may be integrated with the physician's online calendar and enable the user to schedule the office visit directly.
[0065] Producing Predictive Images
[0066] Fig. 4 is a flow chart illustrating the predictive visualization simulation aspect of the system. Using the system, a subject may achieve a medically realistic prediction for the individual subject. The system obtains a qualified image from process 310. The system further identifies the potential medical treatment areas of the image in process 320. A non-limiting example of the treatment areas is illustrated in Fig. 7.
[0067] Once the specific anatomic features, which are targets of cosmetic or medical procedures, are selected and displayed as an image to the user, the user has the option of (1) "touching" or "pointing and clicking" on the anatomic area, or a dot/circle representing the anatomic area, illustrated in process 331, or (2) selecting specific cosmetic or medical procedures, which is illustrated in process 332.
[0068] Process 334 defines the specific cosmetic or medical procedures which have an impact on the target anatomic area, and the system then queries a medical and/or clinical database to define the predictive image simulation algorithms. The simulation outcome is displayed in step 340, side-by-side with the original image. The Examples below illustrate this aspect of the system.
[0069] System Architecture
[0070] The system architecture of various embodiments can be accomplished in a series of modules:
[0071] In one non-limiting example, a Python interface is provided so that the web front end (the subject) can invoke image processing operations on the web server. The main procedures that can be invoked are the following:
a) specifying which cosmetic surgery procedure should be simulated,
b) image loading,
c) image pre-processing,
d) image processing and saving, and
e) accessing locations of key points on the face.
All modules and platforms above may be cross-platform (e.g., WINDOWS®, LINUX®, IOS®, OSX®, and ANDROID®) and can be programmed in any suitable programming language (e.g., C++, JAVA, PHP, SQL, PERL, PASCAL, SWIFT, and PYTHON).
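As one hedged illustration, a Python interface over these five operations might look like the sketch below; every class, method, and backend name here is hypothetical, not the actual module API.

```python
# A minimal sketch (all names hypothetical) of a Python interface exposing the
# five operations listed above to a web front end.
class SimulationSession:
    def __init__(self, backend):
        self.backend = backend          # wrapper around the backend library
        self.procedure = None
        self.image = None

    def set_procedure(self, name):
        """(a) Specify which cosmetic surgery procedure should be simulated."""
        self.procedure = name

    def load_image(self, path):
        """(b) Image loading."""
        self.image = self.backend.load(path)

    def preprocess(self):
        """(c) Image pre-processing (size, orientation, lighting)."""
        self.image = self.backend.preprocess(self.image)

    def process_and_save(self, out_path):
        """(d) Run the selected simulation and save the result."""
        result = self.backend.simulate(self.image, self.procedure)
        self.backend.save(result, out_path)

    def key_points(self):
        """(e) Access locations of key points on the face."""
        return self.backend.facial_feature_points(self.image)
```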
[0072] Backend Cross-Platform Libraries
[0073] A library is provided that implements all functionality of the system, i.e., all cosmetic surgery procedures. It implements high-level functionalities such as loading, pre-processing, and processing of images, and specifying which cosmetic surgery procedures should be simulated. Exemplary procedures are provided in the Examples below.
[0074] Tables 1 through 6 provide non-limiting examples of parameters-based clinical trial summaries.
Table 1. JUVEDERM 30HV vs. Control: Independent Expert Reviewer's NLF Severity Scores
Table 2. BELOTERO BALANCE vs. Control
Table 3. BOTOX vs. Control
The co-primary efficacy endpoints were the investigator's rating of glabellar line severity at maximum frown and the subject's global assessment of change in appearance of glabellar lines, both at Day 30 post-injection. For the investigator rating, using a 4-point grading scale (0=none, 3=severe), a responder was defined as having a severity grade of 0 or 1. For the subject's global assessment of change, the ratings were from +4 (complete improvement) to -4 (very marked worsening). A responder was defined as having a grade of at least +2 (moderate improvement). After completion of the randomized studies, subjects were offered participation in an open-label, repeat-treatment study to assess the safety of repeated treatment sessions.
Table 4. Subject's Assessment of Change in Appearance of Glabellar Lines - Responders (% and Number of Subjects with at Least Moderate Improvement), BOTOX vs. Control
Table 5. Investigator's and Subject's Assessment - Responder Rates for Subjects <65 and ≥65 Years of Age at Day 30, BOTOX vs. Control
Table 6. Studies 1 and 2: Composite Investigator and Subject Assessment of LCL at Maximum Smile at Day 30 - Responder Rates (% and Number of Subjects Achieving ≥2-Grade Improvement from Baseline)
[0075] Those of skill in the art will recognize medical guidelines and data provided thereby (e.g., medical guidelines are available at the National Guideline Clearinghouse: www.guideline.gov). Parameters-based surgical guidelines are also available to those of skill in the art (e.g., evidence-based practice guidelines available at the American Society of Plastic Surgeons: www.plasticsurgery.org). A library is provided comprising the data.
[0076] Fig. 5 provides an example of the severity scores applied to nasolabial folds, and predictive images produced visualizing such severity scores.
[0077] A library is also provided that serves as a gateway between the software modules above and third party facial feature detection (FFD) libraries. It provides a universal interface for accessing points of, e.g., the eyes, eyebrows, nose, and other face features, no matter which third party facial feature detection library is used. In addition, it enables dynamic creation of a third party FFD library instance, and testing whether a face is found in the image. Those of skill in the art will recognize that similar libraries that detect other parts of the anatomy can be provided and used to produce predictive images of those anatomical parts.
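One way such a gateway could be structured is sketched below; the registry and adapter names are hypothetical and the backend is a stub, since the actual wrappers for Luxand FaceSDK and Stasm are not reproduced in the text.

```python
# A minimal sketch (all names hypothetical) of a gateway that exposes one
# universal interface regardless of which third party FFD library is in use,
# and supports dynamic creation of a backend instance by name.
class FFDGateway:
    _backends = {}

    @classmethod
    def register(cls, name, backend_cls):
        cls._backends[name] = backend_cls

    @classmethod
    def create(cls, name):
        return cls._backends[name]()    # dynamic backend instantiation

class StubBackend:
    """Placeholder adapter; a real one would wrap Luxand FaceSDK or Stasm."""
    def face_found(self, image):
        return True                     # would call the SDK's face detector

    def points(self, image, feature):
        # Would translate universal names ("eyebrow", "nose", ...) into the
        # SDK-specific feature point indices and return (x, y) coordinates.
        return []

FFDGateway.register("stub", StubBackend)
ffd = FFDGateway.create("stub")
print(ffd.face_found(None), ffd.points(None, "eyebrow"))
```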
[0078] Another library can be provided that performs eyebrow lifting functionality with the use of image warping algorithms known to those of skill in the art. It can include two image warping algorithms: Poisson coordinates and cubic mean value coordinates.
[0079] Another library can be provided that implements inpainting functionality which is mainly used for the removal of wrinkles. It includes two inpainting algorithms: Poisson blending and healing.
[0080] Another library can be provided that performs filtering of images with the purpose of reducing fine wrinkles, age spots, etc.
[0081] Another library can be provided with utilities that are commonly needed in other libraries, such as working with strings and parsing command line parameters. It can also contain geometry algorithms for working with points, lines, and angles. The library can also extend the functionality of other libraries for working with polygons.
[0082] A variety of third party libraries known to those of skill in the art can be used in the system. These include OpenCV, Luxand Face SDK, Stasm, Clipper, and OpenBlas.
[0083] Algorithms
[0084] Facial Feature Detection
[0085] The purpose of facial feature detection (FFD) algorithms is to find the coordinates of facial feature points in an image. This usually includes eyes, eye contours, eyebrows, lip contours, nose tip, and so on. Two third party libraries can be used to detect facial features: Luxand FaceSDK (Luxand Inc., Alexandria VA) and Stasm (See Active Shape Models with SIFT Descriptors and MARS. Milborrow, Stephen and Nicolls, Fred. 2, 2014, VISAPP, Vol. 1).
[0086] Stasm is an open source library which uses active shape model (ASM; See Active Shape Models - 'Smart Snakes'. T.F.Cootes, C.J.Taylor. 1992. British Machine Vision Conference), to detect facial features.
[0087] The ASM starts the search for landmarks from the mean training face shape aligned to the position and size of the image face determined by a global face detector. It then repeats the following two steps until convergence:
(i) Suggest a new shape by adjusting the current positions of the landmarks. To do this at each landmark it samples image patches in the neighborhood of the landmark's current position. The landmark is then moved to the center of the patch which best matches the landmark's model descriptor. (The landmark's model descriptor is generated during model training prior to the search.)
(ii) Conform the suggested shape to a global shape model. This pools the results of the individual matchers and corrects points that are obviously mis-positioned. The shape model is necessary because each matcher sees only a small portion of the face and cannot be completely reliable.
[0088] The entire search is repeated at each level in an image pyramid, typically four levels from coarse to fine resolution.
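The two-step search and the image pyramid can be summarized in the following schematic sketch; the `model` object's methods are hypothetical placeholders for a trained shape model rather than any specific library's API.

```python
# A schematic sketch of the ASM search loop described above. The `model`
# methods (initial_shape, best_match, conform) are hypothetical placeholders
# for a trained shape model, not a specific library's API.
import numpy as np

def asm_search(image_pyramid, model, max_iters=30, tol=0.5):
    """image_pyramid: list of images ordered from coarse to fine."""
    shape = model.initial_shape(image_pyramid[0])   # mean shape, aligned
    for level, img in enumerate(image_pyramid):     # coarse -> fine
        for _ in range(max_iters):
            # (i) Suggest: move each landmark to the center of the patch that
            # best matches that landmark's model descriptor.
            suggested = np.array([model.best_match(img, level, i, pt)
                                  for i, pt in enumerate(shape)])
            # (ii) Conform: project the suggestion onto the global shape
            # model, correcting obviously mis-positioned points.
            new_shape = model.conform(suggested)
            converged = np.linalg.norm(new_shape - shape) < tol
            shape = new_shape
            if converged:
                break
    return shape
```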
[0089] Luxand FaceSDK returns the coordinates of 66 facial feature points, while Stasm returns 77 points, as can be seen in Fig. 12. The main differences between Luxand FaceSDK and Stasm are: (1) Stasm returns points around the eyebrow, while Luxand FaceSDK returns only points in the middle of the eyebrow; (2) Luxand FaceSDK returns locations of smile lines (nasolabial folds), while Stasm does not; and (3) Stasm returns more points from the head contour, while Luxand FaceSDK only returns points from the chin.
[0090] Point locations returned by Luxand FaceSDK and Stasm are only slightly different. One example is when points are on the eye contour. With Stasm, the points are closer to the eye center than with Luxand. Different offsets can be used, however, depending on which FFD algorithm is currently used.
[0091] Contours around facial areas that are targets for cosmetic and medical
procedures can be provided. Those of skill in the art will recognize that other anatomical areas can be contoured similarly. See, e.g., Fig. 7.
[0092] Image Warping
[0093] Image warping is the process of digitally manipulating an image such that shapes portrayed in the image are distorted. As used herein, the term "warping" means that original pixels are mapped to new coordinates without changing the colors. This can be achieved by defining an interpolation function that maps original coordinates to new coordinates. An image warping operation can be used to simulate cosmetic and medical procedures, such as the eyebrow lift among other procedures. This can be accomplished by defining a closed-form interpolation that produces natural-looking functions while allowing flexible control of boundary constraints. In such a way, the warping affects only the area close to the target (eyebrows in an eyebrow lift procedure, for example) and not other facial features.
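A minimal sketch of warping as coordinate re-mapping is given below. A Gaussian falloff is used here purely for illustration; it is a simple stand-in for the closed-form interpolants (cubic mean value and Poisson coordinates) discussed next, and the region parameters are assumptions.

```python
# A minimal sketch of warping as coordinate re-mapping: output pixels inside
# a target region sample displaced source coordinates; the rest are identity.
# The Gaussian falloff confines the effect to the area near the target.
import cv2
import numpy as np

def lift_region(image, center_xy, radius=40, lift_px=10):
    h, w = image.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    cx, cy = center_xy
    # Smooth weight: 1 at the center, near 0 beyond `radius`.
    weight = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * (radius / 2) ** 2))
    # Output pixel (x, y) samples the source at (x, y + lift_px * weight),
    # which moves content upward near the center and leaves the rest alone.
    map_x = xs
    map_y = ys + lift_px * weight
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```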
[0094] A variety of image warping algorithms can be used including cubic mean value coordinates (See Cubic mean value coordinates. Li, Xian-Ying, Ju, Tao and Hu, Shi- Min. 4, 2013, ACM Transactions on Graphics (TOG) - SIGGRAPH 2013 Conference, Vol. 32) and Poisson Coordinates (See Poisson Coordinates. Li, Xian-Ying and Hu, Shi-Min. 2, 2013, IEEE Transactions on Visualization and Computer Graphics, Vol. 19, pp. 344-352).
[0095] Cubic mean value (CMV) coordinates allow shape deformation using curved cage networks. The CMV interpolant was first introduced by Michael S. Floater and Christian Schulz (See Pointwise radial minimization: Hermite interpolation on arbitrary domains. Floater, Michael S and Schulz, Christian. 2008. Computer Graphics Forum). This method was later extended (See Li, Xian-Ying. Cubic Mean Value Coordinates. 2013. http://cg.cs.tsinghua.edu.cn/people/~xianying/Papers/CubicMVCs/index.html) by discovering a new connection between this interpolant and the classical mean value interpolant, re-deriving it using certain mean value properties of biharmonic functions. Closed-form expressions for cubic mean value coordinates are also provided in the 2D discrete case. CMV coordinates are introduced for interpolating Hermite constraints over a 2D polygonal boundary. The coordinates have closed forms and yield natural interpolations that exactly match values and gradients expressed as cubic and linear functions over each edge. Their utility is demonstrated in two applications, shape deformation and vector image representation.
[0096] As in mean value coordinates (and many other coordinates that rely on
Euclidean distances), a boundary constraint may have an impact on distant locations when interpolating with CMV coordinates in a non-convex polygon. However, biharmonic coordinates have the advantage of producing more "shape-aware" functions, since the impact of a boundary constraint is propagated only through the interior of the domain.
[0097] Poisson coordinates are a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. Explicit formulae for Poisson coordinates in both continuous and 2D discrete forms can be provided. Poisson coordinates are pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls).
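As background (not stated in this document), both schemes extend the classical mean value interpolant of Floater, which for a query point x strictly inside a polygon with vertices v_1, ..., v_n can be written as:

```latex
% Classical mean value coordinates (Floater 2003): weights for a query
% point x strictly inside a polygon with vertices v_1, ..., v_n.
\[
  w_i = \frac{\tan(\alpha_{i-1}/2) + \tan(\alpha_i/2)}{\lVert v_i - x \rVert},
  \qquad
  \lambda_i = \frac{w_i}{\sum_{j=1}^{n} w_j},
  \qquad
  f(x) = \sum_{i=1}^{n} \lambda_i \, f(v_i),
\]
% where \alpha_i is the angle at x in the triangle (v_i, x, v_{i+1}).
```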
[0098] Inpainting
[0099] Inpainting of wrinkles can be described in two steps:
[0100] Step 1: Detection of clean skin patches within the face region. See Fig. 13. The system can detect rectangular regions on the forehead, chin, and cheeks where the skin does not contain any wrinkle or hair. For this purpose an edge detector is used which is sensitive to small edges (described in Edge extraction and enhancement with coordinate logic filters using block represented images. Mertzios, Basil G., et al. 1995. Proceedings of the 12th European Conference on Circuit Theory and Design (ECCTD '95)), and then the maximal rectangles which do not contain any edges are selected; an integral image, or summed area table, approach is used to speed up rectangle selection (see, e.g., Integral Images. http://computersciencesource.wordpress.com/2010/09/03/computer-vision-the-integral-image). A matrix is created corresponding to the edges mask, where a cell is set to a very small negative value if the corresponding pixel value in the edges mask is not 0; otherwise the cell value is set to 0. The rectangle with the longest perimeter whose cell values sum to zero in the integral image of the matrix corresponds to a mask which does not contain any edges. An example of detected clean skin patches is illustrated in Fig. 13.
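The summed-area-table screening described above can be sketched as follows; the toy edge mask is hypothetical.

```python
# A minimal sketch of the summed-area-table trick used above: with an
# integral image of the edge-penalty matrix, the sum over any rectangle costs
# O(1), so edge-free (zero-sum) rectangles can be screened quickly.
import numpy as np

def integral_image(m):
    return m.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Inclusive rectangle sum via four integral-image lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# Edge mask -> penalty matrix: negative where an edge pixel is present.
edges = np.zeros((6, 8), dtype=np.int64)
edges[2, 3] = 1                         # a single hypothetical edge pixel
penalty = np.where(edges != 0, -1, 0)
ii = integral_image(penalty)
print(rect_sum(ii, 0, 0, 1, 7) == 0)    # True: rows 0-1 contain no edges
print(rect_sum(ii, 0, 0, 5, 7) == 0)    # False: includes the edge pixel
```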
[0101] Step 2: Blending. For each wrinkle region we define a source region S, a mask M, and an overlay image O. Here the overlay image is obtained from Step 1. The blending mask M is obtained from the wrinkles mask according to the wrinkle type. The source region S is a rectangular region of the source image corresponding to the bounding rectangle of mask M. A healing algorithm can be used as described by Xian-Ying Li (Mixed-Domain Edge-Aware Image Manipulation. IEEE Transactions on Image Processing, 2013, 22(5): 1915-1925. http://cg.cs.tsinghua.edu.cn/people/~xianying/Papers/MixedDomain/index.html) to blend overlay image O onto source region S, using mask M. In case the skin patches are not big enough to cover the whole wrinkle region, the procedure defined in Step 2.1 can be used.
[0102] Step 2.1: Skin patch enlargement. Clean skin patches can be enlarged using a tiling technique with random rotations to avoid undesired pattern effects. The same healing algorithm as described in Step 2 can be used for tiling. Fig. 14 is an example of the wrinkle region and the result image obtained by the described algorithm (before and after in panels (a) and (b), respectively).
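A minimal sketch of the blending step is given below. OpenCV's seamlessClone implements Poisson blending, one of the two inpainting algorithms named earlier; it is used here as a stand-in for the cited healing algorithm, and the argument layout is an assumption of this sketch.

```python
# A minimal sketch of the blending step using OpenCV's Poisson blending
# (seamlessClone) in place of the cited healing algorithm: overlay O is
# blended onto source S under mask M, centered at center_xy in the source.
import cv2

def blend_patch(source_bgr, overlay_bgr, mask, center_xy):
    """mask: uint8, 255 where overlay pixels should replace the source."""
    return cv2.seamlessClone(overlay_bgr, source_bgr, mask, center_xy,
                             cv2.NORMAL_CLONE)
```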
[0103] Mixed Domain Filter
[0104] A mixed domain filter can be used for laser resurfacing procedure simulation to produce predictive images. The filter can be based on the Xian-Ying Li algorithm and its implementation (see citations above). The algorithm can perform edge-aware processing (processing of edge regions to decrease blurring effects) instead of ordinary blurring. The general algorithm can be implemented to perform recursive procedures in both the time and frequency domains at different image resolutions. Suitable algorithm parameters can be chosen to enable a high-quality laser resurfacing simulation.
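A hedged sketch of such an edge-aware pass follows; cv2.bilateralFilter is a simple stand-in for the cited mixed-domain filter, and the strength and color-shift values are assumptions chosen to mirror the effects listed in the laser resurfacing example above.

```python
# A minimal sketch of an edge-aware smoothing pass for the laser resurfacing
# simulation. cv2.bilateralFilter is a simple stand-in for the mixed-domain
# filter cited above: it smooths fine texture while preserving strong edges.
import cv2
import numpy as np

def simulate_resurfacing(image_bgr, strength=0.9):
    smoothed = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=60, sigmaSpace=9)
    out = cv2.addWeighted(smoothed, strength, image_bgr, 1.0 - strength, 0)
    # Slight pink shift described in the text: nudge the red channel up.
    out = out.astype(np.int16)
    out[..., 2] = np.clip(out[..., 2] + 6, 0, 255)   # BGR: index 2 is red
    return out.astype(np.uint8)
```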
[0105] Methods
[0106] A method is provided for generating a predictive image of a subject resulting from a cosmetic or medical treatment, the method comprising acquiring a first digital image of the subject on a computer system prior to cosmetic or medical treatment, querying the subject to select at least one of an individual cosmetic or medical treatment from a listing of known cosmetic and medical treatments, and an anatomical feature of the subject depicted by the first digital image, collecting a first data set of anatomical measurements of the subject anatomical features obtained by evaluation of the first digital image, the first data set stored on the computer system, comparing the first data set to a second data set comprising known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment or anatomical feature of the subject depicted by the first digital image, wherein the second data set includes, but is not limited to, a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary related to the selected cosmetic or medical treatment, and generating a second digital image predictive of the result of the cosmetic or medical treatment in the subject based on the comparison between the first data set and second data set.
[0107] In certain aspects, the method further includes the act of visualizing the predictive image on an image displaying device. In another aspect, the method further includes the act of transmitting the predictive image to a physician skilled in the cosmetic or medical treatment. In another aspect, the method further includes the act of comparing the selected cosmetic or medical treatment to a third data set comprising a listing of physicians skilled in the cosmetic or medical treatment, and generating a subset of the listing based on a predetermined value, including the geographical location of the subject.
[0109] In yet another aspect, the first digital image is acquired using an image capturing device. The first digital image can be two-dimensional or three-dimensional. In another aspect, the first digital image can be captured by a camera, including a webcam or a computation device integrated camera. The computation device can be a mobile device including, but not limited to, a tablet computer, laptop computer, and smartphone. In a further aspect, the digital image can be a digital file stored in the computer system memory.
[0109] In another aspect, the digital image can be an image of at least one anatomical feature of the subject body, including the subject face. In further aspects, the image capturing device and image displaying device are the same device. In other aspects, the anatomical measurements can be two-dimensional coordinates of the anatomical feature or three-dimensional coordinates of the anatomical feature.
[0110] In another embodiment, a method is provided for producing an image predictive of cosmetic or medical treatment outcome in a subject, the method comprising acquiring a first digital image of the subject prior to cosmetic or medical treatment, calculating a first data set of anatomical measurements for the subject, comparing the first data set to a second data set comprising previously existing anatomical measurements obtained from prior cosmetic or medical treatment outcomes, and modifying the first digital image based upon the comparison to the second data set and said first data set to provide a second digital image predictive of the subject following the cosmetic or medical treatment.
[0111] In various aspects, the second data set includes, but is not limited to, a parameters-based medical guideline, a parameters-based surgical guideline, and a clinical trial summary. In further aspects, a segment of the digital image can be selected by a user. In yet other aspects, the anatomical measurements are two-dimensional coordinates of the anatomical feature or three-dimensional coordinates of the anatomical feature. One or more embodiments relate to methods for determining medical treatment outcomes. In particular, one embodiment is a method and apparatus for generating post-surgical procedure outcome images. In accordance with one or more such embodiments, the method may be enabled as a
web application operating, for example and without limitation: (a) in the cloud; (b) as an application operating on computing devices such as, for example and without limitation, desktop computers, laptop computers, hand-held devices such as tablets, and so forth; or (c) as a mobile application operating on a mobile device such as, for example and without limitation, a smart phone such as an Android device. Further, in accordance with one or more such embodiments, a visual representation of a treatment outcome is provided as a postprocedure image. Still further, in accordance with one or more such embodiments, and as will be described below, the method is driven by clinically validated "severity" rating scales and clinically validated trial data or clinically validated meta data. As such, it is believed that resulting post-procedure images are medically credible. In addition, one or more further embodiments relate to methods for generating post-surgical procedure outcome images which indicate the duration of a post-procedure clinical effect along a time-course, for example and without limitation, along a time course set forth in one or more clinical trials.
[0112] Fig. 8 shows computer device 810 used to provide one or more embodiments of apparatus 800. Computer 810 may be a desktop computer, a laptop computer, a hand-held device such as, for example and without limitation, a tablet, and a mobile device such as, for example and without limitation, a smart phone such as an Android device. Computer device 810 includes user input device 820, and display 830. User input device 820 may include a keyboard and/or a mouse and/or a touch screen for entering information. Display 830 may be a display such as a computer display or laptop display or a mobile device screen and so forth. In accordance with one or more further embodiments, computer device 810 may display images on a remote display, which remote display may be accessed by computer device 810 over a network such as, for example, the Internet using communications protocols that are well known to those of ordinary skill in the art. In addition, computer device 810 includes a memory for storing information and programs and a processing unit for executing software in the form of programs or applications. In further addition, computer device 810 may include optional communications device(s) 840 to enable it to input data from various devices such as, for example and without limitation, disk drives, memory devices such as, for example and without limitation, "thumb drives," and devices that enable communication over networks such as, for example, and without limitation, the Internet and hence, for example and without limitation, the "cloud." In further addition, computer device 810 may include image capture device 850 such as, for example and without limitation, a camera. In accordance with further embodiments, computer device 810 may be a server in the "cloud" and user input device 820, display 830 and image capture device 850 may be contained on a web-enabled user device
(i.e., a device that can interact with a server over the Internet) such as, for example and without limitation, a laptop computer, a tablet, a smartphone and so forth.
[0113] Fig. 9 is a flowchart of a method for determining medical treatment outcomes in accordance with one or more embodiments. At optional step 200, a prospective patient (subject) uses user input device 820 to input personal data such as, for example and without limitation, name, address and email address. Then, at step 210, computer device 810 of apparatus 800 acquires an image of a body part, for example and without limitation, a facial image of a prospective patient. The image may be acquired: (a) by requesting the prospective patient (i) to position his/herself appropriately with respect to image capture device 850, (ii) to use user input device 820 to cause image capture device 850, for example and without limitation, a camera, to take a picture, for example and without limitation, of the prospective patient's face, and (iii) to upload the image using communications device(s) 840; or (b) by requesting the prospective patient to upload the image using communications device(s) 840.
[0114] Next, as indicated in Fig. 9, at step 220, computer device 810 of apparatus 800 executes software that adjusts the image. The software includes algorithms to center, position and adjust the lighting of the image to provide a standard size and appearance, which adjustments may be useful in facilitating identification of specific treatment areas for plastic and/or reconstructive surgeries and/or cosmetic dermatological treatments. In accordance with one or more embodiments, the standard appearance is provided by, for example and without limitation, scaling the distance between the eyes to a predetermined dimension. Software which includes algorithms to center, position and adjust the lighting of the displayed image is publicly available as open source software; an example is GIMP ("GNU Image Manipulation Program," available at http://www.gimp.org/). Then, computer device 810 causes the adjusted image to be displayed on display 830 in accordance with any one of a number of methods that are well known to those of ordinary skill in the art.
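By way of illustration only, the following minimal Python sketch shows one way to carry out the interocular scaling adjustment described above; OpenCV, the fixed 512x400 canvas, the eye-midpoint target position, and the 120-pixel target distance are all assumptions made for the example, and the eye coordinates are assumed to come from a prior detection pass.

```python
# A minimal sketch of the standardization step, assuming eye-center
# coordinates have already been located. OpenCV, the canvas size and
# the 120 px target distance are illustrative, not required.
import cv2
import numpy as np

def standardize_face(image, left_eye, right_eye, target_eye_dist=120.0):
    """Scale the image so the interocular distance equals a
    predetermined dimension, then center the eyes on a fixed canvas."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    scale = target_eye_dist / np.hypot(rx - lx, ry - ly)
    # Scale about the eye midpoint, then move that midpoint to a
    # standard position on the 512x400 output canvas.
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0
    M = np.float32([[scale, 0.0, 256.0 - scale * cx],
                    [0.0, scale, 160.0 - scale * cy]])
    return cv2.warpAffine(image, M, (512, 400))
```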
[0115] Next, as indicated in Fig. 9, at step 230, computer device 810 executes software which includes facial recognition algorithms that identify and locate anatomical features. Note that, in general, methods for identifying and locating anatomical features are based on anatomical metrics such as, for example and without limitation, one or more of: (a) distance and angle between anatomical features; (b) feature contours (for example, lines); (c) pixelation (i.e., shading); (d) feature texture (i.e., smoothness of lines and shading across a given area); (e) feature coloration (for example,
coloration using a red, green and blue scale); and so forth. Software which includes one or more facial recognition algorithms is publicly available. One example is MATLAB's face-recognition tooling (available at
http://www.mathworks.com/discovery/face-recognition.html), which software can be used to identify and locate the eyes, the nose and the corners of the mouth. Then, using these initial anatomical locations, one can identify and locate, for example, the nasolabial fold as follows. First, one maps an area surrounding a straight line between the nose and the corner of the mouth. In one embodiment, the map is a pixel map of an area expected to contain the nasolabial fold. Next, one uses contrast in texture and shading in the area to identify the nasolabial fold, for example and without limitation, by comparing dark pixels in the cheek area in the pixel map. One of ordinary skill in the art can provide software to carry out these steps routinely and without undue experimentation. Further, in light of the above, it should be clear to one of ordinary skill in the art how to provide software to identify and locate areas which may be of interest for plastic and reconstructive surgery and for cosmetic
dermatological treatments, such as the hairline, eyebrows, eyes, nose, corners of the mouth, wrinkles, cheeks, lips, chin, ears, neck, breasts, upper arms, lower arms, hands, abdomen, waist, buttocks, thighs, lower legs, feet, cellulite, scars, and skin, routinely and without undue experimentation.
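As one concrete, non-limiting rendering of the nasolabial fold localization described above, the following sketch builds a pixel map around the nose-to-mouth line and flags pixels darker than the surrounding cheek tissue; the band half-width and darkness threshold are illustrative assumptions, and the landmark coordinates are assumed to come from the facial recognition step.

```python
# A sketch of the fold localization: a band around the straight line
# from the nose to the corner of the mouth is mapped, and dark pixels
# within it are flagged. Thresholds are illustrative assumptions.
import numpy as np

def locate_fold_pixels(gray, nose, mouth_corner, half_width=12):
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    (x0, y0), (x1, y1) = nose, mouth_corner

    # Perpendicular distance of every pixel from the nose-to-mouth line
    # (a production version would also clip the band to the segment).
    line_len = np.hypot(x1 - x0, y1 - y0)
    dist = np.abs((y1 - y0) * xs - (x1 - x0) * ys
                  + x1 * y0 - y1 * x0) / line_len
    band = dist < half_width

    # Within the band, the fold shows up as pixels darker than the
    # surrounding cheek tissue.
    threshold = gray[band].mean() - gray[band].std()
    return band & (gray < threshold)
```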
[0116] Next, as indicated in Fig. 9, at step 240, computer device 810 executes software that causes a menu of names and/or visual icons of areas of interest (each of which has been the subject of one or more clinically validated studies or trials relating to medical treatments) to be displayed using any one of a number of methods that are well known to those of ordinary skill in the art. Next, as indicated in Fig. 9, at step 250, utilizing user input device 820, the prospective patient selects (or specifies) an anatomical area for prospective treatment. Next, as indicated in Fig. 9, at step 260, computer device 810 executes software that causes the selected or specified anatomical area in the patient's image to be indicated on display 830 (for example and without limitation, using arrows or dotted lines and so forth), which software is well known and readily available to those of ordinary skill in the art.
[0117] Next, as indicated in Fig. 9, at step 270, computer device 810 executes software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. In accordance with one or more
embodiments, a clinically validated trial or study includes, for example and without limitation, an FDA-approved clinical study, summary clinical data from an FDA-approved Instruction for Use, and a clinical study approved by local authorities such as, for example and without limitation, the European Union and Health Canada. In one such embodiment, the nasolabial fold of a prospective patient is compared with a Visual Analog Scale (VAS). The VAS is an FDA-approved wrinkle severity scale, and Table 1 above and Fig. 5 show a clinically validated VAS of the nasolabial fold. As shown in Fig. 5, the VAS comprises five (5) images showing the nasolabial fold, and each image has a "severity" score ranging from 0 - 4 corresponding to the appearance of the nasolabial fold in the image. In this embodiment, the software first compares the shading and texture of the nasolabial fold with surrounding tissues in the images from the VAS, and creates a VAS "scale library" of images of the nasolabial fold using linear expansion of the VAS images, which scale library may contain as many as fifty (50) graphic models (i.e., images) of a wrinkle, where each graphic model, i.e., image, has its own severity score (for example, VAS severity scores 0.1, 0.2, ... 3.8, 3.9, 4.0). In other words, the software produces the VAS scale library of images by interpolating the images in the VAS using well known linear expansion methods to provide interpolated (i.e., morphed or simulated) images. Then, the software compares the area of the nasolabial fold in the image input by the prospective patient with the graphic models, i.e., the images in the VAS scale library, to determine the "best match." The severity score of the "best match" image from the VAS scale library will be the "best match" severity score of the input image. In accordance with one or more embodiments, software for comparison includes, for example and without limitation, "best match" algorithms, proportional matching algorithms and other matching algorithms that are well known to those of ordinary skill in the art.
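A minimal sketch of the scale-library construction and best-match search is given below; it assumes the five VAS reference images are float arrays registered to the same standard geometry as the patient image, and uses the 0.1 score step from the example above. Mean squared error stands in for the matching algorithms mentioned, purely as an assumption.

```python
# A sketch of building the interpolated VAS "scale library" and of the
# "best match" search, assuming registered float-valued VAS images.
import numpy as np

def build_scale_library(vas_images, step=0.1):
    """Linearly interpolate between adjacent VAS reference images
    (severity scores 0..4) into (score, image) pairs."""
    library = []
    for s in np.arange(0.0, 4.0 + step, step):
        lo = min(int(s), 3)        # lower reference index
        t = s - lo                 # blend weight toward the upper image
        img = (1 - t) * vas_images[lo] + t * vas_images[lo + 1]
        library.append((round(float(s), 1), img))
    return library

def best_match_score(patient_region, library):
    """Severity score of the library image closest (least squares)
    to the patient's fold region."""
    errors = [np.mean((patient_region - img) ** 2) for _, img in library]
    return library[int(np.argmin(errors))][0]
```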
[0118] Next, as indicated in Fig. 9, at step 280, computer device 810 executes software that causes a menu of names and/or visual icons of medical treatments for the specified anatomical area to be displayed using any one of a number of methods that are well known to those of ordinary skill in the art. Next, as indicated in Fig. 9, at step 290, utilizing user input device 820, the prospective patient chooses a prospective medical treatment from a number of such treatments displayed on display 830.
[0119] Next, as indicated in Fig. 9, at step 300, computer device 810 executes software that compares the "best match" severity score with one or more clinical trial summaries established in the clinically validated (for example and without limitation, FDA-approved) trial(s) to determine a "likely" or "most likely" treatment outcome and its duration,
where the treatment outcome is based on the clinically validated trial data or "meta-data" (see the description of meta-data below). As an example, Table 1 above shows outcome data for an FDA-approved injectable dermal filler (trade name "Juvederm 30HV") used to treat facial wrinkles and folds, for example, the nasolabial fold. As indicated in Table 1, before treatment, the baseline VAS severity score of the patient is 2.6 (in the VAS severity scale, "0" indicates little or no wrinkle, and "4" indicates a severe wrinkle). Using the data shown in Table 1, after patients are treated, at week 2, the expected average VAS severity score is 0.5. This represents an improvement of 2.1 on the VAS severity scale. Thus, for a prospective patient starting with a severity score of 2.6, proportionally, the treatment provides an improvement of 2.1/2.6 = 81%. In accordance with one or more embodiments, the proportional improvement is used to provide an image which displays the effects of the improvement achieved by this treatment. For example, if the input image of a prospective patient has a "best match" severity score of 3.0, the treatment is expected to provide an 81% improvement. In particular, this means that the treatment would be expected to result, at week 2, in a VAS severity score of 0.6. In accordance with one embodiment, the above-described method is extended to a "meta-data" set as well. Meta-data is data derived from clinically validated trial or study data. For example, if there are multiple products (i.e., treatments) that are approved to treat nasolabial folds, and if the prospective patient does not specify which product (treatment) he/she wants to use, the software will average clinical trial data from all the approved products, or from the most commonly used products, available for treatment of the specified area. For example, if product 1 results in an 80% improvement at week 2, product 2 results in an 83% improvement at the same time point, and product 3 results in an 84% improvement at the same time point, the embodiment will use the average improvement score from these three products to simulate the treatment outcome.
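The proportional-improvement arithmetic above reduces to a few lines of code; the sketch below restates it together with the meta-data averaging, using the figures from the worked examples in this paragraph.

```python
# The proportional-improvement arithmetic from the paragraph above.
def predicted_score(best_match, baseline, post):
    """With baseline 2.6 and week-2 score 0.5, improvement = 2.1/2.6
    = 81%, so a best-match score of 3.0 predicts 3.0 * (1 - 0.81)
    = 0.57, i.e., roughly 0.6."""
    improvement = (baseline - post) / baseline
    return best_match * (1.0 - improvement)

# The meta-data averaging used when no specific product is chosen,
# e.g., (0.80 + 0.83 + 0.84) / 3 = 0.823.
def meta_improvement(product_improvements):
    return sum(product_improvements) / len(product_improvements)
```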
[0120] Next, as indicated in Fig. 9, at step 310, computer device 810 executes software that utilizes the "likely" or "most likely" treatment outcome as source data to produce a visual representation of the treatment outcome and to cause the visual
representation to be displayed on display 830. The visual representation of the "most likely" treatment outcome is simulated by replacing the specified anatomical area (for example, the nasolabial fold (wrinkle) region) in the input image with the corresponding region of the scale-library image whose severity score matches that of the "most likely" treatment outcome.
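One way to realize this replacement step is sketched below; OpenCV's seamlessClone is assumed here as a readily available blending primitive, since the specification does not mandate a particular compositing method.

```python
# A sketch of the replacement step: the masked anatomical region of the
# patient image is replaced with the corresponding region of the
# scale-library image for the predicted score. All images are assumed
# to be 8-bit BGR and registered to the same standard geometry.
import cv2
import numpy as np

def simulate_outcome(patient_img, library_img, region_mask):
    """region_mask: uint8 mask, 255 inside the anatomical region."""
    ys, xs = np.where(region_mask > 0)
    center = (int(xs.mean()), int(ys.mean()))
    return cv2.seamlessClone(library_img, patient_img,
                             region_mask, center, cv2.NORMAL_CLONE)
```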
[0121] Where published data relating to the duration of the effect of a treatment exists, one embodiment simulates the duration of the effect of the treatment over time. In
accordance with one such embodiment, this is provided as a series of images or as an animation showing changes to the input image. Fig. 10 shows data which reflects the duration of an effect of a volumizing treatment over time. The data displayed on the curve represents percentage improvement of the effect as a function of time. Computer device 810 executes software that uses this data to compute severity scores for the specified anatomical area after treatment over time. Then, computer device 810 executes software that uses these scores to create images in the same manner described above for showing the "likely" outcome of a treatment, which images can be displayed, for example and without limitation, on display 830 as a slide show or as an animation and so forth.
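Reusing the replacement step sketched earlier, the time-course rendering can be outlined as follows; the duration curve is assumed to be a list of (week, improvement) pairs read off published data such as that of Fig. 10.

```python
# A sketch of the time-course simulation, assuming `curve` holds
# (week, improvement_fraction) pairs from published duration data,
# `library` is the (score, image) scale library built earlier, and
# simulate_outcome() is the replacement step sketched above.
def time_course_frames(best_match, curve, library, patient_img, mask):
    frames = []
    for week, improvement in curve:
        score = best_match * (1.0 - improvement)
        # Nearest library entry to the computed post-treatment score.
        _, lib_img = min(library, key=lambda p: abs(p[0] - score))
        frames.append((week, simulate_outcome(patient_img, lib_img, mask)))
    return frames
```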
[0122] In addition, using the above-described methods, one embodiment displays images of the effects of treatment, for various treatment procedures (sometimes referred to as products), side-by-side on display 830 so the prospective patient can see any differences that might occur (such side-by-side comparisons can also be over time). In one or more such embodiments, where treatment results are only available at a few selected points in time, the software will first create a best-fit curve (this can be created, for example and without limitation, using MS Excel tools) to provide more time points, and then the duration effect is simulated as described above.
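Where only a few time points are published, the best-fit curve can equally be computed programmatically; the sketch below uses a quadratic fit via NumPy as one alternative to the spreadsheet tools named above, and the data points shown are illustrative, not taken from any trial.

```python
# A sketch of densifying sparse trial time points with a best-fit
# curve. The weeks and improvement values are illustrative only.
import numpy as np

weeks = np.array([2.0, 12.0, 24.0, 36.0])
improvement = np.array([0.81, 0.70, 0.55, 0.35])

coeffs = np.polyfit(weeks, improvement, deg=2)   # quadratic best fit
dense_weeks = np.linspace(weeks[0], weeks[-1], 35)
dense_improvement = np.polyval(coeffs, dense_weeks)
```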
[0123] Different physicians, having different skills, may produce different treatment effects in their patients. Hence, in accordance with one or more embodiments, as an alternative to using clinical trial or study data, data from a patient library of a specific physician is used to create physician-specific treatment outcome simulations, using the data from the physician library in the same manner as the clinical trial data was used.
[0124] In recent years, improvements in computational power have allowed physicists, mathematicians, and video game producers to develop human images that are more and more life-like by incorporating the laws of physics and mathematics. For example, Dr. Ronald P. Fedkiw at Stanford University has developed and published a set of physics-based image morphing software libraries. In accordance with one or more embodiments, the software to provide the visual representation uses the above-identified software libraries. In addition, there have been various commercial applications that morph an original image into a separate, unrelated target image, which applications may be used to implement the above-described embodiments.
[0125] Next, as indicated in Fig. 9, at step 320, optionally, computer device 810 executes software that accesses a database to retrieve and display the names and contact information of, for example and without limitation, physicians who are board-certified to perform specific procedure(s) at the specified treatment area(s). Further, using user input device 820, the prospective patient selects a physician.
[0126] Next, as indicated in Fig. 9, at step 330, optionally, computer device 810 executes software that sends the selected physician the "before-and-after images," and the list of selected treatment(s), for example and without limitation, by e-mail.
[0127] In accordance with one or more embodiments, the clinical trial or study data would be stored in a database accessible to computer device 810. As such, the database may be a disk or other random access storage device connected to computer device 810, or it might be a database stored in remote storage (for example and without limitation, in the cloud) which is accessible by computer device 810 over a network such as, for example and without limitation, the Internet, or the database may be a combination of these. Similarly, the medical treatment data would be stored in a manner similar to that described above. Further, in accordance with one or more embodiments, post-treatment images created for prospective patients may be stored in patient accounts in a patient database in a manner similar to that described above.
[0128] Embodiments of the present invention described above are exemplary, and many changes and modifications may be made to the description set forth above by those of ordinary skill in the art while remaining within the scope of the invention.
[0129] EXAMPLES
[0130] Aspects of the present teachings may be further understood in light of the following examples, which should not be construed as limiting the scope of the present teachings in any way.
[0131] Example 1: Selection of Cosmetic Process
[0132] A user "touches" or "clicks" on the nasolabial fold wrinkles. Our algorithm database indicates that there are a number of filler products, such as Juvederm, Restylane, and Radiesse, which are approved by the FDA specifically to correct the wrinkles of the nasolabial folds. Companies that received FDA approval established a clinically validated scale, as illustrated in Table 1, which defines the severity of the nasolabial fold wrinkles. The companies then conduct clinical trials, the effectiveness of the treatment being illustrated in Table 1: an improvement from the baseline wrinkle score of 2.6 to 0.5 at week 2, an average improvement of 81%.
[0133] There are several simulation algorithms which can achieve an average improvement of 81%. In our present embodiment, we use a controlled inpainting technique and set the blending parameter to 81%.
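A minimal sketch of the controlled inpainting follows; OpenCV's TELEA inpainting is assumed as the fill method (the example above does not name one), with the blending parameter applied inside the wrinkle mask.

```python
# Controlled inpainting: fully inpaint the wrinkle region, then
# alpha-blend the result back over the original at the blending
# parameter, so blend=0.81 removes 81% of the wrinkle.
import cv2
import numpy as np

def controlled_inpaint(image, wrinkle_mask, blend=0.81):
    """image: 8-bit BGR; wrinkle_mask: uint8, 255 inside the fold."""
    filled = cv2.inpaint(image, wrinkle_mask, 5, cv2.INPAINT_TELEA)
    alpha = (wrinkle_mask[..., None].astype(np.float32) / 255.0) * blend
    out = (image.astype(np.float32) * (1.0 - alpha)
           + filled.astype(np.float32) * alpha)
    return out.astype(np.uint8)
```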
[0134] An alternative to Example 1 is where a clinical data set is not available, but outcomes can be measured from a physician database. Lower blepharoplasty, as illustrated in Fig. 6, is an example where a clinical data set is not available from regulatory agencies such as the FDA, but we have obtained a large library of clinical data from physician libraries and from public sources, from which we can curate the data and, in essence, repeat the steps of Example 1, i.e., develop a scale of severity and measure the average improvement after treatment.
[0135] Example B, the synthesis step in process 335 of Fig. 4: A user "touches" or "clicks" on the brow to indicate his desire to see the result of a brow lift (such as the brow region provided in Fig. 7). The algorithm database, which includes physician before-and-after photos, indicates that the following areas are affected by a brow lift: the brows are lifted by an average of less than 0.5 inch; the forehead wrinkles are smoothed out; the glabella wrinkles are smoothed out; and the crow's feet wrinkles are smoothed out on the upper portion.
[0136] Example 2 - Predictive Images of Surgical Procedures
[0137] Lateral Brow Lift
[0138] Visually, the lateral brow lift procedure results in the lateral side of the brow being lifted. The medial side of the brow stays in the same position. Further, the crow's feet wrinkles and glabella wrinkles are smoothed away by 90%.
[0139] A procedure was implemented by combining image warping and inpainting techniques. Image warping is achieved by selecting the source polygon to be equal to the eyebrow contour returned by an FFD algorithm described above. The destination polygon is the same as the source polygon in the inner (medial) part, while in the outer part (the lateral side of the brow) it is gradually lifted. This choice of source and destination polygons results in the lateral side of the brow being lifted.
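The warping step can be realized, for example, with a piecewise affine transform; the sketch below uses scikit-image for this purpose as an illustrative assumption, adding border anchor points so the rest of the image stays fixed, with a 15-pixel lift chosen purely for the example.

```python
# A sketch of the brow-lift warp, assuming `brow_contour` is an (N, 2)
# array of (x, y) eyebrow points returned by the FFD pass, ordered
# medial to lateral. scikit-image's piecewise affine transform and the
# 15 px lift are illustrative choices, not the specified method.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def lift_lateral_brow(image, brow_contour, lift_px=15):
    src = np.asarray(brow_contour, dtype=float)
    dst = src.copy()
    # Ramp the upward shift from 0 at the brow midpoint to lift_px at
    # the lateral end (image y grows downward, hence the subtraction).
    ramp = np.clip(np.linspace(-1.0, 1.0, len(src)), 0.0, 1.0)
    dst[:, 1] -= ramp * lift_px

    # Anchor the image borders so only the brow region moves.
    h, w = image.shape[:2]
    anchors = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1],
                        [w // 2, 0], [w // 2, h - 1],
                        [0, h // 2], [w - 1, h // 2]], dtype=float)
    # warp() maps output coordinates to input coordinates, so the
    # transform is estimated from destination points to source points.
    tform = PiecewiseAffineTransform()
    tform.estimate(np.vstack([dst, anchors]), np.vstack([src, anchors]))
    return warp(image, tform)
```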
[0140] To achieve reduction of wrinkles, the glabella and crow's feet polygons returned by the FFD library described above were used. In the case of the crow's feet polygon, the outer border (usually the edge between the glabella area and the hair, or between the glabella area and the image background) was determined by detecting edges and then finding the most significant edge. Inpainting is then performed in these areas with a blending ratio against the original image of 0.9, which smooths the wrinkles away by 90%. An example of a performed lateral brow lift procedure is given in Fig. 15.
[0141] The system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. A predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
[0142] Forehead Lift
[0143] The main visual effects of this procedure are: both eyebrows are lifted; the distance between the eyebrows and the hairline is reduced; and the wrinkles in the upper face, including the forehead, glabella and crow's feet, are all significantly reduced (by 90%).
[0144] The implementation again combines image warping and inpainting techniques. Image warping is achieved by selecting the source polygon to be equal to the eyebrow contour returned by the FFD algorithm, while the destination polygon has the same shape as the source polygon but is shifted upward. This choice of source and destination polygons results in the eyebrows being lifted.
[0145] To achieve reduction of wrinkles, the forehead, glabella and crow's feet polygons returned by the FFD library were used. Inpainting is then performed in these areas with a blending ratio against the original image of 0.9, which smooths the wrinkles away by 90%. In the case of the forehead area, edge detection is additionally performed in order to precisely detect fine wrinkles. Further, edge detection was applied at multiple image scales and the significant edges collected in order to deduce the hairline; the area above the hairline is excluded from smoothing. An example of a forehead lift procedure simulation is given in Fig. 16.
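The multi-scale edge accumulation used to deduce the hairline might be sketched as follows; the Canny thresholds, blur scales, and vote threshold are illustrative assumptions.

```python
# A sketch of multi-scale edge detection: run Canny at several blur
# scales and keep edges that persist across scales as "significant".
import cv2
import numpy as np

def significant_edges(gray):
    votes = np.zeros(gray.shape, dtype=np.uint16)
    for sigma in (1, 2, 4):                       # increasing scales
        blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
        votes += (cv2.Canny(blurred, 50, 150) > 0).astype(np.uint16)
    # Edges found at two or more scales are kept; the topmost run of
    # such edges can then be traced to deduce the hairline.
    return (votes >= 2).astype(np.uint8) * 255
```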
[0146] The system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. A predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
[0147] Lower Blepharoplasty
[0148] The lower blepharoplasty procedure results in changes to the under-eye regions, so that the under-eye volume and sagging, which cause the "baggy" appearance, are removed.
[0149] For this procedure, the under-eye regions were returned by the FFD library and inpainting was performed in this area. An example of lower blepharoplasty is given in Fig. 17.
[0150] The system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. A predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
[0151] Glabella
[0152] In this procedure the toxin is injected at the glabella and along the upper eyebrow in a droplet fashion. This results in reduced wrinkles in the glabella area.
[0153] This visual effect uses the same technique as the lateral brow lift procedure: the glabella area polygon was computed by the FFD library and inpainting of the resulting area was performed with a clean skin patch. An example of a glabella procedure is given in Fig. 18.
[0154] The system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. A predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
[0155] Marionette Lines
[0156] This procedure is performed by injecting the filler material superficially at the marionette lines. Its main effect is a reduction of wrinkles in the affected area.
[0157] This procedure was simulated by computing the marionette area polygon with the FFD library and performing inpainting of this polygon with a clean skin patch. An example of a marionette lines procedure can be seen in Fig. 19.
[0158] The system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity"
scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. A predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
[0159] Nasolabial Folds
[0160] Nasolabial fold wrinkles are the wrinkles that "bracket" the mouth when one smiles. This procedure is performed by injecting the filler material superficially and/or sub-dermally at the nasolabial folds. This reduces the nasolabial fold wrinkles by 80%.
[0161] This procedure is simulated using the nasolabial fold area polygons returned by the FFD library. Inpainting is then performed in these areas with a blending ratio against the original image of 0.8, which smooths the wrinkles away by 80%. An example of the nasolabial folds procedure can be seen in Fig. 20.
[0162] The system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. A predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
[0163] Crow's Feet
[0164] This procedure reduces wrinkles in the crow's feet area. The procedure has already been described and illustrated, e.g., as part of the lateral brow lift procedure. The only difference is that this procedure is isolated to affect only the crow's feet area; nothing else should change. The system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. A predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
[0165] Laser Resurfacing
[0166] The laser resurfacing procedure causes a global change in skin texture, color, elasticity, and tightness. It does not change the underlying bone and tissue structure. The results of this procedure are: all fine wrinkles are smoothed out by 90%; under-eye bags are tightened by 50%; the skin color turns slightly pink; acne scars and other small scars are reduced by 90%; and brown age spots are reduced by 90%.
[0167] This effect was simulated by first retrieving the approximate head contour from the FFD library and then applying a mixed-domain filter to the retrieved area. An example of the laser resurfacing procedure can be seen in Fig. 21.
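Since the cited mixed-domain edge-aware filter may not be readily at hand, the sketch below approximates the global resurfacing effect with OpenCV's bilateral filter; this substitution, the face mask, the slight pink shift, and all parameters are assumptions made for illustration.

```python
# A sketch of the global resurfacing effect. The bilateral filter
# stands in for the cited mixed-domain edge-aware filter; image is
# assumed to be 8-bit BGR and face_mask uint8 (255 inside the face).
import cv2
import numpy as np

def simulate_resurfacing(image, face_mask, strength=0.9):
    smoothed = cv2.bilateralFilter(image, 9, 60, 9)
    alpha = (face_mask[..., None].astype(np.float32) / 255.0) * strength
    out = (image.astype(np.float32) * (1.0 - alpha)
           + smoothed.astype(np.float32) * alpha)
    # Slight pink shift of the skin, as described above (BGR red
    # channel); a production version would confine this to the mask.
    out[..., 2] = np.clip(out[..., 2] * 1.03, 0, 255)
    return out.astype(np.uint8)
```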
[0168] The system described above can also execute software that compares the selected or specified anatomical area with clinically validated images having "severity" scores, which severity scores are obtained from a clinically validated trial or study, to determine a "best match" severity score of the selected or specified anatomical area of the prospective patient's original image. A predicted image showing the predicted results of the procedure, among other data, can then be provided to the subject according to the Methods section above.
[0169] Other Embodiments
[0170] The detailed description set forth above is provided to aid those skilled in the art in practicing the present invention. However, the invention described and claimed herein is not to be limited in scope by the specific embodiments herein disclosed, because these embodiments are intended as illustrations of several aspects of the invention. Any equivalent embodiments are intended to be within the scope of this invention. Indeed, various modifications of the invention in addition to those shown and described herein, which do not depart from the spirit or scope of the present inventive discovery, will become apparent to those skilled in the art from the foregoing description. Such modifications are also intended to fall within the scope of the appended claims.
[0171] References Cited
[0172] All publications, patents, patent applications and other references cited in this application are incorporated herein by reference in their entirety for all purposes to the same extent as if each individual publication, patent, patent application or other reference was specifically and individually indicated to be incorporated by reference in its entirety for all purposes. Citation of a reference herein shall not be construed as an admission that such is prior art to the present invention.
[0173] Other publications incorporated herein by reference in their entirety include:
[0174] FaceSDK. [Online] Luxand. http://www.luxand.com/facesdk/.
[0175] Active Shape Models with SIFT Descriptors and MARS. Milborrow, Stephen and Nicolls, Fred. 2, 2014, VISAPP, Vol. 1.
[0176] Active Shape Models - 'Smart Snakes'. Cootes, T.F. and Taylor, C.J. 1992. British Machine Vision Conference.
[0177] Cubic mean value coordinates. Li, Xian-Ying, Ju, Tao and Hu, Shi-Min. 4, 2013, ACM Transactions on Graphics (TOG), SIGGRAPH 2013 Conference, Vol. 32.
[0178] Poisson Coordinates. Li, Xian-Ying and Hu, Shi-Min. 2, 2013, IEEE Transactions on Visualization and Computer Graphics, Vol. 19, pp. 344-352.
[0179] Pointwise radial minimization: Hermite interpolation on arbitrary domains. Floater, Michael S and Schulz, Christian. 2008. Computer graphics forum.
[0180] Li, Xian-Ying. Cubic Mean Value Coordinates. [Online] 2013. http://cg.cs.tsinghua.edu.cn/people/~xianying/Papers/CubicMVCs/index.html.
[0181] Edge extraction and enhancement with coordinate logic filters using block represented images. Mertzios, Basil G., et al. 1995. Proceedings of the 12th European Conference on Circuit Theory and Design (ECCTD '95).
[0182] Integral Images. [Online] http://computersciencesource.wordpress.com/2010/09/03/computer-vision-the-integral-image.
[0183] Photoshop healing brush: a tool for seamless cloning. Georgiev, Todor. 2004. Workshop on Applications of Computer Vision (ECCV 2004).
[0184] Mixed-Domain Edge-Aware Image Manipulation. Li, Xian-Ying, et al. 22, 2013, IEEE Transactions on Image Processing, pp. 1915-1925.
[0185] Li, Xian-Ying. Mixed-Domain Edge-Aware Image Manipulation. [Online] 2014. http://cg.cs.tsinghua.edu.cn/people/~xianying/Papers/MixedDomain/index.html.
Claims
1. A system for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment, the system comprising:
a. a controller;
b. an optional image input device;
c. an optional image output device; and
d. a computer program disposed in the controller for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment, and a predictive analysis function for generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
2. The system of claim 1, wherein the first digital image is provided to the controller via the optional image input device, the optional image input device being a camera selected from the group consisting of a webcam and a computation device integrated camera.
3. The system of claim 1, wherein the controller is comprised by a mobile device selected from the group consisting of a tablet computer, laptop computer, and smartphone.
4. The system of claim 1, wherein the first digital image is provided using two-dimensional coordinates.
5. The system of claim 1, wherein the first digital image is provided using three-dimensional coordinates.
6. The system of claim 1, wherein the second data set is selected from the group consisting of a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary.
7. A system for producing images predictive of a subject's appearance resulting from cosmetic or medical treatment, the system comprising:
a controller comprising an algorithm for receiving a first digital image of a subject prior to cosmetic or medical treatment, generating a first data set obtained by evaluation of the first digital image, receiving a second data set representing known anatomical measurements from other subjects resulting from the selected cosmetic or
medical treatment, and generating a second digital image predictive of the subject based upon comparison of the first data set and second data set, the second digital image being available via the optional image output device.
8. A method for generating a predictive image of a subject resulting from a cosmetic or medical treatment, the method comprising:
a. acquiring a first digital image of the subject on a computer system prior to cosmetic or medical treatment;
b. querying the subject to select at least one of an individual cosmetic or medical treatment from a listing of known cosmetic and medical treatments, and an anatomical feature of the subject depicted by the first digital image;
c. collecting a first data set of anatomical measurements of the subject anatomical features obtained by evaluation of the first digital image, the first data set stored on the computer system;
d. comparing the first data set to a second data set comprising known anatomical measurements from other subjects resulting from the selected cosmetic or medical treatment or anatomical feature of the subject depicted by the first digital image, wherein the second data set is selected from the group consisting of a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary related to the selected cosmetic or medical treatment; and
e. generating a second digital image predictive of the result of the cosmetic or medical treatment in the subject based on the comparison between the first data set and second data set.
9. The method of claim 8, further providing the act of visualizing the predictive image on an image displaying device.
10. The method of claim 8, further providing the act of transmitting the predictive image to a physician skilled in the cosmetic or medical treatment.
11. The method of claim 8, further including comparing the selected cosmetic or medical treatment to a third data set comprising a listing of physicians skilled in the cosmetic or medical treatment, and generating a subset of the listing based on a predetermined value.
12. The method of claim 11, wherein the predetermined value is the geographical location of the subject.
13. The method of claim 8, wherein the first digital image is acquired using an
image capturing device.
14. The method of claim 8, wherein the first digital image is two-dimensional.
15. The method of claim 8, wherein the first digital image is three-dimensional.
16. The method of claim 8, wherein the first digital image is captured by a camera selected from the group consisting of a webcam, and a computation device integrated camera.
17. The method of claim 16, wherein the computation device is a mobile device selected from the group consisting of a tablet computer, laptop computer, and smartphone.
18. The method of claim 8, wherein the digital image is a digital file stored in the computer system memory.
19. The method of any of claims 8 to 18, wherein the digital image is an image of at least one anatomical feature of the subject body.
20. The method of claim 19, wherein the anatomical feature is the subject face.
21. The method of claim 9, wherein the image capturing device and image displaying device are the same device.
22. The method of claim 8, wherein the anatomical measurements are two-dimensional coordinates of the anatomical feature.
23. The method of claim 8, wherein the anatomical measurements are three-dimensional coordinates of the anatomical feature.
24. A method for producing an image predictive of cosmetic or medical treatment outcome in a subject, the method comprising:
a. acquiring a first digital image of the subject prior to cosmetic or
medical treatment;
b. calculating a first data set of anatomical measurements for the subject;
c. comparing the first data set to a second data set comprising previously existing anatomical measurements obtained from prior cosmetic or medical treatment outcomes;
d. modifying the first digital image based upon the comparison to the second data set and said first data set to provide a second digital image predictive of the subject following the cosmetic or medical treatment.
25. The method of claim 24, wherein the second data set is selected from the group consisting of a parameters-based medical guideline, parameters-based surgical guideline, and a clinical trial summary.
26. The method of claim 24, wherein a segment of the digital image is selected by a user.
27. The method of claim 24, wherein the anatomical measurements are two-dimensional coordinates of the anatomical feature.
28. The method of claim 24, wherein the anatomical measurements are three-dimensional coordinates of the anatomical feature.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201361860661P | 2013-07-31 | 2013-07-31 | |
| US61/860,661 | 2013-07-31 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2015017687A2 true WO2015017687A2 (en) | 2015-02-05 |
| WO2015017687A3 WO2015017687A3 (en) | 2015-03-26 |
Family
ID=52432579
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2014/049216 Ceased WO2015017687A2 (en) | 2013-07-31 | 2014-07-31 | Systems and methods for producing predictive images |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2015017687A2 (en) |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW552129B (en) * | 2003-01-20 | 2003-09-11 | Dung-Wu Liu | System for assisting cosmetic surgery |
| WO2007128117A1 (en) * | 2006-05-05 | 2007-11-15 | Parham Aarabi | Method. system and computer program product for automatic and semi-automatic modification of digital images of faces |
| US8843852B2 (en) * | 2010-12-17 | 2014-09-23 | Orca Health, Inc. | Medical interface, annotation and communication systems |
| US8891881B2 (en) * | 2012-01-25 | 2014-11-18 | General Electric Company | System and method for identifying an optimal image frame for ultrasound imaging |
-
2014
- 2014-07-31 WO PCT/US2014/049216 patent/WO2015017687A2/en not_active Ceased
Cited By (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3121744A1 (en) * | 2015-07-24 | 2017-01-25 | Persais, LLC | System and method for virtual treatments based on aesthetic procedures |
| US11055762B2 (en) | 2016-03-21 | 2021-07-06 | The Procter & Gamble Company | Systems and methods for providing customized product recommendations |
| WO2018052755A1 (en) * | 2016-09-19 | 2018-03-22 | L'oreal | Systems, devices, and methods for three-dimensional analysis of eyebags |
| CN110072433A (en) * | 2016-09-19 | 2019-07-30 | 莱雅公司 | The systems, devices and methods of three dimensional analysis for eye pouch |
| US10395099B2 (en) | 2016-09-19 | 2019-08-27 | L'oreal | Systems, devices, and methods for three-dimensional analysis of eyebags |
| US10621771B2 (en) | 2017-03-21 | 2020-04-14 | The Procter & Gamble Company | Methods for age appearance simulation |
| WO2018175357A1 (en) * | 2017-03-21 | 2018-09-27 | The Procter & Gamble Company | Methods for age appearance simulation |
| CN110326034A (en) * | 2017-03-21 | 2019-10-11 | 宝洁公司 | Method for the simulation of age appearance |
| US10614623B2 (en) | 2017-03-21 | 2020-04-07 | Canfield Scientific, Incorporated | Methods and apparatuses for age appearance simulation |
| US10818007B2 (en) | 2017-05-31 | 2020-10-27 | The Procter & Gamble Company | Systems and methods for determining apparent skin age |
| US10574883B2 (en) | 2017-05-31 | 2020-02-25 | The Procter & Gamble Company | System and method for guiding a user to take a selfie |
| US10839578B2 (en) * | 2018-02-14 | 2020-11-17 | Smarter Reality, LLC | Artificial-intelligence enhanced visualization of non-invasive, minimally-invasive and surgical aesthetic medical procedures |
| CN110348496A (en) * | 2019-06-27 | 2019-10-18 | 广州久邦世纪科技有限公司 | A kind of method and system of facial image fusion |
| CN110348496B (en) * | 2019-06-27 | 2023-11-14 | 广州久邦世纪科技有限公司 | A method and system for facial image fusion |
| WO2021115798A1 (en) | 2019-12-11 | 2021-06-17 | QuantiFace GmbH | Method and system to provide a computer-modified visualization of the desired face of a person |
| WO2021115797A1 (en) | 2019-12-11 | 2021-06-17 | QuantiFace GmbH | Generating videos, which include modified facial images |
| DE212020000466U1 (en) | 2019-12-11 | 2021-09-09 | QuantiFace GmbH | System for providing a computer modified visualization of a person's desired face |
| US11227424B2 (en) | 2019-12-11 | 2022-01-18 | QuantiFace GmbH | Method and system to provide a computer-modified visualization of the desired face of a person |
| DE212020000467U1 (en) | 2019-12-11 | 2022-03-09 | QuantiFace GmbH | Apparatus for providing video with a computer modified image of a desired person's face |
| US11341619B2 (en) | 2019-12-11 | 2022-05-24 | QuantiFace GmbH | Method to provide a video with a computer-modified visual of a desired face of a person |
| KR102539164B1 (en) * | 2022-12-19 | 2023-05-31 | 안성민 | Method and apparatus for lifting simulation using an artificial intelligence learning model |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2015017687A3 (en) | 2015-03-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2015017687A2 (en) | Systems and methods for producing predictive images | |
| EP3992919B1 (en) | Three-dimensional facial model generation method and apparatus, device, and medium | |
| US10860838B1 (en) | Universal facial expression translation and character rendering system | |
| JP5999742B2 (en) | System and method for planning flocking | |
| US9697635B2 (en) | Generating an avatar from real time image data | |
| CN104395929B (en) | Constructed using the incarnation of depth camera | |
| JP5400187B2 (en) | Method and apparatus for realistic simulation of wrinkle aging and deaging | |
| CN113628327B (en) | Head three-dimensional reconstruction method and device | |
| CN107516335A (en) | Graphics rendering method and device for virtual reality | |
| US9202312B1 (en) | Hair simulation method | |
| JPWO2018221092A1 (en) | Image processing apparatus, image processing system, image processing method, and program | |
| US10512321B2 (en) | Methods, systems and instruments for creating partial model of a head for use in hair transplantation | |
| CN109886144B (en) | Virtual trial sending method and device, computer equipment and storage medium | |
| US9877791B2 (en) | System and method for virtual treatments based on aesthetic procedures | |
| WO2013078404A1 (en) | Perceptual rating of digital image retouching | |
| US12333659B2 (en) | Systems and methods for displaying layered augmented anatomical features | |
| US20240046555A1 (en) | Arcuate Imaging for Altered Reality Visualization | |
| KR20190043925A (en) | Method, system and non-transitory computer-readable recording medium for providing hair styling simulation service | |
| Neog et al. | Interactive gaze driven animation of the eye region | |
| US10656722B2 (en) | Sensor system for collecting gestural data in two-dimensional animation | |
| KR20250125410A (en) | Virtual makeup try-on methods and devices | |
| US11645813B2 (en) | Techniques for sculpting digital faces based on anatomical modeling | |
| CN116543137A (en) | Method and system for dynamically defining facial focus area | |
| CN114742951B (en) | Material generation, image processing method, device, electronic device and storage medium | |
| CN111444979A (en) | Face-lifting scheme recommendation method, cloud device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14831703; Country of ref document: EP; Kind code of ref document: A2 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 14831703; Country of ref document: EP; Kind code of ref document: A2 |