WO2025232815A1 - Facial detection system and process for cosmetic and therapeutic treatment of a subject - Google Patents
Facial detection system and process for cosmetic and therapeutic treatment of a subject
Info
- Publication number
- WO2025232815A1 (PCT/CN2025/093338)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subject
- face
- dimensional
- facial
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/06—Radiation therapy using light
- A61N5/0613—Apparatus adapted for a specific treatment
- A61N5/0616—Skin treatment other than tanning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/06—Radiation therapy using light
- A61N2005/0626—Monitoring, verifying, controlling systems and methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/06—Radiation therapy using light
- A61N5/067—Radiation therapy using light using laser light
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present invention relates to the field of cosmetic, therapeutic and restorative treatment of the skin of a subject in need thereof, and more particularly to a process and system for cosmetic, therapeutic and restorative treatment of the skin of a subject in need thereof, in particular the face of a subject.
- Cosmetic, therapeutic and medical procedures have in the past typically been performed by a human being using hand-operated tools.
- Such tools typically include a laser-type device which is held by the hand of an operator and which is utilised to perform cosmetic, therapeutic and clinical procedures on the body of the patient.
- Cosmetic purposes may include removal of blemishes, removal of scarring, treatment of discoloration, removal of unsightly growths or defects, smoothing and evening of skin colour, skin wrinkle and defect restoration and the like.
- Therapeutic purposes may include restoration from damage such as from chemicals, light, trauma, accidents on a minor scale, or on a larger scale disfigurement from injury or trauma.
- Therapeutic purposes may also include the removal of lesions from the skin of a patient, cancer therapy, and removal of potential cancer initiation points such as moles or other defects on the body of the subject.
- the present invention provides a process of determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, said process including:
- the three-dimensional (3D) dataset may be a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
- the facial surface may be represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2] ... [xn, yn, zn] and wherein the x and y coordinates are representative of image pixel coordinates.
- the facial regions may include two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw.
- At least two three-dimensional (3D) images of the face of the subject may be acquired.
- the at least two three-dimensional (3D) images of the face of the subject may be overlapping.
- Three three-dimensional (3D) images of the face of the subject may be acquired.
- the three three-dimensional (3D) images of the face of the subject may be overlapping.
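The [x, y, z, c] point cloud format above can be illustrated with a short sketch. This is a hypothetical helper in Python/NumPy, not part of the claimed process, and the region codes are invented for the example:

```python
# Illustrative sketch only (not the claimed process): a minimal [x, y, z, c]
# point cloud in NumPy, where c is the facial region identification
# parameter. The region codes below are hypothetical.
import numpy as np

REGION_CODES = {"forehead": 0, "nose": 1, "left_cheek": 2, "right_cheek": 3}

def make_labelled_cloud(xyz: np.ndarray, region_ids: np.ndarray) -> np.ndarray:
    """Append a per-point region label to an (n, 3) cloud, giving (n, 4)."""
    assert xyz.shape[0] == region_ids.shape[0]
    return np.column_stack([xyz, region_ids.astype(xyz.dtype)])

xyz = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
labels = np.array([REGION_CODES["nose"], REGION_CODES["forehead"]])
cloud = make_labelled_cloud(xyz, labels)
print(cloud.shape)  # (2, 4)
```

Each row of `cloud` is then one [x, y, z, c] point as described in the claim language.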
- the present invention provides a process of determining one or more region of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure, said process including:
- the predetermined light type may be selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
- the at least one three-dimensional (3D) image of the face of the subject may be acquired in the presence of UV light, acquired in the presence of white light, or is acquired in the presence of orange light.
- the present invention provides a process of determining a treatment regimen for the face of a subject by a cosmetic or therapeutic procedure, said process including the steps of:
- the treatment regimen is preferably performed by a medical tool.
- the medical tool is preferably a medical laser.
- the power of the laser may be adjusted dependent upon the region of the face to which treatment is to be applied.
- the cosmetic or therapeutic procedure may be precluded from any region having a facial region identification parameter indicative of a skin disorder.
- the present invention provides a system for determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, said system comprising:
- an image acquisition device for acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three-dimensional data indicative of the surface topography of the face of the subject and includes two-dimensional (2D) data indicative of an optical colour image of the face of the subject;
- a processor for generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label;
- a pre-trained Artificial Intelligence (Al) engine for determining the facial regions of the face of the subject from the two-dimensional (2D) data, wherein the pre-trained Artificial Intelligence (Al) engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face;
- the processor assigns a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of three-dimensional (3D) dataset include a facial region identification parameter.
- the three-dimensional (3D) dataset is a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
- the three-dimensional (3D) dataset may be a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
- the facial surface may be represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2] ... [xn, yn, zn] and wherein the x and y coordinates are representative of image pixel coordinates.
- the facial regions may include two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw.
- At least two three-dimensional (3D) images of the face of the subject may be acquired.
- the at least two three-dimensional (3D) images of the face of the subject may be overlapping.
- Three three-dimensional (3D) images of the face of the subject may be acquired.
- the three three-dimensional (3D) images of the face of the subject may be overlapping.
- the present invention provides a system for determining one or more region of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure, said system comprising:
- an image acquisition device for acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three-dimensional data indicative of the surface topography of the face of the subject and includes two-dimensional (2D) data indicative of an optical colour image of the face of the subject;
- a processor for generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label;
- a pre-trained Artificial Intelligence (Al) engine for determining the facial regions of the face of the subject from the two-dimensional (2D) data, wherein the pre-trained Artificial Intelligence (Al) engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face; wherein the processor assigns a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of three-dimensional (3D) dataset include a facial region identification parameter, and
- the pre-trained Artificial Intelligence (Al) engine determines the presence of any skin disorders from the two-dimensional (2D) data of the at least one three-dimensional (3D) image of the face of the subject and the region of any skin disorder, wherein the pre-trained Artificial Intelligence (Al) engine is trained to identify skin disorders; and wherein the facial region identification parameter is assigned a skin condition parameter indicative of the skin disorder determined by the pre-trained Artificial Intelligence (Al) engine.
- the system may include one or more light sources that provide a predetermined light type selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
- the at least one three-dimensional (3D) image of the face of the subject is acquired in the presence of UV light, white light, or orange light.
- the present invention provides an automated system for providing treatment of a subject during a cosmetic or therapeutic procedure, said system comprising a system according to the fifth aspect, a processor, and a robotic arm for carrying a medical tool for providing treatment to the subject.
- the medical tool is preferably a medical laser.
- Figure 1a shows a schematic representation of a first exemplary embodiment of a process according to the present invention
- Figure 1b shows a schematic representation of a second exemplary embodiment of a process according to the present invention
- Figure 1c shows a schematic representation of a third exemplary embodiment of a process according to the present invention
- Figure 1d shows a schematic representation of a first exemplary embodiment of a system according to the present invention
- Figure 1e shows a schematic representation of a second exemplary embodiment of a system according to the present invention
- Figure 2a shows a three-dimensional rendering of an exemplary embodiment of a system according to the present invention
- Figure 2b shows a further three-dimensional rendering of the exemplary embodiment of Figure 2a
- Figure 2c shows a further three-dimensional rendering of the exemplary embodiment of Figure 2a
- Figure 2d shows a three-dimensional rendering of a further exemplary embodiment of a portion of the system of the present invention
- Figure 2e shows a three-dimensional rendering of an enlarged portion of the system of Figure 2a, Figure 2b and Figure 2c;
- Figure 2f shows a schematic representation of the image acquisition system 260 of Figure 2a
- Figure 3a shows a process of generating facemesh from a human face
- Figure 3b shows the facial image captured by different light sources
- Figure 3c shows an example of skin problems detected and identified by the Al engine
- Figure 3d shows an example of the “no-go zones” of the face of a subject
- Figure 3e (i) shows an example of a region for treatment to perform path generation
- Figure 3e (ii) shows an example of the user interface of selecting a region for treatment to perform path generation.
- the present inventors have identified shortcomings in processes and systems of the prior art, and upon identification of the problems with the prior art, have provided a process and system for the cosmetic, therapeutic and restorative treatment of the skin of a subject in need thereof, which overcome the problems of the prior art.
- the present invention may be implemented within the cosmetic and medical fields, in at least the following applications:
- the invention includes the provision of “no-go zones” with respect to the dermis of a subject, which may be detected on a two-dimensional (2D) image and mapped to the 3D profile of the subject, in particular the face of the subject, preferably by way of Artificial Intelligence (AI) implementation.
- the identification of no-go zones, such as sensitive areas of skin, areas of skin pigmentation, pimples, or other areas which require sensitivity, provides numerous advantages both clinically and cosmetically, which are not contemplated, anticipated or provided by systems of the prior art.
- Such human-related attributes include the following:
- the present inventors have identified the above problems, and provided a system and process for cosmetic and therapeutic treatment of a subject, which overcomes the disadvantages as exhibited by the prior art.
- the detection of features of a subject may be performed using an Al engine to locate special Regions of Interest (ROI) on the face of the subject, thus defining:
- no-go zones such as eyebrows, mouth and the like.
- Some conditions can only be detected under specified light sources. Hence, multiple light sources are used to detect different skin conditions.
- the present invention provides for enhanced automated cosmetic and therapeutic treatment of a subject with increased efficiency, increased efficacy for particular subject requirements, and mitigates human error or incorrect judgement by an operator.
- Referring to FIG. 1a, there is shown a schematic representation of an exemplary embodiment of a process 100a of determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure according to the present invention.
- the process 100a includes the following steps:
- the three-dimensional (3D) dataset may be a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
- the facial surface may be represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2] ... [xn, yn, zn] and wherein the x and y coordinates are representative of image pixel coordinates.
- the facial regions may include two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw.
- At least two three-dimensional (3D) images of the face of the subject may be acquired.
- the at least two three-dimensional (3D) images of the face of the subject may be overlapping.
- Three three-dimensional (3D) images of the face of the subject may be acquired.
- the three three-dimensional (3D) images of the face of the subject may be overlapping.
- Referring to FIG. 1b, there is shown a schematic representation of a further exemplary embodiment of a process 100b of determining one or more regions of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure, according to the present invention.
- the process 100b includes the following steps:
- the predetermined light type may be selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
- the at least one three-dimensional (3D) image of the face of the subject may be acquired in the presence of UV light, white light, or orange light.
- Referring to FIG. 1c, there is shown a schematic representation of another exemplary embodiment of a process 100c of determining a treatment regimen for the face of a subject by a cosmetic or therapeutic procedure according to the present invention.
- the process 100c includes the following steps:
- the treatment regimen is preferably performed by a medical tool.
- the medical tool is preferably a medical laser.
- the power of the laser may be adjusted dependent upon the region of the face to which treatment is to be applied.
- Referring to FIG. 1d, there is shown a first exemplary embodiment of a system 100d for determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, according to the present invention.
- the system 100d includes an image acquisition device 120d for acquiring at least one three-dimensional (3D) image of the face of the subject 110d, wherein said three-dimensional image 110d includes three-dimensional data 130d indicative of the surface topography of the face of the subject and includes two-dimensional (2D) data 140d indicative of an optical colour image of the face of the subject.
- the system 100d further includes a processor 150d for generating a face mesh 160d to define the facial surface of the subject from data from the acquired image 140d comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label;
- the system 100d further includes a pre-trained Artificial Intelligence (Al) engine 170d for determining the facial regions of the face of the subject from the two-dimensional (2D) data 140d, wherein the pre-trained Artificial Intelligence (Al) engine 170d is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein the facial regions are predefined anatomical regions of a face.
- the processor 150d assigns a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor 150d generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of three-dimensional (3D) dataset include a facial region identification parameter.
- the three-dimensional (3D) dataset may be a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
- the facial surface may be represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2] ... [xn, yn, zn] and wherein the x and y coordinates are representative of image pixel coordinates.
- the facial regions may include two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw.
- At least two three-dimensional (3D) images of the face of the subject may be acquired.
- the at least two three-dimensional (3D) images of the face of the subject may be overlapping.
- Three three-dimensional (3D) images of the face of the subject may be acquired.
- the three three-dimensional (3D) images of the face of the subject may be overlapping.
- Referring to FIG. 1e, there is shown a second exemplary embodiment of a system 100e for determining one or more regions of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure.
- the system 100e includes an image acquisition device 120e for acquiring at least one three-dimensional (3D) image 110e of the face of the subject, wherein said three-dimensional image includes three-dimensional data 130e indicative of the surface topography of the face of the subject and includes two dimensional (2D) data 140e indicative of an optical colour image of the face of a subject.
- the system 100e further includes a processor 150e for generating a face mesh 160e to define the facial surface of the subject from data from the acquired image 110e comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label.
- the system 100e further includes a pre-trained Artificial Intelligence (Al) engine 170e for determining the facial regions of the face of the subject from the two-dimensional (2D) data 140e.
- the pre-trained Artificial Intelligence (Al) engine 170e is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face; wherein the processor assigns a facial region identification parameter to each of the landmarks.
- the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of three-dimensional (3D) dataset include a facial region identification parameter.
- the system 100e further includes one or more light source for illuminating the face of the subject by a predetermined light type when the three-dimensional (3D) optical image of the face of the subject is acquired and wherein said predetermined light type provides for identification of a skin disorder present on the face of the subject.
- the pre-trained Artificial Intelligence (Al) engine 170e determines the presence of any skin disorders from the two-dimensional (2D) data of the at least one three-dimensional (3D) image of the face of the subject and the region of any skin disorder, wherein the pre-trained Artificial Intelligence (Al) engine is trained to identify skin disorders; and wherein the facial region identification parameter is assigned a skin condition parameter indicative of the skin disorder determined by the pre-trained Artificial Intelligence (Al) engine 170e.
- the light source may provide a predetermined light type selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
- the at least one three-dimensional (3D) image of the face of the subject may be acquired in the presence of UV light, white light, or orange light.
- Referring to FIG. 2a, 2b, 2c and 2d, there is shown an exemplary embodiment of a system 200 according to the present invention.
- a “cobot” or collaborative robot is a robot which is intended for direct human-robot interaction within a shared space, or in situations where humans and robots are in close proximity.
- a cobot is utilised as this is appropriate within such a clinical or cosmetic environment to which the present invention is directed, for numerous reasons including safety and ease of operability and the like.
- the system 200 includes a cobot arm 210 for performing the therapeutic or cosmetic operations on a subject 220, a bed 215 for a subject 220 to sit or lay on.
- Within the system, there is the robotic arm controller 230, a PC (main controller) 240 and an electrical connecting box 250.
- the system 200 further includes the image acquisition system 260, for acquiring images of the subject's face, as the present embodiment is directed to the dermis of the face of a subject.
- Figure 2e shows an enlarged view of the cobot 210 arm, including a working tool 205, for example a spray head 205, for use during therapeutic procedures.
- Figure 2f shows a schematic representation of the image acquisition system 260, including a stand 265, to support the head of the subject, two cameras 270 and 280, and a user interface 290.
- the 3D camera used may be a commercially available 3D camera, which may be readily adapted into the system.
- Alternatively, the 3D camera can be purpose-designed, built and implemented, depending on the particular requirements of the system.
- the 3D camera provides software to generate a 3D point cloud (in [x, y, z] format).
- the process of the present invention also makes use of the data captured by the camera to generate a 4-dimensional point cloud (in [x, y, z, c] format).
- the “c” in the 4-dimensional point cloud vector contains information relating to the class of the region of the face, for example as classified by an AI engine.
- the 3D camera actually consists of two cameras, these being:
- the first camera is a normal or typical 2D colour camera 270. This camera is utilised to capture the values of the 3 colour channels, i.e. the Red, Green, and Blue channels, for each pixel in the image acquired.
- the second camera is a depth sensor 280, which detects the depth information, i.e. the corresponding distance between the object and the camera sensor, of each pixel in the image generated by the 2D colour camera.
- Whether the first and second cameras of the image acquisition system 260 are formed separately or integrally as a single unit, the arrangement may be considered a 3D camera system in accordance with the present invention.
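As a sketch of how the two cameras' outputs combine, the standard pinhole back-projection below converts the depth sensor's per-pixel distances into 3D points aligned with the colour image. The intrinsic parameters (fx, fy, cx, cy) are hypothetical values, and this is an illustration rather than the system's actual calibration pipeline:

```python
# Hedged sketch (not the patented pipeline): pinhole-model back-projection
# of an (h, w) depth map into per-pixel 3D camera-frame points.
# fx, fy, cx, cy are hypothetical camera intrinsics.
import numpy as np

def backproject(depth: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Convert an (h, w) depth map to an (h*w, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((2, 2), 0.5)  # toy depth map: 0.5 m everywhere
pts = backproject(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

Each RGB pixel then pairs with one 3D point, which is what allows 2D classification results to be transferred onto the 3D model.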
- the first step is to generate a facemesh 310, as shown in Figure 3a; the present invention can make use of the commercially available “MediaPipe Facemesh” engine developed by Google to generate the facemesh.
- the input image is processed so that the cropped face has, for example, a 25% margin on each side, and is resized to 256x256.
- the output facial surface is thus represented as 478 3D landmarks flattened into a 1D tensor: [x1, y1, z1], [x2, y2, z2] ...; the x- and y-coordinates follow the image pixel coordinates.
- the z-coordinates are different from the point cloud coordinates at this moment, as the z-coordinates are relative to the face centre of mass and are scaled proportionally to the face width.
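The flattening of the 478 landmarks into the 1D tensor described above can be sketched as follows. The helper and the sample landmark values are assumptions for illustration, not output of the MediaPipe engine:

```python
# Illustrative sketch: flattening (478, 3) face-mesh landmarks into the
# 1D tensor [x1, y1, z1, x2, y2, z2, ...]. The landmark values are made up.
import numpy as np

def flatten_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """(478, 3) array of [x, y, z] landmarks -> (1434,) 1D tensor."""
    assert landmarks.shape == (478, 3)
    return landmarks.reshape(-1)  # row-major: x, y, z per landmark

lm = np.zeros((478, 3))
lm[54] = [120.0, 88.0, -4.2]  # hypothetical pixel x, pixel y, relative z
tensor = flatten_landmarks(lm)
print(tensor.shape)  # (1434,)
```

Here x and y are in image pixel coordinates, while z (as the description notes) is relative to the face centre of mass and scaled to the face width.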
- Each landmark 315 is assigned a unique number and corresponds to a unique location on the face of the subject.
- landmark no. 55 corresponds to the lower starting position of the right eyebrow, in an embodiment of the invention.
- the Al engine of the invention is pre-trained so as to classify the face of a subject into multiple regions based on the landmark numbers.
- each landmark number is a member of a specific region of the face.
- Such regions may, for example, include left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw and the like.
- Some landmarks may form the boundaries of the regions, and segmentation of regions are performed correspondingly.
- the landmarks in the mesh inside a boundary of a region are the members of the class of that specific region.
- each pixel in the image is thus also classified by the regions of the subject’s face accordingly.
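A minimal sketch of the landmark-number-to-region classification described above follows. The index sets are invented for illustration and do not reflect the actual face-mesh topology or the trained AI engine:

```python
# Hypothetical illustration of classifying landmarks into facial regions by
# landmark number. The membership sets below are invented for the example;
# the real engine derives region boundaries from its training.
RIGHT_EYEBROW = {55, 56, 57}
NOSE = {1, 2, 3}

REGIONS = {"right_eyebrow": RIGHT_EYEBROW, "nose": NOSE}

def region_of(landmark_no: int) -> str:
    """Return the region whose member set contains this landmark number."""
    for name, members in REGIONS.items():
        if landmark_no in members:
            return name
    return "unclassified"

print(region_of(55))  # right_eyebrow
```

Since every pixel maps to the mesh, the same lookup implicitly classifies each pixel of the image by facial region.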
- Since the human face is not a flat object and has varying curvature, the image is inclined at its two sides, which lowers the accuracy of the detection and classification results.
- At least three images from different angles are therefore acquired, to seek to mitigate or ameliorate such inaccuracies.
- the present invention may also use a facial image capturing device as shown below to identify skin problems or disorders of the subject, and has a 2D camera for capturing face images.
- the subject is required to place his/her head at the opening of a chamber of the image acquisition system 260.
- this device uses multiple light sources, for example infrared, white light, red light, orange light, blue light, UV light, etc.
- Since the human face is not a flat object and exhibits curvature, the image is inclined at its two sides, which lowers the accuracy of the detection and classification results.
- At least three images from different angles are acquired.
- the Al engine of the invention is trained to detect and identify if there exists any skin condition, for example normal, sore, acne, eczema, white spot, ink spot, nevus, wound, scars and the like, from these 2D images.
- the locations of any identified skin conditions are well mapped to the pixels (positions) of the 2D image generated by the 3D camera.
- each pixel in the image is associated with a classification value “c” indicating which region it corresponds to and which skin condition exists there.
- the region and skin condition information of the pixel indicated by the classification value can be extracted from the definition of a one-to-one lookup mapping.
- the classification value “c” is appended to the 3D point cloud to form a 4D point cloud (in [x, y, z, c] format), which also stores the classification information in addition to the spatial coordinate information.
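One way the one-to-one lookup between “c” and the (region, skin condition) pair might be realised is sketched below. The numeric encoding scheme (region id × 100 + condition id) is an assumption for illustration only:

```python
# Illustrative encode/decode for the classification value "c".
# The id tables and the region*100 + condition scheme are hypothetical.
REGION_IDS = {"forehead": 1, "left_cheek": 2}
CONDITION_IDS = {"normal": 0, "acne": 1, "nevus": 2}

def encode(region: str, condition: str) -> int:
    """Pack (region, condition) into a single classification value c."""
    return REGION_IDS[region] * 100 + CONDITION_IDS[condition]

def decode(c: int) -> tuple:
    """Recover (region, condition) from c via the inverse lookup."""
    region = {v: k for k, v in REGION_IDS.items()}[c // 100]
    condition = {v: k for k, v in CONDITION_IDS.items()}[c % 100]
    return region, condition

c = encode("left_cheek", "nevus")
assert decode(c) == ("left_cheek", "nevus")  # round trip is one-to-one
```

Because the mapping is one-to-one, both the region and the skin condition of any point can be recovered from the fourth component of the 4D point cloud.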
- different regions may require different dosages of laser power during the treatment process.
- normal skin may require 100% laser power
- forehead area may require 80% power
- an area with nevus may require 50% power.
- a no-go zone 330 denotes an area or region which must not receive any laser treatment, such as is shown in Figure 3d.
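The dosage schedule and no-go-zone handling described above can be sketched as a lookup. The power fractions mirror the examples given (100% normal, 80% forehead, 50% nevus), but the schedule itself is illustrative, not a clinical recommendation:

```python
# Hedged sketch of a per-region/per-condition laser power schedule.
# The tables are illustrative; actual dosages are clinical decisions.
from typing import Optional

POWER_SCHEDULE = {
    ("normal", None): 1.00,   # normal skin: 100% power
    ("forehead", None): 0.80, # forehead area: 80% power
    (None, "nevus"): 0.50,    # any area with nevus: 50% power
}

NO_GO_REGIONS = {"left_eye", "right_eye", "left_eyebrow", "right_eyebrow", "mouth"}

def laser_power(region: str, condition: Optional[str]) -> Optional[float]:
    """Return the power fraction, or None for a no-go region (no treatment)."""
    if region in NO_GO_REGIONS:
        return None  # must not receive any laser treatment
    if (None, condition) in POWER_SCHEDULE:
        return POWER_SCHEDULE[(None, condition)]  # condition overrides region
    return POWER_SCHEDULE.get((region, None), POWER_SCHEDULE[("normal", None)])
```

With such a lookup, the classification value of each point directly determines the treatment the cobot delivers there, including delivering none at all.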
- a region for treatment is first selected; for example, the left cheek region 340 is selected as shown in Figures 3e(i) and 3e(ii).
- a matrix of treatment points 350 is generated.
- the distance between the treatment points is calculated so that the laser spots cover all required skin areas, however with minimal overlapping.
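Generation of the treatment-point matrix can be sketched as a grid over the selected region. The region mask, spot size, and the pitch rule (spot diameter divided by √2, which lets circular spots on a square grid cover the area with minimal overlap) are assumptions for illustration:

```python
# Illustrative sketch: a square grid of treatment points over a boolean
# region mask. Pitch = spot_diameter / sqrt(2) is the largest square-grid
# spacing at which circular spots still cover the whole area.
import math
import numpy as np

def treatment_grid(mask: np.ndarray, spot_diameter: float,
                   px_per_unit: float = 1.0) -> list:
    """Return (row, col) treatment points for True pixels in the mask."""
    pitch = max(1, int(round(spot_diameter * px_per_unit / math.sqrt(2))))
    return [(r, c)
            for r in range(0, mask.shape[0], pitch)
            for c in range(0, mask.shape[1], pitch)
            if mask[r, c]]

mask = np.ones((4, 4), dtype=bool)  # toy 4x4 region, e.g. a cheek patch
pts = treatment_grid(mask, spot_diameter=2.0)
```

A larger spot diameter yields a coarser grid, so fewer treatment points are needed for the same region.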
- the cobot may be programmed to move to positions that point the laser head at the treatment points, with an orientation normal to the skin surface.
- the present invention provides for automated treatment of the skin or dermis of a subject in need thereof.
- ROI: Regions of Interest
- the present invention overcomes problems identified by the present inventors, in particular errors due to human-related attributes as described below.
Abstract
A process of determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, said process including (i) acquiring at least one three-dimensional (3D) image of the face of the subject; (ii) generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label; (iii) determining the facial regions of the face of the subject from the two-dimensional (2D) data by a pre-trained Artificial Intelligence (AI) engine; (iv) assigning a facial region identification parameter to each of the landmarks; and (v) generating a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter.
Description
The present invention relates to the field of cosmetic, therapeutic and restorative treatment of the skin of a subject in need thereof, and more particularly to a process and system for cosmetic, therapeutic and restorative treatment of the skin of a subject in need thereof, in particular the face of a subject.
Cosmetic, therapeutic and medical procedures have, in the past, typically been performed by a human being using hand-operated tools.
Such tools typically include a laser-type device which is held in the hand of an operator, and which is utilised to perform cosmetic and therapeutic, as well as clinical, procedures on the body of the patient.
In particular, such procedures are performed on the face and head of a patient. However, in other cases, the skin or dermis of a subject may be treated for cosmetic or therapeutic purposes.
Cosmetic purposes may include removal of blemishes, removal of scarring, treatment of discoloration, removal of unsightly growths or defects, smoothening and consistency of colour, skin wrinkle and defect restoration and the like.
Therapeutic purposes may also include restoration from damage such as from chemicals, light, trauma, or accidents on a minor scale, or on a larger scale disfigurement from injury or trauma.
Therapeutic purposes may also include the removal of lesions from the skin of a patient, cancer therapy, and removal of potential cancer initiation points such as moles or other defects on the body of the subject.
Object of the Invention
It is an object of the present invention to provide a process and system for cosmetic, therapeutic and restorative treatment of the skin of a subject in need thereof, in particular the face of a subject which overcome or at least partly ameliorate at least some deficiencies as associated with the prior art.
In a first aspect, the present invention provides a process of determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, said process including:
(i) acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three dimensional data indicative of the surface topography of the face of the subject and includes two dimensional (2D) data indicative of an optical colour image of the face of the subject;
(ii) generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label;
(iii) determining the facial regions of the face of the subject from the two-dimensional (2D) data by a pre-trained Artificial Intelligence (AI) engine, wherein said engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face;
(iv) assigning a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides; and
(v) generating a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter.
The three-dimensional (3D) dataset may be a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
In step (ii) the facial surface may be represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2] ... [xn, yn, zn], and wherein the x and y coordinates are representative of image pixel coordinates.
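A minimal sketch of this flattened landmark representation, assuming (as an illustration only) that the index of each landmark in the face mesh serves as its unique identification label:

```python
def flatten_landmarks(landmarks):
    """Flatten n labelled landmarks into a 1D tensor [x1, y1, z1, ..., xn, yn, zn].

    The position of each landmark in the input list acts as its unique
    identification label, so landmark k occupies slots 3k .. 3k+2.
    """
    flat = []
    for (x, y, z) in landmarks:
        flat.extend([x, y, z])
    return flat

def landmark(flat, label):
    """Recover the [x, y, z] of landmark `label` from the flattened tensor."""
    return flat[3 * label : 3 * label + 3]
```

Here x and y would be image pixel coordinates and z the depth reported by the 3D camera, matching the representation described above.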
The facial regions may include two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw.
At least two three-dimensional (3D) images of the face of the subject may be acquired. The at least two three-dimensional (3D) images of the face of the subject may be overlapping.
Three three-dimensional (3D) images of the face of the subject may be acquired. The three three-dimensional (3D) images of the face of the subject may be overlapping.
In a second aspect, the present invention provides a process of determining one or more region of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure, said process including:
(i) determining the regions of the face of a subject by the process of the first aspect wherein the face of the subject is illuminated by a predetermined light type when the three-dimensional (3D) optical image of the face of the subject is acquired and wherein said predetermined light type provides for identification of a skin disorder present on the face of the subject;
(ii) determining the presence of any skin disorders from the two-dimensional (2D) data of the at least one three-dimensional (3D) image of the face of the subject and the region of any skin disorder, wherein the pre-trained Artificial Intelligence (AI) engine is trained to identify skin disorders; and wherein the facial region identification parameter is assigned a skin condition parameter indicative of the skin disorder determined by the pre-trained Artificial Intelligence (AI) engine.
The predetermined light type may be selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
The at least one three-dimensional (3D) image of the face of the subject may be acquired in the presence of UV light, in the presence of white light, or in the presence of orange light.
In a third aspect, the present invention provides a process of determining a treatment regimen for the face of a subject by a cosmetic or therapeutic procedure, said process including the steps of:
(i) determining the regions of the face of said subject according to the process of the first aspect; and
(ii) determining one or more region of the face of said subject having a skin disorder according to the process of the second aspect;
(iii) determining a treatment regimen for the face of a subject, wherein the treatment regimen is determined based upon the region of the face of the subject and any facial region identification parameter assigned a skin condition parameter indicative of the skin disorder.
The treatment regimen is preferably performed by a medical tool. The medical tool is preferably a medical laser.
The power of the laser may be adjusted dependent upon the region of the face to which treatment is to be applied.
The cosmetic or therapeutic procedure may be precluded from any region having a facial region identification parameter indicative of a skin disorder.
In a fourth aspect, the present invention provides a system for determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, said system comprising:
an image acquisition device for acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three-dimensional data indicative of the surface topography of the face of the subject and includes two dimensional (2D) data indicative of an optical colour image of the face of a subject;
a processor for generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label; and
a pre-trained Artificial Intelligence (AI) engine for determining the facial regions of the face of the subject from the two-dimensional (2D) data, wherein the pre-trained Artificial Intelligence (AI) engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face;
wherein the processor assigns a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter.
The three-dimensional (3D) dataset may be a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
The facial surface may be represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2] ... [xn, yn, zn], and wherein the x and y coordinates are representative of image pixel coordinates.
The facial regions may include two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw.
At least two three-dimensional (3D) images of the face of the subject may be acquired. The at least two three-dimensional (3D) images of the face of the subject may be overlapping.
Three three-dimensional (3D) images of the face of the subject may be acquired. The three three-dimensional (3D) images of the face of the subject may be overlapping.
In a fifth aspect, the present invention provides a system for determining one or more region of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure, said system comprising:
an image acquisition device for acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three-dimensional data indicative of the surface topography of the face of the subject and includes two dimensional (2D) data indicative of an optical colour image of the face of a subject;
a processor for generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label;
a pre-trained Artificial Intelligence (AI) engine for determining the facial regions of the face of the subject from the two-dimensional (2D) data, wherein the pre-trained Artificial Intelligence (AI) engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face; wherein the processor assigns a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter; and
one or more light source for illuminating the face of the subject by a predetermined light type when the three-dimensional (3D) optical image of the face of the subject is acquired and wherein said predetermined light type provides for identification of a skin disorder present on the face of the subject; wherein the pre-trained Artificial Intelligence (AI) engine determines the presence of any skin disorders from the two-dimensional (2D) data of the at least one three-dimensional (3D) image of the face of the subject and the region of any skin disorder, wherein the pre-trained Artificial Intelligence (AI) engine is trained to identify skin disorders; and wherein the facial region identification parameter is assigned a skin condition parameter indicative of the skin disorder determined by the pre-trained Artificial Intelligence (AI) engine.
The system may include one or more light source which provides a predetermined light type selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
The at least one three-dimensional (3D) image of the face of the subject may be acquired in the presence of UV light, in the presence of white light, or in the presence of orange light.
In a sixth aspect, the present invention provides an automated system for providing treatment of a subject during a cosmetic or therapeutic procedure, said system comprising a system according to the fifth aspect; a processor; and a robotic arm for carrying a medical tool for providing treatment to the subject. The medical tool is preferably a medical laser.
In order that a more precise understanding of the above-recited invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings.
The drawings presented herein may not be drawn to scale and any reference to dimensions in the drawings or the following description is specific to the embodiments disclosed.
Figure 1a shows a schematic representation of a first exemplary embodiment of a process according to the present invention;
Figure 1b shows a schematic representation of a second exemplary embodiment of a process according to the present invention;
Figure 1c shows a schematic representation of a third exemplary embodiment of a process according to the present invention;
Figure 1d shows a schematic representation of a first exemplary embodiment of a system according to the present invention;
Figure 1e shows a schematic representation of a second exemplary embodiment of a system according to the present invention;
Figure 2a shows a three-dimensional rendering of an exemplary embodiment of a system according to the present invention;
Figure 2b shows a further three-dimensional rendering of the exemplary embodiment of Figure 2a;
Figure 2c shows a further three-dimensional rendering of the exemplary embodiment of Figure 2a;
Figure 2d shows a three-dimensional rendering of a further exemplary embodiment of a portion of the system of the present invention;
Figure 2e shows a three-dimensional rendering of an enlarged portion of the system of Figure 2a, Figure 2b and Figure 2c;
Figure 2f shows a schematic representation of the image acquisition system 260 of Figure 2a;
Figure 3a shows a process of generating a face mesh from a human face;
Figure 3b shows the facial image captured under different light sources;
Figure 3c shows an example of skin problems detected and identified by the AI engine;
Figure 3d shows an example of the “no-go zones” of the face of a subject;
Figure 3e (i) shows an example of a region for treatment to perform path generation; and
Figure 3e (ii) shows an example of the user interface for selecting a region for treatment to perform path generation.
Detailed Description of the Drawings
The present inventors have identified shortcomings in processes and systems of the prior art, and upon identification of the problems with the prior art, have provided a process and system for the cosmetic, therapeutic and restorative treatment of the skin of a subject in need thereof, which overcome the problems of the prior art.
1. Clinical and Commercial Applications of the Present Invention
The present invention may be implemented within the cosmetic and medical fields, in at least the following applications:
(i) Cosmetic facial surgery, such as blemish removal, skin whitening,
(ii) Facial hair removal,
(iii) Eye surgery,
(iv) Dental operation,
(v) Hair growth /removal treatment, and
(vi) Hair transplant extraction and implantation.
2. Overview of the Present Invention
In accordance with the present invention, for cosmetic and therapeutic applications, the invention includes the provision of “no-go zones” with respect to the dermis of a subject, which may be detected on a two-dimensional (2D) image and mapped to the 3D profile of the subject, in particular the face of the subject, preferably by way of an Artificial Intelligence (AI) implementation.
The ability of the present invention to derive “no-go zones” , such as sensitive areas of skin, areas of skin pigmentation, pimples, or other areas which require sensitivity, provides numerous advantages both clinically and cosmetically, which are not contemplated, anticipated or provided by systems of the prior art.
3. Problems of the Prior Art Identified by Present Inventors
As will be understood, repeatability and sound human judgement are necessary whilst performing cosmetic or therapeutic procedures; however, human-related attributes which are typical in human-operated procedures can often result in minor or major errors, which may or may not be rectifiable, and which could cause cosmetic as well as functional damage to the dermis of the subject.
Such human related attributes include as follows:
1. incorrect treatment zone of the dermis of a subject,
2. inappropriate treatment regime to the zone of the dermis of a subject, and
3. excessive or insufficient treatment in respect of cosmetic or clinical procedures on the dermis of a subject.
The above can occur for numerous reasons, including human error, human tiredness, inconsistent operation of the device by hand, distraction of the user, variance in decision making, inappropriate selection of device parameters, as well as simple human error due to hand-dexterity related issues.
Whilst it is acknowledged that there do exist robotic-type systems for providing therapeutic and cosmetic surgery on a patient, such systems are generally rudimentary, have numerous deficiencies which are identified by the present inventors, and still provide or exhibit the errors and inconsistency of manual operations.
The present inventors have identified the above problems, and provided a system and process for cosmetic and therapeutic treatment of a subject, which overcomes the disadvantages as exhibited by the prior art.
4. Advantages of Present Invention
Advantageously, as provided by the present invention, the detection of features of a subject, such as facial feature detection in 2D images acquired of the subject, may be performed using an AI engine to locate special Regions of Interest (ROI) on the face of the subject, thus defining:
1. “no-go zones” , such as eyebrows, mouth and the like, and
2. identification of regions of the face or head requiring a specialized or particular treatment regime (for example reduced dosage, different wavelength, different tool, etc.) .
Some conditions can only be detected under specified light sources. Hence, multiple light sources are used to detect different skin conditions.
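One way to realise this multi-source acquisition is to capture one image per predetermined light type. The sketch below is a hypothetical illustration: the light list and the `camera_capture` / `set_light` callables are assumptions standing in for the camera and lighting hardware described here:

```python
# Assumed predetermined light types; the invention also mentions infrared,
# red and blue light as options.
LIGHTS = ["white", "UV", "orange"]

def capture_all(camera_capture, set_light):
    """Acquire one image per predetermined light type.

    camera_capture : callable returning an image under the current lighting
    set_light      : callable switching the active light source
    Returns a dict mapping light type -> captured image.
    """
    images = {}
    for light in LIGHTS:
        set_light(light)
        images[light] = camera_capture()
    return images
```

Each resulting image would then be passed to the AI engine, so that conditions only visible under a particular illumination are still detected.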
Accordingly, the present invention provides for enhanced automated cosmetic and therapeutic treatment of a subject with increased efficiency, increased efficacy for particular subject requirements, and mitigates human error or incorrect judgement by an operator.
5. Process and System of the Present Invention
Referring to Figure 1a, there is shown a schematic representation of an exemplary embodiment of a process 100a of determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure according to the present invention. The process 100a includes the following steps:
STEP 1 (110a)
(i) Acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three dimensional data indicative of the surface topography of the face of the subject and includes two dimensional (2D) data indicative of an optical colour image of the face of the subject.
STEP 2 (120a)
(ii) Generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label.
STEP 3 (130a)
(iii) Determining the facial regions of the face of the subject from the two-dimensional (2D) data by a pre-trained Artificial Intelligence (AI) engine, wherein said engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face.
STEP 4 (140a)
(iv) Assigning a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides.
STEP 5 (150a)
(v) Generating a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter.
The three-dimensional (3D) dataset may be a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
In step (ii) the facial surface may be represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2] ... [xn, yn, zn], and wherein the x and y coordinates are representative of image pixel coordinates.
The facial regions may include two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw.
At least two three-dimensional (3D) images of the face of the subject may be acquired. The at least two three-dimensional (3D) images of the face of the subject may be overlapping.
Three three-dimensional (3D) images of the face of the subject may be acquired. The three three-dimensional (3D) images of the face of the subject may be overlapping.
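Steps (i) to (v) of process 100a can be sketched as a single pipeline. This is a hypothetical illustration: `acquire_image`, `build_face_mesh` and `ai_classify` are stand-ins for the image acquisition device, the face-mesh generator and the pre-trained AI engine, not implementations from the invention:

```python
def determine_face_regions(acquire_image, build_face_mesh, ai_classify):
    """Illustrative pipeline for process 100a.

    acquire_image   : returns (3D topography data, 2D colour image)   # step (i)
    build_face_mesh : 3D data -> list of (x, y, z) landmarks, where
                      each landmark's list index is its unique label  # step (ii)
    ai_classify     : (2D image, landmarks) -> dict label -> region   # step (iii)
    Returns the 3D dataset of [x, y, z, c] points.                    # steps (iv)-(v)
    """
    image_3d, image_2d = acquire_image()
    landmarks = build_face_mesh(image_3d)
    region_of = ai_classify(image_2d, landmarks)
    dataset = []
    for label, (x, y, z) in enumerate(landmarks):
        c = region_of[label]            # facial region identification parameter
        dataset.append([x, y, z, c])
    return dataset
```

The returned dataset is the point-cloud-style [x, y, z, c] representation described above, with c carrying the region assignment of each landmark.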
Referring to Figure 1b, there is shown a schematic representation of a further exemplary embodiment of a process 100b of determining one or more region of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure, according to the present invention.
The process 100b includes the following steps:
STEP 1 (110b)
(i) Determining the regions of the face of a subject by the above process 100a, wherein the face of the subject is illuminated by a predetermined light type when the three-dimensional (3D) optical image of the face of the subject is acquired and wherein said predetermined light type provides for identification of a skin disorder present on the face of the subject.
STEP 2 (120b)
(ii) Determining the presence of any skin disorders from the two-dimensional (2D) data of the at least one three-dimensional (3D) image of the face of the subject and the region of any skin disorder, wherein the pre-trained Artificial Intelligence (AI) engine is trained to identify skin disorders; and wherein the facial region identification parameter is assigned a skin condition parameter indicative of the skin disorder determined by the pre-trained Artificial Intelligence (AI) engine.
The predetermined light type may be selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
The at least one three-dimensional (3D) image of the face of the subject may be acquired in the presence of UV light, in the presence of white light, or in the presence of orange light.
Referring to Figure 1c, there is shown a schematic representation of another exemplary embodiment of a process 100c of determining a treatment regimen for the face of a subject by a cosmetic or therapeutic procedure according to the present invention.
The process 100c includes the following steps:
Step 1 (110c)
(i) Determining the regions of the face of said subject according to the process of the above first embodiment.
Step 2 (120c)
(ii) Determining one or more region of the face of said subject having a skin disorder according to the process of the above second embodiment.
Step 3 (130c)
(iii) Determining a treatment regimen for the face of a subject, wherein the treatment regimen is determined based upon the region of the face of the subject and any facial region identification parameter assigned a skin condition parameter indicative of the skin disorder.
The treatment regimen is preferably performed by a medical tool. The medical tool is preferably a medical laser.
The power of the laser may be adjusted dependent upon the region of the face to which treatment is to be applied.
Referring to Figure 1d, there is shown a first exemplary embodiment of a system 100d for determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, according to the present invention.
The system 100d includes an image acquisition device 120d for acquiring at least one three-dimensional (3D) image 110d of the face of the subject, wherein said three-dimensional image 110d includes three-dimensional data 130d indicative of the surface topography of the face of the subject and includes two-dimensional (2D) data 140d indicative of an optical colour image of the face of a subject.
The system 100d further includes a processor 150d for generating a face mesh 160d to define the facial surface of the subject from data from the acquired image 140d comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label.
The system 100d further includes a pre-trained Artificial Intelligence (AI) engine 170d for determining the facial regions of the face of the subject from the two-dimensional (2D) data 140d, wherein the pre-trained Artificial Intelligence (AI) engine 170d is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein the facial regions are predefined anatomical regions of a face.
The processor 150d assigns a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor 150d generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data and wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter.
The three-dimensional (3D) dataset may be a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
The facial surface may be represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2] ... [xn, yn, zn], and wherein the x and y coordinates are representative of image pixel coordinates.
The facial regions may include two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw.
At least two three-dimensional (3D) images of the face of the subject may be acquired. The at least two three-dimensional (3D) images of the face of the subject may be overlapping. Three three-dimensional (3D) images of the face of the subject may be acquired. The three three-dimensional (3D) images of the face of the subject may be overlapping.
Referring to Figure 1e, there is shown a second exemplary embodiment of a system 100e for determining one or more region of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure.
The system 100e includes an image acquisition device 120e for acquiring at least one three-dimensional (3D) image 110e of the face of the subject, wherein said three-dimensional image includes three-dimensional data 130e indicative of the surface topography of the face of the subject and includes two dimensional (2D) data 140e indicative of an optical colour image of the face of a subject.
The system 100e further includes a processor 150e for generating a face mesh 160e to define the facial surface of the subject from data from the acquired image 110e comprised of a plurality of landmarks and wherein each landmark is assigned a unique identification label.
The system 100e further includes a pre-trained Artificial Intelligence (AI) engine 170e for determining the facial regions of the face of the subject from the two-dimensional (2D) data 140e.
The pre-trained Artificial Intelligence (AI) engine 170e is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels, wherein said facial regions are predefined anatomical regions of a face, and wherein the processor assigns a facial region identification parameter to each of the landmarks.
The facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and the processor generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data, wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter.
The system 100e further includes one or more light source for illuminating the face of the subject by a predetermined light type when the three-dimensional (3D) optical image of the face of the subject is acquired and wherein said predetermined light type provides for identification of a skin disorder present on the face of the subject.
The pre-trained Artificial Intelligence (AI) engine 170e determines the presence of any skin disorders from the two-dimensional (2D) data of the at least one three-dimensional (3D) image of the face of the subject and the region of any skin disorder, wherein the pre-trained Artificial Intelligence (AI) engine is trained to identify skin disorders; and wherein the facial region identification parameter is assigned a skin condition parameter indicative of the skin disorder determined by the pre-trained Artificial Intelligence (AI) engine 170e.
The light source may provide a predetermined light type selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
The at least one three-dimensional (3D) image of the face of the subject may be acquired in the presence of UV light, white light, or orange light.
Referring now to Figures 2a, 2b, 2c and 2d, there is shown an exemplary embodiment of a system 200 according to the present invention.
With reference to preferred embodiments of the present invention, the term “cobot” is used. For explanatory purposes, a “cobot”, or collaborative robot, is a robot which is intended for direct human-robot interaction within a shared space, or in situations where humans and robots are in close proximity.
As such, in accordance with the present invention, a cobot is utilised as it is well suited to the clinical or cosmetic environment to which the present invention is directed, for numerous reasons including safety, ease of operability and the like.
The system 200 according to the present invention includes a cobot arm 210 for performing the therapeutic or cosmetic operations on a subject 220, and a bed 215 for the subject 220 to sit or lie on.
The system also includes a robotic arm controller 230, a PC (main controller) 240 and an electrical connecting box 250.
As will be understood by those skilled in the art, there exist numerous technical implementations and manners in which the system can be provided, which will be understood to fall within the scope of the present invention.
The system 200 further includes the image acquisition system 260, for acquiring images of the subject's face, as the present embodiment is directed to the dermis of the face of a subject.
Figure 2e shows an enlarged view of the cobot 210 arm, including a working tool 205, for example a spray head 205, for use during therapeutic procedures.
Figure 2f shows a schematic representation of the image acquisition system 260, including a stand 265, to support the head of the subject, two cameras 270 and 280, and a user interface 290.
6. Process of the Present Invention
In accordance with the present invention, a 3D camera is mounted on the cobot arm of the system, affixed beside the laser head of the system.
The 3D camera used may be a commercially available 3D camera, which may be readily adapted into the system. Alternatively, the 3D camera can be purpose-designed, built and implemented depending on the particular requirements of the system.
6.1 IMAGE ACQUISITION
The 3D camera provides software to generate a 3D point cloud (in [x, y, z] format). However, the process of the present invention also makes use of the data captured by the camera to generate a 4-dimensional point cloud (in [x, y, z, c] format).
It should be noted that the “c” in the 4-dimensional point cloud vector contains information relating to the class of the region of the face, for example as classified by an AI engine.
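The [x, y, z, c] format described above can be sketched as follows; the function name and data layout are illustrative assumptions, not part of the described system:

```python
def to_4d_point_cloud(points_3d, class_labels):
    """Append a per-point classification value c to an [x, y, z] point
    cloud, yielding the [x, y, z, c] format described above.

    points_3d    -- iterable of (x, y, z) spatial coordinates
    class_labels -- one classification value c per point
    """
    if len(points_3d) != len(class_labels):
        raise ValueError("one class label is required per point")
    return [[x, y, z, c] for (x, y, z), c in zip(points_3d, class_labels)]
```

Each point thus carries its classification alongside its spatial coordinates, so downstream path planning can query both from a single structure.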
It should be noted that the 3D camera in fact consists of two cameras, these being:
(i) the first camera is a typical 2D colour camera 270. This camera is utilised to capture the values of the three colour channels, i.e. the Red, Green and Blue channels, for each pixel in the acquired image.
(ii) the second camera is a depth sensor 280, which detects the depth information, i.e. the corresponding distance between the object and the camera sensor, for each pixel in the image generated by the 2D colour camera.
Regardless of whether the first camera and the second camera of the image acquisition system 260 are provided separately or integrally formed as a single unit, the arrangement may be considered a 3D camera system in accordance with the present invention.
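Given a colour pixel and its depth reading, each pixel can be back-projected into camera-frame coordinates. A minimal sketch under an assumed pinhole camera model (the intrinsic parameters fx, fy, cx, cy are illustrative placeholders, not values taken from the invention):

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth reading `depth` into a
    camera-frame [x, y, z] point using a pinhole model.

    fx, fy -- focal lengths in pixels (assumed intrinsics)
    cx, cy -- principal point in pixels (assumed intrinsics)
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return [x, y, depth]
```

Applying this to every pixel of the colour image, using the depth sensor's per-pixel readings, yields the [x, y, z] point cloud referred to above.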
6.2 MESH GENERATION
Referring now to Figure 3a, after a photo of a human face 305 is taken or acquired by the image acquisition system 260, the 2D colour image is then passed to an AI engine.
The first step is to generate a facemesh 310 as shown in Figure 3a, and the present invention can make use of the commercially available “MediaPipe Facemesh” engine as developed by Google to generate a facemesh.
As will be understood, there are numerous manners in which the face mesh may be generated and, as such, the present invention is not restricted to any such methodology.
In embodiments of the invention, the input image is processed so that the cropped face has, for example, a 25% margin on each side, and is resized to 256×256 pixels.
The output facial surface is thus represented as 478 3D landmarks flattened into a 1D tensor: [x1, y1, z1], [x2, y2, z2], ...; the x- and y-coordinates follow the image pixel coordinates.
It should be noted and understood that the z-coordinates differ from the point cloud coordinates at this stage, as the z-coordinates are relative to the face centre of mass and are scaled proportionally to the face width.
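A minimal sketch of flattening such landmarks into the 1D tensor described above, assuming the landmarks are supplied as normalised (x, y, z) triples as MediaPipe-style engines typically emit (the helper name is an illustrative assumption):

```python
def flatten_landmarks(landmarks, img_w, img_h):
    """Convert normalised facemesh landmarks (x, y in [0, 1]; z relative
    to the face centre of mass) into the flat tensor
    [x1, y1, z1, x2, y2, z2, ...], with x and y expressed in image
    pixel coordinates as described above."""
    flat = []
    for lx, ly, lz in landmarks:
        flat.extend([lx * img_w, ly * img_h, lz])
    return flat
```

The z values are deliberately left untouched here, matching the note above that they are not yet point-cloud coordinates.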
Each landmark 315 is assigned a unique number and corresponds to a unique location on the face of the subject. For example, in an embodiment of the invention, landmark no. 55 corresponds to the lower starting position of the right eyebrow.
The AI engine of the invention is pre-trained so as to classify the face of a subject into multiple regions based on the landmark numbers.
In accordance with the invention, each landmark number is a member of a specific region of the face.
Such regions may, for example, include left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek, jaw and the like.
Some landmarks may form the boundaries of the regions, and segmentation of the regions is performed correspondingly.
The landmarks in the mesh inside a boundary of a region are the members of the class of that specific region.
By projecting the segment back to the original 2D photograph of the subject, each pixel in the image is thus also classified by the regions of the subject’s face accordingly.
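A minimal sketch of the landmark-number-to-region classification, with hypothetical index sets (the real region definitions would enumerate the actual facemesh landmark indices; only landmark 55's association with the right eyebrow is taken from the text above):

```python
# Hypothetical region definitions: each region owns a set of landmark
# numbers. Only landmark 55 (right eyebrow) reflects the text; the
# remaining indices are illustrative placeholders.
REGIONS = {
    "right_eyebrow": {46, 52, 53, 55, 65},
    "nose": {1, 2, 98, 327},
    "left_cheek": {425, 426, 427},
}

# Invert to a one-to-one landmark-number -> region lookup.
REGION_OF_LANDMARK = {
    idx: region for region, indices in REGIONS.items() for idx in indices
}

def classify_landmark(landmark_no):
    """Return the facial region a landmark number is a member of."""
    return REGION_OF_LANDMARK.get(landmark_no, "unclassified")
```

Because every landmark number belongs to exactly one region, classifying a landmark (and hence the pixels it maps to) reduces to a dictionary lookup.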
As will be understood, the human face is not a flat object and has varying curvature; the image is inclined at the two sides of the face, which lowers the accuracy of the detection and classification results.
Hence, in accordance with the present invention, preferably at least three images from different angles (e.g. centre, left, right) are acquired, to seek to mitigate or ameliorate such inaccuracies.
This not only ensures that the point cloud at the two sides of the face is accurate, but the overlapping regions of two images also allow the reliability of the generated point cloud to be cross-checked.
6.3 SKIN ANALYSIS & “NO-GO ZONE”
In addition to the 3D camera, the present invention may also use a facial image capturing device, which has a 2D camera for capturing face images, to identify skin problems or disorders of the subject.
The subject is required to place his/her head at the opening of a chamber of the image acquisition system 260.
In accordance with the present invention, this device uses multiple light sources, for example infrared, white light, red light, orange light, blue light, UV light, etc.
The reason for the provision of multiple light sources in accordance with the present invention is that some skin problems have been found to be more readily identified under certain lighting conditions, as shown in Figure 3b, whereby images are taken with UV light 322, white light 324 and orange light 326 respectively.
As mentioned above, the human face is not a flat object and exhibits curvature; the image is inclined at the two sides of the face, which lowers the accuracy of the detection and classification results.
Hence, at least three images from different angles (e.g. centre, left, right) are acquired. The AI engine of the invention is trained to detect and identify any skin condition, for example normal, sore, acne, eczema, white spot, ink spot, nevus, wound, scar and the like, from these 2D images.
Then, by using the facemesh landmarks of the subject described above, the locations of any identified skin conditions are mapped to the pixel positions of the 2D image generated by the 3D camera.
For example, skin problems or dermal disorders at the left cheek are filled with a contrasting colour, such as pink, denoted as “A” in Figure 3c.
From these classifications, each pixel in the image is associated with a classification value “c” indicating which region the pixel corresponds to and which skin condition, if any, exists there.
The region and skin condition information indicated by the classification value can be extracted via a one-to-one lookup mapping.
The classification value “c” is appended to the 3D point cloud to form a 4D point cloud (in [x, y, z, c] format), which stores the classification information in addition to the spatial coordinate information.
Depending on the classification value, different regions may require different dosages of laser power during the treatment process.
For example, normal skin may require 100% laser power, the forehead area may require 80% power, and an area with a nevus may require 50% power.
Some areas, such as wounds, lips, eyes and eyebrows, can be classified as a “no-go zone” 330, denoting that such areas or regions must not receive any laser treatment, as shown in Figure 3d.
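The dosage rules above can be sketched as a simple lookup; the class names and power fractions mirror the examples in the text, and the helper name is an illustrative assumption:

```python
# Zones that must never be lased, per the "no-go zone" rule above.
NO_GO = {"wound", "lip", "eye", "eyebrow"}

# Power fractions from the examples in the text (normal 100%,
# forehead 80%, nevus 50%); other classes default to full power.
DOSAGE = {"normal": 1.00, "forehead": 0.80, "nevus": 0.50}

def laser_power_fraction(classification):
    """Return the laser power fraction for a classification value,
    or None when the region is a no-go zone and must be skipped."""
    if classification in NO_GO:
        return None
    return DOSAGE.get(classification, 1.00)
```

Returning None rather than 0.0 for no-go zones lets the path planner distinguish "skip this point entirely" from "treat at zero power".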
To perform path generation for the cobot motion for treatment of the subject, a region for treatment is first selected; for example, the left cheek region 340 is selected as shown in Figures 3e(i) and 3e(ii).
Then, depending on the laser spot size for the applicable treatment regimen, a matrix of treatment points 350 is generated. The distance between the treatment points is calculated so that the laser spots cover all required skin areas with minimal overlap.
The cobot may be programmed to move to positions that point the laser head at the treatment points with an orientation normal to the skin surface.
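A minimal sketch of generating such a matrix of treatment points over a rectangular skin patch, assuming circular spots on a square grid; the spacing of d/√2 is one way to guarantee full coverage with minimal overlap (a circle of diameter d circumscribes a square grid cell of side d/√2), offered as an illustrative assumption rather than the invention's actual calculation:

```python
import math

def treatment_grid(width, height, spot_diameter):
    """Generate (x, y) treatment points covering a width x height patch.

    A square grid with cell side spot_diameter / sqrt(2) is the widest
    spacing at which circular spots of that diameter still cover every
    cell completely, minimising overlap between adjacent spots.
    """
    step = spot_diameter / math.sqrt(2)
    nx = max(1, math.ceil(width / step))   # columns needed for coverage
    ny = max(1, math.ceil(height / step))  # rows needed for coverage
    return [
        ((i + 0.5) * width / nx, (j + 0.5) * height / ny)
        for j in range(ny)
        for i in range(nx)
    ]
```

Each point would then be paired with the surface normal at that location so the cobot can orient the laser head perpendicular to the skin.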
7. Invention Attributes
The present invention provides for automated treatment of the skin or dermis of a subject in need thereof.
Furthermore, automated detection of the features of a subject, such as facial feature detection in 2D images acquired of the subject, may be performed using an AI engine to locate special or requisite Regions of Interest (ROI) on the face of the subject, thus defining:
(i) “no-go zones”, such as eyebrows, mouth and the like; and
(ii) regions of the face or head requiring a specialised or particular treatment regime (for example reduced dosage, different wavelength, different tool, etc.).
Hence, the present invention overcomes problems identified by the present inventors, in particular errors due to human-related attributes, including:
(i) incorrect treatment zone of the dermis of a subject,
(ii) inappropriate treatment regime to the zone of the dermis of a subject, and
(iii) excessive or insufficient treatment in respect of cosmetic or clinical procedures on the dermis of a subject.
The above can occur for numerous reasons, including human error, human tiredness, inconsistent operation of the device by hand, distraction of the user, variance in decision making, inappropriate selection of device parameters, as well as simple human error due to hand-dexterity-related issues.
It should be understood that although embodiments of the present invention have been described with reference to the dermis of the face of a subject, in other or alternative embodiments the present invention is equally applicable to other areas of the body whereby the dermis requires appropriate treatment, to which the features of the present invention are equally applicable.
Claims (29)
- A process of determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, said process including:
(i) acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three-dimensional data indicative of the surface topography of the face of the subject and includes two-dimensional (2D) data indicative of an optical colour image of the face of the subject;
(ii) generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks, wherein each landmark is assigned a unique identification label;
(iii) determining the facial regions of the face of the subject from the two-dimensional (2D) data by a pre-trained Artificial Intelligence (AI) engine, wherein the AI engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face;
(iv) assigning a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides; and
(v) generating a three-dimensional (3D) dataset of the face of the subject from the face mesh data, wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter.
- A process according to claim 1, wherein the three-dimensional (3D) dataset is a point cloud model in a [x, y, z, c] format wherein c is representative of a facial region identification parameter of the face of the subject.
- A process according to claim 1 or claim 2, wherein in step (ii) the facial surface is represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2], ..., [xn, yn, zn] and wherein the x and y coordinates are representative of image pixel coordinates.
- A process according to any one of the preceding claims, wherein the facial regions includes two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek and jaw.
- A process according to any one of the preceding claims, wherein at least two three-dimensional (3D) images of the face of the subject are acquired.
- A process according to claim 5, wherein said at least two three-dimensional (3D) images of the face of the subject are overlapping.
- A process according to any one of the preceding claims, wherein three three-dimensional (3D) images of the face of the subject are acquired.
- A process according to claim 7, wherein said three three-dimensional (3D) images of the face of the subject are overlapping.
- A process of determining one or more region of a face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure, said process including:
(i) determining the regions of the face of a subject by the process of any one of claims 1 to 6, wherein the face of the subject is illuminated by a predetermined light type when the three-dimensional (3D) optical image of the face of the subject is acquired and wherein said predetermined light type provides for identification of a skin disorder present on the face of the subject; and
(ii) determining the presence of any skin disorders, and the region of any skin disorder, from the two-dimensional (2D) data of the at least one three-dimensional (3D) image of the face of the subject, wherein a pre-trained Artificial Intelligence (AI) engine is trained to identify skin disorders; and wherein the facial region identification parameter is assigned a skin condition parameter indicative of the skin disorder determined by the pre-trained Artificial Intelligence (AI) engine.
- A process according to claim 9, wherein said predetermined light type is selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
- A process according to claim 9, wherein said one three-dimensional (3D) image of the face of the subject is acquired in the presence of UV light, is acquired in the presence of white light, or is acquired in the presence of orange light.
- A process of determining a treatment regimen for the face of a subject by a cosmetic or therapeutic procedure, said process including the steps of:
(i) determining the regions of the face of said subject according to the process of any one of claims 1 to 8;
(ii) determining one or more region of the face of said subject having a skin disorder according to the process of any one of claims 9 to 11; and
(iii) determining a treatment regimen for the face of the subject, wherein the treatment regimen is determined based upon the region of the face of the subject and any facial region identification parameter assigned a skin condition parameter indicative of the skin disorder.
- A process according to claim 12, wherein the treatment regimen is performed by a medical tool.
- A process according to claim 13, wherein the medical tool is a medical laser.
- A process according to claim 14, wherein the power of the laser is adjusted dependent upon the region of the face to which treatment is to be applied.
- A process according to any one of claims 12-15, wherein the cosmetic or therapeutic procedure is precluded from any region having a facial region identification parameter indicative of a skin disorder.
- A system for determining the regions of the face of a subject for treatment by a cosmetic or therapeutic procedure, said system comprising:
an image acquisition device for acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three-dimensional data indicative of the surface topography of the face of the subject and includes two-dimensional (2D) data indicative of an optical colour image of the face of a subject;
a processor for generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks, wherein each landmark is assigned a unique identification label; and
a pre-trained Artificial Intelligence (AI) engine for determining the facial regions of the face of the subject from the two-dimensional (2D) data, wherein the pre-trained Artificial Intelligence (AI) engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face;
wherein the processor assigns a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data, wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter.
- A system according to claim 17, wherein the three-dimensional (3D) dataset is a point cloud model in a [x, y, z, c] format, wherein c is representative of a facial region identification parameter of the face of the subject.
- A system according to claim 17 or claim 18, wherein the facial surface is represented as landmarks flattened in a one-dimensional (1D) tensor of n points represented by [x1, y1, z1], [x2, y2, z2], ..., [xn, yn, zn] and wherein the x and y coordinates are representative of image pixel coordinates.
- A system according to any one of claims 17 to 19, wherein the facial regions includes two or more of the group including left eye, right eye, left eyebrow, right eyebrow, nose, mouth, lip, forehead, left cheek, right cheek and jaw.
- A system according to any one of claims 17 to 20, wherein at least two three-dimensional (3D) images of the face of the subject are acquired.
- A system according to claim 21, wherein said at least two three-dimensional (3D) images of the face of the subject are overlapping.
- A system according to any one of claims 17 to 22, wherein three three-dimensional (3D) images of the face of the subject are acquired.
- A system according to claim 23, wherein said three three-dimensional (3D) images of the face of the subject are overlapping.
- A system for determining one or more region of the face of a subject having a skin disorder, for treatment of said subject during a cosmetic or therapeutic procedure, said system comprising:
an image acquisition device for acquiring at least one three-dimensional (3D) image of the face of the subject, wherein said three-dimensional image includes three-dimensional data indicative of the surface topography of the face of the subject and includes two-dimensional (2D) data indicative of an optical colour image of the face of a subject;
a processor for generating a face mesh to define the facial surface of the subject from data from the acquired image comprised of a plurality of landmarks, wherein each landmark is assigned a unique identification label;
a pre-trained Artificial Intelligence (AI) engine for determining the facial regions of the face of the subject from the two-dimensional (2D) data, wherein the pre-trained Artificial Intelligence (AI) engine is pre-trained so as to classify the face of a subject into facial regions based on the landmark identification labels and wherein said facial regions are predefined anatomical regions of a face; wherein the processor assigns a facial region identification parameter to each of the landmarks, wherein the facial region identification parameter is indicative of the facial region of the face of the subject in which the landmark resides, and wherein the processor generates a three-dimensional (3D) dataset of the face of the subject from the face mesh data, wherein the points of the three-dimensional (3D) dataset include a facial region identification parameter; and
one or more light source for illuminating the face of the subject by a predetermined light type when the three-dimensional (3D) optical image of the face of the subject is acquired, wherein said predetermined light type provides for identification of a skin disorder present on the face of the subject;
wherein the pre-trained Artificial Intelligence (AI) engine determines the presence of any skin disorders, and the region of any skin disorder, from the two-dimensional (2D) data of the at least one three-dimensional (3D) image of the face of the subject, wherein the pre-trained Artificial Intelligence (AI) engine is trained to identify skin disorders; and wherein the facial region identification parameter is assigned a skin condition parameter indicative of the skin disorder determined by the pre-trained Artificial Intelligence (AI) engine.
- A system according to claim 25, wherein said one or more light source provides a predetermined light type selected from the group including infrared light, white light, red light, orange light, blue light or UV light.
- A system according to claim 25, wherein said one three-dimensional (3D) image of the face of the subject is acquired in the presence of UV light, is acquired in the presence of white light, or is acquired in the presence of orange light.
- An automated system for providing treatment of subject during a cosmetic or therapeutic procedure, said system comprising:a system according to any one of claims 25 to 27;a processor; anda robotic arm for carrying a medical tool for providing treatment to the subject.
- An automated system according to claim 28, wherein the medical tool is a medical laser.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| HK32024091241.2A HK30107322A2 (en) | 2024-05-09 | Facial detection system and process for cosmetic and therapeutic treatment of a subject | |
| HK32024091241.2 | 2024-05-09 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025232815A1 true WO2025232815A1 (en) | 2025-11-13 |
Family
ID=97674518
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2025/093338 Pending WO2025232815A1 (en) | 2024-05-09 | 2025-05-08 | Facial detection system and process for cosmetic and therapeutic treatment of a subject |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025232815A1 (en) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160154994A1 (en) * | 2014-12-02 | 2016-06-02 | Samsung Electronics Co., Ltd. | Method and apparatus for registering face, and method and apparatus for recognizing face |
| CN109065148A (en) * | 2018-06-27 | 2018-12-21 | 南京同仁堂乐家老铺健康科技有限公司 | Intelligent facial diagnosis system and method |
| US20190286884A1 (en) * | 2015-06-24 | 2019-09-19 | Samsung Electronics Co., Ltd. | Face recognition method and apparatus |
| CN113197549A (en) * | 2021-04-29 | 2021-08-03 | 南通大学 | System for diagnosing diseases through face recognition technology |
| US20230351595A1 (en) * | 2019-07-12 | 2023-11-02 | Visionai Gmbh | Method, system and computer program product for generating treatment recommendations based on images of a facial region of a human subject |
| CN117351544A (en) * | 2023-10-10 | 2024-01-05 | 北京纳米能源与系统研究所 | Facial feature acquisition method, device, equipment and medium |
| CN117953560A (en) * | 2023-11-03 | 2024-04-30 | 智慧眼科技股份有限公司 | Facial feature classification device, method, equipment and medium combining traditional Chinese medicine theory |