WO2008064395A1 - Facial feature processing - Google Patents
- Publication number
- WO2008064395A1 (application PCT/AU2007/001169)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- image
- aam
- view
- recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/77—Determining position or orientation of objects or cameras using statistical methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/755—Deformable models or variational models, e.g. snakes or active contours
- G06V10/7557—Deformable models or variational models, e.g. snakes or active contours based on appearance, e.g. active appearance models [AAM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20121—Active appearance model [AAM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- This invention concerns a method, software and system for facial feature processing. It may be applied to face recognition as well as to virtual face synthesis.
- The invention is a method for facial feature processing, comprising the steps of:
- Capturing an image including a face in any pose, and applying a cascade face detecting algorithm to the image to find the location of the face in the image.
- AAM: Active Appearance Model
- Use of the invention may provide a face recognition technique that is robust to the orientation of the face, both horizontally and vertically, within the captured image.
- the view of the face that is synthesized is typically the frontal view.
- A final step in the process is applying Adaptive Principal Component Analysis (APCA), or a similar method, to the frontal view of the face for recognition.
- APCA: Adaptive Principal Component Analysis
- the invention may also be applied to the synthesizing of virtual faces (from an image of any face) for game characters. In this case the synthesized view may be of any desired angle.
- A Viola-Jones based cascade face detecting algorithm may usefully be used to detect the face in real time and thus solve the initialization problem for the Active Appearance Model search.
- a correlation model may be used to estimate the pose angle in the captured image and to reconstruct frontal model parameters.
- the correlation model may be applied after the AAM search interprets the face.
- the AAM may be used to synthesize the front view of the face.
- the invention is software configured for performing the method.
- the invention is computer hardware programmed with the software.
- Fig. 1 is a flow diagram for a process of face detection.
- Figs. 2(a)(i), (b)(i), (c)(i) and (d)(i) are face images initialized for Active Appearance Model searches.
- Figs. 2(a)(ii), (b)(ii), (c)(ii) and (d)(ii) are the resulting face images marked with the results of failed searches.
- Fig. 3 is a block of examples of face images used for training a cascade face detector.
- Figs. 4(a)(i), (b)(i), (c)(i) and (d)(i) are images of faces turned to the left.
- Figs. 4(a)(ii), (b)(ii), (c)(ii) and (d)(ii) are images of faces from the front.
- Figs. 4(a)(iii), (b)(iii), (c)(iii) and (d)(iii) are images of faces turned to the right. In all cases the faces have been labelled with 58 points on the main facial features.
- Fig. 5(a)(i) is a captured image of a face from the front, and Fig. 5(a)(ii) is the frontal face image represented by AAM.
- Fig. 5(b)(i) is a captured image of a face from the left;
- Fig. 5(b)(ii) is the same face image represented by AAM, and
- Fig. 5(b)(iii) is a synthesized version of the face from the front.
- Fig. 6 is a graph comparing the recognition rate for PCA and APCA on original and synthesized images.
- Referring to Fig. 1, the overall process for face recognition will be described.
- the face will likely be unknown and will have unknown horizontal and vertical orientation.
- the image is then applied to a cascading face detection algorithm 120 that has been suitably trained to locate the face position 130.
- a correlation model is used to estimate the horizontal and vertical orientation of the face, and to reconstruct AAM parameters for a frontal view of the face.
- a frontal view of the face is then synthesized 160 using the AAM with the parameters from the correlation model.
- Adaptive Principal Component Analysis (APCA) 170 which is insensitive to illumination and expression changes, is then applied to the frontal view to finally recognize the face 180.
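- The overall pipeline can be summarised in the short sketch below. This is only an illustrative outline of the stages just described; the five callables are hypothetical placeholders, not names used in the patent.

```python
def recognise(image, gallery, detect_face, fit_aam,
              estimate_frontal_params, synthesize_frontal, recognise_apca):
    """Illustrative outline of the Fig. 1 pipeline.

    The five callables are hypothetical placeholders for the stages
    described above; they are passed in rather than assumed to exist.
    """
    box = detect_face(image)                              # 120/130: locate the face
    aam_params = fit_aam(image, box)                      # AAM search initialised at the box
    frontal_params = estimate_frontal_params(aam_params)  # correlation model -> frontal parameters
    frontal_view = synthesize_frontal(frontal_params)     # 160: synthesize the frontal view
    return recognise_apca(frontal_view, gallery)          # 170/180: APCA recognition
```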
- a Cascade Face Detector is trained.
- The face training database includes 4916 hand-labeled faces. Negative training data were randomly collected from the internet and do not contain human faces. After training, the final face detector has 24 layers with a total of 2913 features. Some training face images can be seen in Fig. 3.
- The Viola and Jones method [6] combines weak classifiers based on simple binary features which can be computed extremely fast. Simple rectangular Haar-like features are extracted; face and non-face classification is done using a cascade of successively more complex classifiers which discards non-face regions and only sends face-like candidates to the next layer's classifier. Thus it employs a "coarse-to-fine" strategy as described by Fleuret and Geman [8]. Each layer's classifier is trained by the AdaBoost learning algorithm. AdaBoost is a boosting algorithm which fuses many weak classifiers into a single more powerful classifier.
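- As a minimal sketch of this coarse-to-fine strategy (with an illustrative layer structure only, not the trained detector described above), each layer is an AdaBoost-style weighted vote of weak classifiers, and a window is rejected the first time a layer's score falls below its threshold:

```python
def cascade_classify(window, layers):
    """Coarse-to-fine cascade sketch.

    layers: list of (weak_classifiers, weights, threshold) tuples, where each
    layer is an AdaBoost-style strong classifier: a weighted vote over weak
    classifiers computed from simple Haar-like features of the window.
    Most non-face windows are rejected cheaply by the first few layers.
    """
    for weak_classifiers, weights, threshold in layers:
        score = sum(w * h(window) for h, w in zip(weak_classifiers, weights))
        if score < threshold:
            return False      # rejected early: not a face
    return True               # passed every layer: face candidate
```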
- the cascade face detector finds the location of a human face in an input image and provides a good starting point for the subsequent AAM search.
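- To illustrate how such a detector supplies the AAM starting point, the sketch below uses OpenCV's stock Haar cascade, a Viola-Jones style detector. The cascade file and detection parameters are assumptions for illustration; they are not the 24-layer, 2913-feature detector trained as described above.

```python
import cv2

# Stock Viola-Jones style Haar cascade shipped with OpenCV -- an assumed
# stand-in, not the patent's own trained detector.
CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(image_bgr):
    """Return the largest detected face box (x, y, w, h), or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # The largest candidate box is used to initialise the AAM search.
    return max(faces, key=lambda f: f[2] * f[3])

# Usage (hypothetical file name):
#   box = detect_face(cv2.imread("face.jpg"))
```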
- A facial feature interpretation method such as the Active Appearance Model is a powerful tool to describe deformable object images. It demonstrates that a small number of 2D statistical models are sufficient to capture the shape and appearance of a face from any viewpoint.
- the Active Appearance Model uses principal component analysis (PCA) on the linear subspaces to model both the shape and texture changes of a certain object class.
- the AAM search precisely marks the major facial features, such as mouth, eyes, nose and so on.
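- A minimal sketch of the PCA modelling step is given below, assuming the 58-point landmark annotations described in the next paragraph are already aligned; the texture model and the combined appearance model are built in the same way. The 95% variance threshold and array shapes are illustrative assumptions.

```python
import numpy as np

def build_shape_model(shapes, variance_kept=0.95):
    """PCA shape model from aligned landmark sets.

    shapes: array of shape (n_images, 58, 2) holding the labelled points.
    Any shape x is then approximated as  x ~ mean + P @ b,  where b are the
    shape parameters.  The texture model is built analogously from
    shape-normalised pixel intensities.
    """
    X = shapes.reshape(len(shapes), -1).astype(float)     # (n_images, 116)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = (S ** 2) / max(len(shapes) - 1, 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept)) + 1
    return mean, Vt[:k].T                                  # mean and (116, k) modes

def shape_params(shape, mean, P):
    """Project one labelled shape onto the model to obtain its parameters b."""
    return P.T @ (shape.reshape(-1) - mean)
```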
- Face image samples were collected from 40 individuals for the rotation model training set. For each person there are 3 images in 3 pose views (left 15°, frontal, right 15°) extracted from the FERET face database [7]. Each of these 120 face images was marked with 58 points around the main features including the eyes, mouth, nose, eyebrows and chin. Some of the labeled training face images can be seen in Fig. 4.
- The combined AdaBoost-based cascade face detector and AAM search was applied to the rest of the images of the FERET b-series dataset, where each person has 7 pose angles, ranging from left 40°, 25°, 15°, 0° to right 15°, 25°, 40°.
- the AAM search on the face images achieved 95% search accuracy rate.
- Parameters c_0, c_c and c_s are learned from the successful AAM search samples.
- c_0, c_c and c_s are vectors which are learned from the training data by the AAM search. (Here only head turning is considered; head nodding can be dealt with in a similar way.) Given a new face image with parameters c, the orientation can be estimated as follows.
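- The equation referred to by "as follows" is not reproduced in this text. A standard formulation consistent with the description, and assumed here rather than quoted from the patent, is the view-based model c(θ) ≈ c_0 + c_c·cos θ + c_s·sin θ, from which the turn angle of a new face and its frontal-view parameters can be recovered:

```python
import numpy as np

def fit_pose_model(params, angles_deg):
    """Learn c_0, c_c, c_s from AAM parameter vectors of faces at known angles.

    params:     (n_samples, n_params) array of AAM parameter vectors c
    angles_deg: (n_samples,) head-turn angles, e.g. -15, 0, +15
    Assumed model:  c(theta) ~ c_0 + c_c*cos(theta) + c_s*sin(theta)
    """
    t = np.deg2rad(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
    coeffs, *_ = np.linalg.lstsq(A, params, rcond=None)   # rows: c_0, c_c, c_s
    c0, cc, cs = coeffs
    return c0, cc, cs

def estimate_angle(c, c0, cc, cs):
    """Estimate the head-turn angle (radians) for a new parameter vector c."""
    xy, *_ = np.linalg.lstsq(np.column_stack([cc, cs]), c - c0, rcond=None)
    return np.arctan2(xy[1], xy[0])

def frontal_params(c, c0, cc, cs):
    """Reconstruct parameters for a frontal (theta = 0) view of the same face."""
    theta = estimate_angle(c, c0, cc, cs)
    residual = c - (c0 + cc * np.cos(theta) + cs * np.sin(theta))
    return c0 + cc + residual      # cos(0) = 1, sin(0) = 0, plus the identity residual
```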
- the model can be used to synthesize new views.
- Fig. 5 shows original face images from the front and turned, the face images represented by AAM in each case, and a synthesized front view of the turned face for comparison.
- The APCA face recognition model was trained using the Asian Face Database as in [4]. Face images were then chosen from 46 persons with good AAM search results. Both PCA and APCA were applied to the original face images and to the synthesized frontal images for testing. The frontal view images were registered into a gallery and the high pose angle images were used for testing. The recognition results are shown in Fig. 6. It can be seen from Fig. 6 that the recognition rates of PCA and APCA on synthesized images are much higher than on the original high pose angle images. The recognition rate increases by up to 500% for images with a view angle of 25°. Even for smaller rotation angles of less than 15°, the accuracy increases by up to 20%. Note that the recognition performance of APCA is always significantly higher than that of PCA, which is consistent with the results in [4].
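- As an illustrative baseline corresponding to the PCA curve in Fig. 6 (not the APCA method of [4], which additionally adapts the eigenspace to be insensitive to illumination and expression changes), a gallery of frontal views can be matched against synthesized frontal probes by nearest neighbour in an eigenface space. The number of components is an assumed value.

```python
import numpy as np

def train_eigenfaces(gallery_images, n_components=40):
    """gallery_images: (n_people, h, w) frontal images, one per enrolled person."""
    X = gallery_images.reshape(len(gallery_images), -1).astype(float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:n_components]                      # eigenfaces
    gallery_codes = (X - mean) @ P.T           # one code per enrolled identity
    return mean, P, gallery_codes

def recognise(probe_image, mean, P, gallery_codes):
    """Return the index of the closest gallery identity for a frontal probe."""
    code = (probe_image.reshape(-1).astype(float) - mean) @ P.T
    return int(np.argmin(np.linalg.norm(gallery_codes - code, axis=1)))
```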
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Geometry (AREA)
- Human Computer Interaction (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Artificial Intelligence (AREA)
- Computer Graphics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2007327540A1 (en) | 2006-08-18 | 2007-08-17 | Facial feature processing |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2006904521A0 (en) | 2006-08-18 | | Robust Face Recognition |
| AU2006904521 | 2006-08-18 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2008064395A1 (en) | 2008-06-05 |
Family
ID=39467331
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/AU2007/001169 WO2008064395A1 (en), Ceased | Facial feature processing | 2006-08-18 | 2007-08-17 |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AU2007327540A1 (en) |
| WO (1) | WO2008064395A1 (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2008144825A1 (en) * | 2007-06-01 | 2008-12-04 | National Ict Australia Limited | Face recognition |
| WO2013163699A1 (en) * | 2012-05-04 | 2013-11-07 | Commonwealth Scientific And Industrial Research Organisation | System and method for eye alignment in video |
| WO2016110030A1 (en) * | 2015-01-09 | 2016-07-14 | 杭州海康威视数字技术股份有限公司 | Retrieval system and method for face image |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1359536A2 (en) * | 2002-04-27 | 2003-11-05 | Samsung Electronics Co., Ltd. | Face recognition method and apparatus using component-based face descriptor |
| US7050607B2 (en) * | 2001-12-08 | 2006-05-23 | Microsoft Corp. | System and method for multi-view face detection |
| CN1811793A (en) * | 2006-03-02 | 2006-08-02 | 复旦大学 | Automatic positioning method for characteristic point of human faces |
- 2007
- 2007-08-17 WO PCT/AU2007/001169 patent/WO2008064395A1/en not_active Ceased
- 2007-08-17 AU AU2007327540A patent/AU2007327540A1/en not_active Abandoned
Also Published As
| Publication number | Publication date |
|---|---|
| AU2007327540A1 (en) | 2008-06-05 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| Shan et al. | Face recognition robust to head pose from one sample image | |
| Yang et al. | An efficient LDA algorithm for face recognition | |
| Kumar et al. | Real time face recognition using adaboost improved fast PCA algorithm | |
| Wang et al. | Video-based face recognition: A survey | |
| Bagherian et al. | Facial feature extraction for face recognition: a review | |
| Patel et al. | Comparative analysis of face recognition approaches: a survey | |
| US20060056667A1 (en) | Identifying faces from multiple images acquired from widely separated viewpoints | |
| Matin et al. | Recognition of an individual using the unique features of human face | |
| WO2008064395A1 (en) | Facial feature processing | |
| Kang et al. | A comparison of face verification algorithms using appearance models. | |
| Zhang et al. | Component-based cascade linear discriminant analysis for face recognition | |
| Rajalakshmi et al. | A review on classifiers used in face recognition methods under pose and illumination variation | |
| Savvides et al. | Face recognition | |
| Mahmoud et al. | An effective hybrid method for face detection | |
| Martiriggiano et al. | Facial feature extraction by kernel independent component analysis | |
| Peng et al. | A novel scheme of face verification using active appearance models | |
| Gajame et al. | Face detection with skin color segmentation & recognition using genetic algorithm | |
| Bebis et al. | Genetic search for face detection and verification | |
| Urschler et al. | Robust facial component detection for face alignment applications | |
| Jiang et al. | An improved random sampling LDA for face recognition | |
| Yamaguchi | Face recognition technology and its real-world application | |
| Hamid et al. | Radius Based Block LBP for Facial Expression Recognition | |
| Abdelwahab et al. | A novel algorithm for simultaneous face detection and recognition | |
| Gupta | Face Recognition Techniques-A Review | |
| Sabrin et al. | An intensity and size invariant real time face recognition approach |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07870181; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2007327540; Country of ref document: AU |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2007327540; Country of ref document: AU; Date of ref document: 20070817; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: RU |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 07870181; Country of ref document: EP; Kind code of ref document: A1 |