
WO2008064395A1 - Facial feature processing - Google Patents

Facial feature processing

Info

Publication number
WO2008064395A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
aam
view
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/AU2007/001169
Other languages
French (fr)
Inventor
Brian Lovell
Ting Shan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Data61
Original Assignee
National ICT Australia Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2006904521A external-priority patent/AU2006904521A0/en
Application filed by National ICT Australia Ltd filed Critical National ICT Australia Ltd
Priority to AU2007327540A priority Critical patent/AU2007327540A1/en
Publication of WO2008064395A1 publication Critical patent/WO2008064395A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/77 Determining position or orientation of objects or cameras using statistical methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/755 Deformable models or variational models, e.g. snakes or active contours
    • G06V10/7557 Deformable models or variational models, e.g. snakes or active contours based on appearance, e.g. active appearance models [AAM]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20121 Active appearance model [AAM]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

This invention concerns a method, software and system for facial feature processing. It may be applied to face recognition as well as to virtual face synthesis. The method comprises the steps of: Capturing an image including a face in any pose. Applying a cascade face detecting algorithm to the image to find the location of the face in the image. Applying an Active Appearance Model (AAM) search to interpret the face located in the image. Estimating the (horizontal and vertical) orientation of the face. Synthesizing a view of the face from any angle, or applying Adaptive Principal Component Analysis (APCA) to a synthesized frontal view of the face for recognition.

Description

Title
Facial Feature Processing
Technical Field
This invention concerns a method, software and system for facial feature processing. It may be applied to face recognition as well as to virtual face synthesis.
Background Art
Most face recognition systems only work well under constrained conditions. In particular, the illumination conditions, facial expressions, and head pose must be tightly controlled for good recognition performance. For instance, eigenface-derived face recognition algorithms generally perform well only with front face images.
Disclosure of the Invention
The invention is a method for facial feature processing, comprising the steps of:
Capturing an image including a face in any pose. Applying a cascade face detecting algorithm to the image to find the location of the face in the image.
Applying an Active Appearance Model (AAM) to interpret the face located in the image.
Estimating the (horizontal and vertical) orientation of the face. And, subsequently, synthesizing a view of the face from another angle.
Use of the invention may provide a face recognition technique that is robust to the orientation of the face, both horizontally and vertically, within the captured image. In this case the view of the face that is synthesized is typically the frontal view. A final step in the process is applying Adaptive Principal Component Analysis (APCA), or a similar method, to the frontal view of the face for recognition. Experiments show that the approach can improve recognition rates on face images by up to a factor of five compared to standard PCA over a wide range of head poses. The invention may also be applied to the synthesizing of virtual faces (from an image of any face) for game characters. In this case the synthesized view may be of any desired angle.
A Viola-Jones based cascade face detecting algorithm may usefully be used to detect the face in real-time and thus solve the initialization problem for the Active Appearance Model search.
A correlation model may be used to estimate the pose angle in the captured image and to reconstruct frontal model parameters. In this case the correlation model may be applied after the AAM search interprets the face. Following application of the correlation model, the AAM may be used to synthesize the front view of the face.
In another aspect the invention is software configured for performing the method.
In a further aspect the invention is computer hardware programmed with the software.
Brief Description of the Drawings
An example of the invention will now be described with reference to the accompanying drawings, in which:
Fig. 1 is a flow diagram for a process of face detection.
Figs. 2(a)(i), (b)(i), (c)(i) and (d)(i) are face images initialized for Active Appearance Model (AAM) searches, and Figs. 2(a)(ii), (b)(ii), (c)(ii) and (d)(ii) are the resulting face images marked with the results of failed searches.
Fig. 3 is a block of examples of face images used for training a cascade face detector.
Figs. 4(a)(i), (b)(i), (c)(i) and (d)(i) are images of faces turned to the left. Figs. 4(a)(ii), (b)(ii), (c)(ii) and (d)(ii) are images of faces from the front. And Figs. 4(a)(iii), (b)(iii), (c)(iii) and (d)(iii) are images of faces turned to the right. In all cases the faces have been labelled with 58 points on the main facial features.
Fig. 5(a)(i) is a captured image of a face from the front and Fig. 5(a)(ii) is the frontal face image represented by AAM. Fig. 5(b)(i) is a captured image of a face from the left; Fig. 5(b)(ii) is the same face image represented by AAM, and Fig. 5(b)(iii) is a synthesized version of the face from the front.
Fig. 6 is a graph comparing the recognition rate for PCA and APCA on original and synthesized images.
Best Modes of the Invention
Referring first to Fig. 1, the overall process for face recognition will be described. First an image of a face is captured 100, say from a surveillance camera. The face will likely be unknown and will have unknown horizontal and vertical orientation.
The image is then applied to a cascading face detection algorithm 120 that has been suitably trained to locate the face position 130.
The output of the cascading algorithm is applied to an Active Appearance Model (AAM) 140 to precisely interpret the facial features and determine the model shape and texture parameters.
A correlation model is used to estimate the horizontal and vertical orientation of the face, and to reconstruct AAM parameters for a frontal view of the face.
A frontal view of the face is then synthesized 160 using the AAM with the parameters from the correlation model.
Adaptive Principal Component Analysis (APCA) 170, which is insensitive to illumination and expression changes, is then applied to the frontal view to finally recognize the face 180.
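To make the pipeline concrete, the flow of Fig. 1 can be sketched in a few lines of Python. This is a minimal sketch only: the aam, correlation_model and apca objects and their methods are hypothetical placeholders for the components described in the sections below, while the cascade detector call uses the real OpenCV API.

```python
import cv2

def recognise_face(image_bgr, face_cascade, aam, correlation_model, apca):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Steps 120/130: the cascade face detector locates the face region.
    boxes = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    # Step 140: the AAM search is initialised inside the detected region
    # and returns the model parameter vector c (hypothetical interface).
    c = aam.search(gray, init_box=boxes[0])
    # Steps 150/160: estimate the pose and reconstruct frontal parameters.
    theta = correlation_model.estimate_pose(c)
    frontal_image = aam.synthesize(correlation_model.frontalise(c, theta))
    # Steps 170/180: APCA recognition on the synthesized frontal view.
    return apca.classify(frontal_image)
```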
This process will now be described in greater detail, and some experimental results will be presented.
Training of the Cascade Face Detector 130
The initialization of the Active Appearance Model (AAM) search is a critical problem since the AAM search is a local gradient ascent and some AAM searches have failed in the past due to poor initialization; examples of failed searches can be seen in Fig. 2.
To address this issue, a cascade face detector is trained. The face training database includes 4916 hand-labelled faces; negative training data were randomly collected from the internet and contain no human faces. After training, the final face detector has 24 layers with a total of 2913 features. Some training face images can be seen in Fig. 3.
Cascade Filtering 120
The Viola and Jones method [6] combines weak classifiers based on simple binary features which can be computed extremely fast. Simple rectangular Haar-like features are extracted; face and non-face classification is done using a cascade of successively more complex classifiers which discards non-face regions and only sends face-like candidates to the next layer's classifier. It thus employs a "coarse-to-fine" strategy as described by Fleuret and Geman [8]. Each layer's classifier is trained with the AdaBoost learning algorithm, a boosting algorithm which can fuse many weak classifiers into a single more powerful classifier.
The cascade face detector finds the location of a human face in an input image and provides a good starting point for the subsequent AAM search.
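As a working illustration of this stage, the sketch below runs OpenCV's stock Viola-Jones Haar cascade, an off-the-shelf stand-in for the 24-layer, 2913-feature detector trained above; the image filename is hypothetical.

```python
import cv2

# Load the Haar cascade bundled with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("face.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each detected bounding box is a candidate starting point for the AAM search.
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(30, 30))
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```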
Active Appearance Model (AAM) 140
Facial feature interpretation with Active Appearance Models is a powerful tool for describing deformable object images. A small number of 2D statistical models are sufficient to capture the shape and appearance of a face from any viewpoint. The Active Appearance Model uses principal component analysis (PCA) on the linear subspaces to model both the shape and texture changes of a certain object class. Given a collection of training images for a certain object class where the feature points have been manually marked, a shape and texture can be represented by applying PCA to the sample shape and texture distributions as:
$x = \bar{x} + P_s c$ (1)
and
$g = \bar{g} + P_g c$ (2)
where $\bar{x}$ is the mean shape, $\bar{g}$ is the mean texture and $P_s$, $P_g$ are matrices describing the respective shape and texture variations learned from the training sets. The parameters $c$ are used to control the shape and texture change.
The AAM search precisely marks the major facial features, such as mouth, eyes, nose and so on.
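The linear model of equations (1) and (2) can be sketched with plain PCA in NumPy. This illustrates only the statistical model, not the full AAM search, which additionally iterates the parameters c to minimise the texture residual; the function names are illustrative.

```python
import numpy as np

def fit_pca_model(samples, n_modes):
    """samples: (n_examples, n_dims) aligned shape (or texture) vectors."""
    mean = samples.mean(axis=0)
    # Principal modes of variation of the centred data become columns of P.
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    P = vt[:n_modes].T                       # plays the role of P_s or P_g
    return mean, P

def reconstruct(mean, P, c):
    # Equations (1) and (2): x = x_bar + P_s c,  g = g_bar + P_g c.
    return mean + P @ c

def project(mean, P, sample):
    # Least-squares estimate of the parameters c for a new sample.
    return P.T @ (sample - mean)
```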
Experimental Trials
In trials, face image samples were collected from 40 individuals for the rotation model training set. For each person there are 3 images in 3 pose views (left 15°, frontal, right 15°) extracted from the Feret face database [7]. Each of these 120 face images was marked with 58 points around the main features including the eyes, mouth, nose, eyebrows and chin. Some of the labeled training face images can be seen in Fig. 4.
The combined AdaBoost-based cascade face detector and AAM search was applied to the rest of the images of the Feret b-series dataset, where each person has 7 pose angles, ranging from left 40°, 25°, 15°, through 0°, to right 15°, 25°, 40°. The AAM search on the face images achieved a 95% search accuracy rate. The parameters $c_0$, $c_c$ and $c_s$ are learned from the successful AAM search samples.
Correlation Model and Pose Estimation 160
The method of Cootes et al [2] assumes that the model parameter $c$ is related to the viewing angle, $\theta$, approximately by:
$c = c_0 + c_c \cos(\theta) + c_s \sin(\theta)$ (3)
where $c_0$, $c_c$ and $c_s$ are vectors which are learned from the training data by the AAM search. (Here only head turning is considered; head nodding can be dealt with in a similar way.) Given a new face image with parameters $c$, the orientation can be estimated as follows.
We first transform equation (3) to:
$(c_c \mid c_s) \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} = c - c_0$ (4)
Where $R^{-1}$ is the left pseudo-inverse of the matrix $(c_c \mid c_s)$, (4) becomes:
$\begin{pmatrix} x_a \\ y_a \end{pmatrix} = R^{-1}(c - c_0)$ (5)
The best estimate of the orientation is then $\theta = \tan^{-1}(y_a / x_a)$.
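The fit and the estimate reduce to a few lines of linear algebra. The NumPy sketch below (illustrative names, angles in radians) learns $c_0$, $c_c$ and $c_s$ by least squares from (parameter, angle) training pairs and recovers $\theta$ through the pseudo-inverse, as in equations (3)-(5).

```python
import numpy as np

def fit_rotation_model(C, thetas):
    """C: (n, d) AAM parameter vectors; thetas: (n,) known view angles."""
    A = np.column_stack([np.ones_like(thetas), np.cos(thetas), np.sin(thetas)])
    coeffs, *_ = np.linalg.lstsq(A, C, rcond=None)   # rows are c0, cc, cs
    return coeffs[0], coeffs[1], coeffs[2]

def estimate_pose(c, c0, cc, cs):
    R = np.column_stack([cc, cs])          # the matrix (cc | cs)
    xa, ya = np.linalg.pinv(R) @ (c - c0)  # equation (5)
    return np.arctan2(ya, xa)              # theta = tan^-1(ya / xa)
```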
Frontal-View Synthesis
After the angle θ has been estimated, the model can be used to synthesize new views.
For example, to synthesize a frontal view face image, which will be used for face recognition, let $c_{res}$ be the residual vector which is not explained by the rotation model:
$c_{res} = c - (c_0 + c_c \cos(\theta) + c_s \sin(\theta))$ (6)
To reconstruct at a new angle, $\alpha$, we simply use the parameters:
$c(\alpha) = c_0 + c_c \cos(\alpha) + c_s \sin(\alpha) + c_{res}$ (7)
Here $\alpha$ is 0, so this becomes:
$c(0) = c_0 + c_c + c_{res}$ (8)
The shape and texture at angle 0° can be calculated by:
$x(0) = \bar{x} + P_s c(0)$ (9)
and
$g(0) = \bar{g} + P_g c(0)$ (10)
The new frontal face image can then be reconstructed. Fig. 5 shows original face images from the front and turned, the face images represented by the AAM in each case, and a synthesized front view of the turned face presented for comparison.
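A NumPy sketch of the synthesis step, equations (6)-(10), reusing the shape and texture bases from the PCA sketch above; all names are illustrative.

```python
import numpy as np

def synthesize_frontal(c, theta, c0, cc, cs, x_mean, Ps, g_mean, Pg):
    # Equation (6): residual not explained by the rotation model.
    c_res = c - (c0 + cc * np.cos(theta) + cs * np.sin(theta))
    # Equations (7) and (8) with alpha = 0.
    c_frontal = c0 + cc + c_res
    # Equations (9) and (10): frontal shape and texture.
    x0 = x_mean + Ps @ c_frontal
    g0 = g_mean + Pg @ c_frontal
    return x0, g0
```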
Adaptive Principal Component Analysis (APCA) 170
Adaptive Principal Component Analysis (APCA) inherits merits from both Principal Component Analysis (PCA) and the Fisher Linear Discriminant (FLD), and operates by warping the face subspace according to the within-class and between-class covariance. It consists of four steps:
Subspace Projection:
Applying PCA to project face images into the face subspace to generate the m-dimensional feature vectors.
Whitening Transformation:
The subspace is whitened according to the eigenvalues of the subspace with a whitening power $p$:
$\mathrm{diag}\{\lambda_1^{-2p}, \lambda_2^{-2p}, \ldots, \lambda_m^{-2p}\}$ (11)
Filtering the Eigenfaces:
Eigen-features are weighted according to the identification-to-variation value (ITV) with a filtering power $q$:
$\gamma = \mathrm{diag}\{ITV_1^{q}, ITV_2^{q}, \ldots, ITV_m^{q}\}$ (12)
Optimizing the cost function:
Minimize the cost function according to the combination of the error rate and the ratio of between-class distance to within-class distance. (13)
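The warping of steps (11) and (12) amounts to two diagonal scalings of the PCA feature space, as in the NumPy sketch below. The powers p and q here are hypothetical placeholder values; they would be tuned by optimizing the cost function (13).

```python
import numpy as np

def apca_transform(features, eigenvalues, itv, p=0.25, q=0.5):
    """features: (n, m) PCA-projected face vectors; itv: (m,) ITV values.

    p and q are illustrative values, to be tuned via the cost function (13).
    """
    whiten = np.diag(eigenvalues ** (-2.0 * p))   # equation (11)
    filt = np.diag(itv ** q)                      # equation (12)
    return features @ whiten @ filt

# Recognition then proceeds by nearest-neighbour matching in the warped subspace.
```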
High Pose Angle Face Recognition Results
The APCA face recognition model was trained using the Asian Face Database as in [4]. Face images were then chosen from 46 persons with good AAM search results. Both PCA and APCA were applied to the original face images and to the synthesized frontal images for testing. The frontal view images were registered into a gallery and the high pose angle images were used for testing. The recognition results are shown in Fig. 6. It can be seen from Fig. 6 that the recognition rates of PCA and APCA on synthesized images are much higher than those on the original high pose angle images. The recognition rate increases by up to 500% for images with a view angle of 25°. Even for rotation angles smaller than 15°, the accuracy increases by up to 20%. Note that the recognition performance of APCA is always significantly higher than that of PCA, which is consistent with the results in [4].
Although the invention has been described with reference to a particular example, it should be appreciated that it could be exemplified in many other forms and in combination with other features not mentioned above. For instance, although the example refers to synthesizing the front face view, any other pose angle could be synthesized.
References
[1] A. Pentland, B. Moghaddam and T. Starner, "View-based and Modular Eigenspaces for Face Recognition", Proc. of the IEEE Conference on CVPR, 1994
[2] T.F. Cootes, K. Walker and C.J. Taylor, "View-based Active Appearance Models", Proc. of the Fourth IEEE International Conference on AFGR, 2000
[3] V. Blanz and T. Vetter, "Face Recognition Based on Fitting a 3D Morphable Model", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 9, pp. 1063-1074, 2003
[4] Shaokang Chen and Brian C. Lovell, "Illumination and Expression Invariant Face Recognition with One Sample Image", Proc. of the 17th International Conference on Pattern Recognition (ICPR'04), Vol. 1, pp. 300-303, 2004
[5] T.F. Cootes and C.J. Taylor, "Active Appearance Models", IEEE PAMI, Vol. 23, No. 6, pp. 681-685, 2001
[6] Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features", Proc. of CVPR, pp. 511-518, 2001
[7] http://www.itl.nist.gov/iad/humanid/feret/ [last visited 20-Dec-2005]
[8] F. Fleuret and D. Geman, "Coarse-to-Fine Face Detection", International Journal of Computer Vision, 41:85-107, 2001
[9] Xiujuan Chai, Shiguang Shan and Wen Gao, "Pose Normalization for Robust Face Recognition Based on Statistical Affine Transformation", Proc. of the Fourth International Conference on Information, Communications and Signal Processing and the Fourth Pacific Rim Conference on Multimedia, 2003

Claims

Claims
1. A method for facial feature processing, comprising the steps of: capturing an image including a face in any pose; applying a cascade face detecting algorithm to the image to find the location of the face in the image; applying an Active Appearance Model (AAM) search to interpret the face located in the image; estimating the (horizontal and vertical) orientation of the face; and subsequently synthesizing a view of the face from another angle.
2. A method according to claim 1 wherein the view of the face that is synthesized is the frontal view.
3. A method according to claim 2, comprising the further step of applying Adaptive Principal Component Analysis (APCA) or a similar method to the frontal view of the face for recognition.
4. A method according to claim 1, applied to the synthesizing of virtual faces from an image of any face.
5. A method according to any preceding claim, wherein a Viola-Jones based cascade face detecting algorithm is used to detect the face in real-time.
6. A method according to any preceding claim, wherein a correlation model is used to estimate the pose angle in the captured image and to reconstruct frontal model parameters.
7. A method according to claim 6, wherein the correlation model is applied after the AAM interprets the face.
8. A method according to claim 6 or 7, wherein following application of the correlation model the AAM is used to synthesize the front view of the face.
9. Software configured for performing the method according to any preceding claim.
10. Computer hardware programmed with the software according to claim 9.
PCT/AU2007/001169 2006-08-18 2007-08-17 Facial feature processing Ceased WO2008064395A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2007327540A AU2007327540A1 (en) 2006-08-18 2007-08-17 Facial feature processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2006904521A AU2006904521A0 (en) 2006-08-18 Robust Face Recognition
AU2006904521 2006-08-18

Publications (1)

Publication Number Publication Date
WO2008064395A1 true WO2008064395A1 (en) 2008-06-05

Family

ID=39467331

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2007/001169 Ceased WO2008064395A1 (en) 2006-08-18 2007-08-17 Facial feature processing

Country Status (2)

Country Link
AU (1) AU2007327540A1 (en)
WO (1) WO2008064395A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008144825A1 (en) * 2007-06-01 2008-12-04 National Ict Australia Limited Face recognition
WO2013163699A1 (en) * 2012-05-04 2013-11-07 Commonwealth Scientific And Industrial Research Organisation System and method for eye alignment in video
WO2016110030A1 (en) * 2015-01-09 2016-07-14 Hangzhou Hikvision Digital Technology Co., Ltd. Retrieval system and method for face image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7050607B2 (en) * 2001-12-08 2006-05-23 Microsoft Corp. System and method for multi-view face detection
EP1359536A2 (en) * 2002-04-27 2003-11-05 Samsung Electronics Co., Ltd. Face recognition method and apparatus using component-based face descriptor
CN1811793A * 2006-03-02 2006-08-02 Fudan University Automatic positioning method for characteristic point of human faces

Also Published As

Publication number Publication date
AU2007327540A1 (en) 2008-06-05

Similar Documents

Publication Publication Date Title
Shan et al. Face recognition robust to head pose from one sample image
Yang et al. An efficient LDA algorithm for face recognition
Kumar et al. Real time face recognition using adaboost improved fast PCA algorithm
Wang et al. Video-based face recognition: A survey
Bagherian et al. Facial feature extraction for face recognition: a review
Patel et al. Comparative analysis of face recognition approaches: a survey
US20060056667A1 (en) Identifying faces from multiple images acquired from widely separated viewpoints
Matin et al. Recognition of an individual using the unique features of human face
WO2008064395A1 (en) Facial feature processing
Kang et al. A comparison of face verification algorithms using appearance models.
Zhang et al. Component-based cascade linear discriminant analysis for face recognition
Rajalakshmi et al. A review on classifiers used in face recognition methods under pose and illumination variation
Savvides et al. Face recognition
Mahmoud et al. An effective hybrid method for face detection
Martiriggiano et al. Facial feature extraction by kernel independent component analysis
Peng et al. A novel scheme of face verification using active appearance models
Gajame et al. Face detection with skin color segmentation & recognition using genetic algorithm
Bebis et al. Genetic search for face detection and verification
Urschler et al. Robust facial component detection for face alignment applications
Jiang et al. An improved random sampling LDA for face recognition
Yamaguchi Face recognition technology and its real-world application
Hamid et al. Radius Based Block LBP for Facial Expression Recognition
Abdelwahab et al. A novel algorithm for simultaneous face detection and recognition
Gupta Face Recognition Techniques-A Review
Sabrin et al. An intensity and size invariant real time face recognition approach

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07870181

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2007327540

Country of ref document: AU

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2007327540

Country of ref document: AU

Date of ref document: 20070817

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 07870181

Country of ref document: EP

Kind code of ref document: A1