
WO2003088131A2 - Method and apparatus for face description and recognition using high-order eigencomponents - Google Patents

Method and apparatus for face description and recognition using high-order eigencomponents

Info

Publication number
WO2003088131A2
WO2003088131A2 · PCT/JP2003/004550
Authority
WO
WIPO (PCT)
Prior art keywords
order
component
face
facial
arrangement operable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2003/004550
Other languages
French (fr)
Other versions
WO2003088131A3 (en)
Inventor
Yongsheng Gao
Chak Joo Lee
Sheng Mei Shen
Zhongyang Huang
Takanori Senoh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd
Priority to AU2003226455A priority Critical patent/AU2003226455A1/en
Priority to KR10-2004-7012107A priority patent/KR20040101221A/en
Publication of WO2003088131A2 publication Critical patent/WO2003088131A2/en
Publication of WO2003088131A3 publication Critical patent/WO2003088131A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods


Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

High-order eigencomponents are proposed to describe detailed regional information of a particular facial component. A formula is given to calculate the high-order transform matrix used for projection. The high-order component features can be used individually to describe a facial component or combined with the first-order component features. Since detailed regional identity information can be revealed by high-order component features, the combination of first-order and high-order component features of the eyes, eyebrows, nose, mouth and outline, with attention weights based on the significance of the identity information in the corresponding area, has better face description capability than first-order eigenface features or combined first-order and high-order eigenface features.

Description

DESCRIPTION
Method and Apparatus for Face Description and Recognition using High-order Eigencomponents
Technical Field
The present invention relates to a method and apparatus for face description and recognition using high-order eigencomponents. The invention can be used in face description and recognition for content-based image retrieval; human face identification and verification for banking, security systems and videophones; surveillance and tracking; and digital library and Internet multimedia database applications.
Background Art
Human face perception is an active research area in the computer vision community. Face recognition will play an important role in multimedia database search and many other applications, and in recent years considerable progress has been made on the problems of face detection and recognition. Many techniques have been proposed; neural networks, elastic template matching, Karhunen-Loeve expansion, algebraic moments and isodensity lines are typical methods. Among these, principal component analysis (PCA), or Karhunen-Loeve expansion, is an important branch. The eigenface method is derived from PCA; it is convenient to compute and gives consistent identification accuracy. Prior work showed that the PCA approach spontaneously dissociates different types of information: eigenvectors with large eigenvalues capture information that is common to subsets of faces, while eigenvectors with small eigenvalues capture information specific to individual faces. Studies show that only the information contained in the eigenvectors with large eigenvalues generalizes to new faces that were not in the training set.
The advantage of the eigenface method is that eigenvectors with large eigenvalues convey information about the basic shape and structure of faces, so features extracted from them can describe the major characteristics of human faces. However, this is also the weakness of PCA: if we consider only the features extracted from eigenvectors with large eigenvalues, we cannot capture the details that distinguish an individual face. If these individual details could be described alongside the common features of human faces, the description of human faces would be more accurate. Another drawback of the eigenface method is that all facial component areas contribute equally. Identity information is not evenly distributed over the whole face; it is mainly located in certain facial areas, such as the eyes, eyebrows, nose, mouth and outline. The cheek area contains less identity information and is relatively sensitive to changes in lighting conditions and facial expression. If the identity significance of each facial component can be exploited, human face recognition can be more accurate.
Disclosure of Invention
The eigenface method is effective at extracting common face characteristics such as shape and structure. To recover the facial details that are lost when the eigenvectors with small eigenvalues are truncated, faces are first reconstructed from the features of the eigenvectors with large eigenvalues. The residue images, i.e. the differences between the original images and the reconstructions, can then be obtained. These residue faces can be regarded as high-pass-filtered face images, and they still contain rich detailed information about the individual face. To describe them, the eigenface method can be applied to the residue faces again; the resulting eigenvectors with large eigenvalues reveal the common characteristics of the residue faces. With this method, high-order eigenvectors with large eigenvalues can be obtained to extract corresponding features, and the combination of features from different-order eigenfaces can describe faces effectively.
Similarly, first-order and higher-order principal components (eigencomponents) of facial components can be obtained to describe the characteristics of the corresponding facial areas, and the combination of these features from different-order eigencomponents can describe individual facial components efficiently. Finally, human faces can be represented by a combination of different-order eigencomponents with different attention weights. In different application fields, different components have different strengths and weaknesses, so each component should be assigned a different weight.
The present invention provides a method to interpret human faces that can be used for image retrieval (query by face example), person identification and verification, surveillance and tracking, and other face recognition applications. To describe face characteristics, the concept of high-order eigencomponents is proposed, based on our observation and derivation. First, all face images are normalized to a standard size. Then the vertical location of the eyes is calculated and the face is shifted to a suitable position. When all these pre-processing procedures are finished, the eigencomponents and high-order eigencomponents can be derived from a set of training face images. To query a face image in a face database, the features of the image are calculated by projecting it onto the selected eigencomponents and high-order eigencomponents. The combination of these features can be used to describe faces, and with this description the Euclidean distance can be used for similarity measurement. To improve similarity accuracy, the features should be weighted.
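As a concrete illustration of this pre-processing, a minimal NumPy sketch follows. The standard size, target eye row, and component bounding boxes are illustrative assumptions (the patent does not specify them), and the eye locator itself is outside the sketch.

```python
import numpy as np

STANDARD_SIZE = (56, 46)   # assumed (height, width) of a normalized face
TARGET_EYE_ROW = 20        # assumed standard vertical eye position

def normalize_face(img: np.ndarray) -> np.ndarray:
    """Resize a grayscale face to the standard size (nearest-neighbour for brevity)."""
    h, w = img.shape
    rows = np.arange(STANDARD_SIZE[0]) * h // STANDARD_SIZE[0]
    cols = np.arange(STANDARD_SIZE[1]) * w // STANDARD_SIZE[1]
    return img[np.ix_(rows, cols)].astype(np.float64)

def align_by_eyes(img: np.ndarray, eye_row: int) -> np.ndarray:
    """Shift the face vertically so the detected eye row lands at TARGET_EYE_ROW.
    eye_row would come from an eye locator; np.roll wraps instead of padding,
    which is a simplification acceptable only for this sketch."""
    return np.roll(img, TARGET_EYE_ROW - eye_row, axis=0)

# Assumed component windows (top, bottom, left, right) within the normalized face.
COMPONENT_BOXES = {
    "left_eye":  (16, 26,  4, 20),
    "right_eye": (16, 26, 26, 42),
    "nose":      (24, 40, 16, 30),
    "mouth":     (38, 50, 12, 34),
}

def extract_component(img: np.ndarray, name: str) -> np.ndarray:
    """Crop one component window and raster-scan it into a 1-D vector (Phi)."""
    t, b, l, r = COMPONENT_BOXES[name]
    return img[t:b, l:r].ravel()
```

Components extracted this way become the raster-scanned vectors on which the eigencomponent derivation below operates.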
Brief Description of Drawings
Figure 1 shows a flowchart of the procedure for computing the first-order feature $W^{(1)}$.
Figure 2 shows a flowchart of the procedure for computing the $i$-th-order eigencomponents $U^{(i)}$ and the corresponding transform matrix $U_i$.
Figure 3 shows a flowchart for the training mode operation.
Figure 4 shows a flowchart for the test mode operation.
Best Mode for Carrying Out the Invention
The present invention provides a method to extract higher-order eigencomponent features and to represent a face by combining component features of different orders.
With the normalized face images, the eigencomponents and high-order eigencomponents can be obtained as follows.
First, apply a preset block to the normalized face images to extract each facial component (such as the eyes, eyebrows, nose, mouth and outline).
Consider a facial component $\Phi_i$, a one-dimensional vector obtained by raster-scanning the facial component, and define $\Psi$ as the average component:

$$\Psi = \frac{1}{M}\sum_{i=1}^{M}\Phi_i \qquad (1)$$

Every facial component differs from the average component by the vector

$$r_i^{(1)} = \Phi_i - \Psi \qquad (2)$$
The covariance matrix of the data is thus defined as

$$Q = A^{(1)}A^{(1)T} \qquad (3)$$

where

$$A^{(1)} = \left[\, r_1^{(1)},\, r_2^{(1)},\, \ldots,\, r_M^{(1)} \,\right].$$

Note that $Q$ has dimension $wh \times wh$, where $w$ is the width of the component and $h$ is its height. This matrix is enormous, but since we only sum a finite number $M$ of component vectors, its rank cannot exceed $M-1$. If $v_i^{(1)}$ is an eigenvector of $A^{(1)T}A^{(1)}$ $(i = 1, 2, \ldots, M)$, then

$$A^{(1)T}A^{(1)}\, v_i^{(1)} = \lambda_i^{(1)} v_i^{(1)} \qquad (4)$$

where the $\lambda_i^{(1)}$ are the eigenvalues of $A^{(1)T}A^{(1)}$. The vectors $A^{(1)} v_i^{(1)}$ are then eigenvectors of $A^{(1)}A^{(1)T}$, as we see by multiplying Eq. (4) on the left by $A^{(1)}$:

$$A^{(1)}A^{(1)T}\left(A^{(1)} v_i^{(1)}\right) = \lambda_i^{(1)} \left(A^{(1)} v_i^{(1)}\right) \qquad (5)$$

Thus eigenvector $v_i^{(1)}$ and eigenvalue $\lambda_i^{(1)}$ are obtained from

$$A^{(1)T}A^{(1)}\, v_i^{(1)} = \lambda_i^{(1)} v_i^{(1)} \qquad (6)$$

where $A^{(1)T}A^{(1)}$ is only of size $M \times M$. Defining $u_i^{(1)}$ as the corresponding eigenvector of $A^{(1)}A^{(1)T}$, we have

$$u_i^{(1)} = A^{(1)}\, v_i^{(1)} \qquad (7)$$

The eigenvalue $\lambda_i^{(1)}$ is the variance along the new coordinate axis spanned by the eigenvector $u_i^{(1)}$. From here on we assume the ordering is such that the eigenvalues $\lambda_i^{(1)}$ are decreasing; in practice they decay roughly exponentially. Therefore we can project a facial component $r^{(1)}$ onto only $M_1 \ll M$ dimensions by computing $W^{(1)} = \{ w_k^{(1)} \}$, where $w_k^{(1)} = u_k^{(1)T} r^{(1)}$ and $1 \le k \le M_1$; $w_k^{(1)}$ is the $k$-th coordinate of $r^{(1)}$ in the new coordinate system. In this context $W^{(1)}$ is called the first-order component features. The vectors $u_k^{(1)}$ are themselves images and are called first-order eigencomponents. Let

$$U^{(1)} = \left[\, u_1^{(1)},\, u_2^{(1)},\, \ldots,\, u_{M_1}^{(1)} \,\right] \qquad (8)$$

then

$$W^{(1)} = U^{(1)T} r^{(1)} \qquad (9)$$
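Equations (3)-(9) translate almost line for line into NumPy. The sketch below assumes the columns of `A1` are the mean-subtracted component vectors of Eq. (2); the function names are illustrative.

```python
import numpy as np

def first_order_eigencomponents(A1: np.ndarray, m1: int) -> np.ndarray:
    """A1: (wh, M) matrix whose columns are the r_i^(1) of Eq. (2).
    Returns U1 = [u_1 ... u_m1], the m1 leading eigencomponents of Eq. (8),
    via the small M x M eigenproblem of Eqs. (4)-(7)."""
    small = A1.T @ A1                        # M x M instead of wh x wh
    lam, V = np.linalg.eigh(small)           # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:m1]         # keep the m1 largest eigenvalues
    U1 = A1 @ V[:, idx]                      # lift: u_i = A1 v_i   (Eq. (7))
    return U1 / np.linalg.norm(U1, axis=0)   # normalize each eigencomponent

def first_order_features(U1: np.ndarray, r1: np.ndarray) -> np.ndarray:
    """W1 = U1^T r1   (Eq. (9))."""
    return U1.T @ r1
```

Working with the $M \times M$ matrix is exactly what makes the method tractable: $wh$ can be thousands of pixels while $M$ is only the number of training samples.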
Since $U^{(1)}$ is not a square matrix, it has no inverse. However, we can use its pseudo-inverse as an approximation. Let $U^{(1)+}$ be the pseudo-inverse of $U^{(1)T}$:

$$U^{(1)+} = \mathrm{Pseudo\mbox{-}Inv}\!\left(U^{(1)T}\right) \qquad (10)$$

then

$$\hat{r}^{(1)} = U^{(1)+}\, W^{(1)} \qquad (11)$$

where $\hat{r}^{(1)}$ is the component reconstructed from $W^{(1)}$ and $U^{(1)+}$. The following equation then yields the residue component $r^{(2)}$:

$$r^{(2)} = r^{(1)} - \hat{r}^{(1)} \qquad (12)$$
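Equations (10)-(12), reconstruction through the pseudo-inverse followed by subtraction, can be sketched as follows, reusing the assumed names from the previous sketch:

```python
import numpy as np

def second_order_residue(U1: np.ndarray, r1: np.ndarray) -> np.ndarray:
    """r2 = r1 - r1_hat, following Eqs. (10)-(12)."""
    U1plus = np.linalg.pinv(U1.T)   # Eq. (10): pseudo-inverse of U1^T
    W1 = U1.T @ r1                  # Eq. (9):  first-order features
    r1_hat = U1plus @ W1            # Eq. (11): reconstructed component
    return r1 - r1_hat              # Eq. (12): second-order residue
```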
Since a residue facial component vector still contains rich information about the individual component, facial component features should be extracted from the residue components again. Let $A^{(2)} = \left[\, r_1^{(2)},\, r_2^{(2)},\, \ldots,\, r_M^{(2)} \,\right]$, let $\lambda_i^{(2)}$ be the eigenvalues of $A^{(2)T}A^{(2)}$, and let $v_i^{(2)}$ be the corresponding eigenvectors, so that $A^{(2)T}A^{(2)} v_i^{(2)} = \lambda_i^{(2)} v_i^{(2)}$. Based on the discussion above, the eigenvectors of $A^{(2)}A^{(2)T}$ are $u_i^{(2)} = A^{(2)} v_i^{(2)}$. Therefore we can project a residue component $r^{(2)}$ onto only $M_2 \ll M$ dimensions by computing the coefficients

$$w_k^{(2)} = u_k^{(2)T} r^{(2)} \qquad (13)$$

where $1 \le k \le M_2$. Since the $u_k^{(2)}$ are eigenvectors of the residue components, we call the $u_k^{(2)}$ the second-order eigencomponents and the $w_k^{(2)}$ the second-order component features.
Let

$$U^{(2)} = \left[\, u_1^{(2)},\, u_2^{(2)},\, \ldots,\, u_{M_2}^{(2)} \,\right] \qquad (14)$$

Eq. (13) can then be written as

$$W^{(2)} = U^{(2)T} r^{(2)} \qquad (15)$$

Let

$$U_2 = U^{(2)T} - U^{(2)T} U^{(1)+} U^{(1)T} \qquad (16)$$

then, substituting Eqs. (9), (11) and (12) into Eq. (15), we have

$$W^{(2)} = U_2\, r^{(1)} \qquad (17)$$

Since $U_2$ is a constant transform matrix that is calculated only once, it does not affect the efficiency of computation. The facial component can then be described with

$$\Omega(\Phi) = \left[\, w_1^{(1)}, \ldots, w_{M_1'}^{(1)},\, w_1^{(2)}, \ldots, w_{M_2}^{(2)} \,\right] \qquad (18)$$

where $1 \le M_1' \le M_1$. The computational burden of computing $\Omega(\Phi)$ does not increase compared with computing only the component features from the eigencomponents $U^{(1)}$.
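Because $U_2$ folds the first-order reconstruction into one matrix, test-time extraction of $\Omega(\Phi)$ is just two matrix-vector products. A sketch under the same assumed names:

```python
import numpy as np

def build_U2(U1: np.ndarray, U2_eig: np.ndarray) -> np.ndarray:
    """U_2 = U2^T - U2^T U1+ U1^T   (Eq. (16)); computed once after training."""
    U1plus = np.linalg.pinv(U1.T)
    return U2_eig.T - U2_eig.T @ U1plus @ U1.T

def omega(U1: np.ndarray, U2_mat: np.ndarray, r1: np.ndarray,
          m1_prime: int) -> np.ndarray:
    """Omega(Phi) of Eq. (18): truncated first-order plus second-order features."""
    W1 = U1.T @ r1            # Eq. (9)
    W2 = U2_mat @ r1          # Eq. (17): second-order features from r1 directly
    return np.concatenate([W1[:m1_prime], W2])
```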
The residue components $r^{(2)}$ are called second-order residue components and the original components $r^{(1)}$ are called first-order residue components.
With the same method we can also derive third-order, fourth-order, ..., and $n$-th-order eigencomponents. By projecting the residue components of the corresponding order, we obtain third-order, fourth-order, ..., $n$-th-order component features. With these high-order component features, the similarity of components can be defined as the weighted Euclidean distance between the projections. Fig. 2 illustrates the procedure for computing the $i$-th-order eigencomponents $U^{(i)}$ and the corresponding transform matrix $U_i$; in the figure, Pseudo_Inv(B) is the function that calculates the pseudo-inverse of the matrix B.
The measure of dissimilarity of two faces $H^1$ and $H^2$ is defined as a combined distance between the various facial component features generated from the projections of the eigencomponents (i.e. eigeneyes, eigeneyebrows, eigennoses, eigenmouths and eigenoutlines) and the eigenfaces:

$$D(H^1, H^2) = \sum_{c} \left( \sum_{i=1}^{M_1} a_i^c \left| w_i^{(1)}\!\left(\Phi_c^{H^1}\right) - w_i^{(1)}\!\left(\Phi_c^{H^2}\right) \right| + \sum_{j=1}^{M_2} b_j^c \left| w_j^{(2)}\!\left(\Phi_c^{H^1}\right) - w_j^{(2)}\!\left(\Phi_c^{H^2}\right) \right| \right) \qquad (19)$$

where the sum runs over the facial components $c$ and $a_i^c$, $b_j^c$ are the attention weights of the corresponding features. If the $a_i^c$ are set to zero, the similarity of face images is measured only with the second-order features.
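As a sketch, the combined distance of Eq. (19) might be computed as below; the per-component weight arrays are illustrative stand-ins for the attention weights.

```python
import numpy as np

def face_distance(feats_a: dict, feats_b: dict,
                  a_wt: dict, b_wt: dict) -> float:
    """feats_x maps component name -> (W1, W2) feature pair for one face;
    a_wt/b_wt map component name -> per-coefficient attention weights."""
    d = 0.0
    for comp, (W1a, W2a) in feats_a.items():
        W1b, W2b = feats_b[comp]
        d += np.sum(a_wt[comp] * np.abs(W1a - W1b))   # first-order term
        d += np.sum(b_wt[comp] * np.abs(W2a - W2b))   # second-order term
    return d
```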
With the method described above, the procedure for describing a face image can be summarized as follows:
1) Scanning the facial component, by a raster scan starting at the top-left corner of the component window and finishing at the bottom-right corner, into a one-dimensional array of pixels;
2) Subtracting the average component from this array;
3) Multiplying the subtracted array by the first-order and high-order eigencomponents;
4) Using the resulting component features as the description of the face;
5) Coding the features into a coded representation.
With this method, human faces can be described effectively and efficiently, with attention weights corresponding to the significance of the identity information of each facial area; a minimal sketch of steps 1)-5) follows.
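The sketch below strings the steps together; the clip-to-int8 coding in step 5) is only an assumed stand-in, since the patent leaves the coding scheme open.

```python
import numpy as np

def describe_component(window: np.ndarray, psi: np.ndarray,
                       U1: np.ndarray, U2_mat: np.ndarray,
                       m1_prime: int) -> np.ndarray:
    """Steps 1)-5): raster-scan, subtract, project, and code one component."""
    phi = window.ravel()                              # 1) raster scan
    r1 = phi - psi                                    # 2) subtract average component
    W1 = U1.T @ r1                                    # 3) first-order projection (Eq. (9))
    W2 = U2_mat @ r1                                  #    high-order projection  (Eq. (17))
    features = np.concatenate([W1[:m1_prime], W2])    # 4) component description (Eq. (18))
    return np.clip(np.round(features), -128, 127).astype(np.int8)  # 5) assumed coding
```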
Next, the overall operation of facial feature extraction according to the present invention is described with reference to Figs. 3 and 4. The operation includes a training mode operation (steps #22-#31), as shown in Fig. 3, and a test mode operation (steps #32-#42), as shown in Fig. 4. The training mode operation is carried out first, to learn and accumulate many face samples and to obtain parameters such as the first-order average component.
The training mode starts from step #22 and continues to step #31. The training mode is provided to generate various parameters to be used in the test mode.
In step #22, a plurality of sample face images are input. In step #23, each sample face image is divided into a plurality of facial parts, such as the right eye, left eye, right eyebrow, left eyebrow, nose, mouth and facial configuration, and each part is analyzed to obtain a basic facial component $\Phi_i$. The facial component $\Phi_i$ can be weighted according to its significance for the identity of the human face.
In step #24, the facial components of the same part, such as the nose part, are collected from the plurality of sample face images. The facial component of the nose part is referred to as the nose component. The collected facial components of each part are averaged to obtain a first-order average facial component $\Psi$ using equation (1). The same operation is carried out to obtain the first-order average facial components of the other parts. The first-order average facial component $\Psi$ is used in the test mode operation in step #34. In step #25, the first-order average facial component $\Psi$ of, for example, the nose is subtracted from the nose component of each sample face image to obtain the vector given by equation (2). The same operation is carried out for each of the other facial components.
Steps #22 to #25 taken together are called the analyzing step for analyzing the training face images.
In step #26, equations (4), (5), (6), (7) and (8) are applied to obtain the first-order eigencomponents $U^{(1)}$. The first-order eigencomponents $U^{(1)}$ are used in the test mode operation in step #35.
In step #27, the pseudo-inverse matrix $U^{(1)+}$ is generated using equation (10). The matrix $U^{(1)+}$ serves as an approximate inverse of $U^{(1)T}$ and is used in the test mode operation in step #37.
In step #28, the matrix $U^{(1)+}$ is used to reconstruct each facial component from its first-order features using equation (11), yielding for the original facial images collected in step #22 the reconstructed component $\hat{r}^{(1)}$.
In step #29, the difference between the first-order residue component and the reconstructed component is obtained according to equation (12), giving the second-order residue component.
Steps #27 to #29 taken together are called the analyzing step for analyzing the first-order eigencomponent. In step #30, using the difference obtained in step #29, the second-order K-L coefficient $U^{(2)}$ (also called the second-order eigencomponent) is calculated using equations (4), (6), (7), (14) and (16). The second-order K-L coefficient $U^{(2)}$ is used in the test mode for K-L conversion (step #40).
The test mode starts from step #32 and continues to step #42. The test mode is provided to generate the first-order component feature $W^{(1)}$ and the second-order component feature $W^{(2)}$.
In step #32, a face to be tested is input.
In step #33, the input face image is divided into a plurality of facial parts, such as the right eye, left eye, right eyebrow, left eyebrow, nose, mouth and facial configuration, and each part is analyzed to obtain a basic facial component $\Phi_i$. The facial component $\Phi_i$ can be weighted according to its significance for the identity of the human face.
In step #34, the first-order average facial component $\Psi$ is subtracted from the basic facial component to obtain a first difference $r^{(1)}$ (also called the first-order residue component). The first difference $r^{(1)}$ is applied to step #35 and to step #39.
Steps #32 to #34 taken together are called the analyzing step for analyzing the test face image. In step #35, using the first difference $r^{(1)}$ and the first-order eigencomponents $U^{(1)}$, K-L conversion is carried out by equation (9) to obtain the first-order component feature $W^{(1)}$.
In step #36, the first-order component feature $W^{(1)}$ is produced. The first-order component feature $W^{(1)}$ represents the feature of the test face input in step #32. It can be used on its own as information representing the test face, but it involves a relatively large amount of data, so a further calculation is carried out to reduce the data size.
In step #37, using equation (11), K-L inverse conversion is carried out to generate a reconstructed component $\hat{r}^{(1)}$. In step #38, the reconstructed component $\hat{r}^{(1)}$ is produced.
In step #39, equation (12) is applied to produce the difference between the first difference $r^{(1)}$ and the reconstructed component $\hat{r}^{(1)}$, giving a second difference $r^{(2)}$, generally referred to as the second-order residue component $r_i^{(2)}$. Steps #37 to #39 taken together are called the analyzing step for analyzing the first-order component feature.
In step #40, using the second difference $r^{(2)}$ and the second-order K-L coefficient $U^{(2)}$, K-L conversion is carried out by equation (15) to generate the second-order component feature $W^{(2)}$. In step #41, the second-order component feature $W^{(2)}$ is produced. The second-order component feature $W^{(2)}$ carries information representing the test face as entered in the test mode.
It is to be noted that the flowcharts shown in Figs. 3 and 4 can be implemented on a computer connected to a camera that captures the sample face images and the test face image. It is possible to prepare two sets of equipment, one for the training mode operation and another for the test mode operation, each comprising a computer and a camera. The set for the training mode operation is programmed to carry out steps #22-#30, and the set for the test mode operation is programmed to carry out steps #32-#42. In the set for the test mode operation, a memory is provided to store in advance the information obtained by the training set, such as the first-order average facial component $\Psi$, the first-order eigencomponents $U^{(1)}$, the pseudo-inverse matrix $U^{(1)+}$, and the second-order eigencomponent $U^{(2)}$.
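Putting the pieces together, a hypothetical train-then-test session built from the sketches above might look like this (the array sizes and random stand-in data are assumptions for illustration only):

```python
import numpy as np

# --- training mode (steps #22-#31), shown for a single component ---
faces = np.random.rand(644, 100)                 # stand-in for M=100 raster-scanned components
psi = faces.mean(axis=1, keepdims=True)          # Eq. (1): average component
A1 = faces - psi                                 # Eq. (2): first-order residues
U1 = first_order_eigencomponents(A1, m1=30)      # Eqs. (4)-(8)
A2 = np.column_stack([second_order_residue(U1, A1[:, i])
                      for i in range(A1.shape[1])])   # Eqs. (10)-(12)
U2_eig = first_order_eigencomponents(A2, m1=20)  # second-order eigencomponents (Eq. (14))
U2_mat = build_U2(U1, U2_eig)                    # Eq. (16), computed once

# --- test mode (steps #32-#42) ---
probe = np.random.rand(644)                      # stand-in for a test component
r1 = probe - psi.ravel()                         # step #34
feat = omega(U1, U2_mat, r1, m1_prime=20)        # steps #35-#41: Omega(Phi)
```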
This invention is very effective for describing human faces using component-based features. Since the high-order eigencomponents are calculated only once, from the training components, the high-order component features can be obtained as efficiently as the first-order component features. Moreover, since high-order component features reveal detailed regional identity information, the combination of first-order and high-order component features of the eyes, eyebrows, nose, mouth and outline, with different attention weights, has better face description capability than first-order eigenface features or combined first-order and high-order eigenface features.
This invention is effective and efficient at describing human faces and can be used broadly in Internet multimedia database retrieval, video editing, digital libraries, surveillance and tracking, and other applications involving face recognition and verification.

Claims

1. A method for extracting component features for face description, comprising: processing a training mode operation, comprising: analyzing a plurality of training face images; calculating a first-order eigencomponent $U^{(1)}$ using the analyzed training face images; and calculating a second-order eigencomponent $U^{(2)}$ using the analyzed training face images; and processing a test mode operation, comprising: analyzing a test face image; and obtaining a second-order component feature $W^{(2)}$ for the test face image using the second-order eigencomponent $U^{(2)}$.
2. A method for extracting component features for face description, comprising: processing a training mode operation, comprising: analyzing a plurality of training face images to generate a first-order residue component $r^{(1)}$ of the training face; calculating a first-order eigencomponent $U^{(1)}$ using the first-order residue component $r^{(1)}$ of the training face; analyzing the first-order eigencomponent $U^{(1)}$ to generate a second-order residue component $r^{(2)}$ of the training face; and calculating a second-order eigencomponent $U^{(2)}$ using the second-order residue component $r^{(2)}$ of the training face; and processing a test mode operation, comprising: analyzing a test face image to generate a first-order residue component $r^{(1)}$ of the test face; obtaining a first-order component feature $W^{(1)}$ for the test face image using the first-order eigencomponent $U^{(1)}$ and the first-order residue component $r^{(1)}$ of the test face; analyzing the first-order component feature $W^{(1)}$ to generate a second-order residue component $r^{(2)}$ of the test face; and obtaining a second-order component feature $W^{(2)}$ for the test face image using the second-order eigencomponent $U^{(2)}$ and the second-order residue component $r^{(2)}$ of the test face.
3. The method for extracting component features as claimed in claim 2, wherein said analyzing a plurality of training face images comprises: dividing each sample face image into facial parts to obtain facial components $\Phi_i$ of the facial parts; averaging the facial components of each facial part to obtain a first-order average facial component $\Psi$; and subtracting the first-order average facial component $\Psi$ from the facial component to produce the first-order residue component $r^{(1)}$ of the training face.
4. The method for extracting component features as claimed in claim 2, wherein said analyzing the first-order eigencomponent comprises: obtaining a reconstructed component $\hat{r}^{(1)}$; and subtracting the reconstructed component $\hat{r}^{(1)}$ from the first-order residue component $r^{(1)}$ of the training face to generate the second-order residue component $r^{(2)}$ of the training face.
5. The method for extracting component features as claimed in claim 2, wherein said analyzing a test face image comprises: dividing the test face image into facial parts to obtain facial components $\Phi_i$ of the facial parts; and subtracting the first-order average facial component $\Psi$ from the facial component $\Phi_i$ to produce the first-order residue component $r^{(1)}$ of the test face.
6. The method for extracting component features as claimed in claim 2, wherein said analyzing the first-order component feature comprises: obtaining a reconstructed component $\hat{r}^{(1)}$; and subtracting the reconstructed component $\hat{r}^{(1)}$ from the first-order residue component $r^{(1)}$ of the test face to generate the second-order residue component $r^{(2)}$ of the test face.
7. The method for extracting component features as claimed in claim 3, wherein said facial components $\Phi_i$ of the facial parts of the training face images can be weighted.
8. The method for extracting component features as claimed in claim 5, wherein said facial components $\Phi_i$ of the facial parts of the test face images can be weighted.
9. An apparatus for extracting component features for face description, comprising: an arrangement operable to process a training mode operation, comprising: an arrangement operable to analyze a plurality of training face images; an arrangement operable to calculate a first-order eigencomponent $U^{(1)}$ using the analyzed training face images; and an arrangement operable to calculate a second-order eigencomponent $U^{(2)}$ using the analyzed training face images; and an arrangement operable to process a test mode operation, comprising: an arrangement operable to analyze a test face image; and an arrangement operable to obtain a second-order component feature $W^{(2)}$ for the test face image using the second-order eigencomponent $U^{(2)}$.
10. An apparatus for extracting component features for face description, comprising: an arrangement operable to process a training mode operation, comprising: an arrangement operable to analyze a plurality of training face images to generate a first-order residue component $r^{(1)}$ of the training face; an arrangement operable to calculate a first-order eigencomponent $U^{(1)}$ using the first-order residue component $r^{(1)}$ of the training face; an arrangement operable to analyze the first-order eigencomponent $U^{(1)}$ to generate a second-order residue component $r^{(2)}$ of the training face; and an arrangement operable to calculate a second-order eigencomponent $U^{(2)}$ using the second-order residue component $r^{(2)}$ of the training face; and an arrangement operable to process a test mode operation, comprising: an arrangement operable to analyze a test face image to generate a first-order residue component $r^{(1)}$ of the test face; an arrangement operable to obtain a first-order component feature $W^{(1)}$ for the test face image using the first-order eigencomponent $U^{(1)}$ and the first-order residue component $r^{(1)}$ of the test face; an arrangement operable to analyze the first-order component feature $W^{(1)}$ to generate a second-order residue component $r^{(2)}$ of the test face; and an arrangement operable to obtain a second-order component feature $W^{(2)}$ for the test face image using the second-order eigencomponent $U^{(2)}$ and the second-order residue component $r^{(2)}$ of the test face.
11. The apparatus for extracting component features as claimed in claim 10, wherein said arrangement operable to analyze a plurality of training face images comprises: an arrangement operable to divide each sample face image into facial parts to obtain facial components $\Phi_i$ of the facial parts; an arrangement operable to average the facial components of each facial part to obtain a first-order average facial component $\Psi$; and an arrangement operable to subtract the first-order average facial component $\Psi$ from the facial component to produce the first-order residue component $r^{(1)}$ of the training face.
12. The apparatus for extracting component features as claimed in claim 10, wherein said arrangement operable to analyze the first-order eigencomponent comprises: an arrangement operable to obtain a reconstructed component $\hat{r}^{(1)}$; and an arrangement operable to subtract the reconstructed component $\hat{r}^{(1)}$ from the first-order residue component $r^{(1)}$ of the training face to generate the second-order residue component $r^{(2)}$ of the training face.
13. The apparatus for extracting component features as claimed in claim 10, wherein said arrangement operable to analyze a test face image comprises: an arrangement operable to divide the test face image into facial parts to obtain facial components $\Phi_i$ of the facial parts; and an arrangement operable to subtract the first-order average facial component $\Psi$ from the facial component $\Phi_i$ to produce the first-order residue component $r^{(1)}$ of the test face.
14. The apparatus for extracting component features as claimed in claim 10, wherein said arrangement operable to analyze the first-order component feature comprises: an arrangement operable to obtain a reconstructed component $\hat{r}^{(1)}$; and an arrangement operable to subtract the reconstructed component $\hat{r}^{(1)}$ from the first-order residue component $r^{(1)}$ of the test face to generate the second-order residue component $r^{(2)}$ of the test face.
15. The apparatus for extracting component features as claimed in claim 11, wherein said facial components $\Phi_i$ of the facial parts of the training face images can be weighted.
16. The apparatus for extracting component features as claimed in claim 13, wherein said facial components $\Phi_i$ of the facial parts of the test face images can be weighted.
17. An apparatus for extracting component features for face description, comprising: an arrangement operable to process a training mode operation, comprising: an arrangement operable to analyze a plurality of training face images to generate a first-order residue component $r^{(1)}$ of the training face; an arrangement operable to calculate a first-order eigencomponent $U^{(1)}$ using the first-order residue component $r^{(1)}$ of the training face; an arrangement operable to analyze the first-order eigencomponent $U^{(1)}$ to generate a second-order residue component $r^{(2)}$ of the training face; and an arrangement operable to calculate a second-order eigencomponent $U^{(2)}$ using the second-order residue component $r^{(2)}$ of the training face.
18. An apparatus for extracting component features for face description, comprising: a memory for storing a first-order average facial component $\Psi$, first-order eigencomponents $U^{(1)}$, a pseudo-inverse matrix $U^{(1)+}$, and a second-order eigencomponent $U^{(2)}$; and an arrangement operable to process a test mode operation, comprising: an arrangement operable to analyze a test face image to generate a first-order residue component $r^{(1)}$ of the test face; an arrangement operable to obtain a first-order component feature $W^{(1)}$ for the test face image using the first-order eigencomponent $U^{(1)}$ and the first-order residue component $r^{(1)}$ of the test face; an arrangement operable to analyze the first-order component feature $W^{(1)}$ to generate a second-order residue component $r^{(2)}$ of the test face; and an arrangement operable to obtain a second-order component feature $W^{(2)}$ for the test face image using the second-order eigencomponent $U^{(2)}$ and the second-order residue component $r^{(2)}$ of the test face.
PCT/JP2003/004550 2002-04-12 2003-04-10 Method and apparatus for face description and recognition using high-order eigencomponents Ceased WO2003088131A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2003226455A AU2003226455A1 (en) 2002-04-12 2003-04-10 Method and apparatus for face description and recognition using high-order eigencomponents
KR10-2004-7012107A KR20040101221A (en) 2002-04-12 2003-04-10 Method and apparatus for face description and recognition using high-order eigencomponents

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2002110936 2002-04-12
JP2002-110936 2002-04-12

Publications (2)

Publication Number Publication Date
WO2003088131A2 true WO2003088131A2 (en) 2003-10-23
WO2003088131A3 WO2003088131A3 (en) 2004-01-15

Family

ID=29243254

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2003/004550 Ceased WO2003088131A2 (en) 2002-04-12 2003-04-10 Method and apparatus for face description and recognition using high-order eigencomponents

Country Status (4)

Country Link
KR (1) KR20040101221A (en)
CN (1) CN1630875A (en)
AU (1) AU2003226455A1 (en)
WO (1) WO2003088131A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1598769A1 (en) * 2004-05-17 2005-11-23 Mitsubishi Electric Information Technology Centre Europe B.V. Method and apparatus for face description and recognition
RU2390844C2 (en) * 2007-10-22 2010-05-27 Государственное образовательное учреждение высшего профессионального образования Курский государственный технический университет Method of identifying eyes on images and device for implementing said method
US7835549B2 (en) 2005-03-07 2010-11-16 Fujifilm Corporation Learning method of face classification apparatus, face classification method, apparatus and program
US7936906B2 (en) 2007-06-15 2011-05-03 Microsoft Corporation Face recognition using discriminatively trained orthogonal tensor projections

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1217574A3 (en) * 2000-12-19 2004-05-19 Matsushita Electric Industrial Co., Ltd. A method for lighting- and view-angle-invariant face description with first- and second-order eigenfeatures

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1598769A1 (en) * 2004-05-17 2005-11-23 Mitsubishi Electric Information Technology Centre Europe B.V. Method and apparatus for face description and recognition
US7630526B2 (en) 2004-05-17 2009-12-08 Mitsubishi Denki Kabushiki Kaisha Method and apparatus for face description and recognition
US7835549B2 (en) 2005-03-07 2010-11-16 Fujifilm Corporation Learning method of face classification apparatus, face classification method, apparatus and program
US7936906B2 (en) 2007-06-15 2011-05-03 Microsoft Corporation Face recognition using discriminatively trained orthogonal tensor projections
RU2390844C2 (en) * 2007-10-22 2010-05-27 Государственное образовательное учреждение высшего профессионального образования Курский государственный технический университет Method of identifying eyes on images and device for implementing said method

Also Published As

Publication number Publication date
KR20040101221A (en) 2004-12-02
AU2003226455A1 (en) 2003-10-27
AU2003226455A8 (en) 2003-10-27
WO2003088131A3 (en) 2004-01-15
CN1630875A (en) 2005-06-22


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2003746444

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020047012107

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 20038036037

Country of ref document: CN

WWW Wipo information: withdrawn in national office

Ref document number: 2003746444

Country of ref document: EP

122 Ep: pct application non-entry in european phase