WO2003088131A2 - Method and apparatus for face description and recognition using high-order eigencomponents - Google Patents
- Publication number
- WO2003088131A2 (PCT application PCT/JP2003/004550)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- order
- component
- face
- facial
- arrangement operable
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
Definitions
- the present invention relates to a method and an apparatus for face description and recognition using high-order eigencomponents.
- the invention can be used in face description and recognition for content-based image retrieval; human face identification and verification for banking, security systems and videophones; surveillance and tracking; and digital library and Internet multimedia database applications.
- principal component analysis (PCA), also known as the Karhunen-Loeve expansion, is an important branch of statistical face recognition methods.
- the eigenface method is derived from PCA; it is convenient to compute and gives consistent identification accuracy. Prior work showed that the PCA approach spontaneously dissociates different types of information.
- the eigenvectors with large eigenvalues capture information that is common to subsets of faces, while eigenvectors with small eigenvalues capture information specific to an individual face.
- the studies show that only the information contained in the eigenvectors with large eigenvalues can be generalized to new faces that are not in the training set.
- eigenvectors with large eigenvalues in the eigenface method convey information about the basic shape and structure of faces, so features extracted from them can describe the major characteristics of human faces. However, this is also the weakness of PCA: using only features extracted from eigenvectors with large eigenvalues, we cannot capture the details that correspond to an individual face. If these details of the individual face can be described together with the common features of human faces, the description of human faces can be more accurate.
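As a concrete illustration, the eigenface computation the passage describes can be sketched in NumPy. The function name and the choice of k are illustrative, not from the patent:

```python
import numpy as np

def eigenfaces(faces, k):
    """Mean face plus the k eigenvectors with the largest eigenvalues,
    computed from raster-scanned training faces (one face per row)."""
    mean = faces.mean(axis=0)                    # average face
    A = faces - mean                             # zero-mean training set
    cov = A.T @ A / len(faces)                   # covariance matrix
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    U = vecs[:, np.argsort(vals)[::-1][:k]]      # keep the largest k
    return mean, U

# A face is then described by its projection w = U.T @ (face - mean);
# the large-eigenvalue directions capture shape and structure common to faces.
```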
- one drawback of the eigenface method is that the contributions of all facial component areas are the same.
- identity information is mainly located in certain facial areas, such as the eyes, eyebrows, nose, mouth and outline.
- the cheek area contains less identity information and is relatively sensitive to changes in lighting conditions and facial expression. If the identity significance of each facial component can be used, the recognition of human faces can be more accurate.
Disclosure of Invention
- the eigenface method is effective at extracting common face characteristics such as shape and structure.
- the faces reconstructed from the features of eigenvectors with large eigenvalues should be obtained.
- the residue images between the original images and the reconstructed matrices can then be obtained.
- these residue faces can be regarded as high-pass-filtered face images, which still contain rich detailed information about each individual face.
- the eigenface method can be applied to these residue faces again.
- the resulting eigenvectors with large eigenvalues reveal the common characteristics of the residue faces.
- in this way, high-order eigenvectors with large eigenvalues can be obtained to extract the corresponding features.
- the combination of these features from different-order eigenfaces can be used to describe faces effectively.
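The iteration just described — project, reconstruct, take the residue, then run PCA again — can be sketched as follows. This is a minimal sketch; `pca_basis` and the values of `k1`, `k2` are illustrative assumptions:

```python
import numpy as np

def pca_basis(X, k):
    """Top-k principal directions of zero-mean data X (rows = samples)."""
    vals, vecs = np.linalg.eigh(X.T @ X)
    return vecs[:, np.argsort(vals)[::-1][:k]]

def high_order_features(faces, k1, k2):
    """First- and second-order eigenface features of each training face."""
    mean = faces.mean(axis=0)
    r1 = faces - mean              # deviations from the average face
    U1 = pca_basis(r1, k1)         # first-order eigenfaces
    W1 = r1 @ U1                   # first-order features
    r2 = r1 - W1 @ U1.T            # residue ("high-passed") faces
    U2 = pca_basis(r2, k2)         # second-order eigenfaces
    W2 = r2 @ U2                   # second-order features
    return np.hstack([W1, W2])     # combined description
```

Because the first-order eigenfaces are orthonormal, the residue faces are exactly orthogonal to them, so the second-order features add detail not captured at first order.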
- first-order and higher-order principal components (eigencomponents) of facial components can be obtained to describe the characteristics of the corresponding facial areas.
- the combination of these features from different-order eigencomponents can be used to describe individual facial components efficiently.
- human faces can be represented by a combination of different-order eigencomponents with different attention weights. In different application fields, different components have different importance (strength or weakness), so a different weight should be assigned to each component.
- the present invention provides a method to interpret human faces which can be used for image retrieval (query by face example), person identification and verification, surveillance and tracking, and other face recognition applications.
- high-order eigencomponents are proposed according to our observation and derivation. At first, all face images are normalized to a standard size. Then the vertical location of the eyes is calculated and the face is shifted to a suitable place. When all these pre-processing procedures are finished, the eigencomponents and high-order eigencomponents can be derived from a set of training face images.
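A minimal sketch of this pre-processing, assuming the eye row has already been detected; the standard size 56×46 and the target eye row are illustrative assumptions, not values from the patent:

```python
import numpy as np

def normalize_face(img, eye_row, std_size=(56, 46), std_eye_row=24):
    """Resample img (2-D grey-level array) to std_size with
    nearest-neighbour sampling, then shift it vertically so the
    detected eye row lands on std_eye_row."""
    h, w = img.shape
    H, W = std_size
    rows = np.arange(H) * h // H        # nearest-neighbour row indices
    cols = np.arange(W) * w // W        # nearest-neighbour column indices
    scaled = img[rows][:, cols]         # standard-size image
    shift = std_eye_row - eye_row * H // h
    return np.roll(scaled, shift, axis=0)  # vertical eye alignment
```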
- the features of an image projected onto the eigencomponents and high-order eigencomponents can be calculated with the selected eigencomponents and high-order eigencomponents.
- the combination of these features can be used to describe faces.
- Euclidean distance can be used for similarity measurement.
- the features should be weighted.
- Figure 1 shows a flowchart of the procedure for computing the first-order feature W(1).
- Figure 2 shows a flowchart of the procedure for computing the i-th-order eigencomponents U(i) and the corresponding transform matrix.
- Figure 3 shows a flowchart for the training mode operation.
- Figure 4 shows a flowchart for the test mode operation.
- the present invention provides a method to extract higher-order eigencomponent features and to represent a face by combining different-order component features.
- the eigencomponents and high-order eigencomponents can be obtained as follows.
- Φi, which is a one-dimensional vector obtained by raster-scanning a facial component
- the covariance matrix of the data is thus defined as:
- the residue components are called second-order residue components and the original components are called first-order residue components.
- Fig. 2 illustrates the procedure to compute the i-th-order eigencomponents U(i) and the corresponding transform matrix U(i)+.
- Pseudo_Inv(B) is the function that calculates the Moore-Penrose pseudo-inverse of matrix B.
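Presumably Pseudo_Inv corresponds to the Moore-Penrose pseudo-inverse, available in NumPy as `np.linalg.pinv`; the short check below illustrates its defining property (the matrix values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 3))          # a tall, non-square matrix
B_pinv = np.linalg.pinv(B)               # Pseudo_Inv(B), presumably

# defining property of the Moore-Penrose pseudo-inverse
assert np.allclose(B @ B_pinv @ B, B)

# when the columns are orthonormal (as for eigencomponents),
# the pseudo-inverse reduces to the transpose
Q, _ = np.linalg.qr(B)                   # orthonormalise the columns
assert np.allclose(np.linalg.pinv(Q), Q.T)
```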
- the measure of dissimilarity of two faces H1 and H2 is defined as a combined distance between the various facial component features generated from the projections of eigencomponents (i.e. eigeneyes, eigeneyebrows, eigennoses, eigenmouths and eigenoutlines) and eigenfaces.
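The combined distance can be sketched as a weighted sum of per-component Euclidean distances; the component names and weight values below are illustrative assumptions, not figures from the patent:

```python
import numpy as np

def face_distance(f1, f2, weights):
    """Weighted combined distance between two faces, each described by a
    dict mapping facial components to feature vectors."""
    return sum(w * np.linalg.norm(f1[c] - f2[c]) for c, w in weights.items())

# hypothetical attention weights reflecting identity significance
weights = {"eyes": 1.5, "eyebrows": 1.0, "nose": 1.2,
           "mouth": 1.0, "outline": 0.8}
```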
- the method for describing a face image can be considered as follows:
- the operation includes a training mode operation (steps #22-#31) as shown in Fig. 3, and a test mode operation (steps #32-#42) as shown in Fig. 4.
- the training mode operation is carried out first to learn and accumulate many face samples and to obtain coefficients of a first order average.
- the training mode starts from step #22 and continues to step #31.
- the training mode is provided to generate various parameters to be used in the test mode.
- in step #22, a plurality of sample face images are input.
- each sample face image is divided into a plurality of facial parts, such as right eye, left eye, right eyebrow, left eyebrow, nose, mouth and facial configuration, and each part is analyzed to obtain a basic facial component Φi.
- the facial component Φi can be weighted according to its significance for the identity of the human face.
- in step #24, the facial components of the same part, such as the nose part, are collected from the plurality of sample face images.
- the facial component of the nose part is referred to as a nose component.
- the collected facial components of each part are averaged to obtain the first order average facial component Ψ by using equation (1).
- the same operation is carried out to obtain the first order average facial components for different facial components.
- the first order average facial component ⁇ is used in the test mode operation in step #34.
- in step #25, the first order average facial component Ψ of the nose is subtracted from the facial component of, for example, the nose in each sample face image to obtain the nose vector given by equation (2).
- the same operation is carried out for each of different facial components.
- steps #22 to #25 taken together are called an analyzing step for analyzing the training face images.
- in step #26, equations (4), (5), (6), (7) and (8) are carried out to obtain the first order eigencomponents U(1).
- the first order eigencomponents U(1) are used in the test mode operation in step #35.
- in step #27, an inverse matrix U(1)+ is generated by using equation (10).
- the inverse matrix U(1)+ is nearly equal to the transpose of U(1).
- the inverse matrix U(1)+ is used in the test mode operation in step #37.
- in step #28, the inverse matrix U(1)+ is used to obtain a difference of the facial component using equation (11).
- a difference of facial component with respect to the basic facial component of the original facial images as collected in step #22 is obtained.
- the data produced in step #28 is referred to as a difference facial component.
- in step #29, a difference between the difference facial component and the first order average facial component is obtained.
- steps #27 to #29 taken together are called an analyzing step for analyzing the first-order eigencomponent.
- a second order K-L coefficient U(2) (which is also called the second order eigencomponent) is calculated using equations (4), (6), (7), (14) and (16).
- the second order K-L coefficient U(2) is used in the K-L conversion of the test mode (step #40).
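The training steps above, for a single facial component such as the nose, might be sketched as follows. The mapping of lines to step numbers and the retained dimensions `k1`, `k2` are illustrative assumptions:

```python
import numpy as np

def top_eigvecs(X, k):
    """k eigenvectors of X.T @ X with the largest eigenvalues."""
    vals, vecs = np.linalg.eigh(X.T @ X)
    return vecs[:, np.argsort(vals)[::-1][:k]]

def train_component(components, k1, k2):
    """Training-mode sketch for one facial component (rows = samples)."""
    psi = components.mean(axis=0)            # step #24: first order average
    r1 = components - psi                    # step #25: residue vectors
    U1 = top_eigvecs(r1, k1)                 # step #26: U(1)
    U1_pinv = np.linalg.pinv(U1)             # step #27: inverse matrix U(1)+
    recon = (r1 @ U1_pinv.T) @ U1.T          # step #28: reconstruction
    r2 = r1 - recon                          # step #29: second order residues
    U2 = top_eigvecs(r2, k2)                 # second order eigencomponent U(2)
    return psi, U1, U1_pinv, U2
```

Note that because U(1) has orthonormal columns, the pseudo-inverse equals the transpose, matching the remark above that U(1)+ is nearly equal to the transpose of U(1).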
- the test mode starts from step #32 and continues to step #42.
- the test mode is provided to generate the first order component feature W(1) and the second order component feature W(2).
- in step #32, a face image to be tested is input.
- in step #33, the input face image is divided into a plurality of facial parts, such as right eye, left eye, right eyebrow, left eyebrow, nose, mouth and facial configuration, and each part is analyzed to obtain a basic facial component Φi.
- the facial component Φi can be weighted according to its significance for the identity of the human face.
- in step #34, the first order average facial component Ψ is subtracted from the basic facial component to obtain a first difference r(1) (also called a first order residue component).
- the first difference r ⁇ is applied to step #35 and to step #39.
- steps #32 to #34 taken together are called an analyzing step for analyzing the test face image.
- in step #35, using the first difference r(1) and the first order eigencomponents U(1), K-L conversion is carried out by the use of equation (9) to obtain the first order component feature W(1).
- in step #36, the first order component feature W(1) is produced.
- the first order component feature W(1) represents the feature of the test face input in step #32.
- the first order component feature W(1) can be used as information representing the test face, but has a relatively large data size. A further calculation is carried out to reduce the data size.
- in step #37, using equation (11), K-L inverse conversion is carried out to generate a reconstructed matrix.
- in step #38, the reconstructed matrix is produced.
- in step #39, equation (12) is carried out to produce the difference between the first difference r(1) and the reconstructed matrix, yielding a second difference r(2), which is generally referred to as the second order residue component.
- steps #37 to #39 taken together are called an analyzing step for analyzing the first-order component feature.
- in step #40, using the second difference r(2) and the second order K-L coefficient U(2), K-L conversion is carried out by equation (17) to generate the second order component feature W(2).
- in step #41, the second order component feature W(2) is produced.
- the second order component feature W(2) carries information representing the test face as entered in the test mode.
- the operations of Figs. 3 and 4 can be carried out by a computer connected to a camera for capturing the sample face images and the test face image. It is possible to prepare two sets of arrangements: one set for the training mode operation and another set for the test mode operation. Each set includes a computer and a camera. The set for the training mode operation is programmed to carry out steps #22-#31, and the set for the test mode operation is programmed to carry out steps #32-#42.
- a memory is provided to store in advance the information obtained by the set for the training mode operation, such as the first order average facial component Ψ, the first order eigencomponents U(1), the inverse matrix U(1)+, and the second order eigencomponent U(2).
- this invention is very effective for describing human faces using component-based features. Since the high-order eigencomponents need to be calculated only once with the training components, the high-order component features can be obtained as efficiently as first-order component features. Moreover, since detailed regional identity information is revealed by high-order component features, the combination of first-order and high-order component features of eyes, eyebrows, nose, mouth and outline with different attention weights has better face description capability than first-order eigenface features or combined first-order and high-order eigenface features.
- this invention is very effective and efficient at describing human faces and can be used broadly in Internet multimedia database retrieval, video editing, digital libraries, surveillance and tracking, and other applications using face recognition and verification.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
Claims
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2003226455A AU2003226455A1 (en) | 2002-04-12 | 2003-04-10 | Method and apparatus for face description and recognition using high-order eigencomponents |
| KR10-2004-7012107A KR20040101221A (en) | 2002-04-12 | 2003-04-10 | Method and apparatus for face description and recognition using high-order eigencomponents |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2002110936 | 2002-04-12 | ||
| JP2002-110936 | 2002-04-12 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2003088131A2 true WO2003088131A2 (en) | 2003-10-23 |
| WO2003088131A3 WO2003088131A3 (en) | 2004-01-15 |
Family
ID=29243254
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2003/004550 Ceased WO2003088131A2 (en) | 2002-04-12 | 2003-04-10 | Method and apparatus for face description and recognition using high-order eigencomponents |
Country Status (4)
| Country | Link |
|---|---|
| KR (1) | KR20040101221A (en) |
| CN (1) | CN1630875A (en) |
| AU (1) | AU2003226455A1 (en) |
| WO (1) | WO2003088131A2 (en) |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1217574A3 (en) * | 2000-12-19 | 2004-05-19 | Matsushita Electric Industrial Co., Ltd. | A method for lighting- and view-angle-invariant face description with first- and second-order eigenfeatures |
-
2003
- 2003-04-10 CN CNA038036037A patent/CN1630875A/en active Pending
- 2003-04-10 WO PCT/JP2003/004550 patent/WO2003088131A2/en not_active Ceased
- 2003-04-10 KR KR10-2004-7012107A patent/KR20040101221A/en not_active Withdrawn
- 2003-04-10 AU AU2003226455A patent/AU2003226455A1/en not_active Abandoned
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1598769A1 (en) * | 2004-05-17 | 2005-11-23 | Mitsubishi Electric Information Technology Centre Europe B.V. | Method and apparatus for face description and recognition |
| US7630526B2 (en) | 2004-05-17 | 2009-12-08 | Mitsubishi Denki Kabushiki Kaisha | Method and apparatus for face description and recognition |
| US7835549B2 (en) | 2005-03-07 | 2010-11-16 | Fujifilm Corporation | Learning method of face classification apparatus, face classification method, apparatus and program |
| US7936906B2 (en) | 2007-06-15 | 2011-05-03 | Microsoft Corporation | Face recognition using discriminatively trained orthogonal tensor projections |
| RU2390844C2 (en) * | 2007-10-22 | 2010-05-27 | Государственное образовательное учреждение высшего профессионального образования Курский государственный технический университет | Method of identifying eyes on images and device for implementing said method |
Also Published As
| Publication number | Publication date |
|---|---|
| KR20040101221A (en) | 2004-12-02 |
| AU2003226455A1 (en) | 2003-10-27 |
| AU2003226455A8 (en) | 2003-10-27 |
| WO2003088131A3 (en) | 2004-01-15 |
| CN1630875A (en) | 2005-06-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Phillips | Matching pursuit filters applied to face identification | |
| KR100731937B1 (en) | Generate face metadata | |
| Kak et al. | A review of person recognition based on face model | |
| Romdhani et al. | Face recognition using 3-D models: Pose and illumination | |
| Barnouti et al. | Face recognition: A literature review | |
| Moghaddam et al. | Bayesian modeling of facial similarity | |
| CN111310731A (en) | Video recommendation method, device and equipment based on artificial intelligence and storage medium | |
| Wiskott et al. | Face recognition by elastic bunch graph matching | |
| Bessaoudi et al. | A novel hybrid approach for 3d face recognition based on higher order tensor | |
| US20060056667A1 (en) | Identifying faces from multiple images acquired from widely separated viewpoints | |
| JP2004272326A (en) | A probabilistic facial component fusion method for facial depiction and recognition using subspace component features | |
| WO2003088131A2 (en) | Method and apparatus for face description and recognition using high-order eigencomponents | |
| Boussaad et al. | The aging effects on face recognition algorithms: the accuracy according to age groups and age gaps | |
| Kaur et al. | Comparative study of facial expression recognition techniques | |
| Mansouri | Automatic age estimation: A survey | |
| Shermina et al. | Recognition of the face images with occlusion and expression | |
| JP2004038937A (en) | Method and apparatus for face description and recognition using higher-order eigencomponents | |
| Pande et al. | Parallel processing for multi face detection and recognition | |
| Siregar et al. | Identity recognition of people through face image using principal component analysis | |
| Szlávik et al. | Face identification with CNN-UM | |
| Bhamre et al. | Face recognition using singular value decomposition and hidden Markov model | |
| Chihaoui et al. | A novel face recognition system based on skin detection, HMM and LBP | |
| Tiwari | Gabor Based Face Recognition Using EBGM and PCA | |
| Karizi et al. | View-Invariant and Robust Gait Recognition Using Gait Energy Images of Leg Region and Masking Altered Sections. | |
| Katadound | Face recognition: study and comparison of PCA and EBGM algorithms |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| WWE | Wipo information: entry into national phase |
Ref document number: 2003746444 Country of ref document: EP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 1020047012107 Country of ref document: KR |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 20038036037 Country of ref document: CN |
|
| WWW | Wipo information: withdrawn in national office |
Ref document number: 2003746444 Country of ref document: EP |
|
| 122 | Ep: pct application non-entry in european phase |