CN102654903A - Face comparison method

Publication number: CN102654903A
Application number: CN2011100517300A
Authority: CN (China)
Inventor: 井维兰
Applicant/Assignee: Individual
Legal status: Pending
Prior art keywords: face, feature
Classification (Landscapes): Processing Or Creating Images

Abstract

The invention discloses a face comparison method, which comprises the following steps: tracking a face to acquire feature points; extracting detailed feature data of the face; comparing the face, namely comparing the face feature data with the feature data of each face in a face database to obtain the similarity; judging whether a matched face is found, where sigma denotes a similarity threshold: if Smax is greater than sigma, the input face is determined to match face k' in the database; judging whether the expression changes significantly by analyzing several consecutive frames of face feature points, including but not limited to the opening and closing of the mouth and of the eyes; and, when the expression of the face has changed significantly, outputting the matched face. The face comparison method belongs to the technical field of biometric feature recognition, is used for tracking and comparing faces, and can be widely applied to various face comparison systems.

Description

Face comparison method
Technical Field
The invention relates to the technical field of biological feature recognition, in particular to a face comparison method.
Background
The human face carries important information about a person and is an important basis for distinguishing different people, so face comparison is a more natural and direct comparison mode than technologies such as fingerprint and iris recognition.
Face comparison extracts specific face feature information from a face input as an image or video, compares it with the registered face feature information in a database to obtain the degree of similarity with each candidate face, and determines whether it is the same face as one in the database.
Face comparison plays a very important role in many occasions, such as video multimedia messages on mobile phones, human-computer interfaces, access control, intelligent monitoring systems and the like. The accuracy, precision and robustness of the comparison have long been major concerns in the industry.
In addition, if a static photo is input and compared with the registered faces in the database and a matching result is obtained, the identified object is not a real face and an unauthorized person may obtain access rights. Therefore, it is important to determine whether the current input is a real human face or a static photograph, which the prior art cannot do.
Therefore, there is a need for a face comparison technique with high accuracy and robustness that can ensure real input.
Disclosure of Invention
In order to make up for the defects of the prior art, the invention aims to provide a face comparison method, which solves the influence of facial expression change and posture change, improves the accuracy, precision and robustness of comparison and ensures the authenticity of comparison.
In order to achieve the purpose, the technical scheme of the invention is as follows:
A face comparison method, characterized by comprising the following steps:
step 601, tracking a human face to acquire feature points;
step 603, extracting detailed face feature data;
step 605, comparing the face, namely comparing the face feature data with the feature data of each face in the face database to obtain the similarity between the input face and each face in the database; the specific method comprises the following steps:
(1) selecting the feature template library of face k in the database, k = 0, ..., K;
(2) for each feature template j, j = 0, ..., M, calculating the similarity $S_{kji}$ between each feature i of the input face and the corresponding feature of the template;
(3) calculating the similarity between the input face and feature template j as $S_{kj} = \frac{1}{N}\sum_i S_{kji}$;
(4) calculating the similarity between the input face and face k as $S_k = \max_j \{S_{kj}\}$;
(5) repeating steps (1) to (4) to obtain the similarity between the input face and all K faces in the database, and taking the largest one, $S_{max} = \max_k \{S_k\}$, to obtain the corresponding face k';
step 607, judging whether a matched face is found; δ denotes the similarity threshold: if $S_{max}$ is greater than δ, the input face is judged to match face k' in the database;
step 608, judging whether the expression changes significantly; the analysis is performed on consecutive frames of face feature points, including but not limited to the opening and closing of the mouth and of the eyes, to judge whether the expression of the face has changed significantly;
when the facial expression has changed significantly, step 609 is executed to output the matched face.
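The comparison and liveness-check steps above can be illustrated with the following Python sketch. It is only an illustration under stated assumptions: jet_similarity here is a simplified magnitude correlation standing in for the phase-sensitive similarity of formula (9), and the threshold values 0.8 and 0.3 are assumed examples, not values taken from the patent.

```python
import numpy as np

def jet_similarity(jet1, jet2):
    """Placeholder similarity between two Gabor jets: normalized correlation of
    the jet magnitudes (a simplification of the patent's formula (9))."""
    a1, a2 = np.abs(jet1), np.abs(jet2)
    return float(np.sum(a1 * a2) / np.sqrt(np.sum(a1 ** 2) * np.sum(a2 ** 2)))

def compare_face(input_jets, database, delta=0.8):
    """Steps 605-607 (sketch): compare the input face against every registered face.

    input_jets: list of Gabor jets, one per selected face feature point.
    database:   dict mapping face id k -> list of templates, each template being
                a list of jets aligned with input_jets.
    delta:      similarity threshold (0.8 is an assumed example value)."""
    best_id, s_max = None, float("-inf")
    for k, templates in database.items():
        # S_kj = (1/N) * sum_i S_kji for every template j of face k
        s_kj = [np.mean([jet_similarity(ji, tj) for ji, tj in zip(input_jets, tmpl)])
                for tmpl in templates]
        s_k = max(s_kj)                              # S_k = max_j S_kj
        if s_k > s_max:
            best_id, s_max = k, s_k                  # track S_max and face k'
    matched = best_id if s_max > delta else None
    return matched, s_max

def expression_changed(mouth_openings, eye_openings, rel_change=0.3):
    """Step 608 (sketch): report a significant expression change when the mouth or
    eye opening varies strongly over the analysed frames (rel_change is assumed)."""
    def varies(values):
        values = np.asarray(values, dtype=float)
        return (values.max() - values.min()) > rel_change * max(values.mean(), 1e-6)
    return bool(varies(mouth_openings) or varies(eye_openings))
```

A match would thus be reported only when both compare_face returns a face id and expression_changed confirms that the input is not a static photograph.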
The specific method for extracting the detailed face feature data in step 603 is as follows:
interpolating to obtain the positions of other selected human face characteristic points according to the accurate human face characteristic point positions obtained by the human face detection and tracking in the step 601;
normalizing the image according to the positions of the two eyes;
calculating the Gabor feature of face feature point i; the Gabor features of all the feature points form the face feature data, i = 0, ..., N, where N is the number of selected face feature points.
The human face characteristic points are significant characteristic points on human faces, and all 80 Gabor complex coefficients are selected for the characteristics of the human face characteristic points to express complete human face information and completely express differences among different human faces.
In step 601, face tracking is performed; the face feature points selected for tracking are features common to all faces.
Further, the face comparison method further comprises a step 604 of face registration; storing the face characteristic data to a face database; the specific method comprises the following steps:
adding the detailed face feature data obtained in step 603 to that person's face feature template library, j = 0, ..., M, where M is the number of feature templates for that person, and storing it in the database.
In step 601, face tracking is performed and the feature points are obtained; the method specifically includes an offline training method and an online tracking method;
the off-line training method comprises a multi-layer structure face model training method and an off-line template training method of face characteristic points;
the multilayer structure human face model training method provides a human face model for the online tracking method, and the off-line template training method provides an off-line template of human face characteristic points for the online tracking method;
the multilayer structure face model training method comprises the following steps:
301, selecting a proper face image as a training sample;
step 302, marking the characteristic points of the face image;
3031-3061, obtaining a reference shape model;
3032-3062, obtaining a global shape model;
3033-3063, a local shape model is obtained.
The reference shape model, the global shape model and the local shape model are obtained by the following steps:
denote a face shape vector by s:
$s = \bar{s} + Pb$,
where $\bar{s}$ is the average face shape, P is a set of orthogonal principal shape-variation modes, and b is the shape parameter vector;
the face shape vector s is expressed as $(s_R, s_G, s_L)^T$, where $s_R$, $s_G$ and $s_L$ denote the reference feature points, the global feature points and the local feature points, respectively;
point distribution model of the rigid reference shape: $s_R = \bar{s}_R + P_R b_R$;
point distribution model of the global reference shape: $s_G = \bar{s}_G + P_G b_G$;
point distribution model of the local shape model: $s_{Gi,Li} = \bar{s}_{Gi,Li} + P_{Gi,Li} b_{Gi,Li}$;
the i-th local shape vector is $s_{Gi,Li} = \{s_{Gi}, s_{Li}\}$, where $s_{Gi}$ and $s_{Li}$ denote the global and local feature points belonging to the i-th local shape, respectively.
The expression method of the human face characteristic points comprises the following steps:
given a pixel $\vec{x}$ in a grayscale image $I(\vec{x})$, a series of Gabor coefficients $J_j(\vec{x})$ can express the local appearance near that point and can be defined as:
$J_j(\vec{x}) = \int I(\vec{x}')\,\psi_j(\vec{x}-\vec{x}')\,d^2\vec{x}'$
where the Gabor kernel $\psi_j$ is a plane wave restricted by a Gaussian envelope function,
$\psi_j(\vec{x}) = \frac{k_j^2}{\sigma^2}\exp\!\left(-\frac{k_j^2 x^2}{2\sigma^2}\right)\left[\exp(i\vec{k}_j\cdot\vec{x}) - \exp\!\left(-\frac{\sigma^2}{2}\right)\right]$
where the wave vector $\vec{k}_j$ is determined by the frequency $k_v$ and the orientation $\phi_\mu$; the invention preferably takes v = 0, 1, ..., 9 and μ = 0, 1, ..., 7, with j = μ + 8v, and the frequency bandwidth is set to σ = 2π;
the Gabor kernel therefore consists of 80 Gabor complex coefficients with 10 frequencies and 8 directions, used to express the appearance features near a pixel point; a jet vector $\vec{J}$ denotes these coefficients, $J_j = \alpha_j \exp(i\phi_j)$, j = 0, 1, ..., 79,
where $\alpha_j$ and $\phi_j$ are the amplitude and the phase of the j-th Gabor coefficient, respectively;
the 80 Gabor complex coefficients are screened experimentally to obtain the wavelet features used to express the face feature points.
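Purely as a sketch of how such a Gabor jet could be computed, the code below builds a bank of 80 complex kernels (10 frequencies × 8 orientations, σ = 2π) and evaluates them at a pixel. The frequency schedule k_v = 2^(-(v+2)/2)·π and the 33×33 kernel window are assumptions made for illustration; the patent only fixes the counts and the bandwidth.

```python
import numpy as np

SIGMA = 2 * np.pi  # frequency bandwidth, as stated in the text

def gabor_kernel(v, mu, size=33):
    """One complex Gabor kernel psi_j for frequency index v and orientation index mu.
    The frequency schedule below is an assumed example, not taken from the patent."""
    k = (2.0 ** (-(v + 2) / 2.0)) * np.pi          # assumed k_v
    phi = mu * np.pi / 8.0                          # orientation phi_mu
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = x * x + y * y
    envelope = (k * k / SIGMA ** 2) * np.exp(-k * k * r2 / (2 * SIGMA ** 2))
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-SIGMA ** 2 / 2)  # DC-free carrier
    return envelope * wave

# 80 kernels, indexed j = mu + 8*v as in the text.
KERNELS = [gabor_kernel(v, mu) for v in range(10) for mu in range(8)]

def gabor_jet(image, x, y):
    """80-dimensional complex jet at pixel (x, y); assumes (x, y) lies far enough
    from the border that the full kernel window fits inside the image."""
    jet = []
    for psi in KERNELS:
        half = psi.shape[0] // 2
        patch = image[y - half:y + half + 1, x - half:x + half + 1]
        jet.append(np.sum(patch * psi[::-1, ::-1]))  # discrete form of the jet integral
    return np.array(jet)
```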
The off-line template training method of the face characteristic points comprises the following steps:
step 401, selecting N appropriate face images as training samples;
step 402, marking the characteristic points of the face image;
step 403, normalizing the image;
step 404, calculating Gabor characteristics of all samples;
step 405, obtaining similarity among Gabor characteristics of each sample;
$S_\phi(\vec{J}, \vec{J}') = \frac{\sum_j \alpha_j \alpha'_j \cos(\phi_j - \phi'_j - \vec{d}\cdot\vec{k}_j)}{\sqrt{\sum_j \alpha_j^2 \alpha_j'^2}}$
where $\vec{J}$ and $\vec{J}'$ are Gabor features and $\vec{d}$ is the relative displacement between them;
$\vec{d}(\vec{J}, \vec{J}') = \begin{pmatrix} d_x \\ d_y \end{pmatrix} = \frac{1}{\Gamma_{xx}\Gamma_{yy} - \Gamma_{xy}\Gamma_{yx}} \times \begin{pmatrix} \Gamma_{yy} & -\Gamma_{yx} \\ -\Gamma_{xy} & \Gamma_{xx} \end{pmatrix}\begin{pmatrix} \Phi_x \\ \Phi_y \end{pmatrix}$
if $\Gamma_{xx}\Gamma_{yy} - \Gamma_{xy}\Gamma_{yx} \neq 0$, where
$\Phi_x = \sum_j \alpha_j \alpha'_j k_{jx}(\phi_j - \phi'_j)$,
$\Gamma_{xy} = \sum_j \alpha_j \alpha'_j k_{jx} k_{jy}$,
and $\Phi_y$, $\Gamma_{xx}$, $\Gamma_{yx}$ and $\Gamma_{yy}$ are defined similarly;
for each feature point, the pairwise similarities among the N Gabor features are calculated; two features are considered similar when their similarity is greater than a threshold $S_T$, which can be selected by experiment, e.g. $S_T$ = 0.85;
step 406, calculating the number n of similar features of Gabor features of each sample;
step 407, selecting a sample Gabor feature with the maximum n;
step 408, determining whether n is greater than a threshold $n_T$;
if the determination result in step 408 is negative, executing step 411 to process the next feature point, and then returning to step 404 to continue;
if the determination result in step 408 is yes, executing step 409 and adding the Gabor feature to the offline template; each Gabor feature has $n_i$ Gabor features similar to it, and the Gabor feature whose $n_i$ value is the largest and greater than the threshold $n_T$ is added to the sample feature set; $n_T$ is also selected by experiment, e.g. $n_T$ = 2;
step 410, deleting that Gabor feature from the samples and, at the same time, deleting from the samples all Gabor features whose similarity to it is greater than a threshold $S_T'$, where $S_T'$ is greater than $S_T$ and can be chosen as 0.9;
returning to step 405 and iterating over steps 405-409; the above process is repeated for the remaining samples until no sample can be selected;
the final sample feature set, i.e. the feature samples of the face feature points, serves as the offline template of the face features and is provided to the online tracking method.
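The following Python sketch illustrates, under stated assumptions, the greedy template-selection loop of steps 405-410: it repeatedly picks the sample jet with the most similar neighbours, adds it to the offline template, and removes it together with its near-duplicates. The jet_similarity argument is assumed to implement a jet similarity such as formula (9); the thresholds are the example values given in the text.

```python
def build_offline_template(sample_jets, jet_similarity, s_t=0.85, s_t2=0.9, n_t=2):
    """Greedy selection of representative jets for one face feature point.

    sample_jets:    list of Gabor jets computed from the N training samples.
    jet_similarity: callable(jet_a, jet_b) -> similarity, e.g. formula (9).
    Returns the selected offline template (a list of jets)."""
    remaining = list(sample_jets)
    template = []
    while remaining:
        # Steps 405/406: count, for each jet, how many remaining jets are similar to it.
        counts = [sum(1 for other in remaining
                      if other is not jet and jet_similarity(jet, other) > s_t)
                  for jet in remaining]
        best = max(range(len(remaining)), key=lambda i: counts[i])   # step 407
        if counts[best] <= n_t:                                      # step 408
            break                                                    # no representative jet left
        chosen = remaining[best]
        template.append(chosen)                                      # step 409
        # Step 410: drop the chosen jet and everything very similar to it (S_T' = 0.9).
        remaining = [jet for jet in remaining
                     if jet is not chosen and jet_similarity(chosen, jet) <= s_t2]
    return template
```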
The online tracking method comprises the following steps:
step 501, initializing variables and setting parameters, wherein the parameters include but are not limited to image format, resolution, color space and tracking mode;
step 502, inputting a frame of image;
step 503, normalizing the image, and converting the input image into an image with a standard size;
step 504, judging whether to detect again;
if the determination result in the step 504 is yes, executing a step 505, and aligning the reference feature points based on the ASM shape constraint by using the reference shape model;
step 506, aligning global feature points by using a global shape model based on ASM shape constraints;
step 507, aligning local feature points by using a local shape model based on ASM shape constraint;
step 508, updating the online feature template, namely computing the wavelet features at the positions of the obtained face feature points and using them as the online feature template of the face;
step 515, estimating the pose of the human face, and estimating the pose of the human face according to the positions of the six basic points;
returning to the step 502 to circularly execute the steps of the method and executing the step 516 to output the human face characteristic points and the human face posture information;
if the judgment result in the step 504 is negative, executing a step 509, and updating the eye corner point based on the online feature template;
then, step 510 is executed to adjust the eye corner point based on the offline feature template;
then, step 511 is executed to update other feature points;
then executing step 512, updating the average shape of each shape model according to the face pose of the previous frame;
then step 513 is performed to update the global feature points based on the shape constraints;
then, step 514 is executed to update the local feature points based on the shape constraints;
then, the procedure returns to step 508 to continue the steps of the method.
The invention has the beneficial effects that:
1. The invention selects the significant feature points on the human face as the basis of comparison, and the features of these feature points are selected from all 80 Gabor complex coefficients, thereby expressing complete face information and maximizing the differences between different faces, so that face comparison has better accuracy and robustness.
2. The face comparison method of the invention eliminates the influence of facial expression and pose and judges the authenticity of the face being compared, so that tracking and comparison have higher accuracy, precision and robustness.
3. With the method of the invention, it can be judged whether the current input is a real face or a static photo.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
FIG. 1 is a block diagram of the face tracking method of the present invention;
FIG. 2 is a schematic diagram of a face feature point according to the present invention;
FIG. 3 is a flow chart of a multi-layered structure human face model training method of the present invention;
FIG. 4 is a flowchart of an off-line template training method for the facial feature points of the present invention;
FIG. 5 is a flow chart of a face tracking method of the present invention;
fig. 6 is a flowchart of the face comparison method of the present invention.
Detailed Description
In order to make the technical means, the creation characteristics, the achievement purposes and the effects of the invention easy to understand, the invention is further explained below.
The face comparison method of the invention mainly comprises two parts: a training part, i.e. a registration part, and a comparison part. Both the training part and the comparison part detect and track facial features in order to obtain accurate positions of the face feature points.
Fig. 1-5 show a specific method for tracking and acquiring feature points by face detection. The following is a detailed description:
FIG. 1 shows the component framework of the tracking method of the present invention. The face tracking method comprises an off-line training method 102 and an on-line tracking method 101. The offline training method 102 comprises: a multi-layer structure human face model training method 1021 and an off-line template training method for human face feature points 1022; the former provides the face model 103 for the online tracking method 101, and the latter provides the face feature point offline template 104 for the face tracking method 101.
Fig. 2 is a schematic diagram of the facial feature points of the present invention. FIG. 3 is a flowchart of a multi-layered structure human face model training method of the present invention. The multi-layer structure human face model training method of the present invention is described in detail below with reference to fig. 2 and 3.
The facial features of people have great similarity, and the relative motion of the feature points expresses the change of facial expressions and facial poses. Given the feature points of the faces, the face model is represented by the statistical relationship of the face feature point set, i.e. a Point Distribution Model (PDM) can be constructed to express the possible shape change of the face.
The invention is based on the principle of ASM, and a multilayer structure face model is obtained from a series of face image training.
The multi-layer structure human face model training method firstly executes step 301, and selects a proper human face image as a training sample. Then, step 302 is executed to mark feature points of the face image.
Steps 3031-3061 are then performed to obtain a reference shape model. The method specifically comprises the following steps: step 3031, forming a shape vector based on the rigid reference points to represent the positions of the reference feature points; then executing step 3041, aligning all shape vectors to a uniform coordinate frame according to a Procrustes transformation; then, step 3051 is performed to obtain shape constraint parameters by the PCA method, and step 3061, a reference shape model is obtained.
Step 3032-3062 is performed, resulting in a global shape model. The method specifically comprises the following steps: step 3032, forming a shape vector based on the global reference points to represent the positions of the global feature points; then, executing step 3042, aligning all the shape vectors to a uniform coordinate frame according to Procrustes transformation; then, step 3052 is executed to obtain shape constraint parameters by the PCA method, and step 3062, a global shape model is obtained.
And step 3033-3063 is executed, and the local shape model is obtained. The method specifically comprises the following steps: step 3033, forming a shape vector based on the local reference points to represent the positions of the local feature points; then, executing step 3043, aligning all the shape vectors to a uniform coordinate frame according to Procrustes transformation; then, step 3053 is performed to obtain shape constraint parameters by the PCA method, and step 3063, a local shape model is obtained.
The calculation method of the steps 3031-3061, the steps 3032-3062 and the steps 3033-3063 comprises the following steps:
denote a person's face shape by vector s:
$s = \bar{s} + Pb$,    (1)
where $\bar{s}$ is the average face shape; P is a set of orthogonal principal shape-variation modes; b is the shape parameter vector.
The existing ASM method searches the face shape through an iteration process, and the positions of all feature points in the iteration are updated simultaneously, namely the mutual influence among the feature points is a simple parallel relation. In view of the complex structure of the human face and the rich expression, the simple parallel mechanism is not enough to describe the interrelationship between the feature points. For example, if the corner of the eye is fixed, the opening and closing of the eye cannot affect the positioning of the characteristic points of the mouth and nose.
The invention organizes the human face characteristic points into a plurality of layers so as to better adapt to different influences of head movement, expression change and the like on the positions of the characteristic points, and the human face characteristic points are called a multilayer structure human face model. The first type is a reference feature point, which is substantially affected only by the head pose, such as the corners of the eyes, nose, etc. The second type is global feature points, which are used to constrain the global shape of the whole face, including reference feature points and other key points, such as mouth corners, eyebrow ends, etc. The third type is local feature points, which are only used to constrain the detail features of the components of the face, such as eyes, mouth, eyebrows, on the contour boundaries, such as contour points of upper and lower lips, upper and lower eyelids, etc., and are mainly affected by the expression changes. Based on the above, the multilayer structure face model constructed by the invention is described as follows:
As described above, the face shape vector s may be represented as $(s_R, s_G, s_L)^T$, where $s_R$, $s_G$ and $s_L$ denote the reference feature points, the global feature points and the local feature points, respectively. Based on this, the face shape model can be divided into a rigid reference shape, a global reference shape, and the following local shapes: left eyebrow, right eyebrow, left eye, right eye, nose, mouth, etc. For the rigid reference shape and the global reference shape, their point distribution models (PDM) can be learned from training data as follows,
$s_R = \bar{s}_R + P_R b_R$    (2)
$s_G = \bar{s}_G + P_G b_G$    (3)
For the local shape models, the i-th local shape vector is $s_{Gi,Li} = \{s_{Gi}, s_{Li}\}$, where $s_{Gi}$ and $s_{Li}$ denote the global and local feature points belonging to the i-th local shape, respectively. In the same way,
$s_{Gi,Li} = \bar{s}_{Gi,Li} + P_{Gi,Li} b_{Gi,Li}$    (4)
the above three formulas (2), (3) and (4) constitute the multi-layer structure human face model of the invention. Wherein each parameter is obtained by training based on the principle of ASM. Fig. 2 shows a preferred set of feature points of the present invention, wherein all star points 201 are reference feature points, all star points 201 and a hollow origin 202 constitute global feature points, and a solid origin 203 is a local feature point.
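As a minimal sketch of how such a point distribution model can be trained (assuming the shapes have already been aligned to a common frame by the Procrustes step), the code below computes the mean shape and the principal variation modes with PCA; the 95% retained-variance cut-off is an assumption for illustration, not a value from the patent.

```python
import numpy as np

def train_point_distribution_model(shapes, variance_kept=0.95):
    """Train a PDM  s = s_mean + P b  from aligned training shapes.

    shapes: array of shape (num_samples, 2 * num_points), each row a face
            shape vector (x1, y1, x2, y2, ...) already Procrustes-aligned.
    Returns (s_mean, P) where the columns of P are the retained modes."""
    shapes = np.asarray(shapes, dtype=float)
    s_mean = shapes.mean(axis=0)
    centered = shapes - s_mean
    # PCA via SVD of the centered data matrix.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values ** 2
    cumulative = np.cumsum(variances) / variances.sum()
    num_modes = int(np.searchsorted(cumulative, variance_kept)) + 1
    P = vt[:num_modes].T                       # orthogonal shape-variation modes
    return s_mean, P

def shape_to_params(s, s_mean, P):
    """b = P^T (s - s_mean): project a shape onto the model."""
    return P.T @ (np.asarray(s, dtype=float) - s_mean)

def params_to_shape(b, s_mean, P):
    """s = s_mean + P b: reconstruct a shape from its parameters."""
    return s_mean + P @ b
```

The same routine, applied to the reference, global and local point subsets, would yield the three model layers of formulas (2)-(4).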
FIG. 4 is a flowchart of the method for training the face feature points of the present invention.
The feature expression of the human face feature point is various, such as a gray scale feature, an edge feature, a wavelet feature and the like. The invention adopts multi-scale and multi-direction Gabor wavelets to model local appearances near the characteristic points and express the human face characteristic points. The feature expression based on the Gabor wavelet has a psychophysical basis of human vision, and has good robustness for expression recognition, face recognition, feature point representation and the like under illumination change and appearance change.
The wavelet characteristic calculation method comprises the following steps:
Given a pixel $\vec{x}$ in a grayscale image $I(\vec{x})$, a series of Gabor coefficients $J_j(\vec{x})$ can express the local appearance near that point and can be defined as:
$J_j(\vec{x}) = \int I(\vec{x}')\,\psi_j(\vec{x}-\vec{x}')\,d^2\vec{x}'$    (5)
where the Gabor kernel $\psi_j$ is a plane wave restricted by a Gaussian envelope function,
$\psi_j(\vec{x}) = \frac{k_j^2}{\sigma^2}\exp\!\left(-\frac{k_j^2 x^2}{2\sigma^2}\right)\left[\exp(i\vec{k}_j\cdot\vec{x}) - \exp\!\left(-\frac{\sigma^2}{2}\right)\right]$    (6)
where the wave vector $\vec{k}_j$ is determined by the frequency $k_v$ and the orientation $\phi_\mu$ (7); the present invention preferably takes v = 0, 1, ..., 9 and μ = 0, 1, ..., 7, with j = μ + 8v, and the frequency bandwidth is set to σ = 2π.
Therefore, the preferred Gabor kernel of the present invention comprises 80 Gabor complex coefficients with 10 frequencies and 8 directions, and is used to express the appearance features near a pixel point. Specifically, a jet vector $\vec{J}$ may be used to represent these coefficients, written as
$J_j = \alpha_j \exp(i\phi_j)$,   j = 0, 1, ..., 79    (8)
where $\alpha_j$ and $\phi_j$ are the amplitude and phase of the j-th Gabor coefficient, respectively.
Given an image, each marked face feature point can be calculated to obtain a jet vector of the Gabor wavelet, and the jet vector expresses the feature of the point. However, for each individual face feature point, not all 80 Gabor complex coefficients are suitable for expressing the feature. In order to express the common characteristics of various human faces, 80 Gabor complex coefficients are required to be screened experimentally. Taking the characteristic point of the mouth angle as an example, the preferred Gabor complex coefficient of the invention is as follows: j-24.
Thus, preferred are the wavelet features used in the method of the present invention.
The off-line template training method of the face feature points of the invention is as follows:
first, step 401 is executed to select N suitable face images as training samples.
Step 402, marking the characteristic points of the face image.
Step 403 performs normalization processing on the image so that the computation conditions of the Gabor features of all the feature points are similar, ensuring the accuracy of feature sampling. According to the positions of the two eyes, the midpoint between the eyes is taken as a reference point, the line connecting the eyes as the horizontal axis of the image, and its perpendicular bisector as the vertical axis; the image is rotated accordingly and simultaneously scaled so that the distance between the two eyes (the interpupillary distance) reaches a specific value. This normalization ensures the accuracy and robustness of the Gabor feature expression.
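A minimal sketch of this eye-based normalization is shown below, assuming OpenCV is available; the 256×256 output size follows the preferred size mentioned later in the description, while the canonical eye positions (fractions of the output size) are assumed values for illustration.

```python
import cv2
import numpy as np

def normalize_face(image, left_eye, right_eye, out_size=256, eye_frac=(0.3, 0.35)):
    """Rotate and scale the image so the eyes land on canonical positions.

    left_eye, right_eye: (x, y) pixel coordinates of the eyes, left_eye being the
                         eye on the left side of the image.
    eye_frac:            assumed canonical eye locations as fractions of out_size."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))        # tilt of the eye line
    eye_dist = np.hypot(rx - lx, ry - ly)
    target_dist = (1.0 - 2 * eye_frac[0]) * out_size        # desired interpupillary distance
    scale = target_dist / eye_dist
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)             # midpoint between the eyes
    M = cv2.getRotationMatrix2D(center, angle, scale)
    # Translate so the eye midpoint maps to its canonical position.
    M[0, 2] += out_size / 2.0 - center[0]
    M[1, 2] += eye_frac[1] * out_size - center[1]
    return cv2.warpAffine(image, M, (out_size, out_size))
```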
Step 404 is then performed to calculate Gabor features for all samples. The specific method comprises the following steps:
The marked feature point coordinates are transformed into the normalized image, and the Gabor feature of each face feature point is calculated according to formulas (5) to (8). Then, for each feature point, a total of N Gabor features, i = 0, ..., N, are obtained.
Then, step 405 is executed to obtain the similarity between Gabor features of each sample; the method comprises the following steps:
Assume that the similarity between Gabor features $\vec{J}$ and $\vec{J}'$ is to be computed; it can be calculated by the following formula:
$S_\phi(\vec{J}, \vec{J}') = \frac{\sum_j \alpha_j \alpha'_j \cos(\phi_j - \phi'_j - \vec{d}\cdot\vec{k}_j)}{\sqrt{\sum_j \alpha_j^2 \alpha_j'^2}}$    (9)
where $\vec{d}$ is the relative displacement between $\vec{J}$ and $\vec{J}'$, which can be obtained by
$\vec{d}(\vec{J}, \vec{J}') = \begin{pmatrix} d_x \\ d_y \end{pmatrix} = \frac{1}{\Gamma_{xx}\Gamma_{yy} - \Gamma_{xy}\Gamma_{yx}} \times \begin{pmatrix} \Gamma_{yy} & -\Gamma_{yx} \\ -\Gamma_{xy} & \Gamma_{xx} \end{pmatrix}\begin{pmatrix} \Phi_x \\ \Phi_y \end{pmatrix}$    (10)
if $\Gamma_{xx}\Gamma_{yy} - \Gamma_{xy}\Gamma_{yx} \neq 0$, where
$\Phi_x = \sum_j \alpha_j \alpha'_j k_{jx}(\phi_j - \phi'_j)$,
$\Gamma_{xy} = \sum_j \alpha_j \alpha'_j k_{jx} k_{jy}$,
and $\Phi_y$, $\Gamma_{xx}$, $\Gamma_{yx}$ and $\Gamma_{yy}$ are defined similarly.
For each feature point, the pairwise similarities among the N Gabor features are calculated according to formulas (9) and (10); two features are considered similar when their similarity is greater than a threshold $S_T$, which can be selected by experiment, e.g. $S_T$ = 0.85.
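The sketch below implements formulas (9) and (10) directly in Python, under the assumption that a jet is stored as an array of 80 complex coefficients together with the 80 wave vectors $\vec{k}_j$ used to compute it; it is meant only to illustrate the displacement-compensated similarity, not to reproduce the patented implementation exactly.

```python
import numpy as np

def jet_displacement(jet1, jet2, k_vectors):
    """Formula (10): estimate the relative displacement d between two jets.
    jet1, jet2: complex arrays of Gabor coefficients; k_vectors: (J, 2) wave vectors."""
    a1, a2 = np.abs(jet1), np.abs(jet2)
    dphi = np.angle(jet1) - np.angle(jet2)
    kx, ky = k_vectors[:, 0], k_vectors[:, 1]
    w = a1 * a2
    phi_x, phi_y = np.sum(w * kx * dphi), np.sum(w * ky * dphi)
    g_xx, g_xy = np.sum(w * kx * kx), np.sum(w * kx * ky)
    g_yx, g_yy = np.sum(w * ky * kx), np.sum(w * ky * ky)
    det = g_xx * g_yy - g_xy * g_yx
    if abs(det) < 1e-12:
        return np.zeros(2)                     # displacement undefined; fall back to zero
    m = np.array([[g_yy, -g_yx], [-g_xy, g_xx]])
    return (m @ np.array([phi_x, phi_y])) / det

def jet_similarity(jet1, jet2, k_vectors):
    """Formula (9): displacement-compensated phase similarity of two jets."""
    d = jet_displacement(jet1, jet2, k_vectors)
    a1, a2 = np.abs(jet1), np.abs(jet2)
    dphi = np.angle(jet1) - np.angle(jet2) - k_vectors @ d
    return float(np.sum(a1 * a2 * np.cos(dphi)) / np.sqrt(np.sum(a1 ** 2 * a2 ** 2)))
```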
Then, step 406 is executed to calculate the number n of similar features of the Gabor features of each sample.
Then step 407 is performed to select the sample Gabor feature with the maximum n.
Then, step 408 is executed to determine whether n is greater than a threshold $n_T$.
If the determination result in the step 408 is negative, step 411 is executed to process the next feature point. Then returns to step 404 to continue the method of the present invention.
If the determination result in step 408 is yes, then step 409 is executed to add the Gabor feature to the offline template. Each Gabor feature has $n_i$ Gabor features similar to it; the Gabor feature whose $n_i$ value is the largest and greater than the threshold $n_T$ is added to the sample feature set. $n_T$ is also selected by experiment, e.g. $n_T$ = 2.
Then, step 410 is executed to delete that Gabor feature from the samples and, at the same time, to delete from the samples all Gabor features whose similarity to it is greater than a threshold $S_T'$, where $S_T'$ should be greater than $S_T$ and can be chosen as 0.9.
The process then returns to step 405, and steps 405-409 are computed iteratively; the above process is repeated for the remaining samples until no sample can be selected.
The final sample feature set, i.e. the feature samples of the face feature points, serves as the offline template of the face features and is provided for online tracking.
FIG. 5 is a flow chart of the face tracking method of the present invention.
The method of the invention comprises the following steps:
and step 501, initializing. This step mainly initializes the engine, including: initialization variables, parameter settings including image format, resolution, color space, tracking mode, etc.
Then, step 502 is executed to input a frame of image. This step is to input one frame of image data according to the format set in step 501.
Then, step 503 is performed to normalize the image. This step is to perform normalization processing on the input image. That is, the input image is converted into an image of a standard size, based on the face information of the previous frame, mainly the positional information of both eyes, and the preferred size may be 256 × 256.
The normalization processing is carried out on the face image, so as to ensure that the calculation conditions of all the feature points are similar, thereby ensuring the accuracy of feature sampling. According to the positions of the two eyes, the midpoint of the two eyes is obtained as a reference point, the line of the two eyes is taken as the horizontal axis of the image, the perpendicular bisector of the line of the two eyes is taken as the vertical axis, the image is rotated, and simultaneously the image is zoomed so that the distance (interpupillary distance) between the two eyes reaches a specific value. The accuracy and robustness of Gabor feature expression can be ensured through the normalization processing.
Step 504 is then performed to determine whether to re-detect. The step is to judge whether to carry out face feature detection again according to the detection result of the previous frame, and if the image is the first frame image, the feature detection is directly carried out.
If the determination result in step 504 is yes, the process proceeds to step 505, and the reference feature points are obtained based on the shape constraint. In this step, the reference shape model 517 is used, and the reference feature points are aligned based on the ASM shape constraint, and the reference feature points do not move due to the change of the expression, such as the canthus and the nose. Please refer to fig. 2 and fig. 3 and the corresponding description for a method for obtaining the reference shape model 517.
Step 505 is a specific method for obtaining the reference feature points based on the shape constraint, which is as follows:
firstly, the image needs to be normalized.
Next, the position of the rigid reference point is determined from the positions of both eyes. And aligning the rigid reference points by utilizing a rigid reference shape model in the face model according to the positions of the eyes to obtain the initial positions of the reference points. And then iteratively updating the shape parameters according to the formula (2) until an iteration termination condition is met, namely obtaining the accurate position of the rigid reference point. In the iteration process, the precision of the rigid reference point is judged according to the similarity between the Gabor characteristic and the offline characteristic template. The method comprises the following specific steps:
(1) For each rigid reference point i, calculate the Gabor feature at its current position $\vec{X}_i$.
(2) According to equations (9) and (10), calculate the similarity between this Gabor feature and each Gabor feature in the offline feature template; take the largest value as the similarity to the template, $S_i$, and obtain the corresponding relative displacement $\vec{d}_i$.
(3) When one of the following conditions is met, end the iteration process; otherwise go to step (4): a) the average similarity of all rigid reference points is less than the average similarity of the previous iteration; b) the absolute displacement of 90% or more of the points is sufficiently small, i.e. smaller than a threshold $d_T$ determined according to the accuracy to be guaranteed, e.g. $d_T$ = 2.
(4) Limit the relative displacement values to reduce abrupt errors, so that $|d_{xi}| \le d_{xT}$ and $|d_{yi}| \le d_{yT}$, where the thresholds $d_{xT}$ and $d_{yT}$ are determined according to the accuracy to be guaranteed, e.g. $d_{xT} = d_{yT} = 10$.
(5) Update the rigid reference point coordinates according to the displacement: $\vec{X}_i = \vec{X}_i + \vec{d}_i$.
(6) Update the shape parameters from the updated coordinates using the rigid reference shape model and formula (2), and obtain new rigid reference point coordinate values from the updated shape parameters.
(7) Increase the iteration count t by 1. If t exceeds the threshold, end the iteration process; otherwise go to step (1).
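As a rough illustration of this iterative fitting loop, the sketch below alternates jet-based point updates with a shape-model projection. It assumes that gabor_jet, jet_similarity and jet_displacement are available (e.g. as in the earlier sketches) and uses an unclamped PDM projection s = s̄ + Pb to stand in for the constraint of formula (2); it is not the patented implementation.

```python
import numpy as np

def align_reference_points(image, init_points, template_jets, k_vectors,
                           s_mean, P, max_iter=10, d_t=2.0, d_max=10.0):
    """Iteratively refine rigid reference point positions (sketch of steps (1)-(7)).

    init_points:   (n_points, 2) initial coordinates.
    template_jets: per-point list of offline template jets.
    s_mean, P:     mean shape (length 2*n_points) and modes of the rigid PDM."""
    points = np.array(init_points, dtype=float)
    prev_avg_sim = -np.inf
    for _ in range(max_iter):
        sims, disps = [], []
        for i, (x, y) in enumerate(points):
            jet = gabor_jet(image, int(round(x)), int(round(y)))            # step (1)
            # Step (2): best match against the offline template of point i.
            best = max(template_jets[i],
                       key=lambda t: jet_similarity(jet, t, k_vectors))
            sims.append(jet_similarity(jet, best, k_vectors))
            disps.append(jet_displacement(jet, best, k_vectors))
        disps = np.clip(np.array(disps), -d_max, d_max)                     # step (4)
        avg_sim = float(np.mean(sims))
        small = np.sum(np.hypot(disps[:, 0], disps[:, 1]) < d_t)
        if avg_sim < prev_avg_sim or small >= 0.9 * len(points):            # step (3)
            break
        prev_avg_sim = avg_sim
        points = points + disps                                             # step (5)
        # Step (6): project onto the rigid reference shape model s = s_mean + P b.
        b = P.T @ (points.ravel() - s_mean)
        points = (s_mean + P @ b).reshape(-1, 2)
    return points
```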
Step 506 is then performed to obtain global feature points based on the shape constraints. This step is to align the global feature points based on the ASM shape constraints using the global shape model 518. The global feature points include 8 reference feature points, and also include other points which are less affected by expressions, such as corners of the mouth, the eyebrows, and the like. For a specific method of obtaining the global shape model 518, please refer to fig. 2 and fig. 3 and their corresponding descriptions.
The specific method of obtaining global feature points based on shape constraints of step 506 is the same as step 505, except that it utilizes the position of the rigid reference points and the global reference shape model, and fixes the position of the rigid reference points unchanged in the iteration.
Then, step 507 is performed to obtain local feature points based on the shape constraint. In this step, local feature points are aligned based on ASM shape constraints by using the local shape model 519 for each local feature of the face. The local feature points of the human face mainly comprise contour points of a left eye, a right eye, a mouth and a nose, for example, the left (right) eye comprises an eye corner, an upper eyelid, a lower eyelid and the like, the mouth comprises two mouth corners, a middle point of an upper lip and a lower lip, a contour point between the middle point of the upper lip and the lower lip and the mouth corner and the like. For a specific method of obtaining the local shape model 519, refer to fig. 2 and 3 and their corresponding descriptions.
The specific method for obtaining the local feature points based on the shape constraint in step 507 is the same as that in step 505, except that a local shape model is used and the position of the global reference point is fixed.
Step 508 is then performed to update the online feature template. In this step, the Gabor wavelet features of the face are computed at the obtained face feature points and used as the new online feature template.
Then step 515 is performed to estimate the face pose. This step estimates the pose of the face according to the positions of 6 basic points, namely the 4 eye corner points and the 2 nose end points.
The invention can construct a face model with a multilayer structure to adapt to changes in facial expression, and can also construct face shape models at different angles to adapt to changes in face angle, which will not be described in detail again here.
However, the constructed face model can only sample limited angles, such as a front face, a left face 45 degrees, a right face 45 degrees, and the like. In order to ensure the accuracy of face feature tracking, the angle of the face needs to be estimated to select a proper face shape model, and angle compensation is performed on the face shape model. The invention can better estimate the face angle according to the position of the rigid reference characteristic point of the face, and the description is as follows.
In order to reduce the influence of facial expression, standard feature points of the face are selected to estimate the face pose, with the 4 eye corner points and the 2 nose end points taken as references. To estimate the pose of a face, the three-dimensional coordinates of these six points must be initialized first. In general, the three-dimensional coordinates $X_i = (x_i, y_i, z_i)$ of the feature points can be taken from a generic three-dimensional face model; in practical application, the user can be required to face the camera so that a frontal face image is obtained, the $x_i$ and $y_i$ values are then automatically adjusted to the user according to the feature point detection results, and the depth values are still approximated by those of the three-dimensional model. Let the face pose parameters be $\alpha_{face} = (\sigma_{pan}, \phi_{tilt}, \kappa_{swing}, \lambda)$, where $(\sigma_{pan}, \phi_{tilt}, \kappa_{swing})$ are the Euler angles of the face in three directions and λ is the scale of the face size. The specific steps of estimating the face pose in step 515 are as follows:
1) Construct N triangles. Any three non-collinear feature points form a triangle $T_i$; for each $T_i$, a local coordinate system $C_i$ is constructed.
2) Obtain a projection matrix M from each triangle. The relationship between the image coordinates and the local coordinate system $C_i$ can be expressed as
$\begin{pmatrix} c - c_0 \\ r - r_0 \end{pmatrix} = M \begin{pmatrix} x_t - x_{t0} \\ y_t - y_{t0} \end{pmatrix}$    (11)
where (c, r) are the projected image coordinates of the three-dimensional point $(x_t, y_t, 0)$ in coordinate system $C_i$, $(c_0, r_0)$ are those of the reference point $(x_{t0}, y_{t0}, 0)$, and M is a 2 × 2 projection matrix. By limiting the Euler angles to a bounded range, two sets of face pose parameters can be recovered from M, generating complete projection matrices $P_i$, but only one of them is correct.
3) Calculate the projection deviation of each complete projection matrix. The three-dimensional coordinates of the feature points are projected into the image by the complete projection matrix $P_i$, giving the deviation $d_{error}$ from the actual image coordinates of the feature points. If $d_{error}$ is larger than a threshold d, the matrix is deleted; otherwise the matrix is retained and its weight is set to $\omega_i = (d - d_{error})^2$.
4) Weight the results to obtain the final estimate. From the N triangles, K complete projection matrices $P_i$, i = 1, ..., K, and their corresponding weights $\omega_i$, i = 1, ..., K, are finally obtained. For each $P_i$, a unique set of parameters $\alpha_i = (\sigma_{pan}, \phi_{tilt}, \kappa_{swing}, \lambda)$ is available. The final face pose parameters are:
$\alpha_{face} = \frac{\sum_{i=1}^{K} \alpha_i \omega_i}{\sum_{i=1}^{K} \omega_i}$    (12)
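The weighting of formula (12) can be illustrated by the short sketch below, which assumes each triangle hypothesis has already yielded a pose parameter vector and a reprojection deviation; the threshold value passed in is an assumed example.

```python
import numpy as np

def fuse_pose_hypotheses(poses, deviations, d_threshold):
    """Formula (12): weighted average of per-triangle pose estimates.

    poses:       (N, 4) array of candidate (pan, tilt, swing, scale) parameters.
    deviations:  (N,) array of reprojection deviations d_error for each candidate.
    d_threshold: candidates with d_error > d_threshold are discarded (step 3)."""
    poses = np.asarray(poses, dtype=float)
    deviations = np.asarray(deviations, dtype=float)
    keep = deviations <= d_threshold
    if not np.any(keep):
        raise ValueError("no pose hypothesis passed the deviation threshold")
    weights = (d_threshold - deviations[keep]) ** 2          # omega_i = (d - d_error)^2
    return (poses[keep] * weights[:, None]).sum(axis=0) / weights.sum()

# Hypothetical usage with three triangle estimates:
# fuse_pose_hypotheses([[5, -2, 0, 1.0], [7, -1, 1, 1.1], [40, 20, 5, 0.9]],
#                      [0.8, 1.2, 6.0], d_threshold=3.0)
```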
then, the process returns to step 502 to execute the steps of the method in a loop, and step 516 is executed to output the face feature points and the face pose information.
If the determination result in the step 504 is negative, then step 509 is executed to update the eye-point based on the online template. In the step, the displacement of 4 eye corner points is calculated based on the comparison of the online template and the wavelet characteristics of the last frame position of the characteristic points, so that the new position of the eye corner is obtained. The specific obtaining method in step 509 is as follows:
(1) carrying out normalization preprocessing on the image according to the positions of the two eyes of the previous frame;
(2) Update the eye corner feature points among the rigid reference points according to the online feature template: for each eye corner feature point, compute its Gabor feature in the current image; then, according to formula (10), compute the displacement between this feature and the corresponding online feature template, and update the eye corner feature point position by adding this displacement.
step 510 is then performed to adjust the eye corner points based on the offline feature template. The method comprises the steps of calculating the distance and the similarity between an off-line training characteristic template and an on-line characteristic template, and modifying the eye angle position according to the distance and the similarity to obtain a new position.
The specific method for obtaining the offline feature template is shown in fig. 4 and the corresponding description thereof.
The specific calculation method of step 510 is to re-correct the eye corner feature points according to the offline feature template: for each eye corner feature point, calculate according to formulas (9) and (10) the similarity $S'_i$ and the displacement between the online feature template and the offline feature template, and correct the eye corner feature point position further according to them, where ε is a similarity adjustment value set according to the precision requirement, preferably ε = 0.55.
Then, step 511 is executed to update the other feature points. First, the average displacement between the new eye corner feature point positions and those of the previous frame is calculated as an initial estimate of the rigid motion of the face, and the coordinates of all the other feature points are updated by this displacement.
Then, for each feature point, steps 509 and 510 are repeated to update the positions of the other feature points apart from the eye corner feature points.
Then, step 512 is executed to update the average shape of each shape model according to the face pose of the previous frame. The method comprises the following steps of carrying out error compensation according to the human face posture estimated in the previous frame, and updating the shape model of the human face to obtain the shape model under the posture.
Step 513 is then performed to update the global feature points based on the shape constraints. The method comprises the following steps of carrying out shape constraint on the global feature points according to a compensated global shape model to obtain shape parameters, and obtaining accurate global feature points according to the shape parameters. This step is to update the locations of the global feature points based on the shape model constraints updated in step 512.
Step 514 is then performed to update the local feature points based on the shape constraints. In the step, the global feature points are not updated in the process aiming at each local feature of the human face. This step is to update the locations of its local feature points based on the shape model constraints updated in step 512.
Then, step 508 is executed to calculate the Gabor features of all feature points as the new online feature template.
The detection and positioning of the face feature points are completed in the above process according to the detected face and the positions of the eyes. Because every face is different, the similarity between the Gabor features of its feature points and the offline feature template also differs. Therefore, Gabor features are obtained at the current face feature point positions and used as the feature template for face tracking in subsequent frames, i.e. the online feature template, which improves the efficiency and precision of face feature tracking.
Fig. 6 is a flowchart of the face comparison method of the present invention. The method of the invention comprises the following steps:
step 601, tracking the face and acquiring feature points. The face in the input video or the real-time camera picture is processed to obtain accurate feature point positions. The detailed method is described in figs. 1-5 and the corresponding description.
It should be noted that the face features selected by the tracking part of the present invention are features common to all faces, such as the 28 feature points shown in fig. 2.
Then, step 602 is executed to check the image quality and determine whether the conditions are satisfied. In this step, the quality of the image obtained in step 601 is evaluated, and it is determined whether the image and the extracted feature points satisfy the conditions for registration or comparison. The checked parameters include the brightness of the image, the uniformity of illumination, and the like. A minimal quality gate of this kind is sketched below.
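The patent names brightness and illumination uniformity as checked parameters but not their thresholds; the sketch below is one possible gate with illustrative limits, not values from the patent.

```python
# Hedged sketch of step 602: accept a frame only if its mean brightness lies in a
# plausible range and the left/right halves are lit roughly evenly.
import numpy as np

def image_quality_ok(gray, min_brightness=60.0, max_brightness=200.0,
                     max_illumination_ratio=2.5):
    """gray: 2-D array (grayscale face image). True when step 602's condition holds."""
    gray = np.asarray(gray, dtype=float)
    mean_brightness = gray.mean()
    h, w = gray.shape
    left, right = gray[:, : w // 2].mean(), gray[:, w // 2:].mean()
    uniformity = max(left, right) / (min(left, right) + 1e-6)
    return (min_brightness <= mean_brightness <= max_brightness
            and uniformity <= max_illumination_ratio)
```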
If the determination result in the step 602 is negative, go to step 610.
If the determination result in step 602 is yes, step 603 is executed to extract detailed face feature data. It should be noted that, in order to fully express the differences between different faces, appropriate face feature points need to be extracted so as to fully express the face information. The invention selects salient feature points on the face as the comparison basis; in addition to the 28 feature points shown in fig. 2, it adds the midpoint between the two eyebrows, the nasion (i.e. the midpoint between the two eyes), the nose tip, and the like. The selection of feature points can be adjusted according to the requirements of precision, computational performance, and so on. For the features of the face feature points, all 80 Gabor complex coefficients in formula (8) are used, so that complete face information is expressed and the differences between different faces are maximized. The specific method of step 603 is:
According to the accurate face feature point positions obtained by face detection and tracking, the positions of the other selected face feature points are obtained by interpolation, for example: the nasion is the midpoint of the two eye positions, the nose tip is the center point of the four nose points, and so on.
The image is normalized according to the positions of the two eyes.
The Gabor feature $\vec{J}_i$ of each face feature point i is calculated according to formula (8).
The Gabor features of all the feature points form the face feature template $G = \{\vec{J}_i\}$, i = 1, ..., N, where N is the number of selected face feature points. (A code sketch of this extraction step follows below.)
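The sketch below interpolates the extra comparison points and samples an 80-coefficient Gabor jet at each feature point. The 10-frequency, 8-orientation layout and σ = 2π follow the description; the frequency schedule k_v, the orientation spacing, and the landmark dictionary keys are assumptions (the patent gives the exact values in formula images not reproduced here).

```python
# Hedged sketch of step 603: interpolate extra points, then sample a Gabor jet
# (formula (8)) at each feature point. Assumed: k_v = 2**(-(v + 2) / 2) * pi and
# phi_mu = mu * pi / 8; sigma = 2*pi as stated in the description.
import numpy as np

SIGMA = 2.0 * np.pi

def interpolated_points(landmarks):
    """Extra points named in the text: nasion = midpoint of the eye centers,
    nose tip = center of the four nose points. Dictionary keys are illustrative."""
    nasion = np.mean([landmarks["left_eye"], landmarks["right_eye"]], axis=0)
    nose_tip = np.mean(np.asarray(landmarks["nose_points"], dtype=float), axis=0)
    return {"nasion": nasion, "nose_tip": nose_tip}

def gabor_kernel(v, mu, size=33):
    """Complex Gabor kernel psi_j for frequency index v and orientation index mu."""
    k = (2.0 ** (-(v + 2) / 2.0)) * np.pi          # assumed frequency schedule
    phi = mu * np.pi / 8.0                         # assumed orientation spacing
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    envelope = (k * k / SIGMA ** 2) * np.exp(-(k * k) * (x * x + y * y)
                                             / (2.0 * SIGMA ** 2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-SIGMA ** 2 / 2.0)
    return envelope * carrier

def gabor_jet(gray, cx, cy, size=33):
    """80 complex coefficients J_j, j = mu + 8*v, sampled at pixel (cx, cy).
    Assumes the point lies at least size//2 pixels from the image border."""
    half = size // 2
    patch = np.asarray(gray, dtype=float)[cy - half:cy + half + 1,
                                          cx - half:cx + half + 1]
    jet = np.empty(80, dtype=complex)
    for v in range(10):
        for mu in range(8):
            # Flip the kernel so the sum matches the convolution in formula (8).
            jet[mu + 8 * v] = np.sum(patch * gabor_kernel(v, mu, size)[::-1, ::-1])
    return jet

def face_feature_template(gray, points):
    """Face feature data G = {J_i}, i = 1..N: one jet per selected feature point."""
    return [gabor_jet(gray, int(round(x)), int(round(y))) for x, y in points]
```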
Then, either face registration (step 604) or face comparison (step 605) is performed.
Step 604, face registration is to store the face feature data in the face database. The specific method comprises the following steps:
The detailed face feature data obtained in step 603 is compared with the person's existing face feature template library. If the similarity S is greater than $S_T$, the feature is not added; otherwise the feature is added to the person's face feature template library $\{G^j\}$, j = 0, ..., M, where M is the number of feature templates of the person, and stored in the database. The threshold $S_T$ is chosen experimentally. The specific calculation method of the similarity S is as follows (a code sketch of this registration decision is given after the list):
(1) for each feature template $G^j$, j = 0, ..., M, the similarity $S_{ji}$ between the features $\vec{J}_i$ of the input face and the template features $\vec{J}_i^j$ is calculated according to formula (9);
(2) the similarity between the input face and the feature template $G^j$ is calculated as $S_j = \frac{1}{N}\sum_i S_{ji}$;
(3) the similarity between the input face and the person's registered face is calculated as $S = \max_j\{S_j\}$.
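A sketch of the registration decision follows; the jet similarity here drops the displacement term of formula (9) (i.e. d = 0) for brevity, and the threshold value 0.8 is only an illustrative stand-in for the experimentally chosen $S_T$.

```python
# Hedged sketch of step 604: store a new template only if it is not already
# well represented in the person's template library.
import numpy as np

def jet_similarity(jet_a, jet_b):
    """Phase-sensitive similarity of two Gabor jets, displacement term dropped."""
    amp_a, amp_b = np.abs(jet_a), np.abs(jet_b)
    phase_diff = np.angle(jet_a) - np.angle(jet_b)
    return (np.sum(amp_a * amp_b * np.cos(phase_diff))
            / np.sqrt(np.sum(amp_a ** 2) * np.sum(amp_b ** 2)))

def template_similarity(face_a, face_b):
    """S_j = (1/N) * sum_i S_ji between two face feature templates."""
    return float(np.mean([jet_similarity(a, b) for a, b in zip(face_a, face_b)]))

def register_face(new_template, template_library, s_t=0.8):
    """Append new_template to template_library unless S = max_j S_j > s_t."""
    if template_library:
        s = max(template_similarity(new_template, t) for t in template_library)
        if s > s_t:
            return False   # already represented; do not store
    template_library.append(new_template)
    return True
```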
After step 604 is performed, step 606 is executed to exit.
Step 605 is the face comparison: the face feature data is compared with the feature data of each face in the face database to obtain the similarity, and the similarity value between the face feature data and each face in the database is stored. Assuming the database holds the feature template libraries of K faces, the specific method is as follows (a code sketch of the database sweep follows the list):
(1) select the feature template library $\{G^{kj}\}$ of face k in the database, k = 0, ..., K;
(2) for each feature template $G^{kj}$, j = 0, ..., M, calculate the similarity $S_{kji}$ between the features $\vec{J}_i$ of the input face and the template features $\vec{J}_i^{kj}$ according to formula (9);
(3) calculate the similarity between the input face and the feature template $G^{kj}$ as $S_{kj} = \frac{1}{N}\sum_i S_{kji}$;
(4) calculate the similarity between the input face and face k as $S_k = \max_j\{S_{kj}\}$;
(5) repeat steps (1) to (4) to obtain the similarity between the input face and all K faces in the database, and take the largest one, $S_{max} = \max_k\{S_k\}$, to obtain the corresponding face k'.
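A sketch of this sweep is given below; it reuses template_similarity from the registration sketch above, passed in as a parameter, and assumes non-empty template libraries.

```python
# Hedged sketch of step 605: keep S_k = max_j S_kj per database face and return
# S_max together with the best-matching face index k'.
def compare_with_database(input_template, database, template_similarity):
    """database: sequence over faces k of template libraries (lists of templates).

    Returns (s_max, k_best, scores) where scores[k] = S_k."""
    scores = []
    for template_library in database:
        scores.append(max(template_similarity(input_template, t)
                          for t in template_library))
    k_best = max(range(len(scores)), key=scores.__getitem__)
    return scores[k_best], k_best, scores
```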
Step 607 is then executed to determine whether a matching face has been found. When the similarity value obtained in step 605 exceeds the set threshold, a matching face is considered found. Let δ be the similarity threshold, which can be determined experimentally. If $S_{max} > \delta$, the input face is considered to match face k' in the database; otherwise, no matching face exists in the database.
If the determination result in the step 607 is negative, go to the step 610.
If the determination result in step 607 is yes, the process continues to step 608 to determine whether the expression changes significantly. The face feature points of consecutive frames are analyzed, for example the opening and closing of the mouth and of the eyes, to judge whether the facial expression changes significantly. This step determines whether the current input is a real person or a static photograph: if there is no significant change in expression, the current input is considered a still photograph; conversely, if the expression changes significantly, the current input is considered a real face. A simple sketch of such a check is given below.
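One simple way to realize this check is sketched here; the landmark names and the 0.15 variation threshold are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of step 608: a photograph keeps the mouth and eye openings nearly
# constant across frames, while a live face shows variation (blinks, mouth movement).
import numpy as np

def opening_ratio(top, bottom, left, right):
    """Vertical opening normalized by the horizontal extent (eye or mouth)."""
    vertical = np.linalg.norm(np.subtract(top, bottom))
    horizontal = np.linalg.norm(np.subtract(left, right)) + 1e-6
    return vertical / horizontal

def expression_changed(frames, threshold=0.15):
    """frames: per-frame dicts of (x, y) landmarks for the mouth and one eye."""
    mouth = [opening_ratio(f["mouth_top"], f["mouth_bottom"],
                           f["mouth_left"], f["mouth_right"]) for f in frames]
    eyes = [opening_ratio(f["eye_top"], f["eye_bottom"],
                          f["eye_left"], f["eye_right"]) for f in frames]
    return (max(mouth) - min(mouth) > threshold
            or max(eyes) - min(eyes) > threshold)
```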
If the determination result in the step 608 is negative, the step 610 is executed.
If the judgment result in step 608 is yes, step 609 is executed to output the matched faces. In this step, multiple matched faces may be output, and the output order can be defined, for example: in descending order of similarity, in ascending order of similarity, or in some other defined order.
Step 606 is then executed, exiting.
Step 610 determines whether the exit condition is satisfied. The invention can set several exit conditions, for example: the video processing time exceeds a certain length, or no matching face has been found after a certain number of comparisons between the face and the database, and the like.
The invention selects salient feature points on the face as the comparison basis, for example: the midpoint between the two eyebrows, the nasion (i.e. the midpoint between the two eyes), the tip of the nose, and so on. The selection of feature points can be adjusted according to the requirements of precision, computational performance, and the like. For the features of the face feature points, all 80 Gabor complex coefficients in formula (8) are used, so that complete face information is expressed and the differences between different faces are maximized. This gives the face comparison better accuracy and robustness.
The face comparison method of the invention eliminates the influence of facial expression and pose, judges the authenticity of the face during comparison, and achieves higher tracking and comparison accuracy, precision, and robustness.
By using the method and the device, whether the current input is a real face or a static photo can be judged.
The above description and drawings are only for clarity and understanding of the present invention, and those skilled in the art should be able to add or subtract certain steps or make simple changes to certain steps, and all such simple changes and additions fall within the scope of the present invention.

Claims (10)

1. A face comparison method, comprising:
step 601, tracking a human face to acquire feature points;
step 603, extracting detailed face feature data;
step 605, comparing the human face, namely comparing the human face feature data with the feature data of each human face in the human face database to obtain the similarity of the human face feature data and each human face in the human face database; the specific method comprises the following steps:
(1) selecting the feature template library $\{G^{kj}\}$ of a face k in the database, k = 0, ..., K;
(2) for each feature template $G^{kj}$, j = 0, ..., M, calculating the similarity $S_{kji}$ between the features $\vec{J}_i$ of the input face and the template features $\vec{J}_i^{kj}$;
(3) calculating the similarity between the input face and the feature template $G^{kj}$ as $S_{kj} = \frac{1}{N}\sum_i S_{kji}$;
(4) calculating the similarity between the input face and the face k as $S_k = \max_j\{S_{kj}\}$;
(5) repeating the steps (1) to (4) to obtain the similarity between the input face and all K faces in the database, and taking the largest one, $S_{max} = \max_k\{S_k\}$, to obtain the corresponding face k';
step 607, judging whether a matched face is found; δ is the similarity threshold, and if $S_{max} > \delta$, the input face is judged to match the face k' in the database;
step 608, judging whether the expression changes significantly; the analysis is performed on the face feature points of consecutive frames, including but not limited to the opening and closing of the mouth and of the eyes, to judge whether the facial expression changes significantly;
when the facial expression changes significantly, step 609 is executed to output the matched face.
2. The method of claim 1, wherein the specific method for extracting the detailed face feature data in the step 603 is as follows:
interpolating, according to the accurate face feature point positions obtained by the face detection and tracking of step 601, the positions of the other selected face feature points;
normalizing the image according to the positions of the two eyes;
calculating the Gabor feature $\vec{J}_i$ of each face feature point i;
forming the face feature data $G = \{\vec{J}_i\}$, i = 1, ..., N, from the Gabor features of all the feature points, where N is the number of selected face feature points.
3. The method of claim 2, wherein the face feature points are salient feature points on the face, and all 80 Gabor complex coefficients are selected as the features of the face feature points to express complete face information and fully express the differences between different faces.
4. The method of claim 1, wherein in the face tracking of step 601, the face features selected for the feature points are features common to all faces.
5. The method of claim 1, further comprising a step 604 of face registration, in which the face feature data is stored in the face database; the specific method comprises the following steps:
comparing the detailed face feature data obtained in step 603 with the person's existing face feature template library; if the similarity S is greater than $S_T$, the feature is not added; otherwise the feature is added to the face feature template library $\{G^j\}$, j = 0, ..., M, where M is the number of feature templates of the person, and stored in the database; the specific calculation method of the similarity S is:
(1) for each feature template $G^j$, j = 0, ..., M, calculating the similarity $S_{ji}$ between the features $\vec{J}_i$ of the input face and the template features $\vec{J}_i^j$ according to formula (9);
(2) calculating the similarity between the input face and the feature template $G^j$ as $S_j = \frac{1}{N}\sum_i S_{ji}$;
(3) calculating the similarity between the input face and the person's face as $S = \max_j\{S_j\}$.
6. The method of claim 1, wherein in step 601 the face tracking and the acquisition of feature points specifically comprise an offline training method and an online tracking method;
the off-line training method comprises a multi-layer structure face model training method and an off-line template training method of face characteristic points;
the multilayer structure human face model training method provides a human face model for the online tracking method, and the off-line template training method provides an off-line template of human face characteristic points for the online tracking method;
the multilayer structure face model training method comprises the following steps:
step 301, selecting suitable face images as training samples;
step 302, marking the feature points of the face images;
steps 3031-3061, obtaining a reference shape model;
steps 3032-3062, obtaining a global shape model;
steps 3033-3063, obtaining a local shape model.
7. The method of claim 6, wherein the reference shape model, the global shape model and the local shape model are obtained by:
denoting a face shape vector by s:
$s = \bar{s} + Pb$,
wherein $\bar{s}$ is the average face shape, P is a set of orthogonal principal shape variation modes, and b is a shape parameter vector;
the face shape vector s is expressed as $s = (s_R, s_G, s_L)^T$, wherein $s_R$, $s_G$ and $s_L$ respectively represent the reference feature points, the global feature points and the local feature points;
the point distribution model of the rigid reference shape is $s_R = \bar{s}_R + P_R b_R$;
the point distribution model of the global shape is $s_G = \bar{s}_G + P_G b_G$;
the point distribution model of the i-th local shape is $s_{Gi,Li} = \bar{s}_{Gi,Li} + P_{Gi,Li} b_{Gi,Li}$;
the i-th local shape vector is $s_{Gi,Li} = \{s_{Gi}, s_{Li}\}$, wherein $s_{Gi}$ and $s_{Li}$ respectively represent the global and local feature points belonging to the i-th local shape.
8. The method for comparing human faces according to claim 6, wherein the expression method of the human face feature points comprises:
given a grayscale image $I(\vec{x})$, a series of Gabor coefficients $J_j(\vec{x})$ at a pixel $\vec{x}$ can express the local appearance near that point and can be defined as:
$J_j(\vec{x}) = \int I(\vec{x}\,')\,\psi_j(\vec{x} - \vec{x}\,')\,d^2\vec{x}\,'$
wherein the Gabor kernel $\psi_j$ is a plane wave restricted by a Gaussian envelope function,
$\psi_j(\vec{x}) = \frac{k_j^2}{\sigma^2}\exp\!\left(-\frac{k_j^2 x^2}{2\sigma^2}\right)\left[\exp(i\,\vec{k}_j\cdot\vec{x}) - \exp\!\left(-\frac{\sigma^2}{2}\right)\right]$
wherein $k_v$ is the frequency and $\varphi_\mu$ the orientation of $\vec{k}_j$; the invention preferably takes v = 0, 1, ..., 9 and μ = 0, 1, ..., 7, with j = μ + 8v, and the frequency bandwidth σ is set to 2π;
the Gabor kernel family therefore consists of 80 Gabor complex coefficients over 10 frequencies and 8 orientations, used to express the appearance near a pixel point; a jet vector $\vec{J} = \{J_j\}$ denotes these coefficients, with $J_j = \alpha_j\exp(i\phi_j)$, j = 0, 1, ..., 79,
wherein $\alpha_j$ and $\phi_j$ are respectively the amplitude and the phase of the j-th Gabor coefficient;
the 80 Gabor complex coefficients are screened experimentally to obtain the wavelet features used to express the face feature points.
9. The method of claim 6, wherein the off-line template training method for the face feature points comprises:
step 401, selecting N appropriate face images as training samples;
step 402, marking the characteristic points of the face image;
step 403, normalizing the image;
step 404, calculating Gabor characteristics of all samples;
step 405, obtaining similarity among Gabor characteristics of each sample;
$S_\phi(\vec{J}, \vec{J}\,') = \frac{\sum_j \alpha_j \alpha'_j \cos(\phi_j - \phi'_j - \vec{d}\cdot\vec{k}_j)}{\sqrt{\sum_j \alpha_j^2 \sum_j \alpha_j'^2}}$
wherein $\vec{J}$ and $\vec{J}\,'$ are Gabor features, and $\vec{d}$ is the relative displacement between $\vec{J}$ and $\vec{J}\,'$:
$\vec{d}(\vec{J}, \vec{J}\,') = \begin{pmatrix} d_x \\ d_y \end{pmatrix} = \frac{1}{\Gamma_{xx}\Gamma_{yy} - \Gamma_{xy}\Gamma_{yx}} \begin{pmatrix} \Gamma_{yy} & -\Gamma_{yx} \\ -\Gamma_{xy} & \Gamma_{xx} \end{pmatrix} \begin{pmatrix} \Phi_x \\ \Phi_y \end{pmatrix}$
if $\Gamma_{xx}\Gamma_{yy} - \Gamma_{xy}\Gamma_{yx} \neq 0$, wherein $\Phi_x = \sum_j \alpha_j \alpha'_j k_{jx}(\phi_j - \phi'_j)$, $\Gamma_{xy} = \sum_j \alpha_j \alpha'_j k_{jx} k_{jy}$, and $\Phi_y$, $\Gamma_{xx}$, $\Gamma_{yx}$ and $\Gamma_{yy}$ are defined similarly;
for each feature point, the pairwise similarities among the N Gabor features are calculated, and two features are considered similar when their similarity is greater than a threshold $S_T$; $S_T$ can be chosen experimentally, for example $S_T$ = 0.85;
step 406, calculating the number n of similar features of Gabor features of each sample;
step 407, selecting a sample Gabor feature with the maximum n;
step 408, determining whether n is greater than $n_T$;
If the determination result in the step 408 is negative, execute the step 411 to process the next feature point, and then return to the step 404 to continue executing;
if the judgment result in step 408 is yes, step 409 is executed and the Gabor feature is added to the offline template: each Gabor feature has $n_i$ Gabor features similar to it, and the Gabor feature whose $n_i$ is largest and greater than a threshold $n_T$ is added to the sample feature set; $n_T$ is also selected experimentally, for example $n_T$ = 2;
step 410, deleting that Gabor feature from the samples, and at the same time deleting from the samples the Gabor features whose similarity to it is greater than a threshold $S_T'$, where $S_T'$ is greater than $S_T$ and can be taken as 0.9;
returning to step 405, steps 405-409 are computed iteratively; the above process is repeated on the remaining samples until no sample can be selected; the final sample feature set, i.e. the feature samples of the face feature point, is used as the offline template of the face feature and provided to the online tracking method.
10. The method of claim 6, wherein the on-line tracking method comprises:
step 501, initializing variables and setting parameters, wherein the parameters include but are not limited to image format, resolution, color space and tracking mode;
step 502, inputting a frame of image;
step 503, normalizing the image, and converting the input image into an image with a standard size;
step 504, judging whether to detect again;
if the determination result in the step 504 is yes, executing a step 505, and aligning the reference feature points based on the ASM shape constraint by using the reference shape model;
step 506, aligning global feature points by using a global shape model based on ASM shape constraints;
step 507, aligning local feature points by using a local shape model based on ASM shape constraint;
step 508, updating the online characteristic template, and updating the wavelet characteristics of the online characteristic template as the online characteristic template of the face according to the position of the obtained face characteristic point;
step 515, estimating the pose of the human face, and estimating the pose of the human face according to the positions of the six basic points;
returning to step 502 to execute the steps of the method in a loop, and executing step 516 to output the face feature points and the face pose information;
if the judgment result in the step 504 is negative, executing a step 509, and updating the eye corner point based on the online feature template;
then, step 510 is executed to adjust the eye corner point based on the offline feature template;
then, step 511 is executed to update other feature points;
then executing step 512, updating the average shape of each shape model according to the face pose of the previous frame;
then step 513 is performed to update the global feature points based on the shape constraints;
then, step 514 is executed to update the local feature points based on the shape constraints;
then, the procedure returns to step 508 to continue the steps of the method.
CN2011100517300A 2011-03-04 2011-03-04 Face comparison method Pending CN102654903A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100517300A CN102654903A (en) 2011-03-04 2011-03-04 Face comparison method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100517300A CN102654903A (en) 2011-03-04 2011-03-04 Face comparison method

Publications (1)

Publication Number Publication Date
CN102654903A true CN102654903A (en) 2012-09-05

Family

ID=46730529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100517300A Pending CN102654903A (en) 2011-03-04 2011-03-04 Face comparison method

Country Status (1)

Country Link
CN (1) CN102654903A (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077376B (en) * 2012-12-30 2016-07-20 信帧电子技术(北京)有限公司 Method for distinguishing is known again based on the human body image in video image
CN103077376A (en) * 2012-12-30 2013-05-01 信帧电子技术(北京)有限公司 Method for re-identifying human body image based on video image
CN103679159A (en) * 2013-12-31 2014-03-26 海信集团有限公司 Face recognition method
CN104765739B (en) * 2014-01-06 2018-11-02 南京宜开数据分析技术有限公司 Extensive face database search method based on shape space
CN104765739A (en) * 2014-01-06 2015-07-08 南京宜开数据分析技术有限公司 Large-scale face database searching method based on shape space
WO2017032243A1 (en) * 2015-08-26 2017-03-02 阿里巴巴集团控股有限公司 Image feature extraction method, apparatus, terminal device, and system
WO2017166652A1 (en) * 2016-03-29 2017-10-05 乐视控股(北京)有限公司 Permission management method and system for application of mobile device
CN106022313A (en) * 2016-06-16 2016-10-12 湖南文理学院 Scene-automatically adaptable face recognition method
CN106980819A (en) * 2017-03-03 2017-07-25 竹间智能科技(上海)有限公司 Similarity judgement system based on human face five-sense-organ
CN106980845A (en) * 2017-04-24 2017-07-25 西安电子科技大学 The crucial independent positioning method of face based on structured modeling
CN107330359A (en) * 2017-05-23 2017-11-07 深圳市深网视界科技有限公司 A kind of method and apparatus of face contrast
CN107437067A (en) * 2017-07-11 2017-12-05 广东欧珀移动通信有限公司 Face liveness detection method and related products
WO2019011073A1 (en) * 2017-07-11 2019-01-17 Oppo广东移动通信有限公司 Human face live detection method and related product
CN107609535A (en) * 2017-09-28 2018-01-19 天津大学 Face datection, Attitude estimation and localization method based on shared pool hybrid coordination tree model
WO2019061662A1 (en) * 2017-09-30 2019-04-04 平安科技(深圳)有限公司 Electronic device, insured domestic animal recognition method and computer readable storage medium
CN108133177A (en) * 2017-12-06 2018-06-08 山东超越数控电子股份有限公司 A kind of method for improving Face datection reliability
CN108121957A (en) * 2017-12-19 2018-06-05 北京麒麟合盛网络技术有限公司 The method for pushing and device of U.S. face material
CN108121957B (en) * 2017-12-19 2021-09-03 麒麟合盛网络技术股份有限公司 Method and device for pushing beauty material
CN110472459A (en) * 2018-05-11 2019-11-19 华为技术有限公司 Method and device for extracting feature points
CN109165307A (en) * 2018-09-19 2019-01-08 腾讯科技(深圳)有限公司 A kind of characteristic key method, apparatus and storage medium
CN109922355A (en) * 2019-03-29 2019-06-21 广州虎牙信息科技有限公司 Virtual image live broadcasting method, virtual image live broadcast device and electronic equipment
CN109922355B (en) * 2019-03-29 2020-04-17 广州虎牙信息科技有限公司 Live virtual image broadcasting method, live virtual image broadcasting device and electronic equipment
CN110619320A (en) * 2019-09-28 2019-12-27 华东理工大学 Intelligent control method for intelligent bathing machine and bathing machine

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
DD01 Delivery of document by public notice

Addressee: Jing Weilan

Document name: Notification of Publication of the Application for Invention

DD01 Delivery of document by public notice

Addressee: Jing Weilan

Document name: Notification of before Expiration of Request of Examination as to Substance

DD01 Delivery of document by public notice

Addressee: Jing Weilan

Document name: Notification that Application Deemed to be Withdrawn

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120905