CN1811793A - Automatic positioning method for characteristic points of human faces
- Publication number
- CN1811793A (application CN200610024307A)
- Authority
- CN
- China
- Prior art keywords
- shape
- face
- texture
- chromosome
- image
- Prior art date
- Legal status
- Granted
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention is a fast facial feature point localization method. Given an arbitrary input digital image of a frontal or profile face (with a deflection angle within 45 degrees), the algorithm quickly and reliably locates a large number of salient facial feature points, and its framework can be extended to feature point localization for other objects. The method exploits both the shape and the texture information of the face. Training requires only a set of face images with manually labeled feature points, from which deformable shape and texture models are built. For an arbitrary input face image, the model parameters are first initialized by face detection, the shape coefficients are then optimized with real-time AAM and a genetic algorithm, and finally selected feature points are fine-tuned by edge detection and skin-color detection to reach accurate positions.
Description
Technical Field
The invention belongs to the technical field of machine vision and image processing, and specifically relates to a method for automatically locating feature points in frontal and profile face images.
Technical Background
Accurate and fast facial feature point localization is widely used in face recognition, 3D face reconstruction, and related applications. Feature point localization is generally combined with face detection, which narrows the search region for the feature points and yields a practical system.
In face detection, the best-known method is the Adaboost-based algorithm proposed by Paul Viola and Michael Jones in 2001 [1]. It is a refinement of statistical learning that detects faces by combining a large number of simple classifiers; because each simple classifier uses features that are extremely fast to compute, it fundamentally solves the speed problem of detection. Early feature point localization relied on geometric features and prior knowledge of the face, such as the symmetry of the two eyes or the dark color of the pupils, but such methods are not robust and are sensitive to illumination. In 1995, Cootes et al. exploited the strong positional correlation among facial feature points and proposed the well-known Active Shape Model (ASM) [2], a deformable model of the coordinates of a set of feature points built statistically, with which the overall shape of a face can be searched. However, because only the gray values near each feature point are used as texture, ASM is very sensitive to the initial value and to the background, and many improved algorithms have been proposed to overcome these shortcomings. Well-known examples include the Bayesian shape model [3], which combines a global shape model with local organ shape models to optimize a joint posterior probability distribution, and the Bayesian tangent shape model [4], which computes a maximum a posteriori estimate with the EM algorithm in the tangent space of the shape model. Neither, however, fundamentally overcomes the inherent drawbacks of ASM. In 2001, Cootes et al. extended ASM into the Active Appearance Model (AAM) [5], which combines shape and texture and obtains better results than ASM; but because it has deformable parameters for both shape and texture, its computational cost is high and it easily falls into local minima. Reference [6] extracts feature points by searching first with ASM and then with AAM, but splitting the search into two steps still cannot fully remedy the weaknesses of the two algorithms. In 2004, Simon Baker et al. proposed the real-time Active Appearance Model (Realtime-AAM) [7][8], which first optimizes the shape parameters in the complement space of the texture basis and then computes the texture directly. This improved optimization essentially solves the speed problem of AAM, but it still cannot completely avoid local minima; moreover, because the optimization is based on the error against the average texture, and the texture of the cheek region is relatively flat, both the traditional AAM and the real-time AAM easily fall into local minima when extracting the feature points on the chin.
Compared with the above methods, the main features of the present invention are: (1) a term related to the image edges is added to the cost function of real-time AAM, and the resulting new cost function is taken as the target of a further search for the chin contour feature points, using a genetic algorithm as the search method; (2) separate models are built for faces at different angles, so that face images within a certain range of angles can all be handled; (3) combining image edges and facial skin color, the optimized feature points are searched further and precisely, yielding better results.
Some concepts related to the present invention are introduced below.
1. Shape and Texture Models
Suppose Ω is a training set of N face images. The statistical shape and texture models can be expressed as

$$S = S_0 + \sum_{i=1}^{m} p_i^t S_i, \qquad (1)$$

$$A(x) = A_0(x) + \sum_{i=1}^{n} q_i^t A_i(x). \qquad (2)$$

Here $S_0$ is the mean shape and $S_i$ are the PCA basis vectors of the shape; $A_0$ is the average texture image under the mean shape, as shown in Fig. 2, and $A_i$ are the PCA basis vectors of the texture. In equations (1) and (2), $p_i^t$ and $q_i^t$ are the shape and texture coefficients of the t-th face image, which can be written as vectors $p^t = (p_1^t, p_2^t, \ldots, p_m^t)^T \in \mathbb{R}^m$ and $q^t = (q_1^t, q_2^t, \ldots, q_n^t)^T \in \mathbb{R}^n$. In the following, for simplicity, we omit the superscript t on the coefficients.
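As a concrete illustration of equations (1) and (2), the sketch below builds both linear models by PCA. It is a minimal sketch, assuming aligned shape vectors and shape-normalized textures are already stacked row-wise in NumPy arrays; all names are illustrative.

```python
import numpy as np

def build_pca_model(samples, var_kept=0.95):
    """Linear model  x ~= mean + basis @ coeffs  from row-stacked samples."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    _, sing, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = sing ** 2 / (len(samples) - 1)          # per-mode variances lambda_i
    # keep enough modes to explain var_kept of the total variance
    m = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept)) + 1
    return mean, vt[:m].T, eigvals[:m]

# shapes:   N x 2v aligned shape vectors  -> S0, [S1..Sm], lambdas   (eq. 1)
# textures: N x n_pix shape-free textures -> A0, [A1..An], lambdas   (eq. 2)
```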
2. The Real-Time Active Appearance Model (Realtime-AAM)
For a real input photograph I(x), the following objective function of the shape and texture coefficients p, q is minimized, so that the error between the reconstructed face image and the actual image is smallest:

$$\min_{p,q} \sum_{x \in U_0} \Big[ A_0(x) + \sum_{i=1}^{n} q_i A_i(x) - I(W(x|p)) \Big]^2. \qquad (3)$$
The first two terms inside the brackets of equation (3) are the texture reconstructed with parameters q. Likewise, given the shape parameter vector p, the shape S can be reconstructed by equation (1); the region it encloses is U, and W(x|p) denotes the coordinates of all points of $U_0$ after the piecewise affine warp onto U. With the project-out method, the shape parameters can first be iterated in the orthogonal complement space of the texture basis. Because the basis of the complement space is orthogonal to the texture basis, the second term in the brackets of equation (3) vanishes, and the cost function simplifies to

$$\min_{p} \Big\| A_0(x) - I(W(x|p)) \Big\|^2_{\operatorname{span}(A_i)^{\perp}}. \qquad (4)$$
Here $\operatorname{span}(A_i)^{\perp}$ is the complement space of the texture basis.
The iterative steps of the real-time AAM algorithm can be summarized as:
① For the input image I and the initial shape parameters p, compute the texture I(W(x|p)) warped to the mean shape.
② Compute the difference image I(W(x|p)) - A₀(x) and multiply it by the precomputed steepest-descent images (the gradient terms projected out of the texture basis).
③ Compute the increment Δp of the shape parameters, satisfying $\Delta p = H^{-1} \sum_x SD(x)^T \big[ I(W(x|p)) - A_0(x) \big]$, where SD are the steepest-descent images and H their precomputed Gauss-Newton Hessian.
④ Using $W_r(x|p)$ and Δp from the r-th iteration, compute $W_{r+1}(x|p)$ for the next iteration with the Lucas-Kanade algorithm of reference [6]; set r = r + 1.
⑤ Repeat from ① until the convergence condition is met or the maximum number of iterations is reached.
If global translation, rotation, and scale changes of the shape are to be considered, orthogonal basis vectors representing these changes can be appended to the shape basis, and the corresponding global parameters obtained by a similar optimization.
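The following sketch shows one update of the project-out iteration (steps ② and ③ above). It assumes the warped texture and the precomputed matrix $H^{-1}SD^T$ are already available; the names and the precomputation interface are assumptions, not the patent's implementation.

```python
import numpy as np

def project_out_update(I_warped, A0, SD_pinv):
    """One Gauss-Newton update of the project-out iteration.

    I_warped -- input texture I(W(x|p)) warped to the mean shape, flattened
    A0       -- mean texture A0(x), flattened
    SD_pinv  -- precomputed m x n_pixels matrix H^{-1} SD^T, where SD are the
                steepest-descent images projected out of the texture basis
                (an assumed precomputation interface)
    """
    error = I_warped - A0        # step 2: difference image
    return SD_pinv @ error       # step 3: shape increment delta p

# Typical loop (warp() and compose() are assumed helpers):
#   for _ in range(max_iters):
#       dp = project_out_update(warp(I, p), A0, SD_pinv)
#       p = compose(p, dp)       # step 4: update the warp
#       if np.linalg.norm(dp) < eps: break
```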
3. Linear Discriminant Analysis (LDA)
LDA is a commonly used supervised linear dimensionality reduction method for high-dimensional samples of different classes (say d-dimensional, d ≫ 1). It seeks a low-dimensional linear subspace in which the projected samples of each class are distributed more compactly while samples of different classes are scattered farther apart, which facilitates recognition and classification. Taking face images as an example, the procedure is as follows.
First, the N two-dimensional face images are arranged in row or column order into column vectors $x_i \in \mathbb{R}^d$, i = 1, 2, ..., N, so that each image corresponds to one sample in the high-dimensional space. Suppose the samples fall into c classes, with $N_i$ samples in class i. The within-class and between-class scatter matrices are then

$$S_w = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T, \qquad (6)$$

$$S_b = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T, \qquad (7)$$

where $\mu_i$ is the mean of class $X_i$ and $\mu$ is the overall mean.
The basis vectors $w_i$ of the LDA subspace are obtained from the generalized eigenvalue problem

$$S_b w_i = \lambda_i S_w w_i.$$
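A minimal sketch of the LDA computation of equations (6) and (7) and the generalized eigenproblem. For the 10000-dimensional images used later, one would first reduce dimension (e.g., by PCA), so the direct solve below is purely illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def lda_basis(X, labels, k):
    """Top-k solutions of Sb w = lambda Sw w (equations (6)-(7)).
    X: N x d sample matrix, labels: length-N integer class ids (assumed layout)."""
    labels = np.asarray(labels)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter, eq. (6)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)    # between-class scatter, eq. (7)
    # symmetric generalized eigenproblem; a small ridge keeps Sw invertible
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:k]]        # columns = discriminant directions
```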
References
[1] P. Viola, M. Jones. Robust real time object detection. 8th IEEE International Conference on Computer Vision (ICCV), 2001, Vancouver, British Columbia.
[2] Cootes T, Taylor C, Cooper D, et al. Active shape models - their training and application. Computer Vision and Image Understanding, 1995, 61(1): 38-59.
[3] Zhong X, Stan Z, Teoh E K. Bayesian shape model for facial feature extraction and recognition. Pattern Recognition, 2003, 23(12): 2819-2833.
[4] Zhou Y, Gu L, Zhang H J. Bayesian tangent shape model: estimating shape and pose parameters via Bayesian inference. IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[5] Cootes T, Edwards G, Taylor C. Active appearance models. IEEE Trans. Pattern Analysis and Machine Intelligence, 2001, 23(6): 681-685.
[6] Shan S, Gao W, Zhao D, et al. Enhanced active shape models with global texture constraints for image analysis. http://www.jdl.ac.cn/user/sgshan/pub/Shan-ISMIS-2003.pdf, 2004-06-01/2005-04-23.
[7] Matthews I, Baker S. Active appearance models revisited. Int'l J. Computer Vision, 2004, 60(2).
[8] Baker S, Matthews I, Schneider J. Automatic construction of active appearance models as an image coding problem. IEEE Trans. Pattern Analysis and Machine Intelligence, 2004, 26(10): 1380-1384.
Summary of the Invention
The object of the invention is to provide a method for automatically locating facial feature points in digital images. The method handles both frontal and profile faces, where the deflection angle of a profile face must be within 45 degrees.
The method proposed by the invention comprises an offline training part and an online computation part. The offline part builds statistical shape and texture models from training images whose feature points have been labeled by hand; the online part consists of automatic face detection, pose recognition, feature point localization based on the active model via a model optimization algorithm, and a calibration step based on edges and skin color (each step is implemented by a corresponding module). Figure 1 shows the system flowchart. The steps and detailed algorithms are described below.
1. Building the Shape and Texture Models
This module is the offline training part. It requires a number of face images of uniform size, with the predefined feature point coordinates labeled by hand, as in Fig. 2(a).
Shape: the coordinates of the v feature points of each image are arranged into a shape vector $S = (x_1, \ldots, x_v, y_1, \ldots, y_v)$, $S^t \in \mathbb{R}^{2v}$. The different faces are normalized to remove the influence of global affine transformations, so that different shape vectors reflect only the intrinsic shape differences between faces. The normalization steps are as follows:
(a) Subtract the mean from every shape vector, i.e., move to the centroid coordinate frame.
(b) Choose one sample as the initial mean shape and normalize its scale so that |S| = 1.
(c) Denote the initial estimate of the mean shape by $\bar{S}_0$ and define it as the reference frame.
(d) Align all training shapes to the current mean shape by affine transformation.
(e) Recompute the mean shape from the aligned samples.
(f) Align the current mean shape to $\bar{S}_0$ and rescale so that |S| = 1.
(g) If the change of the mean shape is still larger than the given threshold, return to (d). From the normalized shape vectors, the statistical shape model is built with the PCA method of equation (1). The alignment loop is sketched below.
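The normalization steps (a)-(g) can be sketched as the following iterative alignment loop; the (N, v, 2) landmark layout and the SVD-based orthogonal Procrustes rotation are implementation assumptions.

```python
import numpy as np

def align_shapes(points, tol=1e-7, max_iter=100):
    """Iterative alignment of training shapes, steps (a)-(g).
    points: float array of shape (N, v, 2), one row of (x, y) landmarks per image."""
    pts = points - points.mean(axis=1, keepdims=True)     # (a) centroid frame
    mean = pts[0] / np.linalg.norm(pts[0])                 # (b) initial mean, |S| = 1
    ref = mean.copy()                                      # (c) reference frame S0_bar
    for _ in range(max_iter):
        for i in range(len(pts)):                          # (d) align every sample
            u, _, vt = np.linalg.svd(pts[i].T @ mean)      # orthogonal Procrustes
            pts[i] = (pts[i] @ (u @ vt)) / np.linalg.norm(pts[i])
        new_mean = pts.mean(axis=0)                        # (e) recompute the mean
        u, _, vt = np.linalg.svd(new_mean.T @ ref)         # (f) register mean to S0_bar
        new_mean = (new_mean @ (u @ vt)) / np.linalg.norm(new_mean)
        if np.linalg.norm(new_mean - mean) < tol:          # (g) stop if mean is stable
            return pts, new_mean
        mean = new_mean
    return pts, mean
```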
Before the texture model is built, the texture vector of every image must have the same length: the texture inside the face region of each image is warped to the face region enclosed by the mean shape, so that the shape and texture differences between individuals are separated. The warp can use piecewise affine transformation, with the mesh partition shown in Fig. 2(b); the resulting average texture is shown in Fig. 2(c). The statistical texture model is then built with equation (2).
2. Face Detection and Pose Recognition
The face detection module uses the mature Adaboost method [1] to mark the image subregion containing the face; the pose recognition module uses LDA feature extraction.
For pose recognition, the face images of the same pose form one class. The within-class scatter matrix $S_w$ and the between-class scatter matrix $S_b$ are computed by equations (6) and (7), giving the LDA basis for pose recognition. Each sample is projected onto this basis to obtain its reduced-dimension features, and the mean over each class is taken as the feature of that pose. At test time, the detected face region is projected onto the LDA basis to obtain its reduced features, which are compared with the stored pose features and classified by the nearest-neighbor rule to obtain the pose of the face image.
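A minimal sketch of the test-time pose decision, assuming the LDA basis and the per-pose mean features are already trained; the interface is illustrative.

```python
import numpy as np

def classify_pose(face_vec, lda_basis, pose_means):
    """Nearest-neighbour pose decision in the LDA subspace.
    face_vec: flattened grayscale face; pose_means: {angle: mean feature}."""
    feat = lda_basis.T @ face_vec                    # project to the LDA features
    return min(pose_means, key=lambda a: np.linalg.norm(feat - pose_means[a]))
```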
3. The Model Optimization Algorithm
First, the real-time AAM algorithm is used to optimize the cost function (4). Optimizing the shape first in the complement space of the texture simplifies the computation and improves the speed and accuracy of convergence, but every input image is matched against the average texture, so the feature point positions are not accurate enough; in particular, for test faces outside the training set, the chin contour usually fails to converge. A term representing the edge information at the feature points is therefore added to the AAM cost function (4), and a genetic algorithm is used to optimize the new cost function.
The edge image is extracted with a 9×9 Laplacian high-pass filter kernel $K_{Laplace}$:

$$I_{edge}(x) = I(x) * K_{Laplace}. \qquad (8)$$
The filtered image $I_{edge}$ of equation (8) is then normalized to real values in [0, 1]. The cost function can now be expressed as

$$\psi(p) = \Big\| A_0(x) - I(W(x|p)) \Big\|^2_{\operatorname{span}(A_i)^{\perp}} - \alpha \sum_{(x_j, y_j) \in L} I_{edge}(x_j, y_j), \qquad (9)$$

where the shape S under parameters p is computed by equation (1), L denotes the set of coordinates of the l feature points on the chin in S, and α is a constant coefficient. When the cost function (9) reaches its minimum, the texture of the region enclosed by the estimated feature points is closest to the average texture, and these feature points lie on edges of the face image.
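A sketch of the edge map of equation (8) with the normalization to [0, 1]. The patent does not reproduce the 9×9 kernel coefficients, so the high-pass kernel below is a stand-in assumption.

```python
import numpy as np
from scipy.ndimage import convolve

def edge_image(gray):
    """Normalized Laplacian edge map, equation (8)."""
    # Stand-in 9x9 high-pass kernel (sums to zero); the patent's exact
    # K_Laplace coefficients are not given in the text.
    k = -np.ones((9, 9))
    k[4, 4] = 80.0
    e = np.abs(convolve(gray.astype(float), k, mode='nearest'))
    return (e - e.min()) / (e.max() - e.min() + 1e-12)   # scale to [0, 1]
```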
Let $\rho = \{x_1, \ldots, x_l, y_1, \ldots, y_l\}$ be the sequence of chin feature points obtained by the coarse search on the input image I(x), with $(x_i, y_i)$ the coordinates of a feature point; this sequence is taken as a chromosome of length 2l. The initial chromosome population is obtained by sampling a point at every unit length along the normal direction at each feature point; the population size is determined by the maximum search range $[-P_{max}, P_{max}]$ along the normal, and the chromosomes are selected at random within the region between the upper and lower boundaries of the chin shown in Fig. 3.
From equation (9), the cost value $\psi = \{\psi_1, \psi_2, \ldots, \psi_\eta\}$ of each chromosome can be computed, where η is the total number of chromosomes in the population. Fitness for the next generation is computed by rank scaling: the cost values of the population are sorted in ascending order, and the chromosome ranked j-th is assigned a rank-scaled fitness $\phi_j$ (equation (10)), a decreasing function of the rank j.
Clearly, the smaller the cost value, the larger the fitness $\phi_j$. According to the fitness of each chromosome, η parent chromosomes are selected for crossover by roulette-wheel selection, so chromosomes with better fitness have a higher probability of being chosen and may be chosen more than once.
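A sketch of rank-scaled fitness and roulette-wheel selection. The patent's exact rank-to-fitness mapping (equation (10)) is not reproduced here; the 1/sqrt(rank) scaling below is one common choice and is an assumption.

```python
import numpy as np

def roulette_parents(costs, rng):
    """Rank-scaled fitness + roulette-wheel selection of parent indices."""
    costs = np.asarray(costs)
    order = np.argsort(costs)                    # ascending: best cost -> rank 1
    ranks = np.empty(len(costs), dtype=int)
    ranks[order] = np.arange(1, len(costs) + 1)
    fitness = 1.0 / np.sqrt(ranks)               # assumed scaling: smaller cost, larger phi
    probs = fitness / fitness.sum()
    return rng.choice(len(costs), size=len(costs), p=probs)   # repeats allowed

# rng = np.random.default_rng(); parents = roulette_parents(psi, rng)
```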
Given a selected pair of parent chromosomes $\rho_a = \{x_{a1}, \ldots, x_{al} \mid y_{a1}, \ldots, y_{al}\}$ and $\rho_b = \{x_{b1}, \ldots, x_{bl} \mid y_{b1}, \ldots, y_{bl}\}$, crossover uses single-point segmented crossover: the x and y coordinates are split into two chromosome segments, one crossover point is chosen at random in each segment, and the crossover operator then produces the next generation. For mutation, a chromosome selected with a certain (usually small) probability receives a random perturbation of [-2, 2] pixels, which helps avoid local minima.
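A sketch of the crossover and mutation operators as described, with one random crossover point in each of the x and y segments; the mutation magnitude follows the text, the rest is assumed.

```python
import numpy as np

def crossover_mutate(pa, pb, l, rng, p_mut=0.005):
    """Single-point crossover per segment of a chromosome (x1..xl | y1..yl),
    followed by a rare +/-2 pixel mutation."""
    child = pa.astype(float).copy()
    cx = rng.integers(1, l)                  # crossover point inside the x segment
    cy = rng.integers(1, l)                  # crossover point inside the y segment
    child[cx:l] = pb[cx:l]                   # swap tail of the x coordinates
    child[l + cy:] = pb[l + cy:]             # swap tail of the y coordinates
    if rng.random() < p_mut:                 # mutation: small random jitter
        child += rng.integers(-2, 3, size=child.shape)
    return child
```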
The inherited offspring chromosomes are constrained by the learned shape model, which smooths the contour curve and removes isolated outlier points. Let $S_k$ be the overall shape corresponding to the k-th chromosome. Since the shape basis $\xi = [S_1, S_2, \ldots, S_m]$ obtained by PCA is an orthogonal set of vectors, by equation (1) the coefficients $P_k$ of the shape $S_k$ are easily obtained:

$$P_k = \xi^T (S_k - S_0). \qquad (11)$$

The coefficient vector $P_k$ must be constrained: each coefficient $P_i^k$ must lie within $[-3\sqrt{\lambda_i}, 3\sqrt{\lambda_i}]$, where $\lambda_i$ is the eigenvalue of the i-th eigenvector of the shape PCA. Equation (1) then yields the new, constrained chromosome population, which enters the next iteration.
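A sketch of the shape-model constraint of equation (11) with coefficient clamping; the ±3√λᵢ box matches the usual PCA shape constraint stated above.

```python
import numpy as np

def constrain_shape(Sk, S0, basis, eigvals, limit=3.0):
    """Project a chromosome shape onto the PCA shape model and clamp it.
    basis: 2v x m matrix of orthonormal shape modes; eigvals: their variances."""
    p = basis.T @ (Sk - S0)                  # equation (11): coefficients by projection
    bound = limit * np.sqrt(eigvals)         # +/- 3 sigma box
    p = np.clip(p, -bound, bound)
    return S0 + basis @ p                    # rebuilt, model-consistent shape
```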
4. Edge- and Skin-Color-Based Calibration
The feature point positions produced by the optimization above can be calibrated further using the binarized edge image extracted by the Canny operator, or the skin color of the face. This applies especially to the feature points on the chin contour: they are usually separated from the background by a fairly distinct edge, or lie on the boundary of the skin-color region, so the above means yield precise feature point positions conveniently and quickly. Figure 4 shows edges extracted by the Canny operator. For skin-color detection, the color image is first converted from RGB to YCrCb space, and chroma values are used to delimit the skin region; by experiment we take pixels whose normalized chroma satisfies Cb ∈ [0.455, 0.500] and Cr ∈ [0.555, 0.630] as the skin-color region, as shown in Fig. 5.
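A sketch of the skin-color test with the stated CrCb thresholds; the RGB-to-YCrCb conversion matrix (ITU-R BT.601) is an assumption, since the patent does not specify it.

```python
import numpy as np

def skin_mask(rgb):
    """Binary skin map: normalized Cb in [0.455, 0.500], Cr in [0.555, 0.630]."""
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.601 chroma components, offset so they fall in [0, 1] (assumed conversion)
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 0.5
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 0.5
    return (cb >= 0.455) & (cb <= 0.500) & (cr >= 0.555) & (cr <= 0.630)
```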
Whether edges or skin color are used, however, some interfering points remain and affect the final result, so after each search step the points are constrained by the shape model to remove isolated outliers. The search steps are as follows:
(a) After pose recognition: for a frontal face, obtain a binarized edge image within the face detection region with the Canny operator; for a profile face, obtain a binarized image within the face detection region by skin-color detection, as shown in the system diagram of Fig. 1.
(b) Compute the normal direction at each point of the chin contour that needs calibration.
(c) Search within a certain range along the normal direction for the most salient edge point, or for the boundary point of the binarized image.
(d) Constrain the newly found feature point positions with the learned shape model via equation (11).
(e) If the feature points have converged, i.e., the change of the feature points between iterations is below a threshold, exit the loop; otherwise return to (b). A sketch of the normal-direction search follows.
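A minimal sketch of the normal-direction search of steps (b)-(c), before the shape-model constraint is applied; estimating normals from neighbouring contour points and preferring the closest boundary pixel are assumptions.

```python
import numpy as np

def search_along_normals(points, binary_map, search_range=10):
    """Move each chin point to the nearest 'on' pixel of the binary edge/skin
    map along its contour normal (endpoints are left untouched in this sketch)."""
    new_pts = points.astype(float).copy()
    for i in range(1, len(points) - 1):
        tangent = points[i + 1] - points[i - 1]          # neighbour-based tangent
        normal = np.array([-tangent[1], tangent[0]], float)
        normal /= np.linalg.norm(normal) + 1e-12
        for d in sorted(range(-search_range, search_range + 1), key=abs):
            x, y = np.round(points[i] + d * normal).astype(int)
            if 0 <= y < binary_map.shape[0] and 0 <= x < binary_map.shape[1] \
                    and binary_map[y, x]:
                new_pts[i] = points[i] + d * normal      # closest boundary hit wins
                break
    return new_pts
```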
Advantages of the Invention
The invention combines face detection, pose recognition, and feature point localization, so the algorithm can quickly and accurately locate a large number of salient feature points in two-dimensional face photographs taken from different angles. The face detection module narrows the candidate region of the feature points, and the introduction of pose recognition lets faces at different angles be constrained and optimized by models of the corresponding angle. For optimization, the invention combines real-time AAM with a genetic algorithm, overcoming the problem that the error function of real-time AAM measures only the difference from the average texture, which makes the extracted feature points insufficiently precise and prone to local minima. Finally, Canny edge extraction and skin-color segmentation are used to correct precisely those feature points that lie on distinct image edges or on the boundary of the skin-color region. Experiments show the method is very effective for precise localization of facial feature points, with acceptable time cost.
Brief Description of the Drawings
Fig. 1 Framework of the facial feature point localization system.
Fig. 2 Shape and texture model: (a) 60 manually labeled feature points, (b) mesh of the mean shape, (c) average texture under the mean shape.
Fig. 3 Search region for the chin contour.
Fig. 4(a) Original frontal image.
Fig. 4(b) Binarized edge image.
Fig. 5(a) Original profile image.
Fig. 5(b) Binarized skin-color region.
Fig. 6 Sample search results of the real-time AAM method and the proposed method.
Fig. 6(a) Initial model position.
Fig. 6(b) Search results of the real-time AAM method.
Fig. 6(c) Search results of the proposed method.
Fig. 7 Search results of the proposed method against complex backgrounds.
Detailed Description of the Embodiments
I. Building the shape and texture models:
1. Shape model:
Arrange the coordinates of the v feature points of each image into a shape vector $S = (x_1, \ldots, x_v, y_1, \ldots, y_v)'$, $S^t \in \mathbb{R}^{2v}$. Then normalize the shape vectors of the N images as follows:
(a) Subtract the mean from every shape vector, i.e., move to the centroid coordinate frame.
(b) Choose one sample as the initial mean shape and normalize its scale so that |S| = 1.
(c) Denote the initial estimate of the mean shape by $\bar{S}_0$ and define it as the reference frame.
(d) Align all training shapes to the current mean shape by affine transformation.
(e) Recompute the mean shape from the aligned samples.
(f) Align the current mean shape to $\bar{S}_0$ and rescale so that |S| = 1.
(g) If the change of the mean shape is still larger than the given threshold, return to (d).
The statistical shape model is built with the PCA method of equation (1):

$$S = S_0 + \sum_{i=1}^{m} p_i S_i, \qquad (1)$$

where $S_0$ is the mean shape vector and $\xi = [S_1, S_2, \ldots, S_m]$ is the PCA basis of the shape.
2. Texture model:
(a) Warp the texture inside the face region of every image to the face region $U_0$ enclosed by the mean shape $S_0$.
(b) Unroll the texture of each person within the region $U_0$ into vector form $A^t$.
(c) Build the statistical texture model with equation (2):

$$A(x) = A_0(x) + \sum_{i=1}^{n} q_i A_i(x). \qquad (2)$$

$A_0$ is the average texture image under the mean shape, as shown in Fig. 2; $A_i$ are the PCA basis vectors of the texture. The coefficients $p_i^t$ and $q_i^t$ in equations (1) and (2) are the shape and texture coefficients of the t-th face image, written as vectors $p^t = (p_1^t, p_2^t, \ldots, p_m^t)^T \in \mathbb{R}^m$ and $q^t = (q_1^t, q_2^t, \ldots, q_n^t)^T \in \mathbb{R}^n$.
II. Face detection and pose recognition:
1. Use the mature Adaboost method to mark the image subregion containing the face.
2. Unroll the texture of the detected face region into a column vector and project it onto the LDA pose recognition basis to obtain the reduced-dimension features.
3. Compare the reduced features with the trained pose features and classify them with the nearest-neighbor rule to obtain the pose of the face image.
III. Model optimization algorithm:
A two-stage optimization is used.
1. Real-time AAM optimization:
(a) For the input image I and the initial shape parameters p, compute the texture I(W(x|p)) warped to the mean shape.
(b) Compute the difference image I(W(x|p)) - A₀(x) and multiply it by the precomputed steepest-descent images.
(c) Compute the increment Δp of the shape parameters, $\Delta p = H^{-1} \sum_x SD(x)^T \big[ I(W(x|p)) - A_0(x) \big]$.
(d) Using $W_r(x|p)$ and Δp from the r-th iteration, compute $W_{r+1}(x|p)$ for the next iteration with the Lucas-Kanade algorithm of reference [6]; set r = r + 1.
(e) Repeat from (a) until the convergence condition is met or the maximum number of iterations is reached.
2. Genetic algorithm:
(a) Let $\rho = \{x_1, \ldots, x_l, y_1, \ldots, y_l\}$ be the chin feature point sequence obtained by the coarse search on the input image I(x), with $(x_i, y_i)$ the coordinates of a feature point; take this sequence as a chromosome of length 2l.
(b) Obtain the initial chromosome population by sampling a point at every unit length along the normal direction at each feature point; the population size is determined by the maximum search range $[-P_{max}, P_{max}]$ along the normal, and chromosomes are selected at random within the region between the upper and lower boundaries of the chin shown in Fig. 3.
(c) From equation (9), compute the cost value $\psi = \{\psi_1, \psi_2, \ldots, \psi_\eta\}$ of each chromosome, where η is the total number of chromosomes in the population.
Here the shape S under parameters p is computed by equation (1), L is the set of coordinates of the l chin feature points in S, and α is a constant coefficient. The edge image $I_{edge}$ is extracted with the 9×9 Laplacian high-pass filter kernel $K_{Laplace}$,

$$I_{edge}(x) = I(x) * K_{Laplace}, \qquad (8)$$

and the filtered image is normalized to real values in [0, 1].
(d) Compute fitness for the next generation by rank scaling: sort the cost values of the population in ascending order and assign the chromosome ranked j-th a rank-scaled fitness $\phi_j$ that decreases with the rank (equation (10)).
(e) According to the fitness of each chromosome, select η parent chromosomes for crossover by roulette-wheel selection; chromosomes with better fitness have a higher probability of being chosen and may be chosen more than once.
(f) For a selected pair of parent chromosomes $\rho_a = \{x_{a1}, \ldots, x_{al} \mid y_{a1}, \ldots, y_{al}\}$ and $\rho_b = \{x_{b1}, \ldots, x_{bl} \mid y_{b1}, \ldots, y_{bl}\}$, apply single-point segmented crossover: split the x and y coordinates into two chromosome segments, choose one crossover point at random in each segment, and apply the crossover operator to produce the next generation.
(g) Mutation operator: a chromosome selected with a certain (usually small) probability receives a random perturbation of [-2, 2] pixels.
(h) If the cost values of the new chromosome population remain stable across generations, or the maximum number of generations is reached, exit the loop; otherwise return to (c) for the next generation.
IV. Calibration:
1. First judge by the pose recognition result: for a frontal face, obtain a binarized edge image within the face detection region with the Canny operator; for a profile face, obtain a binarized image within the face detection region by skin-color detection, as shown in the system diagram of Fig. 1.
2. Compute the normal direction at each point of the chin contour that needs calibration.
3. Search within a certain range along the normal direction for the most salient edge point or the boundary point of the binarized image.
4. Constrain the newly found feature point positions with the learned shape model via equation (11):

$$P_k = \xi^T (S_k - S_0). \qquad (11)$$

The coefficient vector $P_k$ is constrained so that each coefficient $P_i^k$ lies within $[-3\sqrt{\lambda_i}, 3\sqrt{\lambda_i}]$, where $\lambda_i$ is the eigenvalue of the i-th eigenvector of the shape PCA.
5. If the feature points have converged, i.e., the change of the feature points between iterations is below a threshold, exit the loop; otherwise return to step 2.
The invention was tested on actual photographs. The training set contains 120 frontal faces and 120 faces turned 30 degrees to the right, used to build the frontal and profile shape and texture models respectively; the test set contains 100 frontal and profile images. All images were manually labeled with 60 feature points, defined as in Fig. 2(a). For a test image, the average distance between the converged feature points and the manually labeled ones is used as the measure of search accuracy:

$$E = \frac{1}{v} \sum_{i=1}^{v} \big\| C_i - \hat{C}_i \big\|,$$

where $C_i$ and $\hat{C}_i$ are the i-th converged and manually labeled feature points respectively, and v = 60.
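The error measure can be computed directly; a one-line sketch:

```python
import numpy as np

def mean_point_error(C, C_hat):
    """Mean Euclidean distance between converged (C) and hand-labelled (C_hat)
    landmark arrays of shape (v, 2), v = 60."""
    return np.linalg.norm(C - C_hat, axis=1).mean()
```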
1. Training stage:
A) LDA pose recognition basis:
Pose is divided into three intervals, -30°, 0°, and 30°, and for each pose class 200 face subimages detected by the Adaboost method are used as training samples for the LDA pose recognition basis. Adaboost outputs 128×128 color face subimages; to further reduce the influence of the background on pose recognition, before training each detected face subimage is converted to grayscale and cropped to a 100×100 grayscale image. This yields two 10000-dimensional LDA pose recognition basis vectors. All grayscale images are then projected onto this basis, and the reduced-dimension features are averaged per class to obtain the feature of each pose.
B) Building the shape and texture models:
Shape and texture models are built from the 120 frontal and 120 profile images of the training set and their manually labeled feature points.
2. Testing stage:
(a) For an input face image, detect the face with the Adaboost method and mark the 128×128 face subregion.
(b) Project it onto the LDA pose recognition basis, compare with the stored pose features, and classify with the nearest-neighbor rule to obtain the face pose.
(c) Use the model matching the pose; for -30°, mirror the image left-right and process it with the right-side 30° model.
(d) Perform the first optimization stage with the real-time AAM algorithm.
(e) Optimize the cost function with the added edge term using the genetic algorithm; the initial population has 31 chromosomes, the crossover probability is 0.8, the mutation probability is 0.005, and 10 generations are run.
(f) For a frontal photograph, extract edges with the Canny operator and calibrate the chin feature points; for a profile photograph, calibrate the chin feature points with the skin-color detection method.
Table 1 lists the average search results on the training and test sets; relative to the real-time AAM method, the search accuracy of the feature points improves by about 1 pixel on average. In computation time, the method takes 2-4 s to search one image on a Pentium 4 3.6 GHz machine, which is within a tolerable range. Figure 6 shows some test examples: the first column shows the initial placement, the middle column the search results of the real-time AAM method, where in most cases the chin contour fails to converge, and the last column the results of the proposed method, which all converge to good positions. Figure 7 shows comparison results against ordinary backgrounds.
Table 1. Errors of the matching results of different methods (in pixels).
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CNB2006100243070A CN100375108C (en) | 2006-03-02 | 2006-03-02 | A method for automatic location of facial feature points |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CNB2006100243070A CN100375108C (en) | 2006-03-02 | 2006-03-02 | A method for automatic location of facial feature points |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN1811793A true CN1811793A (en) | 2006-08-02 |
| CN100375108C CN100375108C (en) | 2008-03-12 |
Family
ID=36844705
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CNB2006100243070A Expired - Fee Related CN100375108C (en) | 2006-03-02 | 2006-03-02 | A method for automatic location of facial feature points |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN100375108C (en) |
Cited By (26)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2008064395A1 (en) * | 2006-08-18 | 2008-06-05 | National Ict Australia Limited | Facial feature processing |
| CN100414562C (en) * | 2006-10-10 | 2008-08-27 | 南京搜拍信息技术有限公司 | Face Feature Point Location Method in Face Recognition System |
| WO2008151577A1 (en) * | 2007-06-14 | 2008-12-18 | Tsinghua University | Tracking method and device adopting a series of observation models with different lifespans |
| CN101770643A (en) * | 2008-12-26 | 2010-07-07 | 富士胶片株式会社 | Image processing apparatus, image processing method, and image processing program |
| CN102013011A (en) * | 2010-12-16 | 2011-04-13 | 重庆大学 | Front-face-compensation-operator-based multi-pose human face recognition method |
| CN102402691A (en) * | 2010-09-08 | 2012-04-04 | 中国科学院自动化研究所 | Method for tracking human face posture and motion |
| CN102473311A (en) * | 2009-07-23 | 2012-05-23 | 日本电气株式会社 | Marker generateon device, marker generateon detection system, marker generation detection device, marker, marker generateon method and program therefor |
| CN102473313A (en) * | 2009-07-23 | 2012-05-23 | 日本电气株式会社 | Marker generateon device, marker generation detection system, marker generateon detection device, marker, marker generateon method and program |
| CN101561875B (en) * | 2008-07-17 | 2012-05-30 | 清华大学 | A method for two-dimensional face image localization |
| CN102479322A (en) * | 2010-11-30 | 2012-05-30 | 财团法人资讯工业策进会 | System, device and method for analyzing facial defects by using facial image with angle |
| CN103208007A (en) * | 2013-03-19 | 2013-07-17 | 湖北微驾技术有限公司 | Face recognition method based on support vector machine and genetic algorithm |
| CN103577815A (en) * | 2013-11-29 | 2014-02-12 | 中国科学院计算技术研究所 | Face alignment method and system |
| CN104732247A (en) * | 2015-03-09 | 2015-06-24 | 北京工业大学 | Human face feature positioning method |
| CN104881657A (en) * | 2015-06-08 | 2015-09-02 | 微梦创科网络科技(中国)有限公司 | Profile face identification method and system, and profile face construction method and system |
| CN105205482A (en) * | 2015-11-03 | 2015-12-30 | 北京英梅吉科技有限公司 | Quick facial feature recognition and posture estimation method |
| CN105938551A (en) * | 2016-06-28 | 2016-09-14 | 深圳市唯特视科技有限公司 | Video data-based face specific region extraction method |
| CN107145741A (en) * | 2017-05-05 | 2017-09-08 | 必应(上海)医疗科技有限公司 | Ear based on graphical analysis examines collecting method and device |
| CN107637072A (en) * | 2015-03-18 | 2018-01-26 | 阿凡达合并第二附属有限责任公司 | Background modification in video conferencing |
| CN108108694A (en) * | 2017-12-21 | 2018-06-01 | 北京搜狐新媒体信息技术有限公司 | A kind of man face characteristic point positioning method and device |
| CN108717730A (en) * | 2018-04-10 | 2018-10-30 | 福建天泉教育科技有限公司 | A kind of method and terminal that 3D personage rebuilds |
| CN108898601A (en) * | 2018-05-31 | 2018-11-27 | 清华大学 | Femoral head image segmentation device and dividing method based on random forest |
| CN108985212A (en) * | 2018-07-06 | 2018-12-11 | 深圳市科脉技术股份有限公司 | Face identification method and device |
| CN109002799A (en) * | 2018-07-19 | 2018-12-14 | 苏州市职业大学 | Face identification method |
| CN109839111A * | 2019-01-10 | 2019-06-04 | 王昕 | An indoor multi-robot formation system based on visual positioning |
| US11514947B1 (en) | 2014-02-05 | 2022-11-29 | Snap Inc. | Method for real-time video processing involving changing features of an object in the video |
| CN116115182A (en) * | 2021-11-15 | 2023-05-16 | 中国科学院上海营养与健康研究所 | Facial skin phenotype characteristic information acquisition method and system |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102254180B (en) * | 2011-06-28 | 2014-07-09 | 北京交通大学 | Geometrical feature-based human face aesthetics analyzing method |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000357221A (en) * | 1999-06-15 | 2000-12-26 | Minolta Co Ltd | Method and device for image processing and recording medium with image processing program recorded |
| CN100356387C (en) * | 2004-05-17 | 2007-12-19 | 香港中文大学 | Face recognition method based on random sampling |
- 2006-03-02 CN CNB2006100243070A patent/CN100375108C/en not_active Expired - Fee Related
Cited By (45)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2008064395A1 (en) * | 2006-08-18 | 2008-06-05 | National Ict Australia Limited | Facial feature processing |
| CN100414562C (en) * | 2006-10-10 | 2008-08-27 | 南京搜拍信息技术有限公司 | Face Feature Point Location Method in Face Recognition System |
| WO2008151577A1 (en) * | 2007-06-14 | 2008-12-18 | Tsinghua University | Tracking method and device adopting a series of observation models with different lifespans |
| CN101325691B (en) * | 2007-06-14 | 2010-08-18 | 清华大学 | Method and apparatus for tracing a plurality of observation model with fusion of differ durations |
| US8548195B2 (en) | 2007-06-14 | 2013-10-01 | Omron Corporation | Tracking method and device adopting a series of observation models with different life spans |
| CN101561875B (en) * | 2008-07-17 | 2012-05-30 | 清华大学 | A method for two-dimensional face image localization |
| CN101770643A (en) * | 2008-12-26 | 2010-07-07 | 富士胶片株式会社 | Image processing apparatus, image processing method, and image processing program |
| CN102473313A (en) * | 2009-07-23 | 2012-05-23 | 日本电气株式会社 | Marker generateon device, marker generation detection system, marker generateon detection device, marker, marker generateon method and program |
| CN102473311A (en) * | 2009-07-23 | 2012-05-23 | 日本电气株式会社 | Marker generateon device, marker generateon detection system, marker generation detection device, marker, marker generateon method and program therefor |
| US8693781B2 (en) | 2009-07-23 | 2014-04-08 | Nec Corporation | Marker generation device, marker generation detection system, marker generation detection device, marker, marker generation method, and program therefor |
| US8705864B2 (en) | 2009-07-23 | 2014-04-22 | Nec Corporation | Marker generation device, marker generation detection system, marker generation detection device, marker, marker generation method, and program |
| CN102473313B (en) * | 2009-07-23 | 2014-08-13 | 日本电气株式会社 | Marker generateon device, marker generation detection system, marker generateon detection device, and marker generateon method |
| CN102473311B (en) * | 2009-07-23 | 2014-10-08 | 日本电气株式会社 | Logo generation device, logo generation detection system, logo generation detection device and logo generation method |
| CN102402691A (en) * | 2010-09-08 | 2012-04-04 | 中国科学院自动化研究所 | Method for tracking human face posture and motion |
| CN102479322A (en) * | 2010-11-30 | 2012-05-30 | 财团法人资讯工业策进会 | System, device and method for analyzing facial defects by using facial image with angle |
| CN102013011B (en) * | 2010-12-16 | 2013-09-04 | 重庆大学 | Front-face-compensation-operator-based multi-pose human face recognition method |
| CN102013011A (en) * | 2010-12-16 | 2011-04-13 | 重庆大学 | Front-face-compensation-operator-based multi-pose human face recognition method |
| CN103208007A (en) * | 2013-03-19 | 2013-07-17 | 湖北微驾技术有限公司 | Face recognition method based on support vector machine and genetic algorithm |
| CN103208007B (en) * | 2013-03-19 | 2017-02-08 | 湖北微驾技术有限公司 | Face recognition method based on support vector machine and genetic algorithm |
| CN103577815A (en) * | 2013-11-29 | 2014-02-12 | 中国科学院计算技术研究所 | Face alignment method and system |
| CN103577815B (en) * | 2013-11-29 | 2017-06-16 | 中国科学院计算技术研究所 | A kind of face alignment method and system |
| US11651797B2 (en) | 2014-02-05 | 2023-05-16 | Snap Inc. | Real time video processing for changing proportions of an object in the video |
| US11514947B1 (en) | 2014-02-05 | 2022-11-29 | Snap Inc. | Method for real-time video processing involving changing features of an object in the video |
| CN104732247A (en) * | 2015-03-09 | 2015-06-24 | 北京工业大学 | Human face feature positioning method |
| CN104732247B (en) * | 2015-03-09 | 2018-04-27 | 北京工业大学 | A kind of human face characteristic positioning method |
| CN107637072A (en) * | 2015-03-18 | 2018-01-26 | 阿凡达合并第二附属有限责任公司 | Background modification in video conferencing |
| US11290682B1 (en) | 2015-03-18 | 2022-03-29 | Snap Inc. | Background modification in video conferencing |
| CN104881657A (en) * | 2015-06-08 | 2015-09-02 | 微梦创科网络科技(中国)有限公司 | Profile face identification method and system, and profile face construction method and system |
| CN104881657B (en) * | 2015-06-08 | 2019-01-25 | 微梦创科网络科技(中国)有限公司 | Profile recognition method, profile construction method and system |
| CN105205482A (en) * | 2015-11-03 | 2015-12-30 | 北京英梅吉科技有限公司 | Quick facial feature recognition and posture estimation method |
| CN105205482B (en) * | 2015-11-03 | 2018-10-26 | 北京英梅吉科技有限公司 | Fast face feature recognition and posture evaluation method |
| CN105938551A (en) * | 2016-06-28 | 2016-09-14 | 深圳市唯特视科技有限公司 | Video data-based face specific region extraction method |
| CN107145741A (en) * | 2017-05-05 | 2017-09-08 | 必应(上海)医疗科技有限公司 | Ear based on graphical analysis examines collecting method and device |
| CN107145741B (en) * | 2017-05-05 | 2020-06-05 | 必应(上海)医疗科技有限公司 | Ear diagnosis data acquisition method and device based on image analysis |
| CN108108694B (en) * | 2017-12-21 | 2020-09-29 | 北京搜狐新媒体信息技术有限公司 | A method and device for locating facial feature points |
| CN108108694A (en) * | 2017-12-21 | 2018-06-01 | 北京搜狐新媒体信息技术有限公司 | A kind of man face characteristic point positioning method and device |
| CN108717730A (en) * | 2018-04-10 | 2018-10-30 | 福建天泉教育科技有限公司 | A kind of method and terminal that 3D personage rebuilds |
| CN108898601B (en) * | 2018-05-31 | 2020-09-29 | 清华大学 | Femoral head image segmentation device and segmentation method based on random forest |
| CN108898601A (en) * | 2018-05-31 | 2018-11-27 | 清华大学 | Femoral head image segmentation device and dividing method based on random forest |
| CN108985212A (en) * | 2018-07-06 | 2018-12-11 | 深圳市科脉技术股份有限公司 | Face identification method and device |
| CN108985212B (en) * | 2018-07-06 | 2021-06-04 | 深圳市科脉技术股份有限公司 | Face recognition method and device |
| CN109002799B (en) * | 2018-07-19 | 2021-08-24 | 苏州市职业大学 | face recognition method |
| CN109002799A (en) * | 2018-07-19 | 2018-12-14 | 苏州市职业大学 | Face identification method |
| CN109839111A (en) * | 2019-01-10 | 2019-06-04 | 王昕� | An indoor multi-robot formation system based on visual positioning |
| CN116115182A (en) * | 2021-11-15 | 2023-05-16 | 中国科学院上海营养与健康研究所 | Facial skin phenotype characteristic information acquisition method and system |
Also Published As
| Publication number | Publication date |
|---|---|
| CN100375108C (en) | 2008-03-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN1811793A (en) | Automatic positioning method for characteristic point of human faces | |
| CN100336070C (en) | Method of robust human face detection in complicated background image | |
| CN103839223B (en) | Image processing method and device | |
| Tsalakanidou et al. | Real-time 2D+ 3D facial action and expression recognition | |
| CN102110285B (en) | Data correction device, its control method, and image discrimination device | |
| JP4234381B2 (en) | Method and computer program product for locating facial features | |
| CN107316333B (en) | A method for automatically generating Japanese cartoon portraits | |
| CN1924897A (en) | Image processing apparatus and method and program | |
| CN105894047B (en) | A kind of face classification system based on three-dimensional data | |
| US7720284B2 (en) | Method for outlining and aligning a face in face processing of an image | |
| CN1924894A (en) | Multiple attitude human face detection and track system and method | |
| CN1552041A (en) | Face metadata generation and face similarity calculation | |
| Lee et al. | Tensor-based AAM with continuous variation estimation: Application to variation-robust face recognition | |
| CN101038623A (en) | Feature point detecting device, feature point detecting method, and feature point detecting program | |
| CN101055620A (en) | Shape comparison device and method | |
| CN1871622A (en) | Image comparison system and image comparison method | |
| CN101057257A (en) | Face feature point detector and feature point detector | |
| CN102654903A (en) | Face comparison method | |
| WO2020215697A1 (en) | Tongue image extraction method and device, and a computer readable storage medium | |
| CN1975759A (en) | Human face identifying method based on structural principal element analysis | |
| Song et al. | Facial expression recognition based on mixture of basic expressions and intensities | |
| Choi et al. | Face recognition based on 2D images under illumination and pose variations | |
| CN103927554A (en) | Image sparse representation facial expression feature extraction system and method based on topological structure | |
| CN1170251C (en) | Beer Bottle Convex Character Extraction and Recognition Hardware System and Processing Method | |
| CN107895154B (en) | Method and system for forming facial expression intensity calculation model |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant | ||
| C17 | Cessation of patent right | ||
| CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20080312 Termination date: 20110302 |