
CN105719326A - Realistic face generating method based on single photo - Google Patents


Info

Publication number: CN105719326A
Application number: CN201610035432.5A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: texture, face, model, photo, alpha
Inventors: 谈国新, 孙传明, 张文元
Current and Original Assignee: Central China Normal University
Legal status: Pending
Application filed by Central China Normal University
Priority to CN201610035432.5A
Publication of CN105719326A


Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
    • G06T 11/001 — Texturing; Colouring; Generation of texture or colour
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 — Feature extraction; Face representation
    • G06V 40/172 — Classification, e.g. identification
    • G06V 40/173 — Face re-identification, e.g. recognising unknown faces across different face tracks

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a realistic face generation method based on a single photo. First, a standardized face model library is built, facial feature points are selected interactively on the input photo, and the best-matching model is chosen according to the face-shape features. Second, texture mapping from the photo to the 3D model is performed by triangle deformation and bilinear interpolation, and an Alpha map is introduced to blend the face overlay texture into the model's neutral texture. Finally, a mesh-adjustment method tunes the model hierarchically, from the overall shape down to details, to produce a realistic 3D face. The method is user-friendly: it requires only a single frontal photo and a small number of feature points, so that users can complete realistic face creation on their own.

Description

A realistic face generation method based on a single photo

Technical Field

The invention relates to a method for rapidly generating three-dimensional human faces, and in particular to a method for generating a realistic human face from a single photo.

Background Art

The human face is central to emotional expression and identity recognition. In daily life we recognize people by their faces and read emotions from them, so faces play a pivotal role in everyday interaction. With the development of computer graphics, face reconstruction from specific images has come into wide use in many fields. Constructing a three-dimensional virtual character with a realistic face helps raise user interest and engagement. In games and animation, film and television advertising, 3D campuses, digital tourist attractions, virtual fitting rooms and similar applications, face-model generation is one means of improving the user's sensory experience, so techniques that achieve good realism at low cost through a fast and simple generation process are of great significance.

Existing rapid face generation methods fall mainly into two groups. One acquires 3D face data with devices such as 3D scanners; the resulting models are highly faithful, but the approach depends on large amounts of point-cloud data, demands high technical skill, and is expensive. The other reconstructs the face model from multiple frontal and profile images using computer-vision theory and algorithms; this requires many manually calibrated feature points, and the input conditions are comparatively complex. Traditional face-model generation therefore suffers from expensive equipment, complicated procedures, and insufficient real-time performance, and cannot fully meet the demands of real-time interaction in virtual scenes.

Summary of the Invention

The purpose of the invention is to overcome the defects of the above technologies by providing a realistic face generation method based on a single photo. The method is user-friendly and needs only one frontal photo and a small number of feature points, so that users can complete realistic face creation on their own. It brings the user into the face-creation process: a small set of feature points is selected interactively, a suitable face model is matched, the overlay texture is mapped onto the neutral texture with skin-color fusion along the facial boundary, and finally the model is adjusted hierarchically, from overall shape down to details, to produce a personalized face model.

The specific technical solution is as follows:

A realistic face generation method based on a single photo comprises the following steps:

Step 1. Feature point calibration: acquire a frontal two-dimensional face photo and mark 13 feature points on 7 facial parts (the eyes, nose, mouth, etc.) to determine the approximate positions of the facial features.

Step 2. Model matching: compare the feature-point distances between the photo and each model, and select from the model library the model with the greatest similarity.

Step 3. Texture mapping: simulate the true facial color from the face texture map by mapping the overlay texture onto the neutral texture. The neutral texture is the model's default texture and the overlay texture is the face photo; by mapping the 13 feature points and their surrounding regions between the two texture layers, the texture features of the face photo are added to the matched model.

Step 4. Skin-color fusion: use an Alpha map to form a smooth transition from the overlay texture to the neutral texture along the blending boundary of the face, and use the same Alpha map to fine-tune details such as the eyebrows and the bridge of the nose.

Step 5. Model adjustment: adjust properties such as the size and position of the mouth, eyes, face shape and other facial features of the matched model by vector interpolation, generating a personalized model that matches the subject's facial features.

Preferably, in Step 5 a model adjuster capable of tuning subtle variations of the facial features is constructed to generate a personalized, realistic face.

Treat the vertices Vi = (x, y, z)T of the face mesh as a vector group H0 = (V1, V2, V3, ..., Vn), where n is the number of vertices. After a transformation of, say, the eyes (e.g. a change of size), some vertices move, forming a vector group H1 = (V1', V2', V3', ..., Vn'). Assigning this transformation a weight W, every intermediate state Hx between H0 and H1 can be computed by linear interpolation:

Hx = H0 + W * (H1 - H0)    (1)

A single transformation can hardly satisfy the needs of a face model; several transformations must be selected according to the characteristics of the face, each assigned its own weight W. The blending formula for m transformations is:

Hx = H0 + Σ(i=1..m) Wi * (Hi - H0)    (2)

The number m is whatever the application requires. In the experiments, 23 mesh adjusters with different transformations were defined: eye height, eye distance, eye size, eyebrow height, eyebrow depth, nose size, nose depth, nose height, nose-bridge depth, nose-bridge width, nostril width, mouth height, mouth thickness, mouth width, lip coverage, cheekbone depth, cheekbone width, cheekbone height, cheek width, chin height, chin depth, chin width, and jaw width. Each adjuster deforms the mesh by controlling its weight.

Preferably, model matching in Step 2 proceeds as follows. First, normalize the photo and compute the distances and proportional relations between the main feature points. Second, use those distances and proportions to determine the rough face-shape category. Finally, within the model library for that face shape, compare the Euclidean distances between corresponding feature points of each model and the photo, and use formula (3) to find the closest three-dimensional model:

E = Σ(i=1..n) (Di - Di')^2 * λi    (3)

where Di is the distance between two feature points in the photo, Di' is the distance between the corresponding feature points in the 3D model, λi is the weight of each distance, determined by its degree of contribution and by empirical formula, and n is the number of distances compared.

Preferably, in Step 3 the feature-point positions marked on the neutral texture and on the overlay texture form two two-dimensional point sets V1 and V2. Because of differences in manual marking, corresponding points in V1 and V2 will not lie at exactly the same positions, so a mapping is needed to blend them naturally. Take P1 ∈ V1 and P2 ∈ V2, extend them to three-dimensional vectors P1 = (X1, Y1, 1) and P2 = (X2, Y2, 1), and assume a matrix M; then:

M = P1 * P2^(-1)    (4)

After the transformation matrix of each point is obtained, the transformation of each feature point is applied to the pixels around it. The procedure uses the common triangle-deformation approach to apply the overlay texture to the model texture: the feature points are connected to form a mesh covering the face, which at this stage contains both triangles and quadrilaterals; by splitting, all polygons are reduced to triangles, so that a pixel inside a triangle is influenced by all three of its vertices.

Let the three vertices of a triangle be P1, P2, P3 and use them as the three columns of a 3×3 matrix (P1, P2, P3). If the transformed triangle is (P1', P2', P3'), then:

M * (P1, P2, P3) = (P1', P2', P3')    (5)

M = (P1', P2', P3') * (P1, P2, P3)^(-1)    (6)

For a point Pi inside the triangle, its mapped position is obtained as M * Pi, which determines the mapped coordinates of every point within the triangle. Before mapping the coordinate values are all integers, but the mapped coordinates may be non-integral and therefore no longer correspond one-to-one with pixels of the image.

Preferably, during the actual fusion in Step 4 the model's neutral texture, the Alpha map, and the overlay texture are processed to the same size. The pixels of the Alpha map act as interpolation weights in the blend, yielding a naturally fused texture. The blending formula for the three images is:

C(x, y) = Cbase(x, y) * (255 - CAlpha(x, y)) + Coverlay(x, y) * CAlpha(x, y), where 0 ≤ CAlpha(x, y) ≤ 255    (7)

where Cbase(x, y), CAlpha(x, y), and Coverlay(x, y) are the color values of the neutral texture, the Alpha map, and the overlay texture at coordinate (x, y).

Compared with the prior art, the beneficial effects of the invention are as follows:

The method is user-friendly and requires only one frontal photo and a small number of feature points, so that users can complete realistic face creation on their own. It brings the user into the face-creation process: a small set of feature points is selected interactively, a suitable face model is matched, the overlay texture is mapped onto the neutral texture with skin-color fusion along the facial boundary, and finally the model is adjusted hierarchically, from overall shape down to details, to produce a personalized face model.

Brief Description of the Drawings

Figure 1 shows the realistic face construction pipeline;

Figure 2 shows the feature point locations;

Figure 3 shows the feature point connections;

Figure 4 shows the Alpha maps.

Detailed Description

To make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to the accompanying drawings and specific examples.

First, a standardized face model library is built, and the facial feature points of the input photo are selected interactively; based on the face-shape features, the best model is matched. Second, texture mapping from the photo to the 3D model is achieved through triangle deformation and bilinear interpolation, and an Alpha map is introduced to blend the face overlay texture into the model's neutral texture. Finally, a mesh-adjustment method tunes the model hierarchically, from the overall shape down to details, to generate a realistic 3D face.

1. Steps of the Interactive Generation Method

The interactive method for generating realistic 3D faces first builds a face model library tailored to the characteristics of Asian faces. It then matches the two-dimensional frontal photo to a face model and, using texture mapping and skin-color fusion algorithms together with fine-grained tuning by the model adjuster, generates a realistic 3D model.

1.1 Model Library Construction

Several standardized 3D face model libraries already exist, such as UND, BU-3DFE, and BJUT, and they have been widely used in face generation, face recognition and related fields. Different libraries adopt different standards for race, age, gender, illumination and so on; the present method targets Asian faces and builds a dedicated 3D face library.

There are several common standards for classifying Asian frontal face shapes, such as the morphological method, the character-shape method, and the Asian-face method. So that a model's face shape can be matched well against a photo, the library divides face shapes into five categories according to how distinctive and how common their features are: oval, inverted triangle, oblong, square, and round. The neutral face model is based on CANDIDE-3 [8], a neutral face wireframe model developed by the Image Coding Group (ICG) at Linköping University in Sweden; the model is simple and freely available. However, CANDIDE-3 covers only the face, so this method uses it as a reference to build a neutral head model suited to Chinese facial features. Once the library is built, the models are preprocessed: holes are repaired, a neutral skin-tone texture is bound, sizes are normalized, and feature points are calibrated.

1.2 Generation Steps

A 3D face derives its realism mainly from the model's facial features and its texture; the user takes part in specifying the texture and adjusting the model. The overall workflow, shown in Figure 1, subdivides into the following five steps:

Step 1. Feature point calibration: acquire a frontal two-dimensional face photo and mark 13 feature points on 7 facial parts (the eyes, nose, mouth, etc.) to determine the approximate positions of the facial features.

Step 2. Model matching: compare the feature-point distances between the photo and each model, and select from the model library the model with the greatest similarity.

Step 3. Texture mapping: simulate the true facial color from the face texture map by mapping the overlay texture onto the neutral texture. The neutral texture is the model's default texture and the overlay texture is the face photo; by mapping the 13 feature points and their surrounding regions between the two texture layers, the texture features of the target face are added to the matched model.

Step 4. Skin-color fusion: use an Alpha map to form a smooth transition from the overlay texture to the neutral texture along the blending boundary of the face, and use the same Alpha map to fine-tune details such as the eyebrows and the bridge of the nose.

Step 5. Model adjustment: adjust properties such as the size and position of the mouth, eyes, face shape and other facial features of the matched model by vector interpolation, generating a personalized model that matches the subject's facial features.
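The five steps above can be sketched as a single driver routine. Everything below is an illustrative assumption: the function names, the toy model library, and the stand-in bodies for Steps 3-5 are not from the patent, which describes interactive and graphical operations; only the data flow between the steps is shown.

```python
# Illustrative sketch of the five-step pipeline. Names and data layouts
# are assumptions; Steps 3-5 are reduced to placeholder markers because
# the patent describes them as graphical/interactive operations.

def match_model(photo_points, library):
    """Step 2: pick the library model whose feature points lie closest
    to the photo's (sum of squared point distances as dissimilarity)."""
    def dist(model):
        return sum((px - mx) ** 2 + (py - my) ** 2
                   for (px, py), (mx, my) in zip(photo_points, model["points"]))
    return min(library, key=dist)

def generate_face(photo_points, library):
    """Steps 2-5; Step 1 (interactive calibration) is assumed to have
    produced photo_points already."""
    model = dict(match_model(photo_points, library))  # Step 2: matching
    model["texture"] = "overlay_mapped"               # Step 3: texture mapping
    model["fused"] = True                             # Step 4: alpha skin fusion
    model["adjusted"] = True                          # Step 5: mesh adjustment
    return model
```

With a two-model toy library, a photo whose points sit near the first model's points selects that model and carries the Step 3-5 markers through.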

1.3 Model Adjuster Design

The model adjuster makes local adjustments to the layout of the facial features so that the result comes closer to the real face. Common 3D modeling packages, such as Maya and 3DS Max, provide blend shapes (also called morph targets) for interpolating a model between two or more states to form different feature layouts or facial expressions. The invention borrows this idea and generates a personalized, realistic face by constructing a model adjuster that can tune subtle variations of the facial features.

Treat the vertices Vi = (x, y, z)T of the face mesh as a vector group H0 = (V1, V2, V3, ..., Vn), where n is the number of vertices. After a transformation of, say, the eyes (e.g. a change of size), some vertices move, forming a vector group H1 = (V1', V2', V3', ..., Vn'). Assigning this transformation a weight W, every intermediate state Hx between H0 and H1 can be computed by linear interpolation:

Hx = H0 + W * (H1 - H0)    (1)

A single transformation can hardly satisfy the needs of a face model; several transformations must be selected according to the characteristics of the face, each assigned its own weight W. The blending formula for m transformations is:

Hx = H0 + Σ(i=1..m) Wi * (Hi - H0)    (2)

The number m is whatever the application requires. In the experiments, 23 mesh adjusters with different transformations were defined: eye height, eye distance, eye size, eyebrow height, eyebrow depth, nose size, nose depth, nose height, nose-bridge depth, nose-bridge width, nostril width, mouth height, mouth thickness, mouth width, lip coverage, cheekbone depth, cheekbone width, cheekbone height, cheek width, chin height, chin depth, chin width, and jaw width. Each adjuster deforms the mesh by controlling its weight.
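The weighted blend of formulas (1) and (2) can be sketched in a few lines. The vertex data below is invented for illustration; a real adjuster would hold the 23 regulator target meshes listed above, one per transformation.

```python
# A sketch of the mesh adjuster of formulas (1)-(2): each regulator i
# stores a target vertex set Hi (the mesh after that transformation at
# full strength), and the blended mesh is Hx = H0 + sum_i Wi * (Hi - H0).
# Pure-Python tuples stand in for real mesh data.

def blend_mesh(h0, targets, weights):
    """Linearly mix m regulator targets into the neutral mesh h0.

    h0      : list of (x, y, z) neutral vertices (the vector group H0)
    targets : list of m vertex lists, each the mesh after one full
              transformation (weight 1.0), e.g. "eye size"
    weights : list of m weights Wi, typically in [0, 1]
    """
    blended = []
    for vi, v0 in enumerate(h0):
        v = list(v0)
        for hi, w in zip(targets, weights):
            for c in range(3):  # x, y, z components
                v[c] += w * (hi[vi][c] - v0[c])
        blended.append(tuple(v))
    return blended
```

With a single regulator at W = 0.5, each affected vertex moves halfway from its neutral position toward the regulator's target, exactly the intermediate state Hx of formula (1).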

2. Key Technologies of Realistic Face Generation

The key technologies of realistic face generation are the matching of 2D face photos to 3D models, texture mapping, and skin-color fusion.

2.1 Feature Point Calibration and Matching

After the 2D face photo is obtained, feature points must be calibrated at its key locations so that the best face model can be matched. The MPEG-4 standard specifies 84 feature points on a neutral face. With reference to MPEG-4, and to reduce the complexity of manual intervention while serving both model matching and texture mapping, 13 feature points at key locations are selected on the input photo and on the face model as matching and mapping marks, as shown in Figure 2. According to results in 3D face recognition, the relations among these key feature points suffice to roughly determine the positions of the facial features and the face shape, and thus to match the most suitable model. The experimental section verifies the effectiveness of the 13 points by comparison with matching methods that use different numbers of feature points.
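As a sketch of the normalization that precedes the distance comparison, one plausible convention scales all marked points by the eye-to-eye distance. This unit is an assumption for illustration; the patent says only that the photo is normalized before distances and proportions are computed.

```python
# Normalization sketch: scale feature-point coordinates so that
# distances become comparable across photos of different resolutions.
# Using the inter-eye distance as the unit is an assumption.
import math

def normalize_points(points, left_eye, right_eye):
    """Scale all named points so the eye-to-eye distance equals 1."""
    ex = right_eye[0] - left_eye[0]
    ey = right_eye[1] - left_eye[1]
    unit = math.hypot(ex, ey)
    return {name: (x / unit, y / unit) for name, (x, y) in points.items()}
```

After this step, distances such as eye-to-nose or mouth width are dimensionless ratios that can be compared directly against the (likewise normalized) model library.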

Model matching proceeds as follows. First, normalize the photo and compute the distances and proportional relations between the main feature points. Second, use those distances and proportions to determine the rough face-shape category. Finally, within the model library for that face shape, compare the Euclidean distances between corresponding feature points of each model and the photo, and use formula (3) to find the closest three-dimensional model:

E = Σ(i=1..n) (Di - Di')^2 * λi    (3)

where Di is the distance between two feature points in the photo, Di' is the distance between the corresponding feature points in the 3D model, λi is the weight of each distance, determined by its degree of contribution and by empirical formula, and n is the number of distances compared.
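Formula (3) itself is straightforward to implement. The distances, weights, and the `dists` field of the toy library below are illustrative assumptions; in the patent the weights λi are set empirically.

```python
# A sketch of formula (3): the matching score between the photo and a
# candidate model is E = sum_i (Di - Di')^2 * lambda_i over n compared
# feature-point distances; the best model minimizes E.

def matching_error(photo_dists, model_dists, weights):
    """Weighted squared difference of corresponding distances (E)."""
    return sum((d - dp) ** 2 * lam
               for d, dp, lam in zip(photo_dists, model_dists, weights))

def best_model(photo_dists, library, weights):
    """Return the library model minimizing E (the closest 3D model)."""
    return min(library,
               key=lambda m: matching_error(photo_dists, m["dists"], weights))
```

Because E is a sum of independent weighted terms, distances known to be discriminative (e.g. face width versus face height) can be emphasized simply by raising their λi.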

2.2 Texture Mapping

Researchers at home and abroad have proposed many texture-mapping methods. Kraevoy et al. triangulate the parameterized model and the corresponding texture according to the correspondence of feature points and establish the mapping between them; their algorithm, however, rests on a large number of iterations and has high complexity. Drawing on Kraevoy's method, the invention introduces triangle deformation and bilinear interpolation to map feature points from the overlay texture to the neutral texture.

Suppose the feature-point positions marked on the neutral texture and on the overlay texture form two two-dimensional point sets V1 and V2. Because of differences in manual marking, corresponding points in V1 and V2 will not lie at exactly the same positions, so a mapping is needed to blend them naturally. Take P1 ∈ V1 and P2 ∈ V2, extend them to three-dimensional vectors P1 = (X1, Y1, 1) and P2 = (X2, Y2, 1), and assume a matrix M; then:

M = P1 * P2^(-1)    (4)

After the transformation matrix of each point is obtained, the transformation of each feature point is applied to the pixels around it. The procedure uses the common triangle-deformation approach to apply the overlay texture to the model texture. As shown in Figure 3, the feature points are connected to form a mesh covering the face; at this stage the mesh contains both triangles and quadrilaterals, and by splitting, all polygons are reduced to triangles. A pixel inside a triangle is influenced by all three of its vertices.

Let the three vertices of a triangle be P1, P2, P3 and use them as the three columns of a 3×3 matrix (P1, P2, P3). If the transformed triangle is (P1', P2', P3'), then:

M * (P1, P2, P3) = (P1', P2', P3')    (5)

M = (P1', P2', P3') * (P1, P2, P3)^(-1)    (6)

For a point Pi inside the triangle, its mapped position is obtained as M * Pi, which determines the mapped coordinates of every point within the triangle. Before mapping the coordinate values are all integers, but the mapped coordinates may be non-integral and therefore no longer correspond one-to-one with pixels of the image. Sampling the image colors with bilinear interpolation at this stage gives good results.
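Formulas (5) and (6) amount to solving for an affine transform between two triangles written as homogeneous column vectors (x, y, 1). The sketch below inverts the 3×3 vertex matrix in pure Python so the example is self-contained; production code would use a linear-algebra library.

```python
# A sketch of formulas (5)-(6): the transform M taking source triangle
# (P1,P2,P3) to (P1',P2',P3') is M = (P1',P2',P3') * (P1,P2,P3)^-1,
# with each vertex a homogeneous column (x, y, 1).

def mat_inv3(m):
    """Inverse of a 3x3 matrix via the adjugate (cofactor) method."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [
        [ (e*i - f*h), -(b*i - c*h),  (b*f - c*e)],
        [-(d*i - f*g),  (a*i - c*g), -(a*f - c*d)],
        [ (d*h - e*g), -(a*h - b*g),  (a*e - b*d)],
    ]
    return [[x / det for x in row] for row in adj]

def mat_mul(a, b):
    """Product of two 3x3 matrices."""
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def triangle_transform(src, dst):
    """M such that M * (P1,P2,P3) = (P1',P2',P3'); vertices as (x, y)."""
    cols = lambda tri: [[tri[0][0], tri[1][0], tri[2][0]],
                        [tri[0][1], tri[1][1], tri[2][1]],
                        [1.0, 1.0, 1.0]]
    return mat_mul(cols(dst), mat_inv3(cols(src)))

def apply_transform(m, p):
    """Map an interior point Pi to M * Pi (homogeneous (x, y, 1))."""
    x, y = p
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])
```

Every pixel inside a source triangle is pushed through the same M, so the three vertices jointly determine the mapped position of the whole interior, as the text describes.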

2.3 Skin-Color Fusion

After texture mapping, the color difference between the neutral texture and the overlay texture calls for a fusion algorithm that removes the visual seam and makes the skin-color blend look natural. Existing image-processing and projection-blending software commonly uses the Alpha-transition method to achieve a smooth transition between overlapping images; it is both fast and effective. Borrowing this idea, the invention produces a standard Alpha map from the features of the face model and the layout of the texture map, and eliminates abrupt boundaries by placing a transition zone along the edges. The Alpha map serves two purposes: first, it produces smoothly transitioning texture boundaries; second, it applies gradient adjustments to other facial details, such as the eyebrows and the bridge of the nose, to reduce the influence of the overlay texture on the neutral texture. Figure 4 shows 10 hand-drawn Alpha maps: in the white regions the color is taken from the overlay texture, in the black regions from the neutral texture, and in between the color is linearly interpolated with the Alpha pixels as weights. In coverage tests, Alpha map number 10 achieved the best fusion effect.

In the actual fusion process, the model's neutral texture, the Alpha map, and the overlay texture are processed to the same size. The pixels of the Alpha map participate in the blend as interpolation weights, ultimately forming a naturally fused texture. The blending formula for the three images is:

C(x, y) = Cbase(x, y) * (255 - CAlpha(x, y)) + Coverlay(x, y) * CAlpha(x, y), where 0 ≤ CAlpha(x, y) ≤ 255    (7)

Cbase(x, y), CAlpha(x, y), and Coverlay(x, y) denote the color values of the neutral texture, the Alpha map, and the overlay texture at coordinate (x, y), respectively.
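A minimal NumPy sketch of this blend follows; note the division by 255, which keeps the 8-bit result in range, is a normalization the formula leaves implicit and is an assumption of this example:

```python
import numpy as np

def alpha_fuse(base, overlay, alpha):
    """Blend neutral texture `base` with `overlay` using an 8-bit Alpha map.

    Implements formula (7); the product of two 8-bit quantities is divided
    by 255 here so the result stays within [0, 255].
    """
    base = base.astype(np.float32)
    overlay = overlay.astype(np.float32)
    alpha = alpha.astype(np.float32)
    c = (base * (255.0 - alpha) + overlay * alpha) / 255.0
    return np.clip(c, 0, 255).astype(np.uint8)

# A black Alpha pixel keeps the neutral texture, a white one takes the overlay
base = np.full((2, 2), 100, np.uint8)
overlay = np.full((2, 2), 200, np.uint8)
alpha = np.array([[0, 255], [128, 64]], np.uint8)
fused = alpha_fuse(base, overlay, alpha)
print(fused)
```

Intermediate Alpha values give the linearly interpolated skin tones described above; applied per channel, the same routine handles RGB textures.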

The above is only the preferred embodiment of the present invention; any simple variation or equivalent substitution of the disclosed technical solution that is obvious to a person skilled in the art within the technical scope disclosed herein falls within the scope of protection of the present invention.

Claims (5)

1. A realistic human face generating method based on a single photo, characterized in that it comprises the following steps:
Step 1. Feature point calibration: obtain a frontal two-dimensional face photo, select and mark 13 feature points on 7 facial parts, and determine the approximate positions of the facial organs;
Step 2. Model matching: compare the feature-point distances between the photo and the models, and select from the model library the model with the greatest matching similarity;
Step 3. Texture mapping: simulate the true facial colors by texture mapping, realizing the mapping from the overlay texture onto the neutral texture, where the neutral texture is the model's default texture and the overlay texture is the face photo; through the mapping of the 13 feature points and their surrounding regions between the two texture layers, the texture features of the target face are added to the matched model;
Step 4. Skin color fusion: use an Alpha map to form a smooth transition from the overlay texture to the neutral texture at the boundary regions of the face fusion, and at the same time use the Alpha map to finely adjust detail regions such as the eyebrows and the bridge of the nose;
Step 5. Model adjustment: adjust features of the matched model such as the size and position of the facial organs by the vector-difference method, generating a personalized model that conforms to the facial characteristics of the target face.
2. The realistic human face generating method based on a single photo according to claim 1, characterized in that in Step 5 a model regulator is constructed for adjusting slight changes of the facial features, generating a personalized, realistic face;
Regard the vertices Vi = (x, y, z)^T of the face mesh as a vector group H0 = (V1, V2, V3, ..., Vn), where n is the number of vertices. After a transformation, for example of the eyes, the positions of some points change, forming a vector group H1 = (V1', V2', V3', ..., Vn'). Assign a weight W to this transformation; then every transition state Hx from H0 to H1 can be computed by linear interpolation:
Hx = H0 + W * (H1 - H0)    (1)
A single transformation can hardly meet the demands of a face model; multiple transformations must be selected according to the facial features, each assigned a weight Wi. The blending formula for m transformations is then:
Hx = H0 + Σ_{i=1}^{m} Wi * (Hi - H0)    (2)
where m is determined by the number of transformations needed in practice. In the experiments, mesh actuators for 23 different transformations were defined, covering: eye height, eye distance, eye size, eyebrow height, eyebrow depth, nose size, nose depth, nose height, nose-bridge depth, nose-bridge width, nostril width, mouth height, mouth thickness, mouth width, lip covering, cheekbone depth, cheekbone width, cheekbone height, cheek width, chin height, chin depth, chin width, and jaw width; each actuator realizes mesh deformation by controlling its weight.
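The weighted mixing of formula (2) is the standard blend-shape (morph-target) scheme; a minimal sketch with hypothetical vertex data (the mesh and target names are illustrative, not taken from the patent):

```python
import numpy as np

def blend(h0, targets, weights):
    """Hx = H0 + sum_i Wi * (Hi - H0): mix m deformation targets into the base mesh."""
    hx = h0.astype(np.float64).copy()
    for hi, w in zip(targets, weights):
        hx += w * (hi - h0)
    return hx

h0 = np.zeros((4, 3))                       # base mesh: 4 vertices
eyes_wider = h0.copy(); eyes_wider[0] = [1.0, 0.0, 0.0]
nose_taller = h0.copy(); nose_taller[1] = [0.0, 2.0, 0.0]
hx = blend(h0, [eyes_wider, nose_taller], [0.5, 0.25])
print(hx[0], hx[1])  # [0.5 0. 0.] [0. 0.5 0.]
```

Each actuator is simply one (Hi, Wi) pair; setting a weight to 0 disables that deformation and setting it to 1 applies it fully.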
3. The realistic human face generating method based on a single photo according to claim 1, characterized in that in Step 2 the model matching method is as follows: first, normalize the photo and compute the distances and proportional relationships between the principal feature points; second, determine the rough face-shape category from these distances and proportions; finally, match the Euclidean distances of corresponding feature points between the photo and the models in the library of that face-shape category, using formula (3) to find the closest three-dimensional model:
E = Σ_{i=1}^{n} (Di - Di')² * λi    (3)
where Di is the distance between a pair of feature points in the photo, Di' is the distance between the corresponding feature points in the three-dimensional model, λi is the weight of each feature-point distance, determined by its contribution and an empirical formula, and n is the number of distances compared.
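The weighted error of formula (3) and the library search can be sketched as follows (the distances, weights, and model names are invented for illustration):

```python
def match_error(photo_d, model_d, weights):
    """E = sum_i (Di - Di')^2 * lambda_i  (formula (3))."""
    return sum((d - dp) ** 2 * w for d, dp, w in zip(photo_d, model_d, weights))

# Pick the model with the smallest weighted error (illustrative numbers)
photo = [1.0, 2.0, 3.0]
library = {"modelA": [1.1, 2.0, 2.9], "modelB": [1.5, 2.5, 3.5]}
weights = [1.0, 2.0, 1.0]
best = min(library, key=lambda m: match_error(photo, library[m], weights))
print(best)  # modelA
```

In the method as claimed, this search runs only over the models of the face-shape category determined in the second sub-step, which keeps the comparison small.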
4. The realistic human face generating method based on a single photo according to claim 1, characterized in that in Step 3 the feature-point positions marked on the neutral texture and the overlay texture form two two-dimensional point sets V1 and V2, respectively; because of differences in manual marking, corresponding points in V1 and V2 will not lie at exactly the same positions, so a mapping method is needed to fuse them naturally. Take P1 ∈ V1, P2 ∈ V2, extend P1 and P2 to three-dimensional vectors P1 = (X1, Y1, 1), P2 = (X2, Y2, 1), and assume there is a matrix M such that:
M = P1 * P2^{-1}    (4)
After the transformation matrix of each point is obtained, the transformation of a feature point is applied to the pixels around it. This step uses the common triangle texture-warping approach to apply the overlay texture to the model texture: the feature points are connected to form a mesh covering the face; this mesh contains both triangles and quadrilaterals, and by splitting, all polygons are reduced to triangles, so that a pixel inside a triangle is simultaneously influenced by the triangle's three vertices;
Let the three vertices of a triangle be P1, P2, P3; taking them as the three columns of a matrix yields the 3*3 matrix (P1, P2, P3). Let the transformed triangle be (P1', P2', P3'); then:
M * (P1, P2, P3) = (P1', P2', P3')    (5)
M = (P1', P2', P3') * (P1, P2, P3)^{-1}    (6)
For any point i inside the triangle, M * Pi gives the mapped point Pi', which determines the mapped coordinates of all points in the triangle; for unmapped points, the coordinate values are all integers, but the mapped coordinates may not be integers, so they cannot correspond one-to-one with the pixels of the image.
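Equations (5) and (6) solve for a 3*3 affine matrix from one triangle to another; a small NumPy sketch with made-up triangle coordinates:

```python
import numpy as np

# Source and target triangle vertices in homogeneous form (x, y, 1),
# stacked as matrix columns as in equations (5)-(6)
src = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 1.0]])
dst = np.array([[2.0, 4.0, 2.0],
                [3.0, 3.0, 5.0],
                [1.0, 1.0, 1.0]])

M = dst @ np.linalg.inv(src)   # equation (6): M = dst * src^-1

centroid = src.mean(axis=1)    # an interior point of the source triangle
mapped = M @ centroid          # M * Pi gives the mapped point Pi'
print(mapped)
```

The mapped coordinates are generally non-integer, which is exactly where the bilinear sampling described in the specification comes in.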
5. The realistic human face generating method based on a single photo according to claim 1, characterized in that in the actual fusion process of Step 4 the model's neutral texture, the Alpha map, and the overlay texture are processed to the same size; the pixels of the Alpha map participate in the blend as interpolation weights, ultimately forming a naturally fused texture; the blending formula for the three images is:
C(x, y) = Cbase(x, y) * (255 - CAlpha(x, y)) + Coverlay(x, y) * CAlpha(x, y), where 0 ≤ CAlpha(x, y) ≤ 255    (7)
Cbase(x, y), CAlpha(x, y), and Coverlay(x, y) denote the color values of the neutral texture, the Alpha map, and the overlay texture at coordinate (x, y), respectively.
Application CN201610035432.5A (filed 2016-01-19, priority date 2016-01-19): Realistic face generating method based on single photo; status pending; published as CN105719326A.

CN105719326A true CN105719326A (en) 2016-06-29
