CN114820894A - A method and system for generating a virtual character - Google Patents
A method and system for generating a virtual character
- Publication number
- CN114820894A (application CN202210492683.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- head model
- rendering
- face image
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a method and system for generating a virtual character. A face image of a target user is acquired; a three-dimensional head model of the virtual character is generated from a three-dimensional head model template; initial rendering parameters are obtained and used to render the head model; the rendered head model is projected onto a specified plane to obtain a planar projection image; the projection image is matched against the target user's face image; the head model and the rendering parameters are adjusted according to the matching result; and the adjusted head model is re-rendered with the adjusted rendering parameters. The projection, matching, adjustment, and re-rendering steps are repeated until the match between the planar projection image and the target user's face image satisfies a preset condition, so that a virtual character resembling the user's real appearance can be generated quickly.
Description
Technical Field
The present invention relates to the technical field of three-dimensional modeling, and in particular to a method and system for generating a virtual character.
Background
In recent years, the concept of the metaverse has become increasingly popular with Internet companies and online users, and more and more online social activities, such as live streaming, online offices, and role-playing games, have begun to use three-dimensional avatars, that is, virtual characters, to simulate real-world interaction scenarios. Avatars in the metaverse are generally built from three-dimensional models preset by the system; users adjust the avatar with personalization tools provided by the system to customize its style, for example its hairstyle, hair color, face shape, skin tone, and skin texture. To bring virtual characters closer to reality, techniques that build a virtual character from data captured from the user's real appearance have recently appeared, but they have proven difficult to popularize because the construction methods are complex, the procedures are cumbersome, the process is time-consuming, and the resulting model differs too much from the real person.
Summary of the Invention
In view of the above problems, the present invention proposes a method and system for generating a virtual character, which quickly generates a virtual character matching the user's real appearance.
In view of this, a first aspect of the present invention provides a method for generating a virtual character, comprising:
acquiring a face image of a target user;
generating a three-dimensional head model of the virtual character from a three-dimensional head model template;
obtaining initial rendering parameters and rendering the three-dimensional head model;
projecting the rendered three-dimensional head model onto a specified plane to obtain a planar projection image;
matching the planar projection image against the face image of the target user;
adjusting the three-dimensional head model and the rendering parameters according to the matching result;
re-rendering the adjusted three-dimensional head model using the adjusted rendering parameters;
repeating the above projection, matching, model and rendering-parameter adjustment, and re-rendering steps until the match between the planar projection image and the face image of the target user satisfies a preset condition.
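The iterative fit described above can be sketched in miniature. The toy loop below is an illustration only, not the patent's implementation: the "model" is reduced to a plain parameter vector, "render + project" is the identity, and "adjust" is a simple proportional correction toward the target.

```python
def fit_head_model(face_vec, template_vec, lr=0.5, max_iters=200, tol=1e-3):
    """Toy stand-in for the patent's loop: repeatedly project, match,
    adjust, and re-render until a preset matching condition is met."""
    model = list(template_vec)                 # model generated from the template
    for _ in range(max_iters):
        projection = model                     # render + project (identity here)
        diffs = [t - p for t, p in zip(face_vec, projection)]  # matching result
        if max(abs(d) for d in diffs) < tol:   # preset matching condition
            break
        model = [m + lr * d for m, d in zip(model, diffs)]  # adjust parameters
    return model

fitted = fit_head_model([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
```

In the actual method, the projection step renders a textured 3D mesh and the matching step compares feature coordinate and color vectors, but the control flow is the same loop.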
Further, in the above method, the step of matching the planar projection image against the face image of the target user specifically comprises:
extracting at least one first image feature from each of the planar projection image and the face image;
converting the first image feature into two sets of coordinate vectors and two sets of color vectors, corresponding to the planar projection image and the face image respectively;
computing the difference between the two sets of coordinate vectors and the difference between the two sets of color vectors.
Further, in the above method, the first image feature comprises images of the facial features and an image of the facial contour, and the step of adjusting the three-dimensional head model and the rendering parameters according to the matching result specifically comprises:
predicting, from the difference between the two sets of coordinate vectors, a plurality of vertex coordinates corresponding to the first image feature among the rendering parameters of the three-dimensional head model;
setting the plurality of vertex coordinates corresponding to the first image feature to the predicted values.
Further, in the above method, before the step of obtaining initial rendering parameters and rendering the three-dimensional head model, the method further comprises:
obtaining initial texture parameters to generate an initial planar texture space;
and after the step of repeating the projection, matching, adjustment, and re-rendering steps until the match between the planar projection image and the face image of the target user satisfies the preset condition, the method further comprises:
establishing, from the three-dimensional head model, a coordinate correspondence between the face image and the initial planar texture space;
segmenting the face image into partial images of a plurality of regions according to a preset color-difference threshold;
mapping the partial images of the plurality of regions into the planar texture space according to the coordinate correspondence.
Further, in the above method, after the step of repeating the projection, matching, adjustment, and re-rendering steps until the match between the planar projection image and the face image of the target user satisfies the preset condition, the method further comprises:
extracting at least one second image feature from the face image;
computing ambient light parameters from the second image feature;
adjusting the rendering parameters of the three-dimensional head model and the texture parameters of the planar texture space according to the ambient light parameters.
A second aspect of the present invention provides a virtual character generation system, comprising:
an image acquisition module for acquiring a face image of a target user;
a model generation module for generating a three-dimensional head model of the virtual character from a three-dimensional head model template;
a model rendering module for obtaining initial rendering parameters to render the three-dimensional head model, and for re-rendering the adjusted three-dimensional head model with the adjusted rendering parameters after the rendering parameters and the model have been adjusted;
a planar projection module for projecting the rendered three-dimensional head model onto a specified plane to obtain a planar projection image;
an image matching module for matching the planar projection image against the face image of the target user;
a parameter adjustment module for adjusting the three-dimensional head model and the rendering parameters according to the matching result;
a loop execution module for repeating the projection, matching, and parameter-adjustment steps until the match between the planar projection image and the face image of the target user satisfies a preset condition.
Further, in the above system, the image matching module comprises:
a feature extraction submodule for extracting at least one first image feature from each of the planar projection image and the face image, the first image feature including, but not limited to, one or more of the facial features and the facial contour;
a vector conversion submodule for converting the first image feature into two sets of coordinate vectors and two sets of color vectors, corresponding to the planar projection image and the face image respectively. As an example, take the eye feature as one of the first image features: the eye features extracted from the planar projection image and from the face image are each converted into a coordinate vector and a color vector. Suppose the eye feature consists of n pixels whose coordinates in the planar projection image are (x1,y1), (x2,y2), ..., (xn,yn) with colors (r1,g1,b1), (r2,g2,b2), ..., (rn,gn,bn), and whose coordinates in the face image are (x'1,y'1), (x'2,y'2), ..., (x'n,y'n) with colors (r'1,g'1,b'1), (r'2,g'2,b'2), ..., (r'n,g'n,b'n). The two sets of coordinate vectors and two sets of color vectors are then constructed as:
eye-feature coordinate vector of the planar projection image: [x1,y1,x2,y2,...,xn,yn];
eye-feature coordinate vector of the face image: [x'1,y'1,x'2,y'2,...,x'n,y'n];
eye-feature color vector of the planar projection image: [r1,g1,b1,r2,g2,b2,...,rn,gn,bn];
eye-feature color vector of the face image: [r'1,g'1,b'1,r'2,g'2,b'2,...,r'n,g'n,b'n].
a difference calculation submodule for computing the difference between the two sets of coordinate vectors and between the two sets of color vectors. In some embodiments of the present invention, the eye-feature coordinate vectors of the planar projection image and of the face image are subtracted to obtain an eye-feature coordinate-difference vector [dx1,dy1,dx2,dy2,...,dxn,dyn], and the eye-feature color vectors are subtracted to obtain an eye-feature color-difference vector [dr1,dg1,db1,dr2,dg2,db2,...,drn,dgn,dbn]. From the coordinate-difference vector and the color-difference vector, at least one set of adjustment values for the three-dimensional head model and the rendering parameters is predicted, and the model and the parameters are adjusted accordingly.
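The vector construction and subtraction just described can be written out directly; the pixel values below are invented purely for illustration:

```python
def feature_vectors(pixels):
    """Flatten a list of (x, y, r, g, b) feature pixels into a coordinate
    vector [x1,y1,...,xn,yn] and a color vector [r1,g1,b1,...,rn,gn,bn]."""
    coords, colors = [], []
    for x, y, r, g, b in pixels:
        coords += [x, y]
        colors += [r, g, b]
    return coords, colors

# Hypothetical eye-feature pixels from the projection image and the face image.
proj_eye = [(10, 20, 200, 180, 160), (11, 20, 198, 178, 158)]
face_eye = [(12, 22, 190, 170, 150), (13, 23, 188, 168, 148)]

pc, pcol = feature_vectors(proj_eye)
fc, fcol = feature_vectors(face_eye)

coord_diff = [a - b for a, b in zip(pc, fc)]      # [dx1, dy1, dx2, dy2, ...]
color_diff = [a - b for a, b in zip(pcol, fcol)]  # [dr1, dg1, db1, ...]
```

The difference vectors then drive the prediction of the model and rendering-parameter adjustments.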
Further, in the above system, the first image feature comprises images of the facial features and an image of the facial contour, and the parameter adjustment module comprises:
a coordinate prediction submodule for predicting, from the difference between the two sets of coordinate vectors, a plurality of vertex coordinates corresponding to the first image feature among the rendering parameters of the three-dimensional head model. Because going from the three-dimensional head model to the planar projection image is a reduction from three dimensions to two, a coordinate-vector difference in the two-dimensional image maps back to multiple different sets of predicted vertex coordinates for the corresponding region of the three-dimensional head model, for example the eye region. Adopting any one of these candidate vertex-coordinate predictions as the adjusted coordinates will, through the change in spatial coordinates, also affect the color vectors of the associated regions in the projected image; for example, a different eye-socket depth changes the shadowed area around the eye.
a coordinate modification submodule for setting the plurality of vertex coordinates corresponding to the first image feature to the predicted values. While repeating the projection, matching, model and rendering-parameter adjustment, and re-rendering steps, the different sets of predicted vertex coordinates are tested one by one, and during testing the change in the color-vector difference is used to judge whether the vertex coordinates satisfy the preset condition.
Further, in the above system, the parameter acquisition module is also used to obtain initial texture parameters to generate an initial planar texture space, and the virtual character generation system further comprises:
a coordinate association module for establishing, from the three-dimensional head model, a coordinate correspondence between the face image and the initial planar texture space. The mapping between the face image and the three-dimensional head model, and the mapping between the planar texture space and the three-dimensional head model, were established respectively in the earlier step of matching the planar projection image against the target user's face image and in the step of obtaining initial texture parameters to generate the initial planar texture space; from the relationship among the three, a mapping can be established between the coordinates of every pixel of the face image and the coordinates of every pixel of the planar texture space.
an image segmentation module for segmenting the face image into partial images of a plurality of regions according to a preset color-difference threshold. Different regions of the face, including the hair, forehead, eyebrows, eyes, cheeks, bridge of the nose, lips, and chin, differ considerably in color, and under different lighting the cheeks may also contain dark and highlighted parts. Using a certain color-tolerance range as the threshold, regions of different colors are divided into different partial images.
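The patent does not specify the segmentation algorithm; the greedy grouping below, which assigns each pixel to the first region whose seed color is within a preset tolerance, is only one possible reading, shown as a minimal sketch:

```python
def segment_by_color(pixels, tol=30):
    """Group (x, y, (r, g, b)) pixels into regions whose colors stay within
    a preset tolerance of the region's first pixel (a simplification of the
    color-difference-threshold segmentation described above)."""
    regions = []  # each region: {"seed": color, "pixels": [...]}
    for x, y, color in pixels:
        for region in regions:
            seed = region["seed"]
            if max(abs(c - s) for c, s in zip(color, seed)) <= tol:
                region["pixels"].append((x, y, color))
                break
        else:  # no existing region is close enough: start a new one
            regions.append({"seed": color, "pixels": [(x, y, color)]})
    return regions

sample = [(0, 0, (210, 180, 160)),  # cheek
          (1, 0, (205, 178, 162)),  # cheek
          (2, 0, (40, 30, 25)),     # eyebrow
          (3, 0, (45, 32, 28))]     # eyebrow
regions = segment_by_color(sample)
```

On the sample above, the two cheek pixels and the two eyebrow pixels fall into two separate regions.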
a pixel mapping module for mapping the partial images of the plurality of regions into the planar texture space according to the coordinate correspondence. Each partial image is mapped into the planar texture space according to the aforementioned coordinate correspondence, and a smoothing function is then applied to the junctions between regions so that the transitions do not appear abrupt.
Further, the above virtual character generation system further comprises:
a feature extraction module for extracting at least one second image feature from the face image, the second image feature including the dark parts and the highlighted parts of the face image;
a parameter calculation module for computing ambient light parameters from the second image feature, the ambient light parameters including, but not limited to, the light direction, the color and intensity of the ambient light, and the intensity and color of diffuse and specular reflection;
the parameter adjustment module being also used to adjust the rendering parameters of the three-dimensional head model and the texture parameters of the planar texture space according to the ambient light parameters.
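As one hypothetical reading of this step (the patent gives no formula), the dominant light direction could be approximated from the offset of the highlight region's centroid relative to the face center, and an ambient color from the mean color of the dark region:

```python
def estimate_light(face_center, highlight_pixels, dark_pixels):
    """Crude ambient-light estimate: light direction from the highlight
    centroid's offset from the face center, ambient color from the mean
    color of the dark region. A sketch, not the patent's actual method."""
    hx = sum(p[0] for p in highlight_pixels) / len(highlight_pixels)
    hy = sum(p[1] for p in highlight_pixels) / len(highlight_pixels)
    dx, dy = hx - face_center[0], hy - face_center[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0   # avoid division by zero
    direction = (dx / norm, dy / norm)
    ambient = tuple(
        sum(p[2][i] for p in dark_pixels) / len(dark_pixels) for i in range(3)
    )
    return direction, ambient

direction, ambient = estimate_light(
    face_center=(50, 50),
    highlight_pixels=[(80, 50), (82, 50)],  # highlights to the right of center
    dark_pixels=[(20, 50, (60, 50, 45)), (22, 52, (64, 54, 47))],
)
```

A production system would likely fit a full lighting model (for example spherical harmonics) rather than this centroid heuristic.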
The present invention thus provides a method and system for generating a virtual character: a face image of a target user is acquired; a three-dimensional head model of the virtual character is generated from a three-dimensional head model template; initial rendering parameters are obtained and used to render the head model; the rendered model is projected onto a specified plane to obtain a planar projection image; the projection image is matched against the target user's face image; the head model and the rendering parameters are adjusted according to the matching result; the adjusted model is re-rendered with the adjusted parameters; and the projection, matching, adjustment, and re-rendering steps are repeated until the match between the planar projection image and the face image satisfies a preset condition, so that a virtual character matching the user's real appearance is generated quickly.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a virtual character generation method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for matching a projection image with a face image provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a method for modifying the vertex coordinates of a three-dimensional model provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a texture space generation method provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of an ambient light parameter adjustment method provided by an embodiment of the present invention;
Fig. 6 is a schematic block diagram of a virtual character generation system provided by an embodiment of the present invention.
Detailed Description of the Embodiments
In order that the above objects, features, and advantages of the present invention can be understood more clearly, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention; however, the present invention can also be implemented in ways other than those described here, and the scope of protection of the present invention is therefore not limited by the specific embodiments disclosed below.
In the description of the present invention, the term "plurality" means two or more unless otherwise expressly defined. Orientations or positional relationships indicated by terms such as "upper" and "lower" are based on the orientations shown in the drawings; they are used only to facilitate and simplify the description of the present invention, do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Terms such as "connected", "mounted", and "fixed" should be understood broadly; for example, "connected" may mean fixedly connected, detachably connected, or integrally connected, and may mean directly connected or indirectly connected through an intermediary. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific situation. In addition, the terms "first", "second", and the like are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implying the number of the technical features indicated; a feature defined as "first" or "second" may therefore explicitly or implicitly include one or more of that feature.
In the description of this specification, the terms "one embodiment", "some embodiments", "specific embodiment", and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
A method and system for generating a virtual character according to some embodiments of the present invention are described below with reference to Fig. 1 to Fig. 6.
As shown in Fig. 1, a first aspect of the present invention provides a method for generating a virtual character, comprising:
S110: acquire a face image of the target user. In some embodiments of the present invention, the face image of the target user may be a photo taken in advance and read by the virtual character generation system from local or remote storage. In other embodiments, the face image may be captured in real time by a camera when the generation program of the virtual character generation system is started. After the face image of the target user has been acquired in one of the above ways, the virtual character generation system can generate, from the face image, a virtual character that resembles the user's real appearance.
S120: generate a three-dimensional head model of the virtual character from a three-dimensional head model template. Before the virtual character is generated from the target user's face image, a generic three-dimensional head model is first constructed using a generic three-dimensional head model template. Further, in some embodiments of the present invention, after the target user's face image is acquired, image feature extraction and analysis are performed on it to obtain the target user's facial characteristics, and a matching three-dimensional head model template is selected accordingly. For example, the characteristics of the target user include the target user's gender: after acquiring the face image, face recognition is performed to determine whether the target user is male or female, so that the three-dimensional head model template of a male or a female virtual character is obtained in the current step. The characteristics of the target user also include the target user's ethnicity, age, and so on. In some embodiments of the present invention, the virtual character generation system presets three-dimensional head model templates for different ethnicities, genders, and ages; by extracting and analyzing the target user's facial characteristics from the face image, the system identifies the target user's ethnicity, gender, and age and obtains the corresponding template to generate a preliminary three-dimensional head model of the virtual character, thereby reducing the time subsequently spent adjusting the head model and the rendering parameters.
S130: obtain initial rendering parameters and render the three-dimensional head model. The initial rendering parameters include, but are not limited to, the number, type, position, color, and intensity of the light sources, the camera position and angle, and the reflective/diffuse properties of environmental objects. Specifically, before this step, the virtual character generation system also obtains initial texture parameters to generate an initial planar texture space, and maps the initial planar texture space onto the three-dimensional head model so that the model has preliminary material and texture properties.
S140: Project the rendered head model onto a specified plane to obtain a planar projection image. Specifically, the relative angle between the camera and the target user's head at capture time is determined from the relative positions of the facial features and the face contour in the face image; this angle is then used as the relative angle between the head model and the specified plane when computing the planar projection image of the head model.
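The projection step above can be sketched minimally as rotating the model's vertices by the estimated relative angle and then dropping the depth axis. This is a simplified orthographic sketch, not the patent's actual renderer; the function name and the single-yaw-angle parameterization are illustrative assumptions.

```python
import math

def project_vertices(vertices, yaw_deg):
    """Rotate 3D vertices by the estimated head/camera yaw angle, then
    drop the depth axis (orthographic projection onto the x-y plane)."""
    yaw = math.radians(yaw_deg)
    cos_a, sin_a = math.cos(yaw), math.sin(yaw)
    projected = []
    for x, y, z in vertices:
        # Rotate about the vertical (y) axis so the model faces the plane
        # at the same angle the camera saw the real head.
        xr = cos_a * x + sin_a * z
        projected.append((xr, y))  # orthographic: discard rotated depth
    return projected
```

A full implementation would use a perspective camera model, but the idea of reusing the estimated capture angle as the projection angle is the same.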
S150: Match the planar projection image against the target user's face image. Matching the two images determines their differences, for example differences in hair, face shape, and the shape and position of the facial features, as well as differences in the color, brightness, and contrast of each image region.
S160: Adjust the head model and the rendering parameters according to the matching result. Specifically, based on the differences between the planar projection image and the face image, the system computes adjustments to the head model and rendering parameters that drive the projection image toward the face image, and applies those adjustments.
S170: Re-render the adjusted head model with the adjusted rendering parameters. Changes produced by adjusting the head model include, for example, changes in hairstyle, face shape, and the shape of the facial features. Changes produced by adjusting the rendering parameters include the orientation of the head model, the global brightness, the color and size of facial shadows, and the brightness, color, and size of specular regions on the face.
S180: Repeat the projection, matching, model and rendering-parameter adjustment, and re-rendering steps until the match between the planar projection image and the target user's face image satisfies a preset condition. The coordinates and color of a given image feature in the projection image are affected by many different parameters of the head model and the rendering configuration, and different image features also influence one another, so the unique head model and rendering parameters cannot be recovered directly from the difference between the two images; the head model and rendering parameters must instead be adjusted repeatedly so that the projection image converges toward the face image. Because of the complexity of the shooting environment, fully recovering the head model and rendering parameters so that the projection image exactly matches the face image would consume enormous and unnecessary computational and time resources, so the looped steps can stop once the similarity between the two images is acceptable. The preset condition specifically means that the difference between the coordinate vectors and color vectors of one or more image features in the projection image and the face image is smaller than a preset threshold.
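The looped steps S140 through S180 amount to a generic fit-until-threshold loop. The sketch below captures only that control flow; the callables and the scalar toy problem are stand-ins for the patent's rendering, matching, and adjustment steps, not its actual implementation.

```python
def fit_until_match(render_project, adjust, difference, state, target,
                    max_iters=100, threshold=1e-3):
    """Generic fitting loop: render and project the current state, measure
    the feature difference against the target face image, adjust, and
    repeat until the difference is below the preset threshold."""
    diff = float("inf")
    for _ in range(max_iters):
        projection = render_project(state)
        diff = difference(projection, target)
        if diff < threshold:  # preset condition met: stop the loop
            break
        state = adjust(state, projection, target)
    return state, diff

# Toy demonstration: "state" is one parameter, "projection" is its value,
# and each adjustment moves it halfway toward the target.
state, diff = fit_until_match(
    render_project=lambda s: s,
    adjust=lambda s, p, t: s + 0.5 * (t - p),
    difference=lambda p, t: abs(p - t),
    state=0.0,
    target=10.0,
)
```

The early-exit threshold reflects the patent's observation that an exact match is unnecessary; the loop stops once the similarity is acceptable.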
As shown in Figure 2, in the virtual character generation method described above, the step of matching the planar projection image against the target user's face image specifically includes:
S151: Extract at least one first image feature from the planar projection image and the face image. The first image features include, but are not limited to, one or more of the facial features and the face contour.
S152: Convert each first image feature into two sets of coordinate vectors and two sets of color vectors, one set corresponding to the planar projection image and one to the face image. For example, take the eye feature as one of the first image features: the eye features extracted from the two images are converted into corresponding coordinate vectors and color vectors. Suppose the eye feature consists of n pixels whose coordinates in the projection image are (x1, y1), (x2, y2), ..., (xn, yn) and whose color values are (r1, g1, b1), (r2, g2, b2), ..., (rn, gn, bn), while in the face image the coordinates are (x'1, y'1), (x'2, y'2), ..., (x'n, y'n) and the color values are (r'1, g'1, b'1), (r'2, g'2, b'2), ..., (r'n, g'n, b'n). The two sets of coordinate vectors and two sets of color vectors are then constructed as:
Projection-image eye coordinate vector: [x1, y1, x2, y2, ..., xn, yn];
Face-image eye coordinate vector: [x'1, y'1, x'2, y'2, ..., x'n, y'n];
Projection-image eye color vector: [r1, g1, b1, r2, g2, b2, ..., rn, gn, bn];
Face-image eye color vector: [r'1, g'1, b'1, r'2, g'2, b'2, ..., r'n, g'n, b'n].
S153: Compute the difference between the two sets of coordinate vectors and the difference between the two sets of color vectors. In some embodiments of the present invention, the eye coordinate vectors and eye color vectors of the projection image and the face image are subtracted element-wise to obtain the coordinate-difference vector [dx1, dy1, dx2, dy2, ..., dxn, dyn] and the color-difference vector [dr1, dg1, db1, dr2, dg2, db2, ..., drn, dgn, dbn]. At least one set of adjustment values for the head model and the rendering parameters is then predicted from the coordinate-difference vector and the color-difference vector, and the head model and rendering parameters are adjusted accordingly.
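Steps S152 and S153 above can be sketched directly: flatten each feature's pixels into the interleaved coordinate and color vectors, then subtract element-wise. The pixel values below are made-up illustration data.

```python
def feature_vectors(pixels):
    """Flatten a feature's pixels into one interleaved coordinate vector
    [x1, y1, x2, y2, ...] and one color vector [r1, g1, b1, r2, ...]."""
    coords, colors = [], []
    for (x, y), (r, g, b) in pixels:
        coords += [x, y]
        colors += [r, g, b]
    return coords, colors

def vector_difference(a, b):
    """Element-wise difference between two equal-length feature vectors."""
    return [ai - bi for ai, bi in zip(a, b)]

# A two-pixel "eye" feature in the projection image and in the face image.
proj = [((10, 20), (200, 150, 120)), ((11, 20), (198, 149, 121))]
face = [((12, 19), (190, 140, 118)), ((13, 19), (191, 141, 119))]
pc, pcol = feature_vectors(proj)
fc, fcol = feature_vectors(face)
d_coord = vector_difference(pc, fc)    # [dx1, dy1, dx2, dy2]
d_color = vector_difference(pcol, fcol)
```

In the patent's method these difference vectors drive the prediction of adjustment values for the head model and rendering parameters.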
As shown in Figure 3, in the virtual character generation method described above, the first image features include images of the facial features and of the face contour, and the step of adjusting the head model and rendering parameters according to the matching result specifically includes:
S161: Predict, from the difference between the two sets of coordinate vectors, a plurality of vertex coordinates in the head model's rendering parameters that correspond to the first image feature. Because the mapping from the head model to the planar projection image reduces three dimensions to two, a given coordinate-vector difference in the 2D image maps back to multiple distinct candidate sets of vertex-coordinate predictions for the corresponding region of the head model, for example the eye region. Adopting any one of these candidate vertex predictions as the adjusted coordinates will, through the change in spatial coordinates, also affect the color vectors of the associated regions in the projection image; for example, a different eye-socket depth changes the shadowed area of that region.
S162: Set the vertex coordinates corresponding to the first image feature to the predicted values. During the looped projection, matching, adjustment, and re-rendering steps, the candidate sets of predicted vertex coordinates are tested one by one, and the change in the color-vector difference during testing is used to judge whether a candidate satisfies the preset condition.
As shown in Figure 4, the virtual character generation method described above further includes, before the step of acquiring the initial rendering parameters and rendering the head model:
S210: Acquire initial texture parameters to generate an initial planar texture space. As described above, the target user's ethnicity, gender, age, and so on are determined by analyzing the face image, and the initial texture parameters are read from preset template information to generate the initial planar texture space. The planar texture space is the unfolded figure obtained by cutting the surface of the head model along preset seam lines and flattening it onto a plane; its coordinates have a mapping relationship with the coordinates of the model surface. Before each execution of the rendering step, the material texture of the planar texture space is mapped onto the model surface so that the surface has color, gloss, and other attributes.
After the looped projection, matching, model and rendering-parameter adjustment, and re-rendering steps have brought the match between the planar projection image and the target user's face image within the preset condition, the method further includes:
S220: Establish, via the head model, the coordinate correspondence between the face image and the initial planar texture space. The mapping between the face image and the head model was established in the matching step described above, and the mapping between the planar texture space and the head model was established when the texture space was generated; from these two mappings, a mapping between the coordinates of each pixel in the face image and the coordinates of each pixel in the planar texture space can be established.
S230: Segment the face image into local images of multiple regions according to a preset color-difference threshold. Different regions of the face (hair, forehead, eyebrows, eyes, cheeks, bridge of the nose, lips, chin, and so on) differ considerably in color, and under varying light the cheeks may also contain dark parts and highlight parts. Using a color tolerance range as the threshold, regions of different color are partitioned into different local images.
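One plausible realization of this segmentation step is a flood-fill that grows a region while neighboring pixels stay within the color tolerance. This is an assumed sketch; the patent does not specify the segmentation algorithm, only the color-difference threshold criterion.

```python
def segment_by_color(image, tolerance):
    """Flood-fill segmentation: 4-connected pixels join the same region
    while their color stays within `tolerance` (max per-channel
    difference) of an adjacent pixel already in the region."""
    h, w = len(image), len(image[0])
    labels = [[-1] * w for _ in range(h)]
    region = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            stack = [(sy, sx)]
            labels[sy][sx] = region
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1:
                        if max(abs(a - b) for a, b in
                               zip(image[y][x], image[ny][nx])) <= tolerance:
                            labels[ny][nx] = region
                            stack.append((ny, nx))
            region += 1
    return labels, region

# Tiny demo: three similar dark pixels and one very different bright pixel.
image = [[(10, 10, 10), (12, 12, 12)],
         [(200, 200, 200), (11, 11, 11)]]
labels, n = segment_by_color(image, tolerance=5)
```

The three near-black pixels fall into one region and the bright pixel into another, matching the idea that skin, hair, shadow, and highlight areas separate along color boundaries.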
S240: Map the local images of the multiple regions into the planar texture space according to the coordinate mapping. Each local image is mapped into the planar texture space via the coordinate mapping established above, and a smoothing function is then applied at the seams so that the transitions between regions do not appear abrupt.
As shown in Figure 5, in the virtual character generation method described above, after the looped projection, matching, model and rendering-parameter adjustment, and re-rendering steps have satisfied the preset condition, the method further includes:
S310: Extract at least one second image feature from the face image. The second image features include the dark parts and the highlight parts of the face image.
S320: Compute ambient light parameters from the second image features. The ambient light parameters include, but are not limited to, the illumination direction, the ambient light color and intensity, and the intensity and color of diffuse and specular reflection.
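As one very rough illustration of recovering an illumination direction from highlight regions, the sketch below takes the centroid of the brightest pixels as the highlight and reads the 2D light direction as its offset from the face center. This is an assumed heuristic for illustration only; the patent does not describe how its ambient light parameters are computed.

```python
def estimate_light_direction(gray, face_center):
    """Rough sketch: centroid of the brightest decile of pixels serves as
    the highlight; the 2D light direction is the offset from the face
    center to that centroid."""
    pixels = [(v, x, y) for y, row in enumerate(gray)
              for x, v in enumerate(row)]
    pixels.sort(reverse=True)
    top = pixels[:max(1, len(pixels) // 10)]
    hx = sum(x for _, x, _ in top) / len(top)
    hy = sum(y for _, _, y in top) / len(top)
    cx, cy = face_center
    return (hx - cx, hy - cy)  # points from face center toward highlight

# Demo: a 4x4 grayscale patch lit from the right.
gray = [[0, 0, 0, 255],
        [0, 0, 0, 255],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
direction = estimate_light_direction(gray, (1.5, 1.5))
```

A positive x component indicates light arriving from the right side of the face, which is consistent with the highlight placement in the demo patch.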
S330: Adjust the rendering parameters of the head model and the texture parameters of the planar texture space according to the ambient light parameters.
As shown in Figure 6, a second aspect of the present invention provides a virtual character generation system, including:
an image acquisition module, configured to acquire the target user's face image. In some embodiments of the present invention, the face image may be a photograph taken in advance and read by the virtual character generation system from local or remote storage. In other embodiments, the face image may instead be captured in real time by a camera when the system's generation program starts. Once the face image has been acquired in either way, the system can generate from it a virtual character that resembles the user's real appearance.
a model generation module, configured to generate the virtual character's three-dimensional head model from a head model template. Before the virtual character is generated from the target user's face image, a generic three-dimensional head model is first constructed from a generic head model template. Further, in some embodiments of the present invention, after the target user's face image is acquired, image feature extraction and analysis are performed on it to obtain the target user's facial features, and the corresponding head model template is then selected according to those features. For example, the target user's features may include gender: after the face image is acquired, face recognition is performed on it to determine whether the target user is male or female, so that the head model template of a male or female virtual character is selected in this step. The target user's features may also include ethnicity, age, and the like. In some embodiments of the present invention, the virtual character generation system presets head model templates corresponding to different ethnicities, genders, and ages; by extracting and analyzing the target user's facial features in the face image, the system identifies the user's ethnicity, gender, and age and selects the corresponding template to generate a preliminary head model of the virtual character, thereby reducing the time spent on subsequent adjustment of the head model and rendering parameters.
a model rendering module, configured to acquire initial rendering parameters to render the three-dimensional head model and, after the head model and the rendering parameters have been adjusted, to re-render the adjusted head model with the adjusted parameters. The initial rendering parameters include, but are not limited to, the number, type, position, color, and intensity of the light sources; the camera position and angle; and the reflective/diffuse properties of environmental objects. Specifically, before the step of acquiring the initial rendering parameters and rendering the head model, the virtual character generation system also acquires initial texture parameters to generate an initial planar texture space, which is mapped onto the head model so that the model has preliminary material and texture properties. Changes produced by adjusting the head model include, for example, changes in hairstyle, face shape, and the shape of the facial features; changes produced by adjusting the rendering parameters include the orientation of the head model, the global brightness, the color and size of facial shadows, and the brightness, color, and size of specular regions on the face.
a planar projection module, configured to project the rendered head model onto a specified plane to obtain a planar projection image. Specifically, the relative angle between the camera and the target user's head at capture time is determined from the relative positions of the facial features and the face contour in the face image; this angle is then used as the relative angle between the head model and the specified plane when computing the planar projection image of the head model.
an image matching module, configured to match the planar projection image against the target user's face image. Matching the two images determines their differences, for example differences in hair, face shape, and the shape and position of the facial features, as well as differences in the color, brightness, and contrast of each image region.
a parameter adjustment module, configured to adjust the head model and the rendering parameters according to the matching result. Specifically, based on the differences between the planar projection image and the face image, the module computes adjustments to the head model and rendering parameters that drive the projection image toward the face image, and applies those adjustments.
a loop execution module, configured to repeat the projection, matching, and parameter adjustment steps until the match between the planar projection image and the target user's face image satisfies a preset condition. The coordinates and color of a given image feature in the projection image are affected by many different parameters of the head model and the rendering configuration, and different image features also influence one another, so the unique head model and rendering parameters cannot be recovered directly from the difference between the two images; the head model and rendering parameters must instead be adjusted repeatedly so that the projection image converges toward the face image. Because of the complexity of the shooting environment, fully recovering the head model and rendering parameters so that the projection image exactly matches the face image would consume enormous and unnecessary computational and time resources, so the looped steps can stop once the similarity between the two images is acceptable. The preset condition specifically means that the difference between the coordinate vectors and color vectors of one or more image features in the projection image and the face image is smaller than a preset threshold.
Further, in the virtual character generation system described above, the image matching module includes:
a feature extraction submodule, configured to extract at least one first image feature from the planar projection image and the face image;
a vector conversion submodule, configured to convert the first image features into two sets of coordinate vectors and two sets of color vectors corresponding to the planar projection image and the face image respectively;
a difference calculation submodule, configured to compute the difference between the two sets of coordinate vectors and the difference between the two sets of color vectors.
Further, in the virtual character generation system described above, the first image features include images of the facial features and of the face contour, and the parameter adjustment module includes:
a coordinate prediction submodule, configured to predict, from the difference between the two sets of coordinate vectors, a plurality of vertex coordinates in the head model's rendering parameters that correspond to the first image feature;
a coordinate modification submodule, configured to set the vertex coordinates corresponding to the first image feature to the predicted values.
Further, in the virtual character generation system described above, the parameter acquisition module is also configured to acquire initial texture parameters to generate the initial planar texture space, and the system further includes:
a coordinate association module, configured to establish, via the head model, the coordinate correspondence between the face image and the initial planar texture space;
an image segmentation module, configured to segment the face image into local images of multiple regions according to a preset color-difference threshold;
a pixel mapping module, configured to map the local images of the multiple regions into the planar texture space according to the coordinate mapping.
Further, the virtual character generation system described above also includes:
a feature extraction module, configured to extract at least one second image feature from the face image;
a parameter calculation module, configured to compute ambient light parameters from the second image features;
and the parameter adjustment module is further configured to adjust the rendering parameters of the head model and the texture parameters of the planar texture space according to the ambient light parameters.
The present invention provides a method and system for generating a virtual character: a face image of the target user is acquired; a three-dimensional head model of the virtual character is generated from a head model template; initial rendering parameters are acquired and the head model is rendered; the rendered head model is projected onto a specified plane to obtain a planar projection image; the projection image is matched against the target user's face image; the head model and rendering parameters are adjusted according to the matching result; the adjusted head model is re-rendered with the adjusted rendering parameters; and the projection, matching, adjustment, and re-rendering steps are repeated until the match between the projection image and the face image satisfies a preset condition. This enables the rapid generation of a virtual character resembling the user's real appearance.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Moreover, the terms "comprising", "including", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article, or device that includes it.
The embodiments of the present invention are described above; they do not exhaust all details, nor do they limit the invention to the specific embodiments described. Obviously, many modifications and variations are possible in light of the above description. This specification selects and describes these embodiments in order to better explain the principles and practical application of the invention, so that those skilled in the art can make good use of the invention and of modifications based upon it. The invention is limited only by the claims and their full scope and equivalents.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210492683.1A CN114820894B (en) | 2022-05-07 | 2022-05-07 | A method and system for generating a virtual character |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210492683.1A CN114820894B (en) | 2022-05-07 | 2022-05-07 | A method and system for generating a virtual character |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114820894A true CN114820894A (en) | 2022-07-29 |
| CN114820894B CN114820894B (en) | 2025-05-09 |
Family
ID=82510991
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210492683.1A Active CN114820894B (en) | 2022-05-07 | 2022-05-07 | A method and system for generating a virtual character |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114820894B (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115690285A (en) * | 2022-09-14 | 2023-02-03 | 北京达佳互联信息技术有限公司 | Method, device, equipment and storage medium for determining rendering parameters |
| CN116958447A (en) * | 2023-08-09 | 2023-10-27 | 深圳市固有色数码技术有限公司 | Automatic meta-universe character generation system and method based on Internet of things |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150373318A1 (en) * | 2014-06-23 | 2015-12-24 | Superd Co., Ltd. | Method and apparatus for adjusting stereoscopic image parallax |
| CN107146199A (en) * | 2017-05-02 | 2017-09-08 | 厦门美图之家科技有限公司 | Facial image fusion method, apparatus and computing device |
| CN108140262A (en) * | 2015-12-22 | 2018-06-08 | 谷歌有限责任公司 | Adjusting video rendering rate and stereoscopic image processing for virtual reality content |
| CN113569614A (en) * | 2021-02-23 | 2021-10-29 | 腾讯科技(深圳)有限公司 | Virtual image generation method, apparatus, device and storage medium |
- 2022-05-07: Application CN202210492683.1A filed in CN; granted as CN114820894B (status: Active)
Non-Patent Citations (1)
| Title |
|---|
| SONG Chonggang et al.: "3D Modeling and Optimization Based on 2D Head Images", Journal of Nankai University (Natural Science Edition), vol. 44, no. 06, 20 December 2011 (2011-12-20) * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114820894B (en) | 2025-05-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7526412B2 (en) | Method for training a parameter estimation model, apparatus for training a parameter estimation model, device and storage medium | |
| KR102887595B1 (en) | Methods and systems for forming personalized 3D head and face models | |
| CN102419868B | Device and method for 3D hair modeling based on a 3D hair template | |
| Shi et al. | Automatic acquisition of high-fidelity facial performances using monocular videos | |
| US12159346B2 (en) | Methods and system for generating 3D virtual objects | |
| KR20230097157A (en) | Method and system for personalized 3D head model transformation | |
| CN112669447A | Model avatar creation method and apparatus, electronic device and storage medium | |
| CN103065360B | Hairstyle effect image generation method and system | |
| CN112633191B (en) | Three-dimensional face reconstruction method, device, equipment and storage medium | |
| CN116997933A (en) | Method and system for constructing facial position maps | |
| WO2021140510A2 (en) | Large-scale generation of photorealistic 3d models | |
| JP2024503794A (en) | Method, system and computer program for extracting color from two-dimensional (2D) facial images | |
| CN113628327B (en) | Head three-dimensional reconstruction method and device | |
| KR102353556B1 (en) | Apparatus for Generating Facial expressions and Poses Reappearance Avatar based in User Face | |
| CN111861632B (en) | Virtual makeup test method, device, electronic equipment and readable storage medium | |
| US20230079478A1 (en) | Face mesh deformation with detailed wrinkles | |
| US12020363B2 (en) | Surface texturing from multiple cameras | |
| CN114820894B (en) | A method and system for generating a virtual character | |
| CN120409609A (en) | Model training method, device and storage medium | |
| CN116630575A (en) | Retopology method, device, equipment and storage medium | |
| CN114820942A (en) | Face image processing method and device, electronic equipment and storage medium | |
| CN118115663A (en) | A method, device and electronic device for face reconstruction | |
| JP2023143446A (en) | Operation method of information processing device, information processing device, and program | |
| CN120852611A (en) | A virtual object re-rendering method and related device | |
| HK40091326A (en) | Method and system for personalized 3d head model deformation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| TA01 | Transfer of patent application right | | Effective date of registration: 2025-04-14. Applicant after: Guangzhou Zhonghua Digital Technology Co.,Ltd., No. 501, Building 3 (Library), No. 1 Huashang Road, Lihu Street, Zengcheng District, Guangzhou, Guangdong, China, 511300. Applicant before: Shenzhen solid color digital technology Co.,Ltd., Room 803, Building 5, Meinian International Plaza, Taohuayuan Community, Merchants Street, Nanshan District, Shenzhen, Guangdong, China, 518000. |
| GR01 | Patent grant | ||