CN109584358A - Three-dimensional face reconstruction method and apparatus, device, and storage medium - Google Patents
Three-dimensional face reconstruction method and apparatus, device, and storage medium
- Publication number
- CN109584358A (application number CN201811435701.2A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- information
- rgb
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Geometry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application provide a three-dimensional face reconstruction method and apparatus, a device, and a storage medium. An RGB image and a depth image corresponding to the RGB image are acquired; face detection is performed on the image obtained by aligning the RGB image and the depth image, to determine face information; and the face information is processed to generate a face model.
Description
Technical Field
Embodiments of the present application relate to the field of computer image processing, and relate to, but are not limited to, a three-dimensional face reconstruction method and apparatus, a device, and a storage medium.
Background
In the field of computer vision, three-dimensional reconstruction techniques reconstruct 3D information from a single red-green-blue (RGB) image or from multiple RGB images. For example, multi-view 3D reconstruction first computes the correspondence between the captured images and the real-world coordinate system, and then reconstructs a 3D model that closely resembles reality from the information contained in each image. Today, 3D reconstruction not only plays an important role in interactive stereoscopic games, remote video conferencing, and the like, but also has extremely wide application in the medical field. However, fast and accurate modeling cannot yet be achieved on mobile devices (such as smartphones and tablets) that are readily available to the general public.
Summary of the Invention
In view of this, embodiments of the present application provide a three-dimensional face reconstruction method and apparatus, a device, and a storage medium.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a three-dimensional face reconstruction method, the method including:
acquiring an RGB image and a depth image corresponding to the RGB image;
performing face detection on an image obtained by aligning the RGB image and the depth image, to determine face information;
processing the face information to generate a face model.
In an embodiment of the present application, the face model includes at least one of the following: a face mask and a three-dimensional face model; the face mask contains face structure features, and the three-dimensional face model contains face structure features and face texture information.
In an embodiment of the present application, the acquiring of the RGB image and the depth image corresponding to the RGB image includes:
acquiring a single-frame RGB image with a camera, and acquiring a depth image corresponding to the single-frame RGB image with a depth camera;
or,
the acquiring of the RGB image and the depth image corresponding to the RGB image includes: acquiring two single-frame RGB images with dual cameras;
determining the depth image corresponding to the two single-frame RGB images according to the two single-frame RGB images.
In an embodiment of the present application, the processing of the face information to generate a face model includes:
filtering the face information to obtain filtered face information;
generating a face mask according to spatial distribution information of the filtered face information.
In an embodiment of the present application, the acquiring of the RGB image and the depth image corresponding to the RGB image includes:
acquiring M frames of RGB images, and respectively acquiring M frames of depth images corresponding to the M frames of RGB images, where the M frames of RGB images include images of the same face captured from different angles, and M is an integer greater than 1.
In an embodiment of the present application, the processing of the face information to generate a face model includes:
selecting, from the M frames of depth images according to the face information of the M frames of depth images, N frames of valid depth maps that satisfy a preset condition, where N is an integer greater than 0 and N is less than or equal to M;
fusing the N frames of valid depth maps to obtain a fused image;
obtaining the three-dimensional face model according to the face information of the fused image and a preset standard face model.
In an embodiment of the present application, the obtaining of the three-dimensional face model according to the face information of the fused image and the preset standard face model includes:
adjusting the face information in the preset standard face model according to the face information of the fused image, to obtain a preliminary three-dimensional face model;
determining face texture information corresponding to the fused image;
inputting the face texture information into the preliminary three-dimensional face model to obtain the three-dimensional face model.
In an embodiment of the present application, the performing of face detection on the image obtained by aligning the RGB image and the depth image to determine face information includes:
inputting the image obtained by aligning the pixels of the RGB image with the pixels of the depth image into a first preset deep neural network, to obtain the face information.
In an embodiment of the present application, the performing of face detection on the image obtained by aligning the RGB image and the depth image to determine face information includes:
inputting the image obtained by aligning the RGB image and the depth image into a second preset deep neural network to perform face key point detection, glasses detection, and blur detection, and outputting the face information.
In an embodiment of the present application, after the inputting of the face texture information into the preliminary three-dimensional face model to obtain the three-dimensional face model, the method further includes:
saving the three-dimensional face model;
editing the three-dimensional face model according to an instruction input by a user.
An embodiment of the present application provides a three-dimensional face reconstruction apparatus, the apparatus including: a first acquisition module, a first detection module, and a first generation module, where:
the first acquisition module is configured to acquire an RGB image and a depth image corresponding to the RGB image;
the first detection module is configured to perform face detection on an image obtained by aligning the RGB image and the depth image, to determine face information;
the first generation module is configured to process the face information to generate a face model.
In an embodiment of the present application, the face model includes at least one of the following: a face mask and a three-dimensional face model, where the face mask contains face structure features, and the three-dimensional face model contains face structure features and face texture information.
In an embodiment of the present application, when the RGB image is a single-frame image, the first acquisition module includes:
a first acquisition submodule, configured to acquire a single-frame RGB image with a camera, and acquire a depth image corresponding to the single-frame RGB image with a depth camera;
or,
a second acquisition submodule, configured to acquire two single-frame RGB images with dual cameras; and
a first determination submodule, configured to determine the depth image corresponding to the two single-frame RGB images according to the two single-frame RGB images.
In an embodiment of the present application, the first generation module includes:
a first filtering submodule, configured to filter the face information to obtain filtered face information; and
a first generation submodule, configured to generate a face mask according to spatial distribution information of the filtered face information.
In an embodiment of the present application, when the RGB images are M frames of images, the first acquisition module includes:
a third acquisition submodule, configured to acquire the M frames of RGB images, and respectively acquire M frames of depth images corresponding to the M frames of RGB images, where the M frames of RGB images include images of the same face captured from different angles, and M is an integer greater than 1.
In an embodiment of the present application, the first generation module includes:
a second determination submodule, configured to select, from the M frames of depth images according to the face information of the M frames of depth images, N frames of valid depth maps that satisfy a preset condition, where N is an integer greater than 0 and N is less than or equal to M;
a first fusion submodule, configured to fuse the N frames of valid depth maps to obtain a fused image; and
a third determination submodule, configured to obtain the three-dimensional face model according to the face information of the fused image and a preset standard face model.
In an embodiment of the present application, the third determination submodule includes:
a first matching unit, configured to adjust the face information in the preset standard face model according to the face information of the fused image, to obtain a preliminary three-dimensional face model;
a first determination unit, configured to determine face texture information corresponding to the fused image; and
a first input unit, configured to input the face texture information into the preliminary three-dimensional face model to obtain the three-dimensional face model.
In an embodiment of the present application, the first detection module includes:
a first input submodule, configured to input the image obtained by aligning the pixels of the RGB image with the pixels of the depth image into a first preset deep neural network, to obtain the face information.
In an embodiment of the present application, the first detection module includes:
a first detection submodule, configured to input the image obtained by aligning the RGB image and the depth image into a second preset deep neural network to perform face key point detection, glasses detection, and blur detection, and to output the face information.
In an embodiment of the present application, the first detection module further includes:
a first saving submodule, configured to save the three-dimensional face model; and
a first prompting submodule, configured to edit the three-dimensional face model according to an instruction input by a user.
An embodiment of the present application provides a computer program product, the computer program product including computer-executable instructions which, when executed, can implement the steps in the three-dimensional face reconstruction method provided by the embodiments of the present application.
An embodiment of the present application provides a computer device, the computer device including a memory and a processor, where the memory stores computer-executable instructions, and the processor, when running the computer-executable instructions on the memory, can implement the steps in the three-dimensional face reconstruction method provided by the embodiments of the present application.
Embodiments of the present application provide a three-dimensional face reconstruction method and apparatus, a device, and a storage medium. First, an RGB image and a depth image corresponding to the RGB image are acquired; then, face detection is performed on the image obtained by aligning the RGB image and the depth image, to determine face information; finally, the face information is processed to generate a face model. In this way, a three-dimensional face model carrying semantic information can be reconstructed quickly and accurately from the RGB image and the depth image, so that three-dimensional face modeling can be applied on common, easily available mobile devices.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of the specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the technical solutions of the present application.
FIG. 1A is a schematic structural diagram of a network architecture according to an embodiment of the present application;
FIG. 1B is a schematic flowchart of an implementation of a three-dimensional face reconstruction method according to an embodiment of the present application;
FIG. 2A is a schematic flowchart of an implementation of generating a face mask by using the three-dimensional face reconstruction method according to an embodiment of the present application;
FIG. 2B is a schematic diagram of the effect of a face mask generated by an embodiment of the present application;
FIG. 3A is a schematic flowchart of an implementation of generating a 3D face model by using the three-dimensional face reconstruction method according to an embodiment of the present application;
FIG. 3B is a schematic diagram of the effect of a 3D face model generated by an embodiment of the present application;
FIG. 4 is a schematic flowchart of an implementation of quickly generating a 3D face mask by using the three-dimensional face reconstruction method provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of an implementation of quickly generating a 3D face model by using the three-dimensional face reconstruction method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of the composition of a three-dimensional face reconstruction apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of the composition of a computer device according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the invention are described in further detail below with reference to the accompanying drawings of the embodiments of the present application. The following embodiments are used to illustrate the present application, but are not intended to limit its scope.
This embodiment first provides a network architecture. FIG. 1A is a schematic structural diagram of the network architecture according to an embodiment of the present application. As shown in FIG. 1A, the network architecture includes two or more computer devices 11 to 1N and a server 31, where the computer devices 11 to 1N interact with the server 31 through a network 21. In implementation, the computer devices may be various types of computer devices with information processing capabilities; for example, the computer devices may include mobile phones, tablet computers, desktop computers, personal digital assistants, navigators, digital telephones, televisions, and so on.
This embodiment provides a three-dimensional face reconstruction method, which can effectively solve the problem that, during three-dimensional face reconstruction, fast and accurate modeling cannot be achieved on common, easily available devices. The method is applied to a computer device, and the functions implemented by the method can be realized by a processor in the computer device calling program code; of course, the program code can be stored in a computer storage medium. It can thus be seen that the computer device includes at least a processor and a storage medium.
FIG. 1B is a schematic flowchart of an implementation of the three-dimensional face reconstruction method according to an embodiment of the present application. As shown in FIG. 1B, the method includes the following steps:
Step S101: acquire an RGB image and a depth image corresponding to the RGB image.
Here, if the user wishes to generate a face mask, only a single-frame RGB image and a single-frame depth image need to be acquired; alternatively, dual cameras may be used to acquire two single-frame RGB images, and the corresponding depth image is then estimated from these two single-frame RGB images. If the user wishes to generate a three-dimensional (3D) face model, multiple frames of RGB images and multiple frames of depth images are acquired.
Step S102: perform face detection on the image obtained by aligning the RGB image and the depth image, to determine face information.
Here, if the user wishes to generate a face mask, only face key point detection is required, because the face mask carries no facial skin color or skin texture; it is essentially a blank model containing only face structure features. Therefore, during detection, only the face key points need to be detected, for example, only the positions of the nose, eyes, mouth, and so on. If the user wishes to generate a 3D face model, not only face key point detection but also glasses detection, blur detection, and the like are required, so that the resulting 3D face model is closer to the real face.
Step S103: process the face information to generate a face model.
Here, the face information is processed by using preset processing rules to generate a face model corresponding to those processing rules. The face model includes at least one of the following: a face mask and a three-dimensional face model, where the face mask contains face structure features, and the three-dimensional face model contains face structure features and face texture information. The preset processing rules can be understood as the execution steps for generating a face mask and the execution steps for generating a 3D face model.
In the three-dimensional face reconstruction method provided in this embodiment, by aligning the RGB image and the depth image and then performing detection, a face mask or a 3D face model can be generated quickly and accurately, so that the 3D face model and the face mask can be used on easily available electronic devices, for example, in mobile phone applications.
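The patent does not specify how the RGB image and the depth image are aligned. Purely as an illustration, the sketch below registers a depth map into the RGB camera frame using assumed camera intrinsics and an assumed depth-to-RGB extrinsic transform; all parameter names and the alignment strategy here are hypothetical, not taken from this application.

```python
import numpy as np

def align_depth_to_rgb(depth, K_d, K_rgb, T_d2rgb, rgb_shape):
    """Reproject a depth map (meters) from the depth camera into the RGB camera frame.

    depth:     HxW depth map from the depth sensor
    K_d:       3x3 intrinsics of the depth camera
    K_rgb:     3x3 intrinsics of the RGB camera
    T_d2rgb:   4x4 rigid transform from depth-camera to RGB-camera coordinates
    rgb_shape: (H_rgb, W_rgb) of the RGB image
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    valid = z > 0

    # Back-project every depth pixel to a 3D point in the depth-camera frame.
    pix = np.stack([us.reshape(-1), vs.reshape(-1), np.ones(h * w)], axis=0)
    pts_d = np.linalg.inv(K_d) @ pix * z

    # Move the points into the RGB-camera frame.
    pts_h = np.vstack([pts_d, np.ones((1, h * w))])
    pts_rgb = (T_d2rgb @ pts_h)[:3]

    # Project into the RGB image and keep the nearest depth per pixel.
    proj = K_rgb @ pts_rgb
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    aligned = np.zeros(rgb_shape)
    inside = valid & (u >= 0) & (u < rgb_shape[1]) & (v >= 0) & (v < rgb_shape[0]) & (pts_rgb[2] > 0)
    for ui, vi, zi in zip(u[inside], v[inside], pts_rgb[2][inside]):
        if aligned[vi, ui] == 0 or zi < aligned[vi, ui]:
            aligned[vi, ui] = zi
    return aligned  # depth map resampled onto the RGB pixel grid
```

The aligned depth map and the RGB image then share the same pixel grid, which is the precondition the patent assumes for the detection step.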
An embodiment of the present application provides a three-dimensional face reconstruction method. FIG. 2A is a schematic flowchart of an implementation of generating a face mask by using the three-dimensional face reconstruction method according to an embodiment of the present application. As shown in FIG. 2A, the method includes the following steps:
In this embodiment of the present application, if a face mask is to be generated, this can be achieved through the following steps:
Step S201: acquire two single-frame RGB images with dual cameras.
Step S202: determine the depth image corresponding to the two single-frame RGB images according to the two single-frame RGB images.
Here, step S202 shows that, when generating a face mask in this embodiment of the present application, only single-frame RGB input is needed; neither multi-frame information nor a depth map output directly by dedicated three-dimensional sensing hardware is required, and the face mask can be generated using only the depth image information estimated by the dual cameras from the two RGB images. Alternatively, the acquisition of the single-frame RGB image and the depth image may be: acquiring the single-frame RGB image with a camera, and activating a three-dimensional sensing function of the camera to acquire the depth image corresponding to the single-frame RGB image. Activating the three-dimensional sensing function of the camera to acquire the depth image corresponding to the single-frame RGB image may be activating a structured-light sensor of the camera to acquire the depth image corresponding to the single-frame RGB image, or acquiring the depth image corresponding to the single-frame RGB image by using time-of-flight (ToF) sensing technology.
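The application does not fix a particular stereo-matching algorithm for estimating depth from the two RGB images. As a hedged illustration only, the following sketch uses OpenCV's semi-global block matching on a rectified stereo pair and converts disparity to depth; the matcher parameters, focal length, and baseline are placeholder values, not values from this application.

```python
import cv2
import numpy as np

# Assumed inputs: a rectified grayscale stereo pair from the dual cameras.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are illustrative only.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Convert disparity to metric depth: depth = focal_length * baseline / disparity.
focal_px = 1000.0    # hypothetical focal length in pixels
baseline_m = 0.012   # hypothetical baseline between the two cameras, in meters
depth = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)
```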
Step S203: input the image obtained by aligning the pixels of the single-frame RGB image with the pixels of the depth image into the first preset deep neural network, to obtain the face information.
Here, the image obtained by aligning the single-frame RGB image and the depth image is input into the deep neural network for face key point detection, and the detection result is also evaluated; if the detection result does not satisfy a preset condition (for example, the key points of the face are blurred or occluded), prompt information is issued to the user.
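The structure of the first preset deep neural network is not disclosed in this application. Purely to illustrate how an aligned RGB-D input might be fed to a key point network, the sketch below stacks the RGB channels with the aligned depth channel into a four-channel tensor and runs a placeholder PyTorch model; `KeypointNet`, the key point count, and the confidence threshold are all hypothetical.

```python
import numpy as np
import torch
import torch.nn as nn

class KeypointNet(nn.Module):
    """Hypothetical stand-in for the 'first preset deep neural network'."""
    def __init__(self, num_keypoints=68):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_keypoints * 3)  # (x, y, confidence) per key point

    def forward(self, x):
        feat = self.backbone(x).flatten(1)
        return self.head(feat).view(-1, self.num_keypoints, 3)

def detect_keypoints(rgb, aligned_depth, model, conf_thresh=0.5):
    """rgb: HxWx3 uint8; aligned_depth: HxW float; returns key points or None."""
    depth_norm = aligned_depth / (aligned_depth.max() + 1e-6)
    rgbd = np.concatenate([rgb.astype(np.float32) / 255.0,
                           depth_norm[..., None]], axis=-1)
    tensor = torch.from_numpy(rgbd).permute(2, 0, 1).unsqueeze(0).float()
    with torch.no_grad():
        kpts = model(tensor)[0]               # (num_keypoints, 3)
    if kpts[:, 2].mean() < conf_thresh:
        return None                           # unreliable -> caller prompts the user
    return kpts[:, :2]

# Usage sketch: keypoints = detect_keypoints(rgb, aligned, KeypointNet().eval())
```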
Step S204: filter the face information to obtain filtered face information.
Here, the information in the face information that does not satisfy the preset condition is filtered out, and only information suitable for generating the face mask is retained; for example, blurred points are filtered out, and only clear, non-duplicate points are retained.
Step S205: generate a face mask according to the spatial distribution information of the filtered face information.
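The patent describes neither the concrete filtering rule nor how the mask surface is built from the spatial distribution of the points. The sketch below is one plausible interpretation under those stated assumptions: points with invalid depth or low confidence are dropped, near-duplicates are removed on a coarse voxel grid, and a mask mesh is obtained by Delaunay-triangulating the remaining points over the image plane.

```python
import numpy as np
from scipy.spatial import Delaunay

def filter_face_points(points, confidences, conf_thresh=0.5, voxel=0.002):
    """points: Nx3 (x, y, z in meters); confidences: N. Keep clear, non-duplicate points."""
    keep = (confidences >= conf_thresh) & (points[:, 2] > 0)
    pts = points[keep]
    # Deduplicate by snapping to a coarse voxel grid and keeping one point per cell.
    _, unique_idx = np.unique(np.round(pts / voxel).astype(int), axis=0, return_index=True)
    return pts[np.sort(unique_idx)]

def build_face_mask(filtered_points):
    """Triangulate the filtered points over the (x, y) plane to get a mask mesh."""
    tri = Delaunay(filtered_points[:, :2])
    return filtered_points, tri.simplices  # vertices and triangle indices

# Usage sketch (with hypothetical data):
# verts, faces = build_face_mask(filter_face_points(points, confs))
```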
In this embodiment of the present application, only a single-frame RGB image is needed to generate a face mask quickly and accurately, and the face mask can be used on common electronic devices, for example, in the camera of a mobile phone. As shown in FIG. 2B, if the user wants to capture a face mask, the user only needs to turn on the camera to capture the current face picture, or select a photo 21 from the album, and input it into the module that generates the face mask to obtain a photo 22 with a face mask attached. Alternatively, when shooting a video, if the user selects the camera's current mode as the face mask generation mode, the face mask is attached to the user's face in the picture and changes as the picture changes; for example, if the user opens the mouth wide, the face mask on the picture also opens its mouth wide, which makes the experience more entertaining for the user.
An embodiment of the present application provides a three-dimensional face reconstruction method. FIG. 3A is a schematic flowchart of an implementation of generating a 3D face model by using the three-dimensional face reconstruction method according to an embodiment of the present application. As shown in FIG. 3A, the method includes the following steps:
In this embodiment of the present application, if a 3D face model is to be generated, this can be achieved through the following steps:
Step S301: acquire the M frames of RGB images with a camera, and acquire the depth images corresponding to the frames of RGB images with a depth camera.
Here, a depth camera is a camera that can emit probing signals, such as infrared light or laser, to measure the distance of the scene and thereby obtain a depth image. The multiple frames of RGB images are images of the face from different angles, and M is an integer greater than 1; for example, images with the face turned 15 degrees to the left, 30 degrees to the left, and so on.
Step S302: perform face key point detection, glasses detection, and blur detection on the image obtained by aligning the pixels of the RGB image with the pixels of the depth image at preset time intervals, to determine the face information.
Here, step S302 can be understood as performing face key point detection, glasses detection, and blur detection once on the aligned RGB and depth image every few frames, or at each predetermined time interval.
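The blur-detection criterion is not specified in the application. One common heuristic, shown below purely as an assumption, is to threshold the variance of the Laplacian of the face region and to run the checks only every few frames, as the text describes; the interval and threshold values are illustrative.

```python
import cv2

CHECK_INTERVAL = 5       # run the checks every 5 frames (illustrative value)
BLUR_THRESHOLD = 100.0   # Laplacian-variance threshold (illustrative value)

def is_blurry(gray_face):
    """Variance of the Laplacian: low values indicate a blurred image."""
    return cv2.Laplacian(gray_face, cv2.CV_64F).var() < BLUR_THRESHOLD

def check_frames(frames):
    """frames: list of BGR face crops; yields (index, ok) for the checked frames."""
    for idx, frame in enumerate(frames):
        if idx % CHECK_INTERVAL != 0:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        yield idx, not is_blurry(gray)
```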
Step S303: if the face information does not satisfy the preset condition, issue prompt information.
Here, if the face information does not satisfy the preset condition, prompt information is issued to the user. For example, if the user wears glasses and reflections make the picture unclear, or the user is too close to the camera and the picture is blurred, a prompt is issued asking the user to remove the glasses or move further away, or the face capture is terminated.
Step S304: select, from the M frames of depth images according to the face information of the M frames of depth images, N frames of valid depth maps that satisfy the preset condition.
Here, N frames of valid depth maps with clear patterns and rich content are selected from the M frames of depth images.
Step S305: fuse the N frames of valid depth maps to obtain a fused image.
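Neither the selection score nor the fusion operator is spelled out in the patent. As a rough sketch under assumed definitions, the code below scores each depth frame by its fraction of valid pixels, keeps the top N frames, and fuses them by averaging the valid measurements per pixel; the frames are assumed to already be registered to a common view.

```python
import numpy as np

def select_valid_depth_maps(depth_maps, n):
    """depth_maps: list of HxW arrays. Keep the N frames with the most valid pixels."""
    coverage = [float(np.count_nonzero(d > 0)) / d.size for d in depth_maps]
    order = np.argsort(coverage)[::-1]
    return [depth_maps[i] for i in order[:n]]

def fuse_depth_maps(valid_maps):
    """Average the valid (non-zero) depth values at each pixel across the N frames."""
    stack = np.stack(valid_maps, axis=0)
    valid = stack > 0
    counts = valid.sum(axis=0)
    return np.where(counts > 0, stack.sum(axis=0) / np.maximum(counts, 1), 0.0)

# Usage sketch: fused = fuse_depth_maps(select_valid_depth_maps(depth_maps, n=8))
```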
Step S306: adjust the face information in the preset standard face model according to the face information of the fused image, to obtain a preliminary three-dimensional face model.
Here, the preset standard face model is set up in advance and consists of many key points with complete point position information; for example, there may be 240 points. These points are fused with the point cloud contained in the detected face information, and once these 240 points are located in the finally generated 3D face model, one can, for instance, select points 1 to 10 and move them forward or backward, so that the resulting 3D face model is visualizable and editable and can therefore be easily applied in, for example, the camera application of a mobile phone.
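How the standard model's key points are adjusted to the fused point cloud is left open by the patent. A minimal illustration, assuming the standard-model key points are already paired with their measured counterparts, is a rigid Kabsch alignment of the standard model onto the measurements followed by blending the model points toward the measured positions; the blend factor is an arbitrary placeholder.

```python
import numpy as np

def kabsch_align(model_pts, target_pts):
    """Rotation R and translation t that best map model_pts onto target_pts (both Kx3, paired)."""
    mu_m, mu_t = model_pts.mean(axis=0), target_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (target_pts - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_m
    return R, t

def adjust_standard_model(standard_keypoints, measured_keypoints):
    """Rigidly align the preset standard model's key points (e.g. 240 of them) to the
    measured key points from the fused image, then pull them toward the measurements."""
    R, t = kabsch_align(standard_keypoints, measured_keypoints)
    aligned = standard_keypoints @ R.T + t
    return 0.5 * (aligned + measured_keypoints)  # simple blend; purely illustrative
```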
Step S307: determine the face texture information corresponding to the fused image.
Step S308: input the face texture information into the preliminary three-dimensional face model to obtain the three-dimensional face model.
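The patent does not state how the texture is attached to the preliminary model. As one simple, assumed scheme, the sketch below projects every model vertex into the RGB image with an assumed intrinsic matrix and samples a per-vertex color; a production system would more likely bake a UV texture map.

```python
import numpy as np

def sample_vertex_colors(vertices, rgb_image, K):
    """vertices: Vx3 points in the RGB camera frame; rgb_image: HxWx3 uint8; K: 3x3 intrinsics.
    Returns a Vx3 array of per-vertex colors (black where the projection falls outside)."""
    h, w = rgb_image.shape[:2]
    proj = (K @ vertices.T).T
    u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
    v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
    colors = np.zeros((vertices.shape[0], 3), dtype=np.uint8)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (proj[:, 2] > 0)
    colors[inside] = rgb_image[v[inside], u[inside]]
    return colors
```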
Step S309: save the three-dimensional face model.
Step S310: edit the three-dimensional face model according to the instruction input by the user.
Here, after the three-dimensional face model is obtained, the model can be saved, and if the user wishes to make small local modifications to it, the user can input a modification instruction for the corresponding region; the three-dimensional face model is then modified according to that instruction.
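What such an editing instruction looks like is not defined in the application. The toy sketch below assumes an instruction that names a semantic vertex group (for example "nose") and an offset, and applies the offset to those vertices of the saved model; the group-to-index mapping is hypothetical.

```python
import numpy as np

# Hypothetical mapping from semantic parts to vertex indices of the standard model.
PART_VERTICES = {"nose": list(range(27, 36)), "mouth": list(range(48, 68))}

def edit_face_model(vertices, instruction):
    """vertices: Vx3 model vertices; instruction: {'part': str, 'offset': (dx, dy, dz)}."""
    edited = vertices.copy()
    idx = PART_VERTICES[instruction["part"]]
    edited[idx] += np.asarray(instruction["offset"], dtype=float)
    return edited

# Usage sketch: pushed_nose = edit_face_model(verts, {"part": "nose", "offset": (0, 0, 0.003)})
```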
In this embodiment of the present application, multiple frames of RGB images and depth images are used, and the image obtained by fusing the multiple frames is mapped onto the preset standard face model, so that the resulting 3D face model is editable; moreover, because the face is modeled in this way, skins can be swapped between models built from different people. As shown in FIG. 3B, in region 31 various parameters can be adjusted so that the resulting 3D model can be re-edited according to the user's needs, and the resulting 3D face model 32 is highly accurate.
Three-dimensional reconstruction converts real-world three-dimensional objects into mathematical models suitable for computer representation and processing through steps such as data acquisition, feature extraction, and signal analysis; it is the key technology for constructing reality in the virtual space of a computer, and it makes a digital representation of the objective world possible. However, in the related art, a given face modeling technique supports only one of face mask generation and 3D face model generation, and algorithms that support both are rare. The 3D face modeling techniques in the related art require a large amount of computation to obtain image texture features and the like, so model generation is slow. At the same time, when the face moves substantially across multiple frames, the reconstruction result is unstable and the model accuracy is low. The static geometric reconstruction approach outputs only a monolithic face structure model without the semantic information of facial features, and cannot distinguish the different parts of the face. Therefore, the user cannot directly obtain the position information of parts such as the eyes, nose, and mouth from the model. It can be seen from the above that the 3D face modeling techniques in the related art do not support subsequent fine-tuning or manual editing of the model, and their application is relatively limited.
An embodiment of the present application provides a three-dimensional face reconstruction method, which covers two cases: one is generating a face mask, and the other is generating a 3D face model.
The first case, generating a face mask: this embodiment of the present application is developed on the basis of a deep learning framework, quickly generates a mask with a face structure from a single-frame RGB image input, and provides a real-time preview according to actual needs. FIG. 4 is a schematic flowchart of an implementation of quickly generating a 3D face mask by using the three-dimensional face reconstruction method provided by this embodiment of the present application, including the following steps:
Step S401: acquire the image information obtained by aligning the (single-frame) RGB image with the depth map, and perform preprocessing.
Step S402: pass the aligned image to be detected into the deep neural network for face key point detection.
Step S403: output the corresponding 3D face mask.
In this embodiment of the present application, first, a single-frame RGB image can be acquired as the data to be detected by shooting with a live camera or by passing in an existing image file. Then, for the single-frame RGB image, a face mask with a face structure can be generated quickly, and a real-time preview is provided according to actual needs.
The second case, generating a 3D face model: FIG. 5 is a schematic flowchart of an implementation of quickly generating a 3D face model by using the three-dimensional face reconstruction method provided by this embodiment of the present application, including the following steps:
Step S501: acquire the image information obtained by aligning the RGB image with the depth map, and perform preprocessing.
Step S502: pass the aligned image to be detected into the deep neural network for face key point detection, glasses detection, and blur detection.
Here, in step S502, detection is performed once every few frames (this can be adjusted according to specific timing requirements); if the detection result does not satisfy the preset condition, the user is prompted and the face capture procedure is terminated. For face key point detection, the key points can be mapped onto the 3D face model, so that the resulting 3D face model carries the semantic information of the facial features; that is, the positions of parts on the face model, such as the eyes, nose, and mouth, can be located.
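The patent does not publish the key point indexing scheme that gives the model its semantic labels. As an assumed example only, the sketch below uses a hypothetical mapping from landmark index ranges to part names, which makes it possible to query the 3D positions of the eyes, nose, or mouth directly on the reconstructed model and, for instance, measure distances between them.

```python
import numpy as np

# Hypothetical index ranges over the model's key points; real layouts differ per model.
SEMANTIC_PARTS = {
    "left_eye": range(36, 42),
    "right_eye": range(42, 48),
    "nose": range(27, 36),
    "mouth": range(48, 68),
}

def part_center(model_keypoints, part):
    """Return the 3D centroid of a named facial part on the reconstructed model."""
    idx = list(SEMANTIC_PARTS[part])
    return model_keypoints[idx].mean(axis=0)

def interocular_distance(model_keypoints):
    """Example measurement enabled by the semantic labels: distance between the eye centers."""
    return float(np.linalg.norm(part_center(model_keypoints, "left_eye")
                                - part_center(model_keypoints, "right_eye")))
```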
Step S503: fuse the valid depth maps over multiple frames, map the result onto the standard face model, and fit the face texture.
Here, a standard face model is used in step S503, so that the final 3D face model not only allows the user to obtain facial feature information from the 3D model, such as key point positions and measured distances, but also supports visual editing of the reconstructed 3D model by the user.
Step S504: output the 3D face model.
In generating the 3D face model in this embodiment of the present application, the strong descriptive power of the deep neural network is used for modeling; the RGB images and depth maps are put into correspondence to generate a 3D model close to a real portrait, containing both the 3D face structure and the face texture.
In this embodiment of the present application, multiple frames of RGB images and depth images can be acquired as the data to be detected by shooting with a live camera or by passing in existing image files. For the multi-frame fused image, a 3D face model with a face structure can be generated quickly according to the description of FIG. 5; multi-angle rotation preview is supported, and specific parts can be visually edited according to actual needs. In the three-dimensional face reconstruction method provided by the embodiments of the present application, by fusing multiple frames of images and mapping the result onto the preset standard face model, a 3D face structure mask or a 3D face model can be output and used according to the actual scenario, improving integration and update efficiency. Moreover, the deep neural network in the embodiments of the present application can be located in a terminal device to meet the face modeling requirements of mobile applications, or in a server to meet face modeling requirements involving larger amounts of information. The 3D face modeling deep neural network obtained in the embodiments of the present application can be iteratively updated in time and can be quickly optimized for newly emerging user scenarios.
An embodiment of the present invention provides a three-dimensional face reconstruction apparatus. FIG. 6 is a schematic structural diagram of the composition of the three-dimensional face reconstruction apparatus according to an embodiment of the present application. As shown in FIG. 6, the three-dimensional face reconstruction apparatus 600 includes: a first acquisition module 601, a first detection module 602, and a first generation module 603, where:
the first acquisition module 601 is configured to acquire an RGB image and a depth image corresponding to the RGB image;
the first detection module 602 is configured to perform face detection on an image obtained by aligning the RGB image and the depth image, to determine face information;
the first generation module 603 is configured to process the face information to generate a face model.
In an embodiment of the present application, the face model includes at least one of the following: a face mask and a three-dimensional face model, where the face mask contains face structure features, and the three-dimensional face model contains face structure features and face texture information.
In an embodiment of the present application, when the RGB image is a single-frame image, the first acquisition module 601 includes:
a first acquisition submodule, configured to acquire a single-frame RGB image with a camera, and acquire a depth image corresponding to the single-frame RGB image with a depth camera;
or,
a second acquisition submodule, configured to acquire two single-frame RGB images with dual cameras; and
a first determination submodule, configured to determine the depth image corresponding to the two single-frame RGB images according to the two single-frame RGB images.
In an embodiment of the present application, the first generation module 603 includes:
a first filtering submodule, configured to filter the face information to obtain filtered face information; and
a first generation submodule, configured to generate a face mask according to spatial distribution information of the filtered face information.
In an embodiment of the present application, when the RGB images are M frames of images, the first acquisition module 601 includes:
a third acquisition submodule, configured to acquire the M frames of RGB images, and respectively acquire M frames of depth images corresponding to the M frames of RGB images, where the M frames of RGB images include images of the same face captured from different angles, and M is an integer greater than 1.
In an embodiment of the present application, the first generation module 603 includes:
a second determination submodule, configured to select, from the M frames of depth images according to the face information of the M frames of depth images, N frames of valid depth maps that satisfy a preset condition, where N is an integer greater than 0 and N is less than or equal to M;
a first fusion submodule, configured to fuse the N frames of valid depth maps to obtain a fused image; and
a third determination submodule, configured to obtain the three-dimensional face model according to the face information of the fused image and a preset standard face model.
In an embodiment of the present application, the third determination submodule includes:
a first matching unit, configured to adjust the face information in the preset standard face model according to the face information of the fused image, to obtain a preliminary three-dimensional face model;
a first determination unit, configured to determine face texture information corresponding to the fused image; and
a first input unit, configured to input the face texture information into the preliminary three-dimensional face model to obtain the three-dimensional face model.
In an embodiment of the present application, the first detection module 602 includes:
a first input submodule, configured to input the image obtained by aligning the pixels of the RGB image with the pixels of the depth image into a first preset deep neural network, to obtain the face information.
In an embodiment of the present application, the first detection module 602 includes:
a first detection submodule, configured to input the image obtained by aligning the RGB image and the depth image into a second preset deep neural network to perform face key point detection, glasses detection, and blur detection, and to output the face information.
In an embodiment of the present application, the first detection module 602 further includes:
a first saving submodule, configured to save the three-dimensional face model; and
a first prompting submodule, configured to edit the three-dimensional face model according to an instruction input by a user.
It should be noted that the descriptions of the above apparatus embodiments are similar to the descriptions of the above method embodiments and have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, please refer to the descriptions of the method embodiments of the present application.
It should be noted that, in the embodiments of the present application, if the above three-dimensional face reconstruction method is implemented in the form of software functional modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a terminal, a server, or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application further provides a computer program product, the computer program product including computer-executable instructions which, when executed, can implement the steps in the three-dimensional face reconstruction method provided by the embodiments of the present application.
Correspondingly, an embodiment of the present application further provides a computer storage medium, where computer-executable instructions are stored on the computer storage medium, and when the computer-executable instructions are executed by a processor, the steps of the three-dimensional face reconstruction method provided by the above embodiments are implemented.
Correspondingly, an embodiment of the present application provides a computer device. FIG. 7 is a schematic structural diagram of the composition of the computer device according to an embodiment of the present application. As shown in FIG. 7, the hardware entity of the computer device 700 includes: a processor 701, a communication interface 702, and a memory 703, where
the processor 701 generally controls the overall operation of the computer device 700;
the communication interface 702 enables the computer device to communicate with other terminals or servers through a network; and
the memory 703 is configured to store instructions and applications executable by the processor 701, and can also cache data to be processed or already processed by the processor 701 and by the modules in the computer device 700 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented by flash memory (FLASH) or random access memory (RAM).
The descriptions of the above computer device and storage medium embodiments are similar to the descriptions of the above method embodiments and have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the computer device and storage medium embodiments of the present application, please refer to the descriptions of the method embodiments of the present application.
It should be understood that reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The above serial numbers of the embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。It should be noted that, herein, the terms "comprising", "comprising" or any other variation thereof are intended to encompass non-exclusive inclusion, such that a process, method, article or device comprising a series of elements includes not only those elements, It also includes other elements not expressly listed or inherent to such a process, method, article or apparatus. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in a process, method, article or apparatus that includes the element.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other manners. The device embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function; in actual implementation there may be other ways of division, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a unit separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or some of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, or the like) to perform all or some of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811435701.2A CN109584358A (en) | 2018-11-28 | 2018-11-28 | A kind of three-dimensional facial reconstruction method and device, equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811435701.2A CN109584358A (en) | 2018-11-28 | 2018-11-28 | A kind of three-dimensional facial reconstruction method and device, equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109584358A true CN109584358A (en) | 2019-04-05 |
Family
ID=65925325
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811435701.2A Pending CN109584358A (en) | 2018-11-28 | 2018-11-28 | A kind of three-dimensional facial reconstruction method and device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109584358A (en) |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110874851A (en) * | 2019-10-25 | 2020-03-10 | 深圳奥比中光科技有限公司 | Method, device, system and readable storage medium for reconstructing three-dimensional model of human body |
| CN111028330A (en) * | 2019-11-15 | 2020-04-17 | 腾讯科技(深圳)有限公司 | Three-dimensional expression base generation method, device, equipment and storage medium |
| CN111597928A (en) * | 2020-04-29 | 2020-08-28 | 深圳市商汤智能传感科技有限公司 | Three-dimensional model processing method and device, electronic device and storage medium |
| CN112001859A (en) * | 2020-08-10 | 2020-11-27 | 深思考人工智能科技(上海)有限公司 | Method and system for repairing face image |
| CN112509117A (en) * | 2020-11-30 | 2021-03-16 | 清华大学 | Hand three-dimensional model reconstruction method and device, electronic equipment and storage medium |
| CN112508811A (en) * | 2020-11-30 | 2021-03-16 | 北京百度网讯科技有限公司 | Image preprocessing method, device, equipment and storage medium |
| CN112581598A (en) * | 2020-12-04 | 2021-03-30 | 深圳市慧鲤科技有限公司 | Three-dimensional model construction method, device, equipment and storage medium |
| CN112584079A (en) * | 2019-09-30 | 2021-03-30 | 华为技术有限公司 | Video call face presentation method, video call device and automobile |
| CN112836545A (en) * | 2019-11-22 | 2021-05-25 | 北京新氧科技有限公司 | A 3D face information processing method, device and terminal |
| CN113343925A (en) * | 2021-07-02 | 2021-09-03 | 厦门美图之家科技有限公司 | Face three-dimensional reconstruction method and device, electronic equipment and storage medium |
| CN113673287A (en) * | 2020-05-15 | 2021-11-19 | 深圳市光鉴科技有限公司 | Depth reconstruction method, system, device and medium based on target time node |
| CN113743191A (en) * | 2021-07-16 | 2021-12-03 | 深圳云天励飞技术股份有限公司 | Face image alignment detection method and device, electronic equipment and storage medium |
- 2018
  - 2018-11-28 CN CN201811435701.2A patent/CN109584358A/en active Pending
Patent Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050111705A1 (en) * | 2003-08-26 | 2005-05-26 | Roman Waupotitsch | Passive stereo sensing for 3D facial shape biometrics |
| CN102034079A (en) * | 2009-09-24 | 2011-04-27 | 汉王科技股份有限公司 | Method and system for identifying faces shaded by eyeglasses |
| CN104966316A (en) * | 2015-05-22 | 2015-10-07 | 腾讯科技(深圳)有限公司 | 3D face reconstruction method, apparatus and server |
| CN105243357A (en) * | 2015-09-15 | 2016-01-13 | 深圳市环阳通信息技术有限公司 | Identity document-based face recognition method and face recognition device |
| CN108229269A (en) * | 2016-12-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Method for detecting human face, device and electronic equipment |
| CN107133590A (en) * | 2017-05-04 | 2017-09-05 | 上海博历机械科技有限公司 | A kind of identification system based on facial image |
| CN107358648A (en) * | 2017-07-17 | 2017-11-17 | 中国科学技术大学 | Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image |
| CN107705355A (en) * | 2017-09-08 | 2018-02-16 | 郭睿 | A kind of 3D human body modeling methods and device based on plurality of pictures |
| CN108875489A (en) * | 2017-09-30 | 2018-11-23 | 北京旷视科技有限公司 | Method for detecting human face, device, system, storage medium and capture machine |
| CN108764180A (en) * | 2018-05-31 | 2018-11-06 | Oppo广东移动通信有限公司 | Face recognition method and device, electronic equipment and readable storage medium |
| CN108830892A (en) * | 2018-06-13 | 2018-11-16 | 北京微播视界科技有限公司 | Face image processing process, device, electronic equipment and computer readable storage medium |
Cited By (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112584079A (en) * | 2019-09-30 | 2021-03-30 | 华为技术有限公司 | Video call face presentation method, video call device and automobile |
| US12192586B2 (en) | 2019-09-30 | 2025-01-07 | Huawei Technologies Co., Ltd. | Method for presenting face in video call, video call apparatus, and vehicle |
| CN110874851A (en) * | 2019-10-25 | 2020-03-10 | 深圳奥比中光科技有限公司 | Method, device, system and readable storage medium for reconstructing three-dimensional model of human body |
| CN111028330A (en) * | 2019-11-15 | 2020-04-17 | 腾讯科技(深圳)有限公司 | Three-dimensional expression base generation method, device, equipment and storage medium |
| CN111028330B (en) * | 2019-11-15 | 2023-04-07 | 腾讯科技(深圳)有限公司 | Three-dimensional expression base generation method, device, equipment and storage medium |
| CN112836545A (en) * | 2019-11-22 | 2021-05-25 | 北京新氧科技有限公司 | A 3D face information processing method, device and terminal |
| CN111597928A (en) * | 2020-04-29 | 2020-08-28 | 深圳市商汤智能传感科技有限公司 | Three-dimensional model processing method and device, electronic device and storage medium |
| CN113673287A (en) * | 2020-05-15 | 2021-11-19 | 深圳市光鉴科技有限公司 | Depth reconstruction method, system, device and medium based on target time node |
| CN113673287B (en) * | 2020-05-15 | 2023-09-12 | 深圳市光鉴科技有限公司 | Depth reconstruction method, system, equipment and medium based on target time node |
| CN112001859A (en) * | 2020-08-10 | 2020-11-27 | 深思考人工智能科技(上海)有限公司 | Method and system for repairing face image |
| CN112001859B (en) * | 2020-08-10 | 2024-04-16 | 深思考人工智能科技(上海)有限公司 | Face image restoration method and system |
| CN112508811A (en) * | 2020-11-30 | 2021-03-16 | 北京百度网讯科技有限公司 | Image preprocessing method, device, equipment and storage medium |
| CN112509117A (en) * | 2020-11-30 | 2021-03-16 | 清华大学 | Hand three-dimensional model reconstruction method and device, electronic equipment and storage medium |
| CN112581598A (en) * | 2020-12-04 | 2021-03-30 | 深圳市慧鲤科技有限公司 | Three-dimensional model construction method, device, equipment and storage medium |
| CN113343925A (en) * | 2021-07-02 | 2021-09-03 | 厦门美图之家科技有限公司 | Face three-dimensional reconstruction method and device, electronic equipment and storage medium |
| CN113343925B (en) * | 2021-07-02 | 2023-08-29 | 厦门美图宜肤科技有限公司 | Face three-dimensional reconstruction method and device, electronic equipment and storage medium |
| CN113743191A (en) * | 2021-07-16 | 2021-12-03 | 深圳云天励飞技术股份有限公司 | Face image alignment detection method and device, electronic equipment and storage medium |
| CN113743191B (en) * | 2021-07-16 | 2023-08-01 | 深圳云天励飞技术股份有限公司 | Face image alignment detection method and device, electronic equipment and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109584358A (en) | A kind of three-dimensional facial reconstruction method and device, equipment and storage medium | |
| JP7645917B2 (en) | A technique for capturing and editing dynamic depth images | |
| KR102697772B1 (en) | Augmented reality content generators that include 3D data within messaging systems | |
| JP7403528B2 (en) | Method and system for reconstructing color and depth information of a scene | |
| CN108764091B (en) | Living body detection method and apparatus, electronic device, and storage medium | |
| EP3997662A1 (en) | Depth-aware photo editing | |
| KR20220051376A (en) | 3D Data Generation in Messaging Systems | |
| CN113228625A (en) | Video conference supporting composite video streams | |
| WO2019237745A1 (en) | Facial image processing method and apparatus, electronic device and computer readable storage medium | |
| US20180115700A1 (en) | Simulating depth of field | |
| CN114445562A (en) | Three-dimensional reconstruction method and device, electronic device and storage medium | |
| JP7101269B2 (en) | Pose correction | |
| CN114842120A (en) | Image rendering processing method, device, equipment and medium | |
| US11776201B2 (en) | Video lighting using depth and virtual lights | |
| CN113706430B (en) | Image processing method, device and device for image processing | |
| CN111340865B (en) | Method and apparatus for generating image | |
| CN115497029A (en) | Video processing method, device and computer-readable storage medium | |
| CN111553286B (en) | Method and electronic device for capturing ear animation features | |
| CN116193093A (en) | Video production method, device, electronic device and readable storage medium | |
| CN115240260A (en) | Image processing method and device thereof | |
| CN119583874A (en) | Video content replacement method, program product, electronic device and chip system | |
| CN117274141A (en) | Chrominance matting method and device and video live broadcast system | |
| Szabó et al. | Processing 3D scanner data for virtual reality | |
| CN114764848A (en) | Scene illumination distribution estimation method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
| RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190405 |