
CN112070022A - Face image recognition method and device, electronic equipment and computer readable medium

Info

Publication number: CN112070022A
Application number: CN202010939340.6A
Authority: CN (China)
Legal status: Pending
Prior art keywords: face, image, network, sample, input
Other languages: Chinese (zh)
Inventor: 邓启力
Current Assignee: Beijing ByteDance Network Technology Co Ltd
Original Assignee: Beijing ByteDance Network Technology Co Ltd
Application filed by: Beijing ByteDance Network Technology Co Ltd
Priority to: CN202010939340.6A
Publication of: CN112070022A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure provide a face image recognition method and apparatus, an electronic device, and a computer-readable medium. One embodiment of the method comprises: detecting the sharpness of a face image to be processed; in response to the sharpness being lower than a set sharpness threshold, generating a simulated face image based on the face image to be processed, the sharpness of the simulated face image being higher than the sharpness threshold; and recognizing the simulated face image to obtain face recognition information corresponding to the face image to be processed. This embodiment can generate a simulated face image that is sharper than the face image to be processed, improving the recognition accuracy for the face image to be processed.

Description

Face image recognition method, apparatus, electronic device, and computer-readable medium

Technical Field

Embodiments of the present disclosure relate to the technical field of face recognition, and in particular to a face image recognition method and apparatus, an electronic device, and a computer-readable medium.

Background

Face recognition is a biometric identification technology that identifies a person based on facial feature information. It refers to a series of related technologies in which a camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and facial recognition is then performed on the detected faces; it is also commonly called portrait recognition or facial recognition.

In practice, the sharpness of a face image may be low due to factors such as shooting distance, camera resolution, and ambient light. Correspondingly, the accuracy of face recognition on such an image is also low.

Summary of the Invention

This Summary is provided to introduce, in simplified form, concepts that are described in detail in the Detailed Description that follows. This Summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to be used to limit the scope of the claimed technical solution.

Some embodiments of the present disclosure provide a face image recognition method and apparatus, an electronic device, and a computer-readable medium to solve the technical problems mentioned in the Background section above.

In a first aspect, some embodiments of the present disclosure provide a face image recognition method, comprising: detecting the sharpness of a face image to be processed; in response to the sharpness being lower than a set sharpness threshold, generating a simulated face image based on the face image to be processed, the sharpness of the simulated face image being higher than the sharpness threshold; and recognizing the simulated face image to obtain face recognition information corresponding to the face image to be processed.

In a second aspect, some embodiments of the present disclosure provide a face image recognition apparatus, comprising: an image sharpness detection unit configured to detect the sharpness of a face image to be processed; a simulated face image generation unit configured to, in response to the sharpness being lower than a set sharpness threshold, generate a simulated face image based on the face image to be processed, the sharpness of the simulated face image being higher than the sharpness threshold; and a face recognition unit configured to recognize the simulated face image to obtain face recognition information corresponding to the face image to be processed.

In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to perform the face image recognition method of the first aspect.

In a fourth aspect, some embodiments of the present disclosure provide a computer-readable medium storing a computer program that, when executed by a processor, implements the face image recognition method of the first aspect.

One of the above embodiments of the present disclosure has the following beneficial effects. First, the sharpness of the face image to be processed is detected, which amounts to a preprocessing check on image quality. Then, when the sharpness is lower than the set sharpness threshold, a simulated face image corresponding to the face image to be processed is generated; the sharpness of the simulated face image is higher than the sharpness threshold, and the simulated face image retains the facial features of the face image to be processed, which helps improve the accuracy of face image recognition. Finally, the simulated face image is recognized to obtain the face recognition information, thereby improving the recognition accuracy for the face image to be processed.

Brief Description of the Drawings

The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.

Fig. 1 is a schematic diagram of an application scenario of a face image recognition method according to some embodiments of the present disclosure;

Fig. 2 is a flowchart of some embodiments of a face image recognition method according to the present disclosure;

Fig. 3 is a flowchart of other embodiments of a face image recognition method according to the present disclosure;

Fig. 4 is a flowchart of still other embodiments of a face image recognition method according to the present disclosure;

Fig. 5 is a schematic structural diagram of some embodiments of a face image recognition apparatus according to the present disclosure;

Fig. 6 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.

Detailed Description of Embodiments

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.

It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings. The embodiments of the present disclosure and the features in the embodiments may be combined with each other provided they do not conflict.

It should be noted that terms such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of the functions performed by these apparatuses, modules, or units or their interdependence.

It should be noted that the modifiers "a"/"an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".

The names of the messages or information exchanged between the apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.

The present disclosure will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.

Fig. 1 is a schematic diagram of an application scenario of a face image recognition method according to some embodiments of the present disclosure.

As shown in Fig. 1, after the electronic device 100 receives a face image 101 to be processed, in order to ensure the accuracy and validity of face recognition, it may first detect the sharpness 102 of the face image 101 to be processed. When the sharpness 102 is greater than or equal to the set sharpness threshold, accurate face recognition can be performed directly on the face image 101 to be processed. When the sharpness 102 is lower than the set sharpness threshold, performing face recognition directly on the face image 101 to be processed cannot yield accurate and valid face recognition information. In this case, the electronic device 100 may generate a simulated face image 103 based on the face image 101 to be processed, where the simulated face image 103 is an image with the same image content as the face image 101 to be processed but a sharpness higher than the set sharpness threshold. On this basis, the electronic device 100 performs face recognition directly on the simulated face image 103, and the resulting face recognition information has higher accuracy and validity.

It should be understood that the number of electronic devices 100 in Fig. 1 is merely illustrative. There may be any number of electronic devices 100 according to implementation needs.

Continuing to refer to Fig. 2, Fig. 2 shows a flow 200 of some embodiments of a face image recognition method according to the present disclosure. The face image recognition method includes the following steps.

Step 201: detect the sharpness of the face image to be processed.

In some embodiments, the executing entity of the face image recognition method (for example, the electronic device 100 shown in Fig. 1) may receive the face image to be processed through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (ultra wideband) connection, and other wireless connections now known or developed in the future.

The executing entity may detect the sharpness of the face image to be processed. For example, the executing entity may convert the face image to be processed into a grayscale image and perform noise reduction on the grayscale image to obtain a denoised image. Then, edge points within the denoised image are obtained, where the edge points can represent the various lines in the denoised image that relate to the face image. The executing entity applies low-pass filtering to the denoised image to obtain a blurred image, then calculates a sharpness feature quantity for each edge point in the denoised image and the blurred image, and from this calculates the sharpness value of each edge point. Finally, the mean of the sharpness values of all edge points is taken as the sharpness of the face image to be processed. The executing entity may also detect the contrast of the pixels in the face image to be processed. When the contrast is below a set threshold, the pixel differences in the face image to be processed can be considered too small, making face recognition on the image difficult. Conversely, when the contrast is greater than or equal to the set threshold, face recognition can be performed on the face image to be processed.
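
For illustration only, and not as a definition of the claimed method, the following minimal sketch shows one possible edge-based sharpness check along the lines described above. The OpenCV-based pipeline, the edge threshold of 20, and the sharpness threshold value are assumptions introduced for this example.

```python
# Illustrative sketch of an edge-based sharpness estimate; numeric values are assumptions.
import cv2
import numpy as np

def estimate_sharpness(image_bgr: np.ndarray) -> float:
    # Convert to grayscale and denoise.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (3, 3), 0)
    # Low-pass filter the denoised image to obtain a blurred reference image.
    blurred = cv2.GaussianBlur(denoised, (9, 9), 0)
    # Edge responses of the denoised image and the blurred image.
    edges_denoised = cv2.Laplacian(denoised, cv2.CV_64F)
    edges_blurred = cv2.Laplacian(blurred, cv2.CV_64F)
    # Keep strong edge points only (assumed edge threshold of 20).
    mask = np.abs(edges_denoised) > 20
    if not mask.any():
        return 0.0
    # Per-edge-point sharpness: how much edge strength is lost after blurring.
    per_point = np.abs(edges_denoised[mask]) - np.abs(edges_blurred[mask])
    # The mean over all edge points is taken as the image sharpness.
    return float(per_point.mean())

SHARPNESS_THRESHOLD = 5.0  # assumed value; the disclosure does not fix a number

def needs_simulated_face(image_bgr: np.ndarray) -> bool:
    return estimate_sharpness(image_bgr) < SHARPNESS_THRESHOLD
```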

Step 202: in response to the sharpness being lower than the set sharpness threshold, generate a simulated face image based on the face image to be processed.

In some embodiments, when the sharpness is lower than the set sharpness threshold, the executing entity may generate the simulated face image corresponding to the face image to be processed in a variety of ways. For example, the executing entity may apply image processing to the face image to be processed, such as increasing its brightness or contrast, to obtain the simulated face image. The simulated face image may be an image with the same image content as the face image to be processed and a sharpness higher than the sharpness threshold.
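
As a minimal illustration of the brightness/contrast route mentioned above (one of several contemplated ways), the sketch below applies a simple linear adjustment; the gain and bias values are assumptions.

```python
# Minimal sketch of brightness/contrast enhancement; alpha and beta are assumed values.
import cv2
import numpy as np

def enhance_face_image(image_bgr: np.ndarray, alpha: float = 1.5, beta: float = 20.0) -> np.ndarray:
    # alpha scales contrast, beta shifts brightness; results are clipped to [0, 255].
    return cv2.convertScaleAbs(image_bgr, alpha=alpha, beta=beta)
```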

Step 203: recognize the simulated face image to obtain face recognition information corresponding to the face image to be processed.

After obtaining the simulated face image, the executing entity may perform face recognition on the simulated face image to obtain face recognition information corresponding to the face image to be processed. In this way, the recognition accuracy and validity for the face image to be processed can be improved.

In the face image recognition method disclosed in some embodiments of the present disclosure, the sharpness of the face image to be processed is first detected, which amounts to a preprocessing check on image quality. Then, when the sharpness is lower than the set sharpness threshold, a simulated face image corresponding to the face image to be processed is generated; the sharpness of the simulated face image is higher than the sharpness threshold, and the simulated face image retains the facial features of the face image to be processed, which helps improve the accuracy of face image recognition. Finally, the simulated face image is recognized to obtain the face recognition information, thereby improving the recognition accuracy for the face image to be processed.

Continuing to refer to Fig. 3, Fig. 3 shows a flow 300 of some embodiments of a face image recognition method according to the present disclosure. The face image recognition method includes the following steps.

Step 301: detect the sharpness of the face image to be processed.

The content of step 301 is the same as that of step 201 and is not repeated here.

Step 302: input the face image to be processed into a face image generation model to obtain the simulated face image.

The executing entity may input the face image to be processed into a face image generation model to obtain the simulated face image. The face image generation model may be one of various models, such as a deep learning model or a genetic algorithm model, which are not enumerated here.
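
For illustration only, a minimal usage sketch of running such a face image generation model on a low-sharpness face image is given below; it assumes the model is a PyTorch module (for example, the generator sketched later in this description) and that the image has already been converted to a normalized tensor.

```python
# Usage sketch: apply a trained face image generation model to a low-sharpness face.
# The PyTorch-module assumption and the (1, 3, H, W) tensor layout are illustrative.
import torch

def generate_simulated_face(model: torch.nn.Module, face_tensor: torch.Tensor) -> torch.Tensor:
    """face_tensor: (1, 3, H, W) normalized image; returns the simulated face image tensor."""
    model.eval()
    with torch.no_grad():
        return model(face_tensor)
```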

In some optional implementations of some embodiments, the face image generation model is obtained through the following steps.

In the first step, a plurality of sample face input images and a sample face target image corresponding to each of the plurality of sample face input images are obtained.

The executing entity may obtain a plurality of sample face input images and a plurality of sample face target images. The sharpness of a sample face input image is less than or equal to the sharpness threshold, and the corresponding sample face target image is an image with the same image content as the sample face input image and a sharpness higher than the sharpness threshold. A sample face target image may be a manually annotated image containing a plurality of face key points. A sample face input image and its corresponding sample face target image may be face images captured at various angles and distances.

Optionally, the executing entity may also first obtain a sharp sample face target image and then process it to obtain a sample face input image whose sharpness is below the sharpness threshold. The executing entity may also obtain the sample face input images and sample face target images in other ways, which are not enumerated here.
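
A hedged sketch of one such degradation step is shown below: a sharp sample face target image is down-scaled, up-scaled, and blurred to produce a low-sharpness sample face input image. The scale factor and blur kernel size are assumptions for the example.

```python
# Illustrative degradation of a sharp sample face target image into a low-sharpness
# sample face input image; the scale factor and kernel size are assumed values.
import cv2
import numpy as np

def degrade_to_sample_input(target_bgr: np.ndarray, scale: int = 4, kernel: int = 5) -> np.ndarray:
    h, w = target_bgr.shape[:2]
    # Down-scale then up-scale to discard fine detail.
    small = cv2.resize(target_bgr, (max(1, w // scale), max(1, h // scale)),
                       interpolation=cv2.INTER_AREA)
    restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
    # Blur to further reduce edge sharpness.
    return cv2.GaussianBlur(restored, (kernel, kernel), 0)
```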

In the second step, a pre-established generative adversarial network is extracted.

In this embodiment, the executing entity may extract a pre-established generative adversarial network (Generative Adversarial Nets, GAN). The generative adversarial network includes a generative network and a discriminative network; the generative network is used to generate a face target image from a face input image, and the discriminative network is used to determine whether an image input to the discriminative network is an image output by the generative network.

It should be noted that the generative network may be a convolutional neural network used for image processing (for example, various convolutional neural network structures containing convolutional layers, pooling layers, unpooling layers, and deconvolutional layers). The discriminative network may be a convolutional neural network (for example, various convolutional neural network structures containing a fully connected layer, where the fully connected layer implements a classification function). In addition, the discriminative network may also be another model structure used to implement classification, such as a support vector machine (SVM). For example, if the discriminative network determines that an input image is an image output by the generative network (that is, it comes from generated data), it may output 1; if it determines that the input image is not an image output by the generative network, it may output 0. It should be noted that the discriminative network may also output other values, for example a value between 0 and 1 representing the probability that the image input to the discriminative network comes from real data.
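
The following sketch is one hedged way to realize such a generative/discriminative pair; the layer counts and channel widths are illustrative assumptions and are not taken from the disclosure.

```python
# Hedged sketch of a convolutional generative network and discriminative network
# matching the description above; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a low-sharpness face input image to a sharper face target image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),            # downsample
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # upsample
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Outputs the probability that an input image comes from real (target) data."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fully connected layer implementing the classification function.
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.features(x))
```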

In the third step, using a machine learning method, each of the plurality of sample face input images is used as the input of the generative network, the image output by the generative network and the sample face target image corresponding to that sample face input image are used as the input of the discriminative network, the generative network and the discriminative network are trained, and the trained generative network is determined to be the face image generation model.

In this embodiment, based on the above generative adversarial network, the executing entity may use a machine learning method to take each of the plurality of sample face input images as the input of the generative network, take the image output by the generative network and the sample face target image corresponding to that sample face input image as the input of the discriminative network, and train the generative network and the discriminative network. The trained generative network is then determined to be the face image generation model.

In some optional implementations of some embodiments, training the generative network and the discriminative network and determining the trained generative network as the face image generation model may be performed through the following training steps.

In the first step, the parameters of the generative network are fixed, each of the plurality of sample face input images is used as the input of the generative network, the image output by the generative network and the sample face target image corresponding to that sample face input image are used as the input of the discriminative network, and the discriminative network is trained using a machine learning method. The machine learning method may be supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and so on.

In the second step, the parameters of the trained discriminative network are fixed, each of the plurality of sample face input images is used as the input of the generative network, and the generative network is trained using a machine learning method. The executing entity may also combine the machine learning method with a back-propagation algorithm, a gradient descent algorithm, or the like to train the generative network.

In the third step, the accuracy of the discrimination results output by the trained discriminative network is determined; in response to determining that the accuracy is greater than an accuracy threshold (for example, 80%), the most recently trained generative network is determined to be the face image generation model.

In some embodiments, in response to determining that the accuracy is less than or equal to the accuracy threshold, the above training steps are re-executed using the most recently trained generative network and discriminative network. Therefore, the parameters of the face image generation model obtained by training the generative adversarial network can be derived not only from the training samples but also from the back-propagation of the discriminative network, so that the generation model can be trained without relying on a large number of labeled samples. This yields the face image generation model while reducing labor costs and improving the accuracy and validity of the generated simulated face images.
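
For illustration only, the sketch below strings the three training steps above into an alternating loop, reusing the Generator and Discriminator sketches given earlier. The data loader, optimizers, and learning rates are assumptions; only the 80% accuracy threshold and the alternating fix-one-train-the-other scheme follow the text.

```python
# Hedged sketch of the alternating training procedure; loader is assumed to yield
# (sample face input image, sample face target image) tensor pairs.
import torch
import torch.nn as nn

def train_face_image_generation_model(generator, discriminator, loader,
                                      accuracy_threshold=0.8, max_rounds=100):
    bce = nn.BCELoss()
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

    for _ in range(max_rounds):
        correct, total = 0, 0
        for low_sharp, target in loader:
            # Step 1: fix the generative network, train the discriminative network.
            with torch.no_grad():
                fake = generator(low_sharp)
            d_fake, d_real = discriminator(fake), discriminator(target)
            loss_d = bce(d_fake, torch.zeros_like(d_fake)) + bce(d_real, torch.ones_like(d_real))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

            # Step 2: fix the discriminative network (only opt_g steps), train the
            # generative network by back-propagation to fool the discriminator.
            fake = generator(low_sharp)
            d_fooled = discriminator(fake)
            loss_g = bce(d_fooled, torch.ones_like(d_fooled))
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()

            # Rough accuracy estimate of the discrimination results for this round.
            correct += ((d_fake < 0.5).sum() + (d_real >= 0.5).sum()).item()
            total += d_fake.numel() + d_real.numel()

        # Step 3: stop once the discriminator's accuracy exceeds the threshold,
        # otherwise re-execute the training steps with the latest networks.
        if total and correct / total > accuracy_threshold:
            break
    return generator  # the trained generative network is the face image generation model
```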

In some optional implementations of some embodiments, the sample face target image may be obtained through the following steps.

In the first step, the facial features of the sample face input image are obtained.

In order to obtain an accurate and valid sample face target image, the executing entity may first obtain the facial features of the sample face input image. The facial features may be, for example, a round face, a long face, or a square face.

In the second step, predicted face key points are determined based on the facial features.

The facial organs on a human face (for example, the eyes, eyebrows, nose, and mouth) have relatively fixed positions. The executing entity may determine the predicted face key points according to the facial features. Establishing this correspondence between the predicted face key points and the facial features helps ensure that the subsequent sample face target image has the same facial features as the sample face input image.

In the third step, the sample face target image is constructed from the predicted face key points.

After obtaining the predicted face key points, the executing entity may construct the sample face target image based on the predicted face key points. That is, the sample face target image may be a fake face image constructed by the executing entity that contains the predicted face key points of the sample face input image.

In some optional implementations of some embodiments, determining the predicted face key points based on the facial features may include the following steps.

In the first step, the position information of the predicted face key points and initial images of the predicted face key points are determined based on the face structure information.

In practice, a face may be oriented in any direction, and faces also differ from one another. Accordingly, the facial features may include a face spatial inclination and face structure information. The executing entity may determine the position information of the predicted face key points based on the face structure information. Then, the executing entity may further determine an initial image of each predicted face key point according to the position information of the predicted face key points, where the initial image of a predicted face key point may be an image of the facial organ corresponding to that key point.

In the second step, the position information of the predicted face key points and the initial images of the predicted face key points are adjusted based on the face spatial inclination to determine the predicted face key points.

Each facial organ may present a different image under different spatial inclinations. The executing entity may adjust the position information of the predicted face key points and the initial images of the predicted face key points according to the face spatial inclination, and then determine the predicted face key points. This greatly improves the accuracy and validity of the predicted face key points.
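
As a hedged illustration of the adjustment step (the disclosure does not fix a representation for the spatial inclination), the sketch below treats the inclination as a single in-plane rotation angle and rotates the predicted key-point positions around the face center accordingly.

```python
# Hedged sketch: adjust predicted face key-point positions for the face spatial
# inclination, assumed here to be a single in-plane rotation angle in degrees.
import numpy as np

def adjust_keypoints(keypoints_xy: np.ndarray, center_xy: np.ndarray, tilt_deg: float) -> np.ndarray:
    """Rotate (N, 2) key-point coordinates around the face center by the tilt angle."""
    theta = np.deg2rad(tilt_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (keypoints_xy - center_xy) @ rot.T + center_xy
```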

In some optional implementations of some embodiments, constructing the sample face target image from the predicted face key points may include the following steps.

In the first step, an initial face image is constructed based on the predicted face key points.

After obtaining the predicted face key points, the executing entity may construct an initial face image with reference to the face structure information described above. At this point, the initial face image contains only the predicted face key points and the face structure information and is not affected by factors such as ambient light, so the sharpness of the initial face image can be very high.

In the second step, the initial face image is rendered to obtain the sample face target image.

Finally, the executing entity may render the initial face image to obtain the sample face target image. The resulting sample face target image may be an image that contains the face key points of the sample face input image and has a sharpness well above the sharpness threshold.

Step 303: recognize the simulated face image to obtain face recognition information corresponding to the face image to be processed.

The content of step 303 is the same as that of step 203 and is not repeated here.

Referring further to Fig. 4, which shows a schematic diagram of another application scenario of the face image recognition method.

In Fig. 4, the electronic device first receives a face image 401 to be processed. When the sharpness of the face image 401 to be processed is lower than the set sharpness threshold, a plurality of predicted face key points 402 are determined according to the facial features in the face image 401 to be processed. A simulated face image 403 is then generated according to the predicted face key points 402; the simulated face image 403 may be a fake face image that contains the facial features of the face image 401 to be processed and whose sharpness is higher than the sharpness threshold. The electronic device then performs face recognition on the simulated face image 403, and the resulting face recognition information 404 is the face recognition information of the face image 401 to be processed.

Referring further to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a face image recognition apparatus. These apparatus embodiments correspond to the method embodiments shown in Fig. 2, and the apparatus can be applied to various electronic devices.

As shown in Fig. 5, the face image recognition apparatus 500 of some embodiments includes an image sharpness detection unit 501, a simulated face image generation unit 502, and a face recognition unit 503. The image sharpness detection unit 501 is configured to detect the sharpness of the face image to be processed; the simulated face image generation unit 502 is configured to, in response to the sharpness being lower than the set sharpness threshold, generate a simulated face image based on the face image to be processed, the sharpness of the simulated face image being higher than the sharpness threshold; and the face recognition unit 503 is configured to recognize the simulated face image to obtain face recognition information corresponding to the face image to be processed.

In some optional implementations of some embodiments, the simulated face image generation unit 502 may include a simulated face image generation subunit (not shown in the figure) configured to input the face image to be processed into a face image generation model to obtain the simulated face image.

In some optional implementations of some embodiments, the simulated face image generation subunit may include a face image generation model module (not shown in the figure) configured to train the face image generation model. The face image generation model module may include a sample acquisition submodule (not shown in the figure), a network extraction submodule (not shown in the figure), and a face image generation model training submodule (not shown in the figure). The sample acquisition submodule is configured to obtain a plurality of sample face input images and a sample face target image corresponding to each of the plurality of sample face input images, where the sharpness of a sample face input image is less than or equal to the sharpness threshold, and the corresponding sample face target image is an image with the same image content as the sample face input image and a sharpness higher than the sharpness threshold. The network extraction submodule is configured to extract a pre-established generative adversarial network, where the generative adversarial network includes a generative network and a discriminative network, the generative network is used to generate a face target image from a face input image, and the discriminative network is used to determine whether an image input to the discriminative network is an image output by the generative network. The face image generation model training submodule is configured to, using a machine learning method, take each of the plurality of sample face input images as the input of the generative network, take the image output by the generative network and the sample face target image corresponding to that sample face input image as the input of the discriminative network, train the generative network and the discriminative network, and determine the trained generative network as the face image generation model.

In some optional implementations of some embodiments, the face image generation model training submodule may include a face image generation model training component (not shown in the figure) configured to: fix the parameters of the generative network, take each of the plurality of sample face input images as the input of the generative network, take the image output by the generative network and the sample face target image corresponding to that sample face input image as the input of the discriminative network, and train the discriminative network using a machine learning method; fix the parameters of the trained discriminative network, take each of the plurality of sample face input images as the input of the generative network, and train the generative network using a machine learning method; and determine the accuracy of the discrimination results output by the trained discriminative network and, in response to determining that the accuracy is greater than the accuracy threshold, determine the most recently trained generative network as the face image generation model.

In some optional implementations of some embodiments, the face image generation model training submodule further includes an adjustment component (not shown in the figure) configured to, in response to determining that the accuracy is less than or equal to the accuracy threshold, return to the face image generation model training component using the most recently trained generative network and discriminative network.

In some optional implementations of some embodiments, the sample acquisition submodule includes a sample face target image construction component (not shown in the figure), and the sample face target image construction component may include a facial feature acquisition subcomponent (not shown in the figure), a predicted face key point determination subcomponent (not shown in the figure), and a sample face target image construction subcomponent (not shown in the figure). The facial feature acquisition subcomponent is configured to obtain the facial features of the sample face input image; the predicted face key point determination subcomponent is configured to determine predicted face key points based on the facial features; and the sample face target image construction subcomponent is configured to construct the sample face target image from the predicted face key points.

In some optional implementations of some embodiments, the facial features include a face spatial inclination and face structure information, and the predicted face key point determination subcomponent may include a prediction element (not shown in the figure) and a predicted face key point determination element (not shown in the figure). The prediction element is configured to determine the position information of the predicted face key points and the initial images of the predicted face key points based on the face structure information; the predicted face key point determination element is configured to adjust the position information of the predicted face key points and the initial images of the predicted face key points based on the face spatial inclination to determine the predicted face key points.

In some optional implementations of some embodiments, the sample face target image construction subcomponent may include an initial face image construction element (not shown in the figure) and a sample face target image generation element (not shown in the figure). The initial face image construction element is configured to construct an initial face image based on the predicted face key points; the sample face target image generation element is configured to render the initial face image to obtain the sample face target image.

It can be understood that the units described in the apparatus 500 correspond to the respective steps of the method described with reference to Fig. 2. Therefore, the operations, features, and beneficial effects described above for the method also apply to the apparatus 500 and the units contained therein, and are not repeated here.

As shown in Fig. 6, the electronic device 600 may include a processing apparatus 601 (for example, a central processing unit or a graphics processor), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.

Generally, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, or a gyroscope; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker, or a vibrator; a storage apparatus 608 including, for example, a magnetic tape or a hard disk; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows the electronic device 600 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided. Each block shown in Fig. 6 may represent one apparatus or, as needed, multiple apparatuses.

In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication apparatus 609, installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above-mentioned functions defined in the methods of some embodiments of the present disclosure are performed.

It should be noted that the computer-readable medium in some embodiments of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In some embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit the program for use by or in conjunction with the instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.

In some implementations, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), a peer-to-peer network (for example, an ad hoc peer-to-peer network), and any currently known or future-developed network.

The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: detect the sharpness of a face image to be processed; in response to the sharpness being lower than a set sharpness threshold, generate a simulated face image based on the face image to be processed, the sharpness of the simulated face image being higher than the sharpness threshold; and recognize the simulated face image to obtain face recognition information corresponding to the face image to be processed.

Computer program code for performing the operations of some embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more logical functions for implementing the specified functions executable instructions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It is also noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations , or can be implemented in a combination of dedicated hardware and computer instructions.

The units described in some embodiments of the present disclosure may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including an image sharpness detection unit, a face simulation image generation unit, and a face recognition unit. In some cases the names of these units do not limit the units themselves; for example, the face simulation image generation unit may also be described as "a unit that generates a fake face image corresponding to the face image to be processed".

The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), and complex programmable logic devices (CPLDs).

According to one or more embodiments of the present disclosure, a face image recognition method is provided, comprising: detecting the sharpness of a face image to be processed; in response to the sharpness being less than a set sharpness threshold, generating a simulated face image based on the face image to be processed, the sharpness of the simulated face image being greater than the sharpness threshold; and recognizing the simulated face image to obtain face recognition information corresponding to the face image to be processed.
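As a reading aid only (not part of the disclosure), the following Python sketch shows how these three steps could be wired together. The function names, the `generator_model` and `recognizer` callables, and the Laplacian-variance sharpness score are assumptions introduced for illustration; the disclosure does not prescribe a particular sharpness measure.

```python
# Minimal sketch of the recognition pipeline described above.
# estimate_sharpness, generator_model and recognizer are placeholder names;
# the Laplacian-variance measure is one common (assumed) sharpness score.
import cv2
import numpy as np

SHARPNESS_THRESHOLD = 100.0  # assumed value of the "set sharpness threshold"

def estimate_sharpness(image: np.ndarray) -> float:
    """Score sharpness as the variance of the Laplacian response."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def recognize_with_simulation(face_image, generator_model, recognizer):
    """Detect sharpness; if too low, substitute a sharper simulated face."""
    if estimate_sharpness(face_image) < SHARPNESS_THRESHOLD:
        # Generate a simulated face whose sharpness exceeds the threshold.
        face_image = generator_model(face_image)
    # Recognize either the original or the simulated face image.
    return recognizer(face_image)
```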

According to one or more embodiments of the present disclosure, generating the simulated face image based on the face image to be processed comprises: importing the face image to be processed into a face image generation model to obtain the simulated face image.

According to one or more embodiments of the present disclosure, the face image generation model is obtained through the following steps: obtaining a plurality of sample face input images and, for each sample face input image among them, a corresponding sample face target image, where the sharpness of the sample face input image is less than or equal to the sharpness threshold, and the sample face target image has the same image content as the sample face input image and a sharpness greater than the sharpness threshold; extracting a pre-established generative adversarial network, where the generative adversarial network includes a generation network and a discrimination network, the generation network is used to generate a face target image from a face input image, and the discrimination network is used to determine whether an image input to it is an image output by the generation network; and, using a machine learning method, taking each sample face input image as input to the generation network, taking the image output by the generation network and the sample face target image corresponding to the sample face input image as input to the discrimination network, training the generation network and the discrimination network, and determining the trained generation network as the face image generation model.
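Purely as an illustration, the sketch below pairs low-sharpness inputs with high-sharpness targets and instantiates a generation network and a discrimination network in PyTorch. The tiny convolutional stacks are placeholder assumptions; the disclosure does not specify network architectures.

```python
# Sketch of the paired training data and the two networks of the GAN.
# The small convolutional architectures are assumptions for illustration.
import torch
import torch.nn as nn
from torch.utils.data import Dataset

class PairedFaceDataset(Dataset):
    """Pairs a low-sharpness input image with its high-sharpness target."""
    def __init__(self, input_images, target_images):
        assert len(input_images) == len(target_images)
        self.inputs = input_images    # sharpness <= threshold
        self.targets = target_images  # same content, sharpness > threshold

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, idx):
        return self.inputs[idx], self.targets[idx]

# Generation network: maps a face input image to a face target image.
generator = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)

# Discrimination network: predicts whether an image came from the generator.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),  # raw logit: "generated" vs. "real target"
)
```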

According to one or more embodiments of the present disclosure, training the generation network and the discrimination network and determining the trained generation network as the face image generation model comprises performing the following training steps: fixing the parameters of the generation network, taking each sample face input image among the plurality of sample face input images as input to the generation network, taking the image output by the generation network and the sample face target image corresponding to the sample face input image as input to the discrimination network, and training the discrimination network using a machine learning method; fixing the parameters of the trained discrimination network, taking each sample face input image as input to the generation network, and training the generation network using a machine learning method; and determining the accuracy of the discrimination results output by the trained discrimination network and, in response to determining that the accuracy is greater than an accuracy threshold, determining the most recently trained generation network as the face image generation model.
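The sketch below mirrors this alternating scheme, reusing the `generator`, `discriminator`, and `PairedFaceDataset` placeholders from the previous sketch. The binary cross-entropy losses and the batch-level accuracy metric are assumptions; the disclosure only states that each network is trained while the other's parameters are fixed and that training stops once the discrimination network's accuracy exceeds a threshold.

```python
# Alternating GAN training sketch; generator/discriminator are any
# compatible nn.Module pair (e.g. from the previous sketch). Loss and
# accuracy choices are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_gan(generator: nn.Module, discriminator: nn.Module,
              loader: DataLoader, accuracy_threshold: float = 0.8) -> nn.Module:
    bce = nn.BCEWithLogitsLoss()
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    while True:                      # repeat the training steps ...
        correct, total = 0, 0
        for low_sharp, target in loader:
            # Step 1: generator fixed (no_grad); train the discriminator to
            # tell generated images (label 1) from sample target images (0).
            with torch.no_grad():
                fake = generator(low_sharp)
            d_loss = (bce(discriminator(fake), torch.ones(len(fake), 1)) +
                      bce(discriminator(target), torch.zeros(len(target), 1)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Step 2: only the generator's parameters are updated here; it is
            # trained so that its output is classified as a real target image.
            fake = generator(low_sharp)
            g_loss = bce(discriminator(fake), torch.zeros(len(fake), 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

            # Track the discriminator's accuracy (assumed metric).
            with torch.no_grad():
                correct += int((discriminator(generator(low_sharp)) > 0).sum())
                correct += int((discriminator(target) <= 0).sum())
                total += 2 * len(low_sharp)
        # ... until the discriminator's accuracy exceeds the threshold.
        if correct / total > accuracy_threshold:
            return generator         # trained generation network = model
```

Note that the stopping rule follows the description above: training ends when the discrimination network's accuracy exceeds the threshold, and the most recently trained generation network is kept as the face image generation model.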

According to one or more embodiments of the present disclosure, training the generation network and the discrimination network and determining the trained generation network as the face image generation model further comprises: in response to determining that the accuracy is less than or equal to the accuracy threshold, re-executing the above training steps using the most recently trained generation network and discrimination network.

According to one or more embodiments of the present disclosure, the sample face target image is obtained through the following steps: obtaining face features of the sample face input image; determining face prediction key points based on the face features; and constructing the sample face target image from the face prediction key points.

According to one or more embodiments of the present disclosure, the face features include face spatial inclination and face structure information, and determining the face prediction key points based on the face features comprises: determining face prediction key point position information and an initial face prediction key point image based on the face structure information; and adjusting the face prediction key point position information and the initial face prediction key point image based on the face spatial inclination to determine the face prediction key points.
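The disclosure gives no formula for this adjustment. As one hypothetical reading, the sketch below treats the face spatial inclination as a single in-plane rotation angle and rotates the predicted key points about an assumed face centre.

```python
# Illustrative sketch: adjust predicted key-point positions using an
# in-plane inclination angle. Treating "face spatial inclination" as one
# rotation angle is an assumption made only for this example.
import numpy as np

def adjust_keypoints(keypoints_xy: np.ndarray, inclination_deg: float,
                     center_xy: np.ndarray) -> np.ndarray:
    """Rotate key points about the face centre to compensate for tilt."""
    theta = np.deg2rad(inclination_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (keypoints_xy - center_xy) @ rot.T + center_xy

# Example: key points predicted from face structure information, then
# corrected for a face tilted by 15 degrees.
raw_keypoints = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 70.0]])
adjusted = adjust_keypoints(raw_keypoints, 15.0, center_xy=np.array([50.0, 50.0]))
```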

According to one or more embodiments of the present disclosure, constructing the sample face target image from the face prediction key points comprises: constructing an initial face image based on the face prediction key points; and rendering the initial face image to obtain the sample face target image.
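Again as an assumption-laden illustration, the sketch below "constructs" an initial face image by drawing the predicted key points and their contour, and "renders" it with smoothing and intensity normalisation; the actual construction and rendering methods are left open by the disclosure.

```python
# Sketch: build a rough initial face image from predicted key points and
# then "render" it. Drawing a contour plus Gaussian smoothing are stand-ins
# for whatever construction and rendering the method actually uses.
import cv2
import numpy as np

def build_sample_target(keypoints_xy: np.ndarray, size=(128, 128)) -> np.ndarray:
    canvas = np.zeros((*size, 3), dtype=np.uint8)            # initial face image
    pts = keypoints_xy.astype(np.int32)
    cv2.polylines(canvas, [pts.reshape(-1, 1, 2)], True, (255, 255, 255), 1)
    for x, y in pts:
        cv2.circle(canvas, (int(x), int(y)), 2, (0, 255, 0), -1)
    rendered = cv2.GaussianBlur(canvas, (5, 5), 0)            # assumed "rendering"
    return cv2.normalize(rendered, None, 0, 255, cv2.NORM_MINMAX)
```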

According to one or more embodiments of the present disclosure, a face image recognition apparatus is provided, comprising: an image sharpness detection unit configured to detect the sharpness of a face image to be processed; a face simulation image generation unit configured to, in response to the sharpness being less than a set sharpness threshold, generate a simulated face image based on the face image to be processed, the sharpness of the simulated face image being greater than the sharpness threshold; and a face recognition unit configured to recognize the simulated face image to obtain face recognition information corresponding to the face image to be processed.

According to one or more embodiments of the present disclosure, the face simulation image generation unit comprises: a face simulation image generation subunit configured to import the face image to be processed into a face image generation model to obtain the simulated face image.

According to one or more embodiments of the present disclosure, the face simulation image generation subunit comprises a face image generation model module configured to train the face image generation model, the face image generation model module comprising: a sample acquisition submodule configured to obtain a plurality of sample face input images and, for each sample face input image among them, a corresponding sample face target image, where the sharpness of the sample face input image is less than or equal to the sharpness threshold, and the sample face target image has the same image content as the sample face input image and a sharpness greater than the sharpness threshold; a network extraction submodule configured to extract a pre-established generative adversarial network, where the generative adversarial network includes a generation network and a discrimination network, the generation network is used to generate a face target image from a face input image, and the discrimination network is used to determine whether an image input to it is an image output by the generation network; and a face image generation model training submodule configured to, using a machine learning method, take each sample face input image as input to the generation network, take the image output by the generation network and the sample face target image corresponding to the sample face input image as input to the discrimination network, train the generation network and the discrimination network, and determine the trained generation network as the face image generation model.

According to one or more embodiments of the present disclosure, the face image generation model training submodule comprises: a face image generation model training module configured to fix the parameters of the generation network, take each sample face input image among the plurality of sample face input images as input to the generation network, take the image output by the generation network and the sample face target image corresponding to the sample face input image as input to the discrimination network, and train the discrimination network using a machine learning method; fix the parameters of the trained discrimination network, take each sample face input image as input to the generation network, and train the generation network using a machine learning method; and determine the accuracy of the discrimination results output by the trained discrimination network and, in response to determining that the accuracy is greater than an accuracy threshold, determine the most recently trained generation network as the face image generation model.

According to one or more embodiments of the present disclosure, the face image generation model training submodule further comprises: an adjustment module configured to, in response to determining that the accuracy is less than or equal to the accuracy threshold, return to the face image generation model training module using the most recently trained generation network and discrimination network.

According to one or more embodiments of the present disclosure, the sample acquisition submodule comprises a sample face target image construction module, the sample face target image construction module comprising: a face feature acquisition sub-module configured to obtain face features of the sample face input image; a face prediction key point determination sub-module configured to determine face prediction key points based on the face features; and a sample face target image construction sub-module configured to construct the sample face target image from the face prediction key points.

According to one or more embodiments of the present disclosure, the face features include face spatial inclination and face structure information, and the face prediction key point determination sub-module comprises: a prediction component configured to determine face prediction key point position information and an initial face prediction key point image based on the face structure information; and a face prediction key point determination component configured to adjust the face prediction key point position information and the initial face prediction key point image based on the face spatial inclination to determine the face prediction key points.

According to one or more embodiments of the present disclosure, the sample face target image construction sub-module comprises: an initial face image construction component configured to construct an initial face image based on the face prediction key points; and a sample face target image generation component configured to render the initial face image to obtain the sample face target image.

The above description is merely a description of some preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (18)

1. A face image recognition method, comprising: detecting the sharpness of a face image to be processed; in response to the sharpness being less than a set sharpness threshold, generating a simulated face image based on the face image to be processed, the sharpness of the simulated face image being greater than the sharpness threshold; and recognizing the simulated face image to obtain face recognition information corresponding to the face image to be processed.

2. The method according to claim 1, wherein generating the simulated face image based on the face image to be processed comprises: importing the face image to be processed into a face image generation model to obtain the simulated face image.

3. The method according to claim 2, wherein the face image generation model is obtained through the following steps: obtaining a plurality of sample face input images and, for each sample face input image among the plurality of sample face input images, a corresponding sample face target image, wherein the sharpness of the sample face input image is less than or equal to the sharpness threshold, and the sample face target image has the same image content as the sample face input image and a sharpness greater than the sharpness threshold; extracting a pre-established generative adversarial network, wherein the generative adversarial network comprises a generation network and a discrimination network, the generation network is used to generate a face target image from a face input image, and the discrimination network is used to determine whether an image input to the discrimination network is an image output by the generation network; and, using a machine learning method, taking each sample face input image among the plurality of sample face input images as input to the generation network, taking the image output by the generation network and the sample face target image corresponding to the sample face input image that was input to the generation network as input to the discrimination network, training the generation network and the discrimination network, and determining the trained generation network as the face image generation model.

4. The method according to claim 3, wherein training the generation network and the discrimination network and determining the trained generation network as the face image generation model comprises performing the following training steps: fixing the parameters of the generation network, taking each sample face input image among the plurality of sample face input images as input to the generation network, taking the image output by the generation network and the sample face target image corresponding to the sample face input image that was input to the generation network as input to the discrimination network, and training the discrimination network using a machine learning method; fixing the parameters of the trained discrimination network, taking each sample face input image among the plurality of sample face input images as input to the generation network, and training the generation network using a machine learning method; and determining the accuracy of the discrimination results output by the trained discrimination network and, in response to determining that the accuracy is greater than an accuracy threshold, determining the most recently trained generation network as the face image generation model.

5. The method according to claim 4, wherein training the generation network and the discrimination network and determining the trained generation network as the face image generation model further comprises: in response to determining that the accuracy is less than or equal to the accuracy threshold, re-executing the training steps using the most recently trained generation network and discrimination network.

6. The method according to claim 3, wherein the sample face target image is obtained through the following steps: obtaining face features of the sample face input image; determining face prediction key points based on the face features; and constructing the sample face target image from the face prediction key points.

7. The method according to claim 6, wherein the face features include face spatial inclination and face structure information, and determining the face prediction key points based on the face features comprises: determining face prediction key point position information and an initial face prediction key point image based on the face structure information; and adjusting the face prediction key point position information and the initial face prediction key point image based on the face spatial inclination to determine the face prediction key points.

8. The method according to claim 6, wherein constructing the sample face target image from the face prediction key points comprises: constructing an initial face image based on the face prediction key points; and rendering the initial face image to obtain the sample face target image.

9. A face image recognition apparatus, comprising: an image sharpness detection unit configured to detect the sharpness of a face image to be processed; a face simulation image generation unit configured to, in response to the sharpness being less than a set sharpness threshold, generate a simulated face image based on the face image to be processed, the sharpness of the simulated face image being greater than the sharpness threshold; and a face recognition unit configured to recognize the simulated face image to obtain face recognition information corresponding to the face image to be processed.

10. The apparatus according to claim 9, wherein the face simulation image generation unit comprises: a face simulation image generation subunit configured to import the face image to be processed into a face image generation model to obtain the simulated face image.

11. The apparatus according to claim 10, wherein the face simulation image generation subunit comprises a face image generation model module configured to train the face image generation model, the face image generation model module comprising: a sample acquisition submodule configured to obtain a plurality of sample face input images and, for each sample face input image among the plurality of sample face input images, a corresponding sample face target image, wherein the sharpness of the sample face input image is less than or equal to the sharpness threshold, and the sample face target image has the same image content as the sample face input image and a sharpness greater than the sharpness threshold; a network extraction submodule configured to extract a pre-established generative adversarial network, wherein the generative adversarial network comprises a generation network and a discrimination network, the generation network is used to generate a face target image from a face input image, and the discrimination network is used to determine whether an image input to the discrimination network is an image output by the generation network; and a face image generation model training submodule configured to, using a machine learning method, take each sample face input image among the plurality of sample face input images as input to the generation network, take the image output by the generation network and the sample face target image corresponding to the sample face input image that was input to the generation network as input to the discrimination network, train the generation network and the discrimination network, and determine the trained generation network as the face image generation model.

12. The apparatus according to claim 11, wherein the face image generation model training submodule comprises: a face image generation model training module configured to fix the parameters of the generation network, take each sample face input image among the plurality of sample face input images as input to the generation network, take the image output by the generation network and the sample face target image corresponding to the sample face input image that was input to the generation network as input to the discrimination network, and train the discrimination network using a machine learning method; fix the parameters of the trained discrimination network, take each sample face input image among the plurality of sample face input images as input to the generation network, and train the generation network using a machine learning method; and determine the accuracy of the discrimination results output by the trained discrimination network and, in response to determining that the accuracy is greater than an accuracy threshold, determine the most recently trained generation network as the face image generation model.

13. The apparatus according to claim 12, wherein the face image generation model training submodule further comprises: an adjustment module configured to, in response to determining that the accuracy is less than or equal to the accuracy threshold, return to the face image generation model training module using the most recently trained generation network and discrimination network.

14. The apparatus according to claim 11, wherein the sample acquisition submodule comprises a sample face target image construction module, the sample face target image construction module comprising: a face feature acquisition sub-module configured to obtain face features of the sample face input image; a face prediction key point determination sub-module configured to determine face prediction key points based on the face features; and a sample face target image construction sub-module configured to construct the sample face target image from the face prediction key points.

15. The apparatus according to claim 14, wherein the face features include face spatial inclination and face structure information, and the face prediction key point determination sub-module comprises: a prediction component configured to determine face prediction key point position information and an initial face prediction key point image based on the face structure information; and a face prediction key point determination component configured to adjust the face prediction key point position information and the initial face prediction key point image based on the face spatial inclination to determine the face prediction key points.

16. The apparatus according to claim 14, wherein the sample face target image construction sub-module comprises: an initial face image construction component configured to construct an initial face image based on the face prediction key points; and a sample face target image generation component configured to render the initial face image to obtain the sample face target image.

17. An electronic device, comprising: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 8.

18. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 8.
CN202010939340.6A (published as CN112070022A, status: Pending) | Priority Date: 2020-09-09 | Filing Date: 2020-09-09 | Title: Face image recognition method and device, electronic equipment and computer readable medium

Priority Applications (1)

Application Number: CN202010939340.6A (published as CN112070022A) | Priority Date: 2020-09-09 | Filing Date: 2020-09-09 | Title: Face image recognition method and device, electronic equipment and computer readable medium


Publications (1)

Publication Number: CN112070022A | Publication Date: 2020-12-11

Family

ID=73662855

Family Applications (1)

Application Number: CN202010939340.6A (published as CN112070022A, status: Pending) | Priority Date: 2020-09-09 | Filing Date: 2020-09-09 | Title: Face image recognition method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112070022A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657178A (en) * 2021-07-22 2021-11-16 浙江大华技术股份有限公司 Face recognition method, electronic device and computer-readable storage medium
CN114332983A (en) * 2021-12-01 2022-04-12 杭州鸿泉物联网技术股份有限公司 Face image definition detection method, face image definition detection device, electronic equipment and medium
CN114363623A (en) * 2021-08-12 2022-04-15 财付通支付科技有限公司 Image processing method, image processing apparatus, image processing medium, and electronic device
CN116704566A (en) * 2022-02-25 2023-09-05 马上消费金融股份有限公司 Face recognition, model training method, device and equipment for face recognition


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093419A (en) * 2011-10-28 2013-05-08 浙江大华技术股份有限公司 Method and device for detecting image definition
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN110689482A (en) * 2019-09-18 2020-01-14 中国科学技术大学 Face super-resolution method based on supervised pixel-by-pixel generation countermeasure network
CN110634053A (en) * 2019-09-24 2019-12-31 广东爱贝佳科技有限公司 Active interactive intelligent selling system and method
CN111368685A (en) * 2020-02-27 2020-07-03 北京字节跳动网络技术有限公司 Key point identification method and device, readable medium and electronic equipment
CN111402122A (en) * 2020-03-20 2020-07-10 北京字节跳动网络技术有限公司 Image mapping processing method and device, readable medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zheng Hongda et al.: "Artificial Intelligence Video Analysis: The Rise of Intelligent Terminals", Zhejiang Gongshang University Press, 31 August 2017, pages 62-66 *


Similar Documents

Publication Publication Date Title
CN109214343B (en) Method and device for generating face key point detection model
CN109816589B (en) Method and apparatus for generating manga style transfer model
EP3872699B1 (en) Face liveness detection method and apparatus, and electronic device
CN109800732B (en) Method and device for generating cartoon head portrait generation model
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
CN107622240B (en) Face detection method and device
CN112132847A (en) Model training method, image segmentation method, apparatus, electronic device and medium
CN112070022A (en) Face image recognition method and device, electronic equipment and computer readable medium
CN108509994B (en) Method and device for clustering character images
US11087140B2 (en) Information generating method and apparatus applied to terminal device
CN112668588B (en) Parking space information generation method, device, equipment and computer-readable medium
CN113688928B (en) Image matching method and device, electronic equipment and computer readable medium
CN112149615A (en) Face living body detection method, device, medium and electronic equipment
CN114120454A (en) Training method and device of living body detection model, electronic equipment and storage medium
WO2021034864A1 (en) Detection of moment of perception
CN108229375B (en) Method and device for detecting face image
CN111310595B (en) Method and apparatus for generating information
CN110942033B (en) Method, device, electronic equipment and computer medium for pushing information
CN114882308B (en) Biological feature extraction model training method and image segmentation method
CN113361304A (en) Service evaluation method and device based on expression recognition and storage equipment
CN114821026A (en) Object retrieval method, electronic device, and computer-readable medium
CN111291640A (en) Method and apparatus for recognizing gait
CN116434287B (en) A method, apparatus, electronic device and storage medium for face image detection
CN111860070A (en) Method and apparatus for identifying changed objects
CN112766311B (en) Method and device for testing the robustness of deep learning-based vehicle detection models

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
  Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
  Applicant after: Douyin Vision Co.,Ltd.
  Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
  Applicant before: Tiktok vision (Beijing) Co.,Ltd.
  Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
  Applicant after: Tiktok vision (Beijing) Co.,Ltd.
  Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
  Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
RJ01: Rejection of invention patent application after publication
  Application publication date: 20201211