CN111199176A - Face identity detection method and device

Info

Publication number
CN111199176A
CN111199176A
Authority
CN
China
Prior art keywords
image
face
adjustment
identity
target
Prior art date
Legal status
Granted
Application number
CN201811385951.XA
Other languages
Chinese (zh)
Other versions
CN111199176B (en)
Inventor
熊宇龙
林志
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201811385951.XA priority Critical patent/CN111199176B/en
Publication of CN111199176A publication Critical patent/CN111199176A/en
Application granted granted Critical
Publication of CN111199176B publication Critical patent/CN111199176B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present application provides a face identity detection method and device. The method includes: performing feature point extraction on a face image to be detected to obtain the face feature points of the face image to be detected and the adjacent pixel points corresponding to each face feature point; calculating an image topology operator vector group between each face feature point and its corresponding adjacent pixel points; obtaining, for each face feature point, the adjustment parameter corresponding to the image topology operator vector group of that feature point; performing image adjustment on the face image to be detected based on the obtained adjustment parameters of all the face feature points to obtain a corresponding target adjustment image; and comparing the target adjustment image with stored face identity images to obtain a target identity image that matches the target adjustment image, thereby completing face identity detection of the face image to be detected. The method can reduce the influence of beautification processing on the face identification result and improve the accuracy of face identification.

Description

Face identity detection method and device
Technical Field
The application relates to the technical field of face identity recognition, in particular to a face identity detection method and device.
Background
Image deformation beautification technology (beauty technology) improves the appearance of portrait photographs, but it also poses no small challenge for face identity recognition. The facial features in a beautified face image differ from the real facial features. If face identity recognition is performed directly on a beautified face image to be detected, the face identity image finally output by recognition technology that works by image retrieval, which is intended to indicate the specific identity of the person, may differ greatly from the real face identity image corresponding to that face image, reducing the accuracy of face identity recognition.
Disclosure of Invention
In order to overcome the above defects in the prior art, the present application aims to provide a face identity detection method and apparatus, which can reduce the influence degree of the face beautifying technology on the face identity recognition result and improve the accuracy of face identity recognition.
In a first aspect, an embodiment of the present application provides a face identity detection method applied to an image processing device, where the image processing device stores a correspondence between image topology operator vector groups and adjustment parameters, as well as face identity images used for indicating the specific identities of persons, and the method includes:
extracting feature points of a face image to be detected to obtain face feature points of the face image to be detected and adjacent pixel points corresponding to the face feature points;
calculating an image topological operator vector group between each facial feature point and the corresponding adjacent pixel point;
acquiring an adjustment parameter of each face characteristic point corresponding to the image topological operator vector group of the face characteristic point;
carrying out image adjustment on the face image to be detected based on the obtained adjustment parameters of all the face characteristic points to obtain a corresponding target adjustment image;
and comparing the target adjustment image with the stored face identity image to obtain a target identity image matched with the target adjustment image in all the face identity images so as to finish the face identity detection of the face image to be detected.
In a second aspect, an embodiment of the present application provides a face identity detection apparatus applied to an image processing device, where the image processing device stores a correspondence between image topology operator vector groups and adjustment parameters, as well as face identity images used for indicating the specific identities of persons, and the apparatus includes:
the characteristic point extraction module is used for extracting characteristic points of the face image to be detected to obtain face characteristic points of the face image to be detected and adjacent pixel points corresponding to the face characteristic points;
the vector group calculation module is used for calculating an image topology operator vector group between each facial feature point and the corresponding adjacent pixel point;
the parameter acquisition module is used for acquiring adjustment parameters of each face characteristic point, which correspond to the image topological operator vector group of the face characteristic point;
the image adjusting module is used for carrying out image adjustment on the face image to be detected based on the obtained adjusting parameters of all the face characteristic points to obtain a corresponding target adjusting image;
and the identity comparison module is used for carrying out image comparison on the target adjustment image and the stored face identity image to obtain a target identity image which is matched with the target adjustment image in all the face identity images so as to finish the face identity detection of the face image to be detected.
Compared with the prior art, the face identity detection method and apparatus provided by the embodiments of the present application have the following beneficial effects: the face identity detection method can reduce the influence of beautification technology on face identity recognition results and improve the accuracy of face identity recognition. First, the method extracts the face feature points of the face image to be detected and the adjacent pixel points corresponding to each face feature point. Then, by calculating the image topology operator vector group between each face feature point and its corresponding adjacent pixel points and consulting the correspondence between vector groups and adjustment parameters stored in the image processing device, the method obtains, for each face feature point, the adjustment parameter used to reduce the beautification effect. Next, the method performs image adjustment on the face image to be detected based on the obtained adjustment parameters to obtain a corresponding target adjustment image, thereby reducing any beautification effect that may be present in the face image to be detected. Finally, the method compares the target adjustment image with the face identity images stored in the image processing device to obtain the target identity image that matches the target adjustment image and indicates the specific identity of the person corresponding to the face image to be detected, which reduces the influence of beautification technology on face identity recognition and achieves highly accurate face identity detection.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments are briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope of the claims of the present application, and it is obvious for those skilled in the art that other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 2 is a schematic flow chart of a face identity detection method according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating the sub-steps included in step S230 shown in fig. 2.
Fig. 4 is a flowchart illustrating the sub-steps included in step S240 shown in fig. 2.
Fig. 5 is a flowchart illustrating the sub-steps included in step S250 shown in fig. 2.
Fig. 6 is another schematic flow chart of the face identity detection method according to the embodiment of the present application.
Fig. 7 is a schematic block diagram of a face identity detection apparatus according to an embodiment of the present application.
Fig. 8 is another schematic block diagram of a face identity detection apparatus according to an embodiment of the present application.
Reference numerals: 10 - image processing device; 11 - memory; 12 - processor; 13 - communication unit; 100 - face identity detection apparatus; 110 - feature point extraction module; 120 - vector group calculation module; 130 - parameter acquisition module; 140 - image adjustment module; 150 - identity comparison module; 160 - relationship configuration module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a block diagram of an image processing apparatus 10 according to an embodiment of the present disclosure. In this embodiment of the application, the image processing apparatus 10 may be configured to perform face identity detection with high accuracy on a face image to be detected, and the image processing apparatus 10 includes a face identity detection device 100, a memory 11, a processor 12, and a communication unit 13. The memory 11, the processor 12 and the communication unit 13 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The face identity detection device 100 comprises at least one software functional module capable of being stored in the memory 11 in the form of software or firmware (firmware), and the processor 12 executes various functional applications and data processing by running the corresponding software functional module of the face identity detection device 100 stored in the memory 11. In the present embodiment, the image processing apparatus 10 may be, but is not limited to, a server, a mobile terminal, or the like.
In this embodiment, the memory 11 may be configured to store a feature point extraction model for extracting the feature points of a face in a face image, where the face feature points include any one or a combination of the following: the two corner points and the center point of each eyebrow, the two corner points of each eye, the center points of the upper and lower eyelids, the center point of each eye, the nose tip point, the nose apex point, the two nose wing points, the nasal septum point, the two mouth corner points, the mouth center point, the uppermost point of the upper lip, and the lowermost point of the lower lip. Using the feature point extraction model, the image processing device 10 may extract from a face image all the face feature points that are actually present in the image and fall within the coverage of the model. The feature point extraction model is obtained by training, based on a Convolutional Neural Network (CNN), on sample face images whose face feature points have been manually annotated; it may be obtained by the image processing device 10 through its own sample training, or obtained from an external device and stored in the memory 11.
In this embodiment, the memory 11 is further configured to store the correspondence between image topology operator vector groups and adjustment parameters. An adjustment parameter expresses the image processing parameters a pixel point requires in order to reduce the corresponding beautification effect; an image topology operator vector group expresses, in the RGB color space, the image topological relationship between a pixel point and its adjacent pixel points. The image topology operator may be expressed as a Laplacian operator or a gradient operator, and the specific operator can be configured differently as required. In the embodiments of the present application, the Laplacian operator is taken as an example in the following description. It should be understood, however, that the Laplacian is only one form of the image topology operator; the operator is not limited to the Laplacian, and the related operations when the operator is expressed by other operators are similar to those for the Laplacian.
In this embodiment, when the image topology operator is expressed by the Laplacian operator, the Laplacian operator vector group represents the transformation difference, in the RGB color space, between the color vectors (the R (Red), G (Green), and B (Blue) vectors) of an image pixel point and those of its adjacent pixel points, and can reflect the degree of beautification applied to that pixel point. If the image processing device 10 computes the Laplacian operator vector group using a 4-neighborhood system for an image pixel point, it performs the calculation over the pixel point and the 4 pixel points adjacent to it; if it uses an 8-neighborhood system, it performs the calculation over the pixel point and the 8 pixel points adjacent to it. In one implementation of this embodiment, the image processing device 10 computes the Laplacian operator vector group using the 8-neighborhood system.
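The patent does not give an explicit formula for the vector group, so the sketch below is an illustrative reading only: it takes the group to be the set of per-neighbor RGB difference vectors around a feature point, with their per-channel sum serving as a discrete 8-neighborhood Laplacian. All names in the snippet are hypothetical.

```python
import numpy as np

# Illustrative sketch only: assumes the "Laplacian operator vector group" of a feature
# point is the set of RGB difference vectors between the point and each of its 8
# neighbours, plus their per-channel sum (a discrete 8-neighbourhood Laplacian).
OFFSETS_8 = [(-1, -1), (-1, 0), (-1, 1),
             (0, -1),           (0, 1),
             (1, -1),  (1, 0),  (1, 1)]

def laplacian_vector_group(image, row, col):
    """image: H x W x 3 RGB array; (row, col): coordinates of a face feature point."""
    center = image[row, col].astype(np.float64)               # (R, G, B) of the feature point
    diffs = []
    for dr, dc in OFFSETS_8:
        r, c = row + dr, col + dc
        if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
            diffs.append(image[r, c].astype(np.float64) - center)  # one RGB difference per neighbour
    group = np.stack(diffs)                                    # the "vector group", shape (<=8, 3)
    laplacian = group.sum(axis=0)                              # per-channel discrete Laplacian
    return group, laplacian
```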
The correspondence between Laplacian operator vector groups and adjustment parameters can be represented by a parameter comparison model trained on the basis of a convolutional neural network. The training samples used by the parameter comparison model are the Laplacian operator vector groups computed for face image feature points as the images go from unbeautified to beautified under different beautification parameters. During training, the model compares the Laplacian operator vector group obtained under a given beautification parameter with the vector group obtained without beautification (comparing both vector values and vector directions) and generates a corresponding adjustment parameter based on the comparison result. The vector group obtained under the beautification parameter is then transformed using the generated adjustment parameter, the overall difference between the transformed vector group and the unbeautified vector group is computed by the least squares method, and the adjustment parameter is adjusted in reverse based on this overall difference. This continues until the overall difference between the transformed vector group and the unbeautified vector group is minimal, and the adjustment parameter at that minimum is taken as the adjustment parameter corresponding to the Laplacian operator vector group. The beautification parameters include any one or a combination of sub-parameters such as whitening operator parameters, skin smoothing (peeling) operator parameters, stretching operator parameters, and skin softening operator parameters; the adjustment parameters include any one or a combination of parameters such as image sharpening parameters, image stretching parameters, and image noise parameters. The parameter comparison model may be obtained by the image processing device 10 through its own sample training, or obtained from an external device and stored in the memory 11.
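As a toy illustration of the least-squares step just described (not the patent's actual procedure, which relies on a CNN-based parameter comparison model and an unspecified vector transformation), one could fit a per-channel scale and offset that minimizes the overall difference between the beautified and unbeautified vector groups:

```python
import numpy as np

# Toy sketch of the least-squares fitting step. It assumes the transformation controlled
# by an "adjustment parameter" is a per-channel scale and offset (an assumption, not the
# patent's definition) and solves for the values that minimise the overall difference
# to the unbeautified vector group.
def fit_adjustment_parameter(group_beautified, group_original):
    """Both inputs: (N, 3) arrays of Laplacian vectors with / without beautification."""
    scales, offsets = [], []
    for ch in range(3):                                        # one 1-D least-squares fit per colour channel
        x = group_beautified[:, ch]
        y = group_original[:, ch]
        a = np.vstack([x, np.ones_like(x)]).T                  # design matrix [x, 1]
        (scale, offset), *_ = np.linalg.lstsq(a, y, rcond=None)
        scales.append(scale)
        offsets.append(offset)
    return np.array(scales), np.array(offsets)                 # stands in for the adjustment parameter
```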
In this embodiment, the memory 11 may also be used to store a face identity image indicating a specific identity of a person. The memory 11 may be, but is not limited to, a random access memory, a read only memory, a programmable read only memory, an erasable programmable read only memory, an electrically erasable programmable read only memory, and the like. The memory 11 may also be used to store various applications that the processor 12 executes upon receiving execution instructions. Further, the software programs and modules in the memory 11 may also include an operating system, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.), and may communicate with various hardware or software components to provide an operating environment for other software components.
In this embodiment, the processor 12 may be an integrated circuit chip having signal processing capabilities. The Processor 12 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Network Processor (NP), and the like, and may implement or execute each method, step, and logic block disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In the present embodiment, the communication unit 13 is configured to establish a communication connection between the image processing apparatus 10 and another external apparatus via a network, and perform data transmission via the network. For example, the image processing apparatus 10 may obtain, from the external apparatus through the communication unit 13, a face image to be detected that needs face identity detection, where the face image to be detected may be a face image subjected to face beautification processing, or may be an original face image without face beautification.
In this embodiment, the image processing apparatus 10 can reduce the influence of the beauty technique on the face identification result and improve the accuracy of the face identification through the face identification device 100 stored in the memory 11.
It is to be understood that the configuration shown in fig. 1 is only a schematic configuration of the image processing apparatus 10, and the image processing apparatus 10 may further include more or less components than those shown in fig. 1, or have a different configuration from that shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Fig. 2 is a schematic flow chart of a face identity detection method according to an embodiment of the present application. In the embodiment of the present application, the face identity detection method is applied to the image processing device 10, and the image processing device 10 stores a corresponding relationship between an image topology operator vector group and an adjustment parameter, and a face identity image for indicating a specific identity of a person. The following describes the specific flow and steps of the face identity detection method shown in fig. 2 in detail.
Step S210, feature point extraction is carried out on the face image to be detected, and the face feature point of the face image to be detected and adjacent pixel points corresponding to the face feature point are obtained.
In this embodiment, after the image processing device 10 acquires the face image to be detected, it may perform feature point extraction on the face image to be detected based on the feature point extraction model to obtain the face feature points of the face image to be detected, and at the same time obtain the adjacent pixel points corresponding to each face feature point in the face image to be detected.
Step S220, calculating an image topology operator vector group between each facial feature point and the corresponding adjacent pixel point.
In this embodiment, taking laplacian as an expression form of the image topology operator as an example, the image processing apparatus 10 calculates a laplacian vector group between each human face feature point and a corresponding adjacent pixel point in the RGB color space when obtaining the human face feature point in the human face image to be detected and the adjacent pixel point of each human face feature point.
In step S230, the adjustment parameters of each facial feature point corresponding to the image topology operator vector set of the facial feature point are obtained.
Optionally, please refer to fig. 3, which is a flowchart illustrating the sub-steps included in step S230 shown in fig. 2. In this embodiment, the correspondence between the image topology operator vector sets and the adjustment parameters includes adjustment parameters corresponding to different image topology operator vector sets, and the step S230 includes sub-steps S231 and S232.
In sub-step S231, a corresponding matched target vector set is searched for from all image topology operator vector sets stored in the image processing device 10 according to the image topology operator vector set of each facial feature point.
In the substep S232, if the search is successful, the adjustment parameter corresponding to the searched target vector set is used as the adjustment parameter of the face feature point.
In the present embodiment, taking laplacian as an expression form of the image topology operator as an example, the image processing apparatus 10 searches the parameter comparison model for a target vector group having the same vector value and vector direction as the laplacian vector group of each face feature point by inputting the laplacian vector group of each face feature point to the parameter comparison model, and takes an adjustment parameter corresponding to the target vector group in the parameter comparison model as an adjustment parameter of each face feature point. It is understood that the laplacian is only one expression form of the image topological operator, the expression form of the image topological operator is not limited to the laplacian, and the related operation when the image topological operator is expressed by other operators is similar to the related operation when the image topological operator is expressed by the laplacian.
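The lookup in sub-steps S231 and S232 is realized in the patent through the trained parameter comparison model. Purely as an illustration, the same idea can be sketched as a nearest-match search over a stored list of (vector group, adjustment parameter) pairs, where a failed search returns nothing:

```python
import numpy as np

# Plain illustration of the lookup in sub-steps S231 / S232. The stored correspondence is
# assumed (for the sketch only) to be a list of (vector_group, adjustment_parameter)
# pairs; the match is the stored group closest to the query within a small tolerance.
def find_adjustment_parameter(query_group, stored_pairs, tol=1e-3):
    """query_group: (N, 3) array; stored_pairs: iterable of ((N, 3) array, parameter)."""
    best_param, best_dist = None, float("inf")
    for stored_group, param in stored_pairs:
        if stored_group.shape != query_group.shape:
            continue
        dist = np.linalg.norm(stored_group - query_group)      # reflects both value and direction differences
        if dist < best_dist:
            best_dist, best_param = dist, param
    return best_param if best_dist <= tol else None            # None means the search failed
```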
And S240, carrying out image adjustment on the face image to be detected based on the acquired adjustment parameters of all the face characteristic points to obtain a corresponding target adjustment image.
In this embodiment, after obtaining the adjustment parameter of each face feature point in the face image to be detected, the image processing device 10 performs image adjustment on the face image to be detected according to the degree to which each pixel point in the image is influenced by the adjustment parameters of all the face feature points, and obtains a target adjustment image, corresponding to the face image to be detected, in which any beautification effect that may be present has been reduced. Here, the adjustment influence degree indicates the extent to which the adjustment parameter at each face feature point participates in the adjustment of the other pixel points.
Optionally, please refer to fig. 4, which is a flowchart illustrating the sub-steps included in step S240 shown in fig. 2. In this embodiment, the step S240 may include a sub-step S241 and a sub-step S242.
And a substep S241 of establishing a blending field among all the pixel points in the face image to be detected, and uniformly spreading the adjustment parameters of the corresponding face characteristic points to other pixel points in the blending field at the positions corresponding to the face characteristic points in the blending field.
And a substep S242, performing pixel adjustment on each pixel point in the blending field according to the adjustment parameter correspondingly obtained by the pixel point, and correspondingly generating the target adjustment image.
In this embodiment, the blending field is a source-free and curl-free (irrotational) field. When each face feature point propagates its adjustment parameter from the corresponding position in the blending field, the adjustment parameter is radiated and propagated uniformly to the surrounding pixel points with the face feature point as the starting point, so that each pixel point in the blending field receives adjustment parameters from the radiated propagation of every face feature point. The image processing device 10 performs pixel adjustment and transformation on each pixel point according to the adjustment parameters that the pixel point has received, and thereby obtains the adjusted and transformed target adjustment image corresponding to the face image to be detected.
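The construction of the source-free, curl-free blending field is not detailed in the patent. The sketch below only approximates the uniform radiation idea with inverse-distance weighting, so that each feature point's adjustment parameters decay evenly in all directions and every pixel receives a blended parameter; the function and its arguments are hypothetical.

```python
import numpy as np

# Rough sketch of spreading each feature point's adjustment parameters to every pixel.
# The patent builds a source-free, curl-free blending field; here that is only
# approximated by inverse-distance weighting, which likewise radiates each parameter
# outward from its feature point and decays evenly in all directions.
def propagate_adjustment(shape, feature_points, params, eps=1e-6):
    """shape: (H, W); feature_points: (K, 2) row/col array; params: (K, P) array."""
    h, w = shape
    rows, cols = np.mgrid[0:h, 0:w]
    weighted = np.zeros((h, w, params.shape[1]))
    weight_sum = np.zeros((h, w))
    for (fr, fc), p in zip(feature_points, params):
        dist = np.sqrt((rows - fr) ** 2 + (cols - fc) ** 2) + eps
        wgt = 1.0 / dist                                        # uniform radial decay from the feature point
        weighted += wgt[..., None] * p
        weight_sum += wgt
    return weighted / weight_sum[..., None]                     # per-pixel blended adjustment parameters
```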
Step S250, performing image comparison between the target adjustment image and the stored face identity image to obtain a target identity image matched with the target adjustment image in all the face identity images, so as to complete face identity detection on the face image to be detected.
In this embodiment, after obtaining the target adjustment image with the reduced beauty treatment effect corresponding to the face image to be detected, the image processing device 10 performs image comparison between the target adjustment image and the face identity image stored in the image processing device 10 to obtain a target identity image, which is matched with the target adjustment image and used for indicating the specific identity of the person corresponding to the face image to be detected, in all the stored face identity images, so as to reduce the influence of the beauty technology on face identity recognition, and implement face identity detection with high accuracy.
Optionally, please refer to fig. 5, which is a flowchart illustrating the sub-steps included in step S250 shown in fig. 2. In the embodiment of the present application, the step S250 may include a sub-step S251 and a sub-step S252.
And a substep S251 of calculating an image confidence between the target adjustment image and each stored face identity image.
And a substep S252, selecting the face identity image with the maximum image confidence as the target identity image corresponding to the face image to be detected.
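The patent does not define how the image confidence is computed. The following sketch assumes it is the cosine similarity between face embeddings produced by a user-supplied embedding function embed (hypothetical, e.g. the feature output of a recognition network), and selects the stored identity image with the maximum confidence as in sub-steps S251 and S252:

```python
import numpy as np

# Sketch of sub-steps S251 / S252 under an assumed confidence measure: cosine similarity
# between embeddings returned by `embed`, a hypothetical face-embedding function.
def match_identity(target_adjusted, identity_images, embed):
    """Return (index, confidence) of the stored identity image with the highest confidence."""
    target_vec = np.asarray(embed(target_adjusted), dtype=np.float64)
    best_idx, best_conf = -1, -1.0
    for idx, identity_image in enumerate(identity_images):
        vec = np.asarray(embed(identity_image), dtype=np.float64)
        conf = float(np.dot(target_vec, vec) /
                     (np.linalg.norm(target_vec) * np.linalg.norm(vec) + 1e-12))
        if conf > best_conf:
            best_idx, best_conf = idx, conf
    return best_idx, best_conf
```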
Fig. 6 is a schematic flow chart of a face identity detection method according to an embodiment of the present application. In this embodiment of the application, the face identity detection method may further include step S209 before step S210.
Step S209, configuring and storing the corresponding relationship between the image topology operator vector group and the adjustment parameters.
In this embodiment, taking laplacian as an expression form of the image topology operator as an example, the image processing apparatus 10 may configure the corresponding relationship between the laplacian vector group and the adjustment parameter by training a parameter comparison model between the laplacian vector group and the adjustment parameter by using samples based on a convolutional neural network, and store the corresponding relationship in the memory 11.
When training the parameter comparison model, the image processing device 10 records each beautification parameter and the Laplacian operator vector groups of the face image feature points under the different beautification parameters. It then compares the Laplacian operator vector group under a beautification parameter with the Laplacian operator vector group under the unbeautified condition and preliminarily generates an adjustment parameter for that beautification parameter based on the comparison result. The Laplacian operator vector group under the beautification parameter is transformed using the generated adjustment parameter, the overall difference between the transformed vector group and the unbeautified vector group is then computed by the least squares method, and the adjustment parameter is adjusted in reverse based on this overall difference until the overall difference between the transformed vector group and the unbeautified vector group is minimal; the adjustment parameter at that minimum is taken as the adjustment parameter corresponding to the Laplacian operator vector group.
Fig. 7 is a block diagram of a face identity detection apparatus 100 according to an embodiment of the present application. In the embodiment of the present application, the face identity detection apparatus 100 is applied to the image processing device 10 shown in fig. 1, and the image processing device 10 stores therein a corresponding relationship between an image topology operator vector set and an adjustment parameter, and a face identity image for indicating a specific identity of a person. The face identity detection device 100 includes a feature point extraction module 110, a vector group calculation module 120, a parameter acquisition module 130, an image adjustment module 140, and an identity comparison module 150.
The feature point extraction module 110 is configured to perform feature point extraction on a face image to be detected to obtain a face feature point of the face image to be detected and an adjacent pixel point corresponding to the face feature point.
In this embodiment, the feature point extraction module 110 may execute step S210 shown in fig. 2, and the specific execution process may refer to the above detailed description of step S210.
The vector group calculating module 120 is configured to calculate an image topology operator vector group between each facial feature point and the corresponding adjacent pixel point.
In this embodiment, the vector group calculating module 120 may execute step S220 shown in fig. 2, and the specific execution process may refer to the above detailed description of step S220.
The parameter obtaining module 130 is configured to obtain an adjustment parameter of each facial feature point corresponding to the image topology operator vector set of the facial feature point.
In this embodiment, the parameter obtaining module 130 may execute step S230 shown in fig. 2 and the sub-steps S231 and S232 shown in fig. 3, and the specific execution process may refer to the above detailed description of step S230, sub-step S231, and sub-step S232.
The image adjusting module 140 is configured to perform image adjustment on the face image to be detected based on the obtained adjustment parameters of all the face feature points, so as to obtain a corresponding target adjustment image.
In this embodiment, the image adjustment module 140 may execute the step S240 shown in fig. 2, and the sub-steps S241 and S242 shown in fig. 4, and the specific execution process may refer to the above detailed description of the step S240, the sub-step S241, and the sub-step S242.
The identity comparison module 150 is configured to perform image comparison on the target adjustment image and the stored face identity image to obtain a target identity image matched with the target adjustment image in all the face identity images, so as to complete face identity detection on the face image to be detected.
In this embodiment, the identity comparison module 150 may execute step S250 shown in fig. 2, and sub-steps S251 and S252 shown in fig. 5, and the specific execution process may refer to the above detailed description of step S250, sub-step S251, and sub-step S252.
Fig. 8 is a schematic block diagram of a face identity detection apparatus 100 according to an embodiment of the present application. In this embodiment, the face identity detection apparatus 100 may further include a relationship configuration module 160.
The relationship configuration module 160 is configured to configure and store the corresponding relationship between the image topology operator vector group and the adjustment parameters.
In this embodiment, the relationship configuration module 160 may execute step S209 shown in fig. 6, and the specific execution process may refer to the above detailed description of step S209.
In summary, in the face identity detection method and apparatus provided by the embodiments of the present application, the face identity detection method can reduce the influence of beautification technology on face identity recognition results and improve the accuracy of face identity recognition. First, the method extracts the face feature points of the face image to be detected and the adjacent pixel points corresponding to each face feature point. Then, by calculating the image topology operator vector group between each face feature point and its corresponding adjacent pixel points and consulting the correspondence between vector groups and adjustment parameters stored in the image processing device, the method obtains, for each face feature point, the adjustment parameter used to reduce the beautification effect. Next, the method performs image adjustment on the face image to be detected based on the obtained adjustment parameters to obtain a corresponding target adjustment image, thereby reducing any beautification effect that may be present in the face image to be detected. Finally, the method compares the target adjustment image with the face identity images stored in the image processing device to obtain the target identity image that matches the target adjustment image and indicates the specific identity of the person corresponding to the face image to be detected, which reduces the influence of beautification technology on face identity recognition and achieves highly accurate face identity detection.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A face identity detection method is applied to image processing equipment, wherein the image processing equipment stores a corresponding relation between an image topology operator vector group and an adjustment parameter and a face identity image for indicating a specific identity of a person, and the method comprises the following steps:
extracting feature points of a face image to be detected to obtain face feature points of the face image to be detected and adjacent pixel points corresponding to the face feature points;
calculating an image topological operator vector group between each facial feature point and the corresponding adjacent pixel point;
acquiring an adjustment parameter of each face characteristic point corresponding to the image topological operator vector group of the face characteristic point;
carrying out image adjustment on the face image to be detected based on the obtained adjustment parameters of all the face characteristic points to obtain a corresponding target adjustment image;
and comparing the target adjustment image with the stored face identity image to obtain a target identity image matched with the target adjustment image in all the face identity images so as to finish the face identity detection of the face image to be detected.
2. The method according to claim 1, wherein the correspondence between the image topological operator vector sets and the adjustment parameters comprises adjustment parameters corresponding to different image topological operator vector sets, and the step of obtaining the adjustment parameters of each face feature point corresponding to the image topological operator vector set of the face feature point comprises:
searching a corresponding matched target vector group in all image topological operator vector groups stored in the image processing equipment according to the image topological operator vector group of each facial feature point;
if the search is successful, the adjustment parameter corresponding to the searched target vector group is used as the adjustment parameter of the face characteristic point.
3. The method according to claim 1, wherein the step of performing image adjustment on the face image to be detected based on the obtained adjustment parameters of all the face feature points to obtain a corresponding target adjustment image comprises:
establishing a blending field among all pixel points in the face image to be detected, and uniformly transmitting the adjustment parameters corresponding to the face characteristic points to other pixel points in the blending field at positions corresponding to the face characteristic points in the blending field;
and carrying out pixel adjustment on each pixel point according to the adjustment parameter correspondingly obtained by the pixel point in the blending field, and correspondingly generating the target adjustment image.
4. The method according to claim 1, wherein the step of comparing the target adjustment image with stored face identity images to obtain a target identity image matching the target adjustment image in all the face identity images comprises:
calculating image confidence between the target adjustment image and each stored face identity image;
and selecting the face identity image with the maximum image confidence as a target identity image corresponding to the face image to be detected.
5. The method according to any one of claims 1-4, further comprising:
and configuring and storing the corresponding relation between the image topology operator vector group and the adjustment parameters.
6. A human face identity detection device is applied to image processing equipment, wherein the image processing equipment stores the corresponding relation between an image topology operator vector group and an adjustment parameter and a human face identity image for indicating the specific identity of a person, and the device comprises:
the characteristic point extraction module is used for extracting characteristic points of the face image to be detected to obtain face characteristic points of the face image to be detected and adjacent pixel points corresponding to the face characteristic points;
the vector group calculation module is used for calculating an image topology operator vector group between each facial feature point and the corresponding adjacent pixel point;
the parameter acquisition module is used for acquiring adjustment parameters of each face characteristic point, which correspond to the image topological operator vector group of the face characteristic point;
the image adjusting module is used for carrying out image adjustment on the face image to be detected based on the obtained adjusting parameters of all the face characteristic points to obtain a corresponding target adjusting image;
and the identity comparison module is used for carrying out image comparison on the target adjustment image and the stored face identity image to obtain a target identity image which is matched with the target adjustment image in all the face identity images so as to finish the face identity detection of the face image to be detected.
7. The apparatus according to claim 6, wherein the correspondence between the image topology operator vector group and the adjustment parameter includes adjustment parameters corresponding to different image topology operator vector groups, and the parameter obtaining module is specifically configured to:
searching a corresponding matched target vector group in all image topological operator vector groups stored in the image processing equipment according to the image topological operator vector group of each facial feature point;
if the search is successful, the adjustment parameter corresponding to the searched target vector group is used as the adjustment parameter of the face characteristic point.
8. The apparatus of claim 6, wherein the image adjustment module is specifically configured to:
establishing a blending field among all pixel points in the face image to be detected, and uniformly transmitting the adjustment parameters corresponding to the face characteristic points to other pixel points in the blending field at positions corresponding to the face characteristic points in the blending field;
and carrying out pixel adjustment on each pixel point according to the adjustment parameter correspondingly obtained by the pixel point in the blending field, and correspondingly generating the target adjustment image.
9. The apparatus of claim 6, wherein the identity comparison module is specifically configured to:
calculating image confidence between the target adjustment image and each stored face identity image;
and selecting the face identity image with the maximum image confidence as a target identity image corresponding to the face image to be detected.
10. The apparatus according to any one of claims 6-9, further comprising:
and the relationship configuration module is used for configuring and storing the corresponding relationship between the image topology operator vector group and the adjustment parameters.
CN201811385951.XA 2018-11-20 2018-11-20 Face identity detection method and device Active CN111199176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811385951.XA CN111199176B (en) 2018-11-20 2018-11-20 Face identity detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811385951.XA CN111199176B (en) 2018-11-20 2018-11-20 Face identity detection method and device

Publications (2)

Publication Number Publication Date
CN111199176A (en) 2020-05-26
CN111199176B CN111199176B (en) 2023-06-20

Family

ID=70747049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811385951.XA Active CN111199176B (en) 2018-11-20 2018-11-20 Face identity detection method and device

Country Status (1)

Country Link
CN (1) CN111199176B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797754A (en) * 2020-06-30 2020-10-20 上海掌门科技有限公司 Image detection method, device, electronic equipment and medium
CN112101296A (en) * 2020-10-14 2020-12-18 杭州海康威视数字技术股份有限公司 Face registration method, face verification method, device and system
CN114758380A (en) * 2022-03-24 2022-07-15 深圳万卡通科技有限公司 Intelligent community face recognition gate system and method thereof

Citations (2)

Publication number Priority date Publication date Assignee Title
US20080298704A1 (en) * 2007-05-29 2008-12-04 Hila Nachlieli Face and skin sensitive image enhancement
CN106534661A (en) * 2015-09-15 2017-03-22 中国科学院沈阳自动化研究所 Automatic focus algorithm accumulated based on strongest edge gradient Laplasse operator

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US20080298704A1 (en) * 2007-05-29 2008-12-04 Hila Nachlieli Face and skin sensitive image enhancement
CN106534661A (en) * 2015-09-15 2017-03-22 中国科学院沈阳自动化研究所 Automatic focus algorithm accumulated based on strongest edge gradient Laplasse operator

Non-Patent Citations (1)

Title
熊宇龙: "Research on Fast Fusion Methods and Applications of Geometric Models Based on Harmonic Fields" (《基于调和场的几何模型快速融合方法与应用研究》) *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN111797754A (en) * 2020-06-30 2020-10-20 上海掌门科技有限公司 Image detection method, device, electronic equipment and medium
CN112101296A (en) * 2020-10-14 2020-12-18 杭州海康威视数字技术股份有限公司 Face registration method, face verification method, device and system
CN112101296B (en) * 2020-10-14 2024-03-08 杭州海康威视数字技术股份有限公司 Face registration method, face verification method, device and system
CN114758380A (en) * 2022-03-24 2022-07-15 深圳万卡通科技有限公司 Intelligent community face recognition gate system and method thereof

Also Published As

Publication number Publication date
CN111199176B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
KR102299847B1 (en) Face verifying method and apparatus
US11487995B2 (en) Method and apparatus for determining image quality
TWI714225B (en) Method, device and electronic apparatus for fixation point judgment and computer storage medium thereof
CN110874594B (en) Human body appearance damage detection method and related equipment based on semantic segmentation network
US11403874B2 (en) Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium
CN108491805B (en) Identity authentication method and device
US10599914B2 (en) Method and apparatus for human face image processing
CN109389562B (en) Image restoration method and device
US11163978B2 (en) Method and device for face image processing, storage medium, and electronic device
WO2019100282A1 (en) Face skin color recognition method, device and intelligent terminal
US20160162673A1 (en) Technologies for learning body part geometry for use in biometric authentication
CN107679466B (en) Information output method and device
CN113343826A (en) Training method of human face living body detection model, human face living body detection method and device
Laibacher et al. M2U-Net: Effective and efficient retinal vessel segmentation for resource-constrained environments
CN108427918A (en) Face method for secret protection based on image processing techniques
CN105608448B (en) A method and device for extracting LBP features based on facial key points
CN110399764A (en) Face identification method, device and computer-readable medium
US12026600B2 (en) Systems and methods for target region evaluation and feature point evaluation
CN111199176A (en) Face identity detection method and device
CN113269719A (en) Model training method, image processing method, device, equipment and storage medium
CN114038045A (en) Cross-modal face recognition model construction method and device and electronic equipment
CN115205943A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111507208B (en) An authentication method, device, device and medium based on sclera recognition
CN110097622B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant