
WO2020258119A1 - Face recognition method and apparatus, and electronic device - Google Patents


Info

Publication number
WO2020258119A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
eye image
eye
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2019/093159
Other languages
French (fr)
Chinese (zh)
Inventor
潘雷雷
吴勇辉
范文文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Goodix Technology Co Ltd
Original Assignee
Shenzhen Goodix Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Goodix Technology Co Ltd filed Critical Shenzhen Goodix Technology Co Ltd
Priority to CN201980001099.8A priority Critical patent/CN110462632A/en
Priority to PCT/CN2019/093159 priority patent/WO2020258119A1/en
Publication of WO2020258119A1 publication Critical patent/WO2020258119A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172Classification, e.g. identification
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Definitions

  • This application relates to the field of biometric recognition technology, and more specifically, to a method, device and electronic device for face recognition.
  • Face recognition is a biometric recognition technology based on human facial feature information. A camera collects images or video streams containing human faces; faces are automatically detected and tracked in the images, and the detected faces then undergo image preprocessing, feature extraction, and matching and recognition. The related technology is usually called face recognition or facial recognition. With the rapid development of computer and network technology, face recognition has been widely applied in industries and fields such as smart access control, mobile terminals, public safety, entertainment, and the military.
  • Face recognition generally performs recognition on a two-dimensional (2D) image of the face to determine whether the 2D image shows a specific user's face; it does not determine whether the 2D image comes from a living human face.
  • 2D face recognition based on 2D images therefore has no anti-counterfeiting capability and poor security performance.
  • the embodiments of the present application provide a face recognition method, device, and electronic equipment, which can recognize the authenticity of a human face, thereby improving the security of face recognition.
  • a face recognition method including:
  • the face recognition result is output according to the living body judgment result and the matching result.
  • the present application provides a face recognition solution with anti-counterfeiting function.
  • the face anti-counterfeiting is performed based on the iris features in the first eye image.
  • the feature template matching is performed according to the first target image to determine whether it is a user, thereby greatly improving the security of the face recognition device and electronic equipment.
  • the outputting the face recognition result according to the living body judgment result and the matching result includes:
  • When the matching result is successful, the face recognition result is output according to the living body judgment result; or, when the living body judgment result is a living body, the face recognition result is output according to the matching result; or, when the matching result is failure or the living body judgment result is non-living body, the face recognition result is output.
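One simple policy consistent with this paragraph, requiring both a successful template match and a living-body judgment before recognition succeeds, can be sketched as follows (the function name and string results are illustrative assumptions, not from the application):

```python
def face_recognition_result(matching_ok: bool, is_living: bool) -> str:
    """Combine the matching result and the living-body judgment result.

    A failed template match OR a non-living judgment rejects the target;
    only a successful match on a living face is accepted.
    """
    if matching_ok and is_living:
        return "success"
    return "failure"

# Either check failing rejects the recognition target.
assert face_recognition_result(True, True) == "success"
assert face_recognition_result(True, False) == "failure"
assert face_recognition_result(False, True) == "failure"
```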
  • the performing feature template matching according to the first target image and outputting the matching result includes:
  • the acquiring the first target image and the first eye image of the first recognition target includes:
  • a first target image of the first recognition target is acquired, and the first eye image is acquired based on the first target image.
  • the first eye image is a two-dimensional infrared image.
  • the first eye image is a human eye area image or an iris area image including the iris.
  • the performing an iris-based face anti-counterfeiting judgment according to the first eye image includes:
  • the iris-based face anti-counterfeiting judgment is performed.
  • the performing an iris-based face anti-counterfeiting judgment according to the first optimized eye image includes:
  • the first eye image includes a first left eye image and/or a first right eye image
  • The performing image processing on the first eye image using the histogram equalization method to obtain the first optimized eye image includes:
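The histogram-equalization step can be sketched as follows. This is a generic 8-bit grayscale implementation in NumPy; the application does not specify the exact formula, so the mapping below is an illustrative assumption:

```python
import numpy as np

def equalize_histogram(eye_image: np.ndarray) -> np.ndarray:
    """Histogram-equalize an 8-bit grayscale eye image.

    Spreads the gray levels across the full 0..255 range so that
    low-contrast iris texture is easier for a classifier to use.
    """
    hist = np.bincount(eye_image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[eye_image]

# A small synthetic low-contrast "eye image": values 100 and 110 only.
img = np.full((4, 4), 100, dtype=np.uint8)
img[1:3, 1:3] = 110
out = equalize_histogram(img)
# After equalization the two levels are stretched to 0 and 255.
```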
  • the first eye image includes: the first left eye image or the first right eye image;
  • the neural network includes: a first flattened layer, at least one first fully connected layer, and at least one first excitation layer.
  • the classification processing of the first optimized eye image through a neural network includes:
  • nonlinearization processing or classification processing is performed on the plurality of characteristic constants.
  • the neural network includes: the first flattened layer, two first fully connected layers, and two first excitation layers.
  • the excitation functions in the two first excitation layers are the rectified linear unit (ReLU) function and the Sigmoid function, respectively.
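As an illustration only, the flatten, fully-connected, ReLU, fully-connected, Sigmoid pipeline described above can be sketched as a NumPy forward pass. The weights, hidden size, and input size below are placeholder assumptions, not trained parameters from the application:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def liveness_score(eye_image, w1, b1, w2, b2):
    """Flatten -> FC -> ReLU -> FC -> Sigmoid, giving one liveness score."""
    x = eye_image.ravel().astype(np.float64)   # first flattened layer
    h = relu(w1 @ x + b1)                      # first fully connected + ReLU
    return sigmoid(w2 @ h + b2)                # second fully connected + Sigmoid

# Placeholder shapes: a 32x32 eye crop, 16 hidden units, 1 output score.
w1 = rng.normal(0, 0.01, (16, 32 * 32)); b1 = np.zeros(16)
w2 = rng.normal(0, 0.01, (1, 16));       b2 = np.zeros(1)

score = liveness_score(rng.random((32, 32)), w1, b1, w2, b2)
# score lies in (0, 1); a threshold would decide living vs. non-living.
```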
  • the first eye image includes: the first left eye image and the first right eye image;
  • the neural network includes a first network, a second network, and a third network
  • the first network includes: a second flattening layer, at least one second fully connected layer, and at least one second excitation layer;
  • the second network includes: a third flattening layer, at least one third fully connected layer, and at least one third excitation layer;
  • the third network includes: at least one fourth fully connected layer and at least one fourth excitation layer.
  • the classification processing of the first optimized eye image through a neural network includes:
  • the first network includes: the second flattened layer, two second fully connected layers, and two second excitation layers;
  • the second network includes: the third flattening layer, two third fully connected layers, and two third excitation layers;
  • the third network includes: a fourth fully connected layer and a fourth excitation layer.
  • the excitation functions in the two second excitation layers are the rectified linear unit (ReLU) function and the Sigmoid function, respectively; and/or,
  • the excitation functions in the two third excitation layers are the rectified linear unit (ReLU) function and the Sigmoid function, respectively; and/or,
  • the excitation function in the fourth excitation layer is a rectified linear unit (ReLU) function.
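A minimal sketch of this two-branch arrangement (one branch per eye, fused by a third network) is shown below; all shapes and weights are illustrative assumptions rather than the application's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def branch(eye, w_a, w_b):
    """Flatten -> FC + ReLU -> FC + Sigmoid: one per-eye feature vector."""
    h = relu(w_a @ eye.ravel())
    return sigmoid(w_b @ h)

def fuse(f_left, f_right, w_c):
    """Third network: FC + ReLU over the concatenated per-eye features."""
    return relu(w_c @ np.concatenate([f_left, f_right]))

d = 32 * 32
w_a_l, w_a_r = rng.normal(0, 0.01, (2, 16, d))   # left/right first FC
w_b_l, w_b_r = rng.normal(0, 0.01, (2, 8, 16))   # left/right second FC
w_c = rng.normal(0, 0.01, (2, 16))               # fusion FC, 2 outputs

left, right = rng.random((2, 32, 32))
logits = fuse(branch(left, w_a_l, w_b_l), branch(right, w_a_r, w_b_r), w_c)
# logits holds two non-negative scores (e.g. living / non-living).
```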
  • the method further includes:
  • the iris-based face anti-counterfeiting judgment is performed to determine whether the second recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used to establish a face feature template.
  • the second eye image is a second eye infrared image.
  • the method further includes:
  • a second target image of the second recognition target is acquired, the second eye image is acquired based on the second target image, and the face feature template is established based on the second target image.
  • the method further includes:
  • the establishment of a facial feature template based on the second target image includes:
  • a second face image is acquired based on the second target image, and the face feature template is established based on the second face image.
  • the establishing the face feature template based on the second face image includes:
  • the second face image belongs to the face feature template library
  • the second face image is matched with multiple face feature templates in the face feature template library.
  • the iris-based face anti-counterfeiting judgment is performed according to the second eye image, and when it is determined that the second recognition target is a living face, The second face image is established as a face feature template.
  • the matching the second face image with multiple face feature templates in the face feature template library includes:
  • the second face image is established as a face feature template.
  • the iris-based face anti-counterfeiting judgment according to the second eye image includes:
  • iris-based face anti-counterfeiting judgment is performed according to the second eye image.
  • the acquiring the second eye image based on the second target image includes:
  • the second eye image is a human eye area image or an iris area image including the iris.
  • the performing iris-based face anti-counterfeiting judgment according to the second eye image includes:
  • the iris-based face anti-counterfeiting judgment is performed.
  • the performing the iris-based anti-counterfeiting judgment of the human face according to the second optimized eye image includes:
  • the second eye image includes a second left eye image and/or a second right eye image
  • The performing classification processing on the second optimized eye image through the neural network includes:
  • the neural network includes:
  • at least one flattening layer, at least one fully connected layer, and at least one excitation layer.
  • a face recognition device including a processor, configured to execute the face recognition method in the first aspect or any possible implementation of the first aspect.
  • an electronic device including the face recognition device in the second aspect or any possible implementation of the second aspect.
  • In a fourth aspect, a chip is provided, including an input/output interface, at least one processor, at least one memory, and a bus.
  • the at least one memory is used to store instructions
  • the at least one processor is used to call the instructions to execute the method in the first aspect or any possible implementation of the first aspect.
  • a computer-readable medium for storing a computer program, and the computer program includes instructions for executing the above-mentioned first aspect or any possible implementation of the first aspect.
  • A computer program product including instructions is provided.
  • When the computer runs the instructions of the computer program product, the computer executes the face recognition method in the first aspect or any possible implementation of the first aspect.
  • the computer program product can run on the electronic device of the third aspect.
  • Fig. 1(a) is a schematic block diagram of a face recognition device according to an embodiment of the present application.
  • Figure 1(b) is a schematic flowchart of a face recognition method according to an embodiment of the present application.
  • Fig. 1(c) is a schematic block diagram of a convolutional neural network according to an embodiment of the present application.
  • Fig. 2 is a schematic flowchart of a face recognition method according to an embodiment of the present application.
  • Figure 3 (a) is an infrared image of a human face of a three-dimensional model according to an embodiment of the present application.
  • Figure 3 (b) is an infrared image of a human face of a user according to an embodiment of the present application.
  • Fig. 4 is a schematic flowchart of another method for face recognition according to an embodiment of the present application.
  • Fig. 5 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
  • Fig. 6 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
  • Fig. 7 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
  • Fig. 8 is a schematic flowchart of a method for anti-counterfeiting discrimination of a face in a face recognition method according to an embodiment of the present application.
  • Fig. 9 is a schematic flow chart of another method for anti-counterfeiting discrimination of a face in a face recognition method according to an embodiment of the present application.
  • Fig. 10 is a schematic block diagram of a convolutional neural network according to an embodiment of the present application.
  • Fig. 11 is a schematic diagram of a fully connected layer according to an embodiment of the present application.
  • Fig. 12 is a schematic block diagram of another convolutional neural network according to an embodiment of the present application.
  • FIG. 13 is a schematic flow chart of another method for anti-counterfeiting discrimination in a face recognition method according to an embodiment of the present application.
  • Fig. 14 is a schematic block diagram of another convolutional neural network according to an embodiment of the present application.
  • Fig. 15 is a schematic flowchart of a face registration method in a face recognition method according to an embodiment of the present application.
  • Fig. 16 is a schematic flowchart of another face registration method in a face recognition method according to an embodiment of the present application.
  • Fig. 17 is a schematic flowchart of another face registration method in a face recognition method according to an embodiment of the present application.
  • Fig. 18 is a schematic block diagram of a face recognition device according to an embodiment of the present application.
  • Fig. 19 is a schematic block diagram of an electronic device according to an embodiment of the present application.
  • the embodiments of the present application may be applicable to optical face recognition systems, including but not limited to products based on optical face imaging.
  • the optical face recognition system can be applied to various electronic devices with image acquisition devices (such as cameras).
  • the electronic devices can be mobile phones, tablet computers, smart wearable devices, smart door locks, etc.; the embodiments of the present application are not limited in this respect.
  • the size of the sequence numbers of the processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
  • the face recognition device 10 includes an infrared light emitting module 110, an infrared image acquisition module 120 and a processor 130.
  • the infrared light emitting module 110 is used to emit infrared light signals, which may be an infrared light emitting diode (Light Emitting Diode, LED), or may also be a vertical cavity surface emitting laser (Vertical Cavity Surface Emitting Laser, VCSEL), etc.
  • the infrared light emitting light source is not limited in the embodiment of the present application.
  • the infrared image acquisition module 120 may be an infrared camera including an infrared image sensor, which receives infrared light signals and converts them into corresponding electrical signals, thereby generating infrared images.
  • the processor 130 may be a microprocessor unit (MPU), which may control the infrared light emitting module 110 and the infrared image acquisition module 120 to collect facial images and perform facial image recognition.
  • S110 Collect 2D infrared images of the identified target.
  • the infrared light-emitting module 110 emits infrared light, and the infrared light is irradiated on a recognition target.
  • the recognition target may be a user's face, or may be a photo, a 3D model or any other object.
  • the infrared reflected light reflected by the surface of the identified target is received by the infrared image sensor 120 and converted into a 2D infrared image, and the infrared image sensor 120 transmits the 2D infrared image to the processor 130.
  • S120: Face detection. That is, the 2D infrared image is received and it is detected whether a human face is present in the 2D infrared image.
  • For example, face detection may be performed using a single convolutional neural network (CNN).
  • the convolutional neural network mainly includes a convolutional layer 101, an activation layer 102, a pooling layer 103, and a fully connected layer 104.
  • each convolutional layer in the convolutional neural network is composed of several convolutional kernels, and the parameters of each convolutional kernel are optimized through the back propagation algorithm.
  • the purpose of the convolution operation is to extract different features of the input; different convolution kernels extract different feature maps. Deeper convolutional networks can iteratively extract more complex features from low-level features such as edges and lines.
  • the activation layer uses activation functions to introduce nonlinearity into the convolutional neural network; commonly used activation functions include the sigmoid, tanh, and ReLU functions. The convolutional layer usually outputs features of large dimension; the pooling layer cuts the features into several regions and takes the maximum value (max pooling) or average value (average pooling) of each region to obtain new feature maps of smaller dimension. The fully connected layer combines all local features into global features, which are used to compute the final score of each category and thereby determine the category of the input data.
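The activation and pooling operations described above can be illustrated with a small NumPy example (ReLU activation followed by non-overlapping 2x2 max pooling on a toy feature map):

```python
import numpy as np

def relu(x):
    """ReLU activation: zero out negative responses."""
    return np.maximum(0.0, x)

def max_pool_2x2(feature_map):
    """Non-overlapping 2x2 max pooling: halves each spatial dimension."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A toy 4x4 feature map produced by some convolutional layer.
fm = np.array([[-1.0,  2.0,  0.5, -3.0],
               [ 4.0, -2.0,  1.0,  0.0],
               [ 0.0,  1.0, -1.0,  2.0],
               [-5.0,  3.0,  0.0,  1.0]])
pooled = max_pool_2x2(relu(fm))
# ReLU zeroes the negatives; pooling keeps the max of each 2x2 block,
# giving the 2x2 map [[4, 1], [3, 2]].
```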
  • S121: If there is a face in the 2D infrared image, perform face cutting on the 2D infrared image. Specifically, the fully connected layer of the above face detection convolutional neural network is replaced with a convolutional layer, so that the network becomes a fully convolutional network, and passing the 2D infrared image through the fully convolutional network yields a feature map.
  • Each "point" of the feature map corresponds to the probability that the mapped location in the original image belongs to a face; locations whose probability is greater than a set threshold are taken as face candidate frames. The image in a face candidate frame is cut from the 2D infrared image to form a new face 2D infrared image.
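Mapping feature-map probabilities back to candidate frames might look like the following sketch; the stride, box size, and threshold are illustrative assumptions, since the application does not fix these values:

```python
import numpy as np

def candidate_boxes(prob_map, threshold=0.5, stride=16, box=32):
    """Map feature-map cells whose face probability exceeds the threshold
    back to candidate boxes (x, y, w, h) in original-image coordinates.

    `stride` is the assumed downsampling factor of the fully convolutional
    network, and `box` the assumed receptive-field size per cell.
    """
    ys, xs = np.nonzero(prob_map > threshold)
    return [(int(x) * stride, int(y) * stride, box, box) for y, x in zip(ys, xs)]

# Toy 3x3 probability map: only the center cell looks like a face.
prob = np.array([[0.1, 0.2, 0.1],
                 [0.2, 0.9, 0.3],
                 [0.1, 0.4, 0.2]])
boxes = candidate_boxes(prob, threshold=0.5)
# Only the 0.9 cell survives, mapping to the box (16, 16, 32, 32).
```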
  • If there is no face in the 2D infrared image, face detection fails; in other words, the recognition target is not the user, and matching fails.
  • face detection can also be performed by methods such as cascaded CNN, Dlib, and OpenCV, and a new 2D infrared image of the face can be cut accordingly; this is not limited in the embodiments of this application.
  • S130: 2D face recognition. That is, the face 2D infrared image formed in S121 is recognized to determine whether it shows the user's face. For example, a convolutional neural network may be used for face recognition: a face recognition convolutional neural network is first trained to determine whether an image is the user's face, classifying based on multiple feature templates in the template library. The data of the face 2D infrared image is input into the face recognition convolutional neural network; after the features of the image are extracted through steps such as convolution, classification is performed to determine whether the face 2D infrared image matches the feature templates in the template library.
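One common way to implement the template-matching step is cosine similarity between the extracted feature vector and each stored template; the application does not specify the metric, so the sketch below is an assumption:

```python
import numpy as np

def matches_user(face_feature, templates, threshold=0.8):
    """Compare an extracted face feature vector against stored feature
    templates by cosine similarity; matching any template succeeds.

    The threshold value is an assumed parameter, not from the application.
    """
    f = face_feature / np.linalg.norm(face_feature)
    for t in templates:
        if float(f @ (t / np.linalg.norm(t))) >= threshold:
            return True
    return False

# Toy template library with two stored feature templates.
templates = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
assert matches_user(np.array([0.95, 0.05, 0.0]), templates)       # close to template 1
assert not matches_user(np.array([0.0, 0.0, 1.0]), templates)     # matches nothing
```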
  • S140 Determine whether the restart parameter is less than the first threshold.
  • the face recognition device 10 performs face recognition by collecting 2D infrared images of faces and judging whether they match the feature faces in the feature face template library, thereby unlocking the electronic device or an application (APP) on the electronic device. During the unlocking process, the face recognition device 10 performs face recognition based only on the two-dimensional features of the 2D image, so it cannot identify whether the collected 2D infrared image comes from a living human face or from a non-living object such as a photo or video. In other words, the face recognition device 10 has no anti-counterfeiting function: electronic devices and applications can be unlocked with stolen photos, videos, or other material bearing the user's face, so the security performance of the face recognition device and the electronic device is greatly affected.
  • the embodiment of the present application provides a face recognition solution with anti-counterfeiting function.
  • FIG. 2 is a face recognition method 200 provided by an embodiment of this application, including:
  • S220 Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living human face and output a living judgment result;
  • the recognition target is also called the first recognition target, the second recognition target, etc., which can be used to distinguish different target objects.
  • Similarly, the target image and the eye image of the recognition target may also be called the first target image and the first eye image.
  • the recognition target includes, but is not limited to, any objects such as human faces, photos, videos, and three-dimensional models.
  • the recognition target may be a user's face, other people's faces, a user's photo, a curved surface model with photos, etc.
  • the eye image may be an eye area image of a living human face
  • the human eye is composed of parts such as the sclera, iris, pupil, lens, and retina.
  • the iris is an annular part located between the black pupil and the white sclera. It contains many interlaced detailed features such as spots, filaments, coronas, stripes, and crypts. These characteristics make the iris unique and make identification based on it unique. Because the iris uniquely identifies the human body and is difficult to copy or forge, it can be used for both face anti-counterfeiting and face recognition.
  • the iris information in the eye image can be used to distinguish between a living human face and a non-living human face.
  • the eye image can be a color image generated by visible light, an infrared image generated by infrared light, or another type of image; the embodiments of the application do not limit this.
  • the eye image is an infrared image.
  • the following takes the eye image as an infrared image as an example for detailed description.
  • an infrared (Infrared Radiation, IR) image is an image formed from the infrared light signal reflected from the surface of the recognition target. It is expressed as a grayscale image, and the appearance and shape of the recognition target are expressed by the gray levels of the image pixels.
  • the eye image of the recognition target is an eye infrared image including the iris area, for example, an infrared image formed from the reflected infrared light of the eye and the iris of a living human face.
  • because the reflected infrared light of a living iris differs considerably from the infrared light reflected by objects such as photos and models, the iris information of different recognition targets can be distinguished in an eye image including the iris, and a living human face can thereby be distinguished from a non-living one.
  • the eye image of a living human face including the iris differs greatly from the eye image of a non-living human face including the iris.
  • the embodiment of the present application utilizes this difference to perform the anti-counterfeiting judgment of the human face based on the eye image including the iris.
  • the non-living human face includes but is not limited to: user face photos, user face videos, user face photos placed on a three-dimensional curved surface, user face models, and so on.
  • Figure 3 (a) is an infrared image of the face of the three-dimensional model.
  • in this image, the eye region shows only a human eye model, and the "iris" area of the eye model is merely a simulated iris shape; it does not contain the iris information of a living human eye.
  • Figure 3 (b) is an infrared image of the face of the user's living body; the characteristic information of the iris of a real human eye is reflected in the image, which is completely different from Figure 3 (a).
  • the face anti-counterfeiting judgment is performed to determine whether the iris of the recognition target is the iris of a living human face, so as to determine whether the recognition target is a living human face and achieve the anti-counterfeiting effect.
  • the recognition target is a living body face
  • the feature template matching is to match the target image with the feature template of at least one user, and it can be determined whether the target image belongs to the user's image.
  • the feature template is feature data of multiple faces or partial face images of the user under different conditions such as different angles and different environments.
  • the feature template is stored in a face recognition device, in particular, it can be stored in a memory in the device.
  • Combining face anti-counterfeiting judgment and feature template matching judgment can enhance the reliability of the face recognition process and improve safety performance.
  • the face recognition device and face recognition method in Figure 1(a) and Figure 1(b) cannot determine whether the collected 2D image comes from a photo or a real face, so they have no anti-counterfeiting function and cannot reach face anti-counterfeiting level 1 in Table 1.
  • because the characteristic information of the human iris can be obtained from the eye image including the iris, living and non-living human faces can be distinguished, so that face anti-counterfeiting level 5 can be reached, and the security performance of anti-counterfeiting and recognition is greatly improved.
  • the infrared image of the recognition target may be obtained by the infrared image acquisition device, and then the infrared image of the eyes of the recognition target may be obtained from the infrared image of the recognition target.
  • a rough eye area is first detected and cut out from the infrared image of the recognition target, and it is then determined whether an iris is present in the rough eye area, so as to obtain an iris infrared image or an eye infrared image including the iris infrared image; specifically, a left-eye iris infrared image and/or a right-eye iris infrared image is obtained, or a left-eye infrared image including the left-eye iris infrared image and/or a right-eye infrared image including the right-eye iris infrared image.
  • the recognition of the symmetrical area in the infrared image can detect the approximate eye area of the recognition target.
  • a rough eye area is cut out from the face image obtained by face detection and cropping to form an infrared image of the eye.
  • the human face is divided longitudinally into three equal parts, the upper, middle, and lower thirds, and the approximate eye area is located in the band from the bottom 1/5 of the upper third to the top 3/5 of the middle third.
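This facial-thirds heuristic for locating the rough eye band can be sketched as follows; the proportions follow the text, while the crop itself is an illustrative interpretation:

```python
import numpy as np

def rough_eye_band(face_image: np.ndarray) -> np.ndarray:
    """Crop the approximate eye band using the facial-thirds rule:
    from the bottom 1/5 of the upper third down to the top 3/5
    of the middle third of the face."""
    h = face_image.shape[0]
    third = h / 3.0
    top = int(third * 4.0 / 5.0)             # bottom 1/5 of the upper third
    bottom = int(third + third * 3.0 / 5.0)  # top 3/5 of the middle third
    return face_image[top:bottom, :]

# For a 150-pixel-tall face crop, the band covers rows 40..80.
face = np.zeros((150, 100), dtype=np.uint8)
band = rough_eye_band(face)
```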
  • the infrared image of the iris or the infrared image of the eye including the infrared image of the iris may be directly detected and cut out from the infrared image of the recognition target.
  • compared with other areas of the face, the gray value of the iris area of the eye is small and the gray value of the sclera area is large, and the gray-value gradient between the iris area and the sclera area is obvious. Therefore, the eye area and the iris area can be detected by detecting gray-level change features in the infrared image, the coordinates of the eye area and the iris area in the image can be obtained, and an iris infrared image, or an eye infrared image including the iris infrared image, can be cut out accordingly.
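The gray-value cue described above (dark iris against bright sclera) can be illustrated with a minimal sketch; the darkness threshold is an assumed parameter:

```python
import numpy as np

def locate_iris_columns(eye_row: np.ndarray, dark_threshold=80):
    """Find the dark iris span on one image row: iris pixels are dark,
    sclera pixels bright, so the gray-value transition marks the border.

    Returns (first_column, last_column) of the dark span, or None if
    no sufficiently dark pixels are found.
    """
    dark = np.nonzero(eye_row < dark_threshold)[0]
    if dark.size == 0:
        return None
    return int(dark[0]), int(dark[-1])

# Synthetic row: bright sclera (gray 200) with a dark iris (gray 50).
row = np.array([200] * 10 + [50] * 6 + [200] * 10, dtype=np.uint8)
span = locate_iris_columns(row)
# The iris occupies columns 10..15, so span == (10, 15).
```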
  • when the eye infrared image includes an iris image, it is a valid eye infrared image that can be used directly for living face recognition, or the iris infrared image can be cut out of it for living face recognition.
  • when the eye infrared image does not include an iris image, that is, when the user closes their eyes or the recognition target contains no iris or iris pattern, the eye infrared image is invalid and cannot be used for living face recognition.
  • for brevity, the eye infrared image containing the iris image and the iris infrared image are both referred to as the eye infrared image hereinafter.
  • the infrared image of the eye includes: an infrared image of the left eye and/or an infrared image of the right eye.
  • any other algorithm or method capable of recognizing the iris of the eye may also be used to obtain an eye image from the infrared image of the recognition target and determine whether an iris is present in the eye image, which is not limited here.
  • the feature template matching of 2D recognition can be performed based on the acquired 2D target image of the recognition target, and face recognition can be performed, and a face recognition result output, based on the feature template matching result of the 2D recognition and the result of the face anti-counterfeiting judgment.
  • feature template matching is a main step and implementation in 2D recognition.
  • 2D recognition can also be understood as feature template matching in 2D recognition.
  • 2D recognition can be performed first, and on the basis of successful 2D recognition, iris-based face anti-counterfeiting is then performed on the eye infrared image according to the 2D recognition result, making the recognition process safer and more effective.
  • another method 300 for face recognition provided by an embodiment of the present application includes:
  • the 2D recognition is successful, indicating that the target image includes the user's face image.
  • the 2D recognition fails, which means that the target image does not include the user's face image.
  • the 2D recognition may be the same or similar to the 2D recognition process in FIG. 1(b).
  • the first face recognition result may include, but is not limited to, specific information such as failures and non-authenticated users.
  • S360 Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living human face;
  • the second face recognition result may include, but is not limited to, specific information such as success and biometric authentication of the user.
  • the third face recognition result may include, but is not limited to, specific information such as failures and non-living authenticated users.
  • the target image may be an infrared image, a visible light image, or other images.
  • the target image is an infrared image
  • an eye infrared image of the recognition target is acquired based on the infrared image, and an iris-based human face anti-counterfeiting judgment is performed according to the eye infrared image.
  • face anti-counterfeiting can be performed first, and on that basis, 2D recognition can be performed according to the result of the face anti-counterfeiting, which excludes non-living faces in advance and improves recognition efficiency.
  • face recognition method 400 provided by an embodiment of the present application includes:
  • S450 Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living human face;
  • the 2D recognition in this step may be the same as step S340 in FIG. 4, and the specific implementation manner may refer to the foregoing solution, which will not be repeated here.
  • the fourth face recognition result may include, but is not limited to, specific information such as failure and non-living body.
  • the fifth face recognition result may include, but is not limited to, specific information such as success and biometric authentication of the user.
  • the sixth face recognition result may include, but is not limited to, specific information such as failure and a living non-authenticated user.
  • the target image of the recognition target may be acquired through the image acquisition module.
  • the image acquisition module may be the infrared image acquisition module 120 in FIG. 1(a).
  • the infrared image acquisition module may include an infrared photoelectric sensor, wherein the infrared photoelectric sensor includes a plurality of pixel units, and each pixel unit is used to collect the reflected infrared light signal after infrared light is reflected from the surface of the recognition target, and to convert the reflected infrared light signal into a pixel electrical signal corresponding to its light intensity.
  • the electrical signal value of each pixel unit corresponds to one pixel of the infrared image, and its magnitude is expressed as a gray value of the infrared image; therefore, the infrared image formed by the pixel matrix composed of the multiple pixel units can also be expressed as a numerical matrix composed of the gray values of multiple pixels.
  • the gray value range of each pixel is between 0 and 255, the gray value 0 represents black, and the gray value 255 represents white.
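A small illustration of the pixel-matrix view: mapping raw photo-sensor readings (arbitrary units) onto the 0-255 gray range so the pixel matrix doubles as the image matrix. The scaling scheme and `raw_max` parameter are assumptions for illustration; the patent does not specify the sensor's transfer function.

```python
import numpy as np

def to_gray_matrix(raw, raw_max):
    """Linearly map raw sensor readings in [0, raw_max] to 8-bit gray
    values in [0, 255]; the result is the numerical matrix of the image."""
    raw = np.asarray(raw, dtype=np.float64)
    gray = np.clip(raw / raw_max * 255.0, 0, 255)
    return gray.astype(np.uint8)  # 0 = black, 255 = white
```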
  • step S351 may specifically include 3D face reconstruction: when the 2D recognition is successful, 3D data of the recognition target is obtained and 3D face reconstruction is performed based on the 3D data. If the 3D face reconstruction succeeds, the eye image of the recognition target is obtained based on the target image and the iris-based face anti-counterfeiting judgment is performed on the eye image; if the 3D face reconstruction fails, the face anti-counterfeiting judgment is not performed. Specifically, the reconstructed face image reflects the feature information of the face in three-dimensional space, and the face anti-counterfeiting judgment is performed only on the basis of a successful 3D face reconstruction.
  • the face recognition method 300 further includes:
  • S320 face detection, specifically, perform face detection based on the target image
  • S331 There is a face, that is, when the face detection is successful, cut the target image to obtain a face image;
  • S340 specifically includes S341: performing 2D face recognition based on the face image.
  • S351 specifically includes S353: when the 2D recognition is successful, crop the face image to obtain the eye image of the recognition target;
  • the face recognition method 400 further includes:
  • S420 face detection, specifically, perform face detection based on the target image
  • S431 There is a face, that is, when the face detection is successful, cut the target image to obtain a face image;
  • step S463 When the recognition target is a living human face, proceed to step S465: Perform 2D face recognition based on the face image.
  • steps S320 to S332 and steps S420 to S432 may be the same as steps S120 to S122 in FIG. 1(b).
  • the method further includes: judging the magnitude of the restart parameter; when the restart parameter is less than the second threshold, proceed to S310 or S410; when the restart parameter is greater than or equal to the second threshold, it is determined that the recognition fails.
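The restart logic above can be sketched as a retry loop. `recognize_once` and the string results are hypothetical names standing in for one pass of the recognition pipeline; only the "retry while below the second threshold" structure comes from the text.

```python
def recognize_with_retries(recognize_once, second_threshold):
    """Retry recognition while the restart parameter stays below the
    second threshold; otherwise declare the recognition failed.
    recognize_once() is assumed to return True on success."""
    restarts = 0
    while restarts < second_threshold:
        if recognize_once():
            return "success"
        restarts += 1  # restart parameter grows with each failed attempt
    return "recognition failed"
```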
  • a face anti-counterfeiting discrimination method 500 is specifically used to perform the face anti-counterfeiting discrimination based on the eye infrared image in step S220, to determine whether the recognition target is a living human face. Specifically, after preprocessing, the eye infrared image is input into a neural network for classification, so as to obtain a face anti-counterfeiting discrimination result.
  • the face anti-counterfeiting discrimination method 500 includes:
  • S510 Preprocess the eye image to obtain an optimized eye image. After preprocessing, the contrast of the eye image is increased and its image quality is improved, which is more conducive to processing and classification by the neural network.
  • the eye image includes a left eye image and a right eye image.
  • the preprocessing process includes S511: eye image equalization. Specifically, image equalization is performed on the left eye image and/or the right eye image.
  • a histogram equalization method is used for the image equalization processing, which can improve the contrast of the eye infrared image and can also transform the eye infrared image into an image whose gray values are almost uniformly distributed.
  • the histogram equalization step includes: computing the cumulative normalized histogram of the image, c(i) = (1/n) * Σ_{j=0}^{i} n_j, where n is the total number of pixels, n_i is the number of pixels whose gray value is i, and L is the total number of gray values; and then computing the mapping y(i) = round(c(i) * (L - 1)).
  • the gray value of each pixel whose gray value is i in the original eye infrared image is changed to y(i), thereby achieving equalization of the eye infrared image and obtaining an optimized eye infrared image.
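The equalization steps above can be sketched directly with NumPy; `c` is the cumulative normalized histogram and each gray value `i` is remapped to `y(i)`, as described.

```python
import numpy as np

def equalize(gray, L=256):
    """Histogram equalization: remap each gray value i to
    y(i) = round(c(i) * (L - 1)), where c is the cumulative
    normalized histogram of the image."""
    hist = np.bincount(gray.ravel(), minlength=L)  # n_i per gray value
    n = gray.size                                  # total number of pixels
    c = np.cumsum(hist) / n                        # cumulative normalized histogram
    y = np.round(c * (L - 1)).astype(np.uint8)     # lookup table y(i)
    return y[gray]                                 # apply mapping per pixel
```

A two-level image with gray values 0 and 255 maps to 128 and 255, stretching the dark half toward the middle of the range.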
  • the preprocessing process may also include, but is not limited to, local binary pattern (Local Binary Pattern, LBP) feature processing, normalization, correction, image enhancement, and other processing processes, which are not limited in the embodiment of the present application.
  • a deep learning network is used to classify the preprocessed optimized eye image to determine whether the recognition target is a living body human face.
  • the deep learning network includes but is not limited to a neural network, and may also be other deep learning networks. The embodiment of the present application does not limit this.
  • the following uses a neural network as an example to illustrate the classification approach in the embodiments of the present application.
  • the face anti-counterfeiting discrimination method 500 further includes:
  • S520 Perform classification processing on the optimized eye image through a neural network to determine whether the recognition target is a living human face.
  • as the neural network structure, for example, a two-layer neural network or a network structure with more layers can be used, and the structure of each layer can also be adjusted according to the face information to be extracted, which is not limited in the embodiments of the present application.
  • the initial training parameters may be randomly generated, or obtained based on empirical values, or may be parameters of a neural network model pre-trained based on a large amount of true and false face data.
  • the embodiments of this application do not limit this.
  • the neural network can process the optimized eye images based on the initial training parameters and produce a determination result for each optimized eye image; further, the structure of the neural network and/or the training parameters of each layer are adjusted according to the determination results until the determination results meet the convergence condition.
  • the convergence condition may include at least one of the following:
  • the probability of judging the optimized eye image of the living body face as the optimized eye image of the living body face is greater than the first probability, for example, 98%;
  • the probability of judging the optimized eye image of the non-living human face as the optimized eye image of the non-living human face is greater than the second probability, for example, 95%;
  • the probability of judging the optimized eye image of the living human face as the optimized eye image of the non-living human face is less than the third probability, for example, 2%;
  • the probability of determining the optimized eye image of the non-living human face as the optimized eye image of the living human face is less than the fourth probability, for example, 3%.
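The four conditions above can be sketched as a simple convergence check. Note the text allows "at least one of" these conditions; the sketch below checks all four, and the rate names and default thresholds (98%, 95%, 2%, 3%, taken from the examples) are illustrative.

```python
def converged(tp_rate, tn_rate, fp_rate, fn_rate,
              p1=0.98, p2=0.95, p3=0.02, p4=0.03):
    """Convergence test for the anti-counterfeiting classifier:
    tp_rate - live judged live;     must exceed the first probability
    tn_rate - non-live judged non-live; must exceed the second probability
    fn_rate - live judged non-live; must stay below the third probability
    fp_rate - non-live judged live; must stay below the fourth probability"""
    return (tp_rate > p1 and tn_rate > p2
            and fn_rate < p3 and fp_rate < p4)
```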
  • the preprocessed optimized eye image of the current recognition target is input into the neural network, so that the neural network can process the optimized eye image of the recognition target using the trained parameters to determine whether the recognition target is a living human face.
  • the left-eye optimized eye image or the right-eye optimized eye image among the optimized eye images is classified by the neural network 50 to determine whether the recognition target is a living human face.
  • the face anti-counterfeiting determination method 501 includes:
  • S511 Using a histogram equalization method to perform image equalization processing on the left eye image or the right eye image to obtain an optimized left eye image or an optimized right eye image;
  • S521 Perform classification processing on the optimized left eye image or the optimized right eye image through the neural network to determine whether the recognition target is a living human face.
  • the neural network 50 includes a flattened layer 510, a fully connected layer 520, and an excitation layer 530.
  • the flattened layer 510 is used to one-dimensionalize the two-dimensional data of the left-eye optimized eye image input to the neural network, that is, to form a one-dimensional array.
  • the left-eye optimized eye image is represented as a two-dimensional matrix of 20*20 pixels, each of which represents a gray value.
  • a one-dimensional matrix of 400*1 is formed, that is, 400 pixel values are output. In other words, in the embodiment of the present application, the two-dimensional image data is flattened into one-dimensional data through the flattening layer 510, and the one-dimensional data is then input into the fully connected layer for full connection.
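The flattening step is just a reshape: the 20*20 gray-value matrix becomes a 400-element one-dimensional array, in row-major order. The `arange` values below are a stand-in for real pixel data.

```python
import numpy as np

# Sketch of the flattening layer 510: 20*20 image -> 400 pixel values.
eye = np.arange(400).reshape(20, 20)  # stand-in for a 20*20 eye image
flat = eye.reshape(-1)                # one-dimensional array of 400 values
```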
  • each node in the fully connected layer 520 is connected to each node in the upper layer, and is used to synthesize the features extracted from the previous neural network, and act as a "classifier" in the entire neural network.
  • x 1 to x n are the output nodes of the previous layer
  • the fully connected layer 520 includes m fully connected nodes c_1 to c_m
  • m characteristic constants are output, facilitating judgment and classification based on the m characteristic constants.
  • each of the m fully connected nodes includes multiple parameters obtained by the above training convergence, and is used for weighted connection of x 1 to x n to obtain a characteristic constant result.
  • taking x_1 to x_n as the one-dimensional data output by the flattening layer 510 as an example, the fully connected layer is described below.
  • a_1 = W_11*x_1 + W_12*x_2 + W_13*x_3 + ... + W_1n*x_n + b_1;
  • a_2 = W_21*x_1 + W_22*x_2 + W_23*x_3 + ... + W_2n*x_n + b_2;
  • ...
  • a_m = W_m1*x_1 + W_m2*x_2 + W_m3*x_3 + ... + W_mn*x_n + b_m;
  • W and b are the weighting parameters and bias parameters in the nodes of the fully connected layer 520, both of which can be obtained through the above-mentioned process of training the neural network to convergence.
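The weighted connection written out above is exactly a matrix-vector product plus a bias, which the following sketch makes explicit:

```python
import numpy as np

def fully_connected(x, W, b):
    """One fully connected layer: a_k = W_k1*x_1 + ... + W_kn*x_n + b_k
    for k = 1..m, i.e. a = W @ x + b with W of shape (m, n)."""
    return W @ x + b
```

For example, with two nodes over a three-element input, each output constant is the weighted sum of all inputs plus its bias.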
  • the fully connected layer 520 includes at least one fully connected layer.
  • the fully connected layer 520 includes a first fully connected layer 521 and a second fully connected layer 522.
  • the calculation principles of the two fully connected layers are the same, and both are weighted fully connected to the input one-dimensional array.
  • the excitation layer 530 includes a first excitation layer 531 and a second excitation layer 532, and the first excitation layer 531 includes an excitation function used for nonlinearization processing of the one-dimensional array.
  • the excitation function includes, but is not limited to, the rectified linear unit (Rectified Linear Unit, ReLU) function, the exponential linear unit (ELU) function, and several variants of the ReLU function, such as the leaky rectified linear unit (Leaky ReLU, LReLU), the parametric rectified linear unit (Parametric ReLU, PReLU), and the randomized rectified linear unit (Randomized ReLU, RReLU).
  • the excitation function used here is the rectified linear unit ReLU function.
  • the formula of the ReLU function is f(x) = max(0, x): values less than or equal to 0 become 0, and values greater than 0 remain unchanged, which makes the output one-dimensional array sparse.
  • the neural network structure, after ReLU achieves sparseness, can better mine relevant features and fit the training data.
  • the second excitation layer 532 includes a classification function, the Sigmoid function σ(x) = 1/(1 + e^(-x)), to classify and discriminate the constants output by the fully connected layer.
  • when the input tends to positive or negative infinity, the function approaches a smooth saturated state. Because the output range of the Sigmoid function is 0 to 1, this function is often used for two-class probabilities. The multiple probability values obtained through the Sigmoid function are judged to obtain the final face anti-counterfeiting judgment result, that is, whether the recognition target is a living face.
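The two excitation functions are one-liners in NumPy; ReLU zeroes non-positive values and Sigmoid squashes any real value into (0, 1), which is why its output can be read as a live/non-live probability.

```python
import numpy as np

def relu(x):
    """ReLU: values <= 0 become 0, positive values pass through,
    sparsifying the one-dimensional array."""
    return np.maximum(x, 0.0)

def sigmoid(x):
    """Sigmoid: maps any real input into (0, 1), saturating smoothly
    as the input tends to positive or negative infinity."""
    return 1.0 / (1.0 + np.exp(-x))
```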
  • the neural network 50 may further include: one or more fully connected layers 520 and/or one or more excitation layers 530.
  • the structure may be flattened layer - fully connected layer - excitation layer, or flattened layer - fully connected layer - excitation layer - fully connected layer - excitation layer - fully connected layer - excitation layer - fully connected layer - excitation layer, which is not limited here.
  • the excitation functions adopted by the multiple excitation layers 530 may be different, and/or the fully connected parameters in the multiple fully connected layers 520 may also be different.
  • the embodiment of the present application does not limit this.
  • a deep learning algorithm is used to perform comprehensive calculation on the left-eye optimized eye image and the right-eye optimized eye image of the recognition target, and to perform classification processing on them together, so as to determine whether the recognition target is a living human face.
  • the iris characteristics of the left-eye optimized eye image and the right-eye optimized eye image can be integrated, and the accuracy of anti-counterfeiting judgment can be improved.
  • a face anti-counterfeiting discrimination method 600 includes:
  • S611 Perform image equalization processing on the left eye image by using a histogram equalization method to obtain an optimized left eye image
  • S620 Perform classification processing on the optimized left eye image and the optimized right eye image through a neural network to determine whether the recognition target is a living human face.
  • the optimized left eye image and the optimized right eye image have the same size.
  • the optimized left eye image and the optimized right eye image are classified by the neural network 60 to determine whether the recognition target is a living human face.
  • the neural network 60 is used to comprehensively classify the left-eye optimized eye image and the right-eye optimized eye image of the recognition target.
  • the neural network 60 includes a first network 610, a second network 620, and a third network 630.
  • the first network 610 includes: a second flattened layer, at least one second fully connected layer, and at least one second excitation layer;
  • the second network 620 includes: a third flattened layer, at least one third fully connected layer And at least one third incentive layer;
  • the third network 630 includes: at least one fourth fully connected layer and at least one fourth incentive layer.
  • the first network 610 includes: a second flattened layer 611, a second upper fully connected layer 612, a second upper excitation layer 613, a second lower fully connected layer 614, and a second lower excitation layer 615, which are used to flatten and fully connect the input left-eye optimized eye image and output a left-eye one-dimensional feature array, also called the left-eye classification feature values.
  • the second network 620 includes: a third flattened layer 621, a third upper fully connected layer 622, a third upper excitation layer 623, a third lower fully connected layer 624, and a third lower excitation layer 625, which are used to flatten and fully connect the input right-eye optimized eye image and output a right-eye one-dimensional feature array, also called the right-eye classification feature values.
  • the third network 630 includes: a fourth fully connected layer 631 and a fourth excitation layer 632, which are used to fully connect and classify the left-eye one-dimensional feature array and the right-eye one-dimensional feature array. For example, if the first network 610 outputs a left-eye one-dimensional feature array including 10 feature constants and the second network 620 outputs a right-eye one-dimensional feature array also including 10 feature constants, the 20 feature constants of the combined left-eye and right-eye one-dimensional feature arrays are input together into the third network 630 for full connection and classification processing.
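The two-branch design can be sketched as follows. All weights here are random stand-ins for trained parameters, the hidden size (32) is an assumption, and the same stand-in weights are reused for both eye branches for brevity; only the shapes of the 20*20 inputs, the two 10-constant feature arrays, and the 20-constant fusion come from the example above.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, W1, b1, W2, b2):
    """One per-eye branch: flatten the 20*20 image, then two fully
    connected layers with ReLU and Sigmoid excitations, yielding a
    10-element classification feature array."""
    h = np.maximum(W1 @ x.reshape(-1) + b1, 0.0)          # FC + ReLU
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))           # FC + Sigmoid

left, right = rng.random((20, 20)), rng.random((20, 20))  # stand-in eye images
W1, b1 = rng.standard_normal((32, 400)), np.zeros(32)     # assumed hidden size
W2, b2 = rng.standard_normal((10, 32)), np.zeros(10)
f_left = branch(left, W1, b1, W2, b2)    # left-eye feature array (10,)
f_right = branch(right, W1, b1, W2, b2)  # right-eye feature array (10,)

# Third network: fully connect and classify the combined 20 constants.
W3, b3 = rng.standard_normal((1, 20)), np.zeros(1)
score = 1.0 / (1.0 + np.exp(-(W3 @ np.concatenate([f_left, f_right]) + b3)))
is_live = bool(score[0] > 0.5)  # final live / non-live decision
```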
  • the fully connected layer activation functions or classification functions in the first network, the second network, and the third network may be the same or different, which is not limited in the embodiment of the present application.
  • the second upper fully connected layer 612 and the third upper fully connected layer 622 both use ReLU activation functions
  • the second lower fully connected layer 613 and the third lower fully connected layer 623 both use the Sigmoid classification function.
  • the third network 630 may use the ReLU excitation function to perform non-linearization processing on the output characteristic constant again, to correct the classification result, and to improve the accuracy of recognition and judgment.
  • the neural network 30 and the neural network 40 have a simple network structure and a fast running speed, and can be run on an Advanced RISC Machine (ARM).
  • the iris-based face anti-counterfeiting judgment is performed according to the eye image to determine whether the recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used for face recognition.
  • the result of the face anti-counterfeiting discrimination can also be used for face registration, that is, to generate a face feature template in the 2D face recognition process.
  • face anti-counterfeiting is added in the face registration process to prevent images collected from face photos or other non-living face models from being used as templates for face recognition matching, which can improve the accuracy of 2D recognition.
  • the face registration method 700 includes:
  • S710 Acquire an eye image of the recognition target.
  • S720 Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used to establish a face feature template.
  • the process of the face registration method in the embodiments of this application and the process of the aforementioned face recognition method are two independent stages; the face feature template established during the registration method is used only for the 2D recognition judgment in the face recognition process.
  • face recognition is performed through the above-mentioned face recognition method and face anti-counterfeiting discrimination method.
  • the recognition target in the embodiment of the present application may be the same as or different from the recognition target in the above-mentioned face recognition process.
  • for example, both may be the user's live face, with the user's live face being registered and recognized; or
  • the recognition target in the registration process is the user's live face, but the recognition target in the recognition process is other non-living faces.
  • the embodiments of this application do not limit this.
  • the step S710 may be the same as the step S210 described above, and the eye image of the recognition target is acquired through the image acquisition device.
  • the eye image is an infrared image or a visible light color image.
  • the iris-based face anti-counterfeiting judgment is performed according to the eye image to determine whether the recognition target is a living human face; for details, reference may be made to the above-mentioned embodiments of the application, which will not be repeated here.
  • the face registration method further includes: acquiring a target image of the recognition target, acquiring the eye image based on the target image, and establishing a face feature template based on the target image .
  • the target image is an infrared image
  • the infrared image of the recognition target is acquired first, template matching is performed based on the infrared image, and anti-counterfeiting is performed on the basis of a successful match.
  • FIG. 16 shows a face registration method 800, which includes:
  • S850 Perform template matching based on the infrared image
  • S860 Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living face;
  • step S810 may be the same as step S310.
  • Step S851 may be the same as step S351.
  • Step S860 may be the same as step S360.
  • step S850 may be similar to step S340 for performing 2D recognition based on the target image.
  • the infrared image is matched against multiple face feature templates in the face feature template library. If the matching is successful, the target image is the user's face image; if the matching fails, the target image is not the user's face image.
  • step S871 when the recognition target is a living human face, the data of the infrared image is stored in the storage unit as a new face feature template in the face feature template library.
  • the storage unit may be a storage unit in the processor that executes the face registration method, or may be the memory in the electronic device that executes the face registration method.
  • the face registration method 800 may further include:
  • step S820 to step S822 may be the same as step S320 to step S332.
  • the reflected structured light or reflected light pulses carrying the information of the recognition target surface are received, so as to obtain the 3D data of the recognition target.
  • the 3D data contains the depth information of the recognition target, and the depth information can indicate the surface shape of the recognition target.
  • the 3D data can be expressed in various forms such as a depth image (Depth Image), a 3D point cloud (Point Cloud), and a geometric model.
  • 3D face reconstruction can be performed based on the 3D data, that is, a 3D shape image representing the recognition target can be obtained.
  • the 3D data is stored in the storage unit, for example, the 3D point cloud data is stored in the storage unit as a 3D point cloud data template to form a 3D point cloud data template library.
  • S840 Determine whether the face image cut in step S821 belongs to the face feature template library.
  • if yes, enter S842: the face image belongs to the face feature template library.
  • if not, enter S841: the face image does not belong to the face feature template library.
  • a new user facial feature template library can be established according to the acquired user ID information of the target image.
  • step S8501 When the face image belongs to the face feature template library, perform template matching based on the face image cut in step S821.
  • the specific matching method may be the same as step S850.
  • step S851 When the template matching is successful, obtain an eye image based on the infrared image, and proceed to step S860.
  • S860 Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living face.
  • S8711 When the recognition target is a living human face, go to S8712: Determine whether it is a valid point cloud.
  • the 3D point cloud data collected by face reconstruction in S830 is matched with multiple 3D point cloud data templates in the 3D point cloud data template library to determine whether it is a valid point cloud.
  • point cloud matching is used to determine whether the face angle of the recognized target in the collected 3D point cloud data is the same as the face angle in the 3D point cloud data template.
  • when the angles are the same, the matching is successful, indicating that 3D point cloud data with the same face angle already exists in the template library, and the point cloud is an invalid point cloud; when the angles are different, the matching fails, meaning that no 3D point cloud data with the same face angle exists in the template library, and the point cloud is a valid point cloud.
  • in this way, multiple pieces of 3D point cloud data of the recognition target can be collected, and point cloud splicing and point cloud fusion can be performed to form 3D data and a 3D image of the face in all directions and angles; 3D face recognition can be performed based on the 3D image.
  • the face feature templates in the face feature template library are full.
  • specifically, it is determined whether the number of face feature templates in the face feature template library is equal to a preset value; if it is equal to the preset value, the face feature template library is full, and no new face feature template is stored.
  • the preset value is 8, when the number of face feature templates in the face feature template library is 8, no more face feature templates are added.
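The capacity check above can be sketched in a few lines; the function name is illustrative, and 8 is the preset value from the example.

```python
def maybe_add_template(library, new_template, preset=8):
    """Store new_template only while the library holds fewer than the
    preset number of face feature templates; return whether it was stored."""
    if len(library) < preset:
        library.append(new_template)
        return True
    return False  # library is full: do not store new templates
```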
  • the face image is stored as the face feature template.
  • the data of the face image is stored in the storage unit as a new face feature template in the face feature template library.
  • the face registration method 800 further includes:
  • FIG. 18 is a schematic block diagram of a face recognition device 20 according to an embodiment of the present application, including: a processor 210;
  • the processor 210 is configured to: obtain a first eye image of a first recognition target;
  • the iris-based face anti-counterfeiting judgment is performed to determine whether the first recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used for face recognition.
  • the processor 210 may be a processor of the face recognition device 20, or a processor of an electronic device including the face recognition device 20, which is not limited in the embodiments of the present application.
  • the first eye image is a first eye infrared image.
  • the face recognition device 20 further includes: an image acquisition device 220, configured to obtain a first target image of the first recognition target;
  • the processor 210 is further configured to: perform two-dimensional recognition based on the first target image; when the two-dimensional recognition is successful, obtain the first eye image based on the first target image;
  • the processor 210 is further configured to: when the first recognition target is a living face, determine that the face recognition is successful; or, when the first recognition target is a non-living face, determine that the face recognition fails.
  • the processor 210 is specifically configured to: acquire a first target image of the first recognition target, and acquire the first eye image based on the first target image;
  • the processor 210 is further configured to: when the first recognition target is a living human face, perform two-dimensional recognition based on the first target image;
  • the first recognition target is a non-living face
  • the processor 210 is specifically configured to: acquire a first face image based on the first target image;
  • the first face image is matched with multiple feature templates, and when the matching is successful, the two-dimensional recognition is successful, or when the matching fails, the two-dimensional recognition fails.
  • the processor 210 is specifically configured to: acquire a face region image based on the first target image; acquire the first eye image based on the face region image.
  • the first eye image is a human eye area image or an iris area image including the iris.
  • the processor 210 is specifically configured to: use a histogram equalization method to process the first eye image to obtain a first optimized eye image;
  • perform the iris-based face anti-counterfeiting judgment according to the first optimized eye image.
  • the processor 210 is specifically configured to: perform classification processing on the first optimized eye image through a neural network to determine whether the first recognition target is a living human face.
  • the first eye image includes a first left eye image and/or a first right eye image
  • the processor 210 is specifically configured to:
  • the first eye image includes: the first left eye image or the first right eye image;
  • the neural network includes: a first flattened layer, at least one first fully connected layer, and at least one first excitation layer.
  • the processor 210 is specifically configured to: use the first flattening layer to process the first optimized left-eye eye image or the first optimized right-eye image to obtain multiple eye pixel values;
  • the multiple eye pixel values are fully connected through the at least one first fully connected layer to obtain multiple characteristic constants; and nonlinearization processing or classification processing is performed on the multiple characteristic constants through the at least one first excitation layer.
  • the neural network includes: the first flattened layer, two first fully connected layers, and two first excitation layers.
  • the excitation functions in the two first excitation layers are respectively a rectified linear unit (ReLU) function and a Sigmoid function.
  • the first eye image includes: the first left eye image and the first right eye image;
  • the neural network includes a first network, a second network, and a third network
  • the first network includes: a second flattening layer, at least one second fully connected layer, and at least one second excitation layer;
  • the second network includes: a third flattening layer, at least one third fully connected layer, and at least one third excitation layer;
  • the third network includes: at least one fourth fully connected layer and at least one fourth excitation layer.
  • the processor 210 is specifically configured to: process the first optimized left-eye eye image through the first network to obtain a left-eye classification feature value; process the first optimized right-eye eye image through the second network to obtain a right-eye classification feature value; and fully connect the left-eye classification feature value and the right-eye classification feature value through the third network.
  • the first network includes: the second flattened layer, two second fully connected layers, and two second excitation layers;
  • the second network includes: the third flattening layer, two third fully connected layers, and two third excitation layers;
  • the third network includes: a fourth fully connected layer and a fourth excitation layer.
  • the excitation functions in the two second excitation layers are respectively a rectified linear unit (ReLU) function and a Sigmoid function; and/or,
  • the excitation functions in the two third excitation layers are respectively the rectified linear unit (ReLU) function and the Sigmoid function; and/or,
  • the excitation function in the one fourth excitation layer is the rectified linear unit (ReLU) function.
  • the processor 210 is further configured to: obtain a second eye image of the second recognition target;
  • the iris-based face anti-counterfeiting judgment is performed to determine whether the second recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used to establish a face feature template.
  • the second eye image is a second eye infrared image.
  • the processor 210 is further configured to: acquire a second target image of the second recognition target, acquire the second eye image based on the second target image, and establish the face feature template based on the second target image.
  • the processor 210 is further configured to: perform face detection based on the second target image;
  • the establishment of a facial feature template based on the second target image includes:
  • a second face image is acquired based on the second target image, and the face feature template is established based on the second face image.
  • the processor 210 is specifically configured to: determine whether the second face image belongs to a face feature template library;
  • when the second face image belongs to the face feature template library, the second face image is matched with multiple face feature templates in the face feature template library.
  • when the second face image does not belong to the face feature template library, the iris-based face anti-counterfeiting judgment is performed according to the second eye image, and when it is determined that the second recognition target is a living face, the second face image is established as a face feature template.
  • the processor 210 is specifically configured to: when the matching is successful, perform an iris-based face anti-counterfeiting judgment according to the second eye image;
  • the second face image is established as a face feature template.
  • the processor 210 is specifically configured to obtain 3D point cloud data of the second recognition target when the matching is successful;
  • when the 3D point cloud data is a valid point cloud, the iris-based face anti-counterfeiting judgment is performed according to the second eye image.
  • the processor 210 is specifically configured to: acquire a face region image based on the second target image;
  • the second eye image is a human eye area image or an iris area image including the iris.
  • the processor 210 is specifically configured to: use a histogram equalization method to process the second eye image to obtain a second optimized eye image;
  • perform the iris-based face anti-counterfeiting judgment according to the second optimized eye image.
  • the processor 210 is specifically configured to: perform classification processing on the second optimized eye image through a neural network to determine whether the second recognition target is a living human face.
  • the second eye image includes a second left eye image and/or a second right eye image
  • the processor 210 is specifically configured to: perform classification processing on the second left eye image and/or the second right eye image through a neural network.
  • the neural network includes: at least one flattened layer, at least one fully connected layer, and at least one excitation layer.
  • an embodiment of the present application further provides an electronic device 2, which may include the face recognition apparatus 20 of the foregoing application embodiment.
  • the electronic device 2 is a smart door lock, a mobile phone, a computer, an access control system and other devices that need to apply face recognition.
  • the face recognition device 20 includes software and hardware devices for face recognition in the electronic device 2.
  • the processor of the embodiment of the present application may be an integrated circuit chip with signal processing capability.
  • the steps of the foregoing method embodiments can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the aforementioned processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the face recognition device in the embodiments of the present application may further include a memory.
  • the memory may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), and electrically available Erase programmable read-only memory (Electrically EPROM, EEPROM) or flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
  • static random access memory (Static RAM, SRAM)
  • dynamic random access memory (Dynamic RAM, DRAM)
  • synchronous dynamic random access memory (Synchronous DRAM, SDRAM)
  • double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM)
  • enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM)
  • synchlink dynamic random access memory (Synchlink DRAM, SLDRAM)
  • direct rambus random access memory (Direct Rambus RAM, DR RAM)
  • the embodiment of the present application also proposes a computer-readable storage medium that stores one or more programs, and the one or more programs include instructions.
  • when the instructions are executed by a portable electronic device that includes multiple application programs, the portable electronic device can be caused to execute the method of the embodiments shown in FIGS. 1-17.
  • the embodiment of the present application also proposes a computer program, the computer program includes instructions, and when the computer program is executed by a computer, the computer can execute the method of the embodiments shown in FIGS. 1-17.
  • An embodiment of the present application also provides a chip that includes an input and output interface, at least one processor, at least one memory, and a bus.
  • the at least one memory is used to store instructions, and the at least one processor is used to call the instructions in the at least one memory to execute the method of the embodiments shown in FIGS. 1-17.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application essentially, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which can be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A face recognition method and apparatus, and an electronic device, capable of recognizing whether a face is authentic, thereby improving the security of face recognition. The face recognition method comprises: obtaining a first target image and a first eye image of a first recognition target; performing iris-based face anti-spoofing determination according to the first eye image to determine whether the first recognition target is a living face, and outputting a liveness determination result; performing feature template matching according to the first target image, and outputting a matching result; and outputting a face recognition result according to the liveness determination result and the matching result.

Description

人脸识别的方法、装置和电子设备 Method, device and electronic equipment for face recognition

技术领域 Technical field

本申请涉及生物特征识别技术领域,并且更具体地,涉及一种人脸识别的方法、装置和电子设备。This application relates to the field of biometric recognition technology, and more specifically, to a method, device and electronic device for face recognition.

背景技术Background technique

人脸识别，是基于人的脸部特征信息进行身份识别的一种生物识别技术。用摄像机或摄像头采集含有人脸的图像或视频流，并自动在图像中检测和跟踪人脸，进而对检测到的人脸进行脸部的图像预处理、图像特征提取以及匹配与识别等一系列相关技术，通常也叫做人像识别或面部识别。随着计算机和网络技术的飞速发展，人脸识别技术已广泛地应用于智能门禁、移动终端、公共安全、娱乐、军事等诸多行业及领域。Face recognition is a biometric recognition technology that performs identity recognition based on human facial feature information. A video camera or camera is used to capture images or video streams containing human faces; the faces are automatically detected and tracked in the images, and a series of related operations, such as facial image preprocessing, image feature extraction, and matching and recognition, are then performed on the detected faces. This technology is usually also called portrait recognition or facial recognition. With the rapid development of computer and network technologies, face recognition has been widely applied in many industries and fields such as smart access control, mobile terminals, public security, entertainment, and the military.

当前人脸识别普遍使用的是基于人脸的二维(Two Dimensional,2D)图像进行识别，判断该2D图像是否为特定用户人脸，而不判断该2D图像是否来自活体人脸，换言之，现有技术中，基于2D图像的2D人脸识别没有防伪功能，安全性能差。Currently, face recognition is generally performed based on a two-dimensional (Two Dimensional, 2D) image of the face, determining whether the 2D image is the face of a specific user without determining whether the 2D image comes from a living human face. In other words, in the prior art, 2D face recognition based on 2D images has no anti-counterfeiting function and has poor security performance.

发明内容Summary of the invention

本申请实施例提供了一种人脸识别方法、装置和电子设备,能够识别人脸的真假,从而能够提升人脸识别的安全性。The embodiments of the present application provide a face recognition method, device, and electronic equipment, which can recognize the authenticity of a human face, thereby improving the security of face recognition.

第一方面,提供了一种人脸识别方法,包括:In the first aspect, a face recognition method is provided, including:

获取第一识别目标的第一目标图像及第一眼部图像;Acquiring a first target image and a first eye image of the first recognition target;

根据所述第一眼部图像进行基于虹膜的人脸防伪判断,以确定所述第一识别目标是否为活体人脸并输出活体判断结果;Perform iris-based face anti-counterfeiting judgment according to the first eye image to determine whether the first recognition target is a living human face and output a living judgment result;

根据所述第一目标图像进行特征模板匹配,并输出匹配结果;Performing feature template matching according to the first target image, and outputting a matching result;

根据所述活体判断结果和所述匹配结果输出人脸识别结果。The face recognition result is output according to the living body judgment result and the matching result.

本申请提供一种带有防伪功能的人脸识别方案,通过获取第一识别目标的第一目标图像和第一眼部图像,基于第一眼部图像中的虹膜特征进行人脸防伪,在判断该包括虹膜的眼部图像是否来自活体人脸的基础上,根据第一目标图像进行特征模板匹配判断是否为用户,从而大大提高人脸识别装置及 电子设备的安全性。The present application provides a face recognition solution with anti-counterfeiting function. By acquiring the first target image and the first eye image of the first recognition target, the face anti-counterfeiting is performed based on the iris features in the first eye image. Based on whether the eye image including the iris is from a living human face, the feature template matching is performed according to the first target image to determine whether it is a user, thereby greatly improving the security of the face recognition device and electronic equipment.

在一种可能的实现方式中,所述根据所述活体判断结果和所述匹配结果输出人脸识别结果,包括:In a possible implementation manner, the outputting the face recognition result according to the living body judgment result and the matching result includes:

在所述匹配结果为成功时,根据所述活体判断结果输出人脸识别结果;或者,在所述活体判断结果为活体时,根据所述匹配结果输出人脸识别结果;或者,在所述匹配结果为失败或所述活体判断结果为非活体时,输出人脸识别结果。When the matching result is successful, output the face recognition result according to the living body judgment result; or, when the living body judgment result is a living body, output the face recognition result according to the matching result; or, in the matching When the result is failure or the living body judgment result is non-living body, the face recognition result is output.

在一种可能的实现方式中,所述根据所述第一目标图像进行特征模板匹配,并输出匹配结果,包括:In a possible implementation manner, the performing feature template matching according to the first target image and outputting the matching result includes:

基于所述第一目标图像进行人脸检测;Performing face detection based on the first target image;

当人脸检测成功时,基于所述第一目标图像获取第一人脸图像;When the face detection is successful, acquiring a first face image based on the first target image;

将所述第一人脸图像与预存的多个第一特征模板进行匹配;Matching the first face image with a plurality of pre-stored first feature templates;

当所述第一人脸图像与所述多个第一特征模板中任意一个第一特征模板匹配成功时,输出匹配结果为成功;或者,When the first face image is successfully matched with any one of the plurality of first feature templates, the output matching result is successful; or,

当所述第一人脸图像与所述多个第一特征模板匹配失败时,输出匹配结果为失败;When the first face image fails to match the multiple first feature templates, output a matching result as failure;

或者,当人脸检测失败时,输出匹配结果为失败。Or, when the face detection fails, the output matching result is failed.
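
As an illustration only, the flow described above (face detection, template matching, and the iris-based liveness judgment gating the final recognition result) can be sketched in Python. This is a minimal sketch, not the patented implementation; all helper names (detect_face, extract_face, match, is_live_face) are hypothetical placeholders for the corresponding steps.

```python
# Hypothetical sketch of the recognition flow described above; the helper
# functions passed in stand for the face detection, face-image extraction,
# feature-template matching, and iris-based anti-counterfeiting steps.

def recognize(target_image, eye_image, templates,
              detect_face, extract_face, match, is_live_face):
    """Face recognition succeeds only when template matching succeeds
    AND the iris-based liveness judgment reports a living face."""
    if not detect_face(target_image):
        return False                       # face detection failed -> matching fails
    face = extract_face(target_image)      # first face image from the target image
    if not any(match(face, t) for t in templates):
        return False                       # matching against all templates failed
    return is_live_face(eye_image)         # iris-based face anti-counterfeiting
```

The two checks are independent, so either ordering described in the embodiments (liveness first or matching first) yields the same final result.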

在一种可能的实现方式中,所述获取第一识别目标的第一目标图像及第一眼部图像,包括:In a possible implementation manner, the acquiring the first target image and the first eye image of the first recognition target includes:

获取所述第一识别目标的第一目标图像,基于所述第一目标图像获取所述第一眼部图像。A first target image of the first recognition target is acquired, and the first eye image is acquired based on the first target image.

在一种可能的实现方式中,所述第一眼部图像为二维红外图像。In a possible implementation manner, the first eye image is a two-dimensional infrared image.

在一种可能的实现方式中,所述第一眼部图像为包括虹膜的人眼区域图像或虹膜区域图像。In a possible implementation manner, the first eye image is a human eye area image or an iris area image including the iris.

在一种可能的实现方式中,所述根据所述第一眼部图像进行基于虹膜的人脸防伪判断,包括:In a possible implementation manner, the performing an iris-based face anti-counterfeiting judgment according to the first eye image includes:

采用直方图均衡化方法对所述第一眼部图像进行处理得到第一优化眼部图像;Processing the first eye image by using a histogram equalization method to obtain a first optimized eye image;

根据所述第一优化眼部图像进行基于虹膜的人脸防伪判断。According to the first optimized eye image, the iris-based face anti-counterfeiting judgment is performed.

在一种可能的实现方式中,所述根据所述第一优化眼部图像进行基于虹膜的人脸防伪判断,包括:In a possible implementation manner, the performing an iris-based face anti-counterfeiting judgment according to the first optimized eye image includes:

通过神经网络对所述第一优化眼部图像进行分类处理,以确定所述第一识别目标是否为活体人脸。Perform classification processing on the first optimized eye image through a neural network to determine whether the first recognition target is a living human face.

在一种可能的实现方式中,所述第一眼部图像包括第一左眼眼部图像和/或第一右眼眼部图像,所述采用直方图均衡化方法对所述第一眼部图像进行处理得到第一优化眼部图像包括:In a possible implementation manner, the first eye image includes a first left eye image and/or a first right eye image, and the histogram equalization method is used to perform a calculation on the first eye image. Image processing to obtain the first optimized eye image includes:

采用所述直方图均衡化方法对所述第一左眼眼部图像进行处理得到第一优化左眼眼部图像;和/或Processing the first left-eye eye image by using the histogram equalization method to obtain a first optimized left-eye eye image; and/or

采用所述直方图均衡化方法对所述第一右眼眼部图像进行处理得到第一优化右眼眼部图像。Using the histogram equalization method to process the first right eye image to obtain a first optimized right eye image.
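
The histogram-equalization preprocessing described above can be sketched with NumPy. This is a minimal illustration of the standard 8-bit histogram-equalization algorithm (comparable in spirit to OpenCV's cv2.equalizeHist); the embodiments do not specify an exact implementation.

```python
import numpy as np

def equalize_hist(gray):
    """Histogram equalization of an 8-bit grayscale eye image.

    Assumes a non-constant uint8 image; sketch of the classic algorithm.
    """
    hist = np.bincount(gray.ravel(), minlength=256)   # per-level pixel counts
    cdf = hist.cumsum()                               # cumulative distribution
    cdf_min = cdf[cdf > 0].min()                      # first non-zero CDF value
    # Stretch the CDF so the output levels cover the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]                                  # remap every pixel
```

Applied to an infrared eye crop, this spreads the gray levels of the iris region, which is the stated purpose of producing the "optimized" eye image.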

在一种可能的实现方式中,所述第一眼部图像包括:所述第一左眼眼部图像或所述第一右眼眼部图像;In a possible implementation manner, the first eye image includes: the first left eye image or the first right eye image;

所述神经网络包括:第一扁平化层,至少一个第一全连接层以及至少一个第一激励层。The neural network includes: a first flattened layer, at least one first fully connected layer, and at least one first excitation layer.

在一种可能的实现方式中,所述通过神经网络对所述第一优化眼部图像进行分类处理,包括:In a possible implementation manner, the classification processing of the first optimized eye image through a neural network includes:

通过所述第一扁平化层,对所述第一优化左眼眼部图像或所述第一优化右眼眼部图像进行处理得到多个眼部像素值;Processing the first optimized left eye image or the first optimized right eye image through the first flattening layer to obtain multiple eye pixel values;

通过所述至少一个第一全连接层,对所述多个眼部像素值进行全连接得到多个特征常数;Using the at least one first fully connected layer to fully connect the multiple eye pixel values to obtain multiple characteristic constants;

通过所述至少一个第一激励层,对所述多个特征常数进行非线性化处理或者分类处理。Through the at least one first excitation layer, nonlinearization processing or classification processing is performed on the plurality of characteristic constants.

在一种可能的实现方式中,所述神经网络包括:所述第一扁平化层,两个所述第一全连接层以及两个所述第一激励层。In a possible implementation manner, the neural network includes: the first flattened layer, two first fully connected layers, and two first excitation layers.

在一种可能的实现方式中，两个所述第一激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数。In a possible implementation manner, the excitation functions in the two first excitation layers are the rectified linear unit (ReLU) function and the Sigmoid function, respectively.
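
The single-eye network described above (one flattening layer, two fully connected layers, and two excitation layers using ReLU and Sigmoid) can be sketched as a NumPy forward pass. The layer sizes and random weights below are arbitrary placeholders, since the application does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(eye_image, w1, b1, w2, b2):
    x = eye_image.ravel()          # flattening layer: image -> pixel vector
    h = relu(w1 @ x + b1)          # 1st fully connected layer + ReLU excitation
    score = sigmoid(w2 @ h + b2)   # 2nd fully connected layer + Sigmoid excitation
    return score                   # value in (0, 1): live-face score

# Example with an assumed 32x32 eye crop and 16 hidden units (placeholders).
img = rng.random((32, 32))
w1, b1 = rng.standard_normal((16, 32 * 32)) * 0.01, np.zeros(16)
w2, b2 = rng.standard_normal((1, 16)) * 0.01, np.zeros(1)
p = forward(img, w1, b1, w2, b2)
```

The final Sigmoid maps the fully connected output into (0, 1), which is why it serves as the classification excitation for the liveness decision.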

在一种可能的实现方式中,所述第一眼部图像包括:所述第一左眼眼部图像和所述第一右眼眼部图像;In a possible implementation manner, the first eye image includes: the first left eye image and the first right eye image;

所述神经网络包括第一网络、第二网络和第三网络;The neural network includes a first network, a second network, and a third network;

所述第一网络包括:第二扁平化层,至少一个第二全连接层以及至少一个第二激励层;The first network includes: a second flattening layer, at least one second fully connected layer, and at least one second excitation layer;

所述第二网络包括:第三扁平化层,至少一个第三全连接层以及至少一 个第三激励层;The second network includes: a third flattening layer, at least one third fully connected layer, and at least one third excitation layer;

所述第三网络包括:至少一个第四全连接层以及至少一个第四激励层。The third network includes: at least one fourth fully connected layer and at least one fourth excitation layer.

在一种可能的实现方式中,所述通过神经网络对所述第一优化眼部图像进行分类处理,包括:In a possible implementation manner, the classification processing of the first optimized eye image through a neural network includes:

通过所述第一网络对所述第一优化左眼眼部图像进行处理得到左眼分类特征值;Processing the first optimized left-eye eye image through the first network to obtain a left-eye classification feature value;

通过所述第二网络对所述第一优化右眼眼部图像进行处理得到右眼分类特征值;Processing the first optimized right eye image through the second network to obtain right eye classification feature values;

通过所述第三网络对所述左眼分类特征值和所述右眼分类特征值进行全连接。Fully connect the left-eye classification feature value and the right-eye classification feature value through the third network.

在一种可能的实现方式中,所述第一网络包括:所述第二扁平化层,两个所述第二全连接层和两个所述第二激励层;In a possible implementation manner, the first network includes: the second flattened layer, two second fully connected layers, and two second excitation layers;

所述第二网络包括:所述第三扁平化层,两个所述第三全连接层和两个所述第三激励层;The second network includes: the third flattening layer, two third fully connected layers, and two third excitation layers;

所述第三网络包括:一个所述第四全连接层和一个所述第四激励层。The third network includes: a fourth fully connected layer and a fourth excitation layer.

在一种可能的实现方式中，两个所述第二激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数；和/或，In a possible implementation manner, the excitation functions in the two second excitation layers are the rectified linear unit (ReLU) function and the Sigmoid function respectively; and/or,

两个所述第三激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数；和/或，The excitation functions in the two third excitation layers are the rectified linear unit (ReLU) function and the Sigmoid function respectively; and/or,

一个所述第四激励层中的激励函数为修正线性单元ReLU函数。The excitation function in the one fourth excitation layer is the rectified linear unit (ReLU) function.
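
Similarly, the two-eye variant described above (a left-eye branch and a right-eye branch, each with a flattening layer and fully connected plus excitation layers, fused by a third network of one fully connected layer with a ReLU excitation) can be sketched as follows. All dimensions are assumed for illustration only.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def branch(eye_image, w1, b1, w2, b2):
    """One eye branch: flattening layer + two FC layers with ReLU then Sigmoid."""
    x = eye_image.ravel()                        # flattening layer
    return sigmoid(w2 @ relu(w1 @ x + b1) + b2)  # classification feature values

def fuse(left_feat, right_feat, w3, b3):
    """Third network: fully connect both eyes' feature values, ReLU excitation."""
    joint = np.concatenate([left_feat, right_feat])
    return relu(w3 @ joint + b3)
```

The left and right branches here share weights only for brevity; in the described structure they are separate networks.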

在一种可能的实现方式中,所述方法还包括:In a possible implementation manner, the method further includes:

获取第二识别目标的第二眼部图像;Acquiring a second eye image of the second recognition target;

根据所述第二眼部图像进行基于虹膜的人脸防伪判别,以确定所述第二识别目标是否为活体人脸,其中,人脸防伪判别的结果用于建立人脸特征模板。According to the second eye image, the iris-based face anti-counterfeiting judgment is performed to determine whether the second recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used to establish a face feature template.

在一种可能的实现方式中,所述第二眼部图像为第二眼部红外图像。In a possible implementation manner, the second eye image is a second eye infrared image.

在一种可能的实现方式中,所述方法还包括:In a possible implementation manner, the method further includes:

获取所述第二识别目标的第二目标图像,基于所述第二目标图像获取所述第二眼部图像,并基于所述第二目标图像建立所述人脸特征模板。A second target image of the second recognition target is acquired, the second eye image is acquired based on the second target image, and the face feature template is established based on the second target image.

在一种可能的实现方式中,所述方法还包括:In a possible implementation manner, the method further includes:

基于所述第二目标图像进行人脸检测;Performing face detection based on the second target image;

其中,所述基于所述第二目标图像建立人脸特征模板包括:Wherein, the establishment of a facial feature template based on the second target image includes:

在人脸检测成功时,基于所述第二目标图像获取第二人脸图像,并根据所述第二人脸图像建立所述人脸特征模板。When the face detection is successful, a second face image is acquired based on the second target image, and the face feature template is established based on the second face image.

在一种可能的实现方式中,所述基于所述第二人脸图像建立所述人脸特征模板,包括:In a possible implementation, the establishing the face feature template based on the second face image includes:

判断所述第二人脸图像是否属于人脸特征模板库;Judging whether the second face image belongs to a face feature template library;

当所述第二人脸图像属于所述人脸特征模板库时,将所述第二人脸图像与所述人脸特征模板库中的多个人脸特征模板进行匹配。When the second face image belongs to the face feature template library, the second face image is matched with multiple face feature templates in the face feature template library.

当所述第二人脸图像不属于所述人脸特征模板库时，根据所述第二眼部图像进行基于虹膜的人脸防伪判别，当确定所述第二识别目标为活体人脸时，将所述第二人脸图像建立为人脸特征模板。When the second face image does not belong to the face feature template library, the iris-based face anti-counterfeiting judgment is performed according to the second eye image, and when it is determined that the second recognition target is a living face, the second face image is established as a face feature template.

在一种可能的实现方式中,所述将所述第二人脸图像与所述人脸特征模板库中的多个人脸特征模板进行匹配,包括:In a possible implementation, the matching the second face image with multiple face feature templates in the face feature template library includes:

当匹配成功时,根据所述第二眼部图像进行基于虹膜的人脸防伪判别;When the matching is successful, perform iris-based face anti-counterfeiting judgment according to the second eye image;

当确定所述第二识别目标为活体人脸时,将所述第二人脸图像建立为人脸特征模板。When it is determined that the second recognition target is a living face, the second face image is established as a face feature template.

在一种可能的实现方式中,所述当匹配成功时,根据所述第二眼部图像进行基于虹膜的人脸防伪判别,包括:In a possible implementation manner, when the matching is successful, the iris-based face anti-counterfeiting judgment according to the second eye image includes:

当匹配成功时,获取所述第二识别目标的3D点云数据;When the matching is successful, obtain the 3D point cloud data of the second recognition target;

当所述3D点云数据为有效点云时,根据所述第二眼部图像进行基于虹膜的人脸防伪判别。When the 3D point cloud data is a valid point cloud, iris-based face anti-counterfeiting judgment is performed according to the second eye image.
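
The enrollment flow above (template-library check, template matching, 3D point-cloud validation, then iris-based liveness before a template is saved) can be sketched as below. This is an illustrative sketch only; all helper names are hypothetical placeholders for the corresponding steps in the embodiments.

```python
# Illustrative sketch: belongs_to_library, match, point_cloud_valid, and
# is_live_face stand in for the library check, template matching, 3D
# point-cloud validation, and iris-based anti-counterfeiting judgment.

def enroll(face_image, eye_image, template_library,
           belongs_to_library, match, point_cloud_valid, is_live_face):
    """Establish face_image as a feature template only after all checks pass."""
    if belongs_to_library(face_image, template_library):
        if not any(match(face_image, t) for t in template_library):
            return False                         # matching failed: do not enroll
        if not point_cloud_valid(face_image):    # 3D point cloud must be valid
            return False
    if not is_live_face(eye_image):              # iris-based anti-counterfeiting
        return False
    template_library.append(face_image)          # establish face feature template
    return True
```

The liveness judgment is the last gate in this sketch, so a non-living recognition target can never be enrolled as a template.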

在一种可能的实现方式中,所述基于所述第二目标图像获取所述第二眼部图像,包括:In a possible implementation manner, the acquiring the second eye image based on the second target image includes:

基于所述第二目标图像获取人脸区域图像;Acquiring a face area image based on the second target image;

基于所述人脸区域图像获取所述第二眼部图像。Acquiring the second eye image based on the face region image.

在一种可能的实现方式中,所述第二眼部图像为包括虹膜的人眼区域图像或虹膜区域图像。In a possible implementation manner, the second eye image is a human eye area image or an iris area image including the iris.

在一种可能的实现方式中,所述根据所述第二眼部图像进行基于虹膜的人脸防伪判别,包括:In a possible implementation manner, the performing iris-based face anti-counterfeiting judgment according to the second eye image includes:

采用直方图均衡化方法对所述第二眼部图像进行处理得到第二优化眼部图像;Using a histogram equalization method to process the second eye image to obtain a second optimized eye image;

根据所述第二优化眼部图像进行基于虹膜的人脸防伪判别。According to the second optimized eye image, the iris-based face anti-counterfeiting judgment is performed.

在一种可能的实现方式中,所述根据所述第二优化眼部图像进行基于虹膜的人脸防伪判别,包括:In a possible implementation manner, the performing the iris-based anti-counterfeiting judgment of the human face according to the second optimized eye image includes:

通过神经网络对所述第二优化眼部图像进行分类处理,以确定所述第二识别目标是否为活体人脸。Perform classification processing on the second optimized eye image through a neural network to determine whether the second recognition target is a living human face.

在一种可能的实现方式中,所述第二眼部图像包括第二左眼眼部图像和/或第二右眼眼部图像,所述通过神经网络对所述第二优化眼部图像进行分类处理包括:In a possible implementation manner, the second eye image includes a second left eye image and/or a second right eye image, and the second optimized eye image is performed on the neural network. Classification processing includes:

通过神经网络对所述第二左眼眼部图像和/或所述第二右眼眼部图像进行分类处理。Perform classification processing on the second left eye image and/or the second right eye image through a neural network.

在一种可能的实现方式中,所述神经网络包括:In a possible implementation manner, the neural network includes:

至少一个扁平化层,至少一个全连接层和至少一个激励层。At least one flattening layer, at least one fully connected layer and at least one excitation layer.

第二方面,提供了一种人脸识别的装置,包括处理器,用于执行如第一方面或第一方面的任一可能的实现方式中的人脸识别方法。In a second aspect, a face recognition device is provided, including a processor, configured to execute the face recognition method in the first aspect or any possible implementation of the first aspect.

第三方面,提供了一种电子设备,包括如第二方面或第二方面的任一可能的实现方式中的人脸识别装置。In a third aspect, an electronic device is provided, including the face recognition device in the second aspect or any possible implementation of the second aspect.

第四方面，提供了一种芯片，该芯片包括输入输出接口、至少一个处理器、至少一个存储器和总线，该至少一个存储器用于存储指令，该至少一个处理器用于调用该至少一个存储器中的指令，以执行第一方面或第一方面的任一可能的实现方式中的方法。In a fourth aspect, a chip is provided. The chip includes an input and output interface, at least one processor, at least one memory, and a bus. The at least one memory is used to store instructions, and the at least one processor is used to call the instructions in the at least one memory to execute the method in the first aspect or any possible implementation of the first aspect.

第五方面,提供了一种计算机可读介质,用于存储计算机程序,所述计算机程序包括用于执行上述第一方面或第一方面的任一可能的实现方式中的指令。In a fifth aspect, a computer-readable medium is provided for storing a computer program, and the computer program includes instructions for executing the above-mentioned first aspect or any possible implementation of the first aspect.

第六方面，提供了一种包括指令的计算机程序产品，当计算机运行所述计算机程序产品的所述指令时，所述计算机执行上述第一方面或第一方面的任一可能的实现方式中的人脸识别的方法。In a sixth aspect, a computer program product including instructions is provided. When a computer runs the instructions of the computer program product, the computer executes the face recognition method in the first aspect or any possible implementation of the first aspect.

具体地,该计算机程序产品可以运行于上述第三方面的电子设备上。Specifically, the computer program product can run on the electronic device of the third aspect.

附图说明Description of the drawings

图1(a)图是根据本申请实施例的一种人脸识别装置的示意性框图。Fig. 1(a) is a schematic block diagram of a face recognition device according to an embodiment of the present application.

图1(b)图是根据本申请实施例的一种人脸识别方法的示意性流程图。Figure 1(b) is a schematic flowchart of a face recognition method according to an embodiment of the present application.

图1(c)图是根据本申请实施例的一种卷积神经网络的示意性框图。Fig. 1(c) is a schematic block diagram of a convolutional neural network according to an embodiment of the present application.

图2是根据本申请实施例的一种人脸识别方法的示意性流程图。Fig. 2 is a schematic flowchart of a face recognition method according to an embodiment of the present application.

图3中的(a)图是根据本申请实施例的三维模型人脸的红外图像。Figure 3 (a) is an infrared image of a human face of a three-dimensional model according to an embodiment of the present application.

图3中的(b)图是根据本申请实施例的用户活体人脸的红外图像。Figure 3 (b) is an infrared image of a human face of a user according to an embodiment of the present application.

图4是根据本申请实施例的另一种人脸识别方法的示意性流程图。Fig. 4 is a schematic flowchart of another method for face recognition according to an embodiment of the present application.

图5是根据本申请实施例的另一种人脸识别方法的示意性流程图。Fig. 5 is a schematic flowchart of another face recognition method according to an embodiment of the present application.

图6是根据本申请实施例的另一种人脸识别方法的示意性流程图。Fig. 6 is a schematic flowchart of another face recognition method according to an embodiment of the present application.

图7是根据本申请实施例的另一种人脸识别方法的示意性流程图。Fig. 7 is a schematic flowchart of another face recognition method according to an embodiment of the present application.

图8是根据本申请实施例的人脸识别方法中一种人脸防伪判别方法的示意性流程图。Fig. 8 is a schematic flowchart of a method for anti-counterfeiting discrimination of a face in a face recognition method according to an embodiment of the present application.

图9是根据本申请实施例的人脸识别方法中另一种人脸防伪判别方法的示意性流程图。Fig. 9 is a schematic flow chart of another method for anti-counterfeiting discrimination of a face in a face recognition method according to an embodiment of the present application.

图10是根据本申请实施例的一种卷积神经网络的示意性框图。Fig. 10 is a schematic block diagram of a convolutional neural network according to an embodiment of the present application.

图11是根据本申请实施例的一种全连接层示意图。Fig. 11 is a schematic diagram of a fully connected layer according to an embodiment of the present application.

图12是根据本申请实施例的另一种卷积神经网络的示意性框图。Fig. 12 is a schematic block diagram of another convolutional neural network according to an embodiment of the present application.

图13是根据本申请实施例的人脸识别方法中另一种人脸防伪判别方法的示意性流程图。FIG. 13 is a schematic flow chart of another method for anti-counterfeiting discrimination in a face recognition method according to an embodiment of the present application.

图14是根据本申请实施例的另一种卷积神经网络的示意性框图。Fig. 14 is a schematic block diagram of another convolutional neural network according to an embodiment of the present application.

图15是根据本申请实施例的人脸识别方法中一种人脸注册方法的示意性流程图。Fig. 15 is a schematic flowchart of a face registration method in a face recognition method according to an embodiment of the present application.

图16是根据本申请实施例的人脸识别方法中另一种人脸注册方法的示意性流程图。Fig. 16 is a schematic flowchart of another face registration method in a face recognition method according to an embodiment of the present application.

图17是根据本申请实施例的人脸识别方法中另一种人脸注册方法的示意性流程图。Fig. 17 is a schematic flowchart of another face registration method in a face recognition method according to an embodiment of the present application.

图18是根据本申请实施例的一种人脸识别装置的示意性框图。Fig. 18 is a schematic block diagram of a face recognition device according to an embodiment of the present application.

图19是根据本申请实施例的电子设备的示意性框图。Fig. 19 is a schematic block diagram of an electronic device according to an embodiment of the present application.

具体实施方式Detailed ways

下面将结合附图,对本申请实施例中的技术方案进行描述。The technical solutions in the embodiments of the present application will be described below in conjunction with the drawings.

本申请实施例可适用于光学人脸识别系统，包括但不限于基于光学人脸成像的产品。该光学人脸识别系统可以应用于具有图像采集装置（如摄像头）的各种电子设备，该电子设备可以为手机，平板电脑，智能可穿戴装置、智能门锁等，本公开的实施例对此不做限定。The embodiments of the present application may be applied to optical face recognition systems, including but not limited to products based on optical face imaging. The optical face recognition system can be applied to various electronic devices with an image acquisition apparatus (such as a camera); the electronic device may be a mobile phone, a tablet computer, a smart wearable device, a smart door lock, etc., which is not limited in the embodiments of the present disclosure.

应理解,本文中的具体的例子只是为了帮助本领域技术人员更好地理解本申请实施例,而非限制本申请实施例的范围。It should be understood that the specific examples in this document are only to help those skilled in the art to better understand the embodiments of the present application, rather than limiting the scope of the embodiments of the present application.

还应理解,本申请实施例中的公式只是一种示例,而非限制本申请实施例的范围,各公式可以进行变形,这些变形也应属于本申请保护的范围。It should also be understood that the formulas in the embodiments of the present application are only examples, and do not limit the scope of the embodiments of the present application. Each formula can be modified, and these modifications should also fall within the protection scope of the present application.

还应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。It should also be understood that, in the various embodiments of the present application, the size of the sequence number of each process does not mean the order of execution, and the execution order of each process should be determined by its function and internal logic, and should not correspond to the embodiments of the present application. The implementation process constitutes any limitation.

还应理解,本说明书中描述的各种实施方式,既可以单独实施,也可以组合实施,本申请实施例对此并不限定。It should also be understood that the various implementation manners described in this specification can be implemented individually or in combination, which is not limited in the embodiments of the present application.

除非另有说明,本申请实施例所使用的所有技术和科学术语与本申请的技术领域的技术人员通常理解的含义相同。本申请中所使用的术语只是为了描述具体的实施例的目的,不是旨在限制本申请的范围。本申请所使用的术语“和/或”包括一个或多个相关的所列项的任意的和所有的组合。Unless otherwise specified, all technical and scientific terms used in the embodiments of the present application have the same meaning as commonly understood by those skilled in the technical field of the present application. The terminology used in this application is only for the purpose of describing specific embodiments, and is not intended to limit the scope of this application. The term "and/or" as used in this application includes any and all combinations of one or more related listed items.

为了便于理解，先结合图1(a)、图1(b)和图1(c)，对基于2D图像的人脸识别解锁电子设备的过程进行简单介绍。In order to facilitate understanding, the process of unlocking an electronic device through 2D-image-based face recognition is briefly introduced below with reference to Figure 1(a), Figure 1(b) and Figure 1(c).

如图1(a)所示,人脸识别装置10包括红外发光模组110、红外图像采集模组120和处理器130。其中,所述红外光发光模组110用于发出红外光信号,其可以为红外光发光二极管(Light Emitting Diode,LED),或者也可以为垂直腔面发射激光器(VerticalCavity SurfaceEmitting Laser,VCSEL)等其它红外光发光光源,本申请实施例对此不做限定。所述红外图像采集模组120可以为红外摄像头,其中包括红外图像传感器,该红外图像传感器用于接收红外光信号,并将接收的红外光信号转换为对应的电信号,从而生成红外图像。所述处理器130可以为一种微处理器(Microprocessor Unit,MPU),可以控制所述红外发光模组110和所述红外图像采集模组120进行人脸图像采集,并且进行人脸图像识别。As shown in FIG. 1( a ), the face recognition device 10 includes an infrared light emitting module 110, an infrared image acquisition module 120 and a processor 130. Wherein, the infrared light emitting module 110 is used to emit infrared light signals, which may be an infrared light emitting diode (Light Emitting Diode, LED), or may also be a vertical cavity surface emitting laser (Vertical Cavity Surface Emitting Laser, VCSEL), etc. The infrared light emitting light source is not limited in the embodiment of the present application. The infrared image acquisition module 120 may be an infrared camera, which includes an infrared image sensor, which is used to receive infrared light signals and convert the received infrared light signals into corresponding electrical signals, thereby generating infrared images. The processor 130 may be a microprocessor unit (MPU), which may control the infrared light emitting module 110 and the infrared image acquisition module 120 to collect facial images and perform facial image recognition.

具体地,如图1(b)所示,当需要进行人脸识别时,具体2D识别流程如下:Specifically, as shown in Figure 1(b), when face recognition is required, the specific 2D recognition process is as follows:

S110:采集识别目标的2D红外图像。具体地,所述红外发光模组110发出红外光,该红外光照射在识别目标上,该识别目标可以为用户人脸,也 可以为照片,3D模型或者任意其它物体。经过识别目标表面反射的红外反射光被红外图像传感器120接收并转换为2D红外图像,所述红外图像传感器120将2D红外图像传输给处理器130。S110: Collect 2D infrared images of the identified target. Specifically, the infrared light-emitting module 110 emits infrared light, and the infrared light is irradiated on a recognition target. The recognition target may be a user's face, or may be a photo, a 3D model or any other object. The infrared reflected light reflected by the surface of the identified target is received by the infrared image sensor 120 and converted into a 2D infrared image, and the infrared image sensor 120 transmits the 2D infrared image to the processor 130.

S120:人脸检测(face detection)。即接收2D红外图像，检测2D红外图像上是否存在人脸。例如，采用单个卷积神经网络(Convolutional Neural Networks,CNN)对2D红外图像进行人脸检测。首先训练一个判断人脸非人脸的人脸检测卷积神经网络，将2D红外图像的数据输入至人脸检测卷积神经网络中，通过卷积计算等步骤，将2D红外图像的数据的特征提取后，进行分类判别，从而判断该2D红外图像上是否存在人脸。S120: face detection. That is, the 2D infrared image is received, and whether a human face exists in the 2D infrared image is detected. For example, a single convolutional neural network (CNN) is used to perform face detection on the 2D infrared image. First, a face detection convolutional neural network that distinguishes face from non-face is trained, and the data of the 2D infrared image is input into it; after the features of the 2D infrared image data are extracted through convolution calculation and other steps, classification is performed to determine whether a human face exists in the 2D infrared image.

具体地，如图1(c)所示，卷积神经网络主要包括卷积层101(convolutional layer)、激励层102(activation layer)，池化层103(pooling layer)、以及全连接层104(fully-connected layer)。其中，卷积神经网路中每层卷积层由若干卷积核(convolutional kernel)组成，每个卷积核的参数都是通过反向传播算法优化得到的。卷积运算的目的是提取输入的不同特征，不同的卷积核提取不同的特征图(feature map)，更多层的卷积网络能从边缘特征、线条特征等低级特征中迭代提取更复杂的特征。激励层使用激励函数(activation function)给卷积神经网络引入了非线性，常用的激励函数有sigmoid、tanh、ReLU函数等。通常在卷积层之后会得到维度很大的特征，池化层将特征切成几个区域，取其最大值(max pooling)或平均值(average pooling)，得到新的、维度较小的特征图。全连接层把所有局部特征结合变成全局特征，用来计算最后每一类的得分，从而判断输入的数据的类别。Specifically, as shown in Figure 1(c), the convolutional neural network mainly includes a convolutional layer 101, an activation layer 102, a pooling layer 103, and a fully-connected layer 104. Each convolutional layer is composed of several convolutional kernels, and the parameters of each kernel are optimized through the back-propagation algorithm. The purpose of the convolution operation is to extract different features of the input: different kernels extract different feature maps, and deeper convolutional networks can iteratively extract more complex features from low-level features such as edges and lines. The activation layer uses activation functions to introduce non-linearity into the network; commonly used activation functions include sigmoid, tanh, and ReLU. The feature obtained after a convolutional layer usually has a large dimension; the pooling layer cuts the feature into several regions and takes their maximum (max pooling) or average (average pooling) to obtain a new feature map with smaller dimensions. The fully-connected layer combines all local features into global features, which are used to compute the final score of each class so as to determine the category of the input data.
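The three feature-extraction building blocks described above (convolution, activation, pooling) can be sketched in plain NumPy; this is an illustrative toy, not the network of the application, and the 2x2 edge kernel and image size are arbitrary assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    # Convolutional layer: slide the kernel over the image ("valid" mode)
    # to extract a feature map.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Activation layer: introduces non-linearity.
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # Pooling layer: keep the maximum of each size x size region.
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # simple vertical-edge detector
feat = max_pool(relu(conv2d(img, edge_kernel)))
print(feat.shape)  # (2, 2)
```

Stacking several such stages before a fully-connected classifier yields the overall CNN structure of Figure 1(c).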

S121:若2D红外图像上存在人脸，则对2D红外图像进行人脸剪切。具体地，将上述人脸检测卷积神经网络的全连接层改为卷积层，这样网络变成了全卷积网络，2D红外图像经过全卷积网络将得到特征图，特征图上每一个“点”对应该位置映射到原图区域属于人脸的概率，将属于人脸概率大于设定阈值的视为人脸候选框。将2D红外图像中人脸候选框中的图像剪切形成新的人脸2D红外图像。S121: If a human face exists in the 2D infrared image, face cropping is performed on it. Specifically, the fully-connected layer of the above face detection convolutional neural network is replaced with a convolutional layer, turning the network into a fully convolutional network; passing the 2D infrared image through it yields a feature map, in which each "point" corresponds to the probability that its mapped region of the original image belongs to a face. Regions whose face probability exceeds a set threshold are taken as face candidate boxes, and the image inside a face candidate box is cropped from the 2D infrared image to form a new 2D infrared face image.
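A minimal sketch of mapping the fully-convolutional probability map back to candidate boxes in original-image coordinates; the stride, window size, and threshold here are hypothetical values for illustration only.

```python
import numpy as np

def face_candidates(prob_map, threshold=0.8, stride=16, window=32):
    """Map each probability-map cell above the threshold back to a
    candidate box (x, y, w, h) in original-image coordinates."""
    boxes = []
    for i, j in zip(*np.where(prob_map > threshold)):
        boxes.append((int(j) * stride, int(i) * stride, window, window))
    return boxes

prob_map = np.array([[0.1, 0.9],
                     [0.2, 0.3]])
print(face_candidates(prob_map))  # [(16, 0, 32, 32)]
```

Each returned box delimits a region that would then be cropped from the 2D infrared image as a new face image.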

S122:若2D红外图像上不存在人脸,则将重启参数加1。S122: If there is no human face in the 2D infrared image, add 1 to the restart parameter.

若2D红外图像上不存在人脸,则人脸检测失败,换言之,该识别目标不为用户,匹配失败。If there is no face in the 2D infrared image, the face detection fails. In other words, the recognition target is not the user, and the matching fails.

可选地,还可以通过级联CNN,Dlib,OpenCV等方法进行人脸检测, 并剪切得到新的人脸2D红外图像。本申请实施例中对此不做限定。Optionally, face detection can also be performed by cascading CNN, Dlib, OpenCV and other methods, and a new 2D infrared image of the face can be cut. This is not limited in the embodiments of this application.

S130:2D人脸识别(face recognition)。即对S121形成的人脸2D红外图像进行识别，判断该人脸2D红外图像是否为用户的人脸。例如，采用卷积神经网络的方法进行人脸识别，具体地，首先训练一个判断是否为用户人脸的人脸识别卷积神经网络，该人脸识别卷积神经网络按照模板库中的多个特征模板分类。将人脸2D红外图像的数据输入至人脸识别卷积神经网络中，通过卷积计算等步骤，将人脸2D红外图像的数据的特征提取后，进行分类判别，判断该人脸2D红外图像是否与模板库中多个特征模板匹配。S130: 2D face recognition. That is, the 2D infrared face image formed in S121 is recognized to determine whether it is the user's face. For example, face recognition is performed with a convolutional neural network: first, a face recognition convolutional neural network that judges whether an image is the user's face is trained, and it classifies according to multiple feature templates in the template library. The data of the 2D infrared face image is input into the network; after its features are extracted through convolution calculation and other steps, classification is performed to determine whether the 2D infrared face image matches any of the feature templates in the template library.
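One common way to realize the template-matching step, shown here purely as an assumed sketch (the application does not specify the metric), is to compare the extracted feature vector with each enrolled template by cosine similarity; the 3-dimensional vectors and the 0.9 threshold are hypothetical stand-ins.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_templates(feature, templates, threshold=0.9):
    """Return True if the extracted face feature matches any enrolled
    feature template in the template library."""
    return any(cosine_similarity(feature, t) >= threshold for t in templates)

# Hypothetical feature vectors standing in for CNN-extracted features.
templates = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
probe = np.array([0.98, 0.02, 0.0])
print(match_templates(probe, templates))                      # True
print(match_templates(np.array([0.0, 0.0, 1.0]), templates))  # False
```

A match against any template corresponds to the success branch S131; no match corresponds to S132.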

S131:若匹配成功,则该人脸2D红外图像为用户的人脸图像,2D识别成功。进一步的,可以解锁人脸识别装置10所在的电子设备,也可以解锁电子设备上应用程序。S131: If the matching is successful, the 2D infrared image of the face is the face image of the user, and the 2D recognition is successful. Further, the electronic device where the face recognition apparatus 10 is located can be unlocked, and the application on the electronic device can also be unlocked.

S132:若匹配失败，则该人脸2D红外图像不为用户的人脸图像，2D识别失败，将重启参数加1。S132: If the matching fails, the 2D infrared face image is not the user's face image, the 2D recognition fails, and the restart parameter is increased by 1.

S140:判断重启参数是否小于第一阈值。S140: Determine whether the restart parameter is less than the first threshold.

S141:若重启参数小于第一阈值,则进入S110;S141: If the restart parameter is less than the first threshold, enter S110;

S142:若重启参数大于等于第一阈值,则识别失败。S142: If the restart parameter is greater than or equal to the first threshold, the recognition fails.
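The retry control flow of S140-S142, with the failure branches S122 and S132 incrementing the restart parameter, can be sketched as below; `capture` and `recognize` are hypothetical stand-ins for the acquisition and recognition steps, and the first threshold value is illustrative.

```python
def recognize_with_retries(capture, recognize, first_threshold=5):
    """Retry the capture/recognition loop until success, or until the
    restart parameter reaches the first threshold (S140-S142)."""
    restarts = 0
    while restarts < first_threshold:   # S140 / S141
        image = capture()
        if recognize(image):
            return True                 # S131: recognition succeeded
        restarts += 1                   # S122 / S132: failure bumps the counter
    return False                        # S142: recognition fails

# Hypothetical stand-ins: fail twice, then succeed on the third attempt.
attempts = iter([False, False, True])
ok = recognize_with_retries(lambda: None, lambda _: next(attempts))
print(ok)  # True
```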

在图1(b)中，人脸识别装置10通过采集人脸的2D红外图像，判断人脸的2D图像是否符合特征人脸模板库中的特征人脸来进行人脸识别，从而对电子设备和电子设备上的应用程序(application,APP)进行解锁。由于在解锁过程中，人脸识别装置10仅仅依据2D图像上的二维特征进行人脸识别，无法识别采集的2D红外图像是否来源自活人人脸或者其他照片、视频等其他非活人人脸物体，换言之，该人脸识别装置10不具有防伪功能，可以通过盗取带有用户人脸的照片、视频等信息，对电子设备以及应用程序进行解锁，因而人脸识别装置及电子设备的安全性能受到了极大的影响。In Figure 1(b), the face recognition device 10 performs face recognition by collecting a 2D infrared image of a face and judging whether it matches the characteristic faces in the characteristic face template library, so as to unlock the electronic device and applications (APPs) on it. During the unlocking process, the face recognition device 10 performs face recognition only based on the two-dimensional features of the 2D image, and cannot identify whether the collected 2D infrared image comes from a live human face or from a non-live object such as a photo or video. In other words, the face recognition device 10 has no anti-spoofing function: electronic devices and applications can be unlocked by stealing photos, videos, or other material carrying the user's face, so the security of the face recognition device and the electronic device is greatly compromised.

由于活体人眼虹膜和非活体人眼（照片或视频中的人眼图像、三维模型中的人眼模型）对于红外光反射具有明显差异，因而产生的活体人眼虹膜和非活体人眼的红外图像也因此具有较大差异。基于此，本申请实施例提供一种带有防伪功能的人脸识别方案，通过获取识别目标的眼部红外图像，并对该眼部红外图像进行人脸防伪，判断其是否来自用户的活体人眼虹膜，从而判断识别目标是否为活体人脸，因此大大提高人脸识别装置及电子设备的安全性。Since the iris of a live human eye and a non-live human eye (a human-eye image in a photo or video, or a human-eye model in a three-dimensional model) reflect infrared light very differently, the resulting infrared images of live irises and non-live eyes also differ greatly. Based on this, the embodiments of the present application provide a face recognition solution with an anti-spoofing function: an infrared image of the eye of the recognition target is acquired and subjected to face anti-spoofing to judge whether it comes from the live iris of the user's eye, and thus whether the recognition target is a live human face, which greatly improves the security of the face recognition device and the electronic device.

下面,结合图2至图9,对本申请实施例提供的人脸识别方法进行详细介绍。Hereinafter, the face recognition method provided by the embodiment of the present application will be introduced in detail with reference to FIGS. 2-9.

图2为本申请实施例提供的一种人脸识别的方法200,包括:FIG. 2 is a face recognition method 200 provided by an embodiment of this application, including:

S210:获取识别目标的目标图像及眼部图像;S210: Obtain a target image and an eye image of the recognition target;

S220:根据所述眼部图像进行基于虹膜的人脸防伪判断,以确定所述识别目标是否为活体人脸并输出活体判断结果;S220: Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living human face and output a living judgment result;

S230:根据所述目标图像进行特征模板匹配,并输出匹配结果;S230: Perform feature template matching according to the target image, and output a matching result;

S240:根据所述活体判断结果和所述匹配结果输出人脸识别结果。S240: Output a face recognition result according to the living body judgment result and the matching result.
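The steps S210-S240 above suggest that the final result combines the liveness judgment with the template-matching result; a minimal sketch, assuming (as the flow implies, though S240 does not spell out the rule) that recognition succeeds only when both succeed:

```python
def face_recognition_result(liveness_ok, match_ok):
    """S240: output the face recognition result from the liveness
    judgment (S220) and the feature template matching result (S230)."""
    return liveness_ok and match_ok

print(face_recognition_result(True, True))   # True
print(face_recognition_result(False, True))  # False: a matching photo/video is still rejected
```

The second case is the anti-spoofing benefit: even a perfect template match fails when the target is judged non-live.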

应理解，所述识别目标也称为第一识别目标、第二识别目标等，可以用于区分不同的目标物体，相应的，所述识别目标的目标图像及眼部图像也可以称为第一目标图像或第二目标图像，第一眼部图像或第二眼部图像等等。所述识别目标包括但不限于人脸、照片、视频、三维模型等任意物体。例如，所述识别目标可以为用户人脸、其他人的人脸、用户照片、贴有照片的曲面模型等等。It should be understood that the recognition target may also be called the first recognition target, the second recognition target, etc., to distinguish different target objects; correspondingly, the target image and the eye image of the recognition target may also be called the first target image or the second target image, the first eye image or the second eye image, and so on. The recognition target includes, but is not limited to, any object such as a human face, a photo, a video, or a three-dimensional model. For example, the recognition target may be the user's face, another person's face, a photo of the user, a curved-surface model with a photo attached, etc.

可选地，所述眼部图像可以为活体人脸的眼睛区域图像，人的眼睛结构由巩膜、虹膜、瞳孔、晶状体、视网膜等部分组成。虹膜是位于黑色瞳孔和白色巩膜之间的圆环状部分，其包含有很多相互交错的斑点、细丝、冠状、条纹、隐窝等的细节特征。这些特征决定了虹膜特征的唯一性，同时也决定了身份识别的唯一性。由于其具有识别人体的独一无二的特征，也难以进行复制和伪造，因此可以用于进行人脸防伪和人脸识别。Optionally, the eye image may be an image of the eye region of a live human face. The human eye is composed of the sclera, iris, pupil, lens, retina, and other parts. The iris is the annular part between the black pupil and the white sclera, containing many interlaced detailed features such as spots, filaments, crowns, stripes, and crypts. These features determine the uniqueness of the iris, and hence the uniqueness of identity recognition. Because the iris uniquely identifies a human body and is difficult to copy or forge, it can be used for face anti-spoofing and face recognition.

可选地，当眼部图像中的虹膜图像可以用于区分活体人脸和非活体人脸时，所述眼部图像可以为可见光生成的彩色图像，也可以为红外光生成的红外图像或者其它图像，本申请实施例对此不做限定。Optionally, as long as the iris image in the eye image can be used to distinguish a live human face from a non-live one, the eye image may be a color image generated by visible light, an infrared image generated by infrared light, or another type of image, which is not limited in the embodiments of the present application.

优选地，在本申请实施例中，所述眼部图像为红外图像，下文以眼部图像为红外图像为例进行详细说明。具体地，所述红外(Infrared Radiation,IR)图像为基于经过识别目标表面反射的红外光信号所形成的图像，其表现为灰度(Gray Scale)图像，通过图像像素点的灰度表现识别目标的外观形状。所述识别目标的眼部图像为包括虹膜区域的眼部红外图像，例如，基于用户人脸的活体人眼以及其中虹膜反射的反射红外光形成的红外图像，或者基于人脸照片中眼部以及其中虹膜区域反射的反射红外光形成的红外图像等等。Preferably, in the embodiments of the present application, the eye image is an infrared image, and the following description takes an infrared eye image as an example. Specifically, an infrared (Infrared Radiation, IR) image is an image formed from the infrared light signal reflected by the surface of the recognition target; it is a grayscale (Gray Scale) image in which the appearance and shape of the recognition target are expressed by the gray levels of the image pixels. The eye image of the recognition target is an infrared eye image including the iris region, for example, an infrared image formed from the infrared light reflected by a live human eye (including its iris) of the user's face, or from the eye and iris region in a photo of a face, and so on.

由于活体人眼虹膜的特殊形态、成分和结构，其反射的反射红外光与照片、模型等物体反射的反射红外光有较大的区别，因此，包括虹膜的眼部图像中可以区别不同的识别目标的虹膜信息，用于区分活体人脸以及非活体人脸，换言之，对活体人脸的包括虹膜的眼部图像与对非活体人脸的包括虹膜的眼部图像不同，并且差异较大，本申请实施例即利用该区别点，基于包括虹膜的眼部图像进行人脸防伪判别。其中，所述非活体人脸包括但不限于：用户人脸照片，用户人脸视频，放置于三维曲面上的用户人脸照片，用户人脸模型等等。Due to the special form, composition, and structure of the iris of a live human eye, the infrared light it reflects differs considerably from that reflected by photos, models, and other objects. Therefore, an eye image including the iris carries iris information that distinguishes different recognition targets and can be used to separate live human faces from non-live ones; in other words, an iris-containing eye image of a live face differs greatly from that of a non-live face. The embodiments of the present application exploit exactly this difference to perform face anti-spoofing discrimination based on an eye image that includes the iris. The non-live human face includes, but is not limited to: a photo of the user's face, a video of the user's face, a photo of the user's face placed on a three-dimensional curved surface, a model of the user's face, and so on.

例如，如图3所示，图3中的(a)图为三维模型人脸的红外图像，其眼部的红外图像仅为人眼模型，其眼部模型的“虹膜”区域仅为模拟虹膜形态的示意图片，不包含活体人眼虹膜信息。图3中的(b)图为用户活体人脸的红外图像，由图中可以体现出真人活体的人眼虹膜的特征信息，与图3中的(a)图完全不同。For example, as shown in Figure 3, part (a) of Figure 3 is an infrared image of a three-dimensional model face: the infrared image of its eye is only a human-eye model, and the "iris" region of the eye model is only a schematic picture simulating the iris shape, containing no live-iris information. Part (b) of Figure 3 is an infrared image of the user's live face, which shows the characteristic information of a real, live human iris and is completely different from part (a) of Figure 3.

获得包括虹膜的眼部图像之后，基于该眼部图像的特征信息，进行人脸防伪判别，以确定所述识别目标的虹膜是否为活体人脸的虹膜，从而判断所述识别目标是否为活体人脸，达到人脸防伪的效果。After the eye image including the iris is obtained, face anti-spoofing discrimination is performed based on the feature information of the eye image to determine whether the iris of the recognition target is the iris of a live human face, and thereby whether the recognition target is a live human face, achieving the face anti-spoofing effect.

具体地,在人脸识别的过程中,除了判断识别目标是否为活体人脸,还需要进行特征模板匹配,结合特征模板匹配以及活体判断结果进行人脸识别。所述特征模板匹配为将目标图像与至少一个用户的特征模板进行匹配,可以判断该目标图像是否属于用户的图像。可选地,该特征模板为用户在不同角度,不同环境等不同条件下的多个人脸或局部人脸图像的特征数据。所述特征模板存储在人脸识别的装置中,特别的,可以存储在装置中的存储器中。Specifically, in the process of face recognition, in addition to judging whether the recognition target is a living body face, it is also necessary to perform feature template matching, which combines the feature template matching and the result of living body judgment to perform face recognition. The feature template matching is to match the target image with the feature template of at least one user, and it can be determined whether the target image belongs to the user's image. Optionally, the feature template is feature data of multiple faces or partial face images of the user under different conditions such as different angles and different environments. The feature template is stored in a face recognition device, in particular, it can be stored in a memory in the device.

结合人脸防伪判断以及特征模板匹配判断,能够增强人脸识别过程的可靠性,提升安全性能。Combining face anti-counterfeiting judgment and feature template matching judgment can enhance the reliability of the face recognition process and improve safety performance.

目前,人脸防伪有不同的安全等级,如下表1所示,不同的等级代表不同的人脸防伪要求。即例如:防伪等级为等级1时,能识别出2D打印静态平面人脸。Currently, there are different security levels for face anti-counterfeiting, as shown in Table 1 below. Different levels represent different face anti-counterfeiting requirements. That is, for example: when the anti-counterfeiting level is level 1, the 2D printed static flat face can be recognized.

表1Table 1

【表1在原文中为图片：PCTCN2019093159-appb-000001、PCTCN2019093159-appb-000002】[Table 1 appears as images in the original document: PCTCN2019093159-appb-000001, PCTCN2019093159-appb-000002]

图1(a)和图1(b)中的人脸识别装置以及人脸识别方法无法判断采集的2D图像来源自照片还是真人脸，因而不具有防伪功能，无法达到表1中人脸防伪等级的等级1。但在本申请实施例中，由于可以通过包括虹膜的眼部图像得到人体虹膜的特征信息，因而可以识别出活体人脸与非活体人脸，从而可以达到人脸防伪等级5，防伪和识别的安全性能得到大幅提高。The face recognition device and face recognition method in Figure 1(a) and Figure 1(b) cannot judge whether the collected 2D image comes from a photo or a real face; they therefore have no anti-spoofing function and cannot reach even level 1 of the face anti-spoofing levels in Table 1. In the embodiments of the present application, however, since the feature information of the human iris can be obtained from an eye image including the iris, live and non-live human faces can be distinguished, so that face anti-spoofing level 5 can be reached and the security of anti-spoofing and recognition is greatly improved.

可选地,在本申请实施例中,可以通过红外图像采集装置获取识别目标的红外图像,再从该识别目标的红外图像中获取该识别目标的眼部红外图像。Optionally, in the embodiment of the present application, the infrared image of the recognition target may be obtained by the infrared image acquisition device, and then the infrared image of the eyes of the recognition target may be obtained from the infrared image of the recognition target.

在一种可能的实施方式中，先从所述识别目标的红外图像中初步检测并剪切出大致的眼部区域，然后判断该大致的眼部区域中是否存在虹膜，获取虹膜红外图像或者包括有虹膜红外图像的眼部红外图像，具体地，获取左眼虹膜红外图像和/或右眼虹膜红外图像，或者包括左眼虹膜红外图像的左眼红外图像和/或包括右眼虹膜红外图像的右眼红外图像。In a possible implementation manner, a rough eye region is first detected and cropped from the infrared image of the recognition target, and it is then judged whether an iris exists in the rough eye region, so as to obtain an iris infrared image or an eye infrared image containing one; specifically, a left-eye iris infrared image and/or a right-eye iris infrared image is obtained, or a left-eye infrared image containing the left-eye iris infrared image and/or a right-eye infrared image containing the right-eye iris infrared image.

例如：根据左眼区域和右眼区域对称位于正面人脸图像上，对红外图像中对称区域的识别，检测到识别目标的大致眼部区域。或者根据人脸的“三庭五眼”几何特征，在人脸检测裁剪得到的人脸图像中截取出大致的眼部区域形成眼部红外图像。可选地，将人脸纵向三等分形成上庭，中庭和下庭，大致的眼部区域处于上庭底部的1/5以及中庭上部的3/5处。可选地，根据大致眼部区域中的灰度值变化或者其它方式检测出该大致眼部区域中是否存在虹膜。For example: based on the fact that the left-eye region and the right-eye region are symmetrically located on a frontal face image, the rough eye region of the recognition target can be detected by recognizing symmetrical regions in the infrared image. Alternatively, according to the "three courts and five eyes" geometric features of the human face, the rough eye region is cut out of the face image obtained by face detection and cropping to form an eye infrared image. Optionally, the face is divided vertically into three equal parts forming the upper, middle, and lower courts, and the rough eye region lies in the bottom 1/5 of the upper court and the top 3/5 of the middle court. Optionally, whether an iris exists in the rough eye region is detected according to gray-value changes in that region or in other ways.
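The "three courts" cropping rule above can be sketched as follows: the face is split into three equal vertical thirds, and the rough eye band is taken as the bottom 1/5 of the upper third plus the top 3/5 of the middle third. The image size below is an arbitrary example.

```python
import numpy as np

def rough_eye_region(face_img):
    """Crop the rough eye band using the 'three courts, five eyes' rule:
    bottom 1/5 of the upper court plus top 3/5 of the middle court."""
    h = face_img.shape[0]
    top = int(h * 4 / 15)     # upper court is [0, h/3); keep its bottom 1/5
    bottom = int(h * 8 / 15)  # middle court is [h/3, 2h/3); keep its top 3/5
    return face_img[top:bottom, :]

face = np.zeros((150, 100))   # stand-in for a cropped face image
eyes = rough_eye_region(face)
print(eyes.shape)  # (40, 100)
```

The resulting band, 4/15 of the face height, is where the iris search of the next step would be performed.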

在另一种可能的实施方式中,可以直接从识别目标的红外图像中检测并剪切出虹膜红外图像或者包括有虹膜红外图像的眼部红外图像。In another possible implementation manner, the infrared image of the iris or the infrared image of the eye including the infrared image of the iris may be directly detected and cut out from the infrared image of the recognition target.

例如，由于人脸的红外灰度图像中，眼部区域的灰度值与人脸其它区域相比，眼部的虹膜区域的灰度值小，而巩膜区域的灰度值大，且虹膜区域与巩膜区域之间灰度值变化梯度明显，因此，可以通过检测红外图像中灰度变化特征检测眼部区域以及其中的虹膜区域，获取眼部区域和虹膜区域在图像中的坐标，剪切得到虹膜红外图像或者包括有虹膜红外图像的眼部红外图像。For example, in an infrared grayscale image of a human face, compared with other regions of the face, the iris region of the eye has a small gray value while the sclera region has a large one, and the gray-value gradient between the iris region and the sclera region is obvious. Therefore, the eye region and the iris region within it can be detected by detecting gray-value change features in the infrared image, their coordinates in the image can be obtained, and an iris infrared image, or an eye infrared image containing one, can be cropped out.
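A toy one-dimensional sketch of this gradient cue: a dark iris (low gray value) next to a bright sclera (high gray value) produces a large jump in the gray profile, so thresholding the gradient locates the boundary. The gray values and threshold are illustrative assumptions.

```python
import numpy as np

def boundary_positions(row, grad_threshold=100):
    """Positions where the gray value jumps sharply along a scan line,
    as between the dark iris and the bright sclera."""
    grad = np.abs(np.diff(row.astype(int)))
    return np.where(grad > grad_threshold)[0].tolist()

# Toy scan line: bright sclera (200), dark iris (40), bright sclera again.
row = np.array([200, 200, 40, 40, 40, 200, 200], dtype=np.uint8)
print(boundary_positions(row))  # [1, 4]
```

Applying the same idea in two dimensions yields the iris-region coordinates used for cropping.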

具体地，在本申请实施例中，当眼部红外图像中包括有虹膜图像时，为有效的眼部红外图像，可以直接用于活体人脸识别，或者从眼部红外图像中剪切出虹膜红外图像，用于活体人脸识别。当眼部红外图像中不包括虹膜图像时，即用户闭眼或者其它识别目标中无虹膜或虹膜图案的情况下，该眼部红外图像为无效的眼部红外图像，不能用于活体人脸识别。为方便描述，下文中将包含有虹膜图像的眼部红外图像以及虹膜红外图像均简称为眼部红外图像。具体地所述眼部红外图像包括：左眼眼部红外图像和/或右眼眼部红外图像。Specifically, in the embodiments of the present application, when the eye infrared image includes an iris image, it is a valid eye infrared image and can be used directly for live face recognition, or the iris infrared image can be cropped from it for live face recognition. When the eye infrared image does not include an iris image, i.e., when the user's eyes are closed or another recognition target has no iris or iris pattern, the eye infrared image is invalid and cannot be used for live face recognition. For ease of description, both an eye infrared image containing an iris image and an iris infrared image itself are hereinafter referred to simply as an eye infrared image. Specifically, the eye infrared image includes: a left-eye infrared image and/or a right-eye infrared image.

应理解,在本申请实施例中,还可以采用任意其它可以识别眼部虹膜的算法或方式从识别目标的红外图像中获取眼部图像以及判断眼部图像中是否存在虹膜,本申请实施例对此不做限定。It should be understood that in the embodiments of the present application, any other algorithms or methods that can recognize the iris of the eye can also be used to obtain an eye image from the infrared image of the recognition target and determine whether there is an iris in the eye image. This is not limited.

Specifically, in the embodiments of the present application, feature template matching for 2D recognition may be performed based on the acquired 2D target image of the recognition target, and face recognition may be performed, and the face recognition result output, based on the feature template matching result of the 2D recognition and the result of the face anti-spoofing judgment.

In the embodiments of the present application, when the feature template is a 2D image, feature template matching is a main step and implementation of 2D recognition; hereinafter, 2D recognition may also be understood as the feature template matching in 2D recognition.

Optionally, 2D recognition may be performed first, and on the basis of the 2D recognition result, iris-based face anti-spoofing may then be performed based on the eye infrared image, making the recognition process safer and more effective. For example, as shown in FIG. 4, another face recognition method 300 provided by an embodiment of the present application includes:

S310: Acquire a target image of the recognition target.

S340: Perform 2D recognition based on the target image.

When the target image successfully matches any one of the multiple feature templates, the 2D recognition succeeds, indicating that the target image contains the user's face image. When the target image fails to match all of the feature templates, the 2D recognition fails, indicating that the target image does not contain the user's face image.

Optionally, in this embodiment of the present application, the 2D recognition may be the same as or similar to the 2D recognition process in FIG. 1(b).

S351: When the 2D recognition succeeds, acquire an eye image of the recognition target based on the target image.

S352: When the 2D recognition fails, determine that the face recognition fails and output a first face recognition result.

Optionally, the first face recognition result may include, but is not limited to, specific information such as failure or non-authenticated user.

S360: Perform iris-based face anti-spoofing judgment according to the eye image to determine whether the recognition target is a live human face.

S371: When the recognition target is a live human face, determine that the face recognition succeeds and output a second face recognition result.

Optionally, the second face recognition result may include, but is not limited to, specific information such as success or live authenticated user.

S372: When the recognition target is not a live human face, determine that the face recognition fails and output a third face recognition result.

Optionally, the third face recognition result may include, but is not limited to, specific information such as failure or non-live authenticated user.

Optionally, in the embodiments of the present application, the target image may be an infrared image, a visible-light image, or another type of image. When the target image is an infrared image, an eye infrared image of the recognition target is acquired based on the infrared image, and the iris-based face anti-spoofing judgment is performed according to the eye infrared image.

Optionally, face anti-spoofing may also be performed first, with 2D recognition then performed according to the face anti-spoofing result; this can rule out non-live faces in advance and improve recognition efficiency. For example, as shown in FIG. 5, another face recognition method 400 provided by an embodiment of the present application includes:

S410: Acquire a target image of the recognition target.

S440: Acquire an eye image of the recognition target based on the target image.

S450: Perform iris-based face anti-spoofing judgment according to the eye image to determine whether the recognition target is a live human face.

S461: When the recognition target is a live human face, perform 2D recognition based on the target image.

Optionally, the 2D recognition in this step may be the same as step S340 in FIG. 4; for the specific implementation, reference may be made to the foregoing description, which is not repeated here.

S462: When the recognition target is not a live human face, determine that the face recognition fails and output a fourth face recognition result.

Optionally, the fourth face recognition result may include, but is not limited to, specific information such as failure or non-live.

S471: When the 2D recognition succeeds, determine that the face recognition succeeds and output a fifth face recognition result.

Optionally, the fifth face recognition result may include, but is not limited to, specific information such as success or live authenticated user.

S472: When the 2D recognition fails, determine that the face recognition fails and output a sixth face recognition result.

Optionally, the sixth face recognition result may include, but is not limited to, specific information such as failure or live non-authenticated user.

Optionally, in step S310 and step S410, the target image of the recognition target may be acquired through an image acquisition module. The image acquisition module may be the infrared image acquisition module 120 in FIG. 1(a).

Optionally, the infrared image acquisition module may include an infrared photoelectric sensor, where the infrared photoelectric sensor includes multiple pixel units. Each pixel unit collects the infrared light signal reflected from the surface of the recognition target and converts the reflected infrared light signal into a pixel electrical signal corresponding to its light intensity. The value of each pixel electrical signal corresponds to one pixel of the infrared image, and its magnitude is represented as a gray value of the infrared image. Therefore, the infrared image formed by the pixel matrix composed of multiple pixel units can also be expressed as a numerical matrix composed of the gray values of multiple pixels. Optionally, the gray value of each pixel ranges from 0 to 255, where a gray value of 0 appears black and a gray value of 255 appears white.

Optionally, step S351 may specifically further include 3D face reconstruction. That is, when the 2D recognition succeeds, three-dimensional data of the recognition target are acquired and 3D face reconstruction is performed according to the three-dimensional data. If the 3D face reconstruction succeeds, the eye image of the recognition target is acquired based on the target image and the iris-based face anti-spoofing judgment is performed according to the eye image; if the 3D face reconstruction fails, the face anti-spoofing judgment is not performed. Specifically, the reconstructed face reflects the feature information of the face in three-dimensional space, and the face anti-spoofing judgment is performed on the basis of a successful 3D face reconstruction.

Optionally, as shown in FIG. 6, the face recognition method 300 further includes:

S320: Face detection; specifically, perform face detection based on the target image.

S331: A face is present; that is, when the face detection succeeds, crop the target image to obtain a face image.

S332: No face is present; that is, when the face detection fails, increment the restart parameter by 1.

S340 specifically includes S341: perform 2D face recognition based on the face image.

S351 specifically includes S353: when the 2D recognition succeeds, crop the face image to obtain the eye image of the recognition target.

S352: When the 2D recognition fails, determine that the face recognition fails and increment the restart parameter by 1.

S373: When the recognition target is not a live human face, increment the restart parameter by 1.

Optionally, as shown in FIG. 7, the face recognition method 400 further includes:

S420: Face detection; specifically, perform face detection based on the target image.

S431: A face is present; that is, when the face detection succeeds, crop the target image to obtain a face image.

S432: No face is present; that is, when the face detection fails, increment the restart parameter by 1.

S464: When the recognition target is not a live human face, increment the restart parameter by 1.

S463: When the recognition target is a live human face, proceed to step S465: perform 2D face recognition based on the face image.

Optionally, steps S320 to S332 and steps S420 to S432 may be the same as steps S120 to S122 in FIG. 1(b); for the specific implementation, reference may be made to the related description of FIG. 1(b), which is not repeated here.

Optionally, in the embodiments of FIG. 6 and FIG. 7, the method further includes judging the magnitude of the restart parameter: when the restart parameter is less than a second threshold, proceed to S310 or to S410; when the restart parameter is greater than or equal to the second threshold, determine that the recognition fails.
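The restart logic shared by FIG. 6 and FIG. 7 amounts to a bounded retry loop. A minimal sketch, with hypothetical names and a hypothetical value for the second threshold (the text does not fix one):

```python
SECOND_THRESHOLD = 3  # hypothetical value of the "second threshold"

def recognize_with_restart(run_pipeline):
    """Bounded retry loop matching the restart-parameter logic.

    run_pipeline() models one pass of method 300/400: it returns
    "success" or "fail" (fail covers no face detected, 2D recognition
    failure, and a non-live recognition target).
    """
    restart = 0
    while restart < SECOND_THRESHOLD:
        if run_pipeline() == "success":
            return "face recognition succeeded"
        restart += 1  # face missing / 2D failed / non-live: restart + 1
    return "recognition failed"  # restart parameter reached the threshold

attempts = iter(["fail", "fail", "success"])
print(recognize_with_restart(lambda: next(attempts)))
# prints "face recognition succeeded" on the third pass
```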

The following describes in detail, with reference to FIG. 8 to FIG. 14, the process in S360 and S450 of performing the iris-based face anti-spoofing judgment according to the eye image to determine whether the recognition target is a live human face, that is, the detailed face anti-spoofing process.

Optionally, as shown in FIG. 8, a face anti-spoofing judgment method 500 is used to perform, in step S220, the face anti-spoofing judgment based on the eye infrared image to determine whether the recognition target is a live human face. Specifically, the eye infrared image is preprocessed and then input into a neural network for classification, thereby obtaining the face anti-spoofing judgment result.

Optionally, as shown in FIG. 8, the face anti-spoofing judgment method 500 includes:

S510: Preprocess the eye image to obtain an optimized eye image. Preprocessing the eye image increases its contrast and improves its image quality, which is more conducive to processing and classification by the neural network.

Specifically, the eye image includes a left-eye image and a right-eye image. Optionally, the left-eye image and/or the right-eye image are preprocessed to obtain a left-eye optimized eye image and/or a right-eye optimized eye image.

Optionally, the preprocessing process includes S511: eye image equalization. Specifically, image equalization is performed on the left-eye image and/or the right-eye image.

Optionally, when the eye image is an infrared grayscale image, a histogram equalization method is used for the image equalization, which both improves the contrast of the eye infrared image and transforms it into an image whose gray values are almost uniformly distributed.

Specifically, the histogram equalization steps include:

1) Calculate the probability p(i) of pixels at each gray value of the eye infrared image according to the following formula:

p(i) = n_i / n, for i = 0, 1, …, L − 1

where n is the total number of pixels, n_i is the number of pixels with gray value i, and L is the total number of gray levels.

2) Calculate the cumulative probability function c(i) of p according to the following formula:

c(i) = p(0) + p(1) + … + p(i)

The calculated c is the cumulative normalized histogram of the image.

3) Scale c(i) to y(i) in the range 0 to 255 according to the following formula:

y(i) = 255 × c(i)

Specifically, each pixel with gray value i in the original eye infrared image takes the new gray value y(i), thereby equalizing the eye infrared image and obtaining the optimized eye infrared image.
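The three steps above map directly onto array operations. A minimal NumPy sketch, with a hypothetical function name:

```python
import numpy as np

def equalize(gray, levels=256):
    """Histogram equalization per steps 1)-3):
    p(i) = n_i / n, c(i) = cumulative sum of p, y(i) = 255 * c(i)."""
    n = gray.size
    hist = np.bincount(gray.ravel(), minlength=levels)  # n_i per gray value
    p = hist / n                            # step 1: probability of each gray value
    c = np.cumsum(p)                        # step 2: cumulative normalized histogram
    y = np.round(255 * c).astype(np.uint8)  # step 3: scale to the 0..255 range
    return y[gray]                          # each pixel with value i becomes y(i)

dark = np.array([[10, 10], [20, 30]], dtype=np.uint8)  # low-contrast patch
out = equalize(dark)
print(out)  # gray values spread toward the full 0..255 range
```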

It should be understood that the preprocessing process may also include, but is not limited to, Local Binary Pattern (LBP) feature processing, normalization, correction, image enhancement, and other processing, which is not limited in the embodiments of the present application.

Optionally, in one possible implementation, after the eye image is preprocessed, a deep learning network is used to classify the preprocessed optimized eye image to determine whether the recognition target is a live human face. In the embodiments of the present application, the deep learning network includes but is not limited to a neural network and may also be another deep learning network, which is not limited in the embodiments of the present application. The following takes a neural network as an example to describe the classification processing method in the embodiments of the present application.

Optionally, as shown in FIG. 8, the face anti-spoofing judgment method 500 further includes:

S520: Classify the optimized eye image through a neural network to determine whether the recognition target is a live human face.

First, the neural network structure is constructed; for example, a two-layer neural network or a network with more layers may be used, and the composition of each layer may also be adjusted according to the face information to be extracted, which is not limited in the embodiments of the present application.

Second, the initial training parameters and convergence conditions of the neural network are set.

Optionally, in the embodiments of the present application, the initial training parameters may be randomly generated, obtained from empirical values, or taken from the parameters of a neural network model pre-trained on a large amount of genuine and fake face data, which is not limited in the embodiments of the present application.

Then, a large number of optimized eye images of live user faces and of non-live faces are input into the neural network. The neural network processes these optimized eye images based on the initial training parameters and determines a judgment result for each optimized eye image. Further, according to the judgment results, the structure of the neural network and/or the training parameters of each layer are adjusted until the judgment results satisfy the convergence conditions.

Optionally, in this embodiment of the application, the convergence conditions may include at least one of the following:

1. The probability of judging an optimized eye image of a live face as an optimized eye image of a live face is greater than a first probability, for example, 98%;

2. The probability of judging an optimized eye image of a non-live face as an optimized eye image of a non-live face is greater than a second probability, for example, 95%;

3. The probability of judging an optimized eye image of a live face as an optimized eye image of a non-live face is less than a third probability, for example, 2%;

4. The probability of judging an optimized eye image of a non-live face as an optimized eye image of a live face is less than a fourth probability, for example, 3%.
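With the example thresholds (98%, 95%, 2%, 3%), the four conditions above are threshold checks on two rates computed over labeled training judgments. A minimal sketch, with hypothetical names:

```python
def convergence_reached(live_judged_live, live_total,
                        fake_judged_fake, fake_total):
    """Check convergence conditions 1-4 with the example thresholds.

    live_judged_live / live_total is the rate for conditions 1 and 3;
    fake_judged_fake / fake_total is the rate for conditions 2 and 4.
    """
    live_rate = live_judged_live / live_total
    fake_rate = fake_judged_fake / fake_total
    return (live_rate > 0.98            # condition 1: live judged live > 98%
            and fake_rate > 0.95        # condition 2: fake judged fake > 95%
            and (1 - live_rate) < 0.02  # condition 3: live judged fake < 2%
            and (1 - fake_rate) < 0.03) # condition 4: fake judged live < 3%

print(convergence_reached(990, 1000, 980, 1000))  # True: 99% and 98%
print(convergence_reached(970, 1000, 980, 1000))  # False: 97% fails condition 1
```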

After the training of the neural network that judges whether a target is a live face is completed, during face recognition the processed optimized eye image of the current recognition target is input into the neural network, so that the neural network can process the optimized eye image of the recognition target using the trained parameters and determine whether the recognition target is a live human face.

Optionally, in one possible implementation, the left-eye optimized eye image or the right-eye optimized eye image is classified through the neural network 50 to determine whether the recognition target is a live human face.

Optionally, when the optimized eye image is a left-eye optimized eye image or a right-eye optimized eye image, as shown in FIG. 9, the face anti-spoofing judgment method 501 includes:

S511: Use the histogram equalization method to perform image equalization on the left-eye image or the right-eye image to obtain an optimized left-eye image or an optimized right-eye image;

S521: Classify the optimized left-eye image or the optimized right-eye image through a neural network to determine whether the recognition target is a live human face.

As shown in FIG. 10, the neural network 50 includes a flattening layer 510, a fully connected layer 520, and an activation layer 530.

The flattening (flatten) layer 510 is used to convert the two-dimensional data of the left-eye optimized eye image input into the neural network into one dimension, that is, to form a one-dimensional array. For example, the left-eye optimized eye image is represented as a 20×20-pixel two-dimensional matrix in which each pixel value represents a gray value; after flattening, a 400×1 one-dimensional matrix is formed, that is, 400 pixel values are output. In other words, in the embodiment of the present application, the two-dimensional image data are flattened into one-dimensional data by the flattening layer 510, and the one-dimensional data are then input into the fully connected layer for full connection.
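The 20×20 to 400×1 flattening described above is a plain reshape; for example (the random image is only a stand-in for an optimized eye image):

```python
import numpy as np

# A 20*20 optimized eye image (gray values); random here for illustration.
rng = np.random.default_rng(0)
eye_image = rng.integers(0, 256, size=(20, 20), dtype=np.uint8)

flat = eye_image.reshape(-1)  # flattening layer: 2D matrix -> one-dimensional array
print(flat.shape)             # (400,) -- 400 pixel values, as in the text
```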

Specifically, each node in the fully connected layer 520 is connected to every node in the previous layer and is used to combine the features extracted earlier in the neural network, acting as the "classifier" of the whole network. For example, as shown in FIG. 11, x_1 to x_n are the nodes output by the previous layer; the fully connected layer 520 includes m fully connected nodes c_1 to c_m in total and outputs m feature constants, which facilitates judging and classifying the m feature constants. Specifically, each of the m fully connected nodes includes the multiple parameters obtained from the training convergence described above and is used to connect x_1 to x_n with weights, finally yielding one feature-constant result.

In the following, the fully connected layer is described taking as an example the case in the embodiment of the present application where x_1 to x_n are the one-dimensional data output by the flattening layer 510.

The one-dimensional data are x_1 to x_n. Passing through the m fully connected nodes, the one-dimensional data output fully connected data of m constants a_1 to a_m, where a_1 to a_m are calculated as follows:

a_1 = W_11·x_1 + W_12·x_2 + W_13·x_3 + … + W_1n·x_n + b_1;

a_2 = W_21·x_1 + W_22·x_2 + W_23·x_3 + … + W_2n·x_n + b_2;

…

a_m = W_m1·x_1 + W_m2·x_2 + W_m3·x_3 + … + W_mn·x_n + b_m;

where W and b are the weighting parameters and bias parameters in the nodes of the fully connected layer 520, both of which can be obtained from the process of training the neural network to convergence described above.
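The formulas for a_1 to a_m are exactly one matrix-vector product plus a bias vector. A small numeric sketch; the toy sizes and weights are illustrative, not trained values:

```python
import numpy as np

def fully_connected(x, W, b):
    """a_k = W_k1*x_1 + ... + W_kn*x_n + b_k for k = 1..m,
    i.e. a = W @ x + b with W of shape (m, n)."""
    return W @ x + b

n, m = 4, 3                       # toy sizes: n inputs, m fully connected nodes
x = np.array([1.0, 2.0, 3.0, 4.0])
W = np.ones((m, n))               # weighting parameters (would come from training)
b = np.array([0.0, 1.0, -1.0])    # bias parameters (would come from training)
print(fully_connected(x, W, b))   # [10. 11.  9.]
```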

Optionally, the fully connected layer 520 includes at least one fully connected layer. For example, in the embodiment of the present application, as shown in FIG. 12, the fully connected layer 520 includes a first fully connected layer 521 and a second fully connected layer 522. Specifically, the two fully connected layers follow the same computation principle: both perform a weighted full connection on the input one-dimensional array.

Optionally, as shown in FIG. 12, the activation layer 530 includes a first activation layer 531 and a second activation layer 532. The first activation layer 531 includes an activation function used to apply a nonlinearity to the one-dimensional array. Optionally, the activation function includes, but is not limited to, the Rectified Linear Unit (ReLU) function, the Exponential Linear Unit (ELU) function, and several variants of the ReLU function, for example Leaky ReLU (LReLU), Parametric ReLU (PReLU), and Randomized ReLU (RReLU).

Preferably, in the embodiment of the present application, the activation function used is the rectified linear unit (ReLU) function; specifically, the ReLU function is given by:

f(x) = max(0, x), that is, f(x) = x for x > 0 and f(x) = 0 for x ≤ 0

After ReLU processing, values less than or equal to 0 become 0 and values greater than 0 remain unchanged, which makes the output one-dimensional array sparse; a neural network structure made sparse through ReLU can better mine relevant features and fit the training data.
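The ReLU function and its sparsifying effect can be shown in a few lines:

```python
import numpy as np

def relu(x):
    """f(x) = x for x > 0, else 0 (rectified linear unit)."""
    return np.maximum(0, x)

a = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(a))  # non-positive entries become 0, positive entries pass through
```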

Optionally, the second activation layer 532 includes the classification function Sigmoid, which classifies and discriminates the constants output by the fully connected layer.

The Sigmoid function is given by:

S(x) = 1 / (1 + e^(−x))

In the Sigmoid function, as the input tends to positive or negative infinity, the output approaches its saturation value. Because the output range of the Sigmoid function is 0 to 1, it is commonly used for binary classification probabilities. The multiple probability values obtained from the Sigmoid function are judged to obtain the final face anti-spoofing judgment result, so as to determine whether the recognition target is a live human face.
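A minimal Sigmoid sketch showing the (0, 1) output range and the saturation at the extremes:

```python
import math

def sigmoid(x):
    """S(x) = 1 / (1 + e^(-x)); output lies in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# The (0, 1) output reads as a binary-class probability (live vs. non-live).
print(sigmoid(0.0))                               # 0.5 -- the decision midpoint
print(sigmoid(6.0) > 0.99, sigmoid(-6.0) < 0.01)  # saturates at the extremes
```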

It should be understood that, in the embodiments of the present application, the neural network 50 may further include one or more fully connected layers 520 and/or one or more activation layers 530, for example a structure of flattening layer → fully connected layer → activation layer, or of flattening layer → fully connected layer → activation layer → fully connected layer → activation layer → fully connected layer → activation layer; this is not limited in the embodiments of the present application.

It should also be understood that the activation functions used by the multiple activation layers 530 may differ, and/or the full-connection parameters in the multiple fully connected layers 520 may also differ. This is likewise not limited in the embodiments of the present application.

Preferably, in another possible implementation, a deep learning algorithm is used to compute jointly over the left-eye optimized eye image and the right-eye optimized eye image of the recognition target and classify them together, so as to determine whether the recognition target is a live human face. This method combines the iris features of the left-eye and right-eye optimized eye images and can improve the accuracy of the anti-spoofing judgment.

Specifically, as shown in FIG. 13, a face anti-spoofing judgment method 600 includes:

S611: Use the histogram equalization method to perform image equalization on the left-eye image to obtain an optimized left-eye image;

S612: Use the histogram equalization method to perform image equalization on the right-eye image to obtain an optimized right-eye image;

S620: Classify the optimized left-eye image and the optimized right-eye image through a neural network to determine whether the recognition target is a live human face.

Optionally, the optimized left-eye image and the optimized right-eye image have the same size.

Optionally, in this embodiment of the present application, the optimized left-eye image and the optimized right-eye image are classified through the neural network 60 to determine whether the recognition target is a live human face.

具体地,如图14所示,采用神经网络60对所述识别目标的左眼优化眼部图像和右眼优化眼部图像综合分类处理,具体地,所述神经网络60包括第一网络610、第二网络620以及第三网络630。所述第一网络610包括:第二扁平化层,至少一个第二全连接层以及至少一个第二激励层;所述第二网络620包括:第三扁平化层,至少一个第三全连接层以及至少一个第三激励层;所述第三网络630包括:至少一个第四全连接层以及至少一个第四激励层。Specifically, as shown in FIG. 14, the neural network 60 is used to comprehensively classify the left-eye optimized eye image and the right-eye optimized eye image of the recognition target. Specifically, the neural network 60 includes a first network 610, The second network 620 and the third network 630. The first network 610 includes: a second flattened layer, at least one second fully connected layer, and at least one second excitation layer; the second network 620 includes: a third flattened layer, at least one third fully connected layer And at least one third incentive layer; the third network 630 includes: at least one fourth fully connected layer and at least one fourth incentive layer.

优选地,在本申请实施例中,所述第一网络610包括:第二扁平化层611、第二上全连接层612,第二上激励层613,第二下全连接层614以及第二下激励层615,用于对输入的左眼优化眼部图像进行扁平化以及全连接,输出得到左眼一维特征数组,也称为左眼分类特征值。第二网络620包括:第三扁平化层621、第三上全连接层622,第三上激励层623,第三下全连接层624以及第三下激励层625,用于对输入的右眼优化眼部图像进行扁平化以及全连接,输出得到右眼一维特征数组,也称为右眼分类特征值。Preferably, in the embodiment of the present application, the first network 610 includes: a second flattened layer 611, a second upper fully connected layer 612, a second upper excitation layer 613, a second lower fully connected layer 614, and a second The lower excitation layer 615 is used to flatten and fully connect the input left-eye optimized eye image, and output a left-eye one-dimensional feature array, which is also called a left-eye classification feature value. The second network 620 includes: a third flattened layer 621, a third upper fully connected layer 622, a third upper excitation layer 623, a third lower fully connected layer 624, and a third lower excitation layer 625, which are used for the right eye of the input The optimized eye image is flattened and fully connected, and the output is a one-dimensional feature array of the right eye, also known as the right eye classification feature value.

第三网络630包括：第四全连接层631以及第四激励层632，用于对左眼一维特征数组和右眼一维特征数组进行全连接并分类处理。例如，第一网络610输出左眼一维特征数组包括10个特征常数，第二网络620输出右眼一维特征数组也包括10个特征常数，则将左眼一维特征数组和右眼一维特征数组共20个特征常数一起输入第三网络630，进行全连接并分类处理。The third network 630 includes: a fourth fully connected layer 631 and a fourth excitation layer 632, which are used to fully connect and classify the left-eye one-dimensional feature array and the right-eye one-dimensional feature array. For example, if the left-eye one-dimensional feature array output by the first network 610 includes 10 feature constants and the right-eye one-dimensional feature array output by the second network 620 also includes 10 feature constants, then the two arrays, 20 feature constants in total, are input together into the third network 630 for full connection and classification processing.
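The two-branch structure described above can be sketched in NumPy as follows. This is an illustrative sketch only: the 32x32 input size, the hidden width of 64, the random (untrained) weights, and the final two-way decision rule are assumptions rather than values from the patent; only the layer ordering (flatten, fully connected plus ReLU, fully connected plus Sigmoid per eye, then a 20-constant concatenation into a ReLU-activated fully connected layer) follows the text.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eye_branch(eye_img, w_up, b_up, w_low, b_low):
    """Flattening layer -> upper FC + ReLU -> lower FC + Sigmoid."""
    x = eye_img.reshape(-1)              # flattening layer
    h = relu(w_up @ x + b_up)            # upper fully connected + ReLU excitation
    return sigmoid(w_low @ h + b_low)    # lower fully connected + Sigmoid excitation

rng = np.random.default_rng(0)
H, W, HID, FEAT = 32, 32, 64, 10         # illustrative sizes; FEAT matches the 10-constant example

def make_branch_weights():
    """Random, untrained weights for one eye branch (an assumption)."""
    return (rng.normal(scale=0.05, size=(HID, H * W)), np.zeros(HID),
            rng.normal(scale=0.05, size=(FEAT, HID)), np.zeros(FEAT))

left = eye_branch(rng.random((H, W)), *make_branch_weights())    # left-eye classification feature values
right = eye_branch(rng.random((H, W)), *make_branch_weights())   # right-eye classification feature values

# third network: concatenate the 10 + 10 feature constants, fourth FC + ReLU excitation
w4, b4 = rng.normal(scale=0.05, size=(2, 2 * FEAT)), np.zeros(2)
logits = relu(w4 @ np.concatenate([left, right]) + b4)
score = sigmoid(logits[0] - logits[1])   # toy live-vs-spoof decision value (an assumption)
```

In a trained system the weights would come from supervised training on live and spoofed eye images; the sketch only shows the data flow.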

应理解,所述第一网络、第二网络以及第三网络中的全连接层激励函数或者分类函数可以相同或者不同,本申请实施例对此不做限定。It should be understood that the fully connected layer activation functions or classification functions in the first network, the second network, and the third network may be the same or different, which is not limited in the embodiment of the present application.

优选地，所述第二上全连接层612和第三上全连接层622中均采用ReLU激励函数，所述第二下全连接层614和第三下全连接层624中均采用Sigmoid分类函数。Preferably, the second upper fully connected layer 612 and the third upper fully connected layer 622 both use the ReLU activation function, and the second lower fully connected layer 614 and the third lower fully connected layer 624 both use the Sigmoid classification function.

可选地，所述第三网络630可以采用ReLU激励函数对输出的特征常数再次进行非线性化处理，修正分类结果，提高识别判断的准确性。Optionally, the third network 630 may use the ReLU activation function to apply a further non-linear transformation to the output feature constants, refining the classification result and improving the accuracy of the recognition decision.

在本申请实施例中,神经网络30和神经网络40的网络结构简单,运行速度快,可以运行在高级精简指令集机器(Advanced RISC Machine,ARM)上。In the embodiment of the present application, the neural network 30 and the neural network 40 have a simple network structure and a fast running speed, and can be run on an Advanced RISC Machine (ARM).

在上述申请实施例中,根据所述眼部图像进行基于虹膜的人脸防伪判别,以确定所述识别目标是否为活体人脸,其中,人脸防伪判别的结果用于人脸识别。In the above-mentioned application embodiment, the iris-based face anti-counterfeiting judgment is performed according to the eye image to determine whether the recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used for face recognition.

可选地，所述人脸防伪判别的结果还可以用于人脸注册，即生成2D人脸识别过程中人脸特征模板。具体地，在人脸注册的过程中加入人脸防伪，防止将根据人脸照片或者其它非活体人脸的模型采集到的照片作为模板进行人脸识别匹配，可以提高2D识别的准确性。Optionally, the result of the face anti-counterfeiting discrimination can also be used for face registration, that is, for generating the face feature template used in the 2D face recognition process. Specifically, face anti-counterfeiting is added to the face registration process to prevent images captured from face photos or other non-living face models from being used as templates for face recognition matching, which can improve the accuracy of 2D recognition.

具体地,如图15所示,所述人脸注册方法700包括:Specifically, as shown in FIG. 15, the face registration method 700 includes:

S710:获取识别目标的眼部图像。S710: Acquire an eye image of the recognition target.

S720:根据所述眼部图像进行基于虹膜的人脸防伪判别,以确定所述识别目标是否为活体人脸,其中,人脸防伪判别的结果用于建立人脸特征模板。S720: Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used to establish a face feature template.

应理解，本申请实施例中人脸注册方法过程和上述人脸识别方法过程为两个独立的阶段，仅是注册方法过程中建立的人脸特征模板用于人脸识别过程中2D识别的判断。在通过人脸注册方法建立人脸特征模板之后，通过上述人脸识别方法以及人脸防伪判别方法进行人脸识别。It should be understood that the face registration method process and the aforementioned face recognition method process in the embodiments of the present application are two independent stages; the face feature template established during the registration process is merely used for the 2D recognition judgment in the face recognition process. After the face feature template is established through the face registration method, face recognition is performed through the aforementioned face recognition method and face anti-counterfeiting discrimination method.

还应理解，本申请实施例中的识别目标可以与上述人脸识别过程中的识别目标相同或者不同，例如，可以均为用户活体人脸，对用户活体人脸进行注册和识别；也可以为注册过程中的识别目标为用户活体人脸，但识别过程中的识别目标为其它非活体人脸。本申请实施例对此不做限定。It should also be understood that the recognition target in the embodiments of the present application may be the same as or different from the recognition target in the aforementioned face recognition process. For example, both may be the user's live face, so that the user's live face is both registered and recognized; alternatively, the recognition target in the registration process may be the user's live face while the recognition target in the recognition process is another, non-living face. The embodiments of this application do not limit this.

可选地,所述步骤S710可以与上述步骤S210相同,通过图像采集装置获取识别目标的眼部图像。可选地,所述眼部图像为红外图像或者可见光彩色图像。Optionally, the step S710 may be the same as the step S210 described above, and the eye image of the recognition target is acquired through the image acquisition device. Optionally, the eye image is an infrared image or a visible light color image.

可选地，所述步骤S720中根据所述眼部图像进行基于虹膜的人脸防伪判别，以确定所述识别目标是否为活体人脸，可以采用上述人脸识别防伪判别方法500、人脸识别防伪判别方法501、人脸识别防伪判别方法600中的任意一种进行判别，具体描述可以参照上述申请实施例，此处不再赘述。Optionally, in step S720, the iris-based face anti-counterfeiting discrimination performed according to the eye image to determine whether the recognition target is a living face may use any one of the aforementioned face recognition anti-counterfeiting discrimination methods 500, 501, and 600. For a specific description, refer to the above-mentioned application embodiments, which will not be repeated here.

可选地，在本申请实施例中，人脸注册方法还包括：获取所述识别目标的目标图像，基于所述目标图像获取所述眼部图像，并根据所述目标图像建立人脸特征模板。Optionally, in the embodiment of the present application, the face registration method further includes: acquiring a target image of the recognition target, acquiring the eye image based on the target image, and establishing a face feature template based on the target image.

在一种可能的实施方式中，当目标图像为红外图像时，先获取识别目标的红外图像，基于所述红外图像进行模板匹配，在匹配成功的基础上进行防伪。In a possible implementation manner, when the target image is an infrared image, the infrared image of the recognition target is acquired first, template matching is performed based on the infrared image, and the anti-counterfeiting discrimination is performed once the matching succeeds.

例如,图16示出了一种人脸注册方法800,包括:For example, FIG. 16 shows a face registration method 800, which includes:

S810:获取识别目标的红外图像;S810: Obtain an infrared image of the identified target;

S850:基于所述红外图像进行模板匹配;S850: Perform template matching based on the infrared image;

S851:在模板匹配成功时,基于所述红外图像获取所述眼部图像;S851: When the template matching is successful, obtain the eye image based on the infrared image;

S852:当模板匹配失败时,不建立人脸特征模板;S852: When the template matching fails, the face feature template is not established;

S860:根据所述眼部图像进行基于虹膜的人脸防伪判别,以确定所述识别目标是否为活体人脸;S860: Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living face;

S871:在所述识别目标为活体人脸时,存储红外图像为人脸特征模板;S871: When the recognition target is a living human face, store an infrared image as a face feature template;

S872:在所述识别目标不为活体人脸时,不存储红外图像为人脸特征模板。S872: When the recognition target is not a living human face, not storing the infrared image as a facial feature template.

其中,可选地,步骤S810可以与步骤S310相同。步骤S851可以与步骤S351相同。步骤S860可以与步骤S360相同。Wherein, optionally, step S810 may be the same as step S310. Step S851 may be the same as step S351. Step S860 may be the same as step S360.

可选地，步骤S850可以与步骤S340基于目标图像进行2D识别类似，将该红外图像与人脸特征模板库中的多个人脸特征模板进行匹配，若匹配成功，则该人脸目标图像为用户的人脸图像，若匹配失败，则该人脸目标图像不为用户的人脸图像。Optionally, step S850 may be similar to the 2D recognition based on the target image in step S340: the infrared image is matched against multiple face feature templates in the face feature template library. If the matching succeeds, the face target image is the user's face image; if the matching fails, the face target image is not the user's face image.

可选地，步骤S871中，当识别目标为活体人脸时，将红外图像的数据存储于存储单元中，作为人脸特征模板库中一个新的人脸特征模板，该存储单元可以为执行人脸注册方法的处理器中的存储单元，也可以为执行人脸注册方法的电子设备中的存储器。Optionally, in step S871, when the recognition target is a living face, the data of the infrared image is stored in a storage unit as a new face feature template in the face feature template library. The storage unit may be a storage unit in the processor that executes the face registration method, or a memory in the electronic device that executes the face registration method.

可选地,如图17所示,人脸注册方法800还可以包括:Optionally, as shown in FIG. 17, the face registration method 800 may further include:

S820:人脸检测;S820: face detection;

S821:当人脸检测到所述红外图像上存在人脸时,对红外图像进行人脸剪切得到人脸图像;S821: When the human face detects that there is a human face on the infrared image, perform face cropping on the infrared image to obtain a human face image;

S822:当人脸检测到所述红外图像上不存在人脸时,重启参数加1;S822: When the human face detects that there is no human face in the infrared image, the restart parameter increases by 1;

可选地,步骤S820至步骤S822可以与步骤S320至步骤S332相同。Optionally, step S820 to step S822 may be the same as step S320 to step S332.

S830:3D人脸重建;S830: 3D face reconstruction;

具体地，可以通过发射结构光或者光脉冲，经过识别目标表面反射后，接收到携带识别目标表面信息的反射结构光或者反射光脉冲，从而获取识别目标的3D数据，该3D数据包含了识别目标的深度信息，能够表示识别目标的表面形状。所述3D数据可以表示为深度图(Depth Image)、3D点云(Point Cloud)、几何模型等多种不同形式。在本申请实施例中，可以根据该3D数据进行3D人脸重建，即得到表示识别目标的3D形态图像。Specifically, structured light or light pulses may be emitted and, after being reflected by the surface of the recognition target, the reflected structured light or reflected light pulses carrying information about the target surface are received, so as to obtain 3D data of the recognition target. The 3D data contains the depth information of the recognition target and can represent the surface shape of the recognition target. The 3D data can be expressed in various forms such as a depth image (Depth Image), a 3D point cloud (Point Cloud), or a geometric model. In the embodiment of the present application, 3D face reconstruction can be performed based on the 3D data, that is, a 3D shape image representing the recognition target can be obtained.
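As one common way a depth image relates to a 3D point cloud, the standard pinhole back-projection is sketched below. The camera intrinsics (`fx`, `fy`, `cx`, `cy`) and the toy depth map are made-up example values, and the patent itself does not prescribe this particular construction.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an N x 3 point cloud,
    dropping pixels with no depth reading (depth == 0)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole model back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# toy 2x2 depth map with one missing (zero) pixel; intrinsics are illustrative
cloud = depth_to_point_cloud(np.array([[0.5, 0.5], [0.0, 1.0]]),
                             fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

A real system would use the calibrated intrinsics of the structured-light or time-of-flight sensor instead of these placeholders.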

S831:当3D人脸重建成功时,即根据3D数据获取到识别目标的3D形态图像时,进入S840。S831: When the 3D face reconstruction is successful, that is, when the 3D shape image of the recognition target is obtained according to the 3D data, go to S840.

可选地,当3D人脸重建成功时,将该3D数据存储至存储单元中,例如,将3D点云数据作为一个3D点云数据模板存储至存储单元中,形成3D点云数据模板库。Optionally, when the 3D face reconstruction is successful, the 3D data is stored in the storage unit, for example, the 3D point cloud data is stored in the storage unit as a 3D point cloud data template to form a 3D point cloud data template library.

S832:当3D人脸重建失败时,即根据该3D数据不能获取到识别目标的3D形态图像时,重启参数加1。S832: When the 3D face reconstruction fails, that is, when the 3D shape image of the recognition target cannot be obtained according to the 3D data, the restart parameter is increased by 1.

S840：判断S821步骤中剪切得到的人脸图像是否属于人脸特征模板库。可选地，通过获取目标图像的用户身份(Identification，ID)信息，判断是否存在该用户ID的人脸特征模板库，当存在该用户ID的人脸特征模板库时，进入S842：所述人脸图像属于人脸特征模板库。当不存在该用户ID的人脸特征模板库时，进入S841：所述人脸图像不属于人脸特征模板库。S840: Determine whether the face image cropped in step S821 belongs to a face feature template library. Optionally, by acquiring the user identification (ID) information of the target image, it is determined whether a face feature template library for this user ID exists. When a face feature template library for the user ID exists, proceed to S842: the face image belongs to the face feature template library. When no face feature template library for the user ID exists, proceed to S841: the face image does not belong to the face feature template library.

S8411:当所述人脸图像不属于人脸特征模板库时,基于红外图像获取眼部图像,进入步骤S860。S8411: When the face image does not belong to the face feature template library, obtain an eye image based on the infrared image, and proceed to step S860.

可选地,还可以根据获取的目标图像的用户ID信息,建立新的用户人脸特征模板库。Optionally, a new user facial feature template library can be established according to the acquired user ID information of the target image.

S8501:当所述人脸图像属于人脸特征模板库时,基于S821步骤中剪切得到的人脸图像进行模板匹配。具体的匹配方法可以与步骤S850相同。S8501: When the face image belongs to the face feature template library, perform template matching based on the face image cut in step S821. The specific matching method may be the same as step S850.

S851:当模板匹配成功时,基于红外图像获取眼部图像,进入步骤S860。S851: When the template matching is successful, obtain an eye image based on the infrared image, and proceed to step S860.

S852:当模板匹配失败时,不建立人脸特征模板,重启参数加1。S852: When the template matching fails, the face feature template is not established, and the restart parameter is increased by 1.

S860:根据所述眼部图像进行基于虹膜的人脸防伪判别,以确定所述识别目标是否为活体人脸。S860: Perform iris-based face anti-counterfeiting judgment according to the eye image to determine whether the recognition target is a living face.

S8711:当所述识别目标为活体人脸时,进入S8712:判断是否为有效点云。S8711: When the recognition target is a living human face, go to S8712: Determine whether it is a valid point cloud.

可选地，将S830中人脸重建采集到的3D点云数据与3D点云数据模板库中多个3D点云数据模板进行匹配，判断是否为有效点云。当匹配成功时，则为无效点云，当匹配失败时，则为有效点云。具体地，点云匹配用于判断采集的3D点云数据中识别目标的人脸角度是否与3D点云数据模板中的人脸角度相同，当角度相同时，匹配成功，则说明模板库中存在相同人脸角度的3D点云数据，则为无效点云；当角度不同时，匹配失败，则说明模板库中不存在相同人脸角度的3D点云数据，则为有效点云。Optionally, the 3D point cloud data collected during the face reconstruction in S830 is matched against the multiple 3D point cloud data templates in the 3D point cloud data template library to determine whether it is a valid point cloud: when the matching succeeds, it is an invalid point cloud; when the matching fails, it is a valid point cloud. Specifically, point cloud matching is used to determine whether the face angle of the recognition target in the collected 3D point cloud data is the same as a face angle in the 3D point cloud data templates. When the angles are the same, the matching succeeds, indicating that 3D point cloud data with the same face angle already exists in the template library, so the point cloud is invalid; when the angles are different, the matching fails, indicating that no 3D point cloud data with the same face angle exists in the template library, so the point cloud is valid.
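The angle-based validity rule described above can be sketched as follows. Representing the face pose as a single scalar angle and using a 5-degree tolerance are simplifying assumptions; a real pose comparison would at least involve yaw, pitch, and roll.

```python
def is_valid_point_cloud(face_angle, template_angles, tol=5.0):
    """A newly collected point cloud is 'valid' only if no stored
    template already covers roughly the same face angle (degrees)."""
    return all(abs(face_angle - a) > tol for a in template_angles)
```

Under this rule, each stored template contributes a distinct viewing angle, which is what makes the later stitching into a full-angle 3D face possible.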

可选地，还可以在此过程中，采集多张识别目标的3D点云数据，进行点云拼接和点云融合，以形成人脸全方位全角度的3D数据和3D图像，根据该3D图像可以进行3D人脸识别。Optionally, during this process, multiple frames of 3D point cloud data of the recognition target may also be collected and combined through point cloud stitching and point cloud fusion to form all-round, full-angle 3D data and a 3D image of the face, based on which 3D face recognition can be performed.

S8713:当判断3D点云数据为有效点云时,存储人脸图像为人脸特征模板。具体地,将人脸图像的数据存储于存储单元中,作为人脸特征模板库中一个新的人脸特征模板。S8713: When judging that the 3D point cloud data is a valid point cloud, store the face image as a face feature template. Specifically, the data of the face image is stored in the storage unit as a new face feature template in the face feature template library.

S8714:当判断3D点云数据为无效点云时,重启参数加1。S8714: When it is judged that the 3D point cloud data is invalid, the restart parameter is increased by 1.

可选地,在判断所述3D点云数据为有效点云后,还可以判断人脸特征模板库中的人脸特征模板是否已满。Optionally, after determining that the 3D point cloud data is a valid point cloud, it can also be determined whether the face feature templates in the face feature template library are full.

具体地,判断所述人脸特征模板库中的人脸特征模板数量是否等于预设值,若等于预设值,则人脸特征模板已满,则不再新增存储人脸特征模板。Specifically, it is determined whether the number of face feature templates in the face feature template library is equal to a preset value. If it is equal to the preset value, the face feature template is full, and no new face feature template is stored.

例如，所述预设值为8，则当人脸特征模板库中的人脸特征模板数量为8时，则不再新增人脸特征模板。For example, if the preset value is 8, then when the number of face feature templates in the face feature template library reaches 8, no new face feature templates are added.

当人脸特征模板未满时,存储人脸图像为人脸特征模板。具体地,将人脸图像的数据存储于存储单元中,作为人脸特征模板库中一个新的人脸特征模板。When the face feature template is not full, the face image is stored as the face feature template. Specifically, the data of the face image is stored in the storage unit as a new face feature template in the face feature template library.
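The capacity check described above can be sketched as follows, using the preset value of 8 from the example; the list-based library is of course a placeholder for the storage unit.

```python
MAX_TEMPLATES = 8   # the preset value from the example above

def add_template(library, face_image):
    """Store a new face feature template only while the library is not full."""
    if len(library) >= MAX_TEMPLATES:
        return False           # template library is full, nothing is added
    library.append(face_image)
    return True
```

Capping the library bounds both storage and the per-recognition matching cost, since every stored template is a matching candidate.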

可选地,所述人脸注册方法800还包括:Optionally, the face registration method 800 further includes:

判断重启参数是否小于第二阈值。若重启参数小于第二阈值,则进入S810;若重启参数大于等于第二阈值,则识别失败。Determine whether the restart parameter is less than the second threshold. If the restart parameter is less than the second threshold, enter S810; if the restart parameter is greater than or equal to the second threshold, the recognition fails.
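The restart logic can be sketched as a retry loop. The threshold value of 3 and the `attempt` callable are illustrative assumptions, and for simplicity each failed pass counts as one restart here, whereas in the method several individual stages can each add 1 to the restart parameter.

```python
def run_with_restarts(attempt, second_threshold=3):
    """Re-run the pipeline while the restart parameter stays below the
    second threshold; report failure once the threshold is reached."""
    restarts = 0
    while restarts < second_threshold:
        if attempt():            # one full pass of the registration pipeline
            return True
        restarts += 1            # a failed stage increased the restart parameter
    return False                 # restart parameter >= second threshold: give up
```

The same pattern covers the recognition flow, which uses an analogous restart parameter against its own threshold.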

上文结合图2至图17，详细描述了本申请的人脸识别方法实施例，下文结合图18，详细描述本申请的人脸识别装置实施例，应理解，装置实施例与方法实施例相互对应，类似的描述可以参照方法实施例。The face recognition method embodiments of the present application are described in detail above with reference to FIG. 2 to FIG. 17, and the face recognition device embodiment of the present application is described in detail below with reference to FIG. 18. It should be understood that the device embodiments correspond to the method embodiments, and for similar descriptions, reference may be made to the method embodiments.

图18是根据本申请实施例的人脸识别装置20的示意性框图,包括:处理器210;FIG. 18 is a schematic block diagram of a face recognition device 20 according to an embodiment of the present application, including: a processor 210;

所述处理器210用于:获取第一识别目标的第一眼部图像;The processor 210 is configured to: obtain a first eye image of a first recognition target;

根据所述第一眼部图像进行基于虹膜的人脸防伪判别,以确定所述第一识别目标是否为活体人脸,其中,人脸防伪判别的结果用于人脸识别。According to the first eye image, the iris-based face anti-counterfeiting judgment is performed to determine whether the first recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used for face recognition.

可选地，所述处理器210可以为所述人脸识别装置20的处理器，也可以为包括人脸识别装置20的电子设备的处理器，本申请实施例不做限定。Optionally, the processor 210 may be a processor of the face recognition device 20, or a processor of an electronic device that includes the face recognition device 20, which is not limited in the embodiments of the present application.

可选地,所述第一眼部图像为第一眼部红外图像。Optionally, the first eye image is a first eye infrared image.

可选地,所述人脸识别装置20还包括:图像采集装置220,用于:获取所述第一识别目标的第一目标图像;Optionally, the face recognition device 20 further includes: an image acquisition device 220, configured to obtain a first target image of the first recognition target;

所述处理器210还用于:基于所述第一目标图像进行二维识别;在二维识别成功时,基于所述第一目标图像获取所述第一眼部图像;The processor 210 is further configured to: perform two-dimensional recognition based on the first target image; when the two-dimensional recognition is successful, obtain the first eye image based on the first target image;

所述处理器210还用于:在所述第一识别目标为活体人脸时,确定人脸识别成功;或者,在所述第一识别目标为非活体人脸时,确定人脸识别失败。The processor 210 is further configured to: when the first recognition target is a living face, determine that the face recognition is successful; or, when the first recognition target is a non-living face, determine that the face recognition fails.

可选地,所述处理器210具体用于:获取所述第一识别目标的第一目标图像,基于所述第一目标图像获取所述第一眼部图像;Optionally, the processor 210 is specifically configured to: acquire a first target image of the first recognition target, and acquire the first eye image based on the first target image;

所述处理器210还用于:在所述第一识别目标为活体人脸时,基于所述第一目标图像进行二维识别;The processor 210 is further configured to: when the first recognition target is a living human face, perform two-dimensional recognition based on the first target image;

在二维识别成功时,确定人脸识别成功,或者,在二维识别失败时,确定人脸识别失败;When the two-dimensional recognition is successful, it is determined that the face recognition is successful, or when the two-dimensional recognition fails, it is determined that the face recognition has failed;

或者,在所述第一识别目标为非活体人脸时,确定人脸识别失败。Or, when the first recognition target is a non-living face, it is determined that the face recognition fails.

可选地,所述处理器210具体用于:基于所述第一目标图像获取第一人脸图像;Optionally, the processor 210 is specifically configured to: acquire a first face image based on the first target image;

将所述第一人脸图像与多个特征模板进行匹配,当匹配成功时,二维识别成功,或者,当匹配失败时,二维识别失败。The first face image is matched with multiple feature templates, and when the matching is successful, the two-dimensional recognition is successful, or when the matching fails, the two-dimensional recognition fails.

可选地,所述处理器210具体用于:基于所述第一目标图像获取人脸区域图像;基于所述人脸区域图像获取所述第一眼部图像。Optionally, the processor 210 is specifically configured to: acquire a face region image based on the first target image; acquire the first eye image based on the face region image.

可选地,所述第一眼部图像为包括虹膜的人眼区域图像或虹膜区域图像。Optionally, the first eye image is a human eye area image or an iris area image including the iris.

可选地,所述处理器210具体用于:采用直方图均衡化方法对所述第一眼部图像进行处理得到第一优化眼部图像;Optionally, the processor 210 is specifically configured to: use a histogram equalization method to process the first eye image to obtain a first optimized eye image;

根据所述第一优化眼部图像进行基于虹膜的人脸防伪判别。According to the first optimized eye image, the iris-based face anti-counterfeiting judgment is performed.
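As background for the histogram equalization step named above, a minimal 8-bit implementation is sketched below. The toy 2x2 image is illustrative, and the patent does not mandate this exact normalization.

```python
import numpy as np

def hist_equalize(img):
    """Classic histogram equalization for an 8-bit grayscale eye image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first occupied bin of the CDF
    # map each gray level through the normalized cumulative histogram
    lut = np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

# toy "eye image": a narrow band of dark values gets stretched to the full range
equalized = hist_equalize(np.array([[50, 50], [100, 200]], dtype=np.uint8))
```

Spreading the gray levels this way makes iris texture more pronounced before the image is handed to the classification network, which is the stated purpose of the optimization step.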

可选地,所述处理器210具体用于:通过神经网络对所述第一优化眼部图像进行分类处理,以确定所述第一识别目标是否为活体人脸。Optionally, the processor 210 is specifically configured to: perform classification processing on the first optimized eye image through a neural network to determine whether the first recognition target is a living human face.

可选地,所述第一眼部图像包括第一左眼眼部图像和/或第一右眼眼部图像,所述处理器210具体用于:Optionally, the first eye image includes a first left eye image and/or a first right eye image, and the processor 210 is specifically configured to:

采用所述直方图均衡化方法对所述第一左眼眼部图像进行处理得到第一优化左眼眼部图像;和/或Processing the first left-eye eye image by using the histogram equalization method to obtain a first optimized left-eye eye image; and/or

采用所述直方图均衡化方法对所述第一右眼眼部图像进行处理得到第一优化右眼眼部图像。Using the histogram equalization method to process the first right eye image to obtain a first optimized right eye image.

可选地,所述第一眼部图像包括:所述第一左眼眼部图像或所述第一右眼眼部图像;Optionally, the first eye image includes: the first left eye image or the first right eye image;

所述神经网络包括:第一扁平化层,至少一个第一全连接层以及至少一个第一激励层。The neural network includes: a first flattened layer, at least one first fully connected layer, and at least one first excitation layer.

所述处理器210具体用于:通过所述第一扁平化层,对所述第一优化左眼眼部图像或所述第一优化右眼眼部图像进行处理得到多个眼部像素值;The processor 210 is specifically configured to: use the first flattening layer to process the first optimized left-eye eye image or the first optimized right-eye image to obtain multiple eye pixel values;

通过所述至少一个第一全连接层,对所述多个眼部像素值进行全连接得到多个特征常数;Using the at least one first fully connected layer to fully connect the multiple eye pixel values to obtain multiple characteristic constants;

通过所述至少一个第一激励层,对所述多个特征常数进行非线性化处理或者分类处理。Through the at least one first excitation layer, nonlinearization processing or classification processing is performed on the plurality of characteristic constants.

可选地,所述神经网络包括:所述第一扁平化层,两个所述第一全连接层以及两个所述第一激励层。Optionally, the neural network includes: the first flattened layer, two first fully connected layers, and two first excitation layers.

可选地,两个所述第一激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数。Optionally, the excitation functions in the two first excitation layers are respectively a modified linear unit ReLU function and a Sigmoid function.

可选地,所述第一眼部图像包括:所述第一左眼眼部图像和所述第一右眼眼部图像;Optionally, the first eye image includes: the first left eye image and the first right eye image;

所述神经网络包括第一网络、第二网络和第三网络;The neural network includes a first network, a second network, and a third network;

所述第一网络包括:第二扁平化层,至少一个第二全连接层以及至少一个第二激励层;The first network includes: a second flattening layer, at least one second fully connected layer, and at least one second excitation layer;

所述第二网络包括:第三扁平化层,至少一个第三全连接层以及至少一个第三激励层;The second network includes: a third flattening layer, at least one third fully connected layer, and at least one third excitation layer;

所述第三网络包括:至少一个第四全连接层以及至少一个第四激励层。The third network includes: at least one fourth fully connected layer and at least one fourth excitation layer.

所述处理器210具体用于:通过所述第一网络对所述第一优化左眼眼部 图像进行处理得到左眼分类特征值;The processor 210 is specifically configured to: process the first optimized left-eye eye image through the first network to obtain a left-eye classification feature value;

通过所述第二网络对所述第一优化右眼眼部图像进行处理得到右眼分类特征值;Processing the first optimized right eye image through the second network to obtain right eye classification feature values;

通过所述第三网络对所述左眼分类特征值和所述右眼分类特征值进行全连接。Fully connect the left-eye classification feature value and the right-eye classification feature value through the third network.

可选地,所述第一网络包括:所述第二扁平化层,两个所述第二全连接层和两个所述第二激励层;Optionally, the first network includes: the second flattened layer, two second fully connected layers, and two second excitation layers;

所述第二网络包括:所述第三扁平化层,两个所述第三全连接层和两个所述第三激励层;The second network includes: the third flattening layer, two third fully connected layers, and two third excitation layers;

所述第三网络包括:一个所述第四全连接层和一个所述第四激励层。The third network includes: a fourth fully connected layer and a fourth excitation layer.

可选地,两个所述第二激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数;和/或,Optionally, the excitation functions in the two second excitation layers are respectively a modified linear unit ReLU function and a Sigmoid function; and/or,

两个所述第三激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数;和/或,The excitation functions in the two third excitation layers are the modified linear unit ReLU function and the Sigmoid function respectively; and/or,

一个所述第四激励层中的激励函数为修正线性单元ReLU函数。One of the excitation functions in the fourth excitation layer is a modified linear unit ReLU function.

可选地,所述处理器210还用于:获取第二识别目标的第二眼部图像;Optionally, the processor 210 is further configured to: obtain a second eye image of the second recognition target;

根据所述第二眼部图像进行基于虹膜的人脸防伪判别,以确定所述第二识别目标是否为活体人脸,其中,人脸防伪判别的结果用于建立人脸特征模板。According to the second eye image, the iris-based face anti-counterfeiting judgment is performed to determine whether the second recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used to establish a face feature template.

可选地,所述第二眼部图像为第二眼部红外图像。Optionally, the second eye image is a second eye infrared image.

可选地，所述处理器210还用于：获取所述第二识别目标的第二目标图像，基于所述第二目标图像获取所述第二眼部图像，并基于所述第二目标图像建立所述人脸特征模板。Optionally, the processor 210 is further configured to: acquire a second target image of the second recognition target, acquire the second eye image based on the second target image, and establish the face feature template based on the second target image.

可选地,所述处理器210还用于:基于所述第二目标图像进行人脸检测;Optionally, the processor 210 is further configured to: perform face detection based on the second target image;

其中,所述基于所述第二目标图像建立人脸特征模板包括:Wherein, the establishment of a facial feature template based on the second target image includes:

在人脸检测成功时,基于所述第二目标图像获取第二人脸图像,并根据所述第二人脸图像建立所述人脸特征模板。When the face detection is successful, a second face image is acquired based on the second target image, and the face feature template is established based on the second face image.

可选地,所述处理器210具体用于:判断所述第二人脸图像是否属于人脸特征模板库;Optionally, the processor 210 is specifically configured to: determine whether the second face image belongs to a face feature template library;

当所述第二人脸图像属于所述人脸特征模板库时,将所述第二人脸图像与所述人脸特征模板库中的多个人脸特征模板进行匹配。When the second face image belongs to the face feature template library, the second face image is matched with multiple face feature templates in the face feature template library.

当所述第二人脸图像不属于所述人脸特征模板库时，根据所述第二眼部图像进行基于虹膜的人脸防伪判别，当确定所述第二识别目标为活体人脸时，将所述第二人脸图像建立为人脸特征模板。When the second face image does not belong to the face feature template library, the iris-based face anti-counterfeiting discrimination is performed according to the second eye image, and when it is determined that the second recognition target is a living face, the second face image is established as a face feature template.

可选地,所述处理器210具体用于:当匹配成功时,根据所述第二眼部图像进行基于虹膜的人脸防伪判别;Optionally, the processor 210 is specifically configured to: when the matching is successful, perform an iris-based face anti-counterfeiting judgment according to the second eye image;

当确定所述第二识别目标为活体人脸时,将所述第二人脸图像建立为人脸特征模板。When it is determined that the second recognition target is a living face, the second face image is established as a face feature template.

可选地,所述处理器210具体用于:当匹配成功时,获取所述第二识别目标的3D点云数据;Optionally, the processor 210 is specifically configured to obtain 3D point cloud data of the second recognition target when the matching is successful;

当所述3D点云数据为有效点云时,根据所述第二眼部图像进行基于虹膜的人脸防伪判别。When the 3D point cloud data is a valid point cloud, iris-based face anti-counterfeiting judgment is performed according to the second eye image.

可选地,所述处理器210具体用于:基于所述第二目标图像获取人脸区域图像;Optionally, the processor 210 is specifically configured to: acquire a face region image based on the second target image;

基于所述人脸区域图像获取所述第二眼部图像。Acquiring the second eye image based on the face region image.

可选地,所述第二眼部图像为包括虹膜的人眼区域图像或虹膜区域图像。Optionally, the second eye image is a human eye area image or an iris area image including the iris.

可选地,所述处理器210具体用于:采用直方图均衡化方法对所述第二眼部图像进行处理得到第二优化眼部图像;Optionally, the processor 210 is specifically configured to: use a histogram equalization method to process the second eye image to obtain a second optimized eye image;

根据所述第二优化眼部图像进行基于虹膜的人脸防伪判别。According to the second optimized eye image, the iris-based face anti-counterfeiting judgment is performed.

可选地,所述处理器210具体用于:通过神经网络对所述第二优化眼部图像进行分类处理,以确定所述第二识别目标是否为活体人脸。Optionally, the processor 210 is specifically configured to: perform classification processing on the second optimized eye image through a neural network to determine whether the second recognition target is a living human face.

可选地，所述第二眼部图像包括第二左眼眼部图像和/或第二右眼眼部图像，所述处理器210具体用于：通过神经网络对所述第二左眼眼部图像和/或所述第二右眼眼部图像进行分类处理。Optionally, the second eye image includes a second left-eye eye image and/or a second right-eye eye image, and the processor 210 is specifically configured to: perform classification processing on the second left-eye eye image and/or the second right-eye eye image through a neural network.

可选地,所述神经网络包括:至少一个扁平化层,至少一个全连接层和至少一个激励层。Optionally, the neural network includes: at least one flattened layer, at least one fully connected layer, and at least one excitation layer.

如图19所示,本申请实施例还提供了一种电子设备2,该电子设备2可以包括上述申请实施例的人脸识别装置20。As shown in FIG. 19, an embodiment of the present application further provides an electronic device 2, which may include the face recognition apparatus 20 of the foregoing application embodiment.

例如,电子设备2为智能门锁、手机、电脑、门禁系统等等需要应用人脸识别的设备。所述人脸识别装置20包括电子设备2中用于人脸识别的软件以及硬件装置。For example, the electronic device 2 is a smart door lock, a mobile phone, a computer, an access control system and other devices that need to apply face recognition. The face recognition device 20 includes software and hardware devices for face recognition in the electronic device 2.

应理解，本申请实施例的处理器可以是一种集成电路芯片，具有信号的处理能力。在实现过程中，上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(Digital Signal Processor，DSP)、专用集成电路(Application Specific Integrated Circuit，ASIC)、现成可编程门阵列(Field Programmable Gate Array，FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器，闪存、只读存储器，可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器，处理器读取存储器中的信息，结合其硬件完成上述方法的步骤。It should be understood that the processor of the embodiments of the present application may be an integrated circuit chip with signal processing capability. During implementation, the steps of the foregoing method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The aforementioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.

可以理解，本申请实施例的人脸识别装置还可以包括存储器，存储器可以是易失性存储器或非易失性存储器，或可包括易失性和非易失性存储器两者。其中，非易失性存储器可以是只读存储器(Read-Only Memory，ROM)、可编程只读存储器(Programmable ROM，PROM)、可擦除可编程只读存储器(Erasable PROM，EPROM)、电可擦除可编程只读存储器(Electrically EPROM，EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory，RAM)，其用作外部高速缓存。通过示例性但不是限制性说明，许多形式的RAM可用，例如静态随机存取存储器(Static RAM，SRAM)、动态随机存取存储器(Dynamic RAM，DRAM)、同步动态随机存取存储器(Synchronous DRAM，SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM，DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM，ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM，SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM，DR RAM)。应注意，本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。It can be understood that the face recognition apparatus in the embodiments of the present application may further include a memory, which may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example rather than limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct Rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memories.

本申请实施例还提出了一种计算机可读存储介质，该计算机可读存储介质存储一个或多个程序，该一个或多个程序包括指令，该指令当被包括多个应用程序的便携式电子设备执行时，能够使该便携式电子设备执行图1-17所示实施例的方法。An embodiment of the present application further provides a computer-readable storage medium storing one or more programs. The one or more programs include instructions that, when executed by a portable electronic device including a plurality of application programs, cause the portable electronic device to execute the methods of the embodiments shown in Figs. 1-17.

本申请实施例还提出了一种计算机程序，该计算机程序包括指令，当该计算机程序被计算机执行时，使得计算机可以执行图1-17所示实施例的方法。An embodiment of the present application further provides a computer program including instructions. When the computer program is executed by a computer, the computer is enabled to execute the methods of the embodiments shown in Figs. 1-17.

本申请实施例还提供了一种芯片，该芯片包括输入输出接口、至少一个处理器、至少一个存储器和总线，该至少一个存储器用于存储指令，该至少一个处理器用于调用该至少一个存储器中的指令，以执行图1-17所示实施例的方法。An embodiment of the present application further provides a chip, including an input/output interface, at least one processor, at least one memory, and a bus. The at least one memory is configured to store instructions, and the at least one processor is configured to invoke the instructions in the at least one memory to execute the methods of the embodiments shown in Figs. 1-17.

本领域普通技术人员可以意识到，结合本文中所公开的实施例描述的各示例的单元及算法步骤，能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行，取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能，但是这种实现不应认为超出本申请的范围。A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.

所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and conciseness of description, the specific working process of the above-described system, device, and unit can refer to the corresponding process in the foregoing method embodiment, which will not be repeated here.

在本申请所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely illustrative. The division into units is merely a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.

所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, the functional units in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.

所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分或者所述技术方案的部分可以以软件产品的形式体现出来，所述计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器(Read-Only Memory，ROM)、随机存取存储器(Random Access Memory，RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以权利要求的保护范围为准。The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (31)

1. 一种人脸识别的方法，其特征在于，包括：A method for face recognition, characterized in that it comprises:
获取第一识别目标的第一目标图像及第一眼部图像；acquiring a first target image and a first eye image of a first recognition target;
根据所述第一眼部图像进行基于虹膜的人脸防伪判断，以确定所述第一识别目标是否为活体人脸并输出活体判断结果；performing iris-based face anti-counterfeiting judgment according to the first eye image, to determine whether the first recognition target is a living human face and output a living-body judgment result;
根据所述第一目标图像进行特征模板匹配，并输出匹配结果；performing feature template matching according to the first target image, and outputting a matching result;
根据所述活体判断结果和所述匹配结果输出人脸识别结果。outputting a face recognition result according to the living-body judgment result and the matching result.

2. 根据权利要求1所述的方法，其特征在于，所述根据所述活体判断结果和所述匹配结果输出人脸识别结果，包括：The method according to claim 1, wherein the outputting a face recognition result according to the living-body judgment result and the matching result comprises:
在所述匹配结果为成功时，根据所述活体判断结果输出人脸识别结果；或者，when the matching result is success, outputting the face recognition result according to the living-body judgment result; or,
在所述活体判断结果为活体时，根据所述匹配结果输出人脸识别结果；或者，when the living-body judgment result is a living body, outputting the face recognition result according to the matching result; or,
在所述匹配结果为失败或所述活体判断结果为非活体时，输出人脸识别结果。when the matching result is failure or the living-body judgment result is non-living, outputting the face recognition result.
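Claim 1 above fuses an iris-based liveness verdict with a feature-template match verdict to produce the recognition result, and claim 2 enumerates the combinations. As an illustrative sketch only (the function and parameter names here are invented for illustration, not taken from the patent), the decision logic reduces to:

```python
def recognize(target_image, eye_image, check_liveness, match_templates) -> str:
    """Sketch of the claim-1 pipeline: run the iris-based liveness check
    on the eye image and feature-template matching on the target image,
    then fuse the two verdicts (claim 2: either branch failing rejects)."""
    is_live = check_liveness(eye_image)       # anti-counterfeiting branch
    matched = match_templates(target_image)   # template-matching branch
    return "success" if (is_live and matched) else "failure"
```

Here `check_liveness` and `match_templates` stand in for the neural-network classifier and the template matcher that the later claims describe.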
3. 根据权利要求1或2所述的方法，其特征在于，所述根据所述第一目标图像进行特征模板匹配，并输出匹配结果，包括：The method according to claim 1 or 2, wherein the performing feature template matching according to the first target image and outputting a matching result comprises:
基于所述第一目标图像进行人脸检测；performing face detection based on the first target image;
当人脸检测成功时，基于所述第一目标图像获取第一人脸图像；when the face detection succeeds, acquiring a first face image based on the first target image;
将所述第一人脸图像与预存的多个第一特征模板进行匹配；matching the first face image against a plurality of pre-stored first feature templates;
当所述第一人脸图像与所述多个第一特征模板中任意一个第一特征模板匹配成功时，输出匹配结果为成功；或者，when the first face image is successfully matched with any one of the plurality of first feature templates, outputting the matching result as success; or,
当所述第一人脸图像与所述多个第一特征模板匹配失败时，输出匹配结果为失败；when the first face image fails to match the plurality of first feature templates, outputting the matching result as failure;
或者，当人脸检测失败时，输出匹配结果为失败。or, when the face detection fails, outputting the matching result as failure.

4. 根据权利要求1-3中任一项所述的方法，其特征在于，所述获取第一识别目标的第一目标图像及第一眼部图像，包括：The method according to any one of claims 1 to 3, wherein the acquiring a first target image and a first eye image of the first recognition target comprises:
获取所述第一识别目标的第一目标图像，基于所述第一目标图像获取所述第一眼部图像。acquiring the first target image of the first recognition target, and acquiring the first eye image based on the first target image.

5. 根据权利要求1-4中任一项所述的方法，其特征在于，所述第一眼部图像为二维红外图像。The method according to any one of claims 1 to 4, wherein the first eye image is a two-dimensional infrared image.

6. 根据权利要求1-5中任一项所述的方法，其特征在于，所述第一眼部图像为包括虹膜的人眼区域图像或虹膜区域图像。The method according to any one of claims 1 to 5, wherein the first eye image is a human eye region image including an iris, or an iris region image.
7. 根据权利要求6所述的方法，其特征在于，所述根据所述第一眼部图像进行基于虹膜的人脸防伪判断，包括：The method according to claim 6, wherein the performing iris-based face anti-counterfeiting judgment according to the first eye image comprises:
采用直方图均衡化方法对所述第一眼部图像进行处理得到第一优化眼部图像；processing the first eye image by using a histogram equalization method to obtain a first optimized eye image;
根据所述第一优化眼部图像进行基于虹膜的人脸防伪判断。performing the iris-based face anti-counterfeiting judgment according to the first optimized eye image.

8. 根据权利要求7所述的方法，其特征在于，所述根据所述第一优化眼部图像进行基于虹膜的人脸防伪判断，包括：The method according to claim 7, wherein the performing iris-based face anti-counterfeiting judgment according to the first optimized eye image comprises:
通过神经网络对所述第一优化眼部图像进行分类处理，以确定所述第一识别目标是否为活体人脸。performing classification processing on the first optimized eye image through a neural network, to determine whether the first recognition target is a living human face.

9. 根据权利要求7或8所述的方法，其特征在于，所述第一眼部图像包括第一左眼眼部图像和/或第一右眼眼部图像，所述采用直方图均衡化方法对所述第一眼部图像进行处理得到第一优化眼部图像包括：The method according to claim 7 or 8, wherein the first eye image comprises a first left-eye image and/or a first right-eye image, and the processing the first eye image by using the histogram equalization method to obtain the first optimized eye image comprises:
采用所述直方图均衡化方法对所述第一左眼眼部图像进行处理得到第一优化左眼眼部图像；和/或 processing the first left-eye image by using the histogram equalization method to obtain a first optimized left-eye image; and/or
采用所述直方图均衡化方法对所述第一右眼眼部图像进行处理得到第一优化右眼眼部图像。processing the first right-eye image by using the histogram equalization method to obtain a first optimized right-eye image.
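Claims 7-9 above preprocess each eye crop with histogram equalization before the liveness classifier. The patent does not give an implementation; a standard 8-bit histogram-equalization routine in NumPy, shown here only as a sketch of the technique, could look like this:

```python
import numpy as np

def equalize_histogram(eye: np.ndarray) -> np.ndarray:
    """Spread the grey-level histogram of an 8-bit eye crop over [0, 255],
    making low-contrast iris texture easier for a classifier to use."""
    hist = np.bincount(eye.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()              # first occupied grey level
    denom = cdf[-1] - cdf_min
    if denom == 0:                            # flat image: nothing to equalise
        return eye.copy()
    # Map each grey level through the normalised cumulative distribution.
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    return lut[eye]
```

Per claim 9, the same routine would be applied independently to the left-eye and right-eye crops.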
10. 根据权利要求9所述的方法，其特征在于，所述第一眼部图像包括：所述第一左眼眼部图像或所述第一右眼眼部图像；The method according to claim 9, wherein the first eye image comprises the first left-eye image or the first right-eye image; and
所述神经网络包括：第一扁平化层，至少一个第一全连接层以及至少一个第一激励层。the neural network comprises: a first flattening layer, at least one first fully connected layer, and at least one first excitation layer.

11. 根据权利要求10所述的方法，其特征在于，所述通过神经网络对所述第一优化眼部图像进行分类处理，包括：The method according to claim 10, wherein the performing classification processing on the first optimized eye image through the neural network comprises:
通过所述第一扁平化层，对所述第一优化左眼眼部图像或所述第一优化右眼眼部图像进行处理得到多个眼部像素值；processing the first optimized left-eye image or the first optimized right-eye image through the first flattening layer to obtain a plurality of eye pixel values;
通过所述至少一个第一全连接层，对所述多个眼部像素值进行全连接得到多个特征常数；fully connecting the plurality of eye pixel values through the at least one first fully connected layer to obtain a plurality of feature constants;
通过所述至少一个第一激励层，对所述多个特征常数进行非线性化处理或者分类处理。performing nonlinearization processing or classification processing on the plurality of feature constants through the at least one first excitation layer.

12. 根据权利要求10或11所述的方法，其特征在于，所述神经网络包括：所述第一扁平化层，两个所述第一全连接层以及两个所述第一激励层。The method according to claim 10 or 11, wherein the neural network comprises the first flattening layer, two first fully connected layers, and two first excitation layers.

13. 根据权利要求12所述的方法，其特征在于，两个所述第一激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数。The method according to claim 12, wherein the excitation functions in the two first excitation layers are respectively a rectified linear unit (ReLU) function and a Sigmoid function.
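Claims 10-13 above specify a per-eye network: a flattening layer followed by two fully connected layers, with ReLU after the first and Sigmoid after the second. As a non-authoritative sketch of that layer order (the hidden width and the random placeholder weights are assumptions, not from the patent; a real model would be trained), the forward pass could be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SingleEyeLivenessNet:
    """Flatten -> fully connected -> ReLU -> fully connected -> Sigmoid,
    mirroring the layer order of claims 10-13."""
    def __init__(self, h: int, w: int, hidden: int = 32):
        self.w1 = rng.normal(0.0, 0.01, (h * w, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.01, (hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, eye: np.ndarray) -> float:
        x = eye.astype(np.float64).ravel() / 255.0  # flattening layer
        hval = relu(x @ self.w1 + self.b1)          # FC + ReLU
        p = sigmoid(hval @ self.w2 + self.b2)       # FC + Sigmoid
        return float(p[0])                          # live-face score in (0, 1)
```

The final Sigmoid keeps the output in (0, 1), so it can be thresholded as a live/spoof decision.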
14. 根据权利要求9所述的方法，其特征在于，所述第一眼部图像包括：所述第一左眼眼部图像和所述第一右眼眼部图像；The method according to claim 9, wherein the first eye image comprises the first left-eye image and the first right-eye image;
所述神经网络包括第一网络、第二网络和第三网络；the neural network comprises a first network, a second network, and a third network;
所述第一网络包括：第二扁平化层，至少一个第二全连接层以及至少一个第二激励层；the first network comprises a second flattening layer, at least one second fully connected layer, and at least one second excitation layer;
所述第二网络包括：第三扁平化层，至少一个第三全连接层以及至少一个第三激励层；the second network comprises a third flattening layer, at least one third fully connected layer, and at least one third excitation layer;
所述第三网络包括：至少一个第四全连接层以及至少一个第四激励层。the third network comprises at least one fourth fully connected layer and at least one fourth excitation layer.

15. 根据权利要求14所述的方法，其特征在于，所述通过神经网络对所述第一优化眼部图像进行分类处理，包括：The method according to claim 14, wherein the performing classification processing on the first optimized eye image through the neural network comprises:
通过所述第一网络对所述第一优化左眼眼部图像进行处理得到左眼分类特征值；processing the first optimized left-eye image through the first network to obtain a left-eye classification feature value;
通过所述第二网络对所述第一优化右眼眼部图像进行处理得到右眼分类特征值；processing the first optimized right-eye image through the second network to obtain a right-eye classification feature value;
通过所述第三网络对所述左眼分类特征值和所述右眼分类特征值进行全连接。fully connecting the left-eye classification feature value and the right-eye classification feature value through the third network.

16. 根据权利要求14或15所述的方法，其特征在于，所述第一网络包括：所述第二扁平化层，两个所述第二全连接层和两个所述第二激励层；The method according to claim 14 or 15, wherein the first network comprises the second flattening layer, two second fully connected layers, and two second excitation layers;
所述第二网络包括：所述第三扁平化层，两个所述第三全连接层和两个所述第三激励层；the second network comprises the third flattening layer, two third fully connected layers, and two third excitation layers; and
所述第三网络包括：一个所述第四全连接层和一个所述第四激励层。the third network comprises one fourth fully connected layer and one fourth excitation layer.
17. 根据权利要求16所述的方法，其特征在于，两个所述第二激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数；和/或，The method according to claim 16, wherein the excitation functions in the two second excitation layers are respectively a rectified linear unit (ReLU) function and a Sigmoid function; and/or,
两个所述第三激励层中的激励函数分别为修正线性单元ReLU函数和Sigmoid函数；和/或，the excitation functions in the two third excitation layers are respectively a ReLU function and a Sigmoid function; and/or,
一个所述第四激励层中的激励函数为修正线性单元ReLU函数。the excitation function in the one fourth excitation layer is a ReLU function.

18. 根据权利要求1-17中任一项所述的方法，其特征在于，所述方法还包括：The method according to any one of claims 1-17, wherein the method further comprises:
获取第二识别目标的第二眼部图像；acquiring a second eye image of a second recognition target;
根据所述第二眼部图像进行基于虹膜的人脸防伪判断，以确定所述第二识别目标是否为活体人脸，其中，人脸防伪判别的结果用于建立人脸特征模板。performing iris-based face anti-counterfeiting judgment according to the second eye image, to determine whether the second recognition target is a living human face, wherein the result of the face anti-counterfeiting judgment is used to establish a face feature template.

19. 根据权利要求18所述的方法，其特征在于，所述第二眼部图像为二维红外图像。The method according to claim 18, wherein the second eye image is a two-dimensional infrared image.

20. 根据权利要求18或19所述的方法，其特征在于，所述方法还包括：The method according to claim 18 or 19, wherein the method further comprises:
获取所述第二识别目标的第二目标图像，基于所述第二目标图像获取所述第二眼部图像，并基于所述第二目标图像建立所述人脸特征模板。acquiring a second target image of the second recognition target, acquiring the second eye image based on the second target image, and establishing the face feature template based on the second target image.
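Claims 14-17 above describe a two-branch variant: one branch per eye (flatten, two fully connected layers, ReLU then Sigmoid), with a third network that fully connects the two branch outputs through a final FC followed by ReLU. A hedged NumPy sketch of that topology (branch widths, feature sizes, and the random placeholder weights are invented for illustration, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Branch:
    """One per-eye branch: flatten -> FC -> ReLU -> FC -> Sigmoid."""
    def __init__(self, n_in: int, hidden: int = 16, n_out: int = 4):
        self.w1 = rng.normal(0.0, 0.01, (n_in, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.01, (hidden, n_out))
        self.b2 = np.zeros(n_out)

    def __call__(self, eye: np.ndarray) -> np.ndarray:
        x = eye.astype(np.float64).ravel() / 255.0
        return sigmoid(relu(x @ self.w1 + self.b1) @ self.w2 + self.b2)

class TwoEyeLivenessNet:
    """Left/right branches plus a fusion FC + ReLU, as in claims 14-17."""
    def __init__(self, h: int, w: int):
        self.left = Branch(h * w)
        self.right = Branch(h * w)
        self.wf = rng.normal(0.0, 0.01, (8, 1))  # 4 features per branch
        self.bf = np.zeros(1)

    def forward(self, left_eye: np.ndarray, right_eye: np.ndarray) -> float:
        feats = np.concatenate([self.left(left_eye), self.right(right_eye)])
        return float(relu(feats @ self.wf + self.bf)[0])
```

Concatenating the two branches lets the fusion layer weigh left-eye and right-eye evidence jointly rather than averaging two independent decisions.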
21. 根据权利要求20所述的方法，其特征在于，所述方法还包括：The method according to claim 20, wherein the method further comprises:
基于所述第二目标图像进行人脸检测；performing face detection based on the second target image;
其中，所述基于所述第二目标图像建立人脸特征模板包括：wherein the establishing the face feature template based on the second target image comprises:
在人脸检测成功时，基于所述第二目标图像获取第二人脸图像，并根据所述第二人脸图像建立所述人脸特征模板。when the face detection succeeds, acquiring a second face image based on the second target image, and establishing the face feature template according to the second face image.

22. 根据权利要求21所述的方法，其特征在于，所述基于所述第二人脸图像建立所述人脸特征模板，包括：The method according to claim 21, wherein the establishing the face feature template based on the second face image comprises:
判断所述第二人脸图像是否属于人脸特征模板库；judging whether the second face image belongs to a face feature template library;
当所述第二人脸图像属于所述人脸特征模板库时，将所述第二人脸图像与所述人脸特征模板库中的多个人脸特征模板进行匹配；when the second face image belongs to the face feature template library, matching the second face image with a plurality of face feature templates in the face feature template library;
当所述第二人脸图像不属于所述人脸特征模板库时，根据所述第二眼部图像进行基于虹膜的人脸防伪判别，当确定所述第二识别目标为活体人脸时，将所述第二人脸图像建立为人脸特征模板。when the second face image does not belong to the face feature template library, performing iris-based face anti-counterfeiting judgment according to the second eye image, and when it is determined that the second recognition target is a living face, establishing the second face image as a face feature template.

23. 根据权利要求22所述的方法，其特征在于，所述将所述第二人脸图像与所述人脸特征模板库中的多个人脸特征模板进行匹配，包括：The method according to claim 22, wherein the matching the second face image with the plurality of face feature templates in the face feature template library comprises:
当匹配成功时，根据所述第二眼部图像进行基于虹膜的人脸防伪判别；when the matching succeeds, performing iris-based face anti-counterfeiting judgment according to the second eye image;
当确定所述第二识别目标为活体人脸时，将所述第二人脸图像建立为人脸特征模板。when it is determined that the second recognition target is a living face, establishing the second face image as a face feature template.
24. 根据权利要求23所述的方法，其特征在于，所述当匹配成功时，根据所述第二眼部图像进行基于虹膜的人脸防伪判别，包括：The method according to claim 23, wherein the performing iris-based face anti-counterfeiting judgment according to the second eye image when the matching succeeds comprises:
当匹配成功时，获取所述第二识别目标的3D点云数据；when the matching succeeds, acquiring 3D point cloud data of the second recognition target;
当所述3D点云数据为有效点云时，根据所述第二眼部图像进行基于虹膜的人脸防伪判别。when the 3D point cloud data is a valid point cloud, performing the iris-based face anti-counterfeiting judgment according to the second eye image.

25. 根据权利要求18-24中任一项所述的方法，其特征在于，所述第二眼部图像为包括虹膜的人眼区域图像或虹膜区域图像。The method according to any one of claims 18-24, wherein the second eye image is a human eye region image including an iris, or an iris region image.

26. 根据权利要求25所述的方法，其特征在于，所述根据所述第二眼部图像进行基于虹膜的人脸防伪判断，包括：The method according to claim 25, wherein the performing iris-based face anti-counterfeiting judgment according to the second eye image comprises:
采用直方图均衡化方法对所述第二眼部图像进行处理得到第二优化眼部图像；processing the second eye image by using a histogram equalization method to obtain a second optimized eye image;
根据所述第二优化眼部图像进行基于虹膜的人脸防伪判断。performing the iris-based face anti-counterfeiting judgment according to the second optimized eye image.

27. 根据权利要求26所述的方法，其特征在于，所述根据所述第二优化眼部图像进行基于虹膜的人脸防伪判断，包括：The method according to claim 26, wherein the performing iris-based face anti-counterfeiting judgment according to the second optimized eye image comprises:
通过神经网络对所述第二优化眼部图像进行分类处理，以确定所述第二识别目标是否为活体人脸。performing classification processing on the second optimized eye image through a neural network, to determine whether the second recognition target is a living human face.

28. 根据权利要求27所述的方法，其特征在于，所述第二眼部图像包括第二左眼眼部图像和/或第二右眼眼部图像，所述通过神经网络对所述第二优化眼部图像进行分类处理包括：The method according to claim 27, wherein the second eye image comprises a second left-eye image and/or a second right-eye image, and the performing classification processing on the second optimized eye image through the neural network comprises:
通过神经网络对所述第二左眼眼部图像和/或所述第二右眼眼部图像进行分类处理。performing classification processing on the second left-eye image and/or the second right-eye image through the neural network.

29. 根据权利要求27或28所述的方法，其特征在于，所述神经网络包括：至少一个扁平化层，至少一个全连接层和至少一个激励层。The method according to claim 27 or 28, wherein the neural network comprises at least one flattening layer, at least one fully connected layer, and at least one excitation layer.

30. 一种人脸识别的装置，其特征在于，包括：处理器；所述处理器用于执行如权利要求1至29中任一项所述的人脸识别的方法。A face recognition apparatus, characterized by comprising a processor, wherein the processor is configured to execute the face recognition method according to any one of claims 1 to 29.

31. 一种电子设备，其特征在于，包括：如权利要求30所述的人脸识别的装置。An electronic device, characterized by comprising the face recognition apparatus according to claim 30.
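Claims 22-24 above gate the enrolment of a new face-feature template on a template-library check, 3D point-cloud validity, and the iris-based liveness judgment. A simplified sketch of that gating logic (every callable name here is a stand-in supplied by the caller, not an API from the patent, and the flow is a loose approximation of the claims):

```python
def try_enroll(face_img, eye_img, templates,
               match, get_point_cloud, cloud_valid, is_live) -> bool:
    """Enrol `face_img` as a new face-feature template when the
    claim-22/23/24 conditions hold; otherwise reject it."""
    if templates and match(face_img, templates):
        # Matched an existing template: claim 24 additionally requires a
        # valid 3D point cloud before the iris liveness check.
        if not cloud_valid(get_point_cloud(face_img)):
            return False
    # In every path, enrol only when the iris liveness judgment passes.
    if is_live(eye_img):
        templates.append(face_img)
        return True
    return False
```

The liveness check sits last in both branches, so a spoofed face can never be written into the template library.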
PCT/CN2019/093159 2019-06-27 2019-06-27 Face recognition method and apparatus, and electronic device Ceased WO2020258119A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980001099.8A CN110462632A (en) 2019-06-27 2019-06-27 The method, apparatus and electronic equipment of recognition of face
PCT/CN2019/093159 WO2020258119A1 (en) 2019-06-27 2019-06-27 Face recognition method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/093159 WO2020258119A1 (en) 2019-06-27 2019-06-27 Face recognition method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2020258119A1 true WO2020258119A1 (en) 2020-12-30

Family

ID=68492772

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093159 Ceased WO2020258119A1 (en) 2019-06-27 2019-06-27 Face recognition method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN110462632A (en)
WO (1) WO2020258119A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114764924B (en) * 2020-12-30 2025-09-05 北京眼神智能科技有限公司 Method, device, readable storage medium and equipment for silent face liveness detection
CN113705460B (en) * 2021-08-30 2024-03-15 平安科技(深圳)有限公司 Method, device, equipment and storage medium for detecting open and closed eyes of face in image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077563A (en) * 2014-05-30 2014-10-01 小米科技有限责任公司 Human face recognition method and device
CN107506696A (en) * 2017-07-29 2017-12-22 广东欧珀移动通信有限公司 Anti-fake processing method and related product
US20180276487A1 (en) * 2017-03-24 2018-09-27 Wistron Corporation Method, system, and computer-readable recording medium for long-distance person identification
CN108647600A (en) * 2018-04-27 2018-10-12 深圳爱酷智能科技有限公司 Face identification method, equipment and computer readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573209A (en) * 2018-02-28 2018-09-25 天眼智通(香港)有限公司 Single-model multi-output age and gender identification method and system based on human face
CN109635746A (en) * 2018-12-14 2019-04-16 睿云联(厦门)网络通讯技术有限公司 It is a kind of that face vivo identification method and computer readable storage medium are singly taken the photograph based on NIR residual plot elephant


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12462606B2 (en) 2018-05-10 2025-11-04 Wicket, Llc System and method for facial recognition accuracy
CN113033499A (en) * 2021-04-30 2021-06-25 中国工商银行股份有限公司 Iris identification method and device
US12361761B1 (en) * 2021-05-06 2025-07-15 Wicket, Llc System and method for access control using liveness detection
CN113221766A (en) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 Method for training living body face recognition model and method for recognizing living body face and related device
CN113378715B (en) * 2021-06-10 2024-01-05 北京华捷艾米科技有限公司 Living body detection method based on color face image and related equipment
CN113378715A (en) * 2021-06-10 2021-09-10 北京华捷艾米科技有限公司 Living body detection method based on color face image and related equipment
CN113255594A (en) * 2021-06-28 2021-08-13 深圳市商汤科技有限公司 Face recognition method and device and neural network
CN113762205A (en) * 2021-09-17 2021-12-07 深圳市爱协生科技有限公司 Human face image operation trace detection method, computer equipment and readable storage medium
CN114359665A (en) * 2021-12-27 2022-04-15 北京奕斯伟计算技术有限公司 Training method and device of full-task face recognition model and face recognition method
CN114359665B (en) * 2021-12-27 2024-03-26 北京奕斯伟计算技术股份有限公司 Full-task face recognition model training method and device, face recognition method
CN115601818A (en) * 2022-11-29 2023-01-13 海豚乐智科技(成都)有限责任公司(Cn) Lightweight visible light living body detection method and device
CN116343313A (en) * 2023-05-30 2023-06-27 乐山师范学院 Face recognition method based on eye features
CN116343313B (en) * 2023-05-30 2023-08-11 乐山师范学院 Face recognition method based on eye features
CN117789312A (en) * 2023-12-28 2024-03-29 深圳市华弘智谷科技有限公司 Method and device for identifying living body in VR and intelligent glasses

Also Published As

Publication number Publication date
CN110462632A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
WO2020258119A1 (en) Face recognition method and apparatus, and electronic device
CN110383288B (en) Face recognition method and device and electronic equipment
WO2020258121A1 (en) Face recognition method and apparatus, and electronic device
WO2020258120A1 (en) Face recognition method and device, and electronic apparatus
CN106778525B (en) Identity authentication method and device
US9971920B2 (en) Spoof detection for biometric authentication
CN112651348B (en) Identity authentication method and device and storage medium
US8744141B2 (en) Texture features for biometric authentication
US10095927B2 (en) Quality metrics for biometric authentication
KR102766550B1 (en) Liveness test method and liveness test apparatus, biometrics authentication method and biometrics authentication apparatus
US20220148338A1 (en) Method and apparatus with liveness testing
CN105335722A (en) A detection system and method based on depth image information
WO2020243968A1 (en) Facial recognition apparatus and method, and electronic device
US20200279101A1 (en) Face verification method and apparatus, server and readable storage medium
JP2025507401A (en) Facial recognition with material data extracted from images
JP2025507403A (en) Face recognition including occlusion detection based on material data extracted from images
CN111191549A (en) Two-stage face anti-counterfeiting detection method
JP2025508407A (en) Image manipulation for determining materials information.
CN120472549B (en) Smart lock unlocking method and device based on facial recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19935008

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19935008

Country of ref document: EP

Kind code of ref document: A1