
CN111353326A - In-vivo detection method based on multispectral face difference image - Google Patents


Info

Publication number
CN111353326A
CN111353326A (application CN201811561012.6A)
Authority
CN
China
Prior art keywords
living body
region
image
difference image
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811561012.6A
Other languages
Chinese (zh)
Inventor
钟千里
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Irisian Optronics Technology Co ltd
Original Assignee
Shanghai Irisian Optronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Irisian Optronics Technology Co ltd filed Critical Shanghai Irisian Optronics Technology Co ltd
Priority to CN201811561012.6A
Publication of CN111353326A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a living body detection method based on a multispectral face difference image, which performs living body detection using the infrared lamp, infrared camera and visible light camera of an iris recognition device. The method comprises the following steps: 11: acquiring a multispectral face difference image of the current living body detection object and determining the regions of interest; 12: inputting each region of interest into its corresponding classification model; 13: determining whether the living body detection passes according to the detection results of the classification models.

Description

In-vivo detection method based on multispectral face difference image
Technical Field
The invention relates to the technical field of living body (liveness) detection, and in particular to a living body detection method based on a multispectral face difference image.
Background
Existing iris anti-counterfeiting methods have the following defects:
1. Spectrum analysis cannot reliably reject a high-definition printed iris image or an iris image with motion blur.
2. Reflectance-information analysis fails against a printed iris image worn together with a contact lens.
3. Texture analysis cannot reliably judge whether an iris image is from a living body when the image lies near the boundary between blurred and sharp.
4. Pupil-constriction detection requires strong light to stimulate the pupil into constricting, which gives a poor user experience that users can hardly accept.
Disclosure of Invention
To solve the above technical problem, the invention provides a living body detection method based on a multispectral face difference image, comprising the following steps:
S1: collecting a visible light image and an infrared image of the current user;
S2: converting the infrared image and the visible light image into grayscale images;
S3: performing face detection on each of the two images;
S4: judging whether a face is detected in both images; if not, determining that the current user is a non-living body;
S5: if a face is detected in both images, performing face key point detection on both images to obtain the key points of the regions of interest;
S6: aligning the face regions in the two images with a face alignment algorithm according to the key point detection results to obtain a face difference image;
S7: determining the regions of interest of the face difference image according to its key point information;
S8: inputting each region of interest of the face difference image into a pre-trained classification model to obtain the living body detection result.
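As an illustration of how steps S1 to S8 fit together, the following Python sketch uses OpenCV. The Haar-cascade face detector and the extract_rois and classify callables are placeholder assumptions (the patent names no specific libraries); possible realizations of the delegated steps are sketched in the detailed description below.

```python
import cv2

# Placeholder face detector; the patent does not name a specific algorithm.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_liveness(visible_bgr, infrared_bgr, extract_rois, classify):
    # S2: convert both captured images to grayscale.
    vis = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
    ir = cv2.cvtColor(infrared_bgr, cv2.COLOR_BGR2GRAY)
    # S3/S4: a face must be detected in *both* images, else non-living body.
    if len(face_cascade.detectMultiScale(vis)) == 0 \
            or len(face_cascade.detectMultiScale(ir)) == 0:
        return False
    # S5-S7: key point detection, alignment, differencing and ROI cropping
    # are delegated to extract_rois (assumed callable, sketched below).
    rois = extract_rois(vis, ir)  # e.g. {"eye": ..., "nose": ..., ...}
    # S8: per-ROI classifiers plus weighted fusion produce the final result.
    return classify(rois)
```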
The living body detection method based on the multispectral face difference image is further improved in that the visible light image and the infrared image in step S1 are acquired simultaneously by a visible light camera and an infrared camera, respectively.
The living body detection method based on the multispectral face difference image is further improved in that the visible light image and the infrared image in step S1 are collected by a single camera with a switchable filter.
The living body detection method based on the multispectral face difference image is further improved in that the regions of interest in step S5 are of four types: an eye region, a nose region, a mouth region and a cheek region.
the living body detection method based on the multispectral face difference image is further improved in that the face alignment algorithm aligned in the step S6 aligns and coincides the faces in the infrared images with the face regions in the two images based on the faces in the visible light images.
The living body detection method based on the multispectral face difference image is further improved in that the face difference image in step S6 is obtained by taking the pixel-wise difference between the face region of the visible light image and the face region of the aligned infrared image.
The living body detection method based on the multispectral face difference image is further improved in that there are four classification models in step S8: an eye living body classification model, a nose living body classification model, a mouth living body classification model and a cheek living body classification model.
The living body detection method based on the multispectral face difference image is further improved in that the four classification models are established by the following steps:
S01: acquiring the regions of interest of living body face difference images and of non-living body face difference images according to steps S1-S7;
S02: inputting the eye regions of living body face difference images as positive samples and the eye regions of non-living body face difference images as negative samples into a convolutional neural network to train the eye living body classification model;
inputting the nose regions of living body face difference images as positive samples and the nose regions of non-living body face difference images as negative samples into a convolutional neural network to train the nose living body classification model;
inputting the mouth regions of living body face difference images as positive samples and the mouth regions of non-living body face difference images as negative samples into a convolutional neural network to train the mouth living body classification model;
and inputting the cheek regions of living body face difference images as positive samples and the cheek regions of non-living body face difference images as negative samples into a convolutional neural network to train the cheek living body classification model.
The living body detection method based on the multispectral face difference image is further improved in that the detection result in step S8 is either that the living body detection passes or that it fails, and the detection result is determined by the following steps:
S11: acquiring a face difference image of the current living body detection object according to steps S1-S7 and determining its regions of interest;
S12: inputting the eye region, nose region, mouth region and cheek region images into the eye, nose, mouth and cheek living body classification models respectively to obtain each model's detection result: an eye living body detection value, a nose living body detection value, a mouth living body detection value and a cheek living body detection value;
S13: multiplying the eye, nose, mouth and cheek living body detection values by their respective preset weights and summing the products to obtain a comprehensive living body detection value;
S14: judging whether the comprehensive living body detection value is higher than a preset living body detection threshold; if yes, the living body detection passes, and if not, it fails.
The living body detection method based on the multispectral face difference image requires no active cooperation from the user, applies no uncomfortable stimulus to the user, and effectively resists photo attacks, video attacks, and 3D face model or 3D face mask attacks. Besides iris living body detection, the method can also be applied to face living body detection.
Drawings
FIG. 1 is a schematic view of the living body detection procedure.
FIG. 2 is a schematic diagram of the process of training the classification models.
FIG. 3 is a schematic flow chart of acquiring the multispectral face difference image and determining the regions of interest.
Detailed Description
The present invention is described in further detail below with reference to the attached drawings.
The invention provides a living body detection method based on a multispectral face difference image, the steps of which are shown in FIG. 1:
11: acquiring a multispectral face difference image of the current living body detection object and determining the regions of interest;
12: inputting each region of interest into its corresponding classification model;
13: determining whether the living body detection passes according to the detection results of the classification models.
In step 11, the multispectral face difference image of the current living body detection object is acquired and the regions of interest are determined; as shown in FIG. 3, the implementation comprises the following steps:
31: respectively collecting a visible light image and an infrared image;
Preferably, in this embodiment the visible light image and the infrared image are collected simultaneously by a visible light camera and an infrared camera, where simultaneous collection means that the exposure start times of the two cameras are the same. In other embodiments, a single camera with a switchable filter can collect both images: such a camera switches between an infrared band-pass filter and an infrared cut-off filter, collecting the infrared image when switched to the band-pass filter and the visible light image when switched to the cut-off filter;
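A minimal capture sketch with OpenCV follows, assuming the two cameras enumerate as device indices 0 (visible) and 1 (infrared); back-to-back grab() calls only approximate the identical exposure start that dedicated hardware triggering would guarantee.

```python
import cv2

vis_cam = cv2.VideoCapture(0)  # visible light camera (index assumed)
ir_cam = cv2.VideoCapture(1)   # infrared camera (index assumed)

# grab() on both devices back to back approximates a common exposure start;
# retrieve() then decodes the already-grabbed frames.
vis_cam.grab()
ir_cam.grab()
ok_v, visible = vis_cam.retrieve()
ok_i, infrared = ir_cam.retrieve()
if not (ok_v and ok_i):
    raise RuntimeError("failed to capture a synchronized frame pair")
```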
32: converting the infrared image and the visible light image into grayscale images;
33: respectively carrying out face detection on the two images;
34: judging whether a face is detected in both images; if a face is not detected in both images, the current user is considered a non-living body;
35: if the faces of the two images are detected, face key point detection is carried out on the two images to obtain key points of the region of interest;
In this embodiment there are 68 key points, indicating the positions of the eyebrows, eyes, nose, mouth and cheeks and the outline of the face;
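The 68-point layout matches the scheme used by dlib's pretrained shape predictor, so one possible realization of step 35 is sketched below; the patent names no library, and the model file path is an assumption.

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Pretrained 68-point model distributed by dlib; local path is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks_68(gray):
    """Return the 68 (x, y) key points of the first detected face, or None."""
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```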
36: aligning the face by using a face alignment algorithm according to the key point detection result to obtain a face difference image;
Specifically: first, the face regions in the two images are aligned with a face alignment algorithm. In this embodiment the algorithm takes the face in the visible light image as the reference and deforms the face of the infrared image so that its outline and regions of interest coincide with those of the visible light face; in other embodiments, the algorithm may instead take the face in the infrared image as the reference and deform the face of the visible light image. After alignment, the face difference image is obtained as the difference between the face region of the visible light image and the face region of the aligned infrared image, computed by subtracting corresponding pixel values; in this embodiment the absolute value of each subtraction is taken. In other embodiments, a face superimposed image may instead be formed by adding corresponding pixel values of the two face regions, subtracting 255 from any sum greater than 255 to represent the superimposed pixel value;
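A minimal sketch of this alignment-and-differencing step follows; a similarity transform estimated from the key points stands in for the unspecified face alignment algorithm.

```python
import cv2
import numpy as np

def face_difference(vis_gray, ir_gray, vis_pts, ir_pts):
    """Warp the IR face onto the visible face, then take |visible - IR|."""
    vis_pts = np.asarray(vis_pts, dtype=np.float32)
    ir_pts = np.asarray(ir_pts, dtype=np.float32)
    # Estimate a similarity transform mapping IR landmarks onto visible ones.
    M, _ = cv2.estimateAffinePartial2D(ir_pts, vis_pts)
    ir_aligned = cv2.warpAffine(
        ir_gray, M, (vis_gray.shape[1], vis_gray.shape[0]))
    # Pixel-wise subtraction with absolute value, as in this embodiment.
    return cv2.absdiff(vis_gray, ir_aligned)
```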
37: determining the regions of interest according to the key point information of the face difference image;
Since the face alignment algorithm in this embodiment takes the face in the visible light image as the reference, the key point information of the face difference image is identical to that of the visible light image.
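Step 37 can then be realized by bounding the landmark group of each region, as in the sketch below; the index ranges follow the common 68-point convention, and the cheek indices and padding are assumptions.

```python
import numpy as np

# Landmark index groups under the common 68-point convention (assumed).
ROI_POINTS = {
    "eye":   list(range(36, 48)),            # both eyes
    "nose":  list(range(27, 36)),
    "mouth": list(range(48, 68)),
    "cheek": [1, 2, 3, 4, 12, 13, 14, 15],   # rough cheek areas (assumed)
}

def crop_rois(diff_img, pts, pad=10):
    """Crop one sub-image of the difference image per region of interest."""
    pts = np.asarray(pts, dtype=int)
    h, w = diff_img.shape[:2]
    rois = {}
    for name, idx in ROI_POINTS.items():
        xs, ys = pts[idx, 0], pts[idx, 1]
        x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, w)
        y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, h)
        rois[name] = diff_img[y0:y1, x0:x1]
    return rois
```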
As shown in FIG. 2, training the classification models used in step 12 comprises the following steps:
21: acquiring multispectral face difference images of living bodies and non-living bodies, and determining the regions of interest;
steps 31 to 37 of the method for acquiring a face difference image are already shown, and will not be described herein again, in this embodiment, the non-living body acquisition object may be a face image printed on a paper sheet, or a 3D face model, or a 3D face mask, and if the non-living body acquisition object cannot pass the detection of step 34, the acquisition of the region of interest is not performed,
22: inputting the positive samples and the negative samples into a convolutional neural network to train the classification models;
In this embodiment there are four classification models: an eye living body classification model, a nose living body classification model, a mouth living body classification model and a cheek living body classification model; other embodiments may additionally include classification models trained on the eyebrows or other regions of the face.
The classification models are trained as follows (a minimal training sketch is given after this list):
the predicted value of a living body classification model is designated as 1 or 0 (1 being the label of a positive sample and 0 the label of a negative sample);
the eye regions of living body face difference images are input as positive samples and the eye regions of non-living body face difference images as negative samples into a convolutional neural network to train the eye living body classification model;
the nose regions of living body face difference images are input as positive samples and the nose regions of non-living body face difference images as negative samples into a convolutional neural network to train the nose living body classification model;
the mouth regions of living body face difference images are input as positive samples and the mouth regions of non-living body face difference images as negative samples into a convolutional neural network to train the mouth living body classification model;
and the cheek regions of living body face difference images are input as positive samples and the cheek regions of non-living body face difference images as negative samples into a convolutional neural network to train the cheek living body classification model.
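The following PyTorch sketch shows one such binary classifier and its training loop; the architecture, the 64x64 input size and the optimizer settings are assumptions, since the patent specifies only a convolutional neural network with labels 1 and 0.

```python
import torch
import torch.nn as nn

class RoiLivenessNet(nn.Module):
    """Small CNN mapping a 1x64x64 grayscale ROI to a liveness probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

def train_roi_model(model, loader, epochs=10):
    """loader yields (x, y): x is (N, 1, 64, 64), y is (N, 1) with 1=living."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

One model per region of interest is trained this way, e.g. eye_model = train_roi_model(RoiLivenessNet(), eye_loader).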
Because the eye, nose, mouth and cheek regions of interest differ in importance for living body detection, a weight is preset for each region; in this embodiment the preset weights of the four regions of interest are 0.5, 0.1, 0.2 and 0.2 respectively. The living body detection result of step 13 is then computed as follows: the predicted value obtained by feeding each region of interest into its corresponding living body classification model is multiplied by the corresponding weight, and the products are summed into a comprehensive predicted value; the comprehensive predicted value is compared with a preset living body detection threshold, and the living body detection fails if the comprehensive predicted value is smaller than the threshold and passes if it is greater than or equal to it. In this embodiment the living body detection threshold is 0.6.
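With the weights and threshold of this embodiment, the fusion of step 13 reduces to a weighted sum and a comparison:

```python
# Weights and threshold as given in this embodiment.
WEIGHTS = {"eye": 0.5, "nose": 0.1, "mouth": 0.2, "cheek": 0.2}
THRESHOLD = 0.6

def fuse(scores):
    """scores: predicted values in [0, 1] from the four ROI classifiers."""
    combined = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
    return combined >= THRESHOLD  # True means the living body detection passes

# Example: 0.5*0.9 + 0.1*0.4 + 0.2*0.7 + 0.2*0.6 = 0.75 >= 0.6 -> passes.
print(fuse({"eye": 0.9, "nose": 0.4, "mouth": 0.7, "cheek": 0.6}))  # True
```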
It should be understood that the described embodiments are merely exemplary and do not limit the scope of the invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.

Claims (9)

1. A living body detection method based on a multispectral face difference image, characterized in that it comprises the following steps:
S1: collecting a visible light image and an infrared image of the current user;
S2: converting the infrared image and the visible light image into grayscale images;
S3: performing face detection on each of the two images;
S4: judging whether a face is detected in both images; if not, determining that the current user is a non-living body;
S5: if a face is detected in both images, performing face key point detection on both images to obtain the key points of the regions of interest;
S6: aligning the face regions in the two images with a face alignment algorithm according to the key point detection results to obtain a face difference image;
S7: determining the regions of interest of the face difference image according to its key point information;
S8: inputting each region of interest of the face difference image into a pre-trained classification model to obtain the living body detection result.
2. The method according to claim 1, wherein in step S1 the visible light image and the infrared image are captured simultaneously by a visible light camera and an infrared camera, respectively.
3. The method according to claim 1, wherein in step S1 the visible light image and the infrared image are collected by a single camera with a switchable filter.
4. The method according to claim 1, wherein the regions of interest in step S5 are of four types: an eye region, a nose region, a mouth region and a cheek region.
5. The living body detection method based on the multispectral face difference image according to claim 1, wherein the face alignment algorithm in step S6 takes the face in the visible light image as the reference and aligns the face in the infrared image to it, so that the two faces coincide.
6. The method according to claim 1, wherein the face difference image in step S6 is obtained by taking the pixel-wise difference between the face region of the visible light image and the face region of the aligned infrared image.
7. The method according to claim 4, wherein there are four classification models in step S8: an eye living body classification model, a nose living body classification model, a mouth living body classification model and a cheek living body classification model.
8. The living body detection method based on the multispectral face difference image according to claim 7, wherein the four classification models are established by the following steps:
S01: acquiring the regions of interest of living body face difference images and of non-living body face difference images according to steps S1-S7;
S02: inputting the eye regions of living body face difference images as positive samples and the eye regions of non-living body face difference images as negative samples into a convolutional neural network to train the eye living body classification model;
inputting the nose regions of living body face difference images as positive samples and the nose regions of non-living body face difference images as negative samples into a convolutional neural network to train the nose living body classification model;
inputting the mouth regions of living body face difference images as positive samples and the mouth regions of non-living body face difference images as negative samples into a convolutional neural network to train the mouth living body classification model;
and inputting the cheek regions of living body face difference images as positive samples and the cheek regions of non-living body face difference images as negative samples into a convolutional neural network to train the cheek living body classification model.
9. The method according to claim 7, wherein the detection result in step S8 is either that the living body detection passes or that it fails, and the detection result is determined by the following steps:
S11: acquiring a face difference image of the current living body detection object according to steps S1-S7 and determining its regions of interest;
S12: inputting the eye region, nose region, mouth region and cheek region images into the eye, nose, mouth and cheek living body classification models respectively to obtain each model's detection result: an eye living body detection value, a nose living body detection value, a mouth living body detection value and a cheek living body detection value;
S13: multiplying the eye, nose, mouth and cheek living body detection values by their respective preset weights and summing the products to obtain a comprehensive living body detection value;
S14: judging whether the comprehensive living body detection value is higher than a preset living body detection threshold; if yes, the living body detection passes, and if not, it fails.
CN201811561012.6A 2018-12-20 2018-12-20 In-vivo detection method based on multispectral face difference image Pending CN111353326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811561012.6A CN111353326A (en) 2018-12-20 2018-12-20 In-vivo detection method based on multispectral face difference image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811561012.6A CN111353326A (en) 2018-12-20 2018-12-20 In-vivo detection method based on multispectral face difference image

Publications (1)

Publication Number Publication Date
CN111353326A true CN111353326A (en) 2020-06-30

Family

ID=71195354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811561012.6A Pending CN111353326A (en) 2018-12-20 2018-12-20 In-vivo detection method based on multispectral face difference image

Country Status (1)

Country Link
CN (1) CN111353326A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170345146A1 (en) * 2016-05-30 2017-11-30 Beijing Kuangshi Technology Co., Ltd. Liveness detection method and liveness detection system
CN106778518A (en) * 2016-11-24 2017-05-31 汉王科技股份有限公司 A kind of human face in-vivo detection method and device
CN107292290A (en) * 2017-07-17 2017-10-24 广东欧珀移动通信有限公司 Face vivo identification method and Related product
CN107862299A (en) * 2017-11-28 2018-03-30 电子科技大学 A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera
CN108764071A (en) * 2018-05-11 2018-11-06 四川大学 It is a kind of based on infrared and visible images real human face detection method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
袁浩期; 李扬; 王俊影; 刘航: "High-precision detection of pedestrian facial temperature based on infrared thermal imaging" (in Chinese) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11475714B2 (en) * 2020-02-19 2022-10-18 Motorola Solutions, Inc. Systems and methods for detecting liveness in captured image data
CN112132046A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 Static living body detection method and system
WO2022110846A1 (en) * 2020-11-24 2022-06-02 奥比中光科技集团股份有限公司 Living body detection method and device
US20240021021A1 (en) * 2021-05-26 2024-01-18 Orbbec Inc. Light source spectrum and multispectral reflectivity image acquisition methods and apparatuses, and electronic device
CN115880787A (en) * 2022-12-15 2023-03-31 深圳市巨龙创视科技有限公司 A face detection method, system, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111353326A (en) In-vivo detection method based on multispectral face difference image
CN108389207B (en) Dental disease diagnosis method, diagnosis equipment and intelligent image acquisition device
CN107451998B (en) Fundus image quality control method
AU2012328140B2 (en) System and method for identifying eye conditions
CN109697716B Glaucoma image identification method, equipment and screening system
US7327860B2 (en) Conjunctival scans for personal identification
CN106022209B Method and device for distance estimation and processing based on face detection
CN110363087B (en) Long-baseline binocular face in-vivo detection method and system
CN107103298B (en) Pull-up counting system and counting method based on image processing
CN109684981B Glaucoma image identification method, equipment and screening system
CN109558840A Living body detection method based on feature fusion
RU2431190C2 (en) Facial prominence recognition method and device
CN101271517A (en) Face area detection device, method and computer readable recording medium
CN111832464B (en) Living body detection method and device based on near infrared camera
CN109948476B (en) Human face skin detection system based on computer vision and implementation method thereof
JP2008113902A (en) Eye opening detection device and method
JP3490910B2 (en) Face area detection device
CN109345524A Machine-vision-based bearing appearance defect detection system
CN113989870A (en) Living body detection method, door lock system and electronic equipment
CN112784669A (en) Method for object re-recognition
KR100822476B1 (en) Remote emergency monitoring system and method
CN110532993B (en) Face anti-counterfeiting method and device, electronic equipment and medium
CN112651957A (en) Human eye closing degree detection device
CN117011921A (en) Image processing method and system for improving face recognition rate
CN110674737A (en) Iris recognition enhancement method

Legal Events

  • PB01: Publication
  • SE01: Entry into force of request for substantive examination
  • AD01: Patent right deemed abandoned (effective date of abandoning: 2023-12-29)