
CN111881815A - A face liveness detection method based on multi-model feature transfer - Google Patents

A face liveness detection method based on multi-model feature transfer

Info

Publication number
CN111881815A
CN111881815A (application CN202010728371.7A)
Authority
CN
China
Prior art keywords
model
probability
living body
face
yuv
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010728371.7A
Other languages
Chinese (zh)
Inventor
凌康杰
王祥雪
林焕凯
刘双广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gosuncn Technology Group Co Ltd
Original Assignee
Gosuncn Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gosuncn Technology Group Co Ltd filed Critical Gosuncn Technology Group Co Ltd
Priority to CN202010728371.7A priority Critical patent/CN111881815A/en
Publication of CN111881815A publication Critical patent/CN111881815A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/254 - Fusion techniques of classification results, e.g. of results related to same input data
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/162 - Detection; Localisation; Normalisation using pixel segmentation or colour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention proposes a face liveness detection method based on multi-model feature transfer. By constructing and fusing heterogeneous datasets, liveness training is performed with a multi-model feature transfer approach over multiple color spaces, improving the accuracy and generalization ability of the liveness detection model. Specifically, in the training phase, visible light images from open-source or private datasets are fused and, after face detection, alignment, and cropping, an RGB model and a YUV model are trained separately until the models converge; in the prediction phase, a captured visible light image is input into the trained RGB model and YUV model respectively, the results of the two models are obtained, a final score is derived through a model score fusion strategy, and the liveness detection result is determined from that score. The method has good generalization performance and high accuracy, and is suitable for industrial deployment.

Figure 202010728371

Description

A face liveness detection method based on multi-model feature transfer

Technical Field

The invention belongs to the technical fields of computer vision, pattern recognition, machine learning, convolutional neural networks, and face recognition, and in particular relates to a face liveness detection method based on multi-model feature transfer.

Background Art

Face recognition technology has been widely used in security surveillance, human-computer interaction, e-commerce, mobile payment, and other fields. Face liveness detection is the first gate of face recognition and a prerequisite for applying face recognition technology. The main technical solutions for current liveness detection are interactive liveness detection, multi-source information fusion liveness detection, and static-image liveness detection. Interactive liveness detection requires user cooperation, which is inconvenient, involves cumbersome steps, is easily resisted by users, and is inefficient. Multi-source information fusion liveness detection usually requires adding depth cameras, infrared cameras, 3D cameras, microphones, and the like, which not only increases hardware cost but also brings a large amount of complex 3D modeling computation. Static-image liveness detection is a low-cost, fast liveness detection method, but insufficient datasets and inefficient model construction methods often leave the resulting models with poor generalization.

Current static-image liveness detection methods usually build the liveness detection model for a limited set of attack types under the single scenario represented by a single dataset. Such methods can reach the desired accuracy in the laboratory, but real scenarios in industrial use are far more complex: diverse scenes bring varied lighting and backgrounds, and attack types and attack means are also diverse, which poses serious challenges to deploying liveness detection in practice. In common presentation attacks, the printer type and print quality, the display type, resolution, and size of different display devices, and even the presentation angle, distance, screen brightness, and whether the display device carries a protective film all affect liveness detection. The diversity of attack types and the large differences in image distribution within the same attack type lead to low generalization ability of the model in real scenarios. To address the insufficient generalization of previous liveness detection methods, a multi-model transfer liveness detection method is proposed: heterogeneous datasets are constructed, and a multi-model feature transfer and fusion approach over multiple color spaces is used for liveness training and prediction.

Patent CN109840467A uses a generative adversarial network (GAN) to generate a new training set of negative samples (the negative samples being attack images), aiming to solve the shortage of negative samples when training a network with deep learning. However, on a given dataset the GAN can only learn the limited sample probability distribution of that dataset. Therefore, for entirely new attack scenarios and means, the image data generated by such a network has limited representativeness and insufficient generalization ability.

Patent CN110472519A adopts a multi-model fusion method for liveness detection, but its model requires both natural light images and infrared images to be input to the network as training sets. In that method, infrared image acquisition is complicated and requires an infrared camera, which increases hardware cost and is not conducive to rapidly upgrading existing face detection and recognition equipment.

In the existing technology, constrained by the single distribution of training data and inefficient model construction methods, static liveness detection models often suffer from insufficient generalization and generally cannot be used in industrial and real production scenarios.

Summary of the Invention

To address the insufficient generalization of current static liveness detection models, the present invention proposes a face liveness detection method based on multi-model feature transfer.

The present invention is achieved through the following technical solution:

A face liveness detection method based on multi-model feature transfer, the method comprising the steps of:

S1, acquiring a visible light image, the visible light image containing a human face;

S2, recognizing the visible light image with a first RGB model to obtain a first liveness probability;

S3, recognizing the visible light image with a second YUV model to obtain a second liveness probability;

S4, determining a third liveness probability according to the first liveness probability and the second liveness probability; S5, judging whether the subject is a living body according to the third liveness probability.

Further, the first liveness probability comprises a negative sample probability p1 and a positive sample probability p2; the second liveness probability comprises a negative sample probability p3 and a positive sample probability p4.

Further, step S4 also includes: the third liveness probability is the mean of the probabilities of the two models, i.e., the final negative sample probability is α×(p1+p3) and the final positive sample probability is β×(p2+p4), where 0≤α, β≤1 and α+β=1.

Further, step S4 also includes: step-by-step judgment. Let the threshold of the first RGB model be T1 and the threshold of the second YUV model be T2, where 0.5≤T1<1 and 0.5≤T2<1. Specifically, the output of the first RGB model is judged first: if p2<1-T1 or p2>T1, the finally output third liveness probability is the first liveness probability output by the first RGB model; otherwise the output of the second YUV model is judged. In the second YUV model, if p4<1-T2 or p4>T2, the finally output third liveness probability is the second liveness probability output by the second YUV model; otherwise the third liveness probability is the mean of the model probabilities, i.e., the final negative sample probability is α×(p1+p3) and the final positive sample probability is β×(p2+p4), where 0≤α, β≤1 and α+β=1.

Further, step S5 includes: let the liveness judgment threshold be T3, where 0.5≤T3<1. If the final positive sample probability is greater than or equal to T3, the image is judged as a positive sample; if it is less than T3, it is judged as a negative sample.

Further, the method also includes, before step S1, a step S0, the training phase: in the training phase, visible light images from heterogeneous datasets are fused and, after face detection, alignment, and cropping, the first RGB model and the second YUV model are trained separately until the models converge.

Further, in step S0, the training phase specifically includes the following steps:

S101: Construction of heterogeneous datasets. Heterogeneous datasets are collected, and only images or videos under visible light are selected to form the heterogeneous dataset; positive samples are genuine samples in the heterogeneous dataset, and negative samples are attack samples in the heterogeneous dataset;

S102: Data preprocessing, comprising 3 steps:

A: Face detection. For video data, face detection is performed once every n frames; if a face is detected, the next step is performed, otherwise face detection is performed on the next n-th frame. For image data, face detection is performed directly; if a face is detected, the next step is performed, otherwise face detection is performed on the next image;

B: Face alignment, characterized in that a similarity transformation is used to align the face detected in step A;

C: Face cropping, cropping the face aligned in step B to an input size suitable for both the first RGB model and the second YUV model;

S103: Training the first RGB model: the face images preprocessed in S102 are input into the first RGB model for training;

S104: Training the second YUV model: the face images preprocessed in S102 undergo color space conversion, first from the RGB color space to the YUV color space, and are then input into the second YUV model for training;

S105: The two models of S103 and S104 are trained separately; when a model converges and reaches the expected accuracy on the validation set or test set, training of that model is complete and the process proceeds to step S1.

Further, in step S101, the heterogeneous datasets include public or private datasets, where the public datasets include: Replay-Attack, Print-Attack, Yale-Recaptured, CASIA-MFSD, MSU-MFSD, Replay-Mobile, Msspoof, SiW, Oulu-NPU, VAD, NUAA, or CASIA-SURF.

A computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the steps of the face liveness detection method based on multi-model feature transfer are implemented.

A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein, when the processor executes the program, the steps of the face liveness detection method based on multi-model feature transfer are implemented.

The key points of the present invention are:

1. The construction strategy and method for heterogeneous datasets: the heterogeneous dataset construction strategy and the multiple models over multiple color spaces complement each other; without heterogeneous datasets, the features learned by the multiple models over multiple color spaces are limited and homogeneous, and the generalization ability is low;

2. The model construction scheme for multi-model feature transfer over the RGB and YUV color spaces: without multiple models over multiple color spaces, the complex features in heterogeneous datasets are difficult for a single model to learn sufficiently;

3. The model score fusion strategy: the model score fusion strategy is a key factor affecting the final effect of the multi-model scheme; if the fusion strategy does not match the multi-model scheme actually built, the final multi-model result is often worse than that of a single model.

Compared with the prior art, the present invention has at least the following beneficial effects or advantages. First, the scheme has low cost and a small amount of computation, and can quickly deploy and upgrade some existing single-camera face detection and recognition equipment. The scheme adopts static-image liveness detection technology: it needs only a single camera, avoids the cost of additional hardware such as depth cameras, infrared cameras, 3D cameras, and microphones, and involves no large amount of complex 3D modeling computation, so it is low-cost and computationally light. The backbone network used in this scheme can be replaced with lightweight networks such as MobileNet V1, MobileNet V2, or EfficientNet according to actual needs, further speeding up inference. Second, the liveness detection model established by this scheme has strong generalization and high accuracy: because the training set uses heterogeneous datasets and multiple models over multiple color spaces are built, the generalization and accuracy of the model are significantly improved.

Brief Description of the Drawings

The present invention is described in further detail below with reference to the accompanying drawings.

Figure 1 is a flow chart of the training phase of the present invention;

Figure 2 is a flow chart of the prediction phase of the present invention;

Figure 3 is a flow chart of the step-by-step judgment of the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

Aiming at the insufficient generalization of existing deep-learning-based static liveness detection models, a liveness detection method based on multi-model feature transfer is proposed. By constructing and fusing heterogeneous datasets, a multi-model feature transfer and fusion approach over multiple color spaces is used for liveness training, improving the accuracy and generalization ability of the liveness detection model. Specifically, in the training phase, visible light images from open-source or private datasets are fused and, after face detection, alignment, and cropping, an RGB model and a YUV model are trained simultaneously; in the prediction phase, a captured visible light image is input into the RGB model and the YUV model respectively, the results of the two models are obtained, a final score is derived through a threshold fusion scheme, and the liveness detection result is determined from that score. The method has good generalization performance and high accuracy, and is suitable for industrial deployment.

In one embodiment of the present invention, a face liveness detection method based on multi-model feature transfer is provided, comprising the steps of:

S1, acquiring a visible light image, the visible light image containing a human face;

S2, recognizing the visible light image with a first RGB model to obtain a first liveness probability;

S3, recognizing the visible light image with a second YUV model to obtain a second liveness probability;

S4, determining a third liveness probability according to the first liveness probability and the second liveness probability; S5, judging whether the subject is a living body according to the third liveness probability.

Further, the first liveness probability comprises a negative sample probability p1 and a positive sample probability p2; the second liveness probability comprises a negative sample probability p3 and a positive sample probability p4.

Further, step S4 also includes: the third liveness probability is the mean of the probabilities of the two models, i.e., the final negative sample probability is α×(p1+p3) and the final positive sample probability is β×(p2+p4), where 0≤α, β≤1 and α+β=1.

Further, step S4 also includes: step-by-step judgment. Let the threshold of the first RGB model be T1 and the threshold of the second YUV model be T2, where 0.5≤T1<1 and 0.5≤T2<1. Specifically, the output of the first RGB model is judged first: if p2<1-T1 or p2>T1, the finally output third liveness probability is the first liveness probability output by the first RGB model; otherwise the output of the second YUV model is judged. In the second YUV model, if p4<1-T2 or p4>T2, the finally output third liveness probability is the second liveness probability output by the second YUV model; otherwise the third liveness probability is the mean of the model probabilities, i.e., the final negative sample probability is α×(p1+p3) and the final positive sample probability is β×(p2+p4), where 0≤α, β≤1 and α+β=1.

Further, step S5 includes: let the liveness judgment threshold be T3, where 0.5≤T3<1. If the final positive sample probability is greater than or equal to T3, the image is judged as a positive sample; if it is less than T3, it is judged as a negative sample.

Further, the method also includes, before step S1, a step S0, the training phase: in the training phase, visible light images from heterogeneous datasets are fused and, after face detection, alignment, and cropping, the first RGB model and the second YUV model are trained separately until the models converge.

In another embodiment, the method is divided into two steps: the first step is the construction of the liveness detection model, corresponding to the training phase; the second step is the deployment of the liveness detection model, corresponding to the prediction phase. The scheme of the training phase is shown in Figure 1, and the scheme of the prediction phase is shown in Figure 2.

The training phase is characterized by constructing heterogeneous datasets and simultaneously training two or more deep models with different architectures, each deep learning model using a different color space.

S101: Construction of heterogeneous datasets. Public or private datasets are first collected, and only images or videos under visible light are selected to form heterogeneous datasets. The public datasets include, but are not limited to: Replay-Attack, Print-Attack, Yale-Recaptured, CASIA-MFSD, MSU-MFSD, Replay-Mobile, Msspoof, SiW, Oulu-NPU, VAD, NUAA, CASIA-SURF, etc. For specific private scenarios, private datasets are collected and added. The positive samples are genuine samples in the heterogeneous datasets, the negative samples are attack samples in the heterogeneous datasets, and the negative sample types refer to 2D attack types among the negative samples such as printed-photo attacks, tablet attacks, mobile phone attacks, and monitor attacks. The construction strategies for heterogeneous datasets include the following two:

Suppose S datasets are collected, denoted {D1, D2, D3, …, DS}. Let the m-th dataset Dm contain Mm positive samples and Nm negative samples with Om negative sample types, where m ∈ {1, 2, …, S}. Let γ be the balancing factor, where 0 ≤ γ ≤ 1.

Strategy 1: In the m-th dataset, γNm/Om samples are randomly (stratified) drawn for each negative sample type, so γNm negative samples are drawn in total. If γNm > Mm, Mm positive samples are randomly (stratified) drawn; if γNm ≤ Mm, γNm positive samples are randomly (stratified) drawn.

Strategy 2: First, the positive samples of all datasets are grouped into one class, and the negative samples are grouped by negative sample type; suppose there are Oo negative sample types in total and each type has Po negative samples. For example, if printed-photo attacks appear in the datasets {D1, D2, D5}, the printed-photo attacks among the negative samples of these three datasets are grouped into one class. The specific construction method of strategy 2 is: for each negative sample type, γPo samples are randomly (stratified) drawn, so γPoOo negative samples are drawn in total over all the data. If γPoOo exceeds the total number of positive samples pooled from all datasets, all of those positive samples are randomly (stratified) drawn; otherwise γPoOo positive samples are randomly (stratified) drawn.
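The two sampling strategies can be expressed compactly in code. The Python sketch below is only illustrative: the dataset dictionaries, the field names `pos` and `neg_by_type`, and the `stratified_sample` helper are assumptions made for the example and are not prescribed by this method.

```python
import random

def stratified_sample(samples, k):
    # Hypothetical helper: draws k samples uniformly at random; a real
    # implementation would stratify by scene, device, or other metadata.
    return random.sample(samples, min(k, len(samples)))

def strategy_one(dataset, gamma):
    """Per-dataset balancing; dataset = {'pos': [...], 'neg_by_type': {type: [...]}}."""
    O_m = len(dataset['neg_by_type'])
    N_m = sum(len(v) for v in dataset['neg_by_type'].values())
    M_m = len(dataset['pos'])
    negatives = []
    for samples in dataset['neg_by_type'].values():
        negatives += stratified_sample(samples, int(gamma * N_m / O_m))
    n_pos = M_m if gamma * N_m > M_m else int(gamma * N_m)
    return stratified_sample(dataset['pos'], n_pos), negatives

def strategy_two(datasets, gamma):
    """Cross-dataset balancing: pool positives, group negatives by attack type."""
    pos_pool, neg_by_type = [], {}
    for d in datasets:
        pos_pool += d['pos']
        for neg_type, samples in d['neg_by_type'].items():
            neg_by_type.setdefault(neg_type, []).extend(samples)
    negatives = []
    for samples in neg_by_type.values():
        negatives += stratified_sample(samples, int(gamma * len(samples)))
    n_pos = len(pos_pool) if len(negatives) > len(pos_pool) else len(negatives)
    return stratified_sample(pos_pool, n_pos), negatives
```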

S102: Data preprocessing. Data preprocessing consists of 3 steps. The first step is face detection: for video data, face detection is performed once every n frames (n greater than 1); if a face is detected, the second step is performed, otherwise face detection is performed on the next n-th frame; for image data, face detection is performed directly, and if a face is detected, the second step is performed, otherwise face detection is performed on the next image. The second step is face alignment, characterized by using a similarity transformation to align the face detected in the first step. The third step is face cropping: the face aligned in the second step is cropped to an input size suitable for both the RGB model and the YUV model.
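As a concrete illustration of the three preprocessing steps, the sketch below uses OpenCV; the `detector` callback, the 5-point reference template, and the 112×112 crop size are illustrative assumptions rather than requirements of the method.

```python
import cv2
import numpy as np

def align_and_crop(image, landmarks, out_size=(112, 112)):
    # Reference 5-point template (eyes, nose, mouth corners) for a 112x112 crop;
    # the coordinates follow a common convention and are not mandated here.
    ref = np.float32([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                      [41.5, 92.4], [70.7, 92.2]])
    # Similarity transform (rotation + uniform scale + translation), as in step B.
    M, _ = cv2.estimateAffinePartial2D(np.float32(landmarks), ref)
    return cv2.warpAffine(image, M, out_size)          # step C: fixed-size crop

def preprocess_video(video_path, detector, n=5):
    faces = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % n == 0:                               # step A: detect every n frames
            det = detector(frame)                      # hypothetical: (box, landmarks) or None
            if det is not None:
                _, landmarks = det
                faces.append(align_and_crop(frame, landmarks))  # steps B and C
        idx += 1
    cap.release()
    return faces
```

cv2.estimateAffinePartial2D restricts the fit to rotation, uniform scale, and translation, which matches the similarity transformation called for in the alignment step.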

S103: Training the RGB model. The images preprocessed in S102 are input into the RGB model for training. The RGB model refers to a deep learning model whose input images are in the RGB color space; the backbone of the RGB model can be, but is not limited to, convolutional neural networks and their variants such as VGG, GoogLeNet, ResNet, DenseNet, MobileNet V1, MobileNet V2, MobileFaceNet, EfficientNet, and ShuffleNet. In particular, the RGB model here is pre-trained on the ImageNet dataset.

S104: Training the YUV model. The images preprocessed in S102 undergo color space conversion: the images are first converted from the RGB color space to the YUV color space and then input into the YUV model for training. The YUV model refers to a deep learning model whose input images are in a YUV color space; the YUV color space includes, but is not limited to, YCrCb, YCbCr, YPbPr, YDbDr, and the like. The backbone of the YUV model can be, but is not limited to, convolutional neural networks and their variants such as VGG, GoogLeNet, ResNet, DenseNet, MobileNet V1, MobileNet V2, MobileFaceNet, EfficientNet, and ShuffleNet. In particular, the YUV color space here refers to YCrCb, and the YUV model is pre-trained on the ImageNet dataset.
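For the color space conversion in S104 (and later in S205), a minimal sketch with OpenCV follows; the [0, 1] scaling and the CHW layout are assumptions about how the downstream network expects its input.

```python
import cv2
import numpy as np

def to_ycrcb_tensor(rgb_face):
    """rgb_face: HxWx3 uint8 array in RGB order, as produced by preprocessing."""
    ycrcb = cv2.cvtColor(rgb_face, cv2.COLOR_RGB2YCrCb)   # "YUV" here means YCrCb
    x = ycrcb.astype(np.float32) / 255.0                  # assumed [0, 1] scaling
    return np.transpose(x, (2, 0, 1))                     # CHW layout for a CNN
```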

S105: Model training complete. The two models of S103 and S104 are trained separately; when a model converges and reaches the expected accuracy on the validation set or test set, training of that model is complete and the deployment of the following prediction stage can proceed.
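A training loop for either branch might look like the following PyTorch sketch. The framework choice, the Adam optimizer, the two-class cross-entropy loss, and the `target_acc` stopping criterion are assumptions for illustration; the method itself only requires training each model until it converges and reaches the expected accuracy on the validation or test set.

```python
import torch
import torch.nn as nn

def train_branch(model, train_loader, val_loader, epochs=30, lr=1e-3, target_acc=0.99):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # two classes: attack (0) / live (1)
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        # S105: stop once the expected validation accuracy is reached
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in val_loader:
                pred = model(x.to(device)).argmax(dim=1).cpu()
                correct += (pred == y).sum().item()
                total += y.numel()
        if total and correct / total >= target_acc:
            break
    return model
```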

In the prediction phase, the input visible light image is preprocessed and then fed both to the RGB model and, after color space conversion, to the YUV model; each model produces a score, and a score fusion scheme yields the final score, from which the final liveness detection result is judged. The specific steps of the inference stage are:

S201: Input a visible light image. An RGB image containing a human face is captured by a visible light camera; this image is the input of S202.

S202: Image preprocessing. The RGB image captured in S201 is preprocessed; the preprocessing method is the same as in S102.

S203: RGB model forward computation. The image preprocessed in S202 is sent to the trained RGB model for forward computation.

S204: Obtain the RGB model score. The network output after the forward computation of S203 is obtained; let the output negative sample probability be p1 and the positive sample probability be p2, denoted (p1, p2), where 0≤p1≤1, 0≤p2≤1, and p1+p2=1.

S205: Color space conversion. The image preprocessed in S202 is converted to the YUV color space. In particular, the YUV color space here refers to YCrCb.

S206: YUV model forward computation. The image after the color space conversion of S205 is sent to the trained YUV model for forward computation.

S207: Obtain the YUV model score. The network output after the forward computation of S206 is obtained; let the output negative sample probability be p3 and the positive sample probability be p4, denoted (p3, p4), where 0≤p3≤1, 0≤p4≤1, and p3+p4=1.
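Steps S203 to S207 reduce to two forward passes followed by a softmax over the two-class output. A minimal sketch, assuming PyTorch models that each return two logits (negative/attack first, positive/live second):

```python
import torch
import torch.nn.functional as F

def model_scores(rgb_model, yuv_model, rgb_tensor, ycrcb_tensor):
    """Returns ((p1, p2), (p3, p4)) as defined in S204 and S207."""
    with torch.no_grad():
        p1, p2 = F.softmax(rgb_model(rgb_tensor.unsqueeze(0)), dim=1)[0].tolist()
        p3, p4 = F.softmax(yuv_model(ycrcb_tensor.unsqueeze(0)), dim=1)[0].tolist()
    return (p1, p2), (p3, p4)
```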

S208: Model score fusion. The model score fusion strategies include the following two:

Strategy 1: The final probability output is the mean of the probabilities of the models. Specifically, the final negative sample probability is α×(p1+p3) and the positive sample probability is β×(p2+p4), denoted (α×(p1+p3), β×(p2+p4)), where 0≤α, β≤1 and α+β=1. Typically, α=0.5 and β=0.5.

Strategy 2: Step-by-step judgment. The process is shown in Figure 3. Let the RGB model threshold be T1 (0.5≤T1<1) and the YUV model threshold be T2 (0.5≤T2<1). The specific process is: first judge the output of the RGB model; if p2<1-T1 or p2>T1, the final output is the RGB model output; otherwise judge the output of the YUV model. In the YUV model, if p4<1-T2 or p4>T2, the final output is the YUV model output; otherwise the final model output equals the probability output of strategy 1 above, i.e., the final output negative sample probability is α×(p1+p3) and the positive sample probability is β×(p2+p4), denoted (α×(p1+p3), β×(p2+p4)), where 0≤α, β≤1 and α+β=1. Typically, α=0.5 and β=0.5.

S209: Liveness detection result judgment. Let the liveness judgment threshold be T3 (0.5≤T3<1). If the positive sample probability output by S208 is greater than or equal to T3, the image is judged to be a live image (positive sample); if it is less than T3, it is judged to be an attack image (negative sample). For the above model thresholds T1, T2, and T3, typically T1=T2=T3.
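The two fusion strategies of S208 and the decision rule of S209 can be written as a short sketch. The default threshold value 0.9 below is only illustrative, since the method only constrains 0.5 ≤ T1, T2, T3 < 1; the function names are likewise assumptions.

```python
def fuse_mean(p1, p2, p3, p4, alpha=0.5, beta=0.5):
    """Strategy 1: per-class mean of the two model outputs."""
    return alpha * (p1 + p3), beta * (p2 + p4)

def fuse_cascade(p1, p2, p3, p4, T1=0.9, T2=0.9, alpha=0.5, beta=0.5):
    """Strategy 2: step-by-step judgment (Figure 3)."""
    if p2 < 1 - T1 or p2 > T1:            # RGB model is confident
        return p1, p2
    if p4 < 1 - T2 or p4 > T2:            # otherwise fall back to the YUV model
        return p3, p4
    return fuse_mean(p1, p2, p3, p4, alpha, beta)   # otherwise average the two

def is_live(p1, p2, p3, p4, T3=0.9, cascade=True):
    """S209: threshold the fused positive-sample probability (T1 = T2 = T3 here)."""
    _, pos = fuse_cascade(p1, p2, p3, p4, T3, T3) if cascade else fuse_mean(p1, p2, p3, p4)
    return pos >= T3
```

With cascade=True the sketch follows Figure 3: a confident RGB prediction is accepted directly, an uncertain one defers to the YUV model, and only when both are uncertain are the two outputs averaged.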

The present invention also provides a computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the steps of the face liveness detection method based on multi-model feature transfer are implemented.

The present invention also provides a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein, when the processor executes the program, the steps of the face liveness detection method based on multi-model feature transfer are implemented.

The specific embodiments described above further describe the purpose, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modifications, equivalent replacements, improvements, and the like made without departing from the spirit and scope of the present invention also fall within the protection scope of the present invention.

Claims (10)

1. A human face living body detection method of multi-model feature migration is characterized by comprising the following steps:
s1, acquiring a visible light image, wherein the visible light comprises a human face;
s2, identifying the visible light image by using a first RGB model to obtain a first living body probability;
s3, identifying the visible light image by using a second YUV model to obtain a second living body probability;
s4, determining a third living body probability according to the first living body probability and the second living body probability;
and S5, judging whether the living body is the living body according to the third living body probability.
2. The method of claim 1, wherein the first live probability comprises: negative sample probability p1 and positive sample probability p2; the second live body probability includes: negative sample probability p3 and positive sample probability p4.
3. The method according to claim 2, wherein step S4 further comprises: the third live body probability is the mean of the probabilities of the respective models, i.e., the final negative sample probability is α × (p1 + p3) and the final positive sample probability is β × (p2 + p4), where 0 ≤ α, β ≤ 1 and α + β = 1.
4. The method according to claim 2, wherein step S4 further comprises: step-by-step judgment, setting a first RGB model threshold as T1 and a second YUV model threshold as T2, wherein 0.5 ≤ T1 < 1 and 0.5 ≤ T2 < 1; specifically, the output of the first RGB model is judged first, and if p2 < 1-T1 or p2 > T1, the finally output third live body probability is the first live body probability output by the first RGB model, otherwise the output of the second YUV model is judged; if p4 < 1-T2 or p4 > T2 in the second YUV model, the finally output third live body probability is the second live body probability output by the second YUV model, otherwise the third live body probability is the mean of the model probabilities, namely the final negative sample probability is α × (p1 + p3) and the final positive sample probability is β × (p2 + p4), where 0 ≤ α, β ≤ 1 and α + β = 1.
5. The method according to claim 1, wherein step S5 includes: setting the living body judgment threshold value as T3, wherein T3 is more than or equal to 0.5 and less than 1, and if the probability of the final positive sample is more than or equal to T3, judging the image as the positive sample; if the value is less than T3, the result is judged to be a negative sample.
6. The method of claim 1, further comprising a step S0, a training phase, before the step S1, wherein in the training phase visible light images from the heterogeneous data set are fused and, after face detection, alignment and cropping, the first RGB model and the second YUV model are respectively trained until the models converge.
7. The method of claim 6, wherein in step S0, the training phase comprises the steps of:
s101: constructing a heterogeneous data set, collecting the heterogeneous data set, and only selecting images or videos under visible light to form the heterogeneous data set; the positive sample is a real sample in the heterogeneous data set, and the negative sample is an attack sample in the heterogeneous data set;
s102: data preprocessing, comprising 3 steps:
a: face detection, namely performing face detection once every n frames for video data, if a face is detected, performing the next step, and if not, performing the face detection of the next n frames; directly carrying out face detection on image data, if a face is detected, carrying out the next step, and otherwise, carrying out face detection on the next image;
b: face alignment, wherein the face detected in the step A is aligned by adopting a similarity transformation;
c: face cropping, cutting the face aligned in the step B to an input size suitable for both the first RGB model and the second YUV model;
s103: training a first RGB model, and inputting the preprocessed face image of S102 into the first RGB model for training;
s104: training a second YUV model, converting the preprocessed face image of S102 from an RGB color space to a YUV color space through color space conversion, and inputting the face image into the second YUV model for training;
s105: and respectively training the two models in the S103 and the S104, and when the models converge and reach the expected accuracy on the verification set or the test set, representing that the model training is finished, and entering the step S1.
8. The method according to claim 7, wherein in step S101, the heterogeneous data set comprises a public or private data set, wherein the public data set comprises: Replay-Attack, Print-Attack, Yale-Recaptured, CASIA-MFSD, MSU-MFSD, Replay-Mobile, Msspoof, SiW, Oulu-NPU, VAD, NUAA, or CASIA-SURF.
9. A computer-readable storage medium, having stored thereon a computer program, wherein the program, when executed by a processor, performs the steps of the method for living human face detection with multi-model feature migration according to any one of claims 1-8.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the method for living human face detection with multi-model feature migration according to any one of claims 1-8.
CN202010728371.7A 2020-07-23 2020-07-23 A face detection method based on multi-model feature transfer Pending CN111881815A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010728371.7A CN111881815A (en) 2020-07-23 2020-07-23 A face detection method based on multi-model feature transfer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010728371.7A CN111881815A (en) 2020-07-23 2020-07-23 A face detection method based on multi-model feature transfer

Publications (1)

Publication Number Publication Date
CN111881815A true CN111881815A (en) 2020-11-03

Family

ID=73201464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010728371.7A Pending CN111881815A (en) 2020-07-23 2020-07-23 A face detection method based on multi-model feature transfer

Country Status (1)

Country Link
CN (1) CN111881815A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6295369B1 (en) * 1998-11-06 2001-09-25 Shapiro Consulting, Inc. Multi-dimensional color image mapping apparatus and method
US20140270409A1 (en) * 2013-03-15 2014-09-18 Eyelock, Inc. Efficient prevention of fraud
US20180025217A1 (en) * 2016-07-22 2018-01-25 Nec Laboratories America, Inc. Liveness detection for antispoof face recognition
CN110008783A (en) * 2018-01-04 2019-07-12 杭州海康威视数字技术股份有限公司 Human face in-vivo detection method, device and electronic equipment based on neural network model
US20200005061A1 (en) * 2018-06-28 2020-01-02 Beijing Kuangshi Technology Co., Ltd. Living body detection method and system, computer-readable storage medium
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109598242A (en) * 2018-12-06 2019-04-09 中科视拓(北京)科技有限公司 A kind of novel biopsy method
CN109840467A (en) * 2018-12-13 2019-06-04 北京飞搜科技有限公司 A kind of in-vivo detection method and system
CN111368731A (en) * 2020-03-04 2020-07-03 上海东普信息科技有限公司 Silent in-vivo detection method, silent in-vivo detection device, silent in-vivo detection equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
汪亚航; 宋晓宁; 吴小俊: "Two-stream face liveness detection network combining mixed pooling" (结合混合池化的双流人脸活体检测网络), 中国图象图形学报 (Journal of Image and Graphics), no. 07 *
牛德姣, 詹永照, 宋顺林: "Face detection and tracking in real-time video images" (实时视频图像中的人脸检测与跟踪), 计算机应用 (Computer Applications), no. 06 *

Similar Documents

Publication Publication Date Title
JP2022133378A (en) Face biometric detection method, device, electronic device, and storage medium
US20210150268A1 (en) Method of using deep discriminate network model for person re-identification in image or video
CN110379020B (en) Laser point cloud coloring method and device based on generation countermeasure network
CN112836625A (en) Face living body detection method and device and electronic equipment
CN110929569A (en) Face recognition method, device, equipment and storage medium
CN106372581A (en) Method for constructing and training human face identification feature extraction network
CN106778496A (en) Biopsy method and device
CN112215043A (en) Human face living body detection method
CN114038045B (en) Cross-modal face recognition model construction method and device and electronic equipment
WO2024099026A1 (en) Image processing method and apparatus, device, storage medium and program product
CN115861595A (en) A Multi-Scale Domain Adaptive Heterogeneous Image Matching Method Based on Deep Learning
CN115497176A (en) Living body detection model training method, living body detection method and system
CN113450297A (en) Fusion model construction method and system for infrared image and visible light image
CN110807409A (en) Crowd density detection model training method and crowd density detection method
CN116524606A (en) Face living body recognition method, device, electronic equipment and storage medium
CN116452914A (en) Self-adaptive guide fusion network for RGB-D significant target detection
CN113221824A (en) Human body posture recognition method based on individual model generation
CN116343310A (en) Thermal imaging personnel identification method, terminal equipment and storage medium
CN111144407A (en) Target detection method, system, device and readable storage medium
CN115601456A (en) Truck picture editing method based on neural network, storage medium and equipment
CN114333019A (en) Training method of living body detection model, living body detection method and related device
CN113947804A (en) Target fixation identification method and system based on sight line estimation
CN114764822A (en) Image processing method and device and electronic equipment
CN111881815A (en) A face detection method based on multi-model feature transfer
CN113221870B (en) OCR (optical character recognition) method, device, storage medium and equipment for mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201103)