CN116189316A - Liveness detection method and system - Google Patents
- Publication number
- CN116189316A (application number CN202211731962.5A)
- Authority
- CN
- China
- Prior art keywords
- living body
- target
- living
- detection
- body detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/45 — Recognition of biometric, human-related or animal-related patterns in image or video data; spoof detection, e.g. liveness detection; detection of the body part being alive
- G06F21/32 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity; user authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G06F21/46 — Security arrangements; structures or tools for the administration of authentication by designing passwords or checking the strength of passwords
- G06V10/42 — Extraction of image or video features; global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/44 — Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V10/75 — Image or video pattern matching; proximity measures in feature spaces; organisation of the matching processes, e.g. simultaneous or sequential comparisons, coarse-fine or multi-scale approaches, context analysis, selection of dictionaries
- G06V10/764 — Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/778 — Processing image or video features in feature spaces; active pattern-learning, e.g. online learning of image or video features
- G06V10/82 — Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
This specification provides a liveness detection method and system. The method includes: acquiring a biometric image containing a biometric feature of a target user; performing preliminary liveness detection on the target user based on the biometric image to obtain a preliminary liveness detection result for the target user; and then instructing the target user to input identity verification information of a target length and performing a further liveness detection on the target user based on the information actually input by the target user, to obtain a target liveness detection result for the target user, where the target length is related to the preliminary liveness detection result. This solution can improve the accuracy of liveness detection results.
Description
Technical Field
This specification relates to the technical field of artificial intelligence, and in particular to a liveness detection method and system.
Background
At present, biometric recognition (for example, face recognition) is widely used in many scenarios, such as payment based on face recognition and access control systems based on face recognition. Biometric recognition brings convenience to people's lives, but it also introduces security risks. Among them, liveness attacks are one of the security risks faced in the field of biometric recognition; for example, an attacker may carry out a liveness attack using a mobile phone screen, a photo, printed paper, or a high-precision mask.
For this reason, after a biometric recognition system collects a biometric image, it needs to first detect, based on the biometric image, whether the target user is a living body, and only execute the subsequent biometric recognition process after the target user is determined to be a living body. However, in practical applications it has been found that when liveness detection is performed based on biometric images alone, the accuracy of the detection results is not high, resulting in low security in the field of biometric recognition.
Summary of the Invention
This specification provides a liveness detection method and system that can improve the accuracy of liveness detection results, thereby improving security in the field of biometric recognition.
In a first aspect, this specification provides a liveness detection method, including: acquiring a biometric image, where the biometric image includes a biometric feature of a target user; performing preliminary liveness detection on the target user based on the biometric image to obtain a preliminary liveness detection result for the target user; and instructing the target user to input identity verification information of a target length, and performing a further liveness detection on the target user based on the information actually input by the target user, to obtain a target liveness detection result for the target user, where the target length is related to the preliminary liveness detection result.
In some embodiments, the preliminary liveness detection result includes a first probability that the target user is a living body, and the target length is inversely correlated with the first probability.
In some embodiments, before instructing the target user to input identity verification information of the target length, the method further includes: determining a target identity verification type; and determining the target length based on the target identity verification type and the first probability.
In some embodiments, determining the target length based on the target identity verification type and the first probability includes: determining a weight coefficient based on the type of the current application scenario, where the weight coefficient is inversely correlated with the degree to which the current application scenario requires biometric recognition security; weighting the first probability with the weight coefficient to obtain a scenario-adapted first probability; and determining the target length based on the target identity verification type and the scenario-adapted first probability.
In some embodiments, determining the target length based on the target identity verification type and the first probability includes: inputting the target identity verification type and the first probability into a pre-trained length mapping model to obtain the target length, where the length mapping model is trained with multiple groups of training samples, and each group of training samples includes a sample identity verification type, a sample probability, and a sample length.
In some embodiments, determining the target identity verification type includes: randomly determining the target identity verification type from multiple candidate identity verification types; or determining the type of the current application scenario and selecting, from the multiple candidate identity verification types, an identity verification type that matches the type of the current application scenario as the target identity verification type; or taking an identity verification type specified by the target user from the multiple candidate identity verification types as the target identity verification type.
In some embodiments, the multiple candidate identity verification types include at least two of the following: identity verification by user identifier; identity verification by user terminal identifier; identity verification by registration password; and identity verification by registered account identifier.
In some embodiments, performing the further liveness detection on the target user based on the information actually input by the target user to obtain the target liveness detection result for the target user includes: determining identity information of a first user stored in a database as target identity information, where the first user is the logged-in user who triggered the liveness detection, or the first user is the user obtained by performing biometric recognition on the biometric image; and matching the actually input information against the target identity information and, based on the matching result, performing either a first operation or a second operation, where the first operation includes: if the matching result is a successful match, determining that the target user is a living body; and the second operation includes: if the matching result is a failed match, determining that the target user is not a living body.
In some embodiments, the preliminary liveness detection result includes a first probability that the target user is a living body, and performing the preliminary liveness detection on the target user based on the biometric image to obtain the preliminary liveness detection result for the target user includes: inputting the biometric image into a trained liveness detection model, performing liveness detection processing on the biometric image with the liveness detection model to obtain a second probability that the target user is a living body and a confidence corresponding to the second probability; and determining the first probability based on the second probability and the confidence corresponding to the second probability.
In some embodiments, determining the first probability based on the second probability and the confidence corresponding to the second probability includes: determining the product of the second probability and the confidence corresponding to the second probability as the first probability.
In some embodiments, the liveness detection model includes a feature extraction network, a local liveness detection network, a global liveness detection network, and a confidence detection network; and performing liveness detection processing on the biometric image with the liveness detection model to obtain the second probability that the target user is a living body and the confidence corresponding to the second probability includes: performing feature extraction on the biometric image with the feature extraction network to obtain a feature map; performing liveness detection on multiple local regions of the feature map with the local liveness detection network to obtain liveness detection results for the multiple local regions; performing liveness detection on the feature map and the liveness detection results of the multiple local regions with the global liveness detection network to obtain the second probability that the target user is a living body; and determining, with the confidence detection network, the confidence corresponding to the second probability based on the liveness detection results of the multiple local regions.
In some embodiments, the training process of the liveness detection model includes a first training stage and a second training stage, where the first training stage is configured to train the feature extraction network, the local liveness detection network, and the global liveness detection network, and the second training stage is configured to train the confidence detection network.
In some embodiments, the first training stage includes: acquiring a sample biometric image and annotation information corresponding to the sample biometric image, where the annotation information includes a global liveness label and liveness labels for multiple local regions; inputting the sample biometric image into the feature extraction network to obtain a sample feature map, inputting the sample feature map into the local liveness detection network to obtain first liveness detection results for the multiple local regions, and inputting the sample feature map and the first liveness detection results of the multiple local regions into the global liveness detection network to obtain a first global liveness detection result; and determining a first target loss based on the first global liveness detection result, the first liveness detection results of the multiple local regions, the global liveness label, and the liveness labels of the multiple local regions, and training the feature extraction network, the local liveness detection network, and the global liveness detection network with the objective of minimizing the first target loss.
In some embodiments, the first training stage further includes: performing perturbation processing on the sample biometric image to obtain a perturbed biometric image, inputting the perturbed biometric image into the feature extraction network to obtain a perturbed feature map, and inputting the perturbed feature map into the local liveness detection network to obtain second liveness detection results for the multiple local regions; and determining the first target loss based on the first global liveness detection result, the first liveness detection results of the multiple local regions, the global liveness label, and the liveness labels of the multiple local regions includes: determining the first target loss based on the first global liveness detection result, the first liveness detection results of the multiple local regions, the second liveness detection results of the multiple local regions, the global liveness label, and the liveness labels of the multiple local regions.
In some embodiments, determining the first target loss based on the first global liveness detection result, the first liveness detection results of the multiple local regions, the second liveness detection results of the multiple local regions, the global liveness label, and the liveness labels of the multiple local regions includes: determining a first loss based on the difference between the first global liveness detection result and the global liveness label; determining a second loss based on the difference between the first liveness detection results of the multiple local regions and the liveness labels of the multiple local regions; determining a third loss based on the difference between the first liveness detection results of the multiple local regions and the second liveness detection results of the multiple local regions; and determining the first target loss based on the first loss, the second loss, and the third loss.
In some embodiments, the second training stage includes: acquiring a sample biometric image and annotation information corresponding to the sample biometric image, where the annotation information includes a global liveness label and liveness labels for multiple local regions; inputting the sample biometric image into the feature extraction network to obtain a sample feature map, inputting the sample feature map into the local liveness detection network to obtain first liveness detection results for the multiple local regions, inputting the first liveness detection results of the multiple local regions into the confidence detection network to obtain detection confidences for the multiple local regions, updating the corresponding local regions of the sample feature map based on the detection confidences of the multiple local regions to obtain an updated feature map, inputting the updated feature map into the local liveness detection network to obtain third liveness detection results for the multiple local regions, and inputting the updated feature map and the third liveness detection results of the multiple local regions into the global liveness detection network to obtain a second global liveness detection result; and determining a second target loss based on the second global liveness detection result, the third liveness detection results of the multiple local regions, the global liveness label, and the liveness labels of the multiple local regions, and training the confidence detection network with the training objective of minimizing the second target loss.
In some embodiments, performing the further liveness detection on the target user based on the information actually input by the target user to obtain the target liveness detection result for the target user includes: performing the further liveness detection on the target user based on the actually input information to obtain a further liveness detection result; and determining the target liveness detection result based on the preliminary liveness detection result and the further liveness detection result.
In some embodiments, the biometric feature includes at least one of the following: a human face, an iris, a fingerprint, and a palm print.
In a second aspect, this specification further provides a liveness detection system, including: at least one storage medium storing at least one instruction set for performing liveness detection, and at least one processor communicatively connected to the at least one storage medium, where, when the liveness detection system runs, the at least one processor reads the at least one instruction set and, according to the instructions of the at least one instruction set, executes the liveness detection method of any one of the first aspect.
As can be seen from the above technical solutions, the liveness detection method and system provided in this specification acquire a biometric image containing a biometric feature of a target user, perform preliminary liveness detection on the target user based on the biometric image to obtain a preliminary liveness detection result for the target user, and then instruct the target user to input identity verification information of a target length and perform a further liveness detection on the target user based on the information actually input by the target user, to obtain a target liveness detection result for the target user, where the target length is related to the preliminary liveness detection result. On the basis of performing preliminary liveness detection based on the biometric image, this solution further introduces an identity verification process, that is, instructing the target user to input identity verification information of the target length and performing a further liveness detection on the target user based on the information actually input by the target user. Through these two stages of liveness detection, the accuracy of the liveness detection results can be improved as a whole. Furthermore, since the target length corresponding to the identity verification information is related to the preliminary liveness detection result, the target length is variable to a certain extent; compared with identity verification information of a fixed length, this can further improve the accuracy of the liveness detection results.
Other functions of the liveness detection method and system provided in this specification are partially listed in the following description. The inventive aspects of the liveness detection method and system provided in this specification can be fully explained by practicing or using the methods, devices, and combinations described in the detailed examples below.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this specification more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this specification; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of an application scenario of a liveness detection system provided according to an embodiment of this specification;
Fig. 2 is a hardware structure diagram of a computing device provided according to an embodiment of this specification;
Fig. 3 is a flowchart of a liveness detection method provided according to an embodiment of this specification;
Fig. 4 is a schematic structural diagram of a liveness detection model provided according to an embodiment of this specification;
Fig. 5 is a schematic diagram of the training process of the first training stage of the liveness detection model provided according to an embodiment of this specification;
Fig. 6 is a schematic diagram of the training process of the second training stage of the liveness detection model provided according to an embodiment of this specification; and
Fig. 7 is an interaction diagram of a liveness detection process provided according to an embodiment of this specification.
Detailed Description
The following description provides specific application scenarios and requirements of this specification, with the purpose of enabling those skilled in the art to make and use the contents of this specification. Various local modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of this specification. Therefore, this specification is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, as used herein, the singular forms "a", "an", and "the" may also include the plural forms unless the context clearly indicates otherwise. When used in this specification, the terms "comprise", "include", and/or "contain" mean that the associated integers, steps, operations, elements, and/or components are present, but do not exclude the presence of one or more other features, integers, steps, operations, elements, components, and/or groups, or the addition of other features, integers, steps, operations, elements, components, and/or groups to the system/method.
In view of the following description, these and other features of this specification, as well as the operation and function of the related elements of the structure and the economy of combination and manufacture of the components, can be significantly improved. Reference is made to the accompanying drawings, all of which form a part of this specification. However, it should be clearly understood that the drawings are for the purposes of illustration and description only and are not intended to limit the scope of this specification. It should also be understood that the drawings are not drawn to scale.
The flowcharts used in this specification illustrate operations implemented by systems according to some embodiments in this specification. It should be clearly understood that the operations of the flowcharts need not be implemented in the order shown; operations may instead be implemented in reverse order or simultaneously. In addition, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
For convenience of description, the terms appearing in the following description are explained as follows:
Liveness attack: a means of attacking a face recognition system or another biometric recognition system; for example, a liveness attack can be carried out using a mobile phone screen, a printed photo, a high-precision mask, or the like.
Liveness detection: algorithms and techniques used in face recognition or other biometric recognition systems to guard against liveness attacks; specifically, they can be used to identify whether the target user is a living body.
Before describing the specific embodiments of this specification, the application scenarios of this specification are introduced as follows:
The liveness detection method provided in this specification can be applied to scenarios in which identity verification is performed based on biometric features. The liveness detection method provided in this specification can be regarded as a user classification method, which classifies a target user as a living body or a non-living body. For example, after a face recognition system (for example, face payment, face access control, or face attendance) collects a face image, the liveness detection method provided in this specification can be used to perform liveness detection on the target user based on the face image to determine whether the target user is a living body. If the target user is a living body, the subsequent face recognition process continues to be executed; if not, the subsequent face recognition process does not need to be executed, thereby improving the security of face recognition.
It should be noted that the above face recognition scenario is only one of the many usage scenarios provided in this specification. The liveness detection method provided in this specification can be applied not only to face recognition scenarios but also to all scenarios in which identity verification is performed based on other biometric features, such as fingerprints, palm prints, and irises. Those skilled in the art should understand that applying the liveness detection method described in this specification to other usage scenarios also falls within the protection scope of this specification.
Fig. 1 shows a schematic diagram of an application scenario of a liveness detection system provided according to an embodiment of this specification. The liveness detection system 001 (hereinafter referred to as system 001) can be applied to liveness detection in any scenario, for example, a payment scenario based on face recognition, an access control scenario based on face recognition, or an attendance scenario based on face recognition. As shown in Fig. 1, system 001 may include a target user 100, a client 200, a server 300, and a network 400.
The target user 100 may be a user who triggers liveness detection. The target user 100 may perform a preset operation on the client 200 to trigger liveness detection.
The client 200 may be a device that responds to the liveness detection operation of the target user 100. In some embodiments, the liveness detection method described in this specification may be executed on the client 200. In this case, the client 200 may store data or instructions for executing the liveness detection method described in this specification, and may execute or be used to execute the data or instructions. In some embodiments, the client 200 may include a hardware device having a data information processing function and the programs necessary to drive the hardware device to work. As shown in Fig. 1, the client 200 may be communicatively connected to the server 300. In some embodiments, the server 300 may be communicatively connected to multiple clients 200. In some embodiments, the client 200 may interact with the server 300 over the network 400 to receive or send messages, such as receiving or sending biometric images. In some embodiments, the client 200 may include a mobile device, a tablet computer, a laptop computer, a built-in device of a motor vehicle, or the like, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart TV, a desktop computer, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, a navigation device, or the like, or any combination thereof. In some embodiments, the virtual reality device or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device or the augmented reality device may include Google Glass, a head-mounted display, VR, and the like. In some embodiments, the built-in device in the motor vehicle may include an on-board computer, an on-board TV, and the like. In some embodiments, the client 200 may include an image capture device for capturing video or image information of the target user, thereby acquiring a biometric image. In some embodiments, the image capture device may be a two-dimensional image capture device (such as an RGB camera), or a combination of a two-dimensional image capture device (such as an RGB camera) and a depth image capture device (such as a 3D structured-light camera or a laser detector). In some embodiments, the client 200 may be a device with positioning technology for locating the position of the client 200.
In some embodiments, one or more applications (APPs) may be installed on the client 200. The APP can provide the target user 100 with the ability and an interface to interact with the outside world over the network 400. The APP includes but is not limited to: web browser APPs, search APPs, chat APPs, shopping APPs, video APPs, financial management APPs, instant messaging tools, email clients, social platform software, and the like. In some embodiments, a target APP may be installed on the client 200. The target APP can capture video or image information of the target user for the client 200, thereby obtaining a biometric image. In some embodiments, the target user 100 may also trigger a liveness detection request through the target APP. The target APP may, in response to the liveness detection request, execute the liveness detection method described in this specification. The liveness detection method will be described in detail later.
The server 300 may be a server that provides various services, for example, a backend server that provides support for the biometric images captured on the client 200. In some embodiments, the liveness detection method described in this specification may be executed on the server 300. In this case, the server 300 may store data or instructions for executing the liveness detection method described in this specification, and may execute or be used to execute the data or instructions. In some embodiments, the server 300 may include a hardware device having a data information processing function and the programs necessary to drive the hardware device to work. The server 300 may be communicatively connected to multiple clients 200 and receive data sent by the clients 200.
The network 400 is a medium for providing a communication connection between the client 200 and the server 300. The network 400 can facilitate the exchange of information or data. As shown in Fig. 1, the client 200 and the server 300 may be connected to the network 400 and transmit information or data to each other over the network 400. In some embodiments, the network 400 may be any type of wired or wireless network, or a combination thereof. For example, the network 400 may include a cable network, a wired network, an optical fiber network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near field communication (NFC) network, or a similar network. In some embodiments, the network 400 may include one or more network access points. For example, the network 400 may include wired or wireless network access points, such as base stations or Internet exchange points, through which one or more components of the client 200 and the server 300 may connect to the network 400 to exchange data or information.
It should be understood that the numbers of clients 200, servers 300, and networks 400 in Fig. 1 are merely illustrative. There may be any number of clients 200, servers 300, and networks 400 according to implementation requirements.
It should be noted that the liveness detection method described in this specification may be executed entirely on the client 200, entirely on the server 300, or partly on the client 200 and partly on the server 300.
Fig. 2 shows a hardware structure diagram of a computing device 600 provided according to an embodiment of this specification. The computing device 600 may execute the liveness detection method described in this specification. The liveness detection method is described elsewhere in this specification. When the liveness detection method is executed on the client 200, the computing device 600 may be the client 200. When the liveness detection method is executed on the server 300, the computing device 600 may be the server 300. When the liveness detection method is executed partly on the client 200 and partly on the server 300, the computing device 600 may include the client 200 and the server 300.
As shown in Fig. 2, the computing device 600 may include at least one storage medium 630 and at least one processor 620. In some embodiments, the computing device 600 may further include a communication port 650 and an internal communication bus 610. The computing device 600 may also include an I/O component 660.
The internal communication bus 610 may connect different system components, including the storage medium 630, the processor 620, and the communication port 650.
The I/O component 660 supports input/output between the computing device 600 and other components.
The communication port 650 is used for data communication between the computing device 600 and the outside world; for example, the communication port 650 may be used for data communication between the computing device 600 and the network 400. The communication port 650 may be a wired communication port or a wireless communication port.
The storage medium 630 may include a data storage device. The data storage device may be a non-transitory storage medium or a transitory storage medium. For example, the data storage device may include one or more of a magnetic disk 632, a read-only memory (ROM) 634, or a random access memory (RAM) 636. The storage medium 630 further includes at least one instruction set stored in the data storage device. The instructions stored in the instruction set are computer program code, and the computer program code may include programs, routines, objects, components, data structures, procedures, modules, and the like for executing the liveness detection method provided in this specification.
The at least one processor 620 may be communicatively connected to the at least one storage medium 630 and the communication port 650 through the internal communication bus 610. The at least one processor 620 is configured to execute the at least one instruction set. When the computing device 600 runs, the at least one processor 620 reads the at least one instruction set and executes the liveness detection method provided in this specification according to the instructions of the at least one instruction set. The processor 620 may execute all or some of the steps included in the liveness detection method. The processor 620 may be in the form of one or more processors. In some embodiments, the processor 620 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof. For illustration purposes only, only one processor 620 is described for the computing device 600 in this specification. However, it should be noted that the computing device 600 in this specification may also include multiple processors; therefore, the operations and/or method steps disclosed in this specification may be executed by one processor as described in this specification, or may be executed jointly by multiple processors. For example, if the processor 620 of the computing device 600 executes step A and step B in this specification, it should be understood that step A and step B may also be executed jointly or separately by two different processors 620 (for example, a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B).
Fig. 3 shows a flowchart of a liveness detection method P100 provided according to an embodiment of this specification. As described above, the computing device 600 may execute the liveness detection method P100 described in this specification. Specifically, the processor 620 may read the instruction set stored in its local storage medium and then, as specified by the instruction set, execute the liveness detection method P100 described in this specification. As shown in Fig. 3, the method P100 may include:
S110: Acquire a biometric image, where the biometric image includes a biometric feature of a target user.
The biometric image may refer to an image containing a biometric feature of the target user 100. Using the uniqueness of the biometric feature, the biometric image can be used to recognize or verify the identity information of the target user 100. The biometric features involved in the biometric recognition described in the embodiments of this application may include, for example, eye prints, voiceprints, fingerprints, palm prints, heartbeats, pulses, chromosomes, DNA, human bite marks, and the like. In some embodiments, the biometric feature may include at least one of a human face, an eye print, a fingerprint, a palm print, or an iris, where the eye print may include biometric features such as the iris and the sclera. In other words, the biometric image may be at least one of a face image, an eye print image, a fingerprint image, a palm print image, or an iris image.
In some embodiments, when the computing device 600 is the client 200, the client 200 may capture the biometric image of the target user 100 using an image capture module. In some embodiments, when the computing device 600 is the server 300, the server 300 may receive the biometric image from the client 200, where the biometric image may be captured by the client 200 from the target user 100 using an image capture module.
S120: Perform preliminary liveness detection on the target user based on the biometric image to obtain a preliminary liveness detection result for the target user.
The preliminary liveness detection result is used to preliminarily indicate whether the target user 100 is a living body. For example, the preliminary liveness detection result may include a first probability that the target user 100 is a living body.
In some embodiments, the processor 620 may input the biometric image into a pre-trained liveness detection model, where the liveness detection model has the ability to identify whether the target user 100 is a living body. In this way, by performing preliminary liveness detection on the target user 100 with the liveness detection model, the preliminary liveness detection result for the target user 100, that is, the first probability that the target user 100 is a living body, can be obtained. It should be noted that the liveness detection model may be any machine learning model with liveness detection capability.
In some embodiments, in addition to the ability to identify whether the target user 100 is a living body, the liveness detection model may also have the ability to estimate the confidence of the model output. Specifically, the processor 620 inputs the biometric image into the trained liveness detection model, and the liveness detection model performs liveness detection processing on the biometric image to obtain a second probability that the target user is a living body and a confidence corresponding to the second probability. In this case, the processor 620 may determine the preliminary liveness detection result for the target user 100, that is, the first probability that the target user 100 is a living body, based on the second probability and the confidence corresponding to the second probability. In some embodiments, assuming that the second probability output by the liveness detection model is denoted P and the confidence corresponding to the second probability P is denoted C, where both P and C take values in [0, 1], the product of the second probability P and the confidence C may be determined as the first probability S, that is:
S = C × P    Equation (1)
As can be seen from Equation (1), the first probability S is positively correlated with the second probability P and positively correlated with the confidence C; that is, the higher the second probability P output by the liveness detection model and the higher the confidence C, the higher the first probability S. It should be understood that the first probability S can be regarded as the probability obtained by correcting the second probability P output by the model using the confidence C; therefore, the first probability S is more accurate than the second probability P.
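For illustration only, a minimal sketch of this correction step is given below; the function name and the input-range check are assumptions added for the example and are not part of the described method.

```python
def first_probability(second_probability: float, confidence: float) -> float:
    """Correct the model's liveness probability P with its confidence C (Equation (1)).

    Both inputs are expected to lie in [0, 1]; the returned first probability S
    therefore also lies in [0, 1].
    """
    if not (0.0 <= second_probability <= 1.0 and 0.0 <= confidence <= 1.0):
        raise ValueError("P and C must be in [0, 1]")
    return confidence * second_probability  # S = C * P


# Example: a high model probability with low confidence yields a moderate S.
print(first_probability(0.9, 0.5))  # 0.45
```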
Fig. 4 shows a schematic structural diagram of a liveness detection model provided according to an embodiment of this specification. As shown in Fig. 4, the liveness detection model 400 may include a feature extraction network 401, a local liveness detection network 402, a global liveness detection network 403, and a confidence detection network 404. Referring to Fig. 4, the liveness detection process performed by the liveness detection model 400 on the target user 100 may include:
(1) Input the biometric image into the feature extraction network 401, and perform feature extraction on the biometric image with the feature extraction network 401 to obtain a feature map.
(2) Input the feature map into the local liveness detection network 402, and perform liveness detection on multiple local regions of the feature map with the local liveness detection network 402 to obtain liveness detection results for the multiple local regions.
The multiple local regions are regions obtained by dividing the feature map according to some region division scheme. For example, the feature map may be divided into three equal parts in the height direction and three equal parts in the width direction, so that the feature map is divided into 3 × 3 = 9 local regions. It should be understood that the embodiments of this specification do not limit the way the local regions are divided; for example, the multiple local regions may also be regions obtained by random division. The liveness detection result of each local region indicates the detection result obtained by performing liveness detection on the target user based on the image content of that local region. Taking the division of the feature map into 9 local regions as an example, inputting the feature map into the local liveness detection network yields the liveness probabilities corresponding to the 9 local regions.
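For illustration only, a minimal sketch of the 3 × 3 region division described above; the NumPy representation of the feature map is an assumption, since the embodiment does not prescribe a framework.

```python
import numpy as np


def split_into_local_regions(feature_map: np.ndarray, rows: int = 3, cols: int = 3):
    """Split a (C, H, W) feature map into rows * cols local regions.

    Returns a list of (C, H // rows, W // cols) sub-maps, ordered row by row.
    H and W are assumed to be divisible by rows and cols for simplicity.
    """
    channels, height, width = feature_map.shape
    region_h, region_w = height // rows, width // cols
    regions = []
    for r in range(rows):
        for c in range(cols):
            regions.append(
                feature_map[:, r * region_h:(r + 1) * region_h,
                               c * region_w:(c + 1) * region_w]
            )
    return regions


# Example: a 64-channel 24x24 feature map becomes 9 regions of shape (64, 8, 8).
regions = split_into_local_regions(np.zeros((64, 24, 24)))
print(len(regions), regions[0].shape)  # 9 (64, 8, 8)
```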
(3) Input the feature map and the liveness detection results of the multiple local regions into the global liveness detection network 403, and perform liveness detection on them with the global liveness detection network 403 to obtain a global liveness detection result.
The global liveness detection result may include the second probability that the target user is a living body (that is, the probability P described above). It should be understood that, when performing global liveness detection, the global liveness detection network 403 considers not only the feature map but also the liveness detection results of the multiple local regions, which makes the resulting global liveness detection result more accurate.
(4) Input the liveness detection results of the multiple local regions into the confidence detection network 404 to obtain the confidence corresponding to the global liveness detection result.
Specifically, the confidence detection network 404 may determine the confidences corresponding to the multiple local regions based on the liveness detection results of the multiple local regions, where the confidence corresponding to each local region indicates the degree of reliability of the liveness detection result of that local region. The confidence detection network 404 may then determine the confidence corresponding to the global liveness detection result (for example, the confidence C described above) based on the confidences corresponding to the multiple local regions. For example, a weighted average of the confidences corresponding to the multiple local regions may be computed to obtain the confidence corresponding to the global liveness detection result.
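For illustration, a minimal sketch of the weighted-average aggregation mentioned above; the uniform default weights are an assumption, since the embodiment does not fix a weighting scheme.

```python
def aggregate_confidence(local_confidences, weights=None) -> float:
    """Combine per-region confidences into a single confidence for the global result."""
    if weights is None:
        weights = [1.0] * len(local_confidences)  # assumed uniform weights
    total = sum(weights)
    return sum(c * w for c, w in zip(local_confidences, weights)) / total


# Example: nine local confidences averaged into one global confidence C.
print(aggregate_confidence([0.9, 0.8, 0.95, 0.7, 0.85, 0.9, 0.6, 0.8, 0.9]))
```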
It should be noted that the embodiments of this specification do not limit the specific network structures of the feature extraction network 401, the local liveness detection network 402, the global liveness detection network 403, and the confidence detection network 404; for example, a CNN (Convolutional Neural Network), a DNN (Deep Neural Network), an RNN (Recurrent Neural Network), or any other feasible network structure may be used.
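As a hedged illustration of how steps (1) through (4) above could be wired together, the following sketch uses PyTorch and assumes a simple convolutional backbone, a shared per-region head, and a 3 × 3 region grid; all of these concrete choices are assumptions made for the example rather than the network structures of the embodiment.

```python
import torch
import torch.nn as nn


class LivenessModelSketch(nn.Module):
    """Illustrative wiring of the feature extraction, local, global, and confidence branches."""

    def __init__(self, channels: int = 64, regions: int = 9):
        super().__init__()
        self.regions = regions
        # (1) Feature extraction network: placeholder convolutional backbone.
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # (2) Local liveness detection network: one liveness score per local region (shared head).
        self.local_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(channels, 1), nn.Sigmoid())
        # (3) Global liveness detection network: consumes pooled features plus the local scores.
        self.global_head = nn.Sequential(nn.Linear(channels + regions, 1), nn.Sigmoid())
        # (4) Confidence detection network: consumes the local scores only.
        self.confidence_head = nn.Sequential(nn.Linear(regions, 1), nn.Sigmoid())

    def forward(self, image: torch.Tensor):
        feature_map = self.feature_extractor(image)              # (B, C, H, W)
        grid = int(self.regions ** 0.5)
        _, _, h, w = feature_map.shape
        # Split the feature map into a grid x grid set of local regions and score each one.
        local_scores = []
        for r in range(grid):
            for c in range(grid):
                region = feature_map[:, :, r * h // grid:(r + 1) * h // grid,
                                           c * w // grid:(c + 1) * w // grid]
                local_scores.append(self.local_head(region))     # (B, 1)
        local_scores = torch.cat(local_scores, dim=1)            # (B, regions)
        pooled = feature_map.mean(dim=(2, 3))                    # (B, C)
        second_probability = self.global_head(torch.cat([pooled, local_scores], dim=1))
        confidence = self.confidence_head(local_scores)
        return second_probability, confidence                    # P and C of Equation (1)
```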
The training process of the liveness detection model shown in Fig. 4 is described below.
The liveness detection model may be trained in stages. In some embodiments, the training process of the liveness detection model may include a first training stage and a second training stage. The first training stage is configured to train the feature extraction network 401, the local liveness detection network 402, and the global liveness detection network 403. The second training stage is configured to train the confidence detection network 404. That is, the feature extraction network 401, the local liveness detection network 402, and the global liveness detection network 403 are trained first, and after these three network units converge, the confidence detection network 404 is trained. It should be understood that training in stages reduces the training computation in each stage, which helps improve the overall training efficiency.
Fig. 5 shows a schematic diagram of the training process of the first training stage of the liveness detection model provided according to an embodiment of this specification. As shown in Fig. 5, in some embodiments, the training process of the first training stage may include:
(1) Acquire a sample biometric image and annotation information corresponding to the sample biometric image, where the annotation information includes a global liveness label and liveness labels for multiple local regions.
The annotation information corresponding to the sample biometric image is used to indicate whether the sample biometric image was captured from a living body or obtained by means of a liveness attack.
(2) Referring to Fig. 5, input the sample biometric image into the feature extraction network 401 to obtain a sample feature map, input the sample feature map into the local liveness detection network 402 to obtain first liveness detection results for the multiple local regions, and input the sample feature map and the first liveness detection results of the multiple local regions into the global liveness detection network 403 to obtain a first global liveness detection result.
(3) Continuing to refer to Fig. 5, determine a first target loss based on the first global liveness detection result, the first liveness detection results of the multiple local regions, the global liveness label, and the liveness labels of the multiple local regions, and train the feature extraction network 401, the local liveness detection network 402, and the global liveness detection network 403 with the objective of minimizing the first target loss.
In some embodiments, the first target loss may be computed as follows: determine a global loss based on the difference between the first global liveness detection result and the global liveness label, determine a local loss based on the difference between the first liveness detection results of the multiple local regions and the liveness labels of the multiple local regions, and then determine the first target loss based on the global loss and the local loss. Continuing to refer to Fig. 5, after the first target loss is determined, the network parameters of the feature extraction network 401, the local liveness detection network 402, and the global liveness detection network 403 may be adjusted with minimizing the first target loss as the training objective, until the three network units reach a convergence condition or a preset number of iterations is reached.
In the above first training stage, not only the global loss but also the local loss is considered when determining the first target loss, making the determined first target loss more accurate. After the first training stage, the liveness detection model has the ability to perform liveness detection based on biometric images.
In some embodiments, the first training stage may further include: performing perturbation processing on the sample biometric image to obtain a perturbed biometric image. The perturbation may be global or local; for example, noise may be added to all or part of the sample biometric image to form the perturbed biometric image. The perturbed biometric image is then input into the feature extraction network to obtain a perturbed feature map, and the perturbed feature map is input into the local liveness detection network to obtain second liveness detection results for the multiple local regions.
On this basis, the first target loss may be determined based on the first global liveness detection result, the first liveness detection results of the multiple local regions, the second liveness detection results of the multiple local regions, the global liveness label, and the liveness labels of the multiple local regions. In some embodiments, the first target loss may be computed as follows: determine a first loss based on the difference between the first global liveness detection result and the global liveness label; determine a second loss based on the difference between the first liveness detection results of the multiple local regions and the liveness labels of the multiple local regions; determine a third loss based on the difference between the first liveness detection results of the multiple local regions and the second liveness detection results of the multiple local regions; and then determine the first target loss based on the first loss, the second loss, and the third loss. The first loss characterizes the global loss, the second loss characterizes the local loss, and the third loss characterizes the perturbation consistency loss, that is, the difference between the local liveness detection results of the image before and after perturbation.
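For illustration, a minimal sketch of this first target loss under assumed choices (binary cross-entropy for the first and second losses, mean squared error for the consistency term, and unweighted summation), none of which are prescribed by the embodiment.

```python
import torch
import torch.nn.functional as F


def first_target_loss(global_pred, local_preds, perturbed_local_preds,
                      global_label, local_labels):
    """Sketch of the first-stage target loss: global + local + perturbation consistency.

    global_pred: (B, 1) first global liveness detection result
    local_preds / perturbed_local_preds: (B, R) first / second local detection results
    global_label: (B, 1), local_labels: (B, R), with 1 = live and 0 = attack
    """
    first_loss = F.binary_cross_entropy(global_pred, global_label)      # global loss
    second_loss = F.binary_cross_entropy(local_preds, local_labels)     # local loss
    third_loss = F.mse_loss(local_preds, perturbed_local_preds)         # consistency loss
    return first_loss + second_loss + third_loss
```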
In the above first training stage, not only the original sample biometric image but also the perturbed biometric image is used, and the first target loss takes into account not only the global loss (the first loss) and the local loss (the second loss) but also the perturbation consistency loss (the third loss), so that the trained liveness detection model is insensitive to perturbations; that is, the liveness detection results for perturbed images are consistent with those for unperturbed images. In this way, the trained liveness detection model achieves high accuracy on both perturbed and unperturbed images.
Fig. 6 shows a schematic diagram of the training process of the second training stage of the liveness detection model provided according to an embodiment of this specification. As shown in Fig. 6, in some embodiments, the training process of the second training stage may include:
(1) Acquire a sample biometric image and annotation information corresponding to the sample biometric image, where the annotation information includes a global liveness label and liveness labels for multiple local regions.
(2) Referring to Fig. 6, input the sample biometric image into the feature extraction network 401 to obtain a sample feature map, input the sample feature map into the local liveness detection network 402 to obtain first liveness detection results for the multiple local regions, input the first liveness detection results of the multiple local regions into the confidence detection network 404 to obtain detection confidences for the multiple local regions, update the corresponding local regions of the sample feature map based on the detection confidences of the multiple local regions to obtain an updated sample feature map, input the updated sample feature map into the local liveness detection network 402 to obtain third liveness detection results for the multiple local regions, and input the updated sample feature map and the third liveness detection results of the multiple local regions into the global liveness detection network 403 to obtain a second global liveness detection result.
The second training stage differs from the first training stage in that, after the first liveness detection results of the multiple local regions are obtained through the local liveness detection network 402, the detection confidences of the multiple local regions are further obtained through the confidence detection network 404, and then the detection confidences of the multiple local regions are used as weight coefficients to update the corresponding local regions of the feature map, obtaining an updated feature map. The local liveness detection network 402 and the global liveness detection network 403 then perform liveness detection on the updated feature map. It should be understood that the liveness detection process performed by the local liveness detection network 402 and the global liveness detection network 403 on the updated feature map is similar to that in the first training stage and is not repeated here.
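For illustration, one possible reading of "using the detection confidences as weight coefficients to update the corresponding local regions" is sketched below; the per-region element-wise scaling and the tensor shapes are assumptions made for the example.

```python
import torch


def update_feature_map(feature_map: torch.Tensor, region_confidences: torch.Tensor,
                       rows: int = 3, cols: int = 3) -> torch.Tensor:
    """Scale each local region of a (B, C, H, W) feature map by its detection confidence.

    region_confidences: (B, rows * cols) tensor in [0, 1]; one possible reading of
    the confidence-weighted update used in the second training stage.
    """
    updated = feature_map.clone()
    b, c, h, w = feature_map.shape
    region_h, region_w = h // rows, w // cols
    for r in range(rows):
        for col in range(cols):
            weight = region_confidences[:, r * cols + col].view(b, 1, 1, 1)
            updated[:, :, r * region_h:(r + 1) * region_h,
                          col * region_w:(col + 1) * region_w] *= weight
    return updated
```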
(3) Continuing to refer to Fig. 6, determine a second target loss based on the second global liveness detection result, the third liveness detection results of the multiple local regions, the global liveness label, and the liveness labels of the multiple local regions, and train the confidence detection network 404 with the training objective of minimizing the second target loss.
It should be understood that the second target loss in the second training stage is computed in a similar way to the first target loss in the first training stage, which is not repeated here. After the second target loss is computed, the network parameters of the confidence detection network 404 are adjusted with minimizing the second target loss as the training objective, until the confidence detection network 404 reaches a convergence condition or a preset number of iterations is reached.
After the above first training stage and second training stage, the liveness detection model not only has liveness detection capability but also has the ability to estimate the confidence of the model output; that is, the liveness detection model outputs not only the second probability P but also the confidence C.
S130:指示所述目标用户输入目标长度的身份验证信息,并基于所述目标用户的实际输入信息对所述目标用户进行再次活体检测,得到所述目标用户的目标活体检测结果,其中,所述目标长度与所述初步活体检测结果相关。S130: Instruct the target user to input identity verification information of a target length, and perform another biopsy detection on the target user based on the actual input information of the target user, and obtain a target biopsy detection result of the target user, wherein the The target length is correlated with the preliminary liveness test result.
其中,身份验证信息是指用于对目标用户的身份进行验证的信息。例如,身份验证信息可以包括下述中的至少一种:用户标识(例如证件号码)、用户终端的标识(例如手机号码)、注册账号标识(例如用户在app中注册的账号)、注册密码等。Wherein, the identity verification information refers to information used to verify the identity of the target user. For example, the identity verification information may include at least one of the following: user identification (such as certificate number), user terminal identification (such as mobile phone number), registration account identification (such as the account registered by the user in the app), registration password, etc. .
在本说明书的实施例中,初步活体检测结果也可以称为中间活体检测结果,目标活体检测结果也可以称为最终活体检测结果。In the embodiments of this specification, the preliminary biopsy detection result may also be referred to as an intermediate biopsy detection result, and the target biopsy detection result may also be referred to as a final biopsy detection result.
本说明书的实施例在得到目标用户的初步活体检测结果之后,进一步引入身份验证流程,即,指示目标用户输入目标长度的身份验证信息。进而,处理器620获取目标用户根据指示输入的实际输入信息,并基于所述实际输入信息对目标用户进行再次活体检测,得到目标用户的目标活体检测结果。能够理解,上述方案通过初步活体检测和再次活体检测这两个阶段的检测,能够从整体上提升活体检测结果的准确性。进一步的,由于所述目标长度与初步活体检测结果相关,使得目标长度具有一定的可变性,相比固定长度的身份验证信息,能够进一步提升活体检测结果的准确性。In the embodiment of the present specification, after obtaining the preliminary living body detection result of the target user, an identity verification process is further introduced, that is, the target user is instructed to input identity verification information of a target length. Furthermore, the
In some embodiments, the preliminary liveness detection result includes a first probability that the target user is a living body, and the target length is inversely correlated with the first probability; that is, the larger the first probability, the shorter the target length, and the smaller the first probability, the longer the target length. It should be understood that when the first probability is large, the target user is very likely a living body, meaning that this biometric identification carries low risk; in this case, the target user can be instructed to input shorter identity verification information, which reduces the target user's interaction time and improves liveness detection efficiency. When the first probability is small, the target user is very likely a non-living body, meaning that this biometric identification carries high risk; in this case, the target user can be instructed to input longer identity verification information so as to improve the security of this biometric identification. Making the target length inversely correlated with the first probability therefore minimizes the target user's interaction time and improves liveness detection efficiency while ensuring the security of the biometric identification.
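As a purely illustrative example of this inverse correlation (in this specification the actual mapping is produced by the length mapping model described below), a simple threshold table might look as follows; the probability breakpoints and lengths are assumptions, not values from the disclosure.

```python
def target_length_from_probability(first_probability: float) -> int:
    """Illustrative inverse mapping: higher liveness probability -> shorter input.
    Breakpoints and lengths below are assumptions for illustration only."""
    if first_probability >= 0.9:
        return 4    # very likely a living body: ask for a short verification input
    if first_probability >= 0.6:
        return 6
    return 8        # low probability: ask for a longer verification input
```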
In some embodiments, before S130, the method may further include: determining a target identity verification type, and determining the target length based on the target identity verification type and the first probability. The target identity verification type may be one of the following candidate identity verification types: verification by user identifier, verification by user-terminal identifier, verification by registered account identifier, and verification by registration password. These identity verification types provide different security guarantees, and one of them can be determined as the target verification type as required.
In some embodiments, the target identity verification type may be determined from the multiple candidate identity verification types in one of the following manners:
Manner 1: randomly determining the target identity verification type from the multiple candidate identity verification types. For example, one of the candidate identity verification types may be randomly selected as the target identity verification type. This random approach ensures that no single identity verification type is used in a fixed way across different biometric identification processes, further improving the security of biometric identification.
Manner 2: based on the type of the current application scenario, selecting, from the multiple candidate identity verification types, the identity verification type that matches the type of the current application scenario as the target identity verification type. Different candidate identity verification types correspond to different security levels; for example, the security level of "verification by registration password" is higher than that of "verification by user-terminal identifier". Therefore, in a scenario with high security requirements, "verification by registration password" may be used as the target identity verification type, while in a scenario with lower security requirements, "verification by user-terminal identifier" may be used. This manner can satisfy the security requirements of different application scenarios.
Manner 3: using the identity verification type specified by the target user among the multiple candidate identity verification types as the target identity verification type. For example, the target user may set the target identity verification type at the registration stage or at any other stage, so as to specify which identity verification type is used. This manner gives the target user more flexibility in verification and satisfies the personalized verification needs of different users. An illustrative selection sketch covering these three manners is given below.
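The following compact sketch illustrates the three selection manners; the type identifiers, scenario labels, and the scenario-to-type table are assumptions made for illustration only.

```python
import random

CANDIDATE_TYPES = ["user_id", "terminal_id", "account_id", "password"]  # assumed identifiers

def choose_auth_type(manner: str, scenario: str = "", user_choice: str = "") -> str:
    if manner == "random":                               # Manner 1
        return random.choice(CANDIDATE_TYPES)
    if manner == "scenario":                             # Manner 2: match scenario security level
        scenario_to_type = {"high_security": "password",     # higher security guarantee
                            "low_security": "terminal_id"}   # lower security guarantee
        return scenario_to_type.get(scenario, "account_id")
    if manner == "user_specified" and user_choice in CANDIDATE_TYPES:   # Manner 3
        return user_choice
    raise ValueError("unknown selection manner or invalid user choice")
```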
In some embodiments, the target length may be determined as follows: the target identity verification type and the first probability are input into a pre-trained length mapping model to obtain the target length. The length mapping model is trained using multiple sets of training samples, each set including a sample identity verification type, a sample probability, and a sample length. These training samples are obtained from statistics of historical data for which the security of biometric identification meets preset requirements. It should be noted that the embodiments of this specification do not limit the network structure of the length mapping model; for example, a neural network such as an MLP (Multi-Layer Perceptron) network, or any other feasible network structure, may be used. The trained length mapping model is able to map the target identity verification type and the first probability to the target length. Determining the target length with a pre-trained length mapping model provides both high processing efficiency and high accuracy of the target length.
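Because the disclosure leaves the concrete network structure open, the following is only one possible sketch of such a length mapping model: a small MLP that takes a one-hot encoding of the identity verification type together with the first probability and regresses the target length. The layer sizes, the one-hot encoding, and the MSE training objective are assumptions.

```python
import torch
import torch.nn as nn

NUM_TYPES = 4   # user id / terminal id / account id / password (assumed ordering)

class LengthMappingModel(nn.Module):
    """Illustrative MLP: (one-hot verification type, first probability) -> target length."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(NUM_TYPES + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, type_index: torch.Tensor, probability: torch.Tensor) -> torch.Tensor:
        one_hot = nn.functional.one_hot(type_index, NUM_TYPES).float()   # (B, NUM_TYPES)
        x = torch.cat([one_hot, probability.unsqueeze(-1)], dim=-1)      # (B, NUM_TYPES + 1)
        return self.mlp(x).squeeze(-1)   # predicted length; round/clamp at inference time

# Usage sketch: train with (sample type, sample probability, sample length) triples, e.g.
#   loss = nn.functional.mse_loss(model(types, probs), sample_lengths.float())
```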
In some embodiments, the target length may also be determined as follows: a weight coefficient is determined based on the type of the current application scenario, the weight coefficient being inversely correlated with the degree to which the current application scenario requires security of biometric identification; the first probability is weighted with the weight coefficient to obtain a scene-adapted first probability; and the target length is determined based on the target identity verification type and the scene-adapted first probability.
Specifically, the higher the security requirement of the current application scenario on biometric identification, the smaller the weight coefficient; the lower the security requirement, the larger the weight coefficient. For example, scenarios may be divided into high-security, medium-security, and low-security scenarios according to their security requirements. If the current application scenario is a high-security scenario, the weight coefficient is set to 0.8; if it is a medium-security scenario, the weight coefficient is set to 1; and if it is a low-security scenario, the weight coefficient is set to 1.25.
Denoting the determined weight coefficient as α and the first probability as S, the scene-adapted first probability S′ can be obtained by the following formula:
S′ = α · S
After the scene-adapted first probability S′ is determined, the target identity verification type and the scene-adapted first probability S′ may be input into the trained length mapping model to obtain the target length.
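Putting the scenario weighting and the formula above together, a minimal sketch using the example coefficients 0.8 / 1 / 1.25 from this embodiment might look as follows; clamping the weighted value back to a valid probability is an added assumption, not part of the disclosure.

```python
SCENARIO_WEIGHT = {"high_security": 0.8, "medium_security": 1.0, "low_security": 1.25}

def scene_adapted_probability(first_probability: float, scenario: str) -> float:
    alpha = SCENARIO_WEIGHT[scenario]          # weight coefficient from the scenario type
    s_prime = alpha * first_probability        # S' = alpha * S
    return min(s_prime, 1.0)                   # keep the result a valid probability (assumption)
```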
After the target length is determined, the target user may be instructed to input identity verification information of the target length. For example, if the target length is 4 and the target identity verification type is "verification by user-terminal identifier", the target user may be instructed to input the last four characters of the terminal identifier (for example, a mobile phone number). As another example, if the target length is 6 and the target identity verification type is "verification by registration password", the target user may be instructed to input the 2nd to 7th characters of the registration password. It should be noted that the embodiments of this specification do not limit the way of instructing the target user; for example, the target user may be instructed to input the identity verification information of the target length through text in an interactive interface, through voice, or in any other way.
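A sketch of how the instruction shown to the user could be built from the target type and target length is given below; the prompt wording and the fixed start position for a password segment are illustrative assumptions.

```python
def build_prompt(auth_type: str, target_length: int) -> str:
    if auth_type == "terminal_id":
        # e.g. target_length == 4 -> ask for the last 4 digits of the phone number
        return f"Please enter the last {target_length} digits of your mobile phone number."
    if auth_type == "password":
        # e.g. target_length == 6 -> ask for characters 2 through 7 of the password
        start = 2                                   # assumed fixed start position
        return (f"Please enter characters {start} to {start + target_length - 1} "
                f"of your registration password.")
    return f"Please enter {target_length} characters of your {auth_type}."
```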
After the target user is instructed to input identity verification information of the target length, the processor 620 may obtain the actual input information entered by the target user in response to the instruction. It should be noted that the embodiments of this specification do not limit the input method used by the target user; for example, the target user may input by text, by voice, or in any other possible way. The processor 620 may then perform liveness detection on the target user again based on the actual input information, thereby obtaining the target liveness detection result of the target user.
In some embodiments, the target liveness detection result of the target user may be determined as follows: identity information of a first user stored in a database is determined as target identity information, where the first user is the logged-in user who triggered the liveness detection, or the first user is the user obtained by performing biometric identification on the biometric image. For example, the currently logged-in user may be taken as the first user, and the identity information of the first user queried from the database as the target identity information. As another example, biometric identification may be performed on the biometric image, the identified user taken as the first user, and the identity information of the first user queried from the database as the target identity information. The actual input information of the target user is then matched against the target identity information to obtain a matching result, and either a first operation or a second operation is performed based on the matching result. The first operation includes: if the matching result is a successful match, determining that the target user is a living body. The second operation includes: if the matching result is a failed match, determining that the target user is a non-living body.
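The matching step can be sketched as follows. The plain-text comparison and the assumption that the requested segment is the trailing part of the stored value are illustrative simplifications (a real system would, for example, compare against hashed or otherwise protected credentials).

```python
def second_liveness_check(actual_input: str, stored_value: str, target_length: int) -> bool:
    """Return True (living body) if the user's input matches the requested segment of the
    first user's stored identity information, False (non-living body) otherwise."""
    if len(actual_input) != target_length:
        return False                                 # wrong length counts as a failed match
    expected = stored_value[-target_length:]         # assumed: the trailing segment was requested
    return actual_input == expected

# Usage sketch: second_liveness_check("1234", stored_phone_number, 4)
```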
FIG. 7 is an interaction diagram of a liveness detection process according to an embodiment of this specification. As shown in FIG. 7, taking a face-payment scenario as an example, the client 200 presents an interface 701. After the target user taps "Start face payment" in the interface 701, the client 200 presents an interface 702. As shown in the interface 702, the client 200 starts the image acquisition module to acquire a face image of the target user. After the face image is acquired, the client 200 or the server 300 (for example, the client 200 sends the biometric image to the server 300) inputs the face image into the liveness detection model shown in FIG. 4 to obtain the second probability P that the target user is a living body and the confidence C, from which the first probability S that the target user is a living body is then calculated. Further, based on the target identity verification type and the first probability S, the target length of the identity verification information that the target user needs to input in the identity verification stage can be determined. Assuming the determined target length is 4, the client 200 may present an interface 703 that instructs the target user to input 4 characters of identity verification information (for example, the last 4 digits of the mobile phone number). Referring to an interface 704, after the target user inputs the 4 characters of identity verification information and taps Confirm, the client 200 or the server 300 obtains the actual input information of the target user (for example, "1234") and matches it against the identity information of the first user stored in the database; if the match succeeds, the target user is determined to be a living body, and if the match fails, the target user is determined to be a non-living body. When the target user is determined to be a living body, the client 200 or the server 300 may perform face recognition on the target user based on the face image to verify the identity of the target user, and make the payment if the verification passes; after the payment succeeds, an interface 705 is displayed. When the target user is determined to be a non-living body, the client 200 or the server 300 no longer executes the subsequent face recognition process, this face payment fails, and the display interface may prompt the target user that identity verification has failed.
It should be noted that FIG. 7 illustrates the interaction process using the face-payment scenario as an example; the interaction process when the liveness detection solution provided in this specification is applied to other scenarios is similar and is not described here one by one.
In some embodiments, the preliminary liveness detection result includes the first probability that the target user is a living body, and S130 is executed when it is determined that the first probability is greater than or equal to a preset probability. For example, the preset probability may be 30%; that is, S130 is executed only when the first probability is greater than or equal to 30%. This filters out cases in which the target user is obviously a non-living body; in other words, when the target user is obviously not a living body, the identity verification step does not need to be performed, which improves liveness detection efficiency.
In some embodiments, after the actual input information of the target user is obtained, liveness detection may be performed on the target user again based on the actual input information to obtain a second liveness detection result, and the target liveness detection result of the target user is then determined based on the preliminary liveness detection result and the second liveness detection result. For example, if the preliminary liveness detection result indicates that the probability that the target user is a living body is 55%, and the second liveness detection result indicates that this probability is 90%, the mean (or weighted mean) of the two probabilities may be taken as the final probability that the target user is a living body; if the final probability is greater than or equal to a preset threshold, the target user is determined to be a living body, and if the final probability is less than the preset threshold, the target user is determined to be a non-living body. In this embodiment, determining the target liveness detection result by jointly analyzing the preliminary liveness detection result and the second liveness detection result can further improve the accuracy of the target liveness detection result.
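The fusion described in this embodiment can be sketched as follows; the equal weights and the 0.7 threshold are assumptions (with the 55% and 90% values from the text, the fused probability is 72.5%).

```python
def fuse_liveness_results(preliminary_prob: float, second_prob: float,
                          weight: float = 0.5, threshold: float = 0.7) -> bool:
    """Weighted mean of the two stage probabilities compared against a preset threshold."""
    final_prob = weight * preliminary_prob + (1.0 - weight) * second_prob
    return final_prob >= threshold     # True -> living body, False -> non-living body

# With the example values from the text: 0.5 * 0.55 + 0.5 * 0.90 = 0.725 >= 0.7 -> living body
```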
In summary, the liveness detection method P100 and the system 001 provided in this specification acquire a biometric image containing a biometric feature of a target user, perform preliminary liveness detection on the target user based on the biometric image to obtain a preliminary liveness detection result of the target user, then instruct the target user to input identity verification information of a target length, and perform liveness detection on the target user again based on the actual input information of the target user to obtain a target liveness detection result of the target user, where the target length is related to the preliminary liveness detection result. On the basis of the preliminary liveness detection performed on the target user based on the biometric image, the above solution further introduces an identity verification process, that is, instructing the target user to input identity verification information of a target length and performing liveness detection on the target user again based on the actual input information of the target user; this two-stage liveness detection process improves the accuracy of the liveness detection result as a whole. Further, since the target length of the identity verification information is related to the preliminary liveness detection result, the target length is variable to some extent, which can further improve the accuracy of the liveness detection result compared with identity verification information of a fixed length. In addition, performing a variable-length identity verification process based on the preliminary liveness detection result also makes the solution more flexible in application. For example, when the preliminary liveness detection result indicates a high probability that the target user is a living body, the target user can be instructed to input shorter identity verification information, saving the target user's interaction time and improving liveness detection efficiency; when the preliminary liveness detection result indicates a low probability that the target user is a living body, the target user can be instructed to input longer identity verification information, so as to ensure the accuracy of the liveness detection result as much as possible.
In the technical solutions provided in this specification, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
Another aspect of this specification provides a non-transitory storage medium storing at least one set of executable instructions for liveness detection. When executed by a processor, the executable instructions direct the processor to implement the steps of the liveness detection method P100 described in this specification. In some possible implementations, aspects of this specification may also be implemented in the form of a program product including program code. When the program product runs on a computing device 600, the program code causes the computing device 600 to perform the steps of the liveness detection method P100 described in this specification. The program product for implementing the above method may use a portable compact disc read-only memory (CD-ROM) including program code and may run on the computing device 600. However, the program product of this specification is not limited thereto; in this specification, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system. The program product may use any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the readable storage medium include: an electrical connection with one or more conductors, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. The computer-readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The readable storage medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on the readable storage medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination of the above. The program code for performing the operations of this specification may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the computing device 600, partly on the computing device 600, as a stand-alone software package, partly on the computing device 600 and partly on a remote computing device, or entirely on a remote computing device.
The specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying drawings do not necessarily require the specific order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
In summary, after reading this detailed disclosure, those skilled in the art will appreciate that the foregoing detailed disclosure is presented by way of example only and is not limiting. Although not explicitly stated here, those skilled in the art will understand that this specification is intended to encompass various reasonable changes, improvements, and modifications to the embodiments. Such changes, improvements, and modifications are intended to be suggested by this specification and are within the spirit and scope of the exemplary embodiments of this specification.
In addition, certain terms in this specification have been used to describe the embodiments of this specification. For example, "one embodiment", "an embodiment", and/or "some embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of this specification. Therefore, it is emphasized and should be understood that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various parts of this specification do not necessarily all refer to the same embodiment. Furthermore, particular features, structures, or characteristics may be appropriately combined in one or more embodiments of this specification.
It should be understood that, in the foregoing description of the embodiments of this specification, various features are combined in a single embodiment, figure, or description thereof in order to aid understanding of a feature and for the purpose of simplifying this specification. However, this does not mean that such a combination of features is required; when reading this specification, those skilled in the art may well regard some of them as separate embodiments. That is, the embodiments in this specification may also be understood as an integration of multiple sub-embodiments, and each sub-embodiment remains valid even when it contains fewer than all the features of a single foregoing disclosed embodiment.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, documents, and the like, cited herein is hereby incorporated by reference in its entirety for all purposes, except for any prosecution file history associated therewith, anything inconsistent with or in conflict with this document, and anything that may have a limiting effect on the broadest scope of the claims now or later associated with this document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated material and the description, definition, and/or use of that term in this document, the term in this document shall prevail.
Finally, it should be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of this specification. Other modified embodiments are also within the scope of this specification. Therefore, the embodiments disclosed in this specification are merely examples and are not limiting. Those skilled in the art may adopt alternative configurations according to the embodiments in this specification to implement the application in this specification. Therefore, the embodiments of this specification are not limited to the embodiments precisely described in the application.
Claims (19)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211731962.5A CN116189316A (en) | 2022-12-30 | 2022-12-30 | Liveness detection method and system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211731962.5A CN116189316A (en) | 2022-12-30 | 2022-12-30 | Liveness detection method and system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116189316A true CN116189316A (en) | 2023-05-30 |
Family
ID=86441574
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211731962.5A Pending CN116189316A (en) | 2022-12-30 | 2022-12-30 | Liveness detection method and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116189316A (en) |
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070172114A1 (en) * | 2006-01-20 | 2007-07-26 | The Johns Hopkins University | Fusing Multimodal Biometrics with Quality Estimates via a Bayesian Belief Network |
| CN109858381A (en) * | 2019-01-04 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Biopsy method, device, computer equipment and storage medium |
| WO2020248780A1 (en) * | 2019-06-13 | 2020-12-17 | 北京迈格威科技有限公司 | Living body testing method and apparatus, electronic device and readable storage medium |
| CN110334705A (en) * | 2019-06-25 | 2019-10-15 | 华中科技大学 | A kind of Language Identification of the scene text image of the global and local information of combination |
| CN110765434A (en) * | 2019-10-23 | 2020-02-07 | 上海商汤智能科技有限公司 | Identity authentication method and device, electronic equipment and storage medium |
| CN110909693A (en) * | 2019-11-27 | 2020-03-24 | 深圳市华付信息技术有限公司 | 3D face living body detection method and device, computer equipment and storage medium |
| CN111598867A (en) * | 2020-05-14 | 2020-08-28 | 国家卫生健康委科学技术研究所 | Method, apparatus, and computer-readable storage medium for detecting specific facial syndrome |
| CN112434647A (en) * | 2020-12-09 | 2021-03-02 | 浙江光珀智能科技有限公司 | Human face living body detection method |
| CN114140613A (en) * | 2021-12-08 | 2022-03-04 | 北京有竹居网络技术有限公司 | Image detection method, device, electronic device and storage medium |
Non-Patent Citations (3)
| Title |
|---|
| KANGGEON KIM: "Local-Global Landmark Confidences for Face Recognition", 2017 IEEE 12TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION, 3 October 2017 (2017-10-03), pages 1 - 7 * |
| ANONYMOUS: "Rules for the number of mobile-phone digits to be entered in face-scan payment (刷脸付手机号输入位数规则)", Retrieved from the Internet <URL:https://opendocs.alipay.com/support/01rfqp> *
| ZHANG PENG: "Research on face liveness detection algorithms based on deep learning (基于深度学习的人脸活体检测算法研究)", China Master's Theses Full-text Database, Information Science & Technology, no. 05, 15 May 2022 (2022-05-15), pages 1 - 84 *
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108491805B (en) | Identity authentication method and device | |
| KR101629224B1 (en) | Authentication method, device and system based on biological characteristics | |
| WO2020207189A1 (en) | Method and device for identity authentication, storage medium, and computer device | |
| WO2022105118A1 (en) | Image-based health status identification method and apparatus, device and storage medium | |
| CN105005779A (en) | Face verification anti-counterfeit recognition method and system thereof based on interactive action | |
| Oza et al. | Federated learning-based active authentication on mobile devices | |
| CN115880530A (en) | Anti-attack detection method and system | |
| CN115578768A (en) | Training method of image detection network, image detection method and system | |
| CN111753656B (en) | Feature extraction method, device, equipment and computer readable storage medium | |
| CN116486494A (en) | Living body detection method, training method and device for living body detection model | |
| CN116246356A (en) | A living body detection method and system | |
| CN114581978B (en) | Method and system for face recognition | |
| CN115953849A (en) | Training method of living body detection model, living body detection method and system | |
| Muhammad et al. | A Survey Of Fingerprint Identification System Using Deep Learning | |
| CN116189316A (en) | Liveness detection method and system | |
| CN116468113A (en) | Training method of living body detection model, living body detection method and system | |
| CN116433955A (en) | Anti-attack detection method and system | |
| CN116434287B (en) | A method, apparatus, electronic device and storage medium for face image detection | |
| Bhuvana et al. | Image sensor fusion for multimodal biometric recognition in mobile devices | |
| CN116110132A (en) | Liveness detection method and system | |
| CN115953709A (en) | Anti-attack detection method and system | |
| CN115984977A (en) | Liveness detection method and system | |
| CN114373213A (en) | Juvenile identity recognition method and device based on face recognition | |
| CN115223226A (en) | Methods and systems for biometric identification | |
| CN107679474A (en) | Face matching process and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| CB02 | Change of applicant information | ||
| CB02 | Change of applicant information |
Country or region after: China Address after: 310000 Zhejiang Province, Hangzhou City, Xihu District, Xixi Road 543-569 (continuous odd numbers) Building 1, Building 2, 5th Floor, Room 518 Applicant after: Alipay (Hangzhou) Digital Service Technology Co.,Ltd. Address before: 310000 801-11 section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province Applicant before: Alipay (Hangzhou) Information Technology Co., Ltd. Country or region before: China |