
TWI792017B - Biometric identification system and identification method thereof - Google Patents

Biometric identification system and identification method thereof

Info

Publication number
TWI792017B
TWI792017B
Authority
TW
Taiwan
Prior art keywords
information
extraction unit
feature extraction
identification
recognition
Prior art date
Application number
TW109122271A
Other languages
Chinese (zh)
Other versions
TW202203055A (en)
Inventor
鄭智元
林威漢
振庭 翁
趙芳譽
蔡呈新
Original Assignee
義隆電子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 義隆電子股份有限公司
Priority to TW109122271A priority Critical patent/TWI792017B/en
Priority to CN202010666402.0A priority patent/CN112001233B/en
Publication of TW202203055A publication Critical patent/TW202203055A/en
Application granted granted Critical
Publication of TWI792017B publication Critical patent/TWI792017B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Medicines Containing Antibodies Or Antigens For Use As Internal Diagnostic Agents (AREA)

Abstract

A biometric identification system includes: a sensor for sensing a first biometric feature to generate an image; a first feature extractor, coupled to the sensor, for generating first information according to the image, wherein the first information represents the uniqueness of the first biometric feature; a second feature extractor, coupled to the sensor, for generating second information according to the image, wherein the second information represents the authenticity (genuine-or-fake) characteristics of the image; and an identification unit, coupled to the first feature extractor and the second feature extractor, for generating an identification result according to the first information and/or the second information.

Description

Biometric identification system and identification method

The present invention relates to a biometric identification system and an identification method.

More and more electronic devices and systems use biometrics to identify users. Common biometric recognition includes fingerprint recognition, face recognition, iris recognition, voiceprint recognition, and palm-print recognition. However, current biometric recognition still has shortcomings; for example, fingerprint recognition cannot effectively distinguish whether a finger is genuine or fake. Biometric identification systems therefore remain susceptible to spoofing.

An objective of the present invention is to provide a biometric identification system and an identification method that prevent fake biometric features from passing identity recognition.

According to the present invention, a biometric identification system includes a sensor, a first feature extraction unit, a second feature extraction unit, and a recognition unit. The sensor senses a first biometric feature to generate an image. The first feature extraction unit is coupled to the sensor and generates first information according to the image, wherein the first information describes the uniqueness of the first biometric feature. The second feature extraction unit is coupled to the sensor and generates second information according to the image, wherein the second information describes the authenticity (genuine-or-fake) characteristics of the first biometric feature. The recognition unit is coupled to the first feature extraction unit and the second feature extraction unit and generates a recognition result according to the second information, or according to the first information and the second information.

According to the present invention, a biometric identification method includes: sensing a first biometric feature to generate an image; generating first information according to the image, wherein the first information describes the uniqueness of the first biometric feature; obtaining second information from the image, wherein the second information describes the authenticity characteristics of the first biometric feature; and generating a recognition result according to the second information, or according to the first information and the second information.

The identification system and method of the present invention can effectively improve the security of a biometric identification system and prevent fake biometric features from passing identity authentication.

10: Identification system
11: Sensor
12: First feature extraction unit
13: Second feature extraction unit
131: Convolutional neural network
1311: Feature extraction part
1312: Classification part
14: Memory
15: Recognition unit
151: Classifier

FIG. 1 is a block diagram of an embodiment of the biometric identification system of the present invention.

FIG. 2 is a flowchart of the biometric identification method of the present invention.

FIG. 3 is a block diagram of an embodiment of the CNN and the classifier.

FIG. 1 shows an embodiment of the biometric identification system of the present invention. The identification system 10 includes a sensor 11, a first feature extraction unit 12, a second feature extraction unit 13, a memory 14, and a recognition unit 15. The sensor 11 may be an image sensor for sensing a first biometric feature to generate an image A. The first biometric feature may be, for example, a fingerprint, a face, a palm print, or an iris. The first feature extraction unit 12, the second feature extraction unit 13, and the recognition unit 15 may be implemented in software or hardware. The first feature extraction unit 12 is coupled to the sensor 11 and generates first information according to the image A provided by the sensor 11; the first information describes the uniqueness of the first biometric feature and includes multiple sets of feature vectors. In one embodiment, the first feature extraction unit 12 extracts features of the image A using a computer-vision method to generate the first information, for example an algorithm such as Features from Accelerated Segment Test (FAST), Adaptive and Generic Corner Detection Based on the Accelerated Segment Test (AGAST), Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), KAZE, or AKAZE. In another embodiment, the first feature extraction unit 12 is a trained deep-learning model, implemented with, for example, a model architecture based on a convolutional neural network (CNN). The method used by the first feature extraction unit 12 may also be, but is not limited to, an algorithm such as Local Binary Patterns (LBP), Local Phase Quantization (LPQ), or Histogram of Oriented Gradients (HOG).
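A minimal sketch of the computer-vision option for the first feature extraction unit, using OpenCV's SIFT (one of the algorithms listed above); the function name and return format are illustrative and not taken from the patent.

```python
import cv2
import numpy as np

def extract_first_information(image: np.ndarray):
    """Extract keypoint descriptors that characterize the uniqueness of image A."""
    gray = image if image.ndim == 2 else cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()                      # available in opencv-python >= 4.4
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # Each descriptor row is a 128-dimensional vector; together they form the
    # "first information" (multiple sets of feature vectors) describing uniqueness.
    return keypoints, descriptors

# Example usage with an image captured by the sensor:
# img_a = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)
# keypoints, ve1 = extract_first_information(img_a)
```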

The second feature extraction unit 13 is coupled to the sensor 11 and generates second information according to the image A; the second information describes the authenticity (genuine-or-fake) characteristics of the first biometric feature. In one embodiment, the second feature extraction unit 13 extracts features of the image A using a computer-vision method to generate the second information, for example an algorithm such as FAST, AGAST, SIFT, SURF, KAZE, or AKAZE. In another embodiment, the second feature extraction unit 13 is a pre-trained deep-learning model, implemented with, for example, a model architecture based on a convolutional neural network (CNN) 131 or an improvement thereof, or with a deep network model such as AlexNet or MobileNet. The method used by the second feature extraction unit 13 may also be, but is not limited to, an algorithm such as LBP, LPQ, or HOG. In one embodiment, the second information includes an embedding feature of the image; the embedding feature is vector data generated when the CNN 131 transforms the data of the image A, and it describes the authenticity characteristics of the first biometric feature rather than being merely a single value representing genuine (1) or fake (0).
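A sketch of a classical (non-deep-learning) variant of the second feature extraction unit using LBP, one of the methods listed above. The histogram parameters and normalization are illustrative assumptions; the point is that the output is a descriptor of authenticity-related texture, not a single genuine/fake verdict.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def extract_second_information(image: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Build a texture descriptor of image A reflecting genuine-vs-fake characteristics."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2                            # number of "uniform" LBP patterns
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    # The normalized histogram plays the role of the "second information":
    # a vector describing authenticity, not a 0/1 real-or-fake decision.
    return hist
```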

Although the first feature extraction unit 12 and the second feature extraction unit 13 are both used for feature extraction, their purposes differ, and so do the coefficients they use to extract features. They can be understood as looking at the image A from different perspectives. The first feature extraction unit 12 describes the uniqueness of the first biometric feature shown in the image A, for example the distribution of certain specific feature points; this uniqueness can be used to distinguish different individuals. The second feature extraction unit 13 describes the authenticity characteristics of the first biometric feature shown in the image A, which are typically used to judge whether the first biometric feature comes from a living body.

The memory 14 is coupled to the first feature extraction unit 12, the second feature extraction unit 13, and the recognition unit 15, and stores first template information and second template information. The first template information and the second template information are generated by the first feature extraction unit 12 and the second feature extraction unit 13, respectively, during the user's enrollment procedure. In the enrollment procedure, the sensor 11 senses a second biometric feature of a user to be enrolled to generate an image B (not shown), where the first biometric feature and the second biometric feature are the same kind of biometric feature. The first feature extraction unit 12 generates the first template information according to the image B; the first template information describes the uniqueness of the second biometric feature. The second feature extraction unit 13 generates the second template information according to the image B; the second template information describes the authenticity characteristics of the second biometric feature.
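A minimal enrollment sketch, assuming the two extraction functions from the previous sketches and treating memory 14 as a simple in-process object; all names are illustrative.

```python
import numpy as np

class TemplateMemory:
    """Stand-in for memory 14: holds the two templates produced at enrollment."""
    def __init__(self):
        self.en1 = None   # first template information (uniqueness of image B)
        self.en2 = None   # second template information (authenticity of image B)

def enroll(image_b: np.ndarray, memory: TemplateMemory) -> None:
    _, memory.en1 = extract_first_information(image_b)   # from the SIFT sketch above
    memory.en2 = extract_second_information(image_b)     # from the LBP sketch above
```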

The recognition unit 15 is coupled to the first feature extraction unit 12, the second feature extraction unit 13, and the memory 14. The recognition unit 15 generates a recognition result according to the first information, the second information, or the first information together with the second information; the recognition result represents passing (pass) or failing (fail) identity authentication. In one embodiment, the recognition unit 15 includes a classifier 151. The classifier 151 is a model trained by machine learning and may be implemented with a support vector machine (SVM) or a neural network (NN). The classifier 151 may be software or a hardware circuit.

The operation of the identification system 10 of the present invention is described below using fingerprint recognition as an example, but the present invention is not limited to fingerprint recognition and may also be applied to other biometric recognition such as face recognition, iris recognition, and palm-print recognition. When the identification system 10 performs the enrollment procedure, the enrolling user places a finger on the sensor 11. In this embodiment, the sensor 11 is a fingerprint sensor, which may be an optical fingerprint sensor or a capacitive fingerprint sensor. The sensor 11 senses the fingerprint of the finger (corresponding to the aforementioned second biometric feature) to generate a fingerprint image Fi1. The first feature extraction unit 12 generates first template information En1 according to the fingerprint image Fi1; the first template information En1 can be understood as describing the fingerprint characteristics of the fingerprint image Fi1. The ridge pattern of each person's fingerprint is unique and different from everyone else's, and the first template information En1 describes this uniqueness. The second feature extraction unit 13 generates second template information En2 according to the fingerprint image Fi1; the second template information En2 describes the authenticity characteristics of the fingerprint image Fi1. After the first template information En1 and the second template information En2 are stored in the memory 14, the enrollment procedure is complete.

When the identification system 10 performs the verification procedure, the person to be verified places a finger on the sensor 11, and the sensor 11 senses the fingerprint of the finger (corresponding to the aforementioned first biometric feature) to generate a fingerprint image Fi2, as in step S10 of FIG. 2. The first feature extraction unit 12 generates first information Ve1 according to the fingerprint image Fi2, as in step S12 of FIG. 2. The second feature extraction unit 13 generates second information Ve2 according to the fingerprint image Fi2, as in step S14 of FIG. 2; the second information Ve2 describes the authenticity characteristics of the fingerprint image Fi2. Finally, the recognition unit 15 generates a recognition result Vre according to the first information Ve1, the second information Ve2, or the first information Ve1 together with the second information Ve2, to indicate that authentication has succeeded or failed, as in step S16 of FIG. 2.
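The verification flow of steps S10 to S16 can be summarized in a short sketch; `sensor_read` and `recognizer` are placeholders for the sensor and the recognition unit, and the extraction functions are the ones sketched earlier.

```python
def verify(sensor_read, memory, recognizer) -> int:
    """End-to-end verification corresponding to steps S10-S16 of FIG. 2."""
    fi2 = sensor_read()                               # S10: sense the fingerprint
    _, ve1 = extract_first_information(fi2)           # S12: first information Ve1
    ve2 = extract_second_information(fi2)             # S14: second information Ve2
    return recognizer(ve1, ve2, memory.en1, memory.en2)   # S16: recognition result Vre
```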

In one embodiment, the recognition unit 15 generates the recognition result Vre according to a difference D1 between the first information Ve1 and the first template information En1 and a difference D2 between the second information Ve2 and the second template information En2. The recognition unit 15 may use, but is not limited to, one of Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, standardized Euclidean distance, Mahalanobis distance, and Hamming distance to determine the difference D1 and the difference D2. The differences D1 and D2 are then multiplied by weights W1 and W2, respectively, and added to produce a sum, where W1 and W2 are non-zero values. If the sum is less than a preset value TH1, the generated recognition result Vre is the value "1", indicating that authentication succeeded. Conversely, if the sum is greater than the preset value TH1, the generated recognition result Vre is the value "0", indicating that authentication failed.
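A sketch of this weighted-distance rule, assuming Ve1/En1 and Ve2/En2 are fixed-length vectors and using Euclidean distance (one of the listed options); the default weights and threshold are illustrative.

```python
import numpy as np

def weighted_distance_decision(ve1, en1, ve2, en2,
                               w1: float = 0.5, w2: float = 0.5,
                               th1: float = 1.0) -> int:
    d1 = np.linalg.norm(np.asarray(ve1) - np.asarray(en1))   # difference D1
    d2 = np.linalg.norm(np.asarray(ve2) - np.asarray(en2))   # difference D2
    total = w1 * d1 + w2 * d2                                 # W1, W2 are non-zero weights
    return 1 if total < th1 else 0    # 1 = authentication succeeded, 0 = failed
```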

In another embodiment, the recognition unit 15 includes a classifier 151, which may be implemented in hardware or software and performs classification using machine learning. The classifier 151 performs a recognition step to generate the recognition result Vre. The recognition step includes, but is not limited to, judging the similarity between the combination of the first information Ve1 and the second information Ve2 and the combination of the first template information En1 and the second template information En2. If the similarity is greater than a preset value TH2, the generated recognition result Vre is the value "1", indicating that authentication succeeded; otherwise, the generated recognition result Vre is the value "0", indicating that authentication failed.
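A sketch of the classifier-based recognition step with scikit-learn's SVC; concatenating the four vectors into a single input and reading the pass-class probability as the similarity are illustrative design choices, not prescribed by the patent.

```python
import numpy as np
from sklearn.svm import SVC

def recognize_with_classifier(ve1, ve2, en1, en2, clf: SVC, th2: float = 0.5) -> int:
    """Score the similarity of (Ve1, Ve2) against (En1, En2) with a trained classifier."""
    x = np.concatenate([np.ravel(ve1), np.ravel(ve2),
                        np.ravel(en1), np.ravel(en2)]).reshape(1, -1)
    similarity = clf.predict_proba(x)[0, 1]   # requires SVC(probability=True)
    return 1 if similarity > th2 else 0       # 1 = pass, 0 = fail
```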

In one embodiment, the recognition unit 15 also includes the classifier 151. The recognition unit 15 first uses, but is not limited to, one of Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, standardized Euclidean distance, Mahalanobis distance, and Hamming distance to judge whether the first information Ve1 is similar to the first template information En1. If not, the recognition unit 15 generates the recognition result Vre as "0", indicating that authentication failed. If so, the recognition unit 15 then uses the classifier 151 to perform the aforementioned recognition step to generate the recognition result Vre. The benefit of doing this is that resources are saved: if the first information Ve1 is judged not to be similar to the first template information En1, that is already sufficient to conclude that the person to be verified is not the enrolled user, so there is no need to continue with the recognition step.
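A sketch of this resource-saving variant, reusing the classifier function from the previous sketch; the distance threshold is an illustrative assumption.

```python
import numpy as np

def recognize_two_stage(ve1, ve2, en1, en2, clf, d_max: float = 1.0) -> int:
    # Coarse check first: is Ve1 close enough to En1? If not, fail immediately
    # and skip the (more expensive) classifier-based recognition step.
    if np.linalg.norm(np.asarray(ve1) - np.asarray(en1)) > d_max:
        return 0
    return recognize_with_classifier(ve1, ve2, en1, en2, clf)
```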

In another embodiment, the recognition unit 15 also includes the classifier 151. The recognition unit 15 first uses, but is not limited to, one of Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, standardized Euclidean distance, Mahalanobis distance, and Hamming distance to judge whether the second information Ve2 is similar to the second template information En2. If not, the recognition unit 15 generates the recognition result Vre as "0", indicating that authentication failed. If so, the recognition unit 15 then uses the classifier 151 to perform the aforementioned recognition step to generate the recognition result Vre.

If the first feature extraction unit 12 uses a trained deep-learning model, it must be trained in advance to acquire the ability to extract features. Fingerprint recognition is used as the example below. During training, fingerprint images of many different people are provided to a training program T1, which has the same model architecture as the first feature extraction unit 12. From these fingerprint images and their corresponding owners, the training program T1 obtains a set of coefficients for extracting features from fingerprint images. This process essentially tells the training program T1 which fingerprint image represents whom, so that it learns how to classify. The first feature extraction unit 12 then operates with this set of coefficients to extract features from a fingerprint image and produce the first information. The first information describes the uniqueness of the fingerprint; it can be understood as a mathematical description of what the fingerprint looks like. If the first feature extraction unit 12 instead uses a computer-vision method to extract features, it uses predefined features to describe what the biometric feature looks like, that is, its uniqueness.

If the second feature extraction unit 13 uses a trained deep-learning model, it likewise must be trained in advance to acquire the ability to extract features. Fingerprint recognition is again used as the example. During training, a large number of real fingerprint images and fake fingerprint images must be prepared and provided to a training program T2, which has the same model architecture as the second feature extraction unit 13. The large number of real fingerprint images is obtained by sensing the fingerprints of many different people, all captured from living bodies. Forming those fingerprints on a material such as silicone and sensing them yields the large number of fake fingerprint images. By telling the training program T2 which images are real fingerprints and which are fake, the training program T2 learns and obtains a set of coefficients for distinguishing real fingerprints from fake ones. This process essentially teaches the training program T2 how to recognize real and fake fingerprints. The second feature extraction unit 13 then uses this set of coefficients to extract features from a fingerprint image and produce the second information. The second information could be used to directly judge whether a fingerprint is real or fake, but that is not what the present invention does. The second feature extraction unit 13 of the present invention mainly takes, as the second information, the set of feature values produced after feature extraction and before the real/fake judgment. The second information therefore describes the authenticity characteristics of the fingerprint; it can be understood as a mathematical description of the fingerprint's genuineness. If the second feature extraction unit 13 instead uses a computer-vision method to extract features, it uses predefined features to describe the authenticity characteristics of the biometric feature.
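A minimal PyTorch sketch of training program T2, assuming small grayscale image tensors of shape (N, 1, 64, 64); the network layout, optimizer, and hyperparameters are illustrative assumptions, not the patent's.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SmallLivenessCNN(nn.Module):
    """Small CNN that learns to separate real fingerprints (label 1) from fakes (label 0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                 # feature extraction part
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(               # classification part
            nn.Flatten(), nn.Linear(16 * 16 * 16, 32), nn.ReLU(), nn.Linear(32, 2)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def train_t2(real_imgs: torch.Tensor, fake_imgs: torch.Tensor, epochs: int = 10) -> SmallLivenessCNN:
    x = torch.cat([real_imgs, fake_imgs])              # (N, 1, 64, 64) assumed
    y = torch.cat([torch.ones(len(real_imgs), dtype=torch.long),
                   torch.zeros(len(fake_imgs), dtype=torch.long)])
    loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)
    model, loss_fn = SmallLivenessCNN(), nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model    # its learned weights are the "set of coefficients" of unit 13
```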

By providing a large number of images to the first feature extraction unit 12 and the second feature extraction unit 13 in advance, a large amount of first information and second information can be obtained for the machine-learning model of the classifier 151 to learn to classify, giving the classifier 151 the ability to judge whether identity verification succeeds or fails. Taking fingerprint recognition as an example, a training program T3 with the same model architecture as the classifier 151 is prepared first. A large number of combinations of first information and second information are then provided to the training program T3, together with labels indicating which combinations represent successful verification and which represent failed verification. For example, the training program T3 is told that first information and second information both produced from real fingerprint images represent a successful verification (pass), while first information produced from a real fingerprint image combined with second information produced from a fake fingerprint represents a failed verification (fail). From this process the training program T3 learns how to judge whether a combination of first information and second information represents verification success or failure.
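A sketch of training program T3 with an SVM, assuming each row of `first_infos` and `second_infos` is one sample's information vector and `labels` marks pass (1) or fail (0); the concatenation and the RBF kernel are illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

def train_t3(first_infos: np.ndarray, second_infos: np.ndarray, labels: np.ndarray) -> SVC:
    x = np.concatenate([first_infos, second_infos], axis=1)   # combine the two kinds of information
    clf = SVC(kernel="rbf", probability=True)                 # probability=True enables predict_proba
    clf.fit(x, labels)                                        # learn pass/fail from labeled combinations
    return clf
```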

As can be understood from the above descriptions of training and recognition, the present invention uses the first information produced by the first feature extraction unit 12 to judge whether the currently sensed fingerprint resembles the enrolled fingerprint, and uses the second information produced by the second feature extraction unit 13 to judge whether the authenticity characteristics of the currently sensed fingerprint are close to those of the enrolled fingerprint, in order to decide whether identity verification passes.

FIG. 3 provides an embodiment illustrating the architecture of the CNN 131, which mainly includes a feature extraction part 1311 and a classification part 1312. The image A is processed by the feature extraction part 1311 and the classification part 1312, and the classification part 1312 produces embedding feature information representing the authenticity characteristics of the biometric feature (for example, a fingerprint) shown in the image A. The embedding feature information may be a set of values, for example "1101000". Note that the second feature extraction unit 13 does not use the classification part 1312 to classify and produce a genuine/fake recognition result; instead, it takes the embedding feature information produced by the classification part 1312. If a feature extraction unit were used to directly judge whether a biometric feature to be verified is genuine or fake, accuracy could be a problem. Taking fingerprint recognition as an example, if fake fingerprints made of ten materials are provided to train that feature extraction unit, it cannot accurately judge genuine or fake for fake fingerprints made of materials outside those ten. The second feature extraction unit 13 of the present invention does not directly judge whether the biometric feature is genuine or fake; it obtains embedding feature information describing the authenticity characteristics of the biometric feature and compares it with the enrolled second template information in the memory. Therefore, a new material that the second feature extraction unit 13 has not learned has little impact on the accuracy of identity verification in the present invention.
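A sketch of taking the embedding feature from the classification part instead of the final real/fake output, reusing the SmallLivenessCNN layout from the T2 sketch; cutting off only the last linear layer (yielding a 32-dimensional embedding) is an illustrative assumption.

```python
import torch
import torch.nn as nn

class EmbeddingExtractor(nn.Module):
    """Wraps a trained liveness CNN and outputs the embedding feature (second information)."""
    def __init__(self, trained_model: nn.Module):
        super().__init__()
        self.features = trained_model.features                # feature extraction part 1311
        # Keep the classification part 1312 except its final linear layer, so the
        # output is the embedding vector rather than the real/fake logits.
        self.partial_classifier = nn.Sequential(*list(trained_model.classifier)[:-1])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.partial_classifier(self.features(x))      # embedding feature of image A
```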

The foregoing description of the preferred embodiments of the present invention is provided for the purpose of illustration and is not intended to limit the present invention to the precise form disclosed; modifications and variations based on the above teachings or learned from the embodiments are possible. The embodiments were chosen and described to explain the principles of the present invention and to enable those skilled in the art to utilize the present invention in various embodiments and practical applications. The technical scope of the present invention is intended to be defined by the appended claims and their equivalents.

10: Identification system
11: Sensor
12: First feature extraction unit
13: Second feature extraction unit
131: Convolutional neural network
14: Memory
15: Recognition unit
151: Classifier

Claims (20)

1. A biometric identification system, comprising: a sensor for sensing a first biometric feature to generate an image; a first feature extraction unit, coupled to the sensor, for generating first information according to the image, the first information describing the uniqueness of the first biometric feature; a second feature extraction unit, coupled to the sensor, for generating second information according to the image, the second information describing the authenticity characteristics of the first biometric feature; a recognition unit, coupled to the first feature extraction unit and the second feature extraction unit, for generating a recognition result according to the second information, or according to the first information and the second information; and a memory, coupled to the first feature extraction unit, the second feature extraction unit, and the recognition unit, for storing first template information and second template information; wherein the recognition unit includes a classifier for performing a recognition step that judges the similarity between the combination of the first information and the second information and the combination of the first template information and the second template information to generate the recognition result.

2. The identification system of claim 1, wherein the recognition result represents passing or failing identity authentication.

3. The identification system of claim 1, wherein the sensor is a fingerprint sensor and the first biometric feature is a fingerprint.

4. The identification system of claim 1, wherein the classifier includes a support vector machine or a neural network.

5. The identification system of claim 1, wherein the recognition unit first judges whether the first information is similar to the first template information and, if so, uses the classifier to perform the recognition step.

6. The identification system of claim 1, wherein the recognition unit first judges whether the second information is similar to the second template information and, if so, uses the classifier to perform the recognition step.

7. The identification system of claim 5 or 6, wherein the recognition unit uses one of Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, standardized Euclidean distance, Mahalanobis distance, and Hamming distance to judge whether the first information is similar to the first template information or whether the second information is similar to the second template information.

8. The identification system of claim 1, wherein the first feature extraction unit or the second feature extraction unit includes a deep learning model.

9. The identification system of claim 1, wherein the first feature extraction unit or the second feature extraction unit includes a convolutional neural network.

10. The identification system of claim 1, wherein the first feature extraction unit or the second feature extraction unit uses a FAST, AGAST, SIFT, SURF, KAZE, AKAZE, local binary pattern, local phase quantization, or histogram of oriented gradients algorithm to obtain the first information or the second information.

11. A biometric identification method, comprising the following steps: A. sensing a first biometric feature to generate an image; B. generating first information according to the image, wherein the first information describes the uniqueness of the first biometric feature; C. obtaining second information from the image, wherein the second information describes the authenticity characteristics of the first biometric feature; D. obtaining first template information and second template information from a memory; and E. generating a recognition result according to the second information, or according to the first information and the second information; wherein step E includes using a classifier to perform a recognition step that judges the similarity between the combination of the first information and the second information and the combination of the first template information and the second template information to generate the recognition result.

12. The identification method of claim 11, further comprising judging, according to the recognition result, whether identity authentication passes or fails.

13. The identification method of claim 11, wherein the first biometric feature is a fingerprint.

14. The identification method of claim 11, further comprising implementing the classifier with a support vector machine or a neural network.

15. The identification method of claim 11, wherein step E further includes: judging whether the first information is similar to the first template information; and if so, performing the recognition step.

16. The identification method of claim 11, wherein step E further includes: judging whether the second information is similar to the second template information; and if so, performing the recognition step.

17. The identification method of claim 15 or 16, wherein step E further includes using one of Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, standardized Euclidean distance, Mahalanobis distance, and Hamming distance to judge whether the first information is similar to the first template information or whether the second information is similar to the second template information.

18. The identification method of claim 11, wherein step B includes using a FAST, AGAST, SIFT, SURF, KAZE, AKAZE, local binary pattern, local phase quantization, or histogram of oriented gradients algorithm to obtain the first information.

19. The identification method of claim 11, wherein step B includes using a deep learning model to obtain the first information or the second information.

20. The identification method of claim 11, wherein step B includes using a convolutional neural network to obtain the first information or the second information.
TW109122271A 2020-07-01 2020-07-01 Biometric identification system and identification method thereof TWI792017B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW109122271A TWI792017B (en) 2020-07-01 2020-07-01 Biometric identification system and identification method thereof
CN202010666402.0A CN112001233B (en) 2020-07-01 2020-07-13 Biological feature identification system and identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109122271A TWI792017B (en) 2020-07-01 2020-07-01 Biometric identification system and identification method thereof

Publications (2)

Publication Number Publication Date
TW202203055A TW202203055A (en) 2022-01-16
TWI792017B true TWI792017B (en) 2023-02-11

Family

ID=73467956

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109122271A TWI792017B (en) 2020-07-01 2020-07-01 Biometric identification system and identification method thereof

Country Status (2)

Country Link
CN (1) CN112001233B (en)
TW (1) TWI792017B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI794696B (en) * 2020-12-14 2023-03-01 晶元光電股份有限公司 Optical sensing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106663204A (en) * 2015-07-03 2017-05-10 指纹卡有限公司 Apparatus and computer-implemented method for fingerprint-based authentication
US20170277938A1 (en) * 2016-03-23 2017-09-28 Intel Corporation Automated facial recognition systems and methods
US20180260625A1 (en) * 2007-06-11 2018-09-13 Jeffrey A. Matos Apparatus and method for verifying the identity of an author and a person receiving information
US20200178871A1 (en) * 2016-02-17 2020-06-11 Panasonic Intellectual Property Management Co., Ltd. Biological information detection device including calculation circuit that generates signal of biological information

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4599320B2 (en) * 2006-03-13 2010-12-15 富士通株式会社 Fingerprint authentication device, biometric finger determination device, and biometric finger determination method
CN101162499A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Method for using human face formwork combination to contrast
JP5914995B2 (en) * 2011-06-06 2016-05-11 セイコーエプソン株式会社 Biological identification device and biological identification method
CN103136504B (en) * 2011-11-28 2016-04-20 汉王科技股份有限公司 Face identification method and device
CN102708360A (en) * 2012-05-09 2012-10-03 深圳市亚略特生物识别科技有限公司 Method for generating and automatically updating fingerprint template
CN103902961B (en) * 2012-12-28 2017-02-15 汉王科技股份有限公司 Face recognition method and device
CN104042220A (en) * 2014-05-28 2014-09-17 上海思立微电子科技有限公司 Device and method for detecting living body fingerprint
CN105740750B (en) * 2014-12-11 2019-07-30 深圳印象认知技术有限公司 Fingerprint In vivo detection and recognition methods and device
WO2018082011A1 (en) * 2016-11-04 2018-05-11 深圳市汇顶科技股份有限公司 Living fingerprint recognition method and device
CN106657056A (en) * 2016-12-20 2017-05-10 深圳芯启航科技有限公司 Biological feature information management method and system
EP3425556A1 (en) * 2017-05-03 2019-01-09 Shenzhen Goodix Technology Co., Ltd. Vital sign information determination method, identity authentication method and device
CN107358145A (en) * 2017-05-20 2017-11-17 深圳信炜科技有限公司 Imaging sensor and electronic installation
WO2018213946A1 (en) * 2017-05-20 2018-11-29 深圳信炜科技有限公司 Image recognition method, image recognition device, electronic device, and computer storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180260625A1 (en) * 2007-06-11 2018-09-13 Jeffrey A. Matos Apparatus and method for verifying the identity of an author and a person receiving information
CN106663204A (en) * 2015-07-03 2017-05-10 指纹卡有限公司 Apparatus and computer-implemented method for fingerprint-based authentication
US20200178871A1 (en) * 2016-02-17 2020-06-11 Panasonic Intellectual Property Management Co., Ltd. Biological information detection device including calculation circuit that generates signal of biological information
US20170277938A1 (en) * 2016-03-23 2017-09-28 Intel Corporation Automated facial recognition systems and methods

Also Published As

Publication number Publication date
CN112001233B (en) 2024-08-20
CN112001233A (en) 2020-11-27
TW202203055A (en) 2022-01-16
