WO2018001092A1 - Face recognition method and apparatus - Google Patents
Face recognition method and apparatus
- Publication number
- WO2018001092A1 (PCT/CN2017/088219)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- face
- key point
- face image
- point coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Definitions
- The present disclosure relates to the field of communications technologies and, in particular, to a face recognition method and apparatus.
- As a biometric feature, the human face has the advantages of being impossible to lose, difficult to copy, convenient to collect, unique, and unobtrusive to capture; it has therefore attracted growing attention and entered many areas of social life. Compared with recognition systems based on other human biometrics such as the retina, fingerprint, iris, voice, or palm print, face recognition has broad application prospects because of its convenience and user-friendliness. Applications such as face recognition access-control and attendance systems, face recognition ATMs, intelligent video alarm systems, identification of offenders for public security, video conferencing, and medical applications have made face recognition a research hotspot in the fields of pattern recognition and content retrieval.
- Feature extraction and selection are the core issues of face recognition and the basis for subsequent correct recognition.
- The face image acquisition process is often disturbed by factors such as illumination changes and face pose changes.
- The traditional face recognition process extracts facial features and then compares them with face samples to determine whether they belong to the same person. Simply extracting facial features for comparison in this way yields low accuracy and poor precision in face recognition.
- An object of the present disclosure is to provide a face recognition method that solves the problem in the related art that simply extracting facial features for comparison results in low accuracy and poor precision of face recognition.
- An embodiment of the present disclosure provides a face recognition method, including: acquiring a user face image; performing key point coordinate detection on the face image, and correcting the face image using the detected key point coordinates; recalculating the key point coordinates on the corrected face image, and extracting user face features according to the recalculated key point coordinates; and performing face verification using the face features and the recalculated key point coordinates.
- An embodiment of the present disclosure further provides a face recognition device, including: an acquisition unit, configured to acquire a user face image;
- a correction unit, configured to perform key point coordinate detection on the face image and correct the face image using the detected key point coordinates; an extraction unit, configured to recalculate the key point coordinates on the corrected face image and extract user face features according to the recalculated key point coordinates; and a verification unit, configured to perform face verification using the face features and the recalculated key point coordinates.
- Embodiments of the present disclosure also provide a computer storage medium storing one or more programs executable by a computer, the one or more programs, when executed by the computer, causing the computer to perform the face recognition method described above.
- The face recognition method and apparatus of the present disclosure acquire a user face image; perform key point coordinate detection on the face image and correct the face image using the detected key point coordinates; recalculate the key point coordinates on the corrected face image and extract user face features according to the recalculated key point coordinates; and perform face verification using the face features and the recalculated key point coordinates.
- FIG. 1 illustrates a face recognition method according to an embodiment of the present disclosure;
- FIG. 2 illustrates another face recognition method according to an embodiment of the present disclosure;
- FIG. 3 illustrates a face recognition device according to an embodiment of the present disclosure.
- An embodiment of the present disclosure provides a face recognition method, including the following steps.
- In step S101, a user face image is acquired.
- The device is activated to acquire a face image.
- The user face image may be obtained by photographing the user with a camera or by selecting an already-captured image of the user's face, which is not limited herein.
- In step S102, key point coordinate detection is performed on the face image, and the face image is corrected using the detected key point coordinates.
- Key point coordinate detection on the face image may be performed by first setting a reference coordinate system, placing the face image in the set reference coordinate system, and then detecting the key point coordinates.
- The key points may be representative points on the face image, such as the eyes, eyebrows, nose, mouth, and ears, or characteristic points such as dimples and moles.
- The face image is corrected using the key point coordinates so that subsequent comparisons are more accurate. For example, in the acquired face image the face may be slightly tilted or turned sideways, so to improve recognition accuracy the face needs to be straightened before verification. Correcting the face image may include translating or rotating the face image in the set reference coordinate system, based on the detected key point coordinates, so that the face is presented in the positive (upright) direction, and then applying further corrections to the face image, for example removing blurred points.
- In step S103, the key point coordinates are recalculated for the corrected face image, and user face features are extracted according to the recalculated key point coordinates.
- Since the face image has been translated or rotated in the reference coordinate system, the key point coordinates on the face image have changed, so the key point coordinates are recalculated at this point. The recalculation may be implemented by applying, to the key point coordinates detected in step S102, the same transformation that was performed during the face correction in step S102, or by re-detecting the key points in the reference coordinate system. The user face features are then extracted according to the recalculated key point coordinates. The user face features are not limited to one; multiple user face features can be extracted for subsequent verification to improve recognition accuracy.
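As an illustrative sketch (not part of the claimed method), reusing the correction transform to recalculate the key point coordinates can be written as follows; the affine matrix construction and the sample coordinates below are hypothetical:

```python
import math

def make_rotation_translation(angle_deg, tx, ty):
    """Build a 2x3 affine matrix [R | t] describing the correction applied to the image."""
    a = math.radians(angle_deg)
    return [[math.cos(a), -math.sin(a), tx],
            [math.sin(a),  math.cos(a), ty]]

def transform_points(matrix, points):
    """Apply the same affine transform to the key point coordinates
    detected before correction, instead of re-detecting them."""
    out = []
    for x, y in points:
        nx = matrix[0][0] * x + matrix[0][1] * y + matrix[0][2]
        ny = matrix[1][0] * x + matrix[1][1] * y + matrix[1][2]
        out.append((nx, ny))
    return out

# Hypothetical key points and a 90-degree correction with no translation.
keypoints = [(10.0, 0.0), (0.0, 10.0)]
m = make_rotation_translation(90.0, 0.0, 0.0)
new_keypoints = transform_points(m, keypoints)
```

Applying the recorded transform avoids a second detection pass; re-detecting on the corrected image is the alternative the text mentions.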
- The present disclosure preferentially extracts user face features of relatively high importance, such as the features of the facial organs (eyes, eyebrows, nose, and mouth).
- In step S104, face verification is performed using the face features and the recalculated key point coordinates.
- The extracted features are compared with the face samples in a face database to perform face verification.
- The present disclosure uses a Support Vector Machine (SVM) algorithm, trained in advance on a large number of training samples, to obtain an SVM classifier.
- User face verification is performed using the SVM classifier.
- The SVM classifier can effectively utilize the information carried by different parts of the face and the importance of that information, comprehensively evaluating the similarity of each part of the face to obtain the final verification result.
- The face recognition method of the present disclosure includes: acquiring a user face image; performing key point coordinate detection on the face image and correcting the face image using the detected key point coordinates; recalculating the key point coordinates on the corrected face image and extracting user face features according to the recalculated key point coordinates; and performing face verification using the face features and the recalculated key point coordinates. In this way, by detecting the key point coordinates of the face image, correcting the face image through the key point coordinates, extracting the user face features from the corrected face image, and then performing face verification, the accuracy and precision of face recognition are improved.
- An embodiment of the present disclosure provides a face recognition method, including the following steps.
- In step S201, a user face image is acquired.
- In step S202, key point coordinate detection is performed on the face image, the face image is aligned in a preset direction using the detected key point coordinates, and the aligned face image is then normalized.
- Step S202 is not limited to the above implementation. Embodiments may also be implemented by setting a coordinate system on the face image and comparing that coordinate system with a preset reference coordinate system; that is, step S202 may be replaced with this alternative, and the same alternative may likewise replace the corresponding feature in step S102.
- Key point coordinate detection on the face image may be performed by first setting a reference coordinate system, placing the face image in the set reference coordinate system, and then detecting the key point coordinates.
- The key points may be representative points on the face image, such as the eyes, eyebrows, nose, mouth, and ears, or characteristic points such as dimples and moles.
- The face image is first aligned. For example, in the acquired face image the face may be slightly tilted or turned sideways; the face image is therefore straightened in the reference coordinate system so that the face is presented in the preset positive direction.
- The alignment may be performed based on the extracted key point coordinates, calculated in the reference coordinate system, with the face image presented in the positive direction after translation or rotation. For example, if the coordinates of the two eyes are obtained and the line between the eyes is found to be at an angle to the X axis of the reference coordinate system, the face image is tilted; the face is rotated by the corresponding angle so that the line between the eyes is parallel to the X axis, and then translated so that the center of the face coincides with the origin, completing the face alignment.
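The eye-line alignment described above can be sketched as follows (an illustrative example only; the eye coordinates are hypothetical and would come from the key point detector in practice):

```python
import math

def eye_alignment(left_eye, right_eye):
    """Return the rotation angle (radians) that makes the line between the
    eyes parallel to the X axis, plus the rotated eye coordinates."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = -math.atan2(dy, dx)  # rotate by the negative of the eye-line angle

    def rot(p):
        x, y = p
        return (x * math.cos(angle) - y * math.sin(angle),
                x * math.sin(angle) + y * math.cos(angle))

    return angle, rot(left_eye), rot(right_eye)

# Hypothetical eye coordinates: the eye line is at 45 degrees to the X axis.
angle, le, re = eye_alignment((0.0, 0.0), (10.0, 10.0))
# After rotation the two eyes share the same y coordinate.
```

In a full implementation the same rotation (and a subsequent translation of the face center to the origin) would be applied to the whole image.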
- The face image is then normalized, for example to remove blurred points. Normalization may include coordinate centering, shear (x-shearing) normalization, scaling normalization, and rotation normalization.
- In step S203, the key point coordinates are recalculated for the corrected face image, and user face features are extracted according to the recalculated key point coordinates.
- In step S204, face verification is performed using the face features and the recalculated key point coordinates.
- Step S201, step S203, and step S204 are the same as steps S101, S103, and S104 in the first embodiment of the present disclosure and are not described again here.
- Step S203 includes: recalculating the key point coordinates on the corrected face image, and dividing face feature areas using the coordinate positions of the recalculated key points to extract the user face features.
- Because the face image has been corrected, the positions on the user face image have changed, so the key point coordinates need to be recalculated in the reference coordinate system. Using the coordinate positions of the recalculated key points, face feature areas are divided on the face image, and the user face features are extracted within the divided face feature areas.
- The face feature areas mainly include a first feature area and a second feature area. The first feature area is the area composed of the eyebrows, eyes, and nose and their surrounding parts on the face image, and the second feature area is the area composed of the mouth and its surrounding part on the face image.
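As an illustrative sketch, the feature areas can be divided by taking a bounding box around the relevant key points and expanding it to cover the surrounding parts; the key point coordinates and margin below are hypothetical:

```python
def bounding_box(points, margin):
    """Axis-aligned box around a set of key points, expanded by a margin
    so the surrounding part of the face is included in the feature area."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Hypothetical key point coordinates after correction.
brows_eyes_nose = [(30, 40), (70, 40), (50, 60)]  # first feature area
mouth = [(40, 80), (60, 80)]                      # second feature area

area1 = bounding_box(brows_eyes_nose, margin=10)
area2 = bounding_box(mouth, margin=10)
```

The resulting boxes would then be cropped from the corrected face image before descriptor extraction.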
- Step S203 further includes: extracting feature one of the first feature area, feature two of the second feature area, the key point position coordinates of the facial organs as feature three, and the key point position coordinates of the outer contour of the face as feature four.
- Four features are extracted here for face verification, but this is not limiting; more features and feature points may also be used.
- The face features include feature one, feature two, feature three, and feature four. Feature one is the Gabor feature of the first feature area composed of the eyebrows, eyes, and nose and their surrounding parts on the face image; feature two is the Gabor feature of the second feature area composed of the mouth and its surrounding part; feature three is the recalculated coordinate positions of the key points of the facial organs; and feature four is the coordinate positions of the key points of the outer contour of the face on the face image, such as the coordinate positions of the cheeks and chin.
- In this embodiment the first feature area and the second feature area are characterized by Gabor features, but this is not limiting; they may also be characterized by LBP or HOG features. Here the two areas use the same type of feature (both Gabor features), but different features may also be used, for example a Gabor feature for the first feature area and an LBP feature for the second, or other combinations.
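The LBP alternative mentioned above can be sketched as the basic 3x3 local binary pattern; this uses one common bit ordering, and practical LBP descriptors additionally aggregate the per-pixel codes into histograms over each feature area:

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern for the centre pixel of a 3x3 patch:
    compare the 8 neighbours to the centre clockwise from the top-left,
    setting a bit where the neighbour is >= the centre. Returns 0..255."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = patch[1][1]
    code = 0
    for di, dj in offsets:
        code = (code << 1) | (1 if patch[1 + di][1 + dj] >= centre else 0)
    return code

# Hypothetical grayscale patch; only the four neighbours 9, 6, 7, 8 are >= 5.
patch = [[1, 2, 3],
         [8, 5, 4],
         [7, 6, 9]]
code = lbp_code(patch)
```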
- Step S204 includes: comparing feature one, feature two, feature three, and feature four with a preset face sample, respectively, to obtain similarity one of feature one, similarity two of feature two, similarity three of feature three, and similarity four of feature four; and classifying the four-dimensional feature vector composed of similarity one, similarity two, similarity three, and similarity four using a preset algorithm, thereby calculating and verifying whether the user face and the preset face sample belong to the same person.
- In step S204, after the features of each part of the face are extracted, that is, after feature one, feature two, feature three, and feature four are extracted, each feature is compared with the feature of the corresponding part of the face sample to obtain similarity one of feature one, similarity two of feature two, similarity three of feature three, and similarity four of feature four.
- Similarity one, similarity two, similarity three, and similarity four can be compared and calculated at the same time or separately; the order is not significant.
- The four-dimensional feature vector composed of similarity one, similarity two, similarity three, and similarity four is then classified, and whether the user face and the preset face sample belong to the same person is calculated.
- Similarity one, similarity two, similarity three, and similarity four may be cosine similarities, but this is not limiting; Euclidean distance similarity, Mahalanobis distance similarity, or Hamming distance similarity may also be used.
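For example, the cosine similarity between a probe feature vector and a sample feature vector is the dot product divided by the product of the vector norms (the vectors below are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical feature vectors for a probe region and a sample region.
sim = cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # parallel vectors
```

Parallel vectors give a similarity of 1 and orthogonal vectors give 0, so each part-wise similarity lands in a bounded range before classification.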
- Similarity one, similarity two, similarity three, and similarity four are combined into one four-dimensional feature vector, and the four-dimensional feature vector is classified using a preset SVM classifier, where the SVM classifier is trained in advance on a large number of training samples using the SVM algorithm.
- The SVM classifier can effectively utilize the information carried by different parts of the face and the importance of that information, comprehensively evaluating the similarity of each part of the face to obtain the final verification result.
- In this embodiment the four-dimensional feature vector composed of the four feature similarities is classified, but the present disclosure is not limited thereto. In other embodiments, five-dimensional, six-dimensional, or higher-dimensional feature vectors may be extracted as needed for face recognition verification.
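The decision step over the similarity vector can be sketched as follows. The disclosure classifies this vector with a pre-trained SVM; for a linear SVM the decision reduces to the sign of a weighted sum, and the weights and bias below are hypothetical stand-ins for trained parameters:

```python
def verify(similarities, weights, bias):
    """Decision step over the part-wise similarity vector. A linear SVM's
    decision function is sign(w . s + b); the weights encode how much each
    face part contributes to the final verdict. Parameters are hypothetical
    stand-ins for values learned from many training samples."""
    score = sum(w * s for w, s in zip(weights, similarities)) + bias
    return score >= 0  # True: same person, False: different person

# Four part-wise similarities (features one..four) for hypothetical probes.
same = verify([0.9, 0.8, 0.85, 0.7], weights=[1.0, 1.0, 1.0, 1.0], bias=-2.0)
diff = verify([0.2, 0.3, 0.1, 0.2], weights=[1.0, 1.0, 1.0, 1.0], bias=-2.0)
```

In practice the classifier (and any kernel) would come from a library trained on labeled same/different pairs; this sketch only shows how the four similarities jointly drive one verdict.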
- The face recognition method of the present disclosure includes: acquiring a user face image; performing key point coordinate detection on the face image and, using the detected key point coordinates, aligning the face image in the preset direction and then normalizing the aligned face image; recalculating the key point coordinates on the corrected face image and extracting user face features according to the recalculated key point coordinates; and performing face verification using the face features and the recalculated key point coordinates. In this way, by detecting the key point coordinates of the face image, correcting the face image through the key point coordinates, extracting the user face features from the corrected face image, and then performing face verification, the accuracy and precision of face recognition are improved.
- An embodiment of the present disclosure provides a face recognition apparatus, including: an obtaining unit 301, configured to acquire a user face image; a correcting unit 302, configured to perform key point coordinate detection on the face image and correct the face image using the detected key point coordinates; an extracting unit 303, configured to recalculate the key point coordinates on the corrected face image and extract user face features according to the recalculated key point coordinates; and a verification unit 304, configured to perform face verification using the face features and the recalculated key point coordinates.
- The correcting unit 302 is configured to align the face image in the preset direction using the detected key point coordinates and then normalize the aligned face image.
- The extracting unit 303 is configured to recalculate the key point coordinates on the corrected face image and divide face feature areas using the coordinate positions of the recalculated key points to extract the user face features.
- The face feature areas include a first feature area composed of the eyebrows, eyes, and nose and their surrounding parts, and a second feature area composed of the mouth and its surrounding part. The extracting unit 303 is configured to extract feature one of the first feature area, feature two of the second feature area, the key point position coordinates of the facial organs as feature three, and the key point position coordinates of the outer contour of the face as feature four.
- The verification unit 304 is configured to compare feature one, feature two, feature three, and feature four with preset face samples, respectively, to obtain similarity one of feature one, similarity two of feature two, similarity three of feature three, and similarity four of feature four; and to classify the four-dimensional feature vector composed of similarity one, similarity two, similarity three, and similarity four using a preset algorithm, calculating whether the user face and the preset face sample belong to the same person.
- The acquiring unit is configured to acquire a user face image; the correcting unit is configured to perform key point coordinate detection on the face image and correct the face image using the detected key point coordinates; the extracting unit is configured to recalculate the key point coordinates on the corrected face image and extract user face features according to the recalculated key point coordinates; and the verification unit is configured to perform face verification using the face features and the recalculated key point coordinates. In this way, by detecting the key point coordinates of the face image, correcting the face image through the key point coordinates, extracting the user face features from the corrected face image, and then performing face verification, the accuracy and precision of face recognition are improved.
- Correcting the face image using the detected key point coordinates includes: aligning the face image in a preset direction using the detected key point coordinates, and then normalizing the aligned face image.
- Recalculating the key point coordinates on the corrected face image and extracting the user face features according to the recalculated key point coordinates includes: recalculating the key point coordinates on the corrected face image, and dividing face feature areas using the coordinate positions of the recalculated key points to extract the user face features.
- The face feature areas include a first feature area composed of the eyebrows, eyes, and nose and their surrounding parts, and a second feature area composed of the mouth and its surrounding part. Recalculating the key point coordinates on the corrected face image and extracting the user face features according to the recalculated key point coordinates includes: extracting feature one of the first feature area, feature two of the second feature area, the key point position coordinates of the facial organs as feature three, and the key point position coordinates of the outer contour of the face as feature four.
- Performing face verification using the face features and the recalculated key point coordinates includes: comparing feature one, feature two, feature three, and feature four with the preset face sample, respectively, to obtain similarity one of feature one, similarity two of feature two, similarity three of feature three, and similarity four of feature four; and classifying the four-dimensional feature vector composed of similarity one, similarity two, similarity three, and similarity four using a preset algorithm, calculating whether the user face and the preset face sample belong to the same person.
- The storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a face recognition method and apparatus. The method comprises the steps of: acquiring a user face image (S101); detecting key point coordinates of the face image and correcting the face image using the detected key point coordinates (S102); recalculating the key point coordinates of the corrected face image and extracting a user face feature according to the recalculated key point coordinates (S103); and performing face verification using the face feature and the recalculated key point coordinates (S104). Detecting the key point coordinates of a face image, correcting the face image using the key point coordinates, extracting a user face feature from the corrected face image, and then performing face verification improve the accuracy and precision of face recognition.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610495337.3A CN107545220A (zh) | 2016-06-29 | 2016-06-29 | 一种人脸识别方法及装置 |
| CN201610495337.3 | 2016-06-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018001092A1 (fr) | 2018-01-04 |
Family
ID=60786458
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/088219 Ceased WO2018001092A1 (fr) | 2016-06-29 | 2017-06-14 | Procédé et appareil de reconnaissance d'un visage |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107545220A (fr) |
| WO (1) | WO2018001092A1 (fr) |
Cited By (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109583426A (zh) * | 2018-12-23 | 2019-04-05 | 广东腾晟信息科技有限公司 | 一种根据图像辨识人脸的方法 |
| CN109635752A (zh) * | 2018-12-12 | 2019-04-16 | 腾讯科技(深圳)有限公司 | 人脸关键点的定位方法、人脸图像处理方法和相关装置 |
| CN109711392A (zh) * | 2019-01-24 | 2019-05-03 | 郑州市现代人才测评与考试研究院 | 一种基于人脸识别的人才评定方法 |
| CN110263772A (zh) * | 2019-07-30 | 2019-09-20 | 天津艾思科尔科技有限公司 | 一种基于人脸关键点的人脸特征识别系统 |
| CN110781712A (zh) * | 2019-06-12 | 2020-02-11 | 上海荟宸信息科技有限公司 | 一种基于人脸检测与识别的人头空间定位方法 |
| CN110879983A (zh) * | 2019-11-18 | 2020-03-13 | 讯飞幻境(北京)科技有限公司 | 一种人脸特征关键点的提取方法和一种人脸图像合成方法 |
| CN110909766A (zh) * | 2019-10-29 | 2020-03-24 | 北京明略软件系统有限公司 | 相似度的确定方法及装置、存储介质、电子装置 |
| CN111091031A (zh) * | 2018-10-24 | 2020-05-01 | 北京旷视科技有限公司 | 目标对象选取方法和人脸解锁方法 |
| CN111209823A (zh) * | 2019-12-30 | 2020-05-29 | 南京华图信息技术有限公司 | 一种红外人脸对齐方法 |
| CN111339990A (zh) * | 2020-03-13 | 2020-06-26 | 乐鑫信息科技(上海)股份有限公司 | 一种基于人脸特征动态更新的人脸识别系统和方法 |
| CN111401152A (zh) * | 2020-02-28 | 2020-07-10 | 中国工商银行股份有限公司 | 人脸识别方法及装置 |
| WO2020215283A1 (fr) * | 2019-04-25 | 2020-10-29 | 深圳市汇顶科技股份有限公司 | Procédé de reconnaissance faciale, puce de traitement et dispositif électronique |
| CN112101127A (zh) * | 2020-08-21 | 2020-12-18 | 深圳数联天下智能科技有限公司 | 人脸脸型的识别方法、装置、计算设备及计算机存储介质 |
| CN112528939A (zh) * | 2020-12-22 | 2021-03-19 | 广州海格星航信息科技有限公司 | 一种人脸图像的质量评价方法及装置 |
| CN112651279A (zh) * | 2020-09-24 | 2021-04-13 | 深圳福鸽科技有限公司 | 基于近距离应用的3d人脸识别方法及系统 |
| CN112800819A (zh) * | 2019-11-14 | 2021-05-14 | 深圳云天励飞技术有限公司 | 一种人脸识别方法、装置及电子设备 |
| CN113515977A (zh) * | 2020-04-10 | 2021-10-19 | 嘉楠明芯(北京)科技有限公司 | 人脸识别方法及系统 |
| CN113536844A (zh) * | 2020-04-16 | 2021-10-22 | 中移(成都)信息通信科技有限公司 | 人脸对比方法、装置、设备及介质 |
| CN113723214A (zh) * | 2021-08-06 | 2021-11-30 | 武汉光庭信息技术股份有限公司 | 一种人脸关键点标注方法、系统、电子设备及存储介质 |
| CN116740393A (zh) * | 2023-04-19 | 2023-09-12 | 北京旷视科技有限公司 | 人脸匹配方法、电子设备、存储介质和计算机程序产品 |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109101919B (zh) * | 2018-08-03 | 2022-05-10 | 北京字节跳动网络技术有限公司 | 用于生成信息的方法和装置 |
| CN109145783B (zh) * | 2018-08-03 | 2022-03-25 | 北京字节跳动网络技术有限公司 | 用于生成信息的方法和装置 |
| CN109376684B (zh) * | 2018-11-13 | 2021-04-06 | 广州市百果园信息技术有限公司 | 一种人脸关键点检测方法、装置、计算机设备和存储介质 |
| CN109685740B (zh) * | 2018-12-25 | 2023-08-11 | 努比亚技术有限公司 | 人脸校正的方法及装置、移动终端及计算机可读存储介质 |
| CN109685018A (zh) * | 2018-12-26 | 2019-04-26 | 深圳市捷顺科技实业股份有限公司 | 一种人证校验方法、系统及相关设备 |
| CN110728225B (zh) * | 2019-10-08 | 2022-04-19 | 北京联华博创科技有限公司 | 一种用于考勤的高速人脸搜索方法 |
| CN111382408A (zh) * | 2020-02-17 | 2020-07-07 | 深圳壹账通智能科技有限公司 | 智能化用户识别方法、装置及计算机可读存储介质 |
| CN113837020B (zh) * | 2021-08-31 | 2024-02-02 | 北京新氧科技有限公司 | 一种化妆进度检测方法、装置、设备及存储介质 |
| CN117079315A (zh) * | 2023-06-12 | 2023-11-17 | 盛视科技股份有限公司 | 人脸摆正方法、装置及计算机可读存储介质 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1831846A (zh) * | 2006-04-20 | 2006-09-13 | 上海交通大学 | 基于统计模型的人脸姿势识别方法 |
| US20090060290A1 (en) * | 2007-08-27 | 2009-03-05 | Sony Corporation | Face image processing apparatus, face image processing method, and computer program |
| CN103218609A (zh) * | 2013-04-25 | 2013-07-24 | 中国科学院自动化研究所 | 一种基于隐最小二乘回归的多姿态人脸识别方法及其装置 |
| CN103605965A (zh) * | 2013-11-25 | 2014-02-26 | 苏州大学 | 一种多姿态人脸识别方法和装置 |
| CN104036276A (zh) * | 2014-05-29 | 2014-09-10 | 无锡天脉聚源传媒科技有限公司 | 人脸识别方法及装置 |
2016
- 2016-06-29 CN CN201610495337.3A patent/CN107545220A/zh active Pending
2017
- 2017-06-14 WO PCT/CN2017/088219 patent/WO2018001092A1/fr not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN107545220A (zh) | 2018-01-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018001092A1 (fr) | Face recognition method and apparatus | |
| KR101998112B1 (ko) | Partially occluded face recognition method using facial-feature-point-based partial region designation, and recording medium and apparatus for performing the same | |
| US8655029B2 (en) | Hash-based face recognition system | |
| CN112232117A (zh) | Face recognition method and apparatus, and storage medium | |
| US9990538B2 (en) | Face recognition apparatus and method using physiognomic feature information | |
| Kaur et al. | A review on iris recognition | |
| US20200356648A1 (en) | Device and method for user authentication on basis of iris recognition | |
| WO2009041963A1 (fr) | Iris recognition using consistency information | |
| Aboshosha et al. | Score level fusion for fingerprint, iris and face biometrics | |
| Ribaric et al. | A biometric verification system based on the fusion of palmprint and face features | |
| WO2021207378A1 (fr) | Synthetic masked biometric signatures | |
| Xu et al. | Palmprint image processing and linear discriminant analysis method. | |
| Muthukumaran et al. | Face and Iris based Human Authentication using Deep Learning | |
| CN206541317U (zh) | User identification system | |
| Gawande et al. | Improving iris recognition accuracy by score based fusion method | |
| CN110298275 (zh) | Three-dimensional human ear recognition method based on key points and local features | |
| WO2016082253A1 (fr) | 3D face recognition method for entry and exit management | |
| Attallah et al. | Application of BSIF, Log-Gabor and mRMR transforms for iris and palmprint based Bi-modal identification system | |
| Patel et al. | Human identification by partial iris segmentation using pupil circle growing based on binary integrated edge intensity curve | |
| Lu et al. | Zernike moment invariants based iris recognition | |
| Gawande et al. | Novel cryptographic algorithm based fusion of multimodal biometrics authentication system | |
| Resmi et al. | Automatic 2D ear detection: A survey | |
| Pereira et al. | A method for improving the reliability of an iris recognition system | |
| Dere et al. | Biometric Accreditation Adoption using Iris and Fingerprint: A Review | |
| Henry et al. | Synergy in facial recognition extraction methods and recognition algorithms |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17819089; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17819089; Country of ref document: EP; Kind code of ref document: A1 |