WO2019033571A1 - Facial feature point detection method, apparatus and storage medium - Google Patents
Facial feature point detection method, apparatus and storage medium
- Publication number
- WO2019033571A1 (PCT/CN2017/108750)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- facial
- image
- feature points
- real
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Definitions
- the present application relates to the field of computer vision processing technologies, and in particular, to a facial feature point detecting method and apparatus, and a computer readable storage medium.
- Face recognition is a biometric recognition technology that performs user recognition based on human facial feature information. At present, face recognition has a wide range of applications and plays a very important role in areas such as access control, attendance, and identity recognition, bringing great convenience to people's lives. For face recognition, the common product approach is to train a facial feature point recognition model through deep learning and then use that model to identify facial features.
- Face recognition includes facial micro-expression recognition.
- Micro-expression recognition is widely used in psychology, advertising effect evaluation, human factors engineering and human-computer interaction. Therefore, how to accurately recognize facial micro-expression is very important.
- the industry can currently detect 5 or 68 feature points.
- the 5 feature points include the two eyeballs, the tip of the nose, and the corners of the mouth; the 68 feature points do not include the eyeballs.
- the feature points identified above are not comprehensive enough.
- the present application provides a facial feature point detecting method, device, and computer readable storage medium, the main purpose of which is to identify a more comprehensive set of feature points, which can make face recognition and facial micro-expression judgment more accurate.
- the present application provides an electronic device, including a memory, a processor, and an imaging device, wherein the memory includes a facial feature point detecting program that, when executed by the processor, implements the following steps:
- Real-time facial image acquisition step: capturing a real-time image with a camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
- Feature point identification step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points from the real-time facial image.
- the feature point identification step further comprises:
- the real-time facial image is aligned with the facial average model, and a feature extraction algorithm is used to search the real-time facial image for the t facial feature points matching the t facial feature points of the facial average model.
- the training step of the facial average model comprises:
- establishing a sample library with n face sample images, and marking t facial feature points in each face sample image, the t facial feature points including position feature points of the eyes, eyebrows, nose, mouth, and facial contour, wherein the position feature points of the eyes include position feature points of the eyeballs; and
- training a face feature recognition model using the face sample images marked with the t facial feature points to obtain a facial average model of the facial feature points.
- the present application further provides a facial feature point detecting method, the method comprising:
- Real-time facial image acquisition step: capturing a real-time image with a camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
- Feature point identification step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points from the real-time facial image.
- the feature point identification step further comprises:
- the real-time facial image is aligned with the facial average model, and a feature extraction algorithm is used to search the real-time facial image for the t facial feature points matching the t facial feature points of the facial average model.
- the training step of the facial average model comprises:
- establishing a sample library with n face sample images, and marking t facial feature points in each face sample image, the t facial feature points including position feature points of the eyes, eyebrows, nose, mouth, and facial contour, wherein the position feature points of the eyes include position feature points of the eyeballs; and
- training a face feature recognition model using the face sample images marked with the t facial feature points to obtain a facial average model of the facial feature points.
- the face feature recognition model is an ERT algorithm, and the formula is as follows: S(t+1) = S(t) + τ_t(I, S(t)), where
- each regression τ_t(·, ·) is composed of a number of regression trees;
- S(t) is the shape estimate of the current model; and
- each regression τ_t(·, ·) predicts an increment based on the input current image I and S(t).
- a part of the feature points of each of the n sample images is taken to train the first regression tree; the residual between the predicted value of the first regression tree and the true value of those feature points is used to train the second tree, and so on, until the residual between the predicted value of the Nth tree and the true value of those feature points is close to 0, at which point all the regression trees of the ERT algorithm have been obtained.
- a facial average model of the facial feature points is obtained from these regression trees.
- the feature extraction algorithm comprises: a SIFT algorithm, a SURF algorithm, an LBP algorithm, and an HOG algorithm.
- the present application further provides a computer readable storage medium including a facial feature point detecting program which, when executed by a processor, implements any of the steps of the facial feature point detecting method described above.
- the facial feature point detecting method and device and the computer readable storage medium proposed by the present application identify a more comprehensive set of feature points, including position feature points of the eyeballs, from a real-time facial image, making face recognition and the judgment of facial micro-expressions more accurate.
- FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of a facial feature point detecting method of the present application
- FIG. 2 is a block diagram of a facial feature point detecting program of FIG. 1;
- FIG. 3 is a flow chart of a preferred embodiment of a facial feature point detecting method of the present application.
- the application provides a facial feature point detecting method.
- referring to FIG. 1, it is a schematic diagram of the operating environment of a preferred embodiment of the facial feature point detecting method of the present application.
- the facial feature point detecting method is applied to an electronic device 1.
- the electronic device 1 may be a terminal device having a computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
- the electronic device 1 includes a processor 12, a memory 11, an imaging device 13, a network interface 14, and a communication bus 15.
- the camera device 13 is installed in a specific place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 12 through the network.
- Network interface 14 may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
- Communication bus 15 is used to implement connection communication between these components.
- the memory 11 includes at least one type of readable storage medium.
- the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
- the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
- the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk, a smart memory card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
- the readable storage medium of the memory 11 is generally used to store the facial feature point detecting program 10 installed on the electronic device 1, the face image sample library, and the constructed and trained facial average model and the like.
- the memory 11 can also be used to temporarily store data that has been output or will be output.
- the processor 12, in some embodiments, may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip for running program code or processing data stored in the memory 11, for example executing the facial feature point detecting program 10.
- Figure 1 shows only the electronic device 1 with the components 11-15 and the facial feature point detecting program 10, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
- the electronic device 1 may further include a user interface
- the user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones.
- the user interface may also include a standard wired interface and a wireless interface.
- the electronic device 1 may further include a display, which may also be referred to as a display screen or a display unit.
- it may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like.
- the display is used to display information processed in the electronic device 1 and to display a visualized user interface.
- the electronic device 1 further comprises a touch sensor.
- the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
- the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
- the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
- the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
- the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
- the display is stacked with the touch sensor to form a touch display, and the device detects user-triggered touch operations through the touch display.
- the electronic device 1 may further include an RF (Radio Frequency) circuit, a sensor, an audio circuit, and the like; details are not described herein.
- the memory 11, as a computer storage medium, may include an operating system and the facial feature point detecting program 10; when the processor 12 executes the facial feature point detecting program 10 stored in the memory 11, the following steps are implemented:
- Real-time facial image acquisition step: a real-time image is captured by the camera device 13, and a real-time facial image is extracted from the real-time image using a face recognition algorithm.
- the camera 13 captures a real-time image
- the camera 13 transmits the real-time image to the processor 12.
- when the processor 12 receives the real-time image, it first obtains the size of the image and creates a grayscale image of the same size; it converts the acquired color image into this grayscale image and allocates the memory space; equalizing the grayscale image histogram reduces the amount of grayscale image information and speeds up detection. The training library is then loaded to detect the face in the image and return an object containing the face information; the data on the location of the face is obtained and the number of faces is recorded; finally, the face region is obtained and saved, which completes one real-time facial image extraction.
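- as an illustration of the extraction walkthrough above, the following is a minimal sketch using OpenCV, with its bundled Haar cascade standing in for the "training library"; the detector parameters (1.1, 5) are illustrative choices, not values from this application.

```python
import cv2

def extract_realtime_faces(frame):
    # Convert the captured color image into a grayscale image of the same size.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Equalize the histogram to reduce grayscale information and speed up detection.
    gray = cv2.equalizeHist(gray)
    # Load the pretrained cascade (the "training library") and detect faces.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"faces detected: {len(faces)}")  # record the number of faces
    # Crop and return each face region (the real-time facial image).
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]

cap = cv2.VideoCapture(0)   # the camera device
ok, frame = cap.read()      # one real-time image
if ok:
    face_images = extract_realtime_faces(frame)
```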
- the face recognition algorithm for extracting a real-time facial image from the real-time image may also be a geometric feature-based method, a local feature analysis method, an eigenface method, an elastic model-based method, a neural network method, or the like.
- Feature point identification step: input the real-time facial image into a pre-trained facial average model, and use the facial average model to identify t facial feature points from the real-time facial image.
- a sample library with n face sample images is established, and t facial feature points are marked in each face sample image, the t facial feature points including position feature points of the eyes, eyebrows, nose, mouth, and facial contour, wherein the position feature points of the eyes include position feature points of the eyeballs.
- a sample library having n face images is created, and t facial feature points are manually marked in each face image, and the position feature points of the eye include: a position feature point of the eyelid and a position feature point of the eyeball.
- a face feature recognition model is trained using the face sample images marked with the t facial feature points to obtain a facial average model of the facial feature points.
- the face feature recognition model is an Ensemble of Regression Trees (ERT) algorithm, which is expressed as follows: S(t+1) = S(t) + τ_t(I, S(t))
- t represents the cascade sequence number, and
- τ_t(·, ·) represents the regression of the current stage.
- Each regression is composed of a number of regression trees, and the purpose of training is to obtain these regression trees.
- each regression τ_t(·, ·) predicts an increment based on the input image I and the current shape estimate S(t); this increment is added to the current shape estimate to improve the current model.
- Each level of regression makes its prediction based on the feature points.
- the training data set is: (I1, S1), ..., (In, Sn) where I is the input sample image and S is the shape feature vector composed of the feature points in the sample image.
- in this embodiment, each sample image has 76 facial feature points. A part of the feature points of each sample image is taken (for example, 70 of the 76 feature points of each sample image are randomly selected) to train the first regression tree; the residual between the predicted value of the first regression tree and the true value of those feature points (the weighted mean of the 70 feature points taken from each sample image) is used to train the second tree, and so on, until the residual between the predicted value of the Nth tree and the true value of those feature points is close to 0. All the regression trees of the ERT algorithm are thus obtained, the facial average model of the facial feature points is obtained from these regression trees, and the model file and the sample library are saved to the memory 11.
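- as a hedged sketch of this training step, the dlib library implements the ERT cascade of the Kazemi et al. paper cited below and could be used as follows; the file landmarks_76.xml is a hypothetical imglab-format annotation file listing the n sample images with their 76 marked points, and the option values are illustrative, not taken from this application.

```python
import dlib

options = dlib.shape_predictor_training_options()
options.cascade_depth = 10                 # number of cascade levels t
options.num_trees_per_cascade_level = 500  # regression trees per level
options.tree_depth = 4                     # depth of each regression tree
options.oversampling_amount = 20           # random initial shapes per sample

# Fits the regression trees level by level on the residuals and writes the
# resulting model (the facial average model) to disk.
dlib.train_shape_predictor("landmarks_76.xml", "face_model_76.dat", options)
predictor = dlib.shape_predictor("face_model_76.dat")
```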
- in this embodiment, 76 facial feature points are marked in each face sample image in the sample library, so there are likewise 76 facial feature points in the facial average model. The trained facial average model is called from the memory, the real-time facial image is aligned with the facial average model, and a feature extraction algorithm is then used to search the real-time facial image for the 76 facial feature points matching the 76 facial feature points of the facial average model.
- the 76 recognized facial feature points are still recorded as P1 to P76, and their coordinates are (x1, y1), (x2, y2), (x3, y3), ..., (x76, y76).
- the outer contour of the face has 17 feature points (P1 to P17, evenly distributed along the outer contour of the face), and the left and right eyebrows each have 5 feature points (recorded as P18 to P22 and P23 to P27 respectively, evenly distributed along the upper edge of the eyebrows);
- the nose has 9 feature points (P28 to P36);
- the left and right eyelids each have 6 feature points (labeled P37 to P42 and P43 to P48 respectively);
- the left and right eyeballs each have 4 feature points (recorded as P49 to P52 and P53 to P56 respectively);
- the lips have 20 feature points (P57 to P76);
- the upper and lower lips each have 8 feature points.
- each of the left and right lip corners has two feature points: one located on the outer contour line of the lips (for example, P74 and P76, which can be called outer lip corner feature points) and one located on the inner contour line of the lips (for example, P73 and P75, which can be called inner lip corner feature points).
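- for reference, the point layout above can be captured in a small lookup table; the following is a minimal sketch in Python, where the group names are illustrative labels and the 1-based index ranges simply restate the P1 to P76 assignments listed above.

```python
# The 76-point layout as index ranges; range() end bounds are exclusive,
# and indices are 1-based to match the P1..P76 labels in the text.
LANDMARK_GROUPS = {
    "face_contour": range(1, 18),    # P1-P17
    "left_eyebrow": range(18, 23),   # P18-P22
    "right_eyebrow": range(23, 28),  # P23-P27
    "nose": range(28, 37),           # P28-P36
    "left_eyelid": range(37, 43),    # P37-P42
    "right_eyelid": range(43, 49),   # P43-P48
    "left_eyeball": range(49, 53),   # P49-P52
    "right_eyeball": range(53, 57),  # P53-P56
    "lips": range(57, 77),           # P57-P76
}

def group_of(point_id):
    # Returns the facial part that point P<point_id> belongs to.
    return next(name for name, r in LANDMARK_GROUPS.items() if point_id in r)

assert group_of(50) == "left_eyeball"
```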
- the feature extraction algorithm is a SIFT (scale-invariant feature transform) algorithm.
- the SIFT algorithm extracts the local feature of each facial feature point from the facial average model, selects one facial feature point as a reference feature point, and searches the real-time facial image for a feature point whose local feature is the same as or similar to that of the reference feature point (for example, the difference between the local features of the two feature points is within a preset range); this continues in the same way until all the facial feature points have been found in the real-time facial image.
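- the following is a minimal sketch of this local-feature search, assuming OpenCV's SIFT implementation; descriptors are computed at the model's landmark positions and matched against the live image, and the variable names, fixed keypoint size, and max_dist threshold (the "preset range") are illustrative assumptions, not specified by this application.

```python
import cv2

def match_landmarks(model_img, model_points, live_img, max_dist=200.0):
    sift = cv2.SIFT_create()
    # Describe each model landmark as a SIFT keypoint with a fixed patch size.
    model_kps = [cv2.KeyPoint(float(x), float(y), 16) for (x, y) in model_points]
    _, model_desc = sift.compute(model_img, model_kps)
    # Detect and describe candidate keypoints in the real-time facial image.
    live_kps, live_desc = sift.detectAndCompute(live_img, None)
    if live_desc is None:
        return [None] * len(model_points)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    found = []
    for desc in model_desc:
        # Nearest live descriptor; accept only if within the preset range.
        m = matcher.match(desc.reshape(1, -1), live_desc)[0]
        found.append(live_kps[m.trainIdx].pt if m.distance < max_dist else None)
    return found  # one (x, y) location, or None, per model landmark
```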
- the feature extraction algorithm may also be a SURF (Speeded Up Robust Features) algorithm, an LBP (Local Binary Patterns) algorithm, a HOG (Histogram of Oriented Gradients) algorithm, or the like.
- the electronic device 1 of this embodiment extracts a real-time facial image from a real-time image and uses the facial average model to identify the facial feature points in the real-time facial image; the recognized feature points are more comprehensive, making face recognition and the judgment of facial micro-expressions more accurate.
- facial feature point detection program 10 may also be partitioned into one or more modules, one or more modules being stored in memory 11 and executed by processor 12 to complete the application.
- a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
- referring to FIG. 2, it is a block diagram of the facial feature point detecting program 10 of FIG. 1.
- the facial feature point detecting program 10 can be divided into: an obtaining module 110, an identifying module 120, and a calculating module 130.
- the functions or operational steps implemented by the modules 110-130 are similar to those described above and are not detailed here; by way of example:
- the acquiring module 110 is configured to acquire a real-time image captured by the camera device 13 and extract a real-time facial image from the real-time image by using a face recognition algorithm;
- the identification module 120 is configured to input the real-time facial image into a facial average model, and use the facial average model to identify t facial feature points from the real-time facial image.
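- as an illustration of how these two modules could cooperate, the following is a hedged end-to-end sketch under the same assumptions as the earlier snippets: OpenCV's Haar cascade stands in for the acquiring module's face recognition algorithm, a dlib ERT model stands in for the facial average model, and the file name face_model_76.dat is hypothetical.

```python
import cv2
import dlib

# Acquiring module 110: grab a frame and locate the face region.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Identification module 120: the pre-trained facial average model.
predictor = dlib.shape_predictor("face_model_76.dat")

ok, frame = cv2.VideoCapture(0).read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    # Predict the t facial feature points inside the detected face box.
    box = dlib.rectangle(int(x), int(y), int(x + w), int(y + h))
    shape = predictor(gray, box)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(f"identified {len(points)} facial feature points")
```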
- the present application also provides a facial feature point detecting method.
- referring to FIG. 3, it is a flowchart of a preferred embodiment of the facial feature point detecting method of the present application.
- the method can be performed by an apparatus, and the apparatus can be implemented by software and/or hardware.
- the facial feature point detecting method includes:
- Step S10: a real-time image is captured by the camera device, and a real-time facial image is extracted from the real-time image using a face recognition algorithm.
- the camera captures a real-time image
- the camera sends the real-time image to the processor.
- when the processor receives the real-time image, it first obtains the size of the image and creates a grayscale image of the same size; it converts the acquired color image into this grayscale image and allocates a memory space; equalizing the grayscale image histogram reduces the amount of grayscale image information and speeds up detection. The training library is then loaded to detect the face in the picture and return an object containing the face information; the data on the location of the face is obtained and the number of faces is recorded; finally, the face region is obtained and saved, which completes the real-time facial image extraction process.
- the face recognition algorithm for extracting the real-time facial image from the real-time image may also be a geometric feature-based method, a local feature analysis method, an eigenface method, an elastic model-based method, a neural network method, or the like.
- Step S20: input the real-time facial image into a pre-trained facial average model, and use the facial average model to identify t facial feature points from the real-time facial image.
- a sample library with n face sample images is established, and t facial feature points are marked in each face sample image, the t facial feature points including position feature points of the eyes, eyebrows, nose, mouth, and facial contour, wherein the position feature points of the eyes include position feature points of the eyeballs.
- a sample library having n face images is created, and t facial feature points are manually marked in each face image, and the position feature points of the eye include: a position feature point of the eyelid and a position feature point of the eyeball.
- a face feature recognition model is trained using the face sample images marked with the t facial feature points to obtain a facial average model of the facial feature points.
- the face feature recognition model is an ERT algorithm.
- the ERT algorithm is expressed as follows: S(t+1) = S(t) + τ_t(I, S(t))
- t represents the cascade sequence number, and
- τ_t(·, ·) represents the regression of the current stage.
- Each regression is composed of a number of regression trees, and the purpose of training is to obtain these regression trees.
- each regression τ_t(·, ·) predicts an increment based on the input image I and the current shape estimate S(t); this increment is added to the current shape estimate to improve the current model.
- Each level of regression makes its prediction based on the feature points.
- the training data set is: (I1, S1), ..., (In, Sn) where I is the input sample image and S is the shape feature vector composed of the feature points in the sample image.
- in this embodiment, each sample image has 76 facial feature points. A part of the feature points of each sample image is taken (for example, 70 of the 76 feature points of each sample image are randomly selected) to train the first regression tree; the residual between the predicted value of the first regression tree and the true value of those feature points (the weighted mean of the 70 feature points taken from each sample image) is used to train the second tree, and so on, until the residual between the predicted value of the Nth tree and the true value of those feature points is close to 0. All the regression trees of the ERT algorithm are thus obtained, the facial average model of the facial feature points is obtained from these regression trees, and the model file and the sample library are saved to the memory.
- in this embodiment, 76 facial feature points are marked in each face sample image in the sample library, so there are likewise 76 facial feature points in the facial average model. The trained facial average model is called from the memory, the real-time facial image is aligned with the facial average model, and a feature extraction algorithm is then used to search the real-time facial image for the 76 facial feature points matching the 76 facial feature points of the facial average model.
- the 76 recognized facial feature points are still recorded as P1 to P76, and their coordinates are (x1, y1), (x2, y2), (x3, y3), ..., (x76, y76).
- the outer contour of the face has 17 feature points (P1 to P17, evenly distributed along the outer contour of the face), and the left and right eyebrows each have 5 feature points (recorded as P18 to P22 and P23 to P27 respectively, evenly distributed along the upper edge of the eyebrows);
- the nose has 9 feature points (P28 to P36);
- the left and right eyelids each have 6 feature points (labeled P37 to P42 and P43 to P48 respectively);
- the left and right eyeballs each have 4 feature points (recorded as P49 to P52 and P53 to P56 respectively);
- the lips have 20 feature points (P57 to P76);
- the upper and lower lips each have 8 feature points.
- the left and right lip corners each have two feature points (labeled P73 to P74 and P75 to P76, respectively).
- of the 8 feature points of the upper lip, 5 are located on the outer contour line of the upper lip (P57 to P61) and 3 on the inner contour line of the upper lip (P62 to P64, where P63 is the central feature point on the inner side of the upper lip); of the 8 feature points of the lower lip, 5 are located on the outer contour line of the lower lip (P65 to P69) and 3 on the inner contour line of the lower lip (P70 to P72, where P71 is the central feature point on the inner side of the lower lip).
- each of the left and right lip corners has two feature points: one located on the outer contour line of the lips (for example, P74 and P76, which can be called outer lip corner feature points) and one located on the inner contour line of the lips (for example, P73 and P75, which can be called inner lip corner feature points).
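- as an illustration of how these lip points can support the micro-expression judgment mentioned above (an application of the detected points, not a step of the claimed method), the distance between the inner-lip central points could serve as a mouth-openness signal; the following is a minimal sketch using the P63, P71, P74, and P76 assignments listed above.

```python
import math

def mouth_openness(points):
    # points maps "P<k>" labels (1-based, as in the text) to (x, y) coordinates.
    x1, y1 = points["P63"]  # central point on the inner side of the upper lip
    x2, y2 = points["P71"]  # central point on the inner side of the lower lip
    # Normalize by mouth width (outer lip corner points P74 and P76) so the
    # measure is invariant to face scale.
    (xa, ya), (xb, yb) = points["P74"], points["P76"]
    width = math.hypot(xb - xa, yb - ya)
    return math.hypot(x2 - x1, y2 - y1) / width
```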
- the feature extraction algorithm is a SIFT algorithm.
- the SIFT algorithm extracts the local feature of each facial feature point from the facial average model, selects one facial feature point as a reference feature point, and searches the real-time facial image for a feature point whose local feature is the same as or similar to that of the reference feature point (for example, the difference between the local features of the two feature points is within a preset range); this continues in the same way until all the facial feature points have been found in the real-time facial image.
- the feature extraction algorithm may also be a SURF algorithm, an LBP algorithm, an HOG algorithm, or the like.
- the facial feature point detecting method proposed in this embodiment extracts a real-time facial image from a real-time image and uses the facial average model to identify the facial feature points in the real-time facial image; the recognized feature points are more comprehensive, making face recognition and the judgment of facial micro-expressions more accurate.
- the embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium includes a facial feature point detecting program, and when the facial feature point detecting program is executed by the processor, the following operations are implemented:
- Real-time facial image acquisition step: capturing a real-time image with a camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
- Feature point identification step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points from the real-time facial image.
- the training step of the facial average model includes:
- establishing a sample library with n face sample images, and marking t facial feature points in each face sample image, the t facial feature points including position feature points of the eyes, eyebrows, nose, mouth, and facial contour, wherein the position feature points of the eyes include position feature points of the eyeballs; and
- training a face feature recognition model using the face sample images marked with the t facial feature points to obtain a facial average model of the facial feature points.
- the face feature recognition model is an ERT algorithm, and the formula is as follows: S(t+1) = S(t) + τ_t(I, S(t)), where
- each regression τ_t(·, ·) is composed of a number of regression trees;
- S(t) is the shape estimate of the current model; and
- each regression τ_t(·, ·) predicts an increment based on the input current image I and S(t).
- a part of the feature points of each of the n sample images is taken to train the first regression tree; the residual between the predicted value of the first regression tree and the true value of those feature points is used to train the second tree, and so on, until the residual between the predicted value of the Nth tree and the true value of those feature points is close to 0, at which point all the regression trees of the ERT algorithm have been obtained.
- a facial average model of the facial feature points is obtained from these regression trees.
- a storage medium (such as a disk) including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present application relates to a facial feature point detection method, an electronic apparatus, and a computer-readable storage medium. The method comprises: capturing a real-time image by means of an imaging apparatus, and extracting a real-time facial image from the real-time image by means of a face recognition algorithm; and inputting the real-time facial image into a pre-trained facial average model, and identifying t facial feature points from the real-time facial image by means of the facial average model.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710709109.6A CN107679447A (zh) | 2017-08-17 | 2017-08-17 | Facial feature point detection method, apparatus and storage medium |
| CN201710709109.6 | 2017-08-17 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019033571A1 (fr) | 2019-02-21 |
Family
ID=61136036
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/108750 Ceased WO2019033571A1 (fr) | Facial feature point detection method, apparatus and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107679447A (fr) |
| WO (1) | WO2019033571A1 (fr) |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109901716A (zh) * | 2019-03-04 | 2019-06-18 | 厦门美图之家科技有限公司 | Gaze point prediction model establishment method and apparatus, and gaze point prediction method |
| CN110147727A (zh) * | 2019-04-15 | 2019-08-20 | 深圳壹账通智能科技有限公司 | Subway spot-check method based on facial feature recognition, and related device |
| CN110334643A (zh) * | 2019-06-28 | 2019-10-15 | 广东奥园奥买家电子商务有限公司 | Feature evaluation method and apparatus based on face recognition |
| CN110516626A (zh) * | 2019-08-29 | 2019-11-29 | 上海交通大学 | Facial symmetry assessment method based on face recognition technology |
| CN111191571A (zh) * | 2019-12-26 | 2020-05-22 | 新绎健康科技有限公司 | Facial partitioning method and system for traditional Chinese medicine face diagnosis based on facial feature point detection |
| CN111860047A (zh) * | 2019-04-26 | 2020-10-30 | 美澳视界(厦门)智能科技有限公司 | Fast face recognition method based on deep learning |
| CN112052730A (zh) * | 2020-07-30 | 2020-12-08 | 广州市标准化研究院 | 3D dynamic portrait recognition monitoring device and method |
| CN112102146A (zh) * | 2019-06-18 | 2020-12-18 | 北京陌陌信息技术有限公司 | Facial image processing method, apparatus, device and computer storage medium |
| CN113947787A (zh) * | 2021-07-05 | 2022-01-18 | 江苏鼎峰信息技术有限公司 | Intelligent recognition system |
| CN114698399A (zh) * | 2020-10-12 | 2022-07-01 | 鸿富锦精密工业(武汉)有限公司 | Face recognition method, apparatus and readable storage medium |
Families Citing this family (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108629278B (zh) * | 2018-03-26 | 2021-02-26 | 奥比中光科技集团股份有限公司 | System and method for realizing secure information display based on a depth camera |
| CN108597074A (zh) * | 2018-04-12 | 2018-09-28 | 广东汇泰龙科技有限公司 | Door-opening method and system based on a face registration algorithm and a face lock |
| CN108564531B (zh) * | 2018-05-08 | 2022-07-08 | 麒麟合盛网络技术股份有限公司 | Image processing method and apparatus |
| CN108763897A (zh) * | 2018-05-22 | 2018-11-06 | 平安科技(深圳)有限公司 | Identity legitimacy verification method, terminal device and medium |
| CN109117716A (zh) * | 2018-06-28 | 2019-01-01 | 众安信息技术服务有限公司 | Temperament similarity acquisition method and apparatus |
| CN109255327A (zh) * | 2018-09-07 | 2019-01-22 | 北京相貌空间科技有限公司 | Facial feature information acquisition method, and facial plastic surgery evaluation method and apparatus |
| CN109308584A (zh) * | 2018-09-27 | 2019-02-05 | 深圳市乔安科技有限公司 | Unobtrusive attendance system and method |
| CN109389069B (zh) * | 2018-09-28 | 2021-01-05 | 北京市商汤科技开发有限公司 | Gaze point determination method and apparatus, electronic device and computer storage medium |
| CN109376621A (zh) * | 2018-09-30 | 2019-02-22 | 北京七鑫易维信息技术有限公司 | Sample data generation method and apparatus, and robot |
| CN109657550B (zh) * | 2018-11-15 | 2020-11-06 | 中科院微电子研究所昆山分所 | Fatigue detection method and apparatus |
| CN109886213B (zh) * | 2019-02-25 | 2021-01-08 | 湖北亿咖通科技有限公司 | Fatigue state determination method, electronic device and computer-readable storage medium |
| CN110610131B (zh) * | 2019-08-06 | 2024-04-09 | 平安科技(深圳)有限公司 | Facial action unit detection method and apparatus, electronic device and storage medium |
| CN111839519B (zh) * | 2020-05-26 | 2021-05-18 | 合肥工业大学 | Non-contact respiratory rate monitoring method and system |
| CN114360023B (zh) * | 2022-01-07 | 2025-04-08 | 安徽大学 | Recovery early-warning method for anesthetized patients based on detection of facial micro-movement changes |
| CN119112107B (zh) * | 2024-09-09 | 2025-04-04 | 合肥工业大学 | Lightweight non-contact multi-parameter physiological detection method and system |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106650682A (zh) * | 2016-12-29 | 2017-05-10 | Tcl集团股份有限公司 | Face tracking method and apparatus |
| CN106845327A (zh) * | 2015-12-07 | 2017-06-13 | 展讯通信(天津)有限公司 | Training method for a face alignment model, and face alignment method and apparatus |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105512627B (zh) * | 2015-12-03 | 2019-04-12 | 腾讯科技(深圳)有限公司 | Key point positioning method and terminal |
| CN105426867B (zh) * | 2015-12-11 | 2019-08-30 | 小米科技有限责任公司 | Face recognition verification method and apparatus |
| CN106295566B (zh) * | 2016-08-10 | 2019-07-09 | 北京小米移动软件有限公司 | Facial expression recognition method and apparatus |
| CN106295602A (zh) * | 2016-08-18 | 2017-01-04 | 无锡天脉聚源传媒科技有限公司 | Face recognition method and apparatus |
-
2017
- 2017-08-17 CN CN201710709109.6A patent/CN107679447A/zh active Pending
- 2017-10-31 WO PCT/CN2017/108750 patent/WO2019033571A1/fr not_active Ceased
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106845327A (zh) * | 2015-12-07 | 2017-06-13 | 展讯通信(天津)有限公司 | Training method for a face alignment model, and face alignment method and apparatus |
| CN106650682A (zh) * | 2016-12-29 | 2017-05-10 | Tcl集团股份有限公司 | Face tracking method and apparatus |
Non-Patent Citations (1)
| Title |
|---|
| VAHID KAZEMI ET AL.: "One Millisecond Face Alignment with an Ensemble of Regression Trees", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 31 December 2014 (2014-12-31), pages 1867 - 1874, XP032649427 * |
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109901716A (zh) * | 2019-03-04 | 2019-06-18 | 厦门美图之家科技有限公司 | Gaze point prediction model establishment method and apparatus, and gaze point prediction method |
| CN110147727A (zh) * | 2019-04-15 | 2019-08-20 | 深圳壹账通智能科技有限公司 | Subway spot-check method based on facial feature recognition, and related device |
| CN111860047A (zh) * | 2019-04-26 | 2020-10-30 | 美澳视界(厦门)智能科技有限公司 | Fast face recognition method based on deep learning |
| CN111860047B (zh) * | 2019-04-26 | 2024-06-11 | 美澳视界(厦门)智能科技有限公司 | Fast face recognition method based on deep learning |
| CN112102146A (zh) * | 2019-06-18 | 2020-12-18 | 北京陌陌信息技术有限公司 | Facial image processing method, apparatus, device and computer storage medium |
| CN112102146B (zh) * | 2019-06-18 | 2023-11-03 | 北京陌陌信息技术有限公司 | Facial image processing method, apparatus, device and computer storage medium |
| CN110334643B (zh) * | 2019-06-28 | 2023-05-23 | 知鱼智联科技股份有限公司 | Feature evaluation method and apparatus based on face recognition |
| CN110334643A (zh) * | 2019-06-28 | 2019-10-15 | 广东奥园奥买家电子商务有限公司 | Feature evaluation method and apparatus based on face recognition |
| CN110516626A (zh) * | 2019-08-29 | 2019-11-29 | 上海交通大学 | Facial symmetry assessment method based on face recognition technology |
| CN111191571A (zh) * | 2019-12-26 | 2020-05-22 | 新绎健康科技有限公司 | Facial partitioning method and system for traditional Chinese medicine face diagnosis based on facial feature point detection |
| CN112052730A (zh) * | 2020-07-30 | 2020-12-08 | 广州市标准化研究院 | 3D dynamic portrait recognition monitoring device and method |
| CN112052730B (zh) * | 2020-07-30 | 2024-03-29 | 广州市标准化研究院 | 3D dynamic portrait recognition monitoring device and method |
| CN114698399A (zh) * | 2020-10-12 | 2022-07-01 | 鸿富锦精密工业(武汉)有限公司 | Face recognition method, apparatus and readable storage medium |
| CN113947787A (zh) * | 2021-07-05 | 2022-01-18 | 江苏鼎峰信息技术有限公司 | Intelligent recognition system |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107679447A (zh) | 2018-02-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019033571A1 (fr) | Facial feature point detection method, apparatus and storage medium | |
| US10445562B2 (en) | AU feature recognition method and device, and storage medium | |
| CN107633204B (zh) | Face occlusion detection method, apparatus and storage medium | |
| WO2019033569A1 (fr) | Eyeball movement analysis method, device and storage medium | |
| WO2019033568A1 (fr) | Lip movement capture method, apparatus and storage medium | |
| WO2019109526A1 (fr) | Method and device for age recognition of a face image, and storage medium | |
| Ahmed et al. | Vision based hand gesture recognition using dynamic time warping for Indian sign language | |
| WO2019095571A1 (fr) | Human figure emotion analysis method, apparatus, and storage medium | |
| WO2019033570A1 (fr) | Lip movement analysis method, apparatus and storage medium | |
| CN107633203A (zh) | Facial emotion recognition method, apparatus and storage medium | |
| WO2019061658A1 (fr) | Eyeglasses positioning method and device, and storage medium | |
| WO2019033567A1 (fr) | Eyeball movement capture method, device and storage medium | |
| WO2021012494A1 (fr) | Deep learning-based face recognition method and apparatus, and computer-readable storage medium | |
| CN111989689A (zh) | Method for identifying a target within an image and mobile device for executing the method | |
| CN107958230A (zh) | Facial expression recognition method and apparatus | |
| CN110210319A (zh) | Computer device, tongue-image constitution recognition apparatus and storage medium | |
| Lahiani et al. | Hand pose estimation system based on Viola-Jones algorithm for android devices | |
| Gupta et al. | HaarCascade and LBPH algorithms in face recognition analysis | |
| CN113469138A (zh) | Object detection method and apparatus, storage medium and electronic device | |
| WO2015131571A1 (fr) | Image sequencing method and terminal | |
| HK1246922A1 (zh) | Facial feature point detection method, apparatus and storage medium | |
| HK1246921B (zh) | Face occlusion detection method, apparatus and storage medium | |
| HK1246925B (zh) | Lip movement analysis method, apparatus and storage medium | |
| HK1246926B (zh) | Eyeball movement analysis method, apparatus and storage medium | |
| HK1246924A (en) | Identifying method, device and storage medium for au features |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17921932 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as the address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 24.09.2020) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17921932 Country of ref document: EP Kind code of ref document: A1 |