WO2013042992A1 - Method and system for recognizing facial expressions - Google Patents
Method and system for recognizing facial expressions
- Publication number
- WO2013042992A1 (PCT/KR2012/007602)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- aam
- appearance
- facial
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/755—Deformable models or variational models, e.g. snakes or active contours
- G06V10/7557—Deformable models or variational models, e.g. snakes or active contours based on appearance, e.g. active appearance models [AAM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Definitions
- the present invention relates to a method and a system for recognizing facial expressions, and more particularly, to a method and system in which a DoG (Difference of Gaussian) kernel is applied to the image applied to an AAM (Active Appearance Model) in order to enhance the visibility of detailed areas and reduce noise.
- DoG: Difference of Gaussian
- AAM: Active Appearance Model
- because edge information is mostly binarized, it is difficult to produce stable results when lighting is uneven, owing to the loss of edge information.
- Principal Component Analysis (PCA) has the advantage of easy real-time implementation, but it is sensitive to region separation and to the background image; the neural network approach can operate stably under limited conditions, but its disadvantage is that adjusting the network variables when learning new inputs is difficult and takes a lot of time.
- a representative model of the continuous facial expression recognition method is AAM (Active Appearance Model).
- AAM applies principal component analysis (PCA) to face model vectors and face surface texture vectors in order to warp sample face models, created using statistics from a variety of human faces, so as to normalize the sample face data. The squared error between this normalized data and the face data of the captured 2D image is minimized, and this data is used to find facial feature points.
- PCA: principal component analysis
- AAM has the advantage of fast computation and tracking.
- AAM has the disadvantage that its performance degrades considerably in a mobile environment where lighting changes are severe. The reason is that the error between the model and the input image is used to calculate the optimized parameters for updating the AAM; if the input image and the training set are not similar, for reasons such as lighting and pose changes, the error becomes large and it is difficult to calculate the parameters.
- the present invention has been made to solve the above-described problems by applying a DoG (Difference of Gaussian) kernel to the image applied to the AAM (Active Appearance Model), thereby improving the visibility of detailed areas and reducing the noise caused by lighting.
- a facial expression recognition method comprising: convolving an input image using a DoG kernel; extracting a face region from the convolved image; extracting an appearance parameter and a shape parameter from the face region; transforming a previously stored statistical face model (AAM; Active Appearance Model) based on the extracted facial feature elements and synthesizing it with the face region; and recognizing a facial expression by updating the appearance and shape parameters until the synthesized face image converges, within a preset mapping value, with the image forming the input face region.
- two Gaussian kernels having different standard deviations are each convolved with the image to make blurred images, and then the difference image between the two blurred images is calculated.
- the method may further include classifying facial expressions by processing the appearance parameters and the shape parameters with an Enhanced Fisher Model (EFM) classification method.
- EFM: Enhanced Fisher Model
- the present invention for achieving the above object is a facial expression recognition system comprising: a DoG kernel that convolves the input image; and an AAM modeling unit that extracts a face region from the convolved image, extracts an appearance parameter and a shape parameter from the face region, transforms a previously stored statistical face model (AAM; Active Appearance Model) based on the extracted facial feature elements, synthesizes it with the face region, and then updates the appearance and shape parameters until the synthesized face image converges, within a preset mapping value, with the image forming the input face region, thereby recognizing facial expressions.
- the DoG kernel may generate blurred images by convolving the image with two Gaussian kernels having different standard deviations, then calculate the difference image between the two images and provide it to the AAM modeling unit.
- the apparatus may further include an EFM classifier for classifying facial expressions by processing the appearance parameters and shape parameters with an Enhanced Fisher Model (EFM) classification method.
- the method and system for recognizing facial expressions of the present invention improves visibility of detailed regions and reduces noise by applying a DoG (Difference of Gaussian) kernel to an image applied to an AAM (Active Appearance Model).
- by applying the DoG (Difference of Gaussian) kernel to the image applied to the AAM (Active Appearance Model), the facial expression recognition method and system of the present invention remove unnecessary information from the image through object feature extraction while maintaining important information that would otherwise be removed by lighting.
- FIG. 1 is a conceptual diagram of a facial expression recognition method according to an embodiment of the present invention
- FIG. 2 is a control flowchart of a facial expression recognition system according to an embodiment of the present invention
- FIGS. 3 and 4 are comparative experiment results of the facial expression recognition of the present invention and the prior art under changing illumination
- FIGS. 5 and 6 are comparative experiment results of the facial expression recognition of the present invention and the prior art under changing facial expression
- FIG. 7 is a control block diagram of a facial expression recognition system according to an embodiment of the present invention.
- FIG. 8 is a control flowchart of a facial expression recognition system according to another embodiment of the present invention.
- FIG. 9 is a state diagram used in the facial expression recognition system according to an embodiment of the present invention.
- 514: DoG kernel 516: AAM modeling unit
- 518: EFM classifier 520: display unit
- FIG. 1 is a conceptual diagram of a facial expression recognition method according to an embodiment of the present invention.
- the present invention convolves an input image A with a Difference of Gaussian (DoG) kernel to generate a convolved image B.
- referring to FIG. 1, an AAM (Active Appearance Model) is generated by fitting the AAM to the convolved image (B), and then a training set is applied to produce the output image (D) in which a facial expression is recognized.
- the DoG kernel is an image processing algorithm that removes noise from grayscale images and detects features.
- the DoG kernel is composed of two Gaussian kernels with different standard deviations; each is convolved with the image to produce a blurred image, and the difference image between the two blurred images is then calculated.
- such a DoG kernel can be defined as in Equation 1 below.
- [Equation 1] DoG(x, y) = L(x, y, kσ) − L(x, y, σ), where L(x, y, kσ) and L(x, y, σ) are the images smoothed with Gaussian kernels having different standard deviations kσ and σ.
- the DoG kernel is an algorithm aimed at detecting image features and is useful for enhancing the visibility of edges and other details in digital images. Since the DoG kernel reduces noise through Gaussian filtering, it not only removes unnecessary information from the image but also maintains important information through object feature extraction. In particular, when the DoG kernel is applied to face images, local features such as the eyes, nose, and mouth may be strengthened, while appearance information containing unnecessary information, such as the cheeks, may be weakened.
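As a rough illustration, the DoG filtering described above can be sketched in a few lines of numpy. This is a minimal sketch, not the patent's implementation; the function names, the choice of σ, and the ratio k = 1.6 between the two standard deviations are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Build a normalized 1-D Gaussian kernel."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    # pad with edge values so the output keeps the input shape
    padded = np.pad(img, pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def dog_filter(img, sigma=1.0, k=1.6):
    """Equation 1: DoG(x, y) = L(x, y, k*sigma) - L(x, y, sigma)."""
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)
```

On a face image this difference of two blurred versions suppresses slowly varying regions (such as cheeks and lighting gradients) and keeps band-pass detail such as the eyes, nose, and mouth contours.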
- in the image B convolved with the DoG kernel, the information-rich local shapes of the face image, for example the shapes of feature parts such as the eyes, nose, and mouth, are enhanced, so that the face shape is readily recognized.
- the facial expression may be recognized by performing AAM fitting on the convolved image B.
- the asterisk (*) denotes the image to which the DoG kernel is applied, that is, the convolved image (B).
- the fitting algorithm used in AAM extracts facial feature elements, transforms the statistical face model based on the extracted facial feature elements, and models a synthetic face image matching the face region. The appearance and shape parameters are then repeatedly updated until the synthesized face image converges, within a preset mapping value, with the input face region, thereby reducing the error between the model and the image.
- the input image is aligned on the coordinate frame, and an error image between the current model instance (C) and the AAM-fitted image is obtained using the convolved training set, thereby reducing and optimizing the error.
- the fitting algorithm continues to iterate until the error satisfies the aforementioned threshold or a specified number of iterations is reached, thereby recognizing the facial expression with the optimized error.
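The iterative fitting loop described above can be sketched schematically. The toy below uses a plain linear appearance model in place of a real AAM; the function names, the learning rate, the threshold, and the parameter-update rule are all illustrative assumptions, chosen only to show the structure of "synthesize, compute error image, update parameters, repeat until convergence or an iteration limit".

```python
import numpy as np

def fit_model(input_vec, basis, mean, threshold=1e-3, max_iters=50, lr=0.5):
    """Schematic AAM-style fitting loop on a toy linear model (not a full AAM).

    input_vec : flattened input image, shape (d,)
    basis     : model basis vectors as columns, shape (d, p)
    mean      : mean image, shape (d,)
    """
    params = np.zeros(basis.shape[1])
    synthesized = mean + basis @ params
    for _ in range(max_iters):
        synthesized = mean + basis @ params          # current model instance
        error_image = input_vec - synthesized        # error between model and input
        if np.linalg.norm(error_image) < threshold:  # converged within threshold
            break
        # gradient-style parameter update driven by the error image
        params += lr * basis.T @ error_image
    return params, synthesized
```

A real AAM updates shape and appearance parameters jointly via a precomputed Jacobian rather than this simple gradient step, but the loop structure (error image, parameter update, threshold or iteration-count stop) is the same.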
- by applying the DoG kernel to the AAM input, the features of the information-rich local shapes in the face image, for example the shapes of the eyes, nose, and mouth, are enhanced, so that the performance of the AAM fitting algorithm can be improved.
- FIG. 2 is a control flowchart of a facial expression recognition system according to an embodiment of the present invention.
- the facial expression recognition system trains an Enhanced Fisher Model (EFM) using the AAM model for facial expression recognition and the images used to generate the AAM model (S110).
- in the AAM model, where the facial expression data is stored, facial expression image sequences of males and females of various ethnic backgrounds are stored; for example, facial expression images such as joy, surprise, anger, disgust, fear, and sadness may be stored.
- the EFM model training can be applied to the images used to generate the AAM model.
- EFM was introduced to improve the performance of the standard Fisher linear discriminant (FLD) based approach; EFM first applies Principal Component Analysis (PCA) for dimensionality reduction and then performs discrimination in the reduced PCA subspace.
- in the facial expression recognition system, the EFM classifies facial expressions by discriminating the features between facial expressions.
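A minimal sketch of the EFM idea described above, PCA for dimensionality reduction followed by a Fisher linear discriminant in the reduced subspace, might look as follows. The function names, component counts, and nearest-class-mean decision rule are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the data onto the top principal components (dimensionality reduction)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T                      # (d, n_components) projection
    return Xc @ W, mean, W

def fisher_directions(X, y, n_dirs=1):
    """Fisher linear discriminant directions in the (reduced) feature space."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                        # within-class scatter
    Sb = np.zeros((d, d))                        # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # directions maximizing between-class vs. within-class scatter
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_dirs]]
```

After projecting the AAM appearance and shape parameters through PCA and the Fisher directions, an expression label can be assigned, for instance, by the nearest class mean in the discriminant space.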
- the image including the face to be recognized is input to the facial expression recognition system (S112).
- the face is recognized from the input image (S114).
- the DoG kernel is applied for face recognition: by convolving the input image with the DoG kernel, the visibility of edges and other details included in the image is increased so that the face included in the image can be recognized accurately.
- AAM image fitting is started on the recognized face image (S116).
- parameters of appearance and shape of the face are extracted from the image (S118 and S120).
- the extracted parameters are classified by the EFM classifier (S122).
- the training set reflecting the expression and the shape is convolved with the input image (S126).
- the appearance and shape parameters are updated repeatedly until a certain threshold is met, thereby reducing the error between the model and the image. For example, once the current shape parameters are measured, the input image is fitted on the model coordinate frame and an error image between the current model instance and the AAM-fitted image is obtained to reduce and optimize the error. The fitting algorithm continues to iterate until the error meets the aforementioned threshold or a specified number of iterations is reached.
- FIGS. 3 and 4 are comparison graphs of the facial expression recognition results of the present invention and the prior art under changing illumination.
- FIG. 3 shows the AAM fitting results under illumination change: when only AAM is applied (1-1), when AAM is applied to the image convolved with the DoG kernel (1-2), and when AAM is applied to the image processed with the Canny edge detector (1-3).
- FIG. 4 is a graph showing the accuracy of facial expression recognition results according to the prior art and facial expression recognition results according to the present invention when illumination changes.
- the graph shows the accuracy of the facial expression recognition results for images captured under changing lighting: when only AAM is applied, when AAM is applied to the image convolved with the DoG kernel, and when AAM is applied to the image processed with the Canny edge detector.
- the standard deviation and the mean error between the fitted shape and the ground truth were calculated using the RMS error.
- the number of images corresponding to each mean error was cumulatively counted.
- the AAM to which the DoG kernel or the Canny edge detector is applied accumulates more images at smaller error rates.
- the AAM with the DoG kernel accumulates the most images in the sections where the mean error is small.
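The evaluation described above, per-image RMS error between the fitted shape and the ground truth, followed by a cumulative count of images under each error level, can be sketched as follows. The array shapes and function names are illustrative assumptions.

```python
import numpy as np

def rms_errors(shapes, ground_truths):
    """Per-image RMS point-to-point error between fitted shapes and ground truth.

    shapes, ground_truths : arrays of shape (n_images, n_points, 2)
    returns               : array of shape (n_images,)
    """
    diffs = shapes - ground_truths
    return np.sqrt((diffs ** 2).mean(axis=(1, 2)))

def cumulative_curve(errors, thresholds):
    """Number of images whose RMS error is at or below each threshold."""
    errors = np.sort(errors)
    return np.searchsorted(errors, thresholds, side="right")
```

Plotting `cumulative_curve` against the thresholds reproduces the kind of graph in FIGS. 4 and 6: a method that accumulates more images at small error values fits more faces accurately.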
- FIGS. 5 and 6 are comparison graphs of the facial expression recognition results of the present invention and the prior art under changing facial expression.
- FIG. 5 shows the AAM fitting results under facial expression change: when only AAM is applied (2-1), when AAM is applied to the image convolved with the DoG kernel (2-2), and when AAM is applied to the image processed with the Canny edge detector (2-3).
- FIG. 6 is a graph showing the accuracy of the facial expression recognition result according to the prior art and the facial expression recognition result according to the present invention under a facial expression change.
- FIG. 7 is a control block diagram of a facial expression recognition system according to an embodiment of the present invention, illustrating a case where the facial expression recognition system of the present invention is applied to a portable terminal such as a smartphone.
- the facial expression recognition system includes a camera 510, an image preprocessor 512, a DoG kernel 514, an AAM modeling unit 516, an EFM classifier 518, a display unit 520, and a database 522.
- the camera 510 generates a 2D image by capturing an image of an object, for example, the face of a specific person.
- the image preprocessor 512 converts the 2D image provided from the camera 510 into an image for face recognition.
- the DoG kernel 514 reduces the noise of the input image through Gaussian filtering to enhance the visibility of edges and other details and to attenuate appearance information that contains repeated, redundant information.
- in the image convolved with the DoG kernel 514, the information-rich local shapes of the face image, for example the shapes of feature parts such as the eyes, nose, and mouth, are enhanced so that the face shape is recognized.
- the AAM modeling unit 516 extracts the appearance and shape parameters from the image convolved with the DoG kernel 514 and transforms a statistical face model based on the extracted facial feature elements to model a synthetic face image matching the face region.
- the AAM modeling unit 516 repeatedly updates the parameters of appearance and shape until the synthetic face image converges within the preset mapping value with the face image to reduce the error between the model and the image.
- the EFM classifier 518 recognizes an expression of the face image based on parameters of appearance and shape.
- the facial expression recognition result of the EFM classifier 518 is provided to the AAM modeling unit 516 and used to determine the appearance and shape parameters for modeling the composite face image.
- the database 522 stores a facial expression image sequence for performing AAM.
- the EFM model generated by training the facial expression image sequence is stored.
- the display unit 520 displays the face recognition result determined by the AAM modeling unit 516.
- FIG. 8 is a control flowchart of a facial expression recognition system according to another embodiment of the present invention.
- the user may photograph a face with the camera 510 and input a face image (S612).
- the input face image is converted using the DoG kernel 514 (S614). Accordingly, the facial features are recognized by enhancing local features included in the facial image (S616).
- the AAM modeling unit 516 performs AAM fitting on the image converted by the DoG kernel 514 to detect the appearance and shape parameters for modeling a face image (S618).
- the EFM classifier 518 classifies facial expressions based on parameters of appearance and shape in the AAM modeling unit 516 (S620).
- the AAM modeling unit 516 repeatedly updates the appearance and shape parameters by referring to the classification result of the EFM classifier 518, thereby recognizing the expression of the input face image, and displays the recognition result (S622).
- FIG. 9 is a diagram illustrating a usage state of the facial expression recognition system according to an exemplary embodiment of the present invention, and illustrates a usage state when a service is provided by applying the facial expression recognition system according to an exemplary embodiment of the present invention to a portable terminal.
- the face image of the user photographed by the camera 510 may be input to a facial expression recognition system mounted in the portable terminal so that the expression of the current user may be recognized in real time.
- the recognized user's expression is reflected in real time on the character displayed on the portable terminal.
- the user's current facial expression and the facial expression of the character displayed on the portable terminal are linked to each other, so that emotions such as (a) no expression, (b) joy, (c) surprise, and (d) sadness can be displayed through the character in real time.
- the facial expression recognition result of the facial expression recognition system of the present invention is applicable to any graphic image capable of displaying emotions, such as the user's avatar, an image, an animal, or an animation character.
- various applications are possible, such as outputting a facial expression recognition result as text, changing the graphic color of a display screen, or a theme.
- the DoG kernel 514 is used to remove the noise of the image and maintain only its features, thereby removing the information not needed by the AAM and maintaining the information needed by the fitting algorithm. Accordingly, excellent facial expression recognition performance can be obtained even for images with changing illumination and facial expression, and excellent performance can be guaranteed even for the low-quality images obtained from a portable terminal such as a smartphone.
- by enhancing the visibility of detailed areas and reducing noise, local areas such as the eyes, nose, and mouth are strengthened, while appearance information containing repeated, unnecessary information, such as the cheeks, is weakened. The facial expression recognition method and system thereby not only remove unnecessary information from the image but also maintain, through object feature extraction, important information that would otherwise be removed by lighting.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to a method and a system for recognizing facial expressions. In particular, the method comprises: a step of convolving an input image using a DoG kernel; a step of extracting a facial region from the convolved image; a step of extracting an appearance parameter and a shape parameter from the facial region; a step of transforming a prestored Active Appearance Model (AAM) on the basis of the extracted facial feature elements and synthesizing the transformed model with the facial region; and a step of updating the appearance parameter and the shape parameter until the synthesized facial image converges, within a preset matching value, with the image forming the input facial region, so as to recognize a facial expression. Thus, the visibility of the detailed regions of the image is increased and noise is reduced, such that the features of local regions, such as the eyes, nose, and mouth, are enhanced, and appearance information that contains repeated, unnecessary information, such as that of a cheek, is suppressed. Unnecessary information can thereby be removed from the image, and important information, which could otherwise be removed by lighting, can be preserved through object feature extraction.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2011-0096332 | 2011-09-23 | ||
| KR1020110096332A KR101198322B1 (ko) | 2011-09-23 | 2011-09-23 | 얼굴 표정 인식 방법 및 시스템 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2013042992A1 true WO2013042992A1 (fr) | 2013-03-28 |
Family
ID=47564084
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2012/007602 Ceased WO2013042992A1 (fr) | 2011-09-23 | 2012-09-21 | Procédé et système de reconnaissance d'expressions faciales |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR101198322B1 (fr) |
| WO (1) | WO2013042992A1 (fr) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015115681A1 (fr) * | 2014-01-28 | 2015-08-06 | 영남대학교 산학협력단 | Procédé et appareil de reconnaissance d'expression à l'aide d'un dictionnaire d'expressions-gestes |
| CN105404878A (zh) * | 2015-12-11 | 2016-03-16 | 广东欧珀移动通信有限公司 | 一种照片分类方法和装置 |
| WO2017115937A1 (fr) * | 2015-12-30 | 2017-07-06 | 단국대학교 산학협력단 | Dispositif et procédé de synthèse d'une expression faciale à l'aide d'une carte d'interpolation de valeurs pondérées |
| CN108764207A (zh) * | 2018-06-07 | 2018-11-06 | 厦门大学 | 一种基于多任务卷积神经网络的人脸表情识别方法 |
| CN110264544A (zh) * | 2019-05-30 | 2019-09-20 | 腾讯科技(深圳)有限公司 | 图片处理方法和装置、存储介质及电子装置 |
| CN112699797A (zh) * | 2020-12-30 | 2021-04-23 | 常州码库数据科技有限公司 | 基于联合特征对关系网络的静态人脸表情识别方法及系统 |
Families Citing this family (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101510798B1 (ko) | 2008-12-10 | 2015-04-10 | 광주과학기술원 | 휴대용 얼굴 표정 연습 장치와 그 방법. |
| KR101382172B1 (ko) | 2013-03-12 | 2014-04-10 | 건아정보기술 주식회사 | 얼굴 영상의 계층적 특징 분류 시스템 및 그 방법 |
| KR101436730B1 (ko) | 2013-03-26 | 2014-09-02 | 가톨릭대학교 산학협력단 | 능동적 외양 모델을 이용한 비학습 얼굴의 3차원 얼굴 피팅 방법 |
| US9251405B2 (en) * | 2013-06-20 | 2016-02-02 | Elwha Llc | Systems and methods for enhancement of facial expressions |
| KR101663239B1 (ko) | 2014-11-18 | 2016-10-06 | 상명대학교서울산학협력단 | 인체 미동에 의한 hrc 기반 사회 관계성 측정 방법 및 시스템 |
| US10134177B2 (en) | 2015-01-15 | 2018-11-20 | Samsung Electronics Co., Ltd. | Method and apparatus for adjusting face pose |
| KR101816412B1 (ko) * | 2016-06-14 | 2018-01-08 | 현대자동차주식회사 | 가림 랜드마크를 고려한 얼굴 랜드마크 검출 시스템 및 방법 |
| US10198626B2 (en) | 2016-10-19 | 2019-02-05 | Snap Inc. | Neural networks for facial modeling |
| US10860841B2 (en) | 2016-12-29 | 2020-12-08 | Samsung Electronics Co., Ltd. | Facial expression image processing method and apparatus |
| KR101950721B1 (ko) * | 2017-12-29 | 2019-02-21 | 한남대학교 산학협력단 | 다중 인공지능 안전스피커 |
| CN109948541A (zh) * | 2019-03-19 | 2019-06-28 | 西京学院 | 一种面部情感识别方法与系统 |
| CN112307942B (zh) * | 2020-10-29 | 2024-06-28 | 广东富利盛仿生机器人股份有限公司 | 一种面部表情量化表示方法、系统及介质 |
| KR102592601B1 (ko) * | 2021-11-15 | 2023-10-23 | 한국전자기술연구원 | 합성 표정을 이용한 얼굴인식 방법 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7587099B2 (en) * | 2006-01-27 | 2009-09-08 | Microsoft Corporation | Region-based image denoising |
| KR100950776B1 (ko) * | 2009-10-16 | 2010-04-02 | 주식회사 쓰리디누리 | 얼굴 인식 방법 |
| KR20100062207A (ko) * | 2008-12-01 | 2010-06-10 | 삼성전자주식회사 | 화상통화 중 애니메이션 효과 제공 방법 및 장치 |
| KR20100081874A (ko) * | 2009-01-07 | 2010-07-15 | 포항공과대학교 산학협력단 | 사용자 맞춤형 표정 인식 방법 및 장치 |
| KR20100116178A (ko) * | 2008-01-29 | 2010-10-29 | 테쎄라 테크놀로지스 아일랜드 리미티드 | 디지털 이미지들에서 얼굴 표정들을 검출 |
-
2011
- 2011-09-23 KR KR1020110096332A patent/KR101198322B1/ko not_active Expired - Fee Related
-
2012
- 2012-09-21 WO PCT/KR2012/007602 patent/WO2013042992A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7587099B2 (en) * | 2006-01-27 | 2009-09-08 | Microsoft Corporation | Region-based image denoising |
| KR20100116178A (ko) * | 2008-01-29 | 2010-10-29 | 테쎄라 테크놀로지스 아일랜드 리미티드 | 디지털 이미지들에서 얼굴 표정들을 검출 |
| KR20100062207A (ko) * | 2008-12-01 | 2010-06-10 | 삼성전자주식회사 | 화상통화 중 애니메이션 효과 제공 방법 및 장치 |
| KR20100081874A (ko) * | 2009-01-07 | 2010-07-15 | 포항공과대학교 산학협력단 | 사용자 맞춤형 표정 인식 방법 및 장치 |
| KR100950776B1 (ko) * | 2009-10-16 | 2010-04-02 | 주식회사 쓰리디누리 | 얼굴 인식 방법 |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015115681A1 (fr) * | 2014-01-28 | 2015-08-06 | 영남대학교 산학협력단 | Procédé et appareil de reconnaissance d'expression à l'aide d'un dictionnaire d'expressions-gestes |
| KR101549645B1 (ko) | 2014-01-28 | 2015-09-03 | 영남대학교 산학협력단 | 표정 동작사전을 이용한 표정인식 방법 및 장치 |
| US10068131B2 (en) | 2014-01-28 | 2018-09-04 | Industry-Academic Cooperation Foundation, Yeungnam University | Method and apparatus for recognising expression using expression-gesture dictionary |
| CN105404878A (zh) * | 2015-12-11 | 2016-03-16 | 广东欧珀移动通信有限公司 | 一种照片分类方法和装置 |
| WO2017115937A1 (fr) * | 2015-12-30 | 2017-07-06 | 단국대학교 산학협력단 | Dispositif et procédé de synthèse d'une expression faciale à l'aide d'une carte d'interpolation de valeurs pondérées |
| CN108764207A (zh) * | 2018-06-07 | 2018-11-06 | 厦门大学 | 一种基于多任务卷积神经网络的人脸表情识别方法 |
| CN108764207B (zh) * | 2018-06-07 | 2021-10-19 | 厦门大学 | 一种基于多任务卷积神经网络的人脸表情识别方法 |
| CN110264544A (zh) * | 2019-05-30 | 2019-09-20 | 腾讯科技(深圳)有限公司 | 图片处理方法和装置、存储介质及电子装置 |
| CN110264544B (zh) * | 2019-05-30 | 2023-08-25 | 腾讯科技(深圳)有限公司 | 图片处理方法和装置、存储介质及电子装置 |
| CN112699797A (zh) * | 2020-12-30 | 2021-04-23 | 常州码库数据科技有限公司 | 基于联合特征对关系网络的静态人脸表情识别方法及系统 |
| CN112699797B (zh) * | 2020-12-30 | 2024-03-26 | 常州码库数据科技有限公司 | 基于联合特征对关系网络的静态人脸表情识别方法及系统 |
Also Published As
| Publication number | Publication date |
|---|---|
| KR101198322B1 (ko) | 2012-11-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2013042992A1 (fr) | Procédé et système de reconnaissance d'expressions faciales | |
| CN108764071B (zh) | 一种基于红外和可见光图像的真实人脸检测方法及装置 | |
| JP5675229B2 (ja) | 画像処理装置及び画像処理方法 | |
| CN102332095B (zh) | 一种人脸运动跟踪方法和系统以及一种增强现实方法 | |
| CN109598242B (zh) | 一种活体检测方法 | |
| CN111783629B (zh) | 一种面向对抗样本攻击的人脸活体检测方法及装置 | |
| CN106682601B (zh) | 一种基于多维信息特征融合的驾驶员违规通话检测方法 | |
| KR101612605B1 (ko) | 얼굴 특징점 추출 방법 및 이를 수행하는 장치 | |
| CN109583304A (zh) | 一种基于结构光模组的快速3d人脸点云生成方法及装置 | |
| CN108961675A (zh) | 基于卷积神经网络的跌倒检测方法 | |
| KR20170006355A (ko) | 모션벡터 및 특징벡터 기반 위조 얼굴 검출 방법 및 장치 | |
| WO2009131539A1 (fr) | Procédé et système permettant de détecter et suivre des mains dans une image | |
| KR20120069922A (ko) | 얼굴 인식 장치 및 그 방법 | |
| Rao et al. | Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera. | |
| CN111967319B (zh) | 基于红外和可见光的活体检测方法、装置、设备和存储介质 | |
| CN106297755A (zh) | 一种用于乐谱图像识别的电子设备及识别方法 | |
| Alksasbeh et al. | Smart hand gestures recognition using K-NN based algorithm for video annotation purposes | |
| CN110796101A (zh) | 一种嵌入式平台的人脸识别方法及系统 | |
| CN108229493A (zh) | 对象验证方法、装置和电子设备 | |
| CN112861661A (zh) | 图像处理方法及装置、电子设备及计算机可读存储介质 | |
| CN115205943A (zh) | 图像处理方法、装置、电子设备及存储介质 | |
| Nikam et al. | Bilingual sign recognition using image based hand gesture technique for hearing and speech impaired people | |
| KR101344851B1 (ko) | 영상처리장치 및 영상처리방법 | |
| CN111274851A (zh) | 一种活体检测方法及装置 | |
| CN111597926A (zh) | 图像处理方法及装置、电子设备及存储介质 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12833630 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 12833630 Country of ref document: EP Kind code of ref document: A1 |