WO2002031772A8 - Method for tracking motion of a face - Google Patents
- Publication number
- WO2002031772A8
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- motion
- person
- represented
- global
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
A method for tracking the motion of a person's face for the purpose of animating a 3-D face model of the same or another person is disclosed. The 3-D face model carries both the geometry (shape) and the texture (color) characteristics of the person's face. The shape of the face model is represented via a 3-D triangular mesh (geometry mesh), while the texture of the face model is represented via a 2-D composite image (texture image). Both the global motion and the local motion of the person's face are tracked. Global motion of the face involves the rotation and the translation of the face in 3-D. Local motion of the face involves the 3-D motion of the lips, eyebrows, etc., caused by speech and facial expressions. The 2-D positions of salient features of the person's face and/or markers placed on the person's face are automatically tracked in a time-sequence of 2-D images of the face. Global and local motion of the face are separately calculated using the tracked 2-D positions of the salient features or markers. Global motion is represented in a 2-D image by rotation and position vectors while local motion is represented by an action vector that specifies the amount of facial actions such as smiling-mouth, raised-eyebrows, etc.
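The abstract decomposes the tracked 2-D feature motion into a global rigid pose plus a local action vector. The Python sketch below illustrates one plausible way to perform that decomposition; it is not the patent's specified algorithm. It assumes a calibrated pinhole camera, uses OpenCV's `solvePnP` for the rigid pose, and posits a hypothetical linear action-unit basis (`action_basis`) giving the 3-D displacement of each tracked point per unit of each facial action (smiling-mouth, raised-eyebrows, etc.).

```python
# Minimal sketch of the global/local decomposition described in the
# abstract. NOT the patent's specified algorithm: assumes a calibrated
# pinhole camera and a hypothetical linear "action unit" basis.
import numpy as np
import cv2


def track_frame(pts_2d, model_pts_3d, action_basis, K, dist=None):
    """Estimate a global pose and a local action vector for one frame.

    pts_2d       : (N, 2) tracked marker/feature positions in the image.
    model_pts_3d : (N, 3) neutral 3-D positions of those points on the mesh.
    action_basis : (A, N, 3) hypothetical 3-D displacement of each point
                   per unit of each facial action (smile, raised brows, ...).
    K            : (3, 3) float64 camera intrinsic matrix.
    Returns (rvec, tvec, action_vector).
    """
    obj = model_pts_3d.astype(np.float64)
    img = pts_2d.astype(np.float64)

    # Global motion: 3-D rotation (rvec) and translation (tvec) of the head.
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("rigid pose estimation failed")

    # Residual image motion left over after removing the rigid pose.
    proj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
    residual = (img - proj.reshape(-1, 2)).ravel()          # shape (2N,)

    # Local motion: solve residual ~= J @ action_vector by least squares,
    # where column a of J is the image-space effect of unit action a.
    n_actions = action_basis.shape[0]
    J = np.zeros((2 * len(img), n_actions))
    for a in range(n_actions):
        p, _ = cv2.projectPoints(obj + action_basis[a], rvec, tvec, K, dist)
        J[:, a] = (p.reshape(-1, 2) - proj.reshape(-1, 2)).ravel()
    action_vector, *_ = np.linalg.lstsq(J, residual, rcond=None)
    return rvec, tvec, action_vector
```

In practice, the per-frame 2-D positions fed to `track_frame` could come from any feature tracker, for example pyramidal Lucas-Kanade optical flow (`cv2.calcOpticalFlowPyrLK`) run on the marker or salient-feature locations from the previous frame.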
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US09/689,595 US6294157B1 (en) | 1999-10-14 | 2000-10-13 | Composition containing sapogenin |
| US09/689,595 | 2000-10-13 | | |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| WO2002031772A2 (en) | 2002-04-18 |
| WO2002031772A8 (en) | 2002-07-04 |
| WO2002031772A3 (en) | 2002-10-31 |
Family
ID=24769119
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2001/002736 WO2002031772A2 (en) (Ceased) | Method for tracking motion of a face | 2000-10-13 | 2001-10-09 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2002031772A2 (en) |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6664956B1 (en) | 2000-10-12 | 2003-12-16 | Momentum Bilgisayar, Yazilim, Danismanlik, Ticaret A. S. | Method for generating a personalized 3-D face model |
| DE102004038545A1 (en) * | 2004-08-06 | 2006-03-16 | Peters, Heiko, Dr. | Positioning system and position measuring system |
| CN101796545A (en) * | 2007-09-04 | 2010-08-04 | Sony Corporation | Integrated Motion Capture |
| US20090110245A1 (en) * | 2007-10-30 | 2009-04-30 | Karl Ola Thorn | System and method for rendering and selecting a discrete portion of a digital image for manipulation |
| CN105975935B (en) * | 2016-05-04 | 2019-06-25 | Tencent Technology (Shenzhen) Co., Ltd. | Face image processing method and device |
| KR102540756B1 (en) * | 2022-01-25 | 2023-06-08 | DeepBrain AI Co., Ltd. | Apparatus and method for generating speech synthesis image |
| KR102540759B1 (en) * | 2022-02-09 | 2023-06-08 | DeepBrain AI Co., Ltd. | Apparatus and method for generating speech synthesis image |
| KR102584485B1 (en) * | 2022-02-14 | 2023-10-04 | DeepBrain AI Co., Ltd. | Apparatus and method for generating speech synthesis image |
| KR102584484B1 (en) * | 2022-02-14 | 2023-10-04 | DeepBrain AI Co., Ltd. | Apparatus and method for generating speech synthesis image |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2894241B2 (en) * | 1995-04-21 | 1999-05-24 | 村田機械株式会社 | Image recognition device |
| US5802220A (en) * | 1995-12-15 | 1998-09-01 | Xerox Corporation | Apparatus and method for tracking facial motion through a sequence of images |
2001
- 2001-10-09: WO application PCT/IB2001/002736 published as WO2002031772A2 (not active, Ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| WO2002031772A3 (en) | 2002-10-31 |
| WO2002031772A2 (en) | 2002-04-18 |
Similar Documents
| Publication | Title |
|---|---|
| Habermann et al. | Deepcap: Monocular human performance capture using weak supervision |
| Noh et al. | A survey of facial modeling and animation techniques |
| Kalra et al. | Real-time animation of realistic virtual humans |
| Theobalt et al. | Combining 2D feature tracking and volume reconstruction for online video-based human motion capture |
| WO2002030171A3 (en) | Facial animation of a personalized 3-d face model using a control mesh |
| WO2002037415A3 (en) | Person tagging in an image processing system utilizing a statistical model based on both appearance and geometric features |
| Breton et al. | FaceEngine a 3D facial animation engine for real time applications |
| Ohya et al. | Real-time reproduction of 3D human images in virtual space teleconferencing |
| Bregler et al. | Video motion capture |
| Goto et al. | MPEG-4 based animation with face feature tracking |
| WO2002031772A3 (en) | Method for tracking motion of a face |
| JP6818219B1 (en) | 3D avatar generator, 3D avatar generation method and 3D avatar generation program |
| Valente et al. | Face tracking and realistic animations for telecommunicant clones |
| Ulgen | A step towards universal facial animation via volume morphing |
| Pandžić et al. | Real-time facial interaction |
| Moeslund et al. | Summaries of 107 computer vision-based human motion capture papers |
| Richter et al. | Real-time reshaping of humans |
| Valente et al. | A visual analysis/synthesis feedback loop for accurate face tracking |
| Malciu et al. | Tracking facial features in video sequences using a deformable-model-based approach |
| Otsuka et al. | Extracting facial motion parameters by tracking feature points |
| Hilton | Towards model-based capture of a persons shape, appearance and motion |
| Zheng et al. | A model based approach in extracting and generating human motion |
| Kakadiaris et al. | Vision-based animation of digital humans |
| Schreer et al. | Real-time vision and speech driven avatars for multimedia applications |
| Pei et al. | Transferring of speech movements from video to 3D face space |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AK | Designated states | Kind code of ref document: C1; Designated state(s): JP |
| | AL | Designated countries for regional patents | Kind code of ref document: C1; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
| | CFP | Corrected version of a pamphlet front page | |
| | DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101) | |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | AK | Designated states | Kind code of ref document: A3; Designated state(s): JP |
| | AL | Designated countries for regional patents | Kind code of ref document: A3; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
| | 122 | Ep: pct application non-entry in european phase | |
| | NENP | Non-entry into the national phase | Ref country code: JP |