WO2007118919A1 - Procédé de production d'images de synthèse animées - Google Patents
Procédé de production d'images de synthèse animées
- Publication number
- WO2007118919A1 (PCT/ES2007/000235)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- model
- images
- dimensional
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Definitions
- The present invention relates to a method for generating synthetic animation images for the leisure sector in general, such as the film industry, videogames and promotional advertising, among others, in which the personalization of an image or kinematic animation is of interest.
- The method for generating synthetic animation images of this invention has technical features that allow the production of synthetic-image audiovisual media which one or more users can personalize, configuring one or more characters of the story or event described in the audiovisual medium with the face and/or body of said users.
- The aim is to obtain a method or system that allows the personalization of, for example, a film, with the participation of the user, whose image and likeness is easily incorporated into one of the characters.
- The method also allows the creation of videogames and interactive software in which the user can likewise configure one or more characters in his image and likeness, the hardware of the machine that performs the image synthesis providing a virtual representation of the user in the graphic environment of said game.
- The method comprises several phases, which are described below.
- These phases are integrated so that the user can configure the final result very easily.
- A number of important steps are provided in the method that offer considerable possibilities for interacting with and modifying the working parameters, depending on the result to be obtained by executing the method.
- The acquisition of at least one image of the user is carried out by means of a video camera, a photographic camera or the like, such as a webcam or a mobile phone camera.
- These images comprise the face of the user, preferably seen from the front, each image being stored as a two-dimensional digital bitmap.
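- As an illustration of this acquisition phase, the sketch below grabs a single frontal frame from a webcam; OpenCV is an assumed choice of library, since the patent does not prescribe any particular software.

```python
# Minimal sketch of the acquisition phase (1): grab one frontal image of the
# user from a webcam as a two-dimensional bitmap. OpenCV is an assumed choice.
import cv2

def capture_frontal_image(camera_index=0):
    """Return a single BGR frame (a NumPy array of shape height x width x 3)."""
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("camera could not be opened")
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to read a frame from the camera")
    return frame

if __name__ == "__main__":
    image = capture_frontal_image()
    cv2.imwrite("user_frontal.png", image)  # saved for the recognition phase
```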
- The process of recognizing the user's facial and/or body features can be carried out by suitable software executing an artificial intelligence algorithm applied to computer vision, already available on the market, working from said image in static form provided its quality and contrast are sufficient.
- If the camera is connected to the computer that performs said recognition, the recognition can be carried out automatically in real time. In the cases mentioned above, this recognition is executed automatically, without the user having to handle any part of the algorithm.
- The objective of this algorithm is to automate the process of distinguishing between the various objects that may appear in the image, discarding those that are not relevant and focusing on the only object of interest, which is the user's face.
- The facial features of the face are then identified, such as the position and size of the eyes, mouth and nose, among others.
- A contour pattern of the face is used, which comprises, for example, the eyebrows, the eyes, the nose, the lips, the jaw and the cheekbones, among others.
- This pattern is adapted to the image automatically by the software, yielding a personalized shift of key coordinates with respect to a neutral pattern. This displacement corresponds to the physiognomic characteristics of the user.
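- A minimal sketch of this recognition phase is given below, assuming dlib's 68-point landmark predictor as the contour pattern and its bundled model file; the patent itself does not name any specific algorithm or library.

```python
# Minimal sketch of the recognition phase (2): detect the user's face, fit a
# landmark pattern to it, and express the result as a displacement of key
# coordinates with respect to a neutral pattern. dlib and its 68-point model
# file are assumed dependencies, not prescribed by the patent.
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def fit_landmark_pattern(gray_image):
    """Return a (68, 2) array of facial key coordinates for the first detected face."""
    faces = detector(gray_image)              # everything that is not a face is discarded
    if not faces:
        raise ValueError("no face detected in the image")
    shape = predictor(gray_image, faces[0])   # adapt the pattern to the detected face
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=float)

def displacement_from_neutral(landmarks, neutral_pattern):
    """Per-point shift of the fitted pattern relative to a neutral pattern,
    after removing translation and scale (a very rough normalization)."""
    def normalize(points):
        centred = points - points.mean(axis=0)
        return centred / np.linalg.norm(centred)
    return normalize(landmarks) - normalize(neutral_pattern)
```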
- It is thus possible to alter a model, for example of the head, to adapt it to the physiognomy of the user and superimpose the image as a texture with complete accuracy. The model may also include a beard or moustache if these have been detected during image recognition.
- This modification can be made in real time.
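- One possible way to carry out this alteration of the head model is sketched below: each recognised landmark is bound to one mesh vertex and its offset is propagated to neighbouring vertices with a smooth falloff. The binding table and the falloff radius are illustrative assumptions, not part of the patent.

```python
# Minimal sketch of phase (3): deform a neutral head mesh so that it matches
# the user's physiognomy. The landmark-to-vertex binding is a hypothetical,
# simplified correspondence; real systems use denser mappings.
import numpy as np

def deform_neutral_mesh(neutral_vertices, landmark_to_vertex, landmark_offsets_3d, falloff=0.05):
    """neutral_vertices: (N, 3) array; landmark_to_vertex: {landmark index: vertex index};
    landmark_offsets_3d: (L, 3) array of displacements in model units."""
    deformed = neutral_vertices.copy()
    for lm_index, v_index in landmark_to_vertex.items():
        anchor = neutral_vertices[v_index]
        offset = landmark_offsets_3d[lm_index]
        distance = np.linalg.norm(neutral_vertices - anchor, axis=1)
        weight = np.exp(-(distance / falloff) ** 2)   # 1 at the anchor, ~0 far away
        deformed += weight[:, None] * offset          # propagate the shift smoothly
    return deformed
```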
- The generated custom model may be a single portion of the character, such as the head, so that the generation of the complete three-dimensional model includes the use of portions of pre-established models in addition to the custom models generated.
- An example is the coupling of a custom head model of the user to an already designed cartoon body. This hybrid model is fully manageable in the image synthesis.
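- A minimal sketch of this coupling is given below: the customised head mesh and a pre-established body mesh are merged into one vertex and face list, re-indexing the body's faces. Array names and conventions are illustrative.

```python
# Minimal sketch of building the hybrid (mixed) model: concatenate the custom
# head mesh with a preset body mesh. Vertices are (N, 3) arrays, faces are
# (M, 3) arrays of vertex indices.
import numpy as np

def combine_head_and_body(head_vertices, head_faces, body_vertices, body_faces):
    """Return the vertex and face arrays of the combined character."""
    vertices = np.vstack([head_vertices, body_vertices])
    # Body face indices must be shifted past the head's vertex count.
    faces = np.vstack([head_faces, body_faces + len(head_vertices)])
    return vertices, faces
```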
- This synthesis or "rendering" of the customized models, whether complete or hybrid, is performed in a virtual scene or background, together with the pre-established models necessary to obtain the sequence of images that gives rise to the animation.
- The synthesis can be configured passively, that is, the production software generates a film or sequence according to the script previously established for the animation, which can be recorded on a physical medium for later viewing or sent to a display device, such as a television or similar screen, obtaining one or several film sequences with at least one customized character.
- This image synthesis can also occur in real time, so that the user is allowed to intervene in the scene and control his character, an approach especially suitable for videogames.
- The user can have suitable control means for interacting with and altering, in real time, the characteristics of the customized model.
- These interactions may correspond to gestures and actions that represent facial emotions, such as laughing, crying or showing anger, or to movements such as speaking, opening and closing the eyes, or moving the body.
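- One common way to realise such gestures, sketched below, is with blend shapes: each expression is a target mesh and the control means simply set a weight per expression. The expression names and the blend-shape approach are illustrative assumptions, not a requirement of the method.

```python
# Minimal sketch of driving facial gestures on the customised model with blend
# shapes: the user's control inputs set a weight in [0, 1] for each expression.
import numpy as np

def apply_expressions(base_vertices, expression_targets, weights):
    """base_vertices: (N, 3) personalised neutral mesh; expression_targets:
    {name: (N, 3) target mesh}; weights: {name: float in [0, 1]}."""
    result = base_vertices.copy()
    for name, target in expression_targets.items():
        w = weights.get(name, 0.0)
        result += w * (target - base_vertices)   # linear blend towards each target
    return result

# e.g. frame = apply_expressions(head, targets, {"laugh": 0.8, "blink": 1.0})
```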
- The successive frames obtained in the image synthesis can be dumped to various devices.
- Digital files can be generated as cinematic sequences to be dumped onto physical media (DVD or others) or sent to multimedia devices such as mobile phones.
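- The sketch below shows one possible dump of rendered frames to a digital video file; OpenCV's VideoWriter and the XVID codec are assumed choices, since the patent does not prescribe a container or codec.

```python
# Minimal sketch of the dump phase (5): write successive rendered frames to a
# digital video file that can later be copied to physical media or sent to a
# multimedia device. Codec and file name are illustrative.
import cv2

def dump_frames_to_file(frames, path="personalised_sequence.avi", fps=25):
    """frames: iterable of same-sized BGR images (height x width x 3)."""
    frames = iter(frames)
    first = next(frames)
    height, width = first.shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"XVID"), fps, (width, height))
    writer.write(first)
    for frame in frames:
        writer.write(frame)
    writer.release()
```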
- The animation produced can be viewed in real time, for example on a television or cinema screen, or distributed over the Internet, interconnected computer networks or broadcasting networks.
Description of the figures
- Figure 1 shows a process flow diagram.
- Figure 2 shows a diagram of the phases of capture and recognition of facial features from the image obtained.
- Figure 3 shows a diagram of the phases of generation of the corresponding mathematical three-dimensional model with the recognized features of the user.
- Figure 4 shows a diagram of the synthesis of a scene into which a character incorporating the custom model and a second character consisting of a preset model have been introduced, with user control means for interactive management of the model representing his character.
- The method comprises the following sequence of phases:
- A software module comprising an artificial intelligence module applied to computer vision performs the recognition (2) of the facial features from a neutral pattern (21), which is adapted into a custom pattern (14) according to the characteristics of the captured image (12) of the user.
- Another software module evaluates the custom pattern (14) obtained and generates the three-dimensional mesh model (11) corresponding to the user's features, from the deformation of a pre-established neutral three-dimensional mesh model (31). In this generation, the surface textures obtained from the acquired image (12) or images are superimposed on said model (11). External modifiers (32) are also included, such as the addition of glasses or physiognomic modifications such as pointed ears, among others.
- The custom head model (11) is coupled to a portion of a preset body model (42), constituting a mixed three-dimensional model (6) that is introduced into a virtual scene (43) together with other preset models (41).
- Control means (7), such as game controllers, are provided for the indirect handling of points and curves of the custom model (6, 11) to represent facial gestures and emotions.
- A dump (5) of the consecutive frames generated by the synthesizing hardware is produced through an output device for display on a television screen. Said frames can also be recorded on physical media, such as a DVD or other medium.
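- Purely as an illustration of how phases (1) to (5) fit together, the sketch below composes the helper functions sketched earlier in this document; lift_to_3d and render_scene are hypothetical placeholders for the 2D-to-3D lifting and rendering steps, which the patent does not detail.

```python
# Illustrative composition of phases (1)-(5), reusing the helpers sketched
# above. lift_to_3d and render_scene are hypothetical placeholders, not part
# of the patented implementation.
import cv2

def personalised_animation_pipeline(script_poses, neutral_head, body, camera_index=0):
    image = capture_frontal_image(camera_index)                        # (1) acquisition
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    landmarks = fit_landmark_pattern(gray)                             # (2) recognition
    offsets_3d = lift_to_3d(landmarks, neutral_head.neutral_pattern)   # hypothetical 2D -> 3D step
    head_vertices = deform_neutral_mesh(neutral_head.vertices,         # (3) model generation
                                        neutral_head.binding, offsets_3d)
    vertices, faces = combine_head_and_body(head_vertices, neutral_head.faces,
                                            body.vertices, body.faces)  # hybrid model (6)
    frames = (render_scene(vertices, faces, pose) for pose in script_poses)  # (4) synthesis, renderer assumed
    dump_frames_to_file(frames)                                        # (5) dump to output device
```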
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to a method comprising the following steps: acquisition (1) of at least one frontal image (12) of the user, of his face and/or of his whole body, preferably by means of a video camera (13) or a photographic camera; recognition (2) of the facial features and/or body characteristics of the user from the acquired images (12); generation (3) of a three-dimensional mesh model (11) corresponding to the user's features, comprising the deformation of a pre-established neutral three-dimensional mesh model (31) according to the recognized features, and the superposition of the images (12) onto the modified structure; introduction of the model (11) into a virtual environment (43) together with other custom or pre-established models (41); synthesis (4) of modelling images, with or without interaction with the custom models (11), the pre-established models (41) and the environment (43), for display; dump (5), via an output device, of the successive frames obtained in the image synthesis (4).
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| ES200600993A ES2284391B1 (es) | 2006-04-19 | 2006-04-19 | Procedimiento para la generacion de imagenes de animacion sintetica. |
| ESP200600993 | 2006-04-19 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2007118919A1 true WO2007118919A1 (fr) | 2007-10-25 |
Family
ID=38609077
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/ES2007/000235 Ceased WO2007118919A1 (fr) | 2006-04-19 | 2007-04-19 | Procédé de production d'images de synthèse animées |
Country Status (2)
| Country | Link |
|---|---|
| ES (1) | ES2284391B1 (fr) |
| WO (1) | WO2007118919A1 (fr) |
-
2006
- 2006-04-19 ES ES200600993A patent/ES2284391B1/es not_active Expired - Fee Related
-
2007
- 2007-04-19 WO PCT/ES2007/000235 patent/WO2007118919A1/fr not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030051255A1 (en) * | 1993-10-15 | 2003-03-13 | Bulman Richard L. | Object customization and presentation system |
| EP0675461A2 (fr) * | 1994-03-22 | 1995-10-04 | Casio Computer Co., Ltd. | Méthode et appareil de génération d'images |
| JPH0973559A (ja) * | 1995-09-07 | 1997-03-18 | Fujitsu Ltd | モーフィング編集装置 |
| WO2003017206A1 (fr) * | 2001-08-14 | 2003-02-27 | Pulse Entertainment, Inc. | Systeme et procede de modelisation tridimensionnelle automatique |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102087750A (zh) * | 2010-06-13 | 2011-06-08 | 湖南宏梦信息科技有限公司 | 一种动漫特效的制作方法 |
| CN101916456A (zh) * | 2010-08-11 | 2010-12-15 | 李浩民 | 一种个性化三维动漫的制作方法 |
| CN101930618A (zh) * | 2010-08-20 | 2010-12-29 | 李浩民 | 一种个性化二维动漫的制作方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| ES2284391A1 (es) | 2007-11-01 |
| ES2284391B1 (es) | 2008-09-16 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07765824; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 07765824; Country of ref document: EP; Kind code of ref document: A1 |