Generation of realistic-looking, animated human face models is one of the most interesting and challenging tasks in computer graphics today. Since a hallmark of human individuality is the range of variation in face shape, an animation that fails to reproduce this diversity deprives its characters of independent identities. Animating a scene realistically, or playing out a virtual interaction believably, requires reconstructing the face of a specific person, i.e., cloning a real person's face.
The approaches to face modeling and animation described in the literature range from parameterized 3D geometric surface models to models that simulate the physical properties of the anatomical facial structures in detail. They can produce expressive and plausible animation of a 3D face model. However, these approaches make little use of existing data when animating a new model: animation structures do not simply transfer between models. Each time a new model is created for animation, method-specific manual tuning is inevitable. If the manual tuning or computational cost of generating animations for one model is high, generating similar animations for every new model takes a similar effort. Efficient generation of animatable models of different people thus remains an unsolved problem.
In this project, we propose a new Structure-Driven Adaptation (SDA) method to efficiently generate anatomy-based 3D faces of real human individuals for animation. The technique adapts a prototype facial model to the acquired surface data in an "outside-in" manner: the deformation applied to the external skin layer is propagated through subsequent transformations to the muscles, with the final effect of warping the underlying skull. The prototype model has a known topology and incorporates an anatomy-based layered structure hierarchy of physically-based skin, muscles, and skull, based on the techniques described in our previous work.

Starting with interactive specification of a set of anthropometric landmarks on the generic control model and the scanned surface, a global alignment automatically adapts the position, size, and orientation of the generic control model to the scan data, based on a series of measurements between a subset of landmarks. In the physically-based skin mesh adaptation, the generic skin mesh is represented as a dynamic deformable model subject to internal forces stemming from the elastic properties of the surface and external forces generated by the input data points and features. The adaptation is governed by a Lagrangian equation of motion that is iteratively simulated to recover the individualized face shape.

We incorporate the effect of structural differences in muscles and skull, both to generate and to animate the model. A fully automated approach adapts the underlying muscle layer, which includes three types of facial muscles. SDA then deforms a set of automatically generated skull feature points according to the deformed external skin and muscle layers; the new positions of these feature points drive a volume morphing applied to the skull model template. The reconstructed model not only resembles the individual face in shape but also reflects the anatomical structure of the human face; it can therefore be animated immediately.
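To make the pipeline stages concrete, the sketches below illustrate the kind of computation each stage involves. They are illustrative stand-ins, not the SDA implementation: function names, parameter values, and some algorithmic choices are our assumptions and are flagged as such.

For the global alignment, the paper derives scale and pose from measurements between landmark subsets; as a simplified stand-in, the following Python sketch fits a single least-squares similarity transform (Umeyama's method) to the landmark correspondences.

```python
import numpy as np

def similarity_align(src_landmarks, dst_landmarks):
    """Estimate scale s, rotation R, translation t minimizing
    sum_i || s * R @ p_i + t - q_i ||^2 over landmark pairs (p_i, q_i).

    src_landmarks: (N, 3) landmarks on the generic control model.
    dst_landmarks: (N, 3) corresponding landmarks on the scan.
    """
    p = np.asarray(src_landmarks, dtype=float)
    q = np.asarray(dst_landmarks, dtype=float)
    mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
    pc, qc = p - mu_p, q - mu_q

    # Cross-covariance and SVD give the optimal rotation (Kabsch/Umeyama).
    H = pc.T @ qc
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T

    s = np.trace(np.diag(S) @ D) / (pc ** 2).sum()  # optimal isotropic scale
    t = mu_q - s * R @ mu_p
    return s, R, t

# Apply to every vertex of the generic control model:
# aligned_verts = s * verts @ R.T + t
```

For the skin adaptation, a minimal sketch of the iterative Lagrangian simulation follows, assuming a mass-spring discretization with linear edge springs for the internal elastic forces and nearest-point attraction to the scan for the external data forces. All constants are hypothetical tuning values; the actual SDA force formulation may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def adapt_skin(verts, edges, scan_points,
               k_spring=10.0, k_data=5.0, damping=0.5,
               mass=1.0, dt=0.01, n_iters=500):
    """Integrate a damped equation of motion until the generic skin mesh
    settles onto the scan. edges: (E, 2) vertex-index pairs of the mesh."""
    x = np.asarray(verts, dtype=float).copy()
    v = np.zeros_like(x)
    rest = np.linalg.norm(x[edges[:, 0]] - x[edges[:, 1]], axis=1)
    tree = cKDTree(scan_points)

    for _ in range(n_iters):
        f = np.zeros_like(x)

        # Internal elastic forces: linear springs along mesh edges.
        d = x[edges[:, 1]] - x[edges[:, 0]]
        length = np.linalg.norm(d, axis=1, keepdims=True)
        direction = d / np.maximum(length, 1e-12)
        fs = k_spring * (length - rest[:, None]) * direction
        np.add.at(f, edges[:, 0], fs)
        np.add.at(f, edges[:, 1], -fs)

        # External data forces: attract each vertex to its closest scan point.
        _, idx = tree.query(x)
        f += k_data * (scan_points[idx] - x)

        # Damped explicit Euler step of the equation of motion.
        a = (f - damping * v) / mass
        v += dt * a
        x += dt * v
    return x
```

For the skull fitting, a common way to realize a feature-point-driven volume morph is radial-basis-function interpolation of the feature displacements; the kernel choice below (phi(r) = r^3 plus an affine term) is an assumption for illustration, since the paper specifies volume morphing but not this particular kernel.

```python
import numpy as np

def rbf_volume_morph(src_feats, dst_feats, skull_verts):
    """Warp the template skull so that feature points src_feats land on
    their deformed positions dst_feats, interpolating smoothly elsewhere."""
    n = len(src_feats)
    r = np.linalg.norm(src_feats[:, None, :] - src_feats[None, :, :], axis=-1)
    Phi = r ** 3                                       # triharmonic kernel
    P = np.hstack([np.ones((n, 1)), src_feats])        # affine polynomial part
    A = np.block([[Phi, P], [P.T, np.zeros((4, 4))]])
    b = np.vstack([dst_feats - src_feats, np.zeros((4, 3))])
    w = np.linalg.solve(A, b)                          # RBF + affine weights

    rv = np.linalg.norm(skull_verts[:, None, :] - src_feats[None, :, :], axis=-1)
    Pv = np.hstack([np.ones((len(skull_verts), 1)), skull_verts])
    disp = (rv ** 3) @ w[:n] + Pv @ w[n:]
    return skull_verts + disp
```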
Animating the adapted low-resolution control mesh is computationally efficient, while the reconstruction of high-resolution surface detail on the animated control model is controlled separately. A scalar displacement map encodes the detail of the high-resolution geometry, providing an efficient representation of the surface shape and allowing control over the level of detail. We develop an offset-envelope mapping method that automatically generates a displacement map by mapping the scan data onto the low-resolution control mesh. A hierarchical representation of the model is then constructed to approximate the scanned data set with increasing accuracy, by refining the surface with a triangular mesh subdivision scheme and resampling the displacement map. This mechanism enables efficient, seamless animation of the high-resolution face geometry through animation control over the adapted control model. The resulting system provides a complete, end-to-end solution for efficient anatomy-based human face modeling for animation.
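A simplified sketch of sampling such a scalar displacement map: each control-mesh sample records a signed offset along its normal to the scan surface. For brevity this stand-in approximates the offset-envelope search with a nearest-neighbor query; the actual method constrains the search to an offset envelope around the control mesh.

```python
import numpy as np
from scipy.spatial import cKDTree

def scalar_displacement_map(base_verts, base_normals, scan_points):
    """Return one signed scalar per control-mesh sample: the component,
    along the unit surface normal, of the offset to the scan surface.
    Nearest-neighbor projection is a simplifying assumption here."""
    tree = cKDTree(scan_points)
    _, idx = tree.query(base_verts)
    offsets = scan_points[idx] - base_verts
    # Project each offset onto the unit normal to get a scalar displacement.
    return np.einsum('ij,ij->i', offsets, base_normals)

# During animation, subdividing the control mesh and resampling this map
# (refined_vertex + d * refined_normal) restores the high-resolution detail
# on top of the animated low-resolution control model.
```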
Figures:
Original scanned face data and a generic anatomy-based model.
Adaptation of the face skin and muscle layers.
Skull fitting.
Cloned anatomy-based face models.
Facial animation.
Papers:
Yu Zhang, Terence Sim and Chew Lim Tan. "Generating personalized anatomy-based 3D facial models from scanned data". Machine GRAPHICS & VISION Journal, to appear, 2005.
Yu Zhang and Terence Sim. "From range data to animated anatomy-based faces: a model adaptation method". Proc. 5th International Conference on 3D Digital Imaging and Modeling (3DIM2005), IEEE Computer Society Press, to appear, Ottawa, Canada, June 2005.
Yu Zhang, Terence Sim and Chew Lim Tan. "Faces alive: Reconstruction of animated 3D human faces". Proc. International Conference on Computational Science and its Applications, Technical Session on
Computer Graphics and Geometric Modeling (TSCG2005), Springer-Verlag, ISBN: 3-540-25862-0, pp. 1197-1208, Singapore, May 2005.
Yu Zhang, Terence Sim and Chew Lim Tan. "Human face modeling for facial image synthesis using
optimization-based adaptation". Proc. 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing (ISIMP2004), pp. 258-261, Hong Kong, China, Oct. 2004.
Yu Zhang, Terence Sim, and Chew Lim Tan. "Human face modeling by fitting a 3D anatomy-based model using dynamic deformable meshes". Proc. 7th IASTED International Conference on Computer Graphics and Imaging (CGIM2004), Kauai, Hawaii, USA, Aug. 2004.
Yu Zhang, Terence Sim and Chew Lim Tan. "Adaptation-based individualized face modeling for animation using displacement map". Proc. Computer Graphics International 2004 (CGI2004), IEEE Computer Society Press, pp. 518-521, Crete, Greece, June 2004.
Copyright 2005-2013, Yu Zhang. This material may not be published, modified, or otherwise redistributed in whole or in part without prior approval.