Rapid Reconstruction of Animatable Individualized 3D Faces from Range Data
In the task of modeling human faces from real individuals for animation, we are usually confronted with two conflicting goals: one is the requirement for accurate reproduction of the face shape; the other is the demand for an efficient representation that can be animated easily and quickly. The goal of face cloning calls for models that are based on real measurements of the structures of the human face. Current technology allows us to acquire the precise 3D geometry of a face easily by using a range scanning device. 3D models reconstructed automatically from range data can bear a very good resemblance to the specific person, especially if they are properly textured. In practice, though, there are a number of obstacles to using the acquired geometry directly for reconstructing animatable facial models:
absence of functional structure for animation;
irregular and dense surface data that cannot be used for optimal animatable model construction and real-time animation; and
incomplete data due to projector/camera shadowing effects or bad reflective properties of the surface.
The goal of our work is to automatically reconstruct an animatable 3D facial model of a specific person. In this project, we propose an efficient method for creating a personalized facial model by adapting a prototype physically-based model to the geometry of an individual face. The initial prototype facial model resembles an average human face and has a layered anatomical structure for controlling facial motions and expressions, incorporating a physically-based approximation to facial skin and a set of anatomically motivated facial muscle actuators. The face geometry and texture of real individuals are recovered from a set of range and reflectance data acquired with a laser range scanner. For adaptation, we first specify a minimal set of anthropometric landmarks on the 2D images of both the prototype and the individual face to identify facial features. The 3D positions of the landmarks, which must lie on the skin surface, are computed automatically using a projection-mapping approach. Based on a series of measurements between the computed 3D landmark points, a global shape adaptation then adapts the size, position and orientation of the prototype model in 3D space. After global adaptation, a local shape adaptation deforms the skin geometry of the prototype model to fit all of its vertices to the surface data of the real face. The resulting model shares the muscle structure of the prototype model and can be animated immediately after adaptation by using the given muscle actuation parameters.
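To make the two adaptation stages more concrete, the sketch below shows one plausible way to realize them; it is a minimal illustration under stated assumptions, not the algorithm published in the papers listed below. It assumes the global stage can be approximated by a least-squares similarity transform (scale, rotation, translation) estimated from the corresponding 3D landmarks, and the local stage by snapping each globally aligned prototype vertex to its nearest point in the scanned data. The function names, array layouts and the use of NumPy/SciPy are assumptions made for illustration.

```python
# Illustrative sketch only: a similarity alignment from landmark
# correspondences (global stage) followed by a nearest-point fit
# (local stage). Not the published physically-based adaptation.
import numpy as np
from scipy.spatial import cKDTree


def global_adaptation(proto_lm, scan_lm):
    """Least-squares scale s, rotation R and translation t mapping the
    prototype landmarks (N, 3) onto the scanned-face landmarks (N, 3)."""
    mu_p, mu_s = proto_lm.mean(axis=0), scan_lm.mean(axis=0)
    P, S = proto_lm - mu_p, scan_lm - mu_s
    U, D, Vt = np.linalg.svd(P.T @ S)               # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    s = (D * [1.0, 1.0, d]).sum() / (P ** 2).sum()  # isotropic scale
    t = mu_s - s * R @ mu_p
    return s, R, t


def local_adaptation(aligned_verts, scan_points):
    """Pull each globally aligned prototype vertex onto its closest
    scanned surface point (a crude stand-in for the local shape fit)."""
    tree = cKDTree(scan_points)
    _, idx = tree.query(aligned_verts)
    return scan_points[idx]


# Hypothetical usage:
# s, R, t = global_adaptation(proto_landmarks_3d, scan_landmarks_3d)
# aligned = (s * (R @ proto_vertices.T)).T + t
# fitted  = local_adaptation(aligned, scan_point_cloud)
```

The nearest-point query is only a stand-in: the local adaptation described above deforms the prototype skin itself, so the mesh connectivity and the underlying muscle structure are preserved for animation.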
The novel features of our algorithm are:
Efficient face reconstruction technique with minimum user intervention.
A new projection-mapping approach to recover the 3D coordinates of landmark points defined in 2D images (see the sketch after this list).
Automated global adaptation process with no restriction on the position and orientation of the prototype model and the scanned data.
Framework for representing a static scanned face data set for efficient animation.
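The projection-mapping step referred to in the second item is detailed in the papers below; as a rough illustration of the scanned-data side of the idea, the sketch assumes the scanner delivers a geometry image registered pixel-for-pixel with the reflectance image, so a landmark picked at a sub-pixel 2D position can be lifted to 3D by bilinear interpolation of the surrounding valid range samples. The function name, the (H, W, 3) xyz_image layout and the validity mask are assumptions for illustration.

```python
# Illustrative sketch: lift a 2D landmark to 3D via a geometry image
# assumed to be registered with the reflectance image.
import numpy as np


def landmark_to_3d(u, v, xyz_image, valid_mask=None):
    """Return the 3D position for a landmark at sub-pixel column u, row v.
    xyz_image: (H, W, 3) array of per-pixel 3D points; valid_mask flags
    pixels that hold real measurements (holes from shadowing are skipped)."""
    h, w, _ = xyz_image.shape
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    pts, wts = [], []
    for i, j, wt in [(v0,     u0,     (1 - du) * (1 - dv)),
                     (v0,     u0 + 1, du * (1 - dv)),
                     (v0 + 1, u0,     (1 - du) * dv),
                     (v0 + 1, u0 + 1, du * dv)]:
        if 0 <= i < h and 0 <= j < w and (valid_mask is None or valid_mask[i, j]):
            pts.append(xyz_image[i, j])
            wts.append(wt)
    if not pts:
        raise ValueError("landmark falls entirely in a hole of the range data")
    wts = np.asarray(wts)
    return (np.asarray(pts) * wts[:, None]).sum(axis=0) / wts.sum()
```

For the prototype model, which is a polygonal mesh rather than a range image, the corresponding 3D landmark positions would instead be obtained by projecting the 2D landmarks onto the mesh surface; that part is not sketched here.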
Figure: Individualized 3D face reconstructed from the acquired range data.
Figure: Side-by-side comparison of two views of reconstructed individualized face models with their original photographs.
Papers:
Yu Zhang, Terence Sim, and Chew Lim Tan. "Rapid modeling of 3D faces for animation using an efficient adaptation algorithm". Proc. International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia (GRAPHITE 2004), pp. 173-181, Singapore, June 2004.
Yu Zhang, Terence Sim and Chew Lim Tan. "Reanimating real humans: automatic reconstruction of animated faces from range data". Proc. IEEE International Conference on Multimedia and Expo 2004 (ICME2004), IEEE Computer Society Press, pp. 395-398, Taipei, China, June 2004.
Yu Zhang, Terence Sim and Chew Lim Tan. "Reconstruction of animatable personalized 3D faces by adaptation-based modeling". Eurographics 2003, Short Presentations, pp. 201-208, Granada, Spain, Sept. 2003.
Copyright 2005-2013, Yu Zhang. This material may not be published, modified or otherwise redistributed in whole or part without prior approval.