Rapid Reconstruction of Animatable Individualized 3D Faces from Range Data


    In the task of modeling human faces from real individuals for animation, we are usually confronted with two conflicting goals: one is the requirement for accurate reproduction of the face shape; the other is the demand for an efficient representation that can be animated easily and quickly. The goal of face cloning calls for models based on real measurements of the structures of the human face. Current technology allows us to acquire precise 3D face geometry easily by using a range scanning device, and 3D models reconstructed automatically from range data can bear a very good resemblance to the specific person, especially if they are properly textured. In practice, though, it turns out that there are a number of obstacles to using the acquired geometry directly for reconstructing animatable facial models:
    The goal of our work is to automatically reconstruct an animatable 3D facial model of a specific person. In this project, we propose an efficient method for creating a personalized facial model by adapting a prototype physically-based model to the geometry of an individual face. The initial prototype facial model resembles an average human face and has a layered anatomical structure for controlling facial motions and expressions, incorporating a physically-based approximation to facial skin and a set of anatomically motivated facial muscle actuators. The face geometry and texture of real individuals are recovered from a set of range and reflectance data acquired with a laser range scanner. For adaptation, we first specify a minimal set of anthropometric landmarks on the 2D images of both the prototype and individual faces to identify facial features. The 3D positions of the landmarks that should lie on the skin surface are computed automatically using a mapping-projection approach. Based on a series of measurements between the computed 3D landmark points, a global shape adaptation then adjusts the size, position, and orientation of the prototype model in 3D space. After global adaptation, a local shape adaptation deforms the skin geometry of the prototype model to fit all of its vertices to the surface data of the real face. The resulting model shares the same muscle structure as the prototype model and can be animated immediately after adaptation by using the given muscle actuation parameters.
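    The two adaptation stages can be illustrated with a minimal sketch. The paper's global adaptation is driven by anthropometric measurements between landmark points; as a generic stand-in we show a least-squares similarity alignment (Umeyama-style Procrustes) of corresponding 3D landmarks, followed by a brute-force nearest-point snap as a crude stand-in for the local vertex fitting. All function names and the use of NumPy are our assumptions, not part of the original method.

```python
import numpy as np

def global_adaptation(src_landmarks, dst_landmarks):
    """Estimate a similarity transform (scale s, rotation R, translation t)
    mapping prototype landmarks onto the individual's landmarks in the
    least-squares sense. NOTE: a generic Procrustes stand-in, not the
    paper's measurement-based global adaptation."""
    src = np.asarray(src_landmarks, float)
    dst = np.asarray(dst_landmarks, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # SVD of the cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / (src_c ** 2).sum()
    t = mu_d - scale * (R @ mu_s)
    return scale, R, t

def apply_transform(vertices, scale, R, t):
    """Apply the global adaptation to every prototype mesh vertex."""
    return scale * (np.asarray(vertices, float) @ R.T) + t

def local_adaptation(vertices, scan_points):
    """Snap each globally adapted vertex to its closest scanned point --
    a crude stand-in for the paper's local shape adaptation, which fits
    vertices to the range-scan surface."""
    v = np.asarray(vertices, float)
    p = np.asarray(scan_points, float)
    d2 = ((v[:, None, :] - p[None, :, :]) ** 2).sum(axis=-1)
    return p[d2.argmin(axis=1)]
```

In practice the local stage would project vertices onto the range image rather than onto a raw point set, but the sketch conveys the coarse-to-fine structure: one rigid-plus-scale alignment, then per-vertex refinement.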
    The novel features of our algorithm are: