Anatomy-based Face Cloning for Animation


    Generating realistic-looking, animated human face models is one of the most interesting and challenging tasks in computer graphics today. Since a hallmark of human individuality is the range of variation in face shape, an animation that fails to reproduce this diversity deprives its characters of independent identities. Animating a scene realistically, or playing out a virtual interaction believably, therefore requires reconstructing the face of a specific person, i.e., cloning a real person’s face.
    The approaches to face modeling and animation described in the literature range from parameterizations of 3D geometric surface models to models that simulate the physical properties of the anatomical facial structures in detail. These approaches can produce expressive and plausible animation of a 3D face model. However, they make little use of existing data when animating a new model: animation structures do not simply transfer between models. Each time a new model is created for animation, method-specific manual tuning is inevitable. If the manual tuning or computational cost of animating one model is high, generating similar animations for every new model takes a similar effort. Efficient generation of animatable models of various people remains an unsolved problem.
    In this project, we propose a new Structure-Driven Adaptation (SDA) method to efficiently generate anatomy-based 3D faces of real human individuals for animation. The technique adapts a prototype facial model to acquired surface data in an “outside-in” manner: deformation applied to the external skin layer is propagated, along with subsequent transformations, to the muscles, with the final effect of warping the underlying skull. The prototype model has a known topology and incorporates an anatomy-based layered structure hierarchy of physically-based skin, muscles, and skull, based on the techniques described in our previous work. Starting with interactive specification of a set of anthropometric landmarks on the generic control model and the scanned surface, a global alignment automatically adapts the position, size, and orientation of the generic control model to the scan data, based on a series of measurements between a subset of landmarks. In the physically-based skin mesh adaptation, the generic skin mesh is represented as a dynamic deformable model subject to internal forces stemming from the elastic properties of the surface and to external forces generated by input data points and features. The adaptation is governed by a Lagrangian equation of motion, which is iteratively integrated to recover the individualized face shape. We incorporate the effect of structural differences in muscles and skull, both to generate and to animate the model. A fully automated approach adapts the underlying muscle layer, which includes three types of facial muscles. Finally, SDA deforms a set of automatically generated skull feature points according to the deformed external skin and muscle layers; the new positions of these feature points then drive a volume morphing applied to the skull model template.
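The global alignment step, which adapts position, size, and orientation from landmark correspondences, can be sketched as a least-squares similarity fit. The use of Umeyama's closed-form solution here is an assumption for illustration only, since the text does not specify how the measurement-based alignment is computed; the function name and landmark arrays are hypothetical.

```python
import numpy as np

def align_landmarks(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    with dst ~= s * R @ src_i + t, fit from corresponding 3D landmarks
    via Umeyama's closed-form method. src, dst: (N, 3) arrays."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                 # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    sign = 1.0 if np.linalg.det(U) * np.linalg.det(Vt) >= 0 else -1.0
    d = np.array([1.0, 1.0, sign])             # guard against reflections
    R = U @ np.diag(d) @ Vt
    var_s = (xs ** 2).sum() / len(src)         # variance of centered source
    s = (D * d).sum() / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

The recovered transform can then be applied to every vertex of the generic control model before the finer, physically-based adaptation begins.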
The reconstructed model not only resembles the individual face in shape but also reflects the anatomical structure of the human face; it can therefore be animated immediately.
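The physically-based skin adaptation above can be illustrated with a minimal mass-spring sketch: each vertex obeys an equation of motion of the form m x'' + c x' + f_int(x) = f_ext(x), with edge springs standing in for the internal elastic forces and a per-vertex attraction toward assigned data points standing in for the external forces. The constants, the linear spring model, and the one-to-one vertex-to-target assignment are simplifying assumptions, not the paper's actual formulation.

```python
import numpy as np

def fit_skin_mesh(verts, edges, targets, mass=1.0, damping=5.0,
                  k_spring=20.0, k_data=15.0, dt=0.01, iters=2000):
    """Iteratively integrates m x'' + c x' + f_int(x) = f_ext(x) with
    semi-implicit Euler. Internal forces: linear springs along mesh edges
    (rest length = initial length). External forces: each vertex is pulled
    toward its assigned target data point. Constants are illustrative."""
    x = verts.astype(float).copy()
    v = np.zeros_like(x)
    rest = np.linalg.norm(x[edges[:, 0]] - x[edges[:, 1]], axis=1)
    for _ in range(iters):
        f = k_data * (targets - x) - damping * v      # data attraction + damping
        d = x[edges[:, 0]] - x[edges[:, 1]]
        ln = np.linalg.norm(d, axis=1, keepdims=True)
        fs = -k_spring * (ln - rest[:, None]) * d / np.maximum(ln, 1e-12)
        np.add.at(f, edges[:, 0], fs)                 # accumulate spring forces
        np.add.at(f, edges[:, 1], -fs)
        v += dt * f / mass                            # semi-implicit Euler step
        x += dt * v
    return x
```

Iterating until the residual forces vanish recovers an equilibrium shape that balances surface elasticity against the pull of the scanned data, which is the essence of the simulated adaptation.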
    Animating the adapted low-resolution control mesh is computationally efficient, while the reconstruction of high-resolution surface detail on the animated control model is handled separately. A scalar displacement map captures the detail of the high-resolution geometry, providing an efficient representation of the surface shape and allowing control over the level of detail. We develop an offset-envelope mapping method that automatically generates a displacement map by mapping the scan data onto the low-resolution control mesh. A hierarchical representation of the model is then constructed to approximate the scanned data set with increasing accuracy, refining the surface with a triangular mesh subdivision scheme together with resampling of the displacement map. This mechanism enables efficient and seamless animation of the high-resolution face geometry through animation control over the adapted control model. The resulting system provides a complete pipeline for efficient anatomy-based human face modeling for animation.
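The core idea of a scalar displacement map — storing surface detail as one signed offset along the normal per sample of the control surface — can be sketched as follows. The brute-force nearest-point search and the per-vertex sampling are simplifications of the offset-envelope mapping; in practice the scan would be intersected along normals and a spatial index (e.g. a k-d tree) would accelerate the queries.

```python
import numpy as np

def scalar_displacement_map(base_pts, base_normals, scan_pts):
    """For each sample of the low-res control surface, store one signed
    scalar: the component, along the sample's unit normal, of the vector
    to the closest scan point. base_pts, base_normals: (N, 3); scan_pts: (M, 3)."""
    # Nearest scan point per base sample (brute force for clarity).
    d2 = ((base_pts[:, None, :] - scan_pts[None, :, :]) ** 2).sum(-1)
    nearest = scan_pts[d2.argmin(axis=1)]
    return ((nearest - base_pts) * base_normals).sum(axis=1)   # signed offsets

def displace(base_pts, base_normals, offsets):
    """Rebuild detailed positions from the scalar map: base + offset * normal."""
    return base_pts + offsets[:, None] * base_normals
```

Because the map is scalar rather than vector-valued, it can be resampled at each subdivision level, which is what lets the hierarchy approximate the scan with increasing accuracy while the animation drives only the low-resolution control mesh.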