Instant Facial Modeling and Animation of Living Humans


    Since the human face and its expressions are central to human interaction, both in interpersonal communication and in human-computer interfaces, realistic facial modeling and animation are essential for applications such as teleconferencing, man-machine interfaces, realistic avatars in virtual reality, and surgical facial planning. Modeling a specific person's face with a realistic, natural-looking appearance calls for models based on real measurements of the structure of the human face, as well as of facial features such as color, shape, and size. To date, the most accurate modeling has been achieved with range scanning technology: an automated laser range (LR) scanner can digitize on the order of 10^5 3D points from a solid object, such as a person's head and shoulders, within a second, allowing a detailed facial geometry and the corresponding texture image to be captured quickly. From such highly accurate range and color data, it is possible to build a realistic individualized face model with the structural information needed for animation.
    The goal of our work is to synthesize realistic facial expressions of a specific person from an anatomical perspective, at interactive rates. To this end, we developed a new individualized face model that conforms to human anatomy. The model has a hierarchical structure, incorporating a physically based approximation of facial skin, a set of anatomically motivated facial muscle actuators, and the underlying skull. We start from a highly accurate facial mesh reconstructed from individual measurements: in a series of preprocessing steps, the laser range data obtained by scanning a subject is turned into a facial mesh that precisely represents the subject's face geometry. We then automatically blend multiple reflectance images captured from different viewpoints, each containing color information for the visible facial regions, to perform view-based texture mapping. The resulting 3D model realistically portrays the person's facial geometry and texture. On the reduced facial mesh, we build a multi-layer mass-spring-damper (MSD) skin model that dynamically simulates the nonhomogeneous behavior of real skin, accounting for its nonlinear stress-strain relationship and for the near-incompressibility of soft tissue due to its liquid components. The model also incorporates a skull structure, which extends the scope of facial motion and facilitates the definition of facial muscles. To construct the muscles efficiently, we developed a muscle mapping approach that automatically places them at anatomically correct positions between the skin and skull layers. When the muscles contract, the deformation of the facial skin is computed by solving the underlying dynamic equation; a semi-implicit integration method computes the relaxation. The dynamic facial animation algorithm runs at an interactive rate and generates flexible facial expressions. Using our system, an individualized face can be brought to life within minutes.

    System overview


    Flow diagram for face geometry reconstruction from range scans

    View-dependent texture blending and mapping
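
    As described in the overview, each reflectance image covers only the facial regions visible from its viewpoint, so the images must be blended before mapping. A common weighting rule, sketched below, lets each image contribute in proportion to how directly its camera faces the surface at a vertex; the cosine weight law, the function name, and the visibility input are illustrative assumptions rather than the exact scheme of our system.

```python
import numpy as np

def blend_weights(normal, view_dirs, visible):
    """Per-vertex blending weights for view-based texture mapping.

    Each image's weight is the cosine of the angle between the vertex
    normal and that camera's view direction (clamped to zero), and only
    views in which the vertex is visible contribute. Weights are
    normalized so the blended color is a convex combination.
    """
    n = normal / np.linalg.norm(normal)
    w = np.array([max(0.0, float(n @ (v / np.linalg.norm(v)))) if vis else 0.0
                  for v, vis in zip(view_dirs, visible)])
    s = w.sum()
    return w / s if s > 0 else w
```

    The final color stored in the texture is then the weighted average of the colors sampled from the individual images, which suppresses visible seams between views.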

    Modeling anatomical structure
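
    To make the skin simulation concrete, the sketch below shows a biphasic spring force of the kind commonly used to approximate the nonlinear stress-strain behavior of facial tissue: compliant at low strain and much stiffer past a critical strain. The constants and the piecewise-linear force law are illustrative placeholders, and the volume-preservation forces that enforce near-incompressibility are omitted here.

```python
import numpy as np

def spring_force(x_i, x_j, rest_len, k_low=10.0, k_high=60.0, knee=0.15):
    """Force on particle i from a biphasic (nonlinear) spring to particle j.

    The spring is soft while |strain| is below the knee and stiff beyond
    it, mimicking skin's nonlinear stress-strain curve. All stiffness
    values are placeholders, not measured tissue parameters.
    """
    d = x_j - x_i
    length = float(np.linalg.norm(d))
    if length < 1e-9:                     # degenerate spring: no direction
        return np.zeros(3)
    strain = (length - rest_len) / rest_len
    k = k_low if abs(strain) < knee else k_high
    return k * strain * (d / length)      # pulls i toward j when stretched
```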

    A muscle mapping approach for efficient facial muscle construction
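
    One simple way to realize such a mapping, shown below under stated assumptions, is to store each muscle endpoint once, on a generic head, as barycentric coordinates of a facial-mesh triangle plus a relative depth between the skin and skull layers, and then to re-evaluate those coordinates on the subject's meshes. The parameterization and the function signature are hypothetical illustrations, not the published algorithm.

```python
import numpy as np

def map_muscle_point(bary, tri_skin, tri_skull, depth=0.5):
    """Place one muscle attachment point on an individualized head.

    `bary` are barycentric coordinates (3,) in a corresponding triangle
    of the generic model; `tri_skin` and `tri_skull` are the (3, 3)
    vertex arrays of that triangle on the subject's skin and skull
    meshes; `depth` in [0, 1] interpolates between the two layers.
    All names and the depth parameterization are assumptions.
    """
    p_skin = bary @ np.asarray(tri_skin)    # point on the skin surface
    p_skull = bary @ np.asarray(tri_skull)  # corresponding skull point
    return (1.0 - depth) * p_skin + depth * p_skull
```

    Because the coordinates are defined once on the generic model, every muscle transfers to a new subject automatically, with no per-subject manual placement.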

    Facial animation
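
    The sketch below shows the simplest semi-implicit (symplectic) Euler step for skin dynamics of the form m x'' = f(x) - c v: velocities are updated first, and positions are then advanced with the new velocities, which is markedly more stable than explicit Euler on stiff spring networks. Our actual integrator may linearize and solve a system per step; this minimal variant, with assumed names and constants, is for illustration only.

```python
import numpy as np

def step(x, v, masses, force_fn, damping=0.5, dt=0.01):
    """One semi-implicit (symplectic) Euler step of the skin dynamics.

    `x` and `v` are (N, 3) position and velocity arrays, `masses` is
    (N,), and `force_fn(x)` returns the (N, 3) net spring and muscle
    forces. Updating v before x gives the scheme its extra stability.
    """
    f = force_fn(x) - damping * v          # internal forces plus damping
    v = v + dt * f / masses[:, None]       # velocity update
    x = x + dt * v                         # position update uses the NEW v
    return x, v
```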

    Typical facial expressions simulated on the model.

    Dynamic expression simulations compared with the actual expressions.
Copyright 2005-2013, Yu Zhang.
This material may not be published, modified or otherwise redistributed in whole or part without prior approval.
