Instant Facial Modeling and Animation of Living Humans
Since the human face and its expressions are central to human interaction, both in interpersonal communication and in human-computer interfaces, realistic facial modeling and animation are essential for applications such as teleconferencing, man-machine interfaces, realistic avatars in virtual reality, and surgical facial planning. Modeling a specific person's face with a realistic, natural-looking appearance calls for models based on real measurements of the structure of the human face, as well as facial features such as color, shape, and size. To date, the most accurate modeling has been achieved with range scanning technology. An automated laser range (LR) scanner can digitize on the order of 10^5 3D points from a solid object, such as a person's head and shoulders, within a second, allowing detailed facial geometry and the corresponding texture image to be captured quickly. From this highly accurate range and color data, a realistic individualized face model with structured information can be developed for further animation.
The goal of our work is realistic facial expression synthesis of a specific person, grounded in anatomy and executing at an interactive rate. For this purpose we developed a new individualized face model that conforms to human anatomy. The model has a hierarchical structure, incorporating a physically based approximation of facial skin, a set of anatomically motivated facial muscle actuators, and the underlying skull. We start from a highly accurate facial mesh reconstructed from measurements of the individual: using the laser range data obtained by scanning the subject, a mesh precisely representing the subject's facial geometry is reconstructed in a series of preprocessing steps. Multiple reflectance images captured from different viewpoints, each containing color information for the visible facial regions, are automatically blended to perform view-based texture mapping. The resulting 3D model realistically portrays the person's facial geometry and texture. On top of the reduced facial mesh, we develop a multi-layer mass-spring-damper (MSD) skin model that dynamically simulates the nonhomogeneous behavior of real skin, accounting for its nonlinear stress-strain relationship and for the near-incompressibility of soft tissue due to its liquid content. Our 3D facial model also incorporates a skull structure, which extends the scope of facial motion and facilitates facial muscle definition. To construct the facial muscles efficiently, we developed a muscle mapping approach that automatically places each muscle at its anatomically correct position between the skin and skull layers. When the muscles contract, the deformation of the facial skin is computed by solving the underlying dynamic equation; a semi-implicit integration method calculates the relaxation.
The dynamic facial animation algorithm runs at an interactive rate and generates flexible facial expressions. Using our system, an individualized face can be brought to life within minutes.
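To make the semi-implicit time stepping concrete, here is a minimal sketch of one symplectic-Euler step for a mass-spring-damper node system. This is an illustration of the general technique, not the paper's actual implementation; the function signature, parameters, and uniform stiffness are assumptions made for the sketch.

```python
import numpy as np

def semi_implicit_step(x, v, m, springs, rest, k, c, f_ext, dt):
    """One semi-implicit (symplectic) Euler step for a mass-spring-damper mesh.

    Illustrative sketch only; all names/parameters are assumptions.
    x, v    : (n, 3) node positions and velocities
    m       : (n,) node masses
    springs : list of (i, j) vertex index pairs
    rest    : (len(springs),) spring rest lengths
    k, c    : spring stiffness and damping coefficients
    f_ext   : (n, 3) external forces (e.g. muscle pull)
    """
    f = f_ext.copy()
    for s, (i, j) in enumerate(springs):
        d = x[j] - x[i]
        L = np.linalg.norm(d)
        if L < 1e-12:
            continue  # degenerate spring: skip to avoid division by zero
        u = d / L
        # Hookean spring force plus damping along the spring axis
        fs = (k * (L - rest[s]) + c * np.dot(v[j] - v[i], u)) * u
        f[i] += fs
        f[j] -= fs
    # Semi-implicit Euler: update velocities first, then positions with
    # the NEW velocities -- this is what makes the scheme symplectic and
    # more stable than explicit Euler at the same step size.
    v = v + dt * f / m[:, None]
    x = x + dt * v
    return x, v
```

A stretched spring pulled back toward its rest length after one step shows the expected behavior: the two endpoints move toward each other.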
System overview

Flow diagram for face geometry reconstruction from range scans

View-dependent texture blending and mapping
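One common way to realize view-based blending, sketched below under assumptions (the exact weighting used in the system is not specified here), is to weight each camera's reflectance sample by how directly that camera faces the surface, so views seeing a region head-on dominate and back-facing views contribute nothing.

```python
import numpy as np

def blend_weights(normal, view_dirs, power=2.0):
    """Per-vertex blending weights for view-based texture mapping (sketch).

    normal    : (3,) unit surface normal at the vertex
    view_dirs : (m, 3) unit directions from the vertex toward each camera
    power     : falloff exponent (hypothetical parameter)
    Returns (m,) weights summing to 1; back-facing views get weight 0.
    """
    cos = np.clip(view_dirs @ normal, 0.0, None)  # clamp back-facing views to 0
    w = cos ** power
    total = w.sum()
    return w / total if total > 0 else w

def blend_colors(weights, colors):
    """Weighted average of per-view color samples: (m,) x (m, 3) -> (3,)."""
    return weights @ colors
```

For a vertex seen head-on by one camera and edge-on by another, all weight goes to the head-on view, which is the behavior that hides seams between overlapping reflectance images.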
Modeling anatomical structure

A muscle mapping approach for efficient facial muscle construction
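One plausible reading of the muscle mapping idea, sketched here purely for illustration (the helper name, template space, and per-vertex skull-offset field are all hypothetical, not the paper's method), is to snap a muscle attachment defined on a generic face template to the nearest vertex of the individual's skin mesh, then push it inward along the skin normal so it lands between the skin and skull layers.

```python
import numpy as np

def map_muscle_point(template_pt, verts, normals, skull_offset, depth=0.5):
    """Map a template-space muscle attachment onto an individual's mesh (sketch).

    template_pt  : (3,) attachment position on a generic face template
    verts        : (n, 3) vertices of the individual's skin mesh
    normals      : (n, 3) outward unit normals at those vertices
    skull_offset : (n,) skin-to-skull distance per vertex (hypothetical field)
    depth        : 0 = on the skin surface, 1 = on the skull surface
    """
    # Snap to the nearest skin vertex, then move inward along its normal
    i = np.argmin(np.linalg.norm(verts - template_pt, axis=1))
    return verts[i] - depth * skull_offset[i] * normals[i]
```

Placing both attachment points of each muscle this way keeps every muscle between the two anatomical layers without manual positioning on each new scanned face.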
Facial animation

Typical facial expressions simulated on the model.
Dynamic expression simulation compared with the actual expressions.
Papers:
- Yu Zhang, Edmond C. Prakash and Eric Sung. "Modeling and animation of individualized faces for 3D facial expression synthesis". International Journal of Imaging Systems and Technology, 13(1): 42-64, 2003.
- Yu Zhang, Edmond C. Prakash and Eric Sung. "Hierarchical facial data modeling for visual expression synthesis". Journal of Visualization, 6(3): 313-320, 2003.
- Yu Zhang, Edmond C. Prakash and Eric Sung. "Constructing a realistic face model of an individual for expression animation". International Journal of Information Technology, 8(2): 10-25, Sept. 2002.
- Alvin W. K. Soh, Yu Zhang, Edmond C. Prakash, Tony K. Y. Chan and Eric Sung. "Texture mapping of 3D human face for virtual reality environments". International Journal of Information Technology, 8(2): 54-65, Sept. 2002.
- Yu Zhang, Edmond C. Prakash and Eric Sung. "Instant facial modeling and animation of living humans". Proc. 7th International Fall Workshop on Vision, Modeling and Visualization (VMV2002), pp. 479-486, Erlangen, Germany, Nov. 2002.
- Yu Zhang, Edmond C. Prakash and Eric Sung. "Hierarchical modeling of a personalized face for realistic expression animation". Proc. IEEE International Conference on Multimedia and Expo 2002 (ICME2002), IEEE Computer Society Press, pp. 457-460, Lausanne, Switzerland, Aug. 2002.
Copyright 2005-2013, Yu Zhang.
This material may not be published, modified or otherwise redistributed in whole or part without prior approval.