Current state-of-the-art facial animation techniques use a 3D facial model discretized at a fixed spatial resolution, chosen as a compromise between the amount of detail represented and the computational complexity. However, when the surface is animated, such a mesh does not adapt to the deformation: bends and folds can only appear along existing mesh edges, so the model must carry enough detail everywhere to accommodate any possible deformation. In a fixed-resolution facial model, the discretization rate has to be defined by the animator a priori. If too coarse an approximation is employed, an incorrect animation is generated, and the animator often has no way of knowing the minimum resolution needed. Conversely, if too fine a spring mesh is adopted, a more precise result may be obtained, but at the expense of increased computation. Consequently, the animator either has to tinker with the model or endure inaccurate results, thereby negating the advantages of dynamic simulation.

A promising way to optimize the computation, maximizing realism while guaranteeing efficiency, is to adaptively refine the model according to the complexity of the occurring motion. To ensure a precise geometrical description of the deformation, the spatial sampling (and hence the accuracy) is adapted so that the computational load is concentrated in areas undergoing significant local deformation, while the resolution of stable regions remains unchanged. This saves a large amount of computation while keeping the facial deformation realistic within a given accuracy threshold.

To generate realistic facial expressions at a reduced computational cost, we propose a technique for adaptively refining the mass-spring-damper (MSD) facial model during dynamic simulation. The refinement is based on a nonlinear subdivision scheme that smooths an arbitrary polygon efficiently by local interpolation, requiring only a minimum of geometric information. To refine the facial surface, we define an initial facial approximation and a measure of the accuracy required. Guided by a predefined error estimator, the facial animation system automatically adapts the local resolution wherever potential inaccuracies are detected; several levels of resolution are provided so that only the relevant levels of detail are simulated, reducing the overall complexity. During animation, the facial model is dynamically refined based on local error measurement, and the refined spring network is then used for further simulation. This mechanism produces more pleasing animation results at a reduced computational cost.
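The Python sketch below illustrates the general idea of such an error-driven refinement loop; it is not the implementation from the papers. The Node and Edge classes, the strain-based error estimator, REFINE_THRESHOLD, the spring constants, and the explicit Euler integrator are all illustrative assumptions, and simple midpoint insertion stands in for the nonlinear interpolatory subdivision scheme described in the publications listed below.

```python
# Minimal sketch of error-driven adaptive refinement of a mass-spring-damper
# surface patch. All constants and class names are illustrative assumptions.
import numpy as np

REFINE_THRESHOLD = 0.15   # assumed relative-strain tolerance per edge
DT = 0.01                 # integration time step

class Node:
    def __init__(self, pos, mass=1.0):
        self.pos = np.asarray(pos, dtype=float)
        self.vel = np.zeros(3)
        self.mass = mass
        self.force = np.zeros(3)

class Edge:
    """A spring-damper element between two nodes."""
    def __init__(self, a, b, k=80.0, c=0.5, level=0):
        self.a, self.b = a, b
        self.k, self.c = k, c
        self.level = level                          # subdivision depth
        self.rest = np.linalg.norm(a.pos - b.pos)   # rest length

    def strain(self):
        """Relative elongation: the local error estimator used in this sketch."""
        length = np.linalg.norm(self.a.pos - self.b.pos)
        return abs(length - self.rest) / self.rest

    def apply_forces(self):
        d = self.b.pos - self.a.pos
        length = np.linalg.norm(d)
        if length < 1e-12:
            return
        direction = d / length
        rel_vel = np.dot(self.b.vel - self.a.vel, direction)
        f = (self.k * (length - self.rest) + self.c * rel_vel) * direction
        self.a.force += f
        self.b.force -= f

def integrate(nodes, dt=DT):
    """Explicit Euler step (a stand-in for whatever integrator is actually used)."""
    for n in nodes:
        n.vel += dt * n.force / n.mass
        n.pos += dt * n.vel
        n.force[:] = 0.0

def refine(nodes, edges, max_level=3):
    """Split every edge whose local error exceeds the tolerance.
    Midpoint insertion stands in for the nonlinear subdivision scheme."""
    new_edges = []
    for e in edges:
        if e.level < max_level and e.strain() > REFINE_THRESHOLD:
            mid = Node(0.5 * (e.a.pos + e.b.pos),
                       mass=0.5 * (e.a.mass + e.b.mass))
            mid.vel = 0.5 * (e.a.vel + e.b.vel)
            nodes.append(mid)
            # Halving the rest length doubles the stiffness per segment so the
            # series combination behaves like the original spring.
            left = Edge(e.a, mid, k=2 * e.k, c=e.c, level=e.level + 1)
            right = Edge(mid, e.b, k=2 * e.k, c=e.c, level=e.level + 1)
            left.rest = right.rest = 0.5 * e.rest
            new_edges.extend([left, right])
        else:
            new_edges.append(e)
    return new_edges

def simulate(nodes, edges, steps=100):
    for _ in range(steps):
        for e in edges:
            e.apply_forces()
        integrate(nodes)
        edges = refine(nodes, edges)   # adapt resolution where error is high
    return edges

if __name__ == "__main__":
    # Tiny demo: stretch one corner of a triangle and let the mesh adapt.
    n = [Node([0, 0, 0]), Node([1, 0, 0]), Node([0, 1, 0])]
    e = [Edge(n[0], n[1]), Edge(n[1], n[2]), Edge(n[2], n[0])]
    n[1].pos += np.array([0.4, 0.0, 0.0])   # impose a local deformation
    e = simulate(n, e, steps=50)
    print(len(n), "nodes,", len(e), "springs after adaptation")
```

Each edge carries a subdivision level so that refinement stops after a few generations, mirroring the idea of providing a small number of resolution levels and simulating only the relevant levels of detail.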
Global refinement of simple geometries using a nonlinear interpolation scheme.
Simulated facial expressions with face mesh adaptive refinement.
Snapshots of facial animation.
Adaptive refinement vs coarse mesh in facial animation.
Papers:
Yu Zhang, Edmond C. Prakash and Eric Sung. "A new physical model with multi-layer architecture for facial expression animation using dynamic adaptive mesh". IEEE Transactions on Visualization and Computer Graphics, 10(3):339-352, May 2004.
Yu Zhang, Edmond C. Prakash and Eric Sung. "A physically-based model with adaptive refinement for facial animation". Proc. IEEE Computer Animation 2001 (CA2001), IEEE Computer Society Press, pp. 28-39, Seoul, Korea, Nov. 2001.
Yu Zhang, Edmond C. Prakash and Eric Sung. "Adaptive simulation of facial expressions". IEEE International Conference on Multimedia and Expo 2001 (ICME2001), IEEE Computer Society Press, pp. 1072-1075, Tokyo, Japan, Aug. 2001.
Copyright 2005-2013, Yu Zhang. This material may not be published, modified or otherwise redistributed in whole or part without prior approval.