Exploring Face Shape Space


    For some applications, such as face morphing, we are more concerned with applying digital geometry processing (DGP) algorithms to a group of facial models than to a single one. However, given a group of face scans, even simple operations such as computing their average shape or the norm of their differences are not trivial, mainly because the meshes describing the geometry generally differ in sampling pattern and connectivity. DGP algorithms involving multiple models require a consistent parameterization and a common sampling pattern. For a set of face scans, the parameterizations are called consistent if they all use the same base domain and if they all respect predefined facial features such as the eyes, nose, and mouth. We use our SDA method to create a consistent surface parameterization of the range scans, which gives immediate point correspondences between all the models via a single prototype layout and also allows us to remesh each model with the same connectivity via recursive regular refinement of the low triangle-count generic model. As a result, every vertex in one mesh has a unique corresponding vertex in every other mesh. This in turn enables a series of applications ranging from shape morphing to the transfer of attributes, such as textures, from one model to a whole set of models. It also forms the basis for DGP algorithms that involve many models simultaneously, such as principal component analysis. We sketch a few exemplary applications to demonstrate the versatility of the representation provided by our SDA method.
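
    With this common connectivity, multi-model operations reduce to per-vertex arithmetic. The following minimal sketch (Python/NumPy; the array shapes and file names are assumptions made for illustration) computes the average shape of a set of consistently remeshed faces and the norm of the difference between two of them.

        import numpy as np

        # Assumed layout: each consistently remeshed face is an (n_vertices, 3)
        # array, and vertex i corresponds to vertex i in every other face.
        faces = [np.load(f"face_{k}.npy") for k in range(10)]  # hypothetical files

        # Average shape: the per-vertex mean over the whole set.
        average_shape = np.mean(np.stack(faces), axis=0)

        # Shape distance between two faces: norm of the per-vertex difference.
        difference_norm = np.linalg.norm(faces[0] - faces[1])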
    Texture Transfer
    Given the vertex-wise correspondence between two meshes, it is trivial to transfer texture maps between any pair of meshes through direct parametric mapping.
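
    Since every remeshed face shares the prototype's UV layout, a texture rendered on one face can be rendered on any other without resampling. A minimal sketch, assuming hypothetical array shapes and a plain OBJ-style export (the referenced .mtl file, which names the source texture image, is assumed to exist):

        import numpy as np

        def write_textured_obj(path, vertices, shared_uvs, triangles, material):
            # vertices: (n, 3) target geometry; shared_uvs: (n, 2) prototype UV
            # layout common to all faces; triangles: (m, 3) shared connectivity.
            with open(path, "w") as f:
                f.write(f"mtllib {material}.mtl\n")  # material names the source texture
                f.write(f"usemtl {material}\n")
                for v in vertices:
                    f.write(f"v {v[0]} {v[1]} {v[2]}\n")
                for uv in shared_uvs:
                    f.write(f"vt {uv[0]} {uv[1]}\n")
                for t in triangles:
                    a, b, c = t + 1                  # OBJ indices are 1-based
                    f.write(f"f {a}/{a} {b}/{b} {c}/{c}\n")

    Exporting face B's geometry with face A's material produces the off-diagonal renderings in the figure below.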

    In each row, the same texture is applied to three different meshes; in each column, the same mesh carries three different textures. In the 3×3 matrix of renderings, the models along the diagonal have their original textures.
    3D Face Morphing
    The consistent parameterization enables us to morph between any two reconstructed facial models by taking linear combinations of corresponding vertices. Since every face is generated from the same prototype model, all faces share the same texture-coordinate layout, which also enables 2D metamorphosis of the texture images.
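
    A minimal sketch of the geometric part (Python/NumPy, with assumed array shapes): since vertex i corresponds across all faces, a morph is just a per-vertex linear blend, and the texture images can be cross-dissolved in the shared UV space in the same way.

        import numpy as np

        def morph(vertices_a, vertices_b, t):
            # vertices_a, vertices_b: (n, 3) arrays of corresponding vertices
            # from two consistently remeshed faces; t in [0, 1].
            return (1.0 - t) * vertices_a + t * vertices_b

        # Intermediate frames for an animation; the shared connectivity and
        # UV layout are reused unchanged for every frame.
        # frames = [morph(va, vb, t) for t in np.linspace(0.0, 1.0, 30)]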

    Morphing between male and female faces.
    Principal Component Analysis
    For a successful application of PCA, one needs the same number of 3D vertex positions, in correspondence, across the various faces in the dataset. Our SDA method generates the necessary point-to-point correspondence across faces.
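
    A minimal PCA sketch under these assumptions (Python/NumPy): each face is flattened into a 3n-dimensional shape vector, and the principal directions are read off the SVD of the centered data matrix. Each direction reshapes back to an (n_vertices, 3) displacement field over the common connectivity, which is what the figure below visualizes.

        import numpy as np

        def face_pca(faces, n_components=3):
            # faces: (n_faces, n_vertices, 3) corresponding vertex positions.
            data = np.stack([f.ravel() for f in faces])  # one shape vector per face
            mean_shape = data.mean(axis=0)
            # Rows of vt are the principal directions of face variation.
            _, sigma, vt = np.linalg.svd(data - mean_shape, full_matrices=False)
            return mean_shape, vt[:n_components], sigma[:n_components]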

    Visualization of the first three principal component meshes for the faces in our database.
    Face Attribute Control
    PCA extracts the most salient directions of human face variation from the dataset, but it does not provide a direct way to intuitively control the facial attributes used in everyday language, such as overall face shape (e.g., round or square), fullness of the face, sharpness of the chin, and gender. We propose a method for mapping such facial attribute controls to the parametric PCA mesh space.
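
    The mapping itself is not spelled out here; one common construction, shown purely as an illustrative assumption rather than our actual method, fits a linear model from PCA coefficients to per-face attribute ratings and then edits a face by moving its coefficients along the fitted direction.

        import numpy as np

        def fit_attribute_direction(coefficients, ratings):
            # coefficients: (n_faces, n_components) PCA coefficients per face;
            # ratings: (n_faces,) scores for one attribute (e.g., chin sharpness).
            # Least-squares fit of ratings as a linear function of coefficients.
            A = np.hstack([coefficients, np.ones((len(ratings), 1))])
            solution, *_ = np.linalg.lstsq(A, ratings, rcond=None)
            return solution[:-1]                     # drop the bias term

        def adjust_attribute(coeffs, direction, amount):
            # Shift a face's PCA coefficients along the attribute direction.
            return coeffs + amount * direction / np.linalg.norm(direction)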
    Computer-Generated Face Recognition Database
    The three main variations that challenge face recognition are pose, illumination, and facial expression. Instead of building a complicated and costly studio for data collection, we can use the reconstructed 3D mesh models to build a computer-generated database. For pose, we rotate the mesh by the desired angle and render it with a 2D projection, which guarantees the exact rotation angle of the face (see the figure below). Another advantage of using 3D mesh data is the freedom to control the environment: for illumination, the mesh models allow many virtual lights to be placed so as to produce a wide range of illumination conditions. For facial expressions, the approach falls back on the individualized model, which is endowed with trained expression sets and can therefore generate realistic expressions.
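
    A minimal sketch of the pose part (Python/NumPy; the orthographic projection is an assumption made for brevity): the vertices are rotated about the vertical axis by an exact, known angle and then projected onto the image plane.

        import numpy as np

        def pose_projection(vertices, yaw_degrees):
            # vertices: (n, 3) facial mesh. Rotate about the vertical (y) axis
            # by an exactly known angle, then project orthographically to 2D.
            theta = np.radians(yaw_degrees)
            rotation = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                                 [ 0.0,           1.0, 0.0          ],
                                 [-np.sin(theta), 0.0, np.cos(theta)]])
            return (vertices @ rotation.T)[:, :2]    # drop depth after rotating

        # Exactly controlled poses, e.g. yaw angles from -45 to +45 degrees:
        # views = [pose_projection(v, a) for a in range(-45, 46, 15)]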

    Rendered images from a reconstructed 3D facial model in various poses.


Copyright 2005 to 2013, Yu Zhang.
This material may not be published, modified or otherwise redistributed in whole or part without prior approval.
