Full-head Texture Synthesis for Human Head Cloning


    With a significant increase in the quality and availability of 3D capture methods, a common approach to creating face models of real humans uses laser range scanners to accurately acquire both the face geometry and texture. One limitation of scanner technology, however, is that the complete head geometry cannot be easily captured, because dark hair absorbs the laser radiation. The top and back of the head are generally not digitized unless the hair is artificially colored white or the subject wears a light-colored cap, but that destroys the hair texture. In most cases, only the frontal face can be properly textured, and no automatic mechanism is provided to generate a full-head texture from the acquired single frontal-face image for realistic rendering of a "cloned" head.
    We present a technique to efficiently generate a parameterized full-head texture for modeling heads with a high degree of realism. We start with a generic head model of known topology and deform it to fit the face scan of the particular human subject using a volume morphing approach. The facial texture associated with the scanned geometry is then transferred to the original undeformed generic mesh. We automatically construct a parameterization of the 3D head mesh over a 2D texture domain, which gives immediate correspondence between all the scanned textures via a single prototype layout. After performing a vertex-to-image binding for the vertices of the head mesh, we generate a cylindrical full-head texture from the parameterized texture of the face area. We also address the creation of individual textures for the ears. Apart from an initial feature-point selection for the texturing, our method works without any user interaction. Our main contribution is a technique that uses a frontal-face image of the scanned data to generate a full-head texture for photorealistic rendering and morphing with minimal manual intervention. This includes new algorithms to automatically parameterize the textures of a set of unregistered face scans to establish mapping correspondence, to robustly produce individual full-head skin textures, and to efficiently create ear textures from a single input image.
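    The cylindrical full-head texture mentioned above relies on mapping each mesh vertex onto a cylinder wrapped around the head. The sketch below illustrates this standard projection; the function name, axis convention, and normalization are illustrative assumptions, not the paper's actual implementation.

```python
import math

def cylindrical_uv(x, y, z, y_min, y_max):
    """Map a 3D head-mesh vertex to (u, v) texture coordinates on a
    cylinder whose axis is the vertical (y) axis through the head center.
    Illustrative sketch only; the paper's parameterization may differ."""
    # Angle around the vertical axis gives the horizontal coordinate.
    theta = math.atan2(z, x)               # range (-pi, pi]
    u = (theta + math.pi) / (2 * math.pi)  # normalize to [0, 1)
    # Height along the axis gives the vertical coordinate.
    v = (y - y_min) / (y_max - y_min)
    return u, v
```

    In practice, the seam where u wraps from 1 back to 0 is placed at the back of the head, so the frontal face region maps to a contiguous area of the texture domain.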