Fully Automatic Facial Deformation Transfer

Greg Maguire, Shaojun Bian, Anzong Zheng, Lin Gao, Willem Kokke, Jon Macey, Lihua You, Jianjun Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

Facial animation is a serious and ongoing challenge for the computer graphics industry. Because diverse and complex emotions must be expressed through different facial deformations and animations, copying facial deformations from an existing character to a new one is widely needed in both industry and academia, to reduce the time-consuming and repetitive manual modelling work of creating 3D shape sequences for every new character. However, transferring realistic facial animations between two 3D models remains limited and inconvenient for general use: most modern deformation transfer methods require correspondence mappings, which are tedious to obtain. In this paper, we present a fast and automatic approach to transferring deformations between facial mesh models by obtaining 3D point-wise correspondences automatically. The key idea is that correspondences between different facial meshes can be estimated with a robust facial landmark detection method by projecting the 3D model onto a 2D image. Experiments show that, without any manual labelling effort, our method detects reliable correspondences faster and more simply than the state-of-the-art automatic deformation transfer method on facial models.
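The correspondence step the abstract describes can be illustrated with a short sketch. The following Python is not the authors' implementation but a minimal reconstruction under stated assumptions: the mesh is assumed to have already been rendered with an orthographic camera into `image`, with `vertices` transformed into that same pixel coordinate frame; dlib's 68-point predictor stands in for the unspecified "robust facial landmark detection method", and the model file path is an assumption.

```python
# Minimal sketch (assumptions noted above, not the paper's code):
# orthographically project mesh vertices onto the image plane, detect
# 2D facial landmarks on the rendered image, and map each landmark back
# to the mesh vertex whose projection lies closest to it.

import numpy as np
import dlib


def orthographic_project(vertices: np.ndarray) -> np.ndarray:
    """Orthographic projection: drop the depth axis, (N, 3) -> (N, 2).
    Assumes vertices are already in the renderer's pixel coordinates."""
    return vertices[:, :2]


def landmark_vertex_ids(vertices: np.ndarray,
                        image: np.ndarray,
                        predictor_path: str = "shape_predictor_68_face_landmarks.dat"):
    """Return one mesh vertex index per detected 2D facial landmark."""
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(predictor_path)  # assumed local model file

    faces = detector(image, 1)            # detect the face in the rendering
    shape = predictor(image, faces[0])    # 68 landmark points on that face
    landmarks = np.array([[p.x, p.y] for p in shape.parts()], dtype=float)

    projected = orthographic_project(vertices)
    # Distance from every landmark to every projected vertex; for each
    # landmark, keep the nearest vertex as its 3D correspondence.
    dists = np.linalg.norm(projected[None, :, :] - landmarks[:, None, :], axis=2)
    return dists.argmin(axis=1)
```

Under this reading, the resulting landmark-to-vertex pairs would seed the point-wise correspondences that the deformation transfer then consumes, replacing the manual correspondence labelling the abstract identifies as the bottleneck.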
Original language: English
Pages (from-to): 27
Number of pages: 10
Journal: Symmetry
Volume: 12
Issue number: 1
Publication status: Published (in print/issue) - 21 Dec 2019

Keywords

  • face landmark detection
  • orthographic projection
  • point-wise correspondences
  • automatic deformation transfer
