The facial movements of a character are commonly animated using a blendshape-based rig, which requires a large dataset and a significant amount of manual work to correct the artefacts intrinsically caused by linear interpolation between multiple blendshape targets. Another common approach uses a bone-based rig, which can add a large degree of global nonlinearity and has fewer intrinsic artefacts, but lacks the high fidelity of scan-based blendshape targets. To address this trade-off, we combine a) the nonlinearity captured by a 4D scanner and b) high-fidelity captured scans into c) a bone-based rig. Our results show that reintroducing nonlinearity into a face rig improves accuracy and fidelity in a measurable and observable way, and that our method significantly reduces manual labour and scales to other characters.
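For context, the linear interpolation the abstract refers to can be sketched as follows. This is a generic, minimal illustration of standard blendshape evaluation (a weighted sum of per-vertex deltas over a neutral mesh), not the method proposed in this work; the array shapes and the toy "smile" target are assumptions for the example.

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Standard linear blendshape evaluation:
    result = neutral + sum_i (w_i * delta_i),
    where each delta_i = target_i - neutral."""
    out = neutral.copy()
    for w, d in zip(weights, deltas):
        out += w * d  # purely linear in the weights
    return out

# Toy 2-vertex "face" with a single hypothetical "smile" delta.
neutral = np.zeros((2, 3))
smile = np.array([[0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0]])

# At weight 0.5, every vertex lies exactly at the midpoint of its
# linear path; intermediate poses cannot curve, which is the source
# of the interpolation artefacts mentioned above.
half = blend(neutral, [smile], [0.5])
```

Because each vertex travels in a straight line between targets, rotational motions (e.g. jaw opening) are only approximated, which is why corrective shapes or nonlinear mechanisms such as bones are needed.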
Publication status: Published (in print/issue) - 27 Sept 2019