Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors

TVCG 2023
*: equal contribution, †: corresponding author
1Shandong University, 2Peking University, 3Max Planck Institute for Informatics

Abstract

We propose a new method for learning a generalized animatable neural human representation from a sparse set of multi-view imagery of multiple persons. The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and to further animate them with the user's pose control. While existing methods can either generalize to new persons or synthesize animations with user control, none of them can achieve both at the same time. We attribute this capability to the use of a 3D proxy for a shared multi-person human model, and to warping the spaces of different poses into a shared canonical pose space, in which we learn a neural field and predict the person- and pose-dependent deformations, as well as appearance, from features extracted from the input images. To cope with the large variations in body shapes, poses, and clothing deformations, we design our neural human model with disentangled geometry and appearance. Furthermore, we utilize the image features both at the spatial point and on the surface points of the 3D proxy for predicting person- and pose-dependent properties. Experiments show that our method significantly outperforms the state of the art on both tasks.
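To make the warping step concrete: mapping a sample point from a posed (observation) space back to the shared canonical pose space is commonly done with inverse linear blend skinning relative to a 3D body proxy. The sketch below is a minimal, hypothetical NumPy illustration of that standard component only; the paper's learned, person- and pose-dependent residual deformation and the neural field itself are omitted, and all function and argument names are our own.

```python
import numpy as np

def lbs_warp(points, weights, transforms):
    """Warp posed-space points into the shared canonical pose space via
    inverse linear blend skinning (LBS). This is a generic sketch, not the
    paper's full deformation model, which adds learned residuals.

    points:     (N, 3) sample points in the posed (observation) space
    weights:    (N, J) per-point skinning weights over J joints (rows sum to 1)
    transforms: (J, 4, 4) per-joint canonical-to-posed rigid transforms
    """
    # Blend per-joint transforms with skinning weights: one (4, 4) per point
    blended = np.einsum('nj,jab->nab', weights, transforms)
    # Homogenize points, then apply the inverse blended transform to map
    # each posed point back to its canonical-space location
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    canonical = np.einsum('nab,nb->na', np.linalg.inv(blended), pts_h)
    return canonical[:, :3]
```

In a canonical-space pipeline like the one described above, the neural field would then be queried at the returned canonical coordinates, conditioned on the per-person image features.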

Video

BibTeX

@article{Gao2022neuralnovelactor,
        url = {https://arxiv.org/abs/2208.11905},
        author = {Wang, Yiming and Gao, Qingzhe and Liu, Libin and Liu, Lingjie and Theobalt, Christian and Chen, Baoquan},
        title = {Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors},
        journal = {IEEE Transactions on Visualization and Computer Graphics},
        year = {2023},
      }