Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors.

*: equal contribution, †: corresponding author
1Peking University, 2Shandong University, 3Max Planck Institute for Informatics


We propose a new method for learning a generalized animatable neural human representation from a sparse set of multi-view imagery of multiple persons. The learned representation can be used to synthesize novel-view images of an arbitrary person from a sparse set of cameras, and further to animate them under user pose control. While existing methods can either generalize to new persons or synthesize animations with user control, none can achieve both at the same time. We attribute this capability to the employment of a 3D proxy for a shared multi-person human model, together with the warping of the spaces of different poses to a shared canonical pose space, in which we learn a neural field and predict person- and pose-dependent deformations, as well as appearance, from features extracted from the input images. To cope with the large variations in body shapes, poses, and clothing deformations, we design our neural human model with disentangled geometry and appearance. Furthermore, we utilize image features both at the spatial point and on the surface points of the 3D proxy to predict person- and pose-dependent properties. Experiments show that our method significantly outperforms the state of the art on both tasks.
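The core idea of warping points from the space of an observed pose back to a shared canonical pose space can be sketched with inverse linear blend skinning against a skeletal 3D proxy. The snippet below is a minimal illustration of that general technique, not the paper's exact formulation; the function name, argument shapes, and the use of plain LBS are all assumptions for exposition.

```python
import numpy as np

def inverse_lbs(x_posed, bone_transforms, skinning_weights):
    """Warp a posed-space point back to canonical space via inverse
    linear blend skinning (illustrative sketch, not the paper's method).

    x_posed:          (3,)   query point in the posed space
    bone_transforms:  (B,4,4) canonical-to-posed rigid transform per bone
    skinning_weights: (B,)   blend weights for the query point, summing to 1
    """
    # Blend the per-bone 4x4 transforms with the skinning weights.
    blended = np.tensordot(skinning_weights, bone_transforms, axes=([0], [0]))
    # Invert the blended transform and apply it to the homogeneous point.
    x_h = np.append(x_posed, 1.0)
    x_canonical = np.linalg.inv(blended) @ x_h
    return x_canonical[:3]
```

In a full pipeline, the canonical-space point returned here would then be fed to the neural field (together with image features) to predict the residual, person- and pose-dependent deformation and appearance.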



@misc{neuralnovelactor2022,
        url = {},
        author = {Wang, Yiming and Gao, Qingzhe and Liu, Libin and Liu, Lingjie and Theobalt, Christian and Chen, Baoquan},
        title = {Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors},
        publisher = {arXiv},
        year = {2022},
}