
Abstract

We propose a deep video-realistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance learned in a new weakly supervised way from multi-view imagery. In contrast to previous work, our controllable 3D character displays dynamics, e.g., the swing of the skirt, dependent on skeletal body motion in an efficient data-driven way, without requiring complex physics simulation. Our character model also features a learned dynamic texture model that accounts for photo-realistic motion-dependent appearance details as well as view-dependent lighting effects. During training, we do not need to resort to difficult dynamic 3D capture of the human; instead, we can train our model entirely from multi-view video in a weakly supervised manner. To this end, we propose a parametric and differentiable character representation which allows us to model coarse and fine dynamic deformations, e.g., garment wrinkles, as explicit space-time coherent mesh geometry that is augmented with high-quality dynamic textures dependent on motion and viewpoint. As input to the model, only an arbitrary 3D skeleton motion is required, making it directly compatible with the established 3D animation pipeline. We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing, including dynamics, and a neural generative dynamic texture model to create corresponding dynamic texture maps. We show that by merely providing new skeletal motions, our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches, and even in real time.
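As a rough illustration of the deformation part of this pipeline (a minimal sketch only, not the authors' implementation; the PyTorch framing, layer sizes, and the 72-dimensional motion encoding are assumptions), a graph convolution over the template-mesh connectivity that maps a skeletal motion code to per-vertex deformation offsets could look as follows:

# Hypothetical sketch (not the authors' code): a minimal graph-convolutional
# network that predicts per-vertex 3D deformation offsets from a skeletal
# motion encoding, illustrating motion-dependent deformation learning on a
# template mesh.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One graph-convolution layer over the template-mesh connectivity."""

    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        # Row-normalized adjacency with self-loops of the template mesh.
        a = adjacency + torch.eye(adjacency.shape[0])
        self.register_buffer("a_norm", a / a.sum(dim=1, keepdim=True))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):  # x: (num_vertices, in_dim)
        return torch.relu(self.linear(self.a_norm @ x))


class DeformationNet(nn.Module):
    """Maps a skeletal-motion encoding to per-vertex 3D offsets."""

    def __init__(self, adjacency, motion_dim=72, hidden=64):
        super().__init__()
        self.num_vertices = adjacency.shape[0]
        self.motion_dim = motion_dim
        self.gc1 = GraphConv(motion_dim, hidden, adjacency)
        self.gc2 = GraphConv(hidden, hidden, adjacency)
        self.out = nn.Linear(hidden, 3)  # xyz offset per vertex

    def forward(self, motion):  # motion: (motion_dim,) pose/motion code
        # Broadcast the motion code to every vertex as its input feature.
        x = motion.expand(self.num_vertices, self.motion_dim)
        x = self.gc2(self.gc1(x))
        return self.out(x)  # (num_vertices, 3) deformation offsets


if __name__ == "__main__":
    # Toy example: 5 vertices connected in a ring, random motion vector.
    adj = torch.zeros(5, 5)
    for i in range(5):
        adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
    net = DeformationNet(adj)
    offsets = net(torch.randn(72))
    print(offsets.shape)  # torch.Size([5, 3])

In the spirit of the abstract, such offsets would be applied on top of the posed template mesh and learned from multi-view video rather than from captured 3D ground truth, which is what makes the training weakly supervised.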

Downloads


Dataset


Citation

BibTeX, 1 KB

@article{habermann2021,
	author = {Marc Habermann and Lingjie Liu and Weipeng Xu and Michael Zollhoefer and Gerard Pons-Moll and Christian Theobalt},
	title = {Real-time Deep Dynamic Characters},
	journal = {ACM Transactions on Graphics},
	month = {aug},
	volume = {40},
	number = {4},
	articleno = {94},
	year = {2021},
	publisher = {ACM}
}

Acknowledgments

The authors from MPII were supported by the ERC Consolidator Grant 4DRepLy (770784), the Deutsche Forschungsgemeinschaft (Project Nr. 409792180, Emmy Noether Programme, project: Real Virtual Humans), and a Lise Meitner Postdoctoral Fellowship.

Contact

For questions or clarifications, please get in touch with:
Marc Habermann
mhaberma@mpi-inf.mpg.de
