
Abstract

Rigging and skinning clothed human avatars is a challenging task and traditionally requires a lot of manual work and expertise. Recent methods addressing it either generalize across different characters or focus on capturing the dynamics of a single character observed under different pose configurations. However, the former methods typically predict solely static skinning weights, which perform poorly for highly articulated poses, and the latter ones either require dense 3D character scans in different poses or cannot generate an explicit mesh with vertex correspondence over time. To address these challenges, we propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights, which can be learned solely from multi-view video. To this end, we first acquire a rigged template, which is then statically skinned. Next, a coordinate-based MLP learns a skinning weights field parameterized over the position in a canonical pose space and the respective pose. Moreover, we introduce a pose- and view-dependent appearance field, which allows us to differentiably render and supervise the posed mesh using multi-view imagery. We show that our approach outperforms the state of the art while not relying on dense 4D scans.
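The core idea in the abstract is a coordinate-based MLP that maps a canonical vertex position and the current pose to per-bone skinning weights, which then pose the template via linear blend skinning. Below is a minimal sketch of that idea, assuming a PyTorch setup; the class name SkinningWeightField, the layer widths, the flattened pose vector, the softmax normalization, and the linear_blend_skinning helper are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed PyTorch setup) of a pose-dependent skinning
# weight field plus linear blend skinning. Network architecture and pose
# encoding are illustrative assumptions; the paper only states that a
# coordinate-based MLP maps canonical position and pose to skinning weights.

import torch
import torch.nn as nn


class SkinningWeightField(nn.Module):
    """Coordinate-based MLP: (canonical position, pose) -> per-bone weights."""

    def __init__(self, num_bones: int, pose_dim: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bones),
        )

    def forward(self, x_canonical: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # x_canonical: (V, 3) vertex positions in the canonical pose space
        # pose:        (pose_dim,) flattened pose parameters, shared by all vertices
        pose = pose.expand(x_canonical.shape[0], -1)
        logits = self.mlp(torch.cat([x_canonical, pose], dim=-1))
        # Softmax keeps the weights non-negative and summing to one per vertex
        # (one common choice; the normalization used in the paper is not stated).
        return torch.softmax(logits, dim=-1)


def linear_blend_skinning(x_canonical, weights, bone_transforms):
    """Pose canonical vertices with per-vertex, per-bone blend weights.

    x_canonical:     (V, 3) canonical vertex positions
    weights:         (V, B) skinning weights
    bone_transforms: (B, 4, 4) bone transformations (canonical -> posed)
    """
    V = x_canonical.shape[0]
    x_h = torch.cat([x_canonical, x_canonical.new_ones(V, 1)], dim=-1)  # (V, 4)
    # Blend the bone transforms per vertex, then apply them.
    blended = torch.einsum('vb,bij->vij', weights, bone_transforms)     # (V, 4, 4)
    posed = torch.einsum('vij,vj->vi', blended, x_h)                    # (V, 4)
    return posed[:, :3]


# Example usage with hypothetical dimensions (24 bones, 72 pose parameters):
field = SkinningWeightField(num_bones=24, pose_dim=72)
x_canonical = torch.rand(1000, 3)
pose = torch.zeros(72)
bone_transforms = torch.eye(4).expand(24, 4, 4)
posed = linear_blend_skinning(x_canonical, field(x_canonical, pose), bone_transforms)

Because the weight field is differentiable in both the canonical position and the pose, rendering the posed mesh differentiably (as the abstract describes) lets multi-view image losses supervise the skinning weights end to end.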

Downloads


  • Paper
    PDF

  • Supplemental document
    PDF

  • Main video
    MP4


Citation

@InProceedings{vinecs,
  title     = {VINECS: Video-based Neural Character Skinning},
  author    = {Zhouyingcheng Liao and Vladislav Golyanik and Marc Habermann and Christian Theobalt},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}

Contact

For questions or clarifications, please get in touch with:
Marc Habermann
mhaberma@mpi-inf.mpg.de
