
Contact


Guoxing Sun

Max-Planck-Institut für Informatik
Department 6: Visual Computing and Artificial Intelligence
 office: Campus E1 4, Room 117
Saarland Informatics Campus
66123 Saarbrücken
Germany
 email: gsun(at)mpi-inf.mpg.de
 phone: +49 681 9325-4539
 fax: +49 681 9325-4099

Research Interests

  • Performance/Motion Capture
  • Human-Centric 3D Scene Understanding
  • 3D/4D Reconstruction
  • Neural Rendering

Publications

MetaCap: Meta-learning Priors from Multi-View Imagery for Sparse-view Human Performance Capture and Rendering

Guoxing Sun   Rishabh Dabral   Pascal Fua   Christian Theobalt   Marc Habermann  

ECCV 2024

Abstract
Faithful human performance capture and free-view rendering from sparse RGB observations is a long-standing problem in Vision and Graphics. The main challenges are the lack of observations and the inherent ambiguities of the setting, e.g. occlusions and depth ambiguity. As a result, radiance fields, which have shown great promise in capturing high-frequency appearance and geometry details in dense setups, perform poorly when naïvely supervised on sparse camera views, as the field simply overfits to the sparse-view inputs. To address this, we propose MetaCap, a method for efficient and high-quality geometry recovery and novel view synthesis given very sparse or even a single view of the human. Our key idea is to meta-learn the radiance field weights solely from potentially sparse multi-view videos, which can serve as a prior when fine-tuning them on sparse imagery depicting the human. This prior provides a good network weight initialization, thereby effectively addressing ambiguities in sparse-view capture. Due to the articulated structure of the human body and motion-induced surface deformations, learning such a prior is non-trivial. Therefore, we propose to meta-learn the field weights in a pose-canonicalized space, which reduces the spatial feature range and makes feature learning more effective. Consequently, one can fine-tune our field parameters to quickly generalize to unseen poses, novel illumination conditions, as well as novel and sparse (even monocular) camera views. For evaluating our method under different scenarios, we collect a new dataset, WildDynaCap, which contains subjects captured in both a dense camera dome and in-the-wild sparse camera rigs, and demonstrate superior results compared to recent state-of-the-art methods on both the public and WildDynaCap datasets.

[pdf], [video], [project page], [arxiv]
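
To illustrate the "meta-learn a weight prior, then fine-tune on sparse views" idea at the heart of MetaCap, here is a minimal, hypothetical sketch of a Reptile-style meta-learning loop. It is not the paper's implementation: the radiance field is stood in by a tiny MLP, sample_task returns random data in place of real multi-view supervision, and all names and hyperparameters (TinyField, inner_steps, meta_lr, ...) are assumptions introduced for illustration.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyField(nn.Module):
    """Hypothetical stand-in for a radiance field: a small MLP mapping
    canonical-space 3D coordinates to a 4D (rgb + density) output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 4),
        )

    def forward(self, x):
        return self.net(x)

def sample_task():
    """Dummy task sampler: random coordinates and targets stand in for
    the per-frame multi-view supervision a real capture setup provides."""
    return torch.randn(256, 3), torch.randn(256, 4)

def reptile_meta_train(field, meta_steps=100, inner_steps=8,
                       inner_lr=1e-2, meta_lr=1e-1):
    """Reptile-style outer loop: briefly fine-tune a clone on one task,
    then move the meta-weights toward the adapted weights."""
    for _ in range(meta_steps):
        coords, targets = sample_task()
        inner = copy.deepcopy(field)              # clone meta-weights
        opt = torch.optim.SGD(inner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):              # inner-loop adaptation
            opt.zero_grad()
            loss = F.mse_loss(inner(coords), targets)
            loss.backward()
            opt.step()
        with torch.no_grad():                     # meta-update toward clone
            for p, q in zip(field.parameters(), inner.parameters()):
                p.add_(meta_lr * (q - p))
    return field

meta_field = reptile_meta_train(TinyField())
# At capture time, one would fine-tune meta_field for a handful of steps
# on the sparse (even monocular) views of the new subject.

The point of the meta-update is that the resulting weights sit close, in parameter space, to good task-specific solutions, which is why a few fine-tuning steps on sparse imagery suffice instead of training from scratch.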

Holoported Characters: Real-time Free-viewpoint Rendering of Humans from Sparse RGB Cameras

Ashwath Shetty   Marc Habermann   Guoxing Sun   Diogo Luvizon   Vladislav Golyanik   Christian Theobalt  

CVPR 2024

Abstract
We present the first approach to render highly realistic free-viewpoint videos of a human actor in general apparel, from sparse multi-view recording to display, in real-time at an unprecedented 4K resolution. At inference, our method only requires four camera views of the moving actor and the respective 3D skeletal pose. It handles actors in wide clothing, and reproduces even fine-scale dynamic detail, e.g. clothing wrinkles, facial expressions, and hand gestures. At training time, our learning-based approach expects dense multi-view video and a rigged static surface scan of the actor. Our method comprises three main stages. Stage 1 is a skeleton-driven neural approach for high-quality capture of the detailed dynamic mesh geometry. Stage 2 is a novel solution to create a view-dependent texture using four test-time camera views as input. Finally, stage 3 comprises a new image-based refinement network rendering the final 4K image given the output from the previous stages. Our approach establishes a new benchmark for real-time rendering resolution and quality using sparse input camera views, unlocking possibilities for immersive telepresence.

[pdf], [video], [project page], [arxiv]
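
The abstract describes a three-stage pipeline (pose-driven geometry, view-dependent texture from four views, image-space refinement). Below is a schematic sketch of that data flow only, not the authors' code: GeometryNet, TextureNet, RefineNet, and the rasterize stub are hypothetical placeholder modules introduced for illustration, with trivial layers and shapes chosen for brevity.

import torch
import torch.nn as nn

class GeometryNet(nn.Module):
    """Stage 1 placeholder: maps a 3D skeletal pose to deformed mesh
    vertices (the real stage is a far richer skeleton-driven network)."""
    def __init__(self, n_joints=24, n_verts=1000):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_joints * 3, 256), nn.ReLU(),
                                 nn.Linear(256, n_verts * 3))
        self.n_verts = n_verts

    def forward(self, pose):                      # (B, n_joints, 3)
        return self.mlp(pose.flatten(1)).view(-1, self.n_verts, 3)

class TextureNet(nn.Module):
    """Stage 2 placeholder: fuses the four test-time camera views into a
    view-dependent texture."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4 * 3, 3, kernel_size=3, padding=1)

    def forward(self, views):                     # (B, 4, 3, H, W)
        b, v, c, h, w = views.shape
        return self.conv(views.view(b, v * c, h, w))

class RefineNet(nn.Module):
    """Stage 3 placeholder: image-space refinement of the coarse render."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, coarse):
        return self.conv(coarse)

def rasterize(verts, texture):
    """Hypothetical renderer stub: a real pipeline would project the
    textured mesh into the target view; here the texture passes through."""
    return texture

def render_frame(pose, views):
    verts = GeometryNet()(pose)                   # stage 1: geometry
    texture = TextureNet()(views)                 # stage 2: texture
    coarse = rasterize(verts, texture)            # textured-mesh render
    return RefineNet()(coarse)                    # stage 3: refinement

frame = render_frame(torch.randn(1, 24, 3), torch.randn(1, 4, 3, 64, 64))

Splitting the problem this way keeps each stage cheap enough for real-time use: geometry is driven by the compact skeletal pose, texture by only four input views, and the final network works purely in image space at the target resolution.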

Education

Private

Hobbies

  • Driving, Hiking
  • Watching movies
  • Listening to pop, rock, and folk music
  • Keeping a cat (I don't have one right now.)