office: Campus E1 4, Room 222, Saarland Informatics Campus, 66123 Saarbrücken, Germany
email: atewari@mpi-inf.mpg.de
phone: +49 681 9325-0222
Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video
E. Tretschk,
A. Tewari,
V. Golyanik,
M. Zollhöfer,
C. Lassner and
C. Theobalt
arXiv, 2020 We present the current state of our ongoing work on reconstructing Neural Radiance Fields (NeRF) of general non-rigid scenes from monocular videos via ray bending. [paper] [project page] [code]
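As a rough illustration of the ray-bending idea, here is a minimal NumPy sketch: points are sampled along a straight camera ray, and a small network offsets them into a canonical frame where the NeRF would be queried. The tiny random-weight MLP and the raw time input are stand-ins; the paper learns the bending network per scene, so all names and sizes here are hypothetical.

```python
# Minimal sketch of "ray bending" for a non-rigid NeRF. A tiny random-weight
# MLP stands in for the learned bending network (the real method trains this
# per scene); every name and size here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 4)) * 0.1, np.zeros(64)   # input: (x, y, z, t)
W2, b2 = rng.normal(size=(3, 64)) * 0.1, np.zeros(3)    # output: 3D offset

def bend(points, t):
    """Offset sample points from the deformed frame into the canonical frame."""
    inp = np.concatenate([points, np.full((len(points), 1), t)], axis=1)
    hidden = np.tanh(inp @ W1.T + b1)
    return points + hidden @ W2.T + b2  # bent (canonical) positions

# Sample points along one straight camera ray at time t, then bend them;
# the canonical NeRF would be queried at the bent positions.
origin, direction = np.zeros(3), np.array([0.0, 0.0, 1.0])
depths = np.linspace(0.5, 2.0, 8)
straight = origin + depths[:, None] * direction
canonical = bend(straight, t=0.3)
print(canonical.shape)  # (8, 3)
```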
i3DMM: Deep Implicit 3D Morphable Model of Human Heads
T. Yenamandra,
A. Tewari,
F. Bernard,
H-P. Seidel,
M. Elgharib,
D. Cremers and
C. Theobalt
arXiv, 2020 We present the first deep implicit 3D Morphable Model of full heads, including hair. [paper] [video] [project page]
Monocular Reconstruction of Neural Face Reflectance Fields
M. B R,
A. Tewari,
T-H. Oh,
T. Weyrich,
B. Bickel,
H-P. Seidel,
H. Pfister,
W. Matusik,
M. Elgharib and
C. Theobalt
arXiv, 2020 We present a new neural representation for face reflectance that allows estimating all components of the reflectance responsible for the final appearance from a single monocular image. [paper] [project page]
Learning Complete 3D Morphable Face Models from Images and Videos
M. B R,
A. Tewari,
H-P. Seidel,
M. Elgharib and
C. Theobalt
arXiv, 2020 We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos. [paper] [project page]
Egocentric Videoconferencing
M. Elgharib*,
M. Mendiratta*,
J. Thies,
M. Nießner,
H-P. Seidel,
A. Tewari,
V. Golyanik and
C. Theobalt (* equal contribution)
ACM Transactions on Graphics (Proc. of SIGGRAPH Asia 2020) — SIGGRAPH Asia 2020 We present a method for hands-free videoconferencing that uses a low-cost wearable egocentric camera setup and learns to translate the egocentric video stream of the face into a photo-realistic frontal video. [paper] [video] [project page]
PIE: Portrait Image Embedding for Semantic Control
A. Tewari,
M. Elgharib,
M. B R,
F. Bernard,
H-P. Seidel,
P. Perez,
M. Zollhöfer and
C. Theobalt
ACM Transactions on Graphics (Proc. of SIGGRAPH Asia 2020) — SIGGRAPH Asia 2020 We present the first approach for embedding real portrait images in the latent space of StyleGAN which allows for intuitive editing of the head pose, facial expression, and scene illumination in the image. [paper] [video] [project page]
PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations
E. Tretschk,
A. Tewari,
V. Golyanik,
M. Zollhöfer,
C. Stoll and
C. Theobalt
European Conference on Computer Vision 2020 — ECCV 2020 We present a new mid-level patch-based surface representation. At the level of patches, objects across different categories share similarities, which leads to more generalizable models. [paper] [video] [project page]
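To give a flavor of a patch-based implicit representation, here is a minimal NumPy sketch in which each patch contributes a local signed distance function and the global surface is a weighted blend. The per-patch sphere SDF and Gaussian blending weights are simplifications of the paper's learned patch networks and extrinsics; every name and constant is illustrative.

```python
# Minimal sketch of a patch-based implicit surface. Each patch is a sphere
# SDF with its own center and radius, standing in for the small learned SDF
# networks of the paper; blending weights are Gaussian in distance to center.
import numpy as np

centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # patch centers
radii = np.array([0.6, 0.6])                             # patch extents
sigma = 0.4                                              # blend bandwidth

def patch_sdf(x, c, r):
    return np.linalg.norm(x - c) - r  # stand-in for a per-patch SDF network

def blended_sdf(x):
    """Weighted average of per-patch SDF values at query point x."""
    vals = np.array([patch_sdf(x, c, r) for c, r in zip(centers, radii)])
    w = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * sigma ** 2))
    return np.sum(w * vals) / (np.sum(w) + 1e-9)

print(blended_sdf(np.array([0.5, 0.0, 0.0])))  # negative: inside the union
```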
DEMEA: Deep Mesh Autoencoders for Non-Rigidly Deforming Objects
E. Tretschk,
A. Tewari,
M. Zollhöfer,
V. Golyanik and
C. Theobalt
European Conference on Computer Vision 2020 — ECCV 2020 (Spotlight) We propose a general-purpose DEep MEsh Autoencoder (DEMEA) which adds a novel embedded deformation layer to a graph-convolutional mesh autoencoder. [paper] [video] [project page]
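For intuition, here is a minimal NumPy sketch of the classic embedded-deformation formulation that such a layer builds on: each vertex is deformed by a skinning-weighted blend of per-node rotations and translations. The node placements, rotations, and weights below are hand-picked for illustration, not learned as in the paper.

```python
# Minimal sketch of an embedded deformation layer: a coarse graph of nodes
# with positions g_k, rotations R_k, translations t_k, and skinning weights.
# All values here are illustrative, not learned.
import numpy as np

g = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])      # graph node positions
t = np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0]])     # node translations
theta = np.array([0.2, -0.2])                          # node rotations about z
R = np.array([[[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]] for a in theta])

def deform(v, w):
    """v' = sum_k w_k * (R_k (v - g_k) + g_k + t_k)."""
    out = np.zeros(3)
    for k in range(len(g)):
        out += w[k] * (R[k] @ (v - g[k]) + g[k] + t[k])
    return out

print(deform(np.array([0.5, 0.0, 0.0]), w=np.array([0.5, 0.5])))
```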
Neural Voice Puppetry: Audio-driven Facial Reenactment
J. Thies,
M. Elgharib,
A. Tewari,
C. Theobalt and
M. Nießner
European Conference on Computer Vision 2020 — ECCV 2020 Given an audio sequence of a source person or digital assistant, we generate a photo-realistic output video that is in sync with the audio. [paper] [video] [online demo] [code] [project page]
3D Morphable Face Models - Past, Present and Future
B. Egger,
W. A.P. Smith,
A. Tewari,
S. Wuhrer,
M. Zollhöfer,
T. Beeler,
F. Bernard,
T. Bolkart,
A. Kortylewski,
S. Romdhani,
C. Theobalt,
V. Blanz and
T. Vetter
ACM Transactions on Graphics 2020 We provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. [arXiv]
State of the Art on Neural Rendering
A. Tewari*,
O. Fried*,
J. Thies*,
V. Sitzmann*,
S. Lombardi,
K. Sunkavalli,
R. Martin-Brualla,
T. Simon,
J. Saragih,
M. Nießner,
R. Pandey,
S. Fanello,
G. Wetzstein,
J.-Y. Zhu,
C. Theobalt,
M. Agrawala,
E. Shechtman,
D. B. Goldman and
M. Zollhöfer (* equal contribution)
Computer Graphics Forum 2020 — Eurographics STAR report This state-of-the-art report summarizes recent trends in neural rendering and discusses its applications. [arXiv] [CVPR 2020 Tutorial]
StyleRig: Rigging StyleGAN for 3D Control over Portrait Images
A. Tewari,
M. Elgharib,
G. Bharaj,
F. Bernard,
H-P. Seidel,
P. Perez,
M. Zollhöfer and
C. Theobalt
Proc. Computer Vision and Pattern Recognition 2020 — CVPR 2020 (Oral) We present the first method to provide a face rig-like control over a pretrained and fixed StyleGAN network using a 3D morphable model. [paper] [video] [project page]
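As a rough sketch of the rig-network idea, the snippet below maps a StyleGAN latent code plus target 3D-morphable-model parameters to an additive latent edit, leaving the generator itself untouched. A random-weight linear map stands in for the learned network, and all names, sizes, and parameters are hypothetical.

```python
# Minimal sketch of rig-like latent control: a stand-in "rignet" maps
# (latent, target 3DMM parameters) to an edited latent w' = w + dw. The real
# method trains this network with 3DMM-based consistency losses.
import numpy as np

rng = np.random.default_rng(0)
LATENT, PARAMS = 512, 6                    # StyleGAN latent size, pose params
A = rng.normal(size=(LATENT, LATENT + PARAMS)) * 0.01

def rignet(w, p):
    """Predict a latent offset conditioned on the latent and target params."""
    dw = A @ np.concatenate([w, p])
    return w + dw

w = rng.normal(size=LATENT)                 # latent code of an embedded portrait
p = np.array([0.3, 0, 0, 0, 0, 0])          # e.g. a target head rotation
w_edited = rignet(w, p)                     # feed to the fixed StyleGAN generator
print(np.linalg.norm(w_edited - w))
```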
Text-based Editing of Talking-head Video
O. Fried,
A. Tewari,
M. Zollhöfer,
A. Finkelstein,
E. Shechtman,
D. Goldman,
K. Genova,
C. Theobalt and
M. Agrawala
ACM Transactions on Graphics (Proc. of SIGGRAPH 2019) — SIGGRAPH 2019 We propose a novel method to edit a talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified. [paper] [video] [project page]
FML: Face Model Learning from Videos
A. Tewari,
F. Bernard,
P. Garrido,
G. Bharaj,
M. Elgharib,
H-P. Seidel,
P. Perez,
M. Zollhöfer and
C. Theobalt
Proc. Computer Vision and Pattern Recognition 2019 — CVPR 2019 (Oral) We propose multi-frame video-based self-supervised training of a deep network that learns a face identity model both in shape and appearance while jointly learning to reconstruct 3D faces. [paper] [video] [project page]
High-Fidelity Monocular Face Reconstruction based on an Unsupervised Model-based Face Autoencoder
A. Tewari,
M. Zollhöfer,
F. Bernard,
P. Garrido,
H. Kim,
P. Perez and
C. Theobalt
Transactions on Pattern Analysis and Machine Intelligence — TPAMI special issue on The Best of ICCV 2017 This work is an extended version of our ICCV 2017 paper. We additionally present a stochastic vertex sampling technique for faster training of our networks, and we propose and evaluate analysis-by-synthesis and shape-from-shading refinement approaches to achieve a high-fidelity reconstruction. [paper] [project page]
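To illustrate the stochastic vertex sampling idea, the sketch below evaluates a per-vertex photometric loss only on a random subset of vertices each iteration, which reduces the cost of each training step. The vertex counts, color arrays, and sampler are illustrative stand-ins, not the paper's implementation.

```python
# Minimal sketch of stochastic vertex sampling: the per-vertex photometric
# loss is evaluated on a fresh random subset of vertices per iteration.
# Sizes and data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
NUM_VERTICES, SAMPLE = 60_000, 4_000

def vertex_loss(pred_colors, target_colors, idx):
    """Photometric loss evaluated only on the sampled vertex subset."""
    return np.mean((pred_colors[idx] - target_colors[idx]) ** 2)

pred = rng.uniform(size=(NUM_VERTICES, 3))
target = rng.uniform(size=(NUM_VERTICES, 3))
idx = rng.choice(NUM_VERTICES, size=SAMPLE, replace=False)  # new each iteration
print(vertex_loss(pred, target, idx))
```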
A Hybrid Model for Identity Obfuscation by Face Replacement
Q. Sun,
A. Tewari,
W. Xu,
M. Fritz,
C. Theobalt and
B. Schiele
European Conference on Computer Vision (ECCV), 2018 — ECCV 2018 We propose a new hybrid approach to obfuscate identities in photos by head replacement. Our approach combines state-of-the-art parametric face synthesis with the latest advances in Generative Adversarial Networks (GANs) for data-driven image synthesis. [paper] [project page]
Deep Video Portraits
H. Kim,
P. Garrido,
A. Tewari,
W. Xu,
J. Thies,
M. Nießner,
P. Perez,
C. Richardt
M. Zollhöfer and
C. Theobalt
ACM Transactions on Graphics (Proc. SIGGRAPH 2018) — SIGGRAPH 2018 We present a novel approach that enables full photo-realistic re-animation of portrait videos using only an input video. [paper] [video] [project page]
Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz
A. Tewari,
M. Zollhöfer,
P. Garrido,
F. Bernard,
H. Kim,
P. Perez and
C. Theobalt
Proc. Computer Vision and Pattern Recognition 2018 — CVPR 2018 (Oral) We propose the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model. [paper] [video] [project page]
InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image
H. Kim,
M. Zollhöfer,
A. Tewari,
J. Thies,
C. Richardt and
C. Theobalt
Proc. Computer Vision and Pattern Recognition 2018 — CVPR 2018 We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image in a single shot. [paper] [video] [project page]
MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
A. Tewari,
M. Zollhöfer,
H. Kim,
P. Garrido,
F. Bernard,
P. Perez and
C. Theobalt
Proc. of the International Conference on Computer Vision 2017 — ICCV 2017 (Oral) In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a human face from a single image. [paper] [video] [project page]
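As a conceptual sketch of the model-based autoencoder, the snippet below wires an encoder that regresses scene parameters to a differentiable decoder that renders them back to an image, with a self-supervised photometric loss. Both the encoder and the renderer are trivial stand-ins for the paper's CNN and face-model renderer; every function and size here is hypothetical.

```python
# Minimal sketch of a model-based autoencoder: encoder regresses parameters,
# a differentiable renderer decodes them to an image, and a photometric loss
# compares against the input. Both components are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
H = W = 8

def encoder(image):
    # Stand-in for a CNN that regresses 3DMM + pose + illumination parameters.
    return image.mean() * np.ones(3)

def render(params):
    # Stand-in for the differentiable renderer decoding parameters to an image.
    yy, xx = np.mgrid[0:H, 0:W] / H
    return params[0] + params[1] * xx + params[2] * yy

image = rng.uniform(size=(H, W))
params = encoder(image)
recon = render(params)
photometric_loss = np.mean((recon - image) ** 2)  # self-supervised objective
print(photometric_loss)
```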