Computer Graphics


Ayush Tewari

Max-Planck-Institut für Informatik
Department 4: Computer Graphics
Office: Campus E1 4, Room 222
Saarland Informatics Campus
66123 Saarbrücken
Phone: +49 681 9325-0222


Neural Voice Puppetry: Audio-driven Facial Reenactment
J. Thies, M. Elgharib, A. Tewari, C. Theobalt and M. Nießner

European Conference on Computer Vision 2020   —   ECCV 2020

Given an audio sequence of a source person or digital assistant, we generate a photo-realistic output video that is in sync with the audio.
[paper] [video] [online demo] [code] [project page]

DEMEA: Deep Mesh Autoencoders for Non-Rigidly Deforming Objects
E. Tretschk, A. Tewari, M. Zollhöfer, V. Golyanik and C. Theobalt

European Conference on Computer Vision 2020   —   ECCV 2020 (Spotlight)

We propose a general-purpose DEep MEsh Autoencoder (DEMEA) which adds a novel embedded deformation layer to a graph-convolutional mesh autoencoder.
[arXiv] [video] [project page]

3D Morphable Face Models - Past, Present and Future
B. Egger, W. A. P. Smith, A. Tewari, S. Wuhrer, M. Zollhöfer, T. Beeler, F. Bernard, T. Bolkart, A. Kortylewski, S. Romdhani, C. Theobalt, V. Blanz and T. Vetter

ACM Transactions on Graphics 2020

We provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed.

State of the Art on Neural Rendering
A. Tewari*, O. Fried*, J. Thies*, V. Sitzmann*, S. Lombardi, K. Sunkavalli, R. Martin-Brualla, T. Simon, J. Saragih, M. Nießner, R. Pandey, S. Fanello, G. Wetzstein, J.-Y. Zhu, C. Theobalt, M. Agrawala, E. Shechtman, D. B. Goldman and M. Zollhöfer (* equal contribution)

Computer Graphics Forum 2020   —   Eurographics STAR report

This state-of-the-art report summarizes recent trends in neural rendering and discusses its applications.
[paper] [arXiv]

StyleRig: Rigging StyleGAN for 3D Control over Portrait Images
A. Tewari, M. Elgharib, G. Bharaj, F. Bernard, H.-P. Seidel, P. Pérez, M. Zollhöfer and C. Theobalt

Proc. Computer Vision and Pattern Recognition 2020   —   CVPR 2020 (Oral)

We present the first method to provide a face rig-like control over a pretrained and fixed StyleGAN network using a 3D morphable model.
[paper] [video] [project page]

Text-based Editing of Talking-head Video
O. Fried, A. Tewari, M. Zollhöfer, A. Finkelstein, E. Shechtman, D. B. Goldman, K. Genova, C. Theobalt and M. Agrawala

ACM Transactions on Graphics (Proc. of SIGGRAPH 2019)   —   SIGGRAPH 2019

We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified.
[paper] [video] [project page]

FML: Face Model Learning from Videos
A. Tewari, F. Bernard, P. Garrido, G. Bharaj, M. Elgharib, H.-P. Seidel, P. Pérez, M. Zollhöfer and C. Theobalt

Proc. Computer Vision and Pattern Recognition 2019   —   CVPR 2019 (Oral)

We propose multi-frame video-based self-supervised training of a deep network that (i) learns a face identity model both in shape and appearance while (ii) jointly learning to reconstruct 3D faces.
[paper] [video] [project page]

High-Fidelity Monocular Face Reconstruction based on an Unsupervised Model-based Face Autoencoder
A. Tewari, M. Zollhöfer, F. Bernard, P. Garrido, H. Kim, P. Pérez and C. Theobalt

IEEE Transactions on Pattern Analysis and Machine Intelligence   —   TPAMI
Special issue on The Best of ICCV 2017

This work is an extended version of our ICCV 2017 paper. We additionally present a stochastic vertex sampling technique for faster network training, and we propose and evaluate analysis-by-synthesis and shape-from-shading refinement approaches to achieve high-fidelity reconstructions.
[paper] [project page]

A Hybrid Model for Identity Obfuscation by Face Replacement
Q. Sun, A. Tewari, W. Xu, Mario Fritz, C. Theobalt and B. Schiele

European Conference on Computer Vision 2018   —   ECCV 2018

We propose a new hybrid approach to obfuscate identities in photos by head replacement. Our approach combines state-of-the-art parametric face synthesis with the latest advances in Generative Adversarial Networks (GANs) for data-driven image synthesis.
[paper] [project page]

Deep Video Portraits
H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Pérez, C. Richardt, M. Zollhöfer and C. Theobalt

ACM Transactions on Graphics (Proc. SIGGRAPH 2018)   —   SIGGRAPH 2018

We present a novel approach that enables full photo-realistic re-animation of portrait videos using only an input video.
[paper] [video] [project page]

Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz
A. Tewari, M. Zollhöfer, P. Garrido, F. Bernard, H. Kim, P. Pérez and C. Theobalt

Proc. Computer Vision and Pattern Recognition 2018   —   CVPR 2018 (Oral)

We propose the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model.
[paper] [video] [project page]

InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image
H. Kim, M. Zollhöfer, A. Tewari, J. Thies, C. Richardt and C. Theobalt

Proc. Computer Vision and Pattern Recognition 2018   —   CVPR 2018

We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image in a single shot.
[paper] [video] [project page]

MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Pérez and C. Theobalt

Proc. International Conference on Computer Vision 2017   —   ICCV 2017 (Oral)

In this work, we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a human face from a single image.
[paper] [video] [project page]