Contact

Ayush Tewari

Max-Planck-Institut für Informatik
Department 4: Computer Graphics
 office: Campus E1 4, Room 222
Saarland Informatics Campus
66123 Saarbrücken
Germany
 email: atewari@mpi-inf.mpg.de
 phone: +49 681 9325-0222

Research Interests

  • Computer Graphics
  • Computer Vision

Publications

High-Fidelity Monocular Face Reconstruction based on an Unsupervised Model-based Face Autoencoder
A. Tewari, M. Zollhöfer, F. Bernard, P. Garrido, H. Kim, P. Perez and C. Theobalt

IEEE Transactions on Pattern Analysis and Machine Intelligence   —   TPAMI
Special Issue on the Best of ICCV 2017

This work extends our ICCV 2017 paper: we additionally present a stochastic vertex sampling technique for faster network training, and we propose and evaluate analysis-by-synthesis and shape-from-shading refinement approaches to achieve high-fidelity reconstructions.
[paper] [project page]


A Hybrid Model for Identity Obfuscation by Face Replacement
Q. Sun, A. Tewari, W. Xu, M. Fritz, C. Theobalt and B. Schiele

European Conference on Computer Vision (ECCV), 2018   —   ECCV 2018

We propose a new hybrid approach that obfuscates identities in photos by head replacement. It combines state-of-the-art parametric face synthesis with recent advances in Generative Adversarial Networks (GANs) for data-driven image synthesis.
[paper] [project page]


Deep Video Portraits
H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Perez, C. Richardt, M. Zollhöfer and C. Theobalt

ACM Transactions on Graphics (Proc. SIGGRAPH 2018)   —   SIGGRAPH 2018

We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video.
[paper] [video] [project page]


Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz
A. Tewari, M. Zollhöfer, P. Garrido, F. Bernard, H. Kim, P. Perez and C. Theobalt

Proc. Computer Vision and Pattern Recognition 2018   —   CVPR 2018 (Oral)

We propose the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model.
[paper] [video] [project page]


InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image
H. Kim, M. Zollhöfer, A. Tewari, J. Thies, C. Richardt and C. Theobalt

Proc. Computer Vision and Pattern Recognition 2018   —   CVPR 2018

We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image in a single shot.
[paper] [video] [project page]


MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Perez and C. Theobalt

Proc. of the International Conference on Computer Vision 2017   —   ICCV 2017 (Oral)

In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a human face from a single image.
[paper] [video] [project page]


Education