Edgar Tretschk

Max-Planck-Institut für Informatik
D6: Visual Computing and Artificial Intelligence
3D Reconstruction
office: Campus E1 4, Room 221
Saarland Informatics Campus
66123 Saarbrücken
email: Get my email address via email
phone: +49 681 9325-4021
fax: +49 681 9325-4099
I am a fourth-year Ph.D. candidate in the Visual Computing and Artificial Intelligence department, advised by Prof. Dr. Christian Theobalt.

Research Interests

My research lies at the intersection of graphics, vision, and machine learning, with a focus on general, non-rigidly deforming objects.
  • Real-time tracking of general objects
  • 3D reconstruction of non-rigid objects
  • Machine learning for computer graphics/vision


φ-SfT: Shape-from-Template with a Physics-based Deformation Model

Navami Kairanda, Edgar Tretschk, Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik

CVPR 2022

This paper proposes a new SfT approach that explains the observations through the simulation of a physics-based surface deformation model representing forces and material properties. In contrast to previous works, we utilise a differentiable physics-based simulator to regularise the surface evolution. In addition, we regress material properties such as bending coefficients, elasticity, stiffness, and density. For the evaluation, we use an RGB-D camera to record challenging real surfaces with various material properties and textures exposed to physical forces. Our approach reconstructs the underlying deformations much more accurately than related methods.

[Project Page (incl. Code & Data)] [arXiv]
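The underlying analysis-by-synthesis idea (simulate a deformation model, then fit its material parameters so the simulation matches the observations) can be illustrated with a toy example. This is a minimal sketch, not the paper's method: a one-dimensional damped spring stands in for the cloth simulator, and a finite-difference gradient stands in for the differentiable simulator; all names here are hypothetical.

```python
import numpy as np

def simulate(k, steps=100, dt=0.01, x0=1.0, v0=0.0, m=1.0, damping=0.2):
    """Toy 'deformation model': a damped spring with stiffness k,
    integrated with semi-implicit Euler. Returns the trajectory."""
    x, v, traj = x0, v0, []
    for _ in range(steps):
        v += dt * (-k * x - damping * v) / m
        x += dt * v
        traj.append(x)
    return np.array(traj)

# "observation": a trajectory produced with the true (unknown) stiffness
observed = simulate(k=4.0)

def loss(kk):
    """Reconstruction error of the simulated vs. observed trajectory."""
    return np.mean((simulate(kk) - observed) ** 2)

# analysis-by-synthesis: recover the stiffness by gradient descent
# (finite differences here; the paper differentiates the simulator)
k, lr, eps = 1.0, 1.0, 1e-4
for _ in range(300):
    grad = (loss(k + eps) - loss(k - eps)) / (2 * eps)
    k -= lr * grad
# k moves from the initial guess of 1.0 toward the true stiffness 4.0
```

In the paper, the gradient comes from differentiating through the simulator itself, which scales to the many parameters of a cloth model where finite differences would be impractical.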

Virtual Elastic Objects

Hsiao-yu Chen, Edgar Tretschk, Tuur Stuyck, Petr Kadleček, Ladislav Kavan, Etienne Vouga, Christoph Lassner

CVPR 2022

We present Virtual Elastic Objects (VEOs): virtual objects that not only look like their real-world counterparts but also behave like them, even when subject to novel interactions. Achieving this presents multiple challenges: objects must not only be captured together with the physical forces acting on them, then faithfully reconstructed and rendered, but plausible material parameters must also be estimated and simulated. The resulting method can handle objects composed of inhomogeneous material and with very different shapes, and it can simulate interactions with other virtual objects. We present our results using a newly collected dataset of 12 objects under a variety of force fields, which will be shared with the community.

[Project Page (incl. Data)] [arXiv]

Advances in Neural Rendering

Ayush Tewari, Justus Thies, Ben Mildenhall, Pratul Srinivasan, Edgar Tretschk, Yifan Wang, Christoph Lassner, Vincent Sitzmann, Ricardo Martin-Brualla, Stephen Lombardi, Tomas Simon, Christian Theobalt, Matthias Niessner, Jonathan T. Barron, Gordon Wetzstein, Michael Zollhöfer, Vladislav Golyanik

Eurographics 2022

This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects and scene editing and composition. While most of these approaches are scene-specific, we also discuss techniques that generalize across object classes and can be used for generative tasks. In addition to reviewing these state-of-the-art methods, we provide an overview of fundamental concepts and definitions used in the current literature. We conclude with a discussion on open challenges and social implications.

[Project Page] [arXiv]

Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video

Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt

ICCV 2021

We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes. Our approach takes RGB images of a dynamic scene as input (e.g., from a monocular video recording) and creates a high-quality space-time geometry and appearance representation. We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g., a 'bullet-time' video effect. Our formulation enables dense correspondence estimation across views and time, as well as compelling video editing applications such as motion exaggeration.

[Project Page (incl. Code)] [arXiv]
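The core mechanism, bending each camera ray into a canonical volume before standard NeRF-style alpha compositing, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the learned canonical radiance field and the per-frame ray-bending network are replaced by simple hand-written stand-ins (`canonical_field` and `bend` are hypothetical names).

```python
import numpy as np

def canonical_field(xyz):
    """Stand-in for the canonical radiance field (an MLP in the paper):
    per-point density and RGB color. Here: a soft colored sphere."""
    d = np.linalg.norm(xyz, axis=-1)
    density = 5.0 * np.exp(-4.0 * d ** 2)                     # (N,)
    color = np.stack([np.full_like(d, 0.8), d, 1.0 - d], axis=-1)
    return density, color.clip(0.0, 1.0)                      # (N, 3)

def bend(xyz, t):
    """Stand-in for the per-frame ray-bending network: offsets each
    sample point into the canonical volume. Here: a rigid shift."""
    return xyz + np.array([0.1 * t, 0.0, 0.0])

def render_ray(origin, direction, t, n_samples=64, near=0.5, far=4.0):
    """NeRF-style quadrature: sample the ray, bend the samples into the
    canonical volume, then alpha-composite front to back."""
    z = np.linspace(near, far, n_samples)
    pts = origin + z[:, None] * direction          # samples along the ray
    density, color = canonical_field(bend(pts, t))
    delta = np.append(np.diff(z), z[1] - z[0])     # segment lengths
    alpha = 1.0 - np.exp(-density * delta)         # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha                        # compositing weights
    return (weights[:, None] * color).sum(axis=0), weights

rgb, w = render_ray(np.array([0.0, 0.0, -2.0]),
                    np.array([0.0, 0.0, 1.0]), t=0.0)
```

Training then amounts to comparing such rendered colors against the input video frames and backpropagating into both networks.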

PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations

Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Carsten Stoll, Christian Theobalt

ECCV 2020

We present a new mid-level patch-based surface representation. At the level of patches, objects across different categories share similarities, which leads to more generalizable models. We show that our representation, trained on one category of objects from ShapeNet, can also represent detailed shapes from any other category well. In addition, it can be trained on far fewer shapes than existing approaches require. We show several applications of our new representation, including shape interpolation and partial point cloud completion. Due to explicit control over the positions, orientations, and scales of patches, our representation is also more controllable than object-level representations, which enables us to deform encoded shapes non-rigidly.

[Project Page (incl. Code)] [arXiv]
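The basic evaluation of such a patch-based implicit representation, blending per-patch SDF predictions with weights that fall off away from each patch, can be sketched as follows. This is an illustrative toy, not the paper's code: the shared patch decoder (an MLP) is replaced by an analytic sphere, and `patch_sdf`/`blended_sdf` are hypothetical names.

```python
import numpy as np

def patch_sdf(local_xyz, latent):
    """Stand-in for the shared patch decoder (an MLP in the paper):
    maps points in a patch's local frame, given a latent code, to SDF
    values. Here: a sphere whose radius is read from the latent code."""
    return np.linalg.norm(local_xyz, axis=-1) - latent[0]

def blended_sdf(xyz, centers, scales, latents):
    """Global SDF as a blend of per-patch predictions, weighted by
    Gaussians that fall off with distance from each patch center."""
    num = np.zeros(len(xyz))
    den = np.zeros(len(xyz))
    for c, s, z in zip(centers, scales, latents):
        local = (xyz - c) / s                    # into the patch frame
        w = np.exp(-0.5 * np.sum(local ** 2, axis=-1))
        num += w * s * patch_sdf(local, z)       # undo the local scaling
        den += w
    return num / np.maximum(den, 1e-9)
```

Because the patch extrinsics (`centers`, `scales`, and in the full model also orientations) are explicit, moving them deforms the encoded shape directly, which is the source of the controllability mentioned above.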

DEMEA: Deep Mesh Autoencoders for Non-Rigidly Deforming Objects

Edgar Tretschk, Ayush Tewari, Michael Zollhöfer, Vladislav Golyanik, Christian Theobalt

ECCV 2020 (Spotlight)

We propose a general-purpose DEep MEsh Autoencoder (DEMEA) which adds a novel embedded deformation layer to a graph-convolutional mesh autoencoder. We demonstrate multiple applications of DEMEA, including non-rigid 3D reconstruction from depth and shading cues, non-rigid surface tracking, as well as the transfer of deformations over different meshes.

[Project Page] [arXiv]
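The embedded deformation operation that the layer computes (each vertex deformed by a skinning-weighted blend of local rigid transforms attached to graph nodes) can be sketched as follows. This is an illustrative stand-alone version with hypothetical names, not the paper's differentiable layer.

```python
import numpy as np

def embedded_deformation(verts, nodes, node_R, node_t, weights):
    """Embedded deformation: each vertex is deformed by a weighted
    blend of its graph nodes' local rigid transforms.

    verts:   (V, 3) rest-pose vertices
    nodes:   (G, 3) rest-pose node positions
    node_R:  (G, 3, 3) per-node rotations
    node_t:  (G, 3) per-node translations
    weights: (V, G) skinning weights, rows summing to 1
    """
    # per node g, a vertex v maps to: R_g @ (v - g) + g + t_g
    local = verts[:, None, :] - nodes[None, :, :]       # (V, G, 3)
    moved = np.einsum('gij,vgj->vgi', node_R, local)    # rotate locally
    moved += nodes[None, :, :] + node_t[None, :, :]     # place back
    return (weights[:, :, None] * moved).sum(axis=1)    # blend per vertex
```

In DEMEA, the graph-convolutional decoder predicts the node rotations and translations, and this operation turns them into dense vertex positions.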

Neural Dense Non-Rigid Structure from Motion with Latent Space Constraints

Vikramjit Singh Sidhu, Edgar Tretschk, Vladislav Golyanik, Antonio Agudo, Christian Theobalt

ECCV 2020

We introduce the first dense neural non-rigid structure from motion (N-NRSfM) approach that can be trained end-to-end in an unsupervised manner from 2D point tracks. We model the deformations with an auto-decoder and impose subspace constraints on the recovered latent space function in the frequency domain, which allows us to recover the period of the input sequence. Our method enables multiple applications, including shape compression, completion, and interpolation. Combined with an encoder trained directly on 2D images, we perform scenario-specific monocular 3D shape reconstruction at interactive frame rates.

[Project Page (incl. Code)]

DispVoxNets: Non-Rigid Point Set Alignment with Supervised Learning Proxies

Soshi Shimada, Vladislav Golyanik, Edgar Tretschk, Didier Stricker, Christian Theobalt

3DV 2019 (Oral)

We introduce a new kind of supervised-learning framework for non-rigid point set alignment, Displacements on Voxels Networks (DispVoxNets), which abstracts away from the point set representation and regresses 3D displacement fields on regularly sampled proxy 3D voxel grids. Thanks to recently released collections of deformable objects with known intra-state correspondences, DispVoxNets learn a deformation model and further priors (e.g., weak point topology preservation) for different object categories such as cloth, human bodies, and faces.

[Project Page] [arXiv]
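Applying a displacement field defined on a voxel grid to a continuous point set amounts to trilinear interpolation at each point. A minimal sketch, assuming the displacement grid is already given (the paper regresses it with a 3D CNN, which is omitted here; `sample_displacements` is a hypothetical name):

```python
import numpy as np

def sample_displacements(disp_grid, points):
    """Trilinearly sample a regular 3D displacement field at continuous
    point locations.

    disp_grid: (D, H, W, 3) displacements on a voxel grid over [0, 1]^3
    points:    (N, 3) query points in [0, 1]^3
    """
    dims = np.array(disp_grid.shape[:3])
    coords = points * (dims - 1)          # continuous voxel coordinates
    lo = np.clip(np.floor(coords).astype(int), 0, dims - 2)
    frac = coords - lo                    # position inside the cell
    out = np.zeros_like(points, dtype=float)
    for dx in (0, 1):                     # blend the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, frac[:, 0], 1 - frac[:, 0]) *
                     np.where(dy, frac[:, 1], 1 - frac[:, 1]) *
                     np.where(dz, frac[:, 2], 1 - frac[:, 2]))
                out += w[:, None] * disp_grid[lo[:, 0] + dx,
                                              lo[:, 1] + dy,
                                              lo[:, 2] + dz]
    return out

# alignment: move the template points by the sampled displacements
# aligned = points + sample_displacements(disp_grid, points)
```

Regressing displacements on the grid rather than per point is what makes the approach independent of the input point set's size and ordering.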

Sequential Attacks on Agents for Long-Term Adversarial Goals

Edgar Tretschk, Seong Joon Oh, Mario Fritz

2nd ACM Computer Science in Cars Symposium, 2018

We show that an adversary can be trained to control a deep reinforcement learning agent. Our technique works on fully trained victim agents and makes them pursue an alternative, adversarial goal when under attack. In contrast to traditional attacks on, e.g., image classifiers, our setting involves adversarial goals that may not be immediately reachable but may instead require multiple steps to achieve.

[pdf] [arXiv]

Teaching
  • Summer semester 2019, 2020, 2021:
    Supervisor for Computer Vision and Machine Learning for Computer Graphics Seminar, Saarland University and MPI for Informatics
  • September/October 2016, 2017, 2018:
    Coach for the Mathematik-Vorkurs für Informatiker (Math preparation course for new CS students), Saarland University
  • September/October 2017:
    Voluntary lecturer for the Mathematik-Vorkurs für Informatiker (Math preparation course for new CS students), Saarland University
  • Winter semester 2017/18:
    Tutor for Grundzüge der Theoretischen Informatik (Theoretical Computer Science), Lecturer: Prof. Dr. Markus Bläser, Saarland University
  • Winter semester 2015/16:
    Tutor for Programmierung 1 (Programming 1), Lecturer: Prof. Dr. Gert Smolka, Saarland University
  • March 2015:
    Tutor for re-exam preparation in Programmierung 1 (Programming 1), Lecturer: Prof. Bernd Finkbeiner, Ph.D., Saarland University

Recent Positions


Service and Awards

  • Reviewer for: ECCV (2022), CVPR (2022), ICML (2022), ICLR (2022), NeurIPS (2021)
  • November 2017:
    Bachelor Award (for the best Bachelor graduates in CS)
  • April 2015 -- March 2017:
    Member of the Bachelor Honors Program (special support program for talented and ambitious Bachelor students in CS)
  • April 2015 -- March 2017:
    Deutschlandstipendium scholarship