Edith Tretschk

Max-Planck-Institut für Informatik
D6: Visual Computing and Artificial Intelligence
3D Reconstruction
 office: Campus E1 4, Room 221
Saarland Informatics Campus
66123 Saarbrücken
Germany
 phone: +49 681 9325-4021
 fax: +49 681 9325-4099
Looking for positions in industry! I am a soon-to-graduate Ph.D. candidate in the Visual Computing and Artificial Intelligence department, advised by Prof. Dr. Christian Theobalt.

Research Interests

My research lies at the intersection of graphics, vision, and machine learning. I focus on 3D reconstruction, on general, non-rigidly deforming objects, and on neural rendering and neural scene representations.

Publications

CCuantuMM: Cycle-Consistent Quantum-Hybrid Matching of Multiple Shapes

Harshil Bhatia, Edith Tretschk, Zorah Lähner, Marcel Seelbach Benkner, Michael Moeller, Christian Theobalt, Vladislav Golyanik

CVPR 2023

Jointly matching multiple, non-rigidly deformed 3D shapes is a challenging, NP-hard problem. A perfect matching is necessarily cycle-consistent: following the pairwise point correspondences along a cycle of shapes must lead back to the starting vertex of the original shape. This paper introduces the first quantum-hybrid approach for 3D shape multi-matching. Its iterative formulation is suited to modern adiabatic quantum hardware and scales linearly with the total number of input shapes. On benchmark datasets, the proposed approach is on par with classical multi-matching methods.

[Project Page (incl. Code)] [arXiv]
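
To make the cycle-consistency condition concrete, here is a small NumPy sketch (unrelated to the quantum-hybrid solver itself) that checks whether three pairwise matchings, represented as hypothetical permutation matrices, compose to the identity:

    import numpy as np

    def random_permutation_matrix(n, rng):
        # A permutation matrix has exactly one 1 per row and column.
        P = np.zeros((n, n), dtype=int)
        P[np.arange(n), rng.permutation(n)] = 1
        return P

    def is_cycle_consistent(P_ab, P_bc, P_ca):
        # Following correspondences A -> B -> C -> A must map every vertex
        # back to itself, i.e. the composed permutation is the identity.
        composed = P_ca @ P_bc @ P_ab
        return np.array_equal(composed, np.eye(P_ab.shape[0], dtype=int))

    rng = np.random.default_rng(0)
    P_ab = random_permutation_matrix(5, rng)
    P_bc = random_permutation_matrix(5, rng)
    P_ca = (P_bc @ P_ab).T  # closing the cycle with the inverse is consistent by construction
    print(is_cycle_consistent(P_ab, P_bc, P_ca))                               # True
    print(is_cycle_consistent(P_ab, P_bc, random_permutation_matrix(5, rng)))  # almost surely False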



State of the Art in Dense Monocular Non-Rigid 3D Reconstruction

Edith Tretschk*, Navami Kairanda*, Mallikarjun B R, Rishabh Dabral, Adam Kortylewski, Bernhard Egger, Marc Habermann, Pascal Fua, Christian Theobalt, Vladislav Golyanik

Eurographics 2023 (STAR)

This survey focuses on state-of-the-art methods for dense non-rigid 3D reconstruction of various deformable objects and composite scenes from monocular videos or sets of monocular views. We review the fundamentals of 3D reconstruction from 2D image observations, start from general methods, and proceed towards techniques that make stronger assumptions about the observed objects (e.g., human faces, bodies, hands, and animals). We conclude by discussing open challenges in the field and the social aspects associated with the usage of the reviewed methods.

[Project Page] [arXiv]



QuAnt: Quantum Annealing with Learnt Couplings

Marcel Seelbach Benkner, Maximilian Krahn, Edith Tretschk, Zorah Lähner, Michael Moeller, Vladislav Golyanik

ICLR 2023 (Top 25%)

Modern quantum annealers can find high-quality solutions to combinatorial optimisation objectives given as quadratic unconstrained binary optimisation (QUBO) problems. Unfortunately, obtaining suitable QUBO forms in computer vision remains challenging and currently requires problem-specific analytical derivations. In stark contrast to prior work, this paper proposes to learn QUBO forms from data through gradient backpropagation instead of deriving them. As a result, the solution encodings can be chosen flexibly and compactly. Furthermore, our methodology is general and virtually independent of the specifics of the target problem type.

[Project Page (incl. Code & Data)] [arXiv] [OpenReview]
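
As a rough illustration of learning QUBO couplings by backpropagation (a toy contrastive objective, not the training scheme of the paper), one could fit a coupling matrix so that a known binary solution attains lower energy than perturbed ones:

    import torch

    torch.manual_seed(0)
    n = 8
    Q = torch.zeros(n, n, requires_grad=True)      # learnable QUBO couplings
    x_gt = (torch.rand(n) > 0.5).float()           # hypothetical ground-truth solution

    def qubo_energy(Q, x):
        return x @ Q @ x

    opt = torch.optim.Adam([Q], lr=1e-2)
    for step in range(500):
        flips = (torch.rand(16, n) < 0.2).float()  # perturb the solution by random bit flips
        x_neg = (x_gt.unsqueeze(0) + flips) % 2
        e_neg = torch.stack([qubo_energy(Q, x) for x in x_neg])
        # Margin loss: the true solution should have lower energy than the perturbations.
        loss = torch.relu(qubo_energy(Q, x_gt) - e_neg + 1.0).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The learnt Q could then be handed to an annealer (or a classical QUBO solver).
    print(qubo_energy(Q, x_gt).item())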



Generation of Truly Random Numbers on a Quantum Annealer

Harshil Bhatia, Edith Tretschk, Christian Theobalt, Vladislav Golyanik

IEEE Access 2022

This study investigates how qubits of modern quantum annealers (QA) such as D-Wave can be used to generate truly random numbers. We show how a QA can be initialised and how the annealing schedule can be set so that, after the annealing, thousands of truly random binary numbers are measured in parallel. We discuss the observed qubits' properties and their influence on the random number generation, and consider various physical factors that influence the performance of our generator, namely digital-to-analogue quantisation errors, flux errors, temperature errors, and spin-bath polarisation.

[Project Page (incl. Code & Data)]
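
Not part of the paper, but a minimal example of the kind of statistical sanity check one might apply to such measured bit streams: the NIST frequency (monobit) test, shown here on pseudo-random stand-in bits:

    import math
    import random

    def monobit_p_value(bits):
        # NIST SP 800-22 frequency (monobit) test: under the null hypothesis of
        # unbiased, independent bits, the returned p-value is uniform on [0, 1].
        n = len(bits)
        s = sum(2 * b - 1 for b in bits)       # map 0/1 to -1/+1 and sum
        s_obs = abs(s) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    # Stand-in for bits measured on the annealer; here only a pseudo-random sequence.
    bits = [random.getrandbits(1) for _ in range(10_000)]
    print(monobit_p_value(bits))   # very small p-values would indicate a biased bit stream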



φ-SfT: Shape-from-Template with a Physics-based Deformation Model

Navami Kairanda, Edith Tretschk, Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik

CVPR 2022

This paper proposes a new SfT approach that explains the observations through the simulation of a physics-based surface deformation model representing forces and material properties. In contrast to previous works, we utilise a differentiable physics-based simulator to regularise the surface evolution. In addition, we regress material properties such as bending coefficients, elasticity, stiffness, and material density. For the evaluation, we use an RGB-D camera to record challenging real surfaces with various material properties and textures, exposed to physical forces. Our approach reconstructs the underlying deformations much more accurately than related methods.

[Project Page (incl. Code & Data)] [arXiv]
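
The analysis-by-synthesis idea of estimating material parameters by differentiating through a simulator can be illustrated with a deliberately tiny example: fitting the stiffness of a damped spring to an observed trajectory. The actual method simulates a full cloth model and compares against RGB-D observations; everything below is a placeholder.

    import torch

    def simulate(stiffness, steps=100, dt=0.01):
        # Unrolled semi-implicit Euler integration of a damped 1D spring.
        x = torch.tensor(1.0)   # initial displacement
        v = torch.tensor(0.0)   # initial velocity
        traj = []
        for _ in range(steps):
            a = -stiffness * x - 0.1 * v   # spring force plus damping
            v = v + dt * a
            x = x + dt * v
            traj.append(x)
        return torch.stack(traj)

    target = simulate(torch.tensor(4.0))            # "observations" from the true stiffness 4.0
    stiffness = torch.tensor(1.0, requires_grad=True)
    opt = torch.optim.Adam([stiffness], lr=0.05)
    for step in range(300):
        loss = ((simulate(stiffness) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(stiffness.item())   # should move towards the true value of 4.0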



Virtual Elastic Objects

Hsiao-yu Chen, Edith Tretschk, Tuur Stuyck, Petr Kadleček, Ladislav Kavan, Etienne Vouga, Christoph Lassner

CVPR 2022

We present Virtual Elastic Objects (VEOs): virtual objects that not only look like their real-world counterparts but also behave like them, even when subject to novel interactions. Achieving this presents multiple challenges: objects have to be captured together with the physical forces acting on them, faithfully reconstructed and rendered, and plausible material parameters have to be found and simulated. The resulting method can handle objects composed of inhomogeneous material and with very different shapes, and it can simulate interactions with other virtual objects. We present our results using a newly collected dataset of 12 objects under a variety of force fields, which will be shared with the community.

[Project Page (incl. Data)] [arXiv]



Advances in Neural Rendering

Ayush Tewari*, Justus Thies*, Ben Mildenhall*, Pratul Srinivasan*, Edith Tretschk, Yifan Wang, Christoph Lassner, Vincent Sitzmann, Ricardo Martin-Brualla, Stephen Lombardi, Tomas Simon, Christian Theobalt, Matthias Niessner, Jonathan T. Barron, Gordon Wetzstein, Michael Zollhöfer, Vladislav Golyanik

Eurographics 2022 (STAR)

This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects and scene editing and composition. While most of these approaches are scene-specific, we also discuss techniques that generalize across object classes and can be used for generative tasks. In addition to reviewing these state-of-the-art methods, we provide an overview of fundamental concepts and definitions used in the current literature. We conclude with a discussion on open challenges and social implications.

[Project Page] [arXiv]



Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video

Edith Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt

ICCV 2021

We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes. Our approach takes RGB images of a dynamic scene as input (e.g., from a monocular video recording), and creates a high-quality space-time geometry and appearance representation. We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g. a "bullet-time" video effect. Our formulation enables dense correspondence estimation across views and time, and compelling video editing applications such as motion exaggeration.

[Project Page (incl. Code)] [arXiv]
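
A heavily simplified sketch of the general recipe, i.e. a per-frame deformation field that warps ray samples into a canonical, static radiance field which is then volume-rendered. Network sizes, conditioning, and all regularisers below are placeholders, not the ones used in the paper.

    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        def __init__(self, d_in, d_out, width=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(d_in, width), nn.ReLU(),
                nn.Linear(width, width), nn.ReLU(),
                nn.Linear(width, d_out))
        def forward(self, x):
            return self.net(x)

    deform = MLP(3 + 8, 3)    # (sample point, per-frame latent code) -> offset
    canonical = MLP(3, 4)     # canonical point -> (rgb, density)

    def render_ray(origin, direction, latent, n_samples=64, near=0.0, far=2.0):
        t = torch.linspace(near, far, n_samples)
        pts = origin + t[:, None] * direction                    # samples along the ray
        pts_canon = pts + deform(torch.cat([pts, latent.expand(n_samples, -1)], -1))
        out = canonical(pts_canon)
        rgb, sigma = torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])
        delta = (far - near) / n_samples
        alpha = 1 - torch.exp(-sigma * delta)                    # per-sample opacity
        trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha + 1e-10])[:-1], 0)
        weights = alpha * trans                                  # standard volume rendering weights
        return (weights[:, None] * rgb).sum(0)

    color = render_ray(torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]), torch.zeros(8))
    print(color)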



PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations

Edith Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Carsten Stoll, Christian Theobalt

ECCV 2020

We present a new mid-level, patch-based surface representation. At the level of patches, objects across different categories share similarities, which leads to more generalizable models. We show that our representation, trained on one category of objects from ShapeNet, can also faithfully represent detailed shapes from any other category. In addition, it can be trained using far fewer shapes than existing approaches. We show several applications of our new representation, including shape interpolation and partial point cloud completion. Due to explicit control over the positions, orientations, and scales of patches, our representation is also more controllable than object-level representations, which enables us to deform encoded shapes non-rigidly.

[Project Page (incl. Code)] [arXiv]
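
A schematic sketch of the patch idea, namely a shared decoder that maps a per-patch latent code and patch-local coordinates to an SDF value, blended across patches with weights that fall off away from each patch centre. Patch rotations and the exact blending and training of the paper are omitted.

    import torch
    import torch.nn as nn

    n_patches, latent_dim = 16, 32
    decoder = nn.Sequential(nn.Linear(latent_dim + 3, 128), nn.ReLU(),
                            nn.Linear(128, 128), nn.ReLU(),
                            nn.Linear(128, 1))
    codes = torch.randn(n_patches, latent_dim)      # per-patch latent codes
    centers = torch.rand(n_patches, 3)              # patch positions (extrinsics)
    scales = torch.full((n_patches,), 0.3)          # patch support radii

    def patch_sdf(points):
        # points: (N, 3) query locations; returns (N,) blended SDF values.
        local = (points[:, None, :] - centers[None]) / scales[None, :, None]   # (N, P, 3)
        inp = torch.cat([codes[None].expand(points.shape[0], -1, -1), local], -1)
        sdf = decoder(inp).squeeze(-1)                                          # (N, P)
        w = torch.exp(-(local ** 2).sum(-1))                                    # soft patch weights
        w = w / (w.sum(-1, keepdim=True) + 1e-8)
        return (w * sdf).sum(-1)

    print(patch_sdf(torch.rand(5, 3)).shape)   # torch.Size([5])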



DEMEA: Deep Mesh Autoencoders for Non-Rigidly Deforming Objects

Edith Tretschk, Ayush Tewari, Michael Zollhöfer, Vladislav Golyanik, Christian Theobalt

ECCV 2020 (Spotlight)

We propose a general-purpose DEep MEsh Autoencoder (DEMEA) which adds a novel embedded deformation layer to a graph-convolutional mesh autoencoder. We demonstrate multiple applications of DEMEA, including non-rigid 3D reconstruction from depth and shading cues, non-rigid surface tracking, as well as the transfer of deformations over different meshes.

[Project Page] [arXiv]
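
For illustration, a minimal embedded deformation layer in the spirit of Sumner et al., where each graph node carries a rotation and a translation and vertices are deformed as a weighted combination of their nearby nodes; the graph-convolutional autoencoder that regresses these node parameters in DEMEA is omitted.

    import torch

    def skew(w):
        # (K, 3) axis-angle vectors -> (K, 3, 3) skew-symmetric matrices.
        zero = torch.zeros_like(w[:, 0])
        return torch.stack([
            torch.stack([zero, -w[:, 2], w[:, 1]], -1),
            torch.stack([w[:, 2], zero, -w[:, 0]], -1),
            torch.stack([-w[:, 1], w[:, 0], zero], -1)], -2)

    def embedded_deformation(verts, nodes, rot_aa, trans, weights):
        # verts: (V, 3), nodes: (K, 3), rot_aa/trans: (K, 3), weights: (V, K), rows sum to 1.
        R = torch.matrix_exp(skew(rot_aa))                               # per-node rotations (K, 3, 3)
        local = verts[:, None, :] - nodes[None]                          # (V, K, 3)
        per_node = torch.einsum('kij,vkj->vki', R, local) + nodes[None] + trans[None]
        return (weights[..., None] * per_node).sum(1)                    # skinned result (V, 3)

    V, K = 100, 8
    verts, nodes = torch.rand(V, 3), torch.rand(K, 3)
    d = ((verts[:, None] - nodes[None]) ** 2).sum(-1)
    weights = torch.softmax(-d / 0.05, dim=-1)       # soft skinning weights from proximity
    deformed = embedded_deformation(verts, nodes, torch.zeros(K, 3), torch.zeros(K, 3), weights)
    print(torch.allclose(deformed, verts, atol=1e-5))   # identity parameters leave the mesh unchanged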



Neural Dense Non-Rigid Structure from Motion with Latent Space Constraints

Vikramjit Singh Sidhu, Edith Tretschk, Vladislav Golyanik, Antonio Agudo, Christian Theobalt

ECCV 2020

We introduce the first dense neural non-rigid structure from motion (N-NRSfM) approach that can be trained end-to-end in an unsupervised manner from 2D point tracks. We formulate the deformation model as an auto-decoder and impose subspace constraints on the recovered latent space function in the frequency domain, which allows us to recover the period of the input sequence. Our method enables multiple applications, including shape compression, completion, and interpolation. Combined with an encoder trained directly on 2D images, we perform scenario-specific monocular 3D shape reconstruction at interactive frame rates.

[Project Page (incl. Code)]
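
As a loose illustration of a frequency-domain constraint on per-frame latent codes (a generic band-limiting penalty, not the paper's exact subspace formulation):

    import torch

    T, d = 120, 8
    latents = torch.randn(T, d, requires_grad=True)    # one code per frame (auto-decoder style)

    def frequency_penalty(z, keep=4):
        spec = torch.fft.rfft(z, dim=0)                # (T//2+1, d) complex spectrum over time
        return spec[keep:].abs().pow(2).mean()         # energy above the first `keep` frequencies

    # Inside a training loop, this term would be added to the reprojection loss.
    loss = frequency_penalty(latents)
    loss.backward()
    print(latents.grad.abs().mean())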



DispVoxNets: Non-Rigid Point Set Alignment with Supervised Learning Proxies

Soshi Shimada, Vladislav Golyanik, Edith Tretschk, Didier Stricker, Christian Theobalt

3DV 2019 (Oral)

We introduce Displacements on Voxels Networks (DispVoxNets), a new kind of supervised-learning framework for non-rigid point set alignment that abstracts away from the point set representation and regresses 3D displacement fields on regularly sampled proxy 3D voxel grids. Thanks to recently released collections of deformable objects with known intra-state correspondences, DispVoxNets learn a deformation model and further priors (e.g., weak point topology preservation) for different object categories such as cloths, human bodies, and faces.

[Project Page] [arXiv]
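
The core representational idea, i.e. a displacement field stored on a regular voxel grid and applied to an unstructured point set by trilinear interpolation, can be sketched as follows; the networks that regress the field from voxelised point sets are omitted, and the field here is just random.

    import torch
    import torch.nn.functional as F

    res = 32
    disp_grid = torch.randn(1, 3, res, res, res) * 0.01    # (N, 3, D, H, W) displacement field

    def displace(points, disp_grid):
        # points: (P, 3) in [0, 1]^3; returns the points moved by the sampled displacements.
        norm = points * 2 - 1                               # grid_sample expects [-1, 1] coordinates
        grid = norm[None, :, None, None, :]                 # (1, P, 1, 1, 3)
        sampled = F.grid_sample(disp_grid, grid, mode='bilinear', align_corners=True)
        return points + sampled[0, :, :, 0, 0].T            # (P, 3)

    pts = torch.rand(1000, 3)
    print(displace(pts, disp_grid).shape)    # torch.Size([1000, 3])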



Sequential Attacks on Agents for Long-Term Adversarial Goals

Edith Tretschk, Seong Joon Oh, Mario Fritz

2nd ACM Computer Science in Cars Symposium, 2018

We show that an adversary can be trained to control a deep reinforcement learning agent. Our technique works on fully trained victim agents and makes them pursue an alternative, adversarial goal when under attack. In contrast to traditional attacks on e.g. image classifiers, our setting involves adversarial goals that may not be immediately reachable but instead may require multiple steps to be achieved.

[pdf] [arXiv]



Teaching

  • Summer semester 2019, 2020, 2021:
    Supervisor for Computer Vision and Machine Learning for Computer Graphics Seminar, Saarland University and MPI for Informatics
  • September/October 2016, 2017, 2018:
    Coach for the Mathematik-Vorkurs für Informatiker (Math preparation course for new CS students), Saarland University
  • September/October 2017:
    Voluntary lecturer for the Mathematik-Vorkurs für Informatiker (Math preparation course for new CS students), Saarland University
  • Winter semester 2017/18:
    Tutor for Grundzüge der Theoretischen Informatik (Theoretical Computer Science), Lecturer: Prof. Dr. Markus Bläser, Saarland University
  • Winter semester 2015/16:
    Tutor for Programmierung 1 (Programming 1), Lecturer: Prof. Dr. Gert Smolka, Saarland University
  • March 2015:
    Tutor for re-exam preparation in Programmierung 1 (Programming 1), Lecturer: Prof. Bernd Finkbeiner, Ph.D., Saarland University

Recent Positions

Education

Misc

  • Reviewer for: CVPR, ECCV, ICCV, ICLR, ICML, NeurIPS (Top Reviewer 2022), SIGGRAPH Asia, SIGGRAPH, TPAMI
  • November 2017:
    Bachelor Award (for the best Bachelor graduates in CS)
  • April 2015 -- March 2017:
    Member of the Bachelor Honors Program (special support program for talented and ambitious Bachelor students in CS)
  • April 2015 -- March 2017:
    Deutschlandstipendium scholarship