
Helge Rhodin

Postdoctoral researcher at EPFL Lausanne, CVLAB.
Alumnus of the Graphics, Vision, and Video Group, Department 4 (Computer Graphics).
Please see my personal homepage, helge.rhodin.de.


Projects until 2016; please see my personal homepage, helge.rhodin.de, for my recent work

EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras

We propose an egocentric motion capture method that estimates full skeleton pose from a lightweight pair of fisheye cameras attached to a helmet or VR headset. Our method overcomes limitations of static cameras and body suits by offering unconstrained recording volumes and reconstruction in crowded spaces with minimal instrumentation.

Model-based Outdoor Performance Capture

We propose a new model-based method to accurately reconstruct human performances captured outdoors in a multi-camera setup. In particular, for surface-shape refinement we propose a new combination of 3D Gaussians designed to align the projected model with likely silhouette contours, without explicit segmentation or edge detection.

General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues

In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation (skeleton, volumetric shape, appearance, and optionally a body surface) and estimates the actor's motion from multi-view video input only. We minimize a new edge-based, analytically differentiable alignment energy inspired by volume ray casting in an absorbing medium.

A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation

Proper handling of occlusions is a major challenge for generative reconstruction methods. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. The underlying idea is to represent opaque objects by a translucent medium with a smooth Gaussian density distribution, which turns visibility into a smooth phenomenon.
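The idea of smooth visibility can be illustrated with a toy example: along a ray, the line integral of a single isotropic Gaussian density has a closed form in terms of the error function, and the resulting Beer-Lambert transmittance varies smoothly as the blob moves relative to the ray. This is a minimal sketch only, not the paper's actual scene model (which uses sums of anisotropic Gaussians); all function names and parameters here are illustrative.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def transmittance(o, d, mu, sigma, c, t_max):
    """Beer-Lambert transmittance (visibility) of the point o + t_max*d
    seen from o, behind a single isotropic Gaussian density blob of
    peak density c and standard deviation sigma centered at mu.
    d must be a unit vector; the line integral is closed-form via erf."""
    diff = [m - x for m, x in zip(mu, o)]
    t_star = dot(diff, d)                       # closest approach along the ray
    r2 = dot(diff, diff) - t_star * t_star      # squared ray-to-center distance
    s = sigma * math.sqrt(2.0)
    # integral of c * exp(-||o + t*d - mu||^2 / (2 sigma^2)) dt over [0, t_max]
    integral = (c * math.exp(-r2 / (2.0 * sigma ** 2))
                * sigma * math.sqrt(math.pi / 2.0)
                * (math.erf((t_max - t_star) / s) - math.erf(-t_star / s)))
    return math.exp(-integral)                  # fraction of light transmitted

# A blob at the origin; rays start at z = -5 and travel along +z.
d = (0.0, 0.0, 1.0)
vis_through = transmittance((0.0, 0.0, -5.0), d, (0.0, 0.0, 0.0), 0.3, 5.0, 10.0)
vis_grazing = transmittance((1.0, 0.0, -5.0), d, (0.0, 0.0, 0.0), 0.3, 5.0, 10.0)
```

Because the transmittance is an analytic function of the blob center, it is differentiable everywhere with respect to object pose; a ray through the blob is strongly attenuated, while one passing far from it is barely affected, with a smooth transition in between.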

Generalizing Wave Gestures from Sparse Examples for Real-time Character Control

Motion-tracked real-time character control is important for games and VR. We overcome limitations of current retargeting and pose mapping solutions by exploiting the inherent structure of periodic motions and by disambiguating simultaneous gestures.

Real-time Hand Tracking Using a Sum of Anisotropic Gaussians Model

We propose a new real-time marker-less hand tracking approach for multi-view RGB camera setups. The main contributions are an ellipsoid- and Gaussian-based hand model with high modelling flexibility and a full perspective projection model with analytical gradients. Our method achieves better accuracy than previous methods in the field and runs in real time at 25 fps.

Interactive Motion Mapping for Real-time Character Control

We control virtual characters by natural motions in real time. Input to our algorithm is the user's body motion captured with low-cost sensors such as the Kinect. We transfer the user's motion to arbitrary characters, for instance a sheep, a caterpillar, and a horse. No skeleton or rigging is required. The correspondence between input and character is learned through an easy-to-use interface.

Shape from Texture

In this project we reconstruct surface shape from texture gradients. We relate the distortion of regular textures under perspective projection to the surface shape via closed-form equations. Efficiency is ensured by a separable texture analysis. This project is realized in cooperation with Michael Breuß, head of the Applied Mathematics and Computer Vision group.


AnySL

The goal of AnySL is the development of a unified shading system that is independent of source language, target architecture, and rendering engine, without sacrificing runtime performance. My focus was on GPU code generation. The AnySL project is joint work between the Compiler Design Lab of Prof. Sebastian Hack and the Computer Graphics Group of Prof. Philipp Slusallek.

Roboking Competition

As Team Hamburg, we twice won the international Roboking competition hosted by the Chemnitz University of Technology. The goal was to construct and operate an autonomous robot capable of fulfilling the given objectives without human intervention. The team was mentored by Andreas Rhodin and consisted of Julian Bende, Paul Bröker, Daniel Reck, and Helge Rhodin.

Publications until 2016; please see my personal homepage, helge.rhodin.de, for my recent work