Max-Planck-Institut für Informatik

Projects



Reconstructing Human Shape and Motion


Investigators: E. de Aguiar and C. Theobalt
Supervisors: M. Magnor and H.-P. Seidel

In model-based free-viewpoint video, a detailed representation of the time-varying geometry of a real-world scene is used to generate renditions of it from novel viewpoints. In [1], we present a method for reconstructing such a dynamic geometry model of a human actor from multi-view video. In a two-step procedure, the spatio-temporally consistent shape and poses of a generic human body model are first estimated by means of a silhouette-based analysis-by-synthesis method. In a second step, subtle details in surface geometry that are specific to each time step are recovered by enforcing a color-consistency criterion. In this way, we generate a realistic representation of the time-varying geometry of a moving person that also reproduces these dynamic surface variations.
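
The pose-estimation step can be illustrated as a search over pose parameters that minimizes the mismatch between rendered model silhouettes and the silhouettes segmented from the video. The following toy sketch is not the paper's implementation: the one-parameter "renderer" and mask sizes are illustrative assumptions, standing in for the full articulated body model and multi-camera setup.

```python
import numpy as np

def silhouette_error(model_masks, video_masks):
    """Sum of per-camera XOR pixel counts between rendered model
    silhouettes and observed silhouettes (lower = better pose fit)."""
    return sum(int(np.logical_xor(m, v).sum())
               for m, v in zip(model_masks, video_masks))

def render(offset, size=16):
    # Hypothetical one-parameter "renderer": a square silhouette
    # shifted horizontally by the pose parameter `offset`.
    mask = np.zeros((size, size), dtype=bool)
    mask[4:12, 4 + offset:12 + offset] = True
    return mask

observed = [render(2)]  # stand-in for a segmented video frame
best = min(range(4), key=lambda o: silhouette_error([render(o)], observed))
```

In the actual method this analysis-by-synthesis search runs over the full kinematic pose of the body model and over several calibrated camera views simultaneously.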

References:

[1] E. de Aguiar, C. Theobalt, M. Magnor, H.-P. Seidel, Reconstructing Human Shape and Motion from Multi-View Video. 2nd European Conference on Visual Media Production (CVMP), pp. 42-49. London, UK, 2005. [pdf]

@INPROCEEDINGS{deAguiarCVMP05,
AUTHOR = {de Aguiar, Edilson and Theobalt, Christian and Magnor, Marcus and Seidel, Hans-Peter},
TITLE = {Reconstructing Human Shape and Motion from Multi-View Video},
BOOKTITLE = {2nd European Conference on Visual Media Production (CVMP)},
PUBLISHER = {The IEE},
YEAR = {2005},
PAGES = {42--49},
ADDRESS = {London, UK},
MONTH = {December},
ISBN = {0-86341-583-0},
}

Automatic Generation of Human Avatars

Investigators: E. de Aguiar, N. Ahmed and C. Theobalt
Supervisors: M. Magnor and H.-P. Seidel

In multi-user virtual environments, such as online games or 3D chat rooms, real-world people interact via digital avatars. In order to make the step from the real world onto the virtual stage convincing, the digital equivalent of the user has to be personalized: it should reflect the shape and proportions, the kinematic properties, and the textural appearance of its real-world counterpart. In [1] we present a novel, easy-to-use, and fully automatic approach to create a personalized avatar from multi-view video data of a moving person. An adaptable generic human body model is scaled and deformed until its shape and skeletal dimensions match the real human shown in the video footage. A consistent surface texture for the model is generated using multi-view video frames from different camera views and different body poses. With our proposed method, photo-realistic human avatars can be generated robustly.
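
The multi-view texture-generation step can be sketched as a per-texel blend in which each camera's color sample is weighted by how frontally that camera observes the surface. This is a minimal sketch under an assumed cosine weighting, not the paper's exact blending scheme:

```python
import numpy as np

def blend_texture(samples):
    """Blend per-view color samples for one surface point. Each sample is
    (rgb_color, cos_angle), where cos_angle is the cosine of the angle
    between the surface normal and the view direction; back-facing views
    (negative cosine) are clamped to zero and thus ignored."""
    colors = np.array([c for c, _ in samples], dtype=float)
    weights = np.array([max(w, 0.0) for _, w in samples])
    if weights.sum() == 0.0:
        return np.zeros(3)
    return colors.T @ weights / weights.sum()

# Two cameras: one sees the point frontally (cosine 1.0), one at a
# grazing angle (cosine 0.1); the frontal view dominates the blend.
texel = blend_texture([((1.0, 0.0, 0.0), 1.0), ((0.0, 0.0, 1.0), 0.1)])
```

Blending over frames with different body poses, as in the paper, additionally fills in texels that no single pose exposes to any camera.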

References:

[1] N. Ahmed, E. de Aguiar, C. Theobalt, M. Magnor, H.-P. Seidel, Automatic Generation of Personalized Human Avatars from Multi-View Video. Proceedings of ACM VRST '05, pp. 257-260. Monterey, USA, 2005. [pdf]

@INPROCEEDINGS{deAguiarVRST05,
AUTHOR = {Ahmed, Naveed and de Aguiar, Edilson and Theobalt, Christian and Magnor, Marcus and Seidel, Hans-Peter},
TITLE = {Automatic Generation of Personalized Human Avatars from Multi-View Video},
BOOKTITLE = {VRST '05: Proceedings of the ACM symposium on Virtual reality software and technology},
PUBLISHER = {ACM},
YEAR = {2005},
ORGANIZATION = {Association for Computing Machinery (ACM)},
PAGES = {257--260},
ADDRESS = {Monterey, USA},
MONTH = {December},
ISBN = {1-59593-098-1},
}

Relightable 3D Video


Investigators: E. de Aguiar, N. Ahmed, C. Theobalt, G. Ziegler and H. Lensch
Supervisors: M. Magnor and H.-P. Seidel

3D videos of human actors can be faithfully reconstructed from multiple synchronized video streams by means of a model-based analysis-by-synthesis approach. The reconstructed videos play back in real time, and the virtual viewpoint onto the scene can be changed arbitrarily. In this way, authentically animated, photo-realistically and view-dependently textured models of real people can be created that look real under fixed illumination conditions. To import real-world characters into virtual environments, however, surface reflectance properties must also be known. We have thus developed a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method [1][2][3] is able to recover spatially varying reflectance properties of clothes by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting model description enables us to match the animated subject's appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing.
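
The core idea of exploiting the time-varying surface orientation can be sketched with the simplest possible reflectance model: if a surface point obeys a Lambertian law I_t = k_d * max(n_t . l_t, 0), its diffuse albedo k_d can be recovered by least squares from the intensity samples gathered as the point changes orientation over the sequence. This is a deliberately reduced stand-in (the actual method recovers spatially varying, non-Lambertian reflectance); the synthetic data below is an assumption for illustration.

```python
import numpy as np

def fit_lambertian_albedo(normals, light_dirs, intensities):
    """Least-squares estimate of a diffuse albedo k_d from samples
    I_t = k_d * max(n_t . l_t, 0), one sample per time step."""
    ndotl = np.maximum(np.einsum('ij,ij->i', normals, light_dirs), 0.0)
    return float(ndotl @ intensities / (ndotl @ ndotl))

# Synthetic check: a point with albedo 0.7 observed under many
# orientations (random unit normals, fixed overhead light).
rng = np.random.default_rng(0)
n = rng.normal(size=(50, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
l = np.tile([0.0, 0.0, 1.0], (50, 1))
I = 0.7 * np.maximum(np.einsum('ij,ij->i', n, l), 0.0)
albedo = fit_lambertian_albedo(n, l, I)
```

The key point the sketch shares with the paper is that motion itself supplies the varying normal/light/view configurations that make the per-point reflectance fit well-posed.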

References:

[1] C. Theobalt, N. Ahmed, E. de Aguiar, G. Ziegler, H. Lensch, M. Magnor, H.-P. Seidel, Joint Motion and Reflectance Capture for Creating Relightable 3D Videos, Technical Report MPI-I-2005-4-004, Max-Planck-Institut fuer Informatik, 2005. [pdf]

@TECHREPORT{deAguiar_TR05,
AUTHOR = {Theobalt, Christian and Ahmed, Naveed and de Aguiar, Edilson and Ziegler, Gernot and Lensch, Hendrik P. A. and Magnor, Marcus and Seidel, Hans-Peter},
TITLE = {Joint Motion and Reflectance Capture for Creating Relightable 3D Videos},
YEAR = {2005},
TYPE = {Research Report},
INSTITUTION = {Max-Planck-Institut fuer Informatik},
NUMBER = {MPI-I-2005-4-004},
PAGES = {17},
ADDRESS = {Saarbruecken, Germany},
MONTH = {April},
}

[2] C. Theobalt, N. Ahmed, E. de Aguiar, G. Ziegler, H. Lensch, M. Magnor, H.-P. Seidel, Joint Motion and Reflectance Capture for Relightable 3D Video, Technical Sketch, ACM SIGGRAPH, Los Angeles, 2005. [pdf]

[3] C. Theobalt, N. Ahmed, E. de Aguiar, G. Ziegler, H. Lensch, M. Magnor, H.-P. Seidel, Relightable 3D Video, Poster at Symposium on Computational Photography and Video, MIT, Cambridge, USA, 2005. [pdf]


Automatic Skeleton Reconstruction

Investigators: E. de Aguiar, C. Theobalt and H. Theisel
Supervisors: M. Magnor and H.-P. Seidel

In computer animation, human motion capture from video is a widely used technique to acquire motion parameters. The acquisition process typically requires an intrusion into the scene in the form of optical markers, which are used to estimate the motion parameters as well as the kinematic structure of the performer. Marker-free optical motion capture approaches exist, but due to their dependence on a specific type of a priori model they can hardly be used to track other subjects, e.g. animals. To bridge the gap between the generality of marker-based methods and the applicability of marker-free methods, we study a flexible non-intrusive approach that estimates both a kinematic model and its motion parameters from a sequence of voxel volumes. The volume sequences are reconstructed from multi-view video data by means of a shape-from-silhouette technique. The method [1] is well-suited for, but not limited to, motion capture of human subjects, as presented in [2].
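
One building block of estimating a kinematic model without an a priori skeleton can be sketched as a rigidity test: two tracked body parts belong to the same rigid segment of the kinematic chain exactly when the distance between them stays constant over the sequence. The sketch below uses hypothetical centroid trajectories rather than real voxel data, and a hand-picked tolerance; it illustrates the criterion, not the authors' full reconstruction pipeline.

```python
import numpy as np

def is_rigid_link(traj_a, traj_b, tol=1e-2):
    """Heuristic rigid-link test: True if the distance between two
    centroid trajectories (T x 3 arrays) is (nearly) constant over time,
    i.e. the two parts move as one rigid segment."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)
    return float(d.std()) < tol

# Toy arm: the elbow rotates about a fixed shoulder (constant distance),
# while the wrist additionally translates, so shoulder-wrist varies.
t = np.linspace(0.0, np.pi / 2, 30)
shoulder = np.zeros((30, 3))
elbow = np.stack([np.cos(t), np.sin(t), np.zeros(30)], axis=1)
wrist = 2.0 * elbow + np.stack([np.zeros(30), np.zeros(30), t], axis=1)
```

Here `is_rigid_link(shoulder, elbow)` holds while `is_rigid_link(shoulder, wrist)` does not, which is the cue for placing a joint between the two segments.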

References:

[1] E. de Aguiar, C. Theobalt, M. Magnor, H. Theisel, H.-P. Seidel: M^3: Marker-free Model Reconstruction and Motion Tracking from 3D Voxel Data. In Proceedings of Pacific Graphics 2004, Seoul, Korea, pp. 101-110. [pdf]

@INPROCEEDINGS{deAguiarPG04,
AUTHOR = {de Aguiar, Edilson and Theobalt, Christian and Magnor, Marcus and Theisel, Holger and Seidel, Hans-Peter},
EDITOR = {Cohen-Or, Daniel and Ko, Hyeong-Seok and Terzopoulos, Demetri and Warren, Joe},
TITLE = {M$^3$: Marker-free Model Reconstruction and Motion Tracking from 3D Voxel Data},
BOOKTITLE = {12th Pacific Conference on Computer Graphics and Applications, PG 2004},
PUBLISHER = {IEEE},
YEAR = {2004},
ORGANIZATION = {IEEE},
PAGES = {101--110},
ADDRESS = {Seoul, Korea},
MONTH = {October},
ISBN = {0-7695-2234-3},
}

[2] C. Theobalt, E. de Aguiar, M. Magnor, H. Theisel, H.-P. Seidel: Marker-free Kinematic Skeleton Estimation from Sequences of Volume Data. Proc. of ACM Symposium on Virtual Reality Software and Technology (VRST), pp. 57-64, Hong Kong, China, 2004. [pdf]

@INPROCEEDINGS{deAguiarVRST04,
AUTHOR = {Theobalt, Christian and de Aguiar, Edilson and Magnor, Marcus and Theisel, Holger and Seidel, Hans-Peter},
EDITOR = {Lau, Rynson and Baciu, George},
TITLE = {Marker-free Kinematic Skeleton Estimation from Sequences of Volume Data},
BOOKTITLE = {ACM Symposium on Virtual Reality Software and Technology (VRST 2004)},
PUBLISHER = {ACM},
YEAR = {2004},
ORGANIZATION = {Association for Computing Machinery (ACM)},
PAGES = {57--64},
ADDRESS = {Hong Kong, China},
MONTH = {November},
ISBN = {1-58113-907-1},
}

Character Animation from a MOCAP Database

Investigator: E. de Aguiar
Supervisors: C. Theobalt and H.-P. Seidel

With the advent of photo-realism in computer graphics, life-like character animations that capture the fine details of a motion have become more important. We have studied methods [1] that use the information contained in a motion capture database to assist in the creation of a realistic character animation. Starting with an animation sketch, in which only a small number of keyframes for some degrees of freedom are set, the motion capture data is used to enhance the initial motion.
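
The enhancement idea can be sketched in its simplest form: among the database clips, select the one that best agrees with the sparse keyframe sketch, and use its full-rate samples as the detailed motion. The one-degree-of-freedom clips and the plain sum-of-absolute-differences score below are illustrative assumptions, not the thesis's actual matching scheme.

```python
import numpy as np

def pick_best_clip(key_times, key_values, database):
    """Return the database clip (1-D array of per-frame joint values)
    whose values at the keyframe times best match the sketched keyframes,
    scored by sum of absolute differences."""
    errors = [np.abs(clip[key_times] - key_values).sum() for clip in database]
    return database[int(np.argmin(errors))]

# Tiny database of two 5-frame "clips" for one degree of freedom.
db = [np.array([0.0, 1.0, 2.0, 3.0, 4.0]),
      np.array([0.0, 2.0, 4.0, 2.0, 0.0])]
# Sketch: the joint value should return to 0 by the last frame.
detailed = pick_best_clip(np.array([0, 4]), np.array([0.0, 0.0]), db)
```

A fuller system would match windows of many degrees of freedom and blend the retrieved motion with the interpolated sketch rather than replacing it outright.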

References:

[1] E. de Aguiar: Character Animation from a Motion Capture Database. Master's thesis, Saarland University, 2003. [pdf]

@MASTERSTHESIS{deAguiar03_Master,
AUTHOR = {de Aguiar, Edilson},
TITLE = {Character Animation from a Motion Capture Database},
SCHOOL = {Universit{\"a}t des Saarlandes},
YEAR = {2003},
MONTH = {November},
}
