By means of passive optical motion capture, real people can be authentically animated and photo-realistically textured. To import real-world characters into virtual environments, however, their surface reflectance properties must also be known. We describe a video-based modeling approach that captures human shape and motion, as well as reflectance characteristics, from a handful of synchronized video recordings. The presented method recovers spatially varying surface reflectance properties of clothing from multi-view video footage. The resulting model description enables us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, and to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware.
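The core estimation step can be illustrated with a minimal sketch. The project fits a parametric reflectance model per surface point to the radiance samples gathered across cameras and time; the sketch below assumes a simple Phong-style model, known per-sample geometry and lighting, and a least-squares fit. All function names and the exact model are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def phong_brdf(params, n, l, v):
    """Evaluate a simple Phong-style reflectance model.

    params: (k_d, k_s, shininess) -- diffuse albedo, specular gain, exponent
    n, l, v: unit surface normals, light and view directions, shape (S, 3)
    Returns scalar radiance per sample, shape (S,).
    """
    k_d, k_s, shin = params
    n_dot_l = np.clip(np.sum(n * l, axis=1), 0.0, 1.0)
    # Mirror reflection of the light direction about the normal
    r = 2.0 * n_dot_l[:, None] * n - l
    r_dot_v = np.clip(np.sum(r * v, axis=1), 0.0, 1.0)
    return k_d * n_dot_l + k_s * r_dot_v ** shin

def fit_reflectance(samples):
    """Fit BRDF parameters for one surface point from its multi-view,
    multi-frame radiance samples.

    samples: dict with unit vectors 'n', 'l', 'v' of shape (S, 3) and the
    measured radiance 'radiance' of shape (S,) -- a hypothetical layout.
    """
    def residuals(p):
        return phong_brdf(p, samples['n'], samples['l'], samples['v']) \
               - samples['radiance']

    fit = least_squares(residuals, x0=[0.5, 0.2, 10.0],
                        bounds=([0, 0, 1], [1, 1, 200]))
    return fit.x  # fitted (k_d, k_s, shininess)
```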
Figures: Reconstruction improvement through cloth shift compensation (left); overview of the spatio-temporal dynamic reflectance sampling procedure (right).
In the context of this project, we developed a variety of spatio-temporal registration techniques, for instance to properly reconstruct the spatio-temporal appearance of apparel that shifts while the person is moving. We also developed novel reflectance sharing concepts to robustly reconstruct dynamic surface reflectance even if the measured data are biased.
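The reflectance sharing idea can be sketched as follows, reusing `fit_reflectance` from above: samples from all surface points carrying the same material are pooled before fitting, so each fit sees a much denser angular sampling than any single point provides. Grouping by a per-point material label is an assumption here, not the project's actual clustering criterion.

```python
import numpy as np

def fit_with_sharing(points, material_id):
    """Pool samples from every surface point labelled `material_id` and
    fit one shared parameter set to the combined data.

    points: list of per-point sample dicts as used above, each with an
    extra 'mat' label (hypothetical bookkeeping).
    """
    pooled = {key: np.concatenate([p[key] for p in points
                                   if p['mat'] == material_id])
              for key in ('n', 'l', 'v', 'radiance')}
    # The pooled samples cover many more normal/light/view configurations
    # than a single point, which stabilizes the specular estimate.
    return fit_reflectance(pooled)
```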
We have augmented our original model-based pipeline for capturing and rendering free-viewpoint videos so that we estimate not only dynamic texture information but also dynamic surface reflectance properties. This way, free-viewpoint videos can also be rendered under novel virtual lighting conditions.
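Once reflectance parameters are fitted, relighting reduces to re-evaluating the reflectance model with a new light. A minimal sketch, reusing `phong_brdf` from above and assuming a single novel directional light; the function name and per-vertex frame layout are illustrative assumptions:

```python
import numpy as np

def relight_frame(vertex_params, normals, view_dirs, light_dir, light_rgb):
    """Shade one free-viewpoint-video frame under a novel directional light.

    vertex_params: (V, 3) fitted (k_d, k_s, shininess) per vertex
    normals, view_dirs: (V, 3) unit vectors for the current frame/viewpoint
    light_dir: (3,) direction towards the light; light_rgb: (3,) light color
    Returns (V, 3) RGB radiance per vertex.
    """
    l = np.broadcast_to(light_dir / np.linalg.norm(light_dir), normals.shape)
    # Evaluate each vertex's fitted BRDF with the new light direction.
    shading = np.array([
        phong_brdf(p, normals[i:i + 1], l[i:i + 1], view_dirs[i:i + 1])[0]
        for i, p in enumerate(vertex_params)
    ])
    return shading[:, None] * np.asarray(light_rgb)[None, :]
```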
Christian Theobalt (theobalt@mpi-sb.mpg.de) | Naveed Ahmed (nahmed@mpi-sb.mpg.de)
Max-Planck-Institut für Informatik, Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany