Rendering

At each time step of a free-viewpoint video, the body model is rendered in the pose that was computed by the motion capture algorithm. The model geometry is textured with a time-varying multi-view texture created from the input camera views (Fig. 1).
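As a rough illustration of this per-frame pipeline, consider the following minimal C++ sketch; every type and function name in it is an illustrative stand-in, not the system's actual interface.

    #include <vector>

    // Minimal sketch of the per-frame playback loop; all types and function
    // names are illustrative stand-ins, not the system's actual interface.
    struct Pose {};       // joint configuration delivered by motion capture
    struct Image {};      // one input camera frame at the current time step
    struct Viewpoint {};  // the user-selected output view

    struct BodyModel {
        void setPose(const Pose&) {}  // deform the template into the pose
        void renderTextured(const std::vector<Image>&, const Viewpoint&) {}
            // project and blend the camera images (see below), then rasterize
    };

    // One time step of free-viewpoint playback.
    void playFrame(BodyModel& model, const Pose& pose,
                   const std::vector<Image>& cameraFrames,
                   const Viewpoint& out) {
        model.setPose(pose);                      // pose from the tracker
        model.renderTextured(cameraFrames, out);  // multi-view texturing
    }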
Figure 1: Blend between wire-frame view and textured view of the body model.

Multi-view texture creation

We apply projective texturing with the input camera images to create a realistic, time-varying surface appearance of the rendered model. To combine the camera images from different viewpoints, per-vertex blending is performed. Computing the blending weights requires taking the visibility of each vertex in each camera view into account. The spatial blending weights themselves can be computed in a view-independent or a view-dependent way (Fig. 2). The view-independent weight for each camera is the reciprocal of the angle between the vertex normal and the camera viewing direction. The view-dependent weight for each camera is the reciprocal of the angle between the input viewing direction and the output viewing direction. We introduce an additional rescaling factor for the view-independent weights, which gives us more control over the visual appearance. The system can use the view-dependent and view-independent weights separately, or use the view-dependent weights to rescale the view-independent weights.
Figure 2: Spatial per-vertex blending weights computed in a view-independent (left) and a view-dependent (right) way.
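To make the two weighting schemes concrete, the following C++ sketch computes the per-camera weights as described above; the function names and the epsilon guard against division by zero are our additions, and all direction vectors are assumed to be normalized.

    #include <algorithm>
    #include <cmath>

    // Minimal sketch of the two per-vertex weighting schemes; names and
    // the epsilon guard are our additions, not taken from the system.
    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Angle (radians) between two normalized directions, clamped for safety.
    float angleBetween(const Vec3& a, const Vec3& b) {
        return std::acos(std::clamp(dot(a, b), -1.0f, 1.0f));
    }

    const float kEps = 1e-4f;  // avoids division by zero at a zero angle

    // View-independent weight: reciprocal of the angle between the vertex
    // normal and the direction toward the camera.
    float viewIndependentWeight(const Vec3& normal, const Vec3& toCamera) {
        return 1.0f / (angleBetween(normal, toCamera) + kEps);
    }

    // View-dependent weight: reciprocal of the angle between the input
    // camera's viewing direction and the output viewing direction.
    float viewDependentWeight(const Vec3& inputDir, const Vec3& outputDir) {
        return 1.0f / (angleBetween(inputDir, outputDir) + kEps);
    }

    // Combined mode: the view-dependent weight rescales the view-independent
    // one; cameras from which the vertex is invisible get weight zero.
    float combinedWeight(const Vec3& normal, const Vec3& toCamera,
                         const Vec3& inputDir, const Vec3& outputDir,
                         bool visible) {
        if (!visible) return 0.0f;
        return viewIndependentWeight(normal, toCamera) *
               viewDependentWeight(inputDir, outputDir);
    }

Per vertex, the weights of all visible cameras would then be normalized to sum to one before the projected camera colors are blended.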

Prevention of texture artifacts

In some video frames the model may not be perfectly aligned with the image silhouettes in every camera view, which can lead to disturbing visual artifacts in the textures. We use two methods to solve this problem: a modified visibility computation and a texture expansion. The modified visibility computation determines the visibility of a vertex from a set of slightly displaced camera views instead of from the actual camera view alone; this prevents erroneous projections of foreground texture onto occluded, more distant geometry. Second, the texture information at silhouette boundaries is expanded into the background by performing an image dilation on the background-subtracted video frames.
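Both countermeasures can be sketched briefly in C++; the visibility predicate, the offset set, and all names below are our assumptions for illustration, not the system's actual code, and the dilation is limited to a single 3x3 pass. In the real texture expansion the newly covered pixels would also be filled with nearby silhouette colors; here we only grow the mask.

    #include <functional>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Modified visibility (sketch): a vertex is accepted as visible from a
    // camera only if it also stays visible from a set of slightly displaced
    // camera positions. The visibility predicate and the offsets are
    // supplied by the caller; both are illustrative assumptions.
    bool conservativelyVisible(
            const std::function<bool(const Vec3&)>& visibleFrom,
            const Vec3& camPos, const std::vector<Vec3>& offsets) {
        if (!visibleFrom(camPos)) return false;
        for (const Vec3& d : offsets) {
            Vec3 displaced{camPos.x + d.x, camPos.y + d.y, camPos.z + d.z};
            if (!visibleFrom(displaced)) return false;
        }
        return true;
    }

    // Texture expansion (sketch): one 3x3 dilation pass on the binary
    // foreground mask obtained from background subtraction
    // (non-zero = foreground).
    std::vector<unsigned char> dilate3x3(
            const std::vector<unsigned char>& mask, int width, int height) {
        std::vector<unsigned char> out(mask.size(), 0);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                // A pixel becomes foreground if any 3x3 neighbor is.
                for (int dy = -1; dy <= 1 && !out[y * width + x]; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        const int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < width &&
                            ny >= 0 && ny < height &&
                            mask[ny * width + nx]) {
                            out[y * width + x] = 255;
                            break;
                        }
                    }
                }
            }
        }
        return out;
    }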

The results we obtained with our system show that even small details such as wrinkles in clothing are preserved in the multi-view texture, leading to a highly realistic appearance of the free-viewpoint video. The polygonal model and its surface texture form a representation of free-viewpoint video that is highly suitable for today's consumer-grade graphics boards.

Figure 3: Rendered novel viewpoint (large) with two input camera views (small).

