Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination

(ECCV 2022 Oral)

Abstract

Given a set of images of a scene, the re-rendering of this scene from novel views and lighting conditions is an important and challenging problem in Computer Vision and Graphics. On the one hand, most existing works in Computer Vision usually impose many assumptions regarding the image formation process, e.g., direct illumination and predefined materials, to make scene parameter estimation tractable. On the other hand, mature Computer Graphics tools allow modeling of complex photo-realistic light transport given all the scene parameters. Combining these approaches, we propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function, which implicitly handles global illumination effects using novel environment maps. Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition. To disambiguate the task during training, we tightly integrate a differentiable path tracer in the training process and propose a combination of a synthesized OLAT (one-light-at-a-time) loss and a real image loss. Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art, and thus our re-rendering results are also more realistic and accurate.
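
To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of relighting with a learned radiance transfer field: an MLP predicts per-point, view-dependent transfer coefficients over an environment-map basis, so relighting under a novel environment map reduces to a linear combination with the lighting coefficients. The class name, basis size, and network layout are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class NeuralTransferField(nn.Module):
    # Hypothetical sketch: maps a surface point and view direction to RGB
    # light-transport coefficients over an environment-map basis.
    def __init__(self, n_basis=128, hidden=256):
        super().__init__()
        self.n_basis = n_basis
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * n_basis),
        )

    def forward(self, x, view_dir, env_coeffs):
        # x, view_dir: (..., 3) surface point and viewing direction.
        # env_coeffs: (n_basis, 3) environment map projected onto the basis.
        t = self.mlp(torch.cat([x, view_dir], dim=-1))
        t = t.view(*x.shape[:-1], self.n_basis, 3)
        # PRT: outgoing radiance is linear in the lighting, so relighting is
        # a dot product of transfer coefficients with lighting coefficients.
        return (t * env_coeffs).sum(dim=-2)

# Relight one surface point under a novel environment map.
field = NeuralTransferField()
rgb = field(torch.rand(1, 3), torch.rand(1, 3), torch.rand(128, 3))

In training, the combined supervision described in the abstract would amount to summing a reconstruction loss against OLAT images synthesized by the differentiable path tracer and a reconstruction loss against the real captured images.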

Downloads

Video: HD (MP4, 46 MB)

Frequently Asked Questions

Q: Why use spatially-constant specularity in the material estimation stage instead of a spatially-varying BRDF?
A: We investigate the quality of material estimation from multi-view images with a spatially-varying specularity model, using the state-of-the-art differentiable path tracer Mitsuba 3, and obtain the relighting result shown below. We still observe very strong artifacts in the form of mid-frequency noise, as well as incorrectly recovered reflections of the engine on the wings, as is evident from the 360° video below. We find that this is a common type of artifact for non-convex glossy objects with strong (higher-order) reflections; these are the same artifacts that led us to develop our NRTF algorithm, which successfully addresses them.

Mitsuba 3 optimization and relighting with spatially-varying specularity
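
For context, this experiment follows the standard Mitsuba 3 inverse-rendering loop, sketched below under stated assumptions: the scene file, the parameter key of the spatially-varying specular texture, the reference image, and the hyperparameters are placeholders, not our exact setup.

import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # or 'llvm_ad_rgb' on CPU

scene = mi.load_file('scene.xml')  # placeholder scene description
params = mi.traverse(scene)

# Hypothetical parameter key of a spatially-varying specular texture.
key = 'object.bsdf.specular_reflectance.data'

opt = mi.ad.Adam(lr=0.02)
opt[key] = params[key]
params.update(opt)

# Placeholder: one captured photograph of the scene from this viewpoint.
ref = mi.TensorXf(mi.Bitmap('view_000.exr'))

for it in range(200):
    img = mi.render(scene, params, spp=16, seed=it)
    loss = dr.mean(dr.sqr(img - ref))  # L2 image loss against the capture
    dr.backward(loss)                  # gradients via differentiable path tracing
    opt.step()
    params.update(opt)

In the full experiment the loss is of course accumulated over all captured multi-view images rather than a single reference view.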


Citation

@inproceedings{lyu2022nrtf,
  title     = {Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination},
  author    = {Lyu, Linjie and Tewari, Ayush and Leimkuehler, Thomas and Habermann, Marc and Theobalt, Christian},
  booktitle = {ECCV},
  year      = {2022},
}

Contact

For questions or clarifications, please get in touch with:
Linjie Lyu
llyu@mpi-inf.mpg.de
