Authors
Tiancheng Sun, Kai-En Lin, Sai Bi, Zexiang Xu, Ravi Ramamoorthi
University of California, San Diego; Adobe Research
Summary
This paper introduces the Neural Light-transport Field (NeLF), which infers light transport and volume density in 3D space from a sparse set of input views. NeLF enables joint relighting and view synthesis of real portraits from only five input images.
Abstract
Human portraits exhibit various appearances when observed from different views under different lighting conditions. We can easily imagine what the face will look like in another setup, but computer algorithms still fail on this problem given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) produce a portrait from a new camera view under new environmental lighting. Our system is trained on a large number of synthetic models and generalizes to different synthetic and real portraits under various lighting conditions. Our method performs simultaneous view synthesis and relighting given multi-view portraits as input, and achieves state-of-the-art results.
Contribution
- A novel neural representation that models scene appearance as light transport functions and enables relighting for neural volumetric rendering (see the relighting sketch after this list)
- A domain adaptation module to enhance the generalizability of the network trained on rendered images
- Realistic rendering results for joint relighting and view synthesis of real portraits from only five captured images
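
Modeling appearance as a light transport function means rendering under a new environment reduces to an inner product between a per-point transport vector and the lighting. The sketch below illustrates this in the spirit of precomputed radiance transfer; the discretization of the environment map into L pixels and the array shapes are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def relight_point(transport, env_map):
    """Outgoing RGB radiance at one point under a given environment.

    transport: (L, 3) RGB transport coefficient per lighting direction
               (hypothetical discretization over L environment-map pixels)
    env_map:   (L, 3) flattened environment map, same discretization
    """
    # Radiance is linear in the lighting, so rendering reduces to an
    # inner product over lighting directions.
    return np.sum(transport * env_map, axis=0)
```

Because the output is linear in the lighting, swapping in a new `env_map` relights the point without re-querying the network.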
Related Works
Portrait Appearance; Relighting; View Synthesis; Neural Rendering
Comparisons
SIPR, IBRNet
Overview
The proposed algorithm takes multi-view portraits as input and predicts the source environment map, along with the light transport and volume density at each query point in 3D space. The predicted light transport and volume density are then used to perform joint view synthesis and relighting, as sketched below.
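
To make the overview concrete, here is a minimal sketch of how per-point light transport and density could be combined along a camera ray. The `query_nelf` interface, the uniform sampling scheme, and the array shapes are hypothetical; this is NeRF-style alpha compositing under those assumptions, not the authors' implementation.

```python
import numpy as np

def render_ray(query_nelf, env_map, ray_o, ray_d, near, far, n_samples=64):
    """Volume-render one ray under a new environment map.

    query_nelf(points) is assumed to return, for n sample points, a
    volume density sigma of shape (n,) and a light-transport tensor of
    shape (n, L, 3) over the L environment-map pixels.
    env_map: (L, 3) flattened environment map.
    """
    t = np.linspace(near, far, n_samples)      # sample depths along the ray
    pts = ray_o + t[:, None] * ray_d           # (n, 3) sample positions
    sigma, transport = query_nelf(pts)

    # Relight each sample: inner product of its transport with the lighting.
    rgb = np.einsum('nlc,lc->nc', transport, env_map)   # (n, 3)

    # Standard volumetric compositing of the relit samples.
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))  # segment lengths
    alpha = 1.0 - np.exp(-sigma * delta)                # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans                             # compositing weights
    return (weights[:, None] * rgb).sum(axis=0)         # final RGB
```

Note that relighting happens per sample before compositing, so the same predicted field supports any combination of novel view and novel lighting.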