Authors
Duan Gao, Guojun Chen, Yue Dong, Pieter Peers, Kun Xu, Xin Tong
BNRist, Tsinghua University; Microsoft Research Asia; College of William & Mary
Summary
In this paper we present a novel image-based method for 360° free-viewpoint relighting from unstructured photographs that borrows ideas from model-based approaches, without the stringent accuracy demands on the components, and that leverages neural networks to reduce the complexity of typical image-based acquisition procedures.
Abstract
We present deferred neural lighting, a novel method for free-viewpoint relighting from unstructured photographs of a scene captured with handheld devices. Our method leverages a scene-dependent neural rendering network for relighting a rough geometric proxy with learnable neural textures. Key to making the rendering network lighting-aware are radiance cues: global illumination renderings of a rough proxy geometry of the scene for a small set of basis materials, lit by the target lighting. As such, the light transport through the scene is never explicitly modeled, but resolved at rendering time by a neural rendering network. We demonstrate that the neural textures and neural renderer can be trained end-to-end from unstructured photographs captured with a dual handheld-camera setup that concurrently captures the scene while it is lit by only one of the cameras' flash lights. In addition, we propose a novel augmentation-refinement strategy that exploits the linearity of light transport to extend the relighting capabilities of the neural rendering network to other lighting types (e.g., environment lighting) beyond the lighting used during acquisition (i.e., flash lighting). We demonstrate our deferred neural lighting solution on a variety of real-world and synthetic scenes exhibiting a wide range of material properties, light transport effects, and geometrical complexity.
Contribution
- a novel end-to-end system that enables full 360° free-viewpoint relighting from unstructured handheld captured photographs for a wide range of material properties and light transport effects
- a deferred neural lighting renderer suitable for a wide range of lighting conditions
- a novel handheld acquisition scheme that only requires two cameras
- an augmentation method for extending the relighting capabilities of our neural rendering network beyond the acquisition lighting
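The augmentation strategy above rests on the linearity of light transport: the image of a scene under a sum of lighting conditions equals the sum of its images under each condition. The following is a minimal NumPy sketch of that property on a toy transport matrix; `relight`, `T`, and the basis weights are illustrative stand-ins, not the paper's actual pipeline.

```python
import numpy as np

def relight(scene_response, light_basis_weights):
    """Relit image under a lighting condition expressed as a weighted sum
    of point-light basis conditions. In this toy model, each column of
    `scene_response` is the (flattened) image under one basis light."""
    return scene_response @ light_basis_weights

# Toy scene: 4 pixels, 3 basis (flash) lights.
rng = np.random.default_rng(0)
T = rng.random((4, 3))          # light transport matrix (pixels x lights)

w1 = np.array([1.0, 0.0, 0.0])  # flash light 1 only
w2 = np.array([0.0, 2.0, 0.5])  # a mixture, e.g. an environment map
                                 # projected onto the point-light basis

# Linearity: relighting under (w1 + w2) equals the sum of the individual
# relit images -- the property that lets a renderer trained under flash
# lighting be extended to richer lighting by combining basis responses.
lhs = relight(T, w1 + w2)
rhs = relight(T, w1) + relight(T, w2)
assert np.allclose(lhs, rhs)
```

Because of this linearity, training data captured under single flash lights can be linearly combined to synthesize plausible supervision for more complex target lighting.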
Related Works
Appearance Modeling; Joint Modeling of Shape and Appearance; Image-based Rendering; Image-based Relighting
Overview
Overview of Deferred Neural Lighting. First, a multi-channel scene-dependent neural texture is projected to the desired viewpoint p via a rough geometric proxy G of the scene. Next, radiance cues are synthesized by rendering a small set of scene-independent basis materials under the target lighting l onto the rough geometry G. Finally, the radiance cues and projected neural textures are combined (via a per-pixel multiplication) and passed to a scene-dependent neural rendering network R that produces the final relit appearance of the scene. Additionally, to facilitate compositing the relit appearance, we also predict a binary mask from the projected neural textures.
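The data flow above can be sketched as follows. This is a shape-level NumPy mock-up, not the paper's implementation: the channel count, the pairing of texture channels with the cues' RGB channels, and the stand-in renderer `R` are all assumptions made for illustration.

```python
import numpy as np

# Toy shapes for one 8x8 view.
H, W, C = 8, 8, 9       # C-channel neural texture (C is scene-dependent)
B = 3                   # number of scene-independent basis materials

# Step 1: neural texture projected to viewpoint p via the proxy geometry G.
neural_texture_proj = np.random.rand(H, W, C)

# Step 2: radiance cues -- global illumination renderings of G with the
# B basis materials under the target lighting l (RGB each).
radiance_cues = np.random.rand(H, W, B, 3)

# Step 3: combine by per-pixel multiplication. One simple pairing
# (an assumption for this sketch): distribute the C texture channels
# over the B cues' RGB channels, so C = B * 3.
assert C == B * 3
cues_flat = radiance_cues.reshape(H, W, B * 3)
net_input = neural_texture_proj * cues_flat        # (H, W, B*3)

# Step 4: the scene-dependent renderer R maps the combined features to
# the relit RGB image (a trivial stand-in for the trained network).
def R(x):
    return np.tanh(x.sum(axis=-1, keepdims=True)).repeat(3, axis=-1)

relit = R(net_input)
assert relit.shape == (H, W, 3)
```

Because the lighting enters only through the radiance cues, the same trained network can be queried with cues rendered under any target lighting at inference time.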