Authors
Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi
UC San Diego; Adobe Research; UC Berkeley
Abstract
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene using a fully-connected neural network. We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light. We demonstrate that neural reflectance fields can be estimated from images captured with a simple collocated camera-light setup, and accurately model the appearance of real-world scenes with complex geometry and reflectance. Once estimated, they can be used to render photo-realistic images under novel viewpoint and (non-collocated) lighting conditions and accurately reproduce challenging effects like specularities, shadows and occlusions. This allows us to perform high-quality view synthesis and relighting that is significantly better than previous methods. We also demonstrate that we can compose the estimated neural reflectance field of a real scene with traditional scene models and render them using standard Monte Carlo rendering engines. Our work thus enables a complete pipeline from high-quality and practical appearance acquisition to 3D scene composition and rendering.
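The representation itself is compact: a single fully-connected network maps a 3D point to a volume density, a surface normal, and reflectance parameters. The sketch below illustrates this idea, assuming a positional encoding of the input point and a microfacet-style BRDF parameterized by diffuse albedo, specular albedo, and roughness; the layer sizes, output heads, and parameter names are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a neural reflectance field MLP.
# Layer sizes and the BRDF parameterization (diffuse albedo, specular albedo,
# roughness) are assumptions for illustration, not the paper's exact design.
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs=10):
    """Map 3D points to sin/cos features so the MLP can fit high-frequency detail."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype, device=x.device)
    angles = x[..., None] * freqs                       # (..., 3, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                    # (..., 3 * 2 * num_freqs)


class NeuralReflectanceField(nn.Module):
    """MLP mapping a 3D point to volume density, a normal, and BRDF parameters."""

    def __init__(self, num_freqs=10, hidden=256):
        super().__init__()
        in_dim = 3 * 2 * num_freqs
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Output heads: density (1), normal (3), diffuse albedo (3),
        # specular albedo (3), roughness (1) -- an assumed parameterization.
        self.head = nn.Linear(hidden, 1 + 3 + 3 + 3 + 1)

    def forward(self, points):
        feats = self.trunk(positional_encoding(points))
        out = self.head(feats)
        sigma = torch.relu(out[..., 0:1])                              # non-negative density
        normal = torch.nn.functional.normalize(out[..., 1:4], dim=-1)  # unit normal
        albedo = torch.sigmoid(out[..., 4:7])                          # diffuse albedo in [0, 1]
        specular = torch.sigmoid(out[..., 7:10])
        roughness = torch.sigmoid(out[..., 10:11])
        return sigma, normal, albedo, specular, roughness
```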
Contributions
- A novel neural reflectance field representation that models both scene geometry and reflectance
- A physically-based ray marching scheme that can render neural reflectance fields under any view and lighting (see the rendering sketch after this list)
- A method to reconstruct neural reflectance fields from unstructured flash images
- Applications of this representation to tasks like view synthesis, relighting, and scene composition
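To make the ray marching contribution concrete, the following is a minimal sketch of rendering one camera ray under a collocated camera-light, assuming the `NeuralReflectanceField` module sketched above; `eval_brdf` is a simplified stand-in for the paper's microfacet BRDF, and the inverse-square falloff models a point light at the camera. Function names, the sampling scheme, and the shading model are illustrative assumptions.

```python
# Sketch of differentiable ray marching with a collocated camera-light.
# `eval_brdf` is a hypothetical, simplified stand-in for a full microfacet BRDF.
import math
import torch


def eval_brdf(albedo, specular, roughness, normal, view_dir):
    """Toy collocated-light shading: Lambertian diffuse plus a roughness-scaled
    specular term (the paper uses a physically-based microfacet BRDF)."""
    cos_theta = (normal * view_dir).sum(dim=-1, keepdim=True).clamp(min=0.0)
    return (albedo / math.pi) * cos_theta + specular * (1.0 - roughness) * cos_theta


def render_ray(field, origin, direction, near, far, num_samples, light_intensity):
    """March along one camera ray; with a collocated light, the transmittance
    toward the light equals the transmittance toward the camera."""
    t = torch.linspace(near, far, num_samples)
    points = origin + t[:, None] * direction                # (S, 3) sample points
    delta = (far - near) / num_samples

    sigma, normal, albedo, specular, roughness = field(points)
    alpha = 1.0 - torch.exp(-sigma * delta)                 # per-sample opacity
    trans_cam = torch.cumprod(
        torch.cat([torch.ones(1, 1), 1.0 - alpha[:-1]], dim=0), dim=0
    )                                                       # transmittance to camera
    trans_light = trans_cam                                 # collocated light: same path

    view_dir = -direction.expand_as(points)
    falloff = light_intensity / (t[:, None] ** 2).clamp(min=1e-6)  # inverse-square falloff
    radiance = eval_brdf(albedo, specular, roughness, normal, view_dir)
    # Accumulate shaded radiance weighted by both transmittances.
    return (trans_cam * trans_light * alpha * radiance * falloff).sum(dim=0)
```

In the collocated flash-capture setting, each photo supplies a batch of such rays, and the network weights are optimized so that the accumulated radiance matches the observed pixels. At test time the same marcher renders novel views; for non-collocated relighting, the transmittance toward the light is no longer shared with the camera and must be accumulated along separate samples toward the light source.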
Related Work
- Neural scene representations
- Geometry and reflectance capture
- Relighting and view synthesis