Authors
Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan
Harvard University; Google Research
Summary
Ref-NeRF introduces an Integrated Directional Encoding that significantly improves the accuracy of estimated normal vectors and the visual realism of renderings compared to mip-NeRF, the previous state-of-the-art neural view synthesis model. Ref-NeRF structures the outgoing radiance at each point in terms of (prefiltered) incoming radiance, diffuse color, material roughness, and specular tint. By feeding these components explicitly to its directional MLP, Ref-NeRF can accurately reproduce the appearance of specular highlights and reflections.
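A minimal JAX sketch of how the per-point color could be assembled from the components named above: the spatial MLP's diffuse color and specular tint are combined with the directional MLP's specular color and passed through a fixed tone-mapping. Function names and the exact sRGB curve are assumptions for illustration, not taken from the released code.

```python
import jax.numpy as jnp

def linear_to_srgb(linear):
  # Fixed tone-mapping: convert linear RGB to sRGB (assumed gamma curve).
  return jnp.where(linear <= 0.0031308,
                   12.92 * linear,
                   1.055 * jnp.power(jnp.maximum(linear, 1e-8), 1.0 / 2.4) - 0.055)

def compose_color(diffuse_rgb, specular_tint, specular_rgb):
  # Combine diffuse color with tinted specular color, tone-map, clip to [0, 1].
  linear = diffuse_rgb + specular_tint * specular_rgb
  return jnp.clip(linear_to_srgb(linear), 0.0, 1.0)
```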
Abstract
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties. We show that together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
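The reparameterization described in the abstract amounts to querying the view-dependent branch with the reflection of the direction toward the camera about the local normal, rather than with the raw viewing direction. A minimal sketch, with variable names assumed:

```python
import jax.numpy as jnp

def reflect(omega_o, normal):
  # Reflect the unit direction toward the camera (omega_o) about the unit
  # surface normal: omega_r = 2 (omega_o . n) n - omega_o.
  return 2.0 * jnp.sum(omega_o * normal, axis=-1, keepdims=True) * normal - omega_o
```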
Contributions
- A reparameterization of NeRF’s outgoing radiance, based on the reflection of the viewing vector about the local normal vector (Section 3.1)
- An Integrated Directional Encoding (Section 3.2) that, when coupled with a separation of diffuse and specular colors (Section 3.3), enables the reflected radiance function to be smoothly interpolated across scenes with varying materials and textures (see the sketch after this list)
- A regularization that concentrates volume density around surfaces and improves the orientation of NeRF’s normal vectors (Section 4), also sketched below
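For the second and third contributions, a minimal sketch of the roughness-dependent attenuation at the core of the Integrated Directional Encoding and of the orientation term of the normal regularization. The full encoding also evaluates spherical harmonics of the reflection direction, which is omitted here; all names are illustrative assumptions.

```python
import jax.numpy as jnp

def ide_attenuation(degree_l, roughness):
  # Attenuation of spherical-harmonic components of degree l:
  # A_l(kappa) = exp(-l(l+1) / (2 kappa)), with concentration kappa = 1/roughness.
  # Rougher points suppress high-frequency directional detail in the encoding.
  kappa = 1.0 / jnp.maximum(roughness, 1e-6)
  l = jnp.asarray(degree_l, dtype=jnp.float32)
  return jnp.exp(-l * (l + 1.0) / (2.0 * kappa))

def orientation_penalty(weights, normals, ray_dirs):
  # Penalize normals that face away from the camera along each ray:
  # R_o = sum_i w_i * max(0, n_i . d_i)^2, with volume rendering weights w_i.
  dot = jnp.sum(normals * ray_dirs, axis=-1)
  return jnp.sum(weights * jnp.maximum(dot, 0.0) ** 2)
```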
Related Work
3D scene representations for view synthesis; Efficient rendering of glossy appearance