Authors
James Bieron, Xin Tong, Pieter Peers
College of William & Mary; Microsoft Research Asia
Abstract
This paper presents a novel neural material relighting method for revisualizing a photograph of a planar spatially varying material under novel viewing and lighting conditions. Our approach is motivated by the observation that the plausibility of a spatially varying material is judged purely on its visual appearance, not on the underlying distribution of appearance parameters. Therefore, instead of using an intermediate parametric representation (e.g., an SVBRDF) that requires a rendering stage to visualize the spatially varying material under novel viewing and lighting conditions, neural material relighting directly generates the target visual appearance. We explore and evaluate two different use cases where the relit results are either used directly, or where the relit images are used to enhance the input to existing multi-image spatially varying reflectance estimation methods. We demonstrate the robustness and efficacy of both use cases on a wide variety of spatially varying materials.
Related Work
Relighting; SVBRDF Estimation
Overview
Neural material relighting takes as input a single photograph of a planar material sample viewed from straight above and lit with a colocated point light (i.e., camera flash). We assume the input photograph is captured by a camera with a field of view (FOV) of 28° and resampled to a 256 × 256 resolution. We deliberately train for such a narrow FOV so that images captured with a larger FOV can easily be cropped to mimic the correct FOV before resampling; the reverse (going from narrow to wide) would be more difficult.

We concatenate the per-pixel z coordinate of the corresponding view/light direction to the captured input photograph (i.e., 3 + 1 input channels in total). In addition, a 9-channel decoder-condition 'image' containing the per-pixel output view, lighting, and halfway vectors is provided to control the appearance of the output. The resulting output is a rectified photograph in which each pixel's appearance is relit according to the corresponding view and light directions in the output condition. The rectification ensures that each surface point on the material sample maps to the same location in the photograph irrespective of the view direction, which relieves the network from having to learn the projective mapping and avoids foreshortening issues (i.e., spatial low-pass filtering) at grazing view angles.

Note that while we provide a light source direction, we do so per surface point; hence we can specify point lighting (i.e., converging directions) at different distances without actually encoding the distance. Consequently, neural material relighting models point lighting without the distance-squared fall-off; this can easily be added afterwards by scaling each pixel appropriately (see the sketch below).
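The encodings above translate directly into a few lines of array code. The following NumPy sketch (hypothetical helper names; it assumes the camera-space surface positions of the rectified planar sample are known) illustrates the FOV crop, the (3 + 1)-channel encoder input, the 9-channel decoder condition, and the per-pixel distance-squared fall-off applied after relighting; it is an illustration of the described encoding, not the authors' implementation.

```python
import numpy as np

def crop_to_fov(image, capture_fov_deg, target_fov_deg=28.0):
    """Center-crop an image captured with a wider FOV down to the target FOV.

    Assumes the same FOV applies along both image axes.
    """
    h, w = image.shape[:2]
    scale = (np.tan(np.radians(target_fov_deg) / 2)
             / np.tan(np.radians(capture_fov_deg) / 2))
    ch, cw = int(round(h * scale)), int(round(w * scale))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return image[y0:y0 + ch, x0:x0 + cw]

def direction_maps(positions, point):
    """Per-pixel unit vectors from surface positions (H, W, 3) toward a 3D
    point, plus the per-pixel distance to that point."""
    d = point[None, None, :] - positions
    dist = np.linalg.norm(d, axis=-1, keepdims=True)   # (H, W, 1)
    return d / dist, dist

def build_inputs(photo, positions, cam_pos, out_cam_pos, out_light_pos):
    """Assemble the network inputs described above.

    photo:         (H, W, 3) flash photograph (colocated view/light, frontal).
    positions:     (H, W, 3) camera-space surface points of the planar sample.
    cam_pos:       (3,) colocated camera/flash position of the input photo.
    out_cam_pos:   (3,) target view position.
    out_light_pos: (3,) target point-light position.
    """
    # Encoder input: photograph + per-pixel z coordinate of the (colocated)
    # view/light direction, i.e., 3 + 1 channels.
    v_in, _ = direction_maps(positions, cam_pos)
    enc_in = np.concatenate([photo, v_in[..., 2:3]], axis=-1)   # (H, W, 4)

    # Decoder condition: per-pixel output view, light, and halfway vectors,
    # i.e., 9 channels. Point lighting is encoded by converging directions.
    v_out, _ = direction_maps(positions, out_cam_pos)
    l_out, dist_l = direction_maps(positions, out_light_pos)
    h_out = v_out + l_out
    h_out /= np.linalg.norm(h_out, axis=-1, keepdims=True)
    cond = np.concatenate([v_out, l_out, h_out], axis=-1)       # (H, W, 9)

    # The network does not model the distance-squared fall-off; scale each
    # relit pixel by 1 / d^2 afterwards to reinstate it.
    falloff = 1.0 / np.square(dist_l)                           # (H, W, 1)
    return enc_in, cond, falloff
```

Because the view, light, and halfway vectors are specified per pixel, the same 9-channel condition can encode directional lighting (parallel per-pixel directions) as well as point lighting at arbitrary distances (converging directions), without ever encoding the distance itself.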