Authors
Yuanqing Zhang, Jiaming Sun, Xingyi He, Huan Fu, Rongfei Jia, Xiaowei Zhou
Zhejiang University; Tao Technology Department, Alibaba Group
Portals
Summary
To precisely recover SVBRDF (parameterized as albedo and roughness) from multi-view RGB images, we propose an efficient approach that reconstructs spatially varying indirect illumination and combines it with environmental light weighted by visibility to form the full light model (a). The example in (b) demonstrates that, without modeling indirect illumination, its effects are baked into the estimated albedo to compensate for the incomplete light model, and also cause artifacts in the estimated roughness.
Abstract
Recent advances in implicit neural representations and differentiable rendering make it possible to simultaneously recover the geometry and materials of an object from multi-view RGB images captured under unknown static illumination. Despite the promising results achieved, indirect illumination is rarely modeled in previous methods, as it requires expensive recursive path tracing which makes the inverse rendering computationally intractable. In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination. The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images instead of being estimated jointly with direct illumination and materials. By properly modeling the indirect illumination and visibility of direct illumination, interreflection- and shadow-free albedo can be recovered. The experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work and its capability to synthesize realistic renderings under novel viewpoints and illumination. Our code and data are available at https://zju3dv.github.io/invrender/.
Related Works
Inverse rendering; Implicit neural representation; Inverse rendering with implicit neural representation; The rendering equation
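The rendering equation referenced above takes its standard form (notation here is the common convention, not necessarily the paper's):

```latex
L_o(\mathbf{x}, \omega_o) = \int_{\Omega} L_i(\mathbf{x}, \omega_i)\, f_r(\mathbf{x}, \omega_i, \omega_o)\, (\omega_i \cdot \mathbf{n})\, d\omega_i
```

Here $L_o$ is the outgoing radiance at surface point $\mathbf{x}$ in direction $\omega_o$, $L_i$ is the incoming radiance (which the paper splits into direct environmental light and indirect illumination), $f_r$ is the SVBRDF, and $\mathbf{n}$ is the surface normal. Evaluating $L_i$ for indirect light normally requires recursive path tracing; the paper's key idea is to read it off a pre-trained neural radiance field instead.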
Comparisons
Overview
Given a set of posed images of an object captured under static illumination, we learn to decompose the shape and SVBRDF to enable applications such as free-view relighting. We solve the inverse rendering problem in an analysis-by-synthesis manner, where we optimize the parameters of the forward rendering model until the rendered images closely resemble the observed images.
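The analysis-by-synthesis loop above can be sketched as a gradient-descent optimization of a photometric loss. The toy example below is purely illustrative (the forward model, variable names, and image sizes are all hypothetical simplifications, not the paper's implementation): it recovers a per-pixel albedo under a known, fixed shading by minimizing the squared error between rendered and observed images.

```python
import numpy as np

# Toy analysis-by-synthesis loop (illustrative only; the paper uses a full
# differentiable rendering model with geometry, SVBRDF, and illumination).
# Forward model: rendered = albedo * shading. We optimize albedo so that
# the rendered image matches the observed one, via gradient descent on a
# photometric L2 loss -- the same optimization pattern described above.

rng = np.random.default_rng(0)
shading = rng.uniform(0.2, 1.0, size=(8, 8))   # fixed, known shading term
true_albedo = rng.uniform(0.0, 1.0, size=(8, 8))
observed = true_albedo * shading               # the "captured" image

albedo = np.full((8, 8), 0.5)                  # initial guess
lr = 0.5
for _ in range(200):
    rendered = albedo * shading                # forward rendering model
    residual = rendered - observed             # photometric error
    grad = 2.0 * residual * shading            # d(loss)/d(albedo), per pixel
    albedo -= lr * grad                        # gradient step

print(np.max(np.abs(albedo - true_albedo)))    # small after convergence
```

In the actual method the gradient is obtained by automatic differentiation through the renderer, and the optimized parameters are the networks encoding shape, materials, and illumination rather than raw per-pixel values.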