Authors
Cheng Sun, Guangyan Cai, Zhengqin Li, Kai Yan, Cheng Zhang, Carl Marshall, Jia-Bin Huang, Shuang Zhao, Zhao Dong
Meta RLR; National Tsing Hua University; University of California, Irvine; University of Maryland, College Park
Abstract
Reconstructing the shape and spatially varying surface appearance of a physical-world object, as well as its surrounding illumination, from 2D images (e.g., photographs) of the object has been a long-standing problem in computer vision and graphics. In this paper, we introduce a robust object reconstruction pipeline that combines neural object reconstruction and physics-based inverse rendering (PBIR). Specifically, our pipeline first leverages a neural stage to produce high-quality but potentially imperfect predictions of object shape, reflectance, and illumination. Then, initialized with these neural predictions, a second PBIR stage refines the initial results to obtain the final high-quality reconstruction. Experimental results demonstrate that our pipeline significantly outperforms existing reconstruction methods in both quality and performance.
Contributions
- A hybrid volume representation for fast and accurate geometry reconstruction (a minimal sketch of the underlying idea follows this list)
- An efficient optimization scheme that distills high-quality initial material and lighting estimates from the reconstructed geometry and radiance field
- An advanced PBIR framework that jointly optimizes materials, lighting, and geometry, with visibility and interreflection handled in a physically unbiased way
- An end-to-end pipeline that achieves state-of-the-art geometry, material, and lighting estimation, enabling realistic view synthesis and relighting
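The sketch below illustrates the idea behind the first contribution: storing the SDF directly in a dense voxel grid (rather than only in an MLP) and converting interpolated SDF values to opacities with a NeuS-style logistic CDF so the grid can be optimized by standard volume rendering. This is a minimal PyTorch sketch under simplified assumptions (a toy sphere initialization, simplified coordinate conventions, hypothetical class and method names), not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class SDFGrid(torch.nn.Module):
    """Toy dense SDF grid optimized directly, NeuS-style (illustrative only)."""

    def __init__(self, res=128, bound=1.0):
        super().__init__()
        # Dense SDF grid initialized to a sphere; its values are free
        # parameters updated directly by gradient descent.
        coords = torch.stack(torch.meshgrid(
            *[torch.linspace(-bound, bound, res)] * 3, indexing="ij"))
        sphere = coords.norm(dim=0) - 0.5 * bound
        self.sdf = torch.nn.Parameter(sphere[None, None])    # (1, 1, R, R, R)
        self.inv_s = torch.nn.Parameter(torch.tensor(10.0))  # NeuS sharpness s
        self.bound = bound

    def query(self, pts):
        # Trilinearly interpolate the grid at points in [-bound, bound]^3.
        grid_pts = (pts / self.bound).view(1, -1, 1, 1, 3)
        vals = F.grid_sample(self.sdf, grid_pts, align_corners=True)
        return vals.view(-1)

    def alphas(self, pts):
        # NeuS-style conversion of SDF to per-segment opacity along rays:
        # alpha_i = max((Phi(f_i) - Phi(f_{i+1})) / Phi(f_i), 0),
        # with Phi the logistic CDF. pts: (n_rays, n_samples, 3).
        sdf = self.query(pts.view(-1, 3)).view(pts.shape[:2])
        cdf = torch.sigmoid(sdf * self.inv_s)
        alpha = (cdf[:, :-1] - cdf[:, 1:]) / (cdf[:, :-1] + 1e-6)
        return alpha.clamp(0.0, 1.0)  # feed into standard alpha compositing
```

Because the SDF lives in an explicit grid, each optimization step touches only the voxels near the sampled rays, which is what makes this representation fast compared to querying a large MLP at every sample.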
Related Work
Volumetric Surface Reconstruction; Material and Lighting Estimation; Physics-based Inverse Rendering
Comparisons
Overview
Our pipeline comprises three main stages. The first stage is a fast and precise surface reconstruction step that brings direct SDF-grid optimization into NeuS. Associated with this surface is an overfitted radiance field that does not fully model the surface reflectance of the object. The second stage is an efficient neural distillation method that converts the radiance field into physics-based reflectance and illumination models. Lastly, the third stage uses physics-based inverse rendering (PBIR) to further refine the object geometry and reflectance reconstructed by the first two stages. This stage leverages physics-based differentiable rendering that captures global-illumination (GI) effects such as soft shadows and interreflection.
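To make the second stage concrete, here is a minimal PyTorch sketch of the distillation idea: the frozen stage-1 radiance field acts as a teacher giving outgoing radiance at surface points, and we fit explicit material and lighting parameters so that a physics-based shading model reproduces it. For brevity this sketch assumes a Lambertian BRDF and a Monte Carlo environment-map estimate, and the `radiance_field` callable and all shapes are hypothetical; the actual pipeline uses a full microfacet reflectance model.

```python
import torch

def distill(radiance_field, surface_pts, normals, n_iters=2000):
    """Fit albedo + environment lighting to a frozen radiance field (sketch)."""
    n_pts = surface_pts.shape[0]
    albedo = torch.full((n_pts, 3), 0.5, requires_grad=True)   # per-point albedo
    env_dirs = torch.randn(256, 3)                             # uniform-ish sphere dirs
    env_dirs /= env_dirs.norm(dim=1, keepdim=True)
    env_rad = torch.full((256, 3), 0.5, requires_grad=True)    # envmap radiance samples
    opt = torch.optim.Adam([albedo, env_rad], lr=1e-2)

    for _ in range(n_iters):
        wo = torch.randn(n_pts, 3)
        wo /= wo.norm(dim=1, keepdim=True)
        # Teacher: outgoing radiance predicted by the frozen radiance field
        # (hypothetical interface: positions and view directions -> RGB).
        with torch.no_grad():
            target = radiance_field(surface_pts, wo)           # (n_pts, 3)
        # Student: Lambertian shading under the current envmap estimate.
        # Visibility/shadows are ignored here; stage-3 PBIR handles them.
        cos = (normals @ env_dirs.T).clamp(min=0.0)            # (n_pts, 256)
        # Monte Carlo irradiance: (4*pi / N) * sum_i L_i * cos_i
        irradiance = cos @ env_rad.clamp(min=0.0) * (4 * torch.pi / 256)
        pred = albedo.sigmoid() * irradiance / torch.pi        # rho/pi * E
        loss = (pred - target).abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()

    return albedo.sigmoid().detach(), env_rad.clamp(min=0.0).detach()
```

Because this fit runs against cached radiance-field outputs rather than the original photographs, it is cheap, and its result only needs to be good enough to initialize the stage-3 PBIR refinement, which then jointly optimizes geometry, materials, and lighting with visibility and interreflection handled by a physics-based differentiable renderer.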