Authors
Alban Gauthier, Robin Faury, Jérémy Levallois, Théo Thonat, Jean-Marc Thiery, Tamy Boubekeur
Adobe; Télécom Paris
Abstract
We present MIPNet, a novel approach for SVBRDF mipmapping which preserves material appearance under varying view distances and lighting conditions. As in classical mipmapping, our method explicitly encodes the multiscale appearance of materials in a SVBRDF mipmap pyramid. To do so, we use a tensor-based representation for encoding anisotropy which is amenable to gradient-based optimization and compatible with existing real-time rendering engines. Instead of relying on a simple texture patch average for each channel independently, we propose a cascaded architecture of multilayer perceptrons that approximates the material appearance using only the fixed material channels. Our neural model learns simple mipmapping filters through a differentiable rendering pipeline driven by a rendering loss, and is able to transfer signal from normal to anisotropic roughness. As a result, we obtain a drop-in replacement for standard material mipmapping, offering a significant improvement in appearance preservation while still boiling down to a single per-pixel mipmap texture fetch. We report extensive experiments on two distinct BRDF models.
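The abstract does not spell out the filter architecture; the sketch below shows what one level of such a learned mipmapping filter could look like in PyTorch, assuming an MLP that maps each 2x2 block of fine-level material channels to one coarse-level texel. All names here (MipFilter, render, the channel count) are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MipFilter(nn.Module):
    """One learned mipmapping level: maps each 2x2 block of fine-level
    material channels to a single coarse-level texel (hypothetical design)."""
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * channels, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, channels),
        )

    def forward(self, fine: torch.Tensor) -> torch.Tensor:
        # fine: (B, C, H, W) with even H, W; returns (B, C, H/2, W/2).
        B, C, H, W = fine.shape
        blocks = F.unfold(fine, kernel_size=2, stride=2)  # (B, 4C, H*W/4)
        coarse = self.mlp(blocks.transpose(1, 2))         # (B, H*W/4, C)
        return coarse.transpose(1, 2).reshape(B, C, H // 2, W // 2)

# Training sketch: render() stands in for a differentiable renderer and is not
# defined here. The rendering loss compares a rendering of the filtered coarse
# maps against a downsampled rendering of the fine maps:
#   coarse = MipFilter(channels=9)(fine)  # e.g. normal + albedo + roughness tensor
#   loss = F.mse_loss(render(coarse), F.avg_pool2d(render(fine), 2))
#   loss.backward()
```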
Contributions
- an efficient pipeline for learning mipmapping filters that requires no data preparation for training,
- a neural architecture encoding anisotropic appearance and generalizing to unseen materials, and
- a tensor-based formulation for anisotropic BRDF distributions which is well-suited for differentiable pipelines and trilinear interpolation (sketched below).
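The exact parameterization of the tensor-based formulation is not given here; the following is a minimal PyTorch sketch of one plausible encoding, assuming anisotropic roughness is stored as a 2x2 symmetric positive semi-definite (SPD) tensor per texel. The function names roughness_to_tensor and tensor_to_roughness are hypothetical.

```python
import torch

def roughness_to_tensor(alpha_x: torch.Tensor, alpha_y: torch.Tensor,
                        theta: torch.Tensor) -> torch.Tensor:
    # Encode anisotropic roughness as a 2x2 SPD tensor
    # S = R(theta) diag(alpha_x^2, alpha_y^2) R(theta)^T.
    c, s = torch.cos(theta), torch.sin(theta)
    R = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    D = torch.diag(torch.stack([alpha_x**2, alpha_y**2]))
    return R @ D @ R.T

def tensor_to_roughness(S: torch.Tensor):
    # Eigendecomposition recovers the principal roughnesses and orientation.
    evals, evecs = torch.linalg.eigh(S)             # eigenvalues, ascending
    alpha_y, alpha_x = evals.clamp(min=0.0).sqrt()  # minor, major roughness
    theta = torch.atan2(evecs[1, 1], evecs[0, 1])   # angle of the major axis
    return alpha_x, alpha_y, theta

# Convex combinations of SPD tensors are SPD, so the per-entry linear blends
# performed by trilinear mipmap fetches always yield a valid distribution:
S0 = roughness_to_tensor(torch.tensor(0.1), torch.tensor(0.4), torch.tensor(0.0))
S1 = roughness_to_tensor(torch.tensor(0.3), torch.tensor(0.2), torch.tensor(1.0))
print(tensor_to_roughness(0.75 * S0 + 0.25 * S1))
```

Closure under convex combination is what makes such a representation a natural fit for both gradient-based optimization and hardware trilinear interpolation: blending tensor entries can never produce an invalid anisotropic distribution, unlike directly interpolating roughness angles.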
Related Work
- Texture minification
- Normal map filtering
- Rendering high-resolution normal maps
- Reflectance filtering
- Differentiable rendering