Authors
Hendrik Baatz, Jonathan Granskog, Marios Papas, Fabrice Rousselle, Jan Novák
ETH Zurich; NVIDIA
Summary
We create a scene with mesoscale appearance using a learned NeRF texture, which is instantiated over a base mesh according to an artist-defined distribution of anchor points. The local appearance of the parametric NeRF texture is controlled using classical surface textures and lighting parameters.
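A minimal sketch of the instantiation step described above, assuming area-weighted random sampling as a stand-in for the artist-defined anchor distribution; `sample_anchor_points` and its signature are hypothetical, not the authors' code:

```python
# Hypothetical sketch (not the authors' code): scattering NeRF-Tex patch
# instances over a base triangle mesh. The paper lets artists define the
# anchor distribution; uniform area-weighted sampling is only a stand-in.
import numpy as np

def sample_anchor_points(vertices, faces, normals, num_anchors, rng=None):
    """Sample anchor positions and local frames on a triangle mesh.

    vertices: (V, 3) floats, faces: (F, 3) ints,
    normals:  (V, 3) per-vertex normals used to orient each patch.
    Returns (num_anchors, 3) positions and (num_anchors, 3, 3) frames.
    """
    rng = np.random.default_rng() if rng is None else rng
    tri = vertices[faces]                                  # (F, 3, 3)
    # Pick triangles proportionally to their surface area.
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    face_ids = rng.choice(len(faces), size=num_anchors, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    u, v = rng.random((2, num_anchors))
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    bary = np.stack([1.0 - u - v, u, v], axis=1)           # (N, 3)
    positions = np.einsum('nk,nkd->nd', bary, tri[face_ids])
    # Build a tangent frame per anchor so each volumetric patch sits
    # upright on the surface (third axis = interpolated vertex normal).
    n = np.einsum('nk,nkd->nd', bary, normals[faces[face_ids]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    t = np.cross(n, np.array([0.0, 0.0, 1.0]))
    t[np.linalg.norm(t, axis=1) < 1e-6] = np.array([1.0, 0.0, 0.0])
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    frames = np.stack([t, np.cross(n, t), n], axis=-1)     # (N, 3, 3)
    return positions, frames
```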
Abstract
We investigate the use of neural fields for modeling diverse mesoscale structures, such as fur, fabric, and grass. Instead of using classical graphics primitives to model the structure, we propose to employ a versatile volumetric primitive represented by a neural reflectance field (NeRF-Tex), which jointly models the geometry of the material and its response to lighting. The NeRF-Tex primitive can be instantiated over a base mesh to “texture” it with the desired meso- and microscale appearance. We condition the reflectance field on user-defined parameters that control the appearance. A single NeRF texture thus captures an entire space of reflectance fields rather than one specific structure. This increases the gamut of appearances that can be modeled and provides a solution for combating repetitive texturing artifacts. We also demonstrate that NeRF textures naturally facilitate continuous level-of-detail rendering. Our approach unites the versatility and modeling power of neural networks with the artistic control needed for precise modeling of virtual scenes. While all our training data is currently synthetic, our work provides a recipe that can be further extended to extract complex, hard-to-model appearances from real images.
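To make the conditioning concrete, here is a minimal sketch of a conditional neural reflectance field in PyTorch. The layer widths, positional encoding, and head layout are assumptions for illustration, not the paper's exact network; the point is that density depends only on position and the condition vector, while the RGB response additionally takes the view and light directions, so lighting is an input rather than baked in.

```python
# Assumed architecture (illustrative only): given a point inside the texture
# volume, a view direction, a light direction, and artist-defined condition
# parameters, predict an extinction density and an RGB response.
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Standard NeRF-style sinusoidal encoding of each input coordinate."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype, device=x.device)
    angles = x[..., None] * freqs                 # (..., 3, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)              # (..., 3 * 2 * num_freqs)

class NeRFTex(nn.Module):
    def __init__(self, cond_dim, hidden=128, num_freqs=6):
        super().__init__()
        pos_dim = 3 * 2 * num_freqs
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)  # geometry of the structure
        self.rgb_head = nn.Sequential(            # response to view + light
            nn.Linear(hidden + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, view_dir, light_dir, cond):
        h = self.trunk(torch.cat([positional_encoding(x), cond], dim=-1))
        sigma = torch.nn.functional.softplus(self.density_head(h))
        rgb = torch.sigmoid(
            self.rgb_head(torch.cat([h, view_dir, light_dir], dim=-1)))
        return sigma, rgb

# Example query: 1024 points with a 4-dimensional condition vector
# (e.g. fur length, curliness, and a color picked up from surface textures).
model = NeRFTex(cond_dim=4)
x = torch.rand(1024, 3)
view_dir = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
light_dir = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
sigma, rgb = model(x, view_dir, light_dir, torch.rand(1024, 4))
```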
Contribution
- We opt to model a reflectance field instead of a radiance field, i.e. lighting is not baked into the neural representation but is instead provided as a conditional input
- Instead of using a single neural field to represent the entire scene, we use an assembly of neural fields to represent a layer of mesoscale structure on top of a base triangle mesh. Our approach is conceptually similar to volumetric textures, with the distinction that we use a neural network to represent the content of the texture
- Our neural fields are parametric, i.e. they allow varying the density and reflectance fields as a function of artist-friendly parameters. This can be used, for instance, to transition from straight to curly fur or to spatially vary its color (see the sketch after this list)
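As a minimal sketch of the spatial variation mentioned in the last point, the condition parameters for each instance could be read from a classical 2D texture at the anchor's UV coordinates; `lookup_parameters` and the channel layout are hypothetical, shown only to illustrate the workflow:

```python
# Hypothetical sketch: per-anchor condition parameters are fetched from a
# classical (H, W, C) surface texture via bilinear interpolation, so
# curliness, length, or color can vary across the base mesh.
import numpy as np

def lookup_parameters(param_texture, uv):
    """Bilinearly sample a (H, W, C) parameter texture at (N, 2) UVs in [0, 1]."""
    h, w, _ = param_texture.shape
    x = np.clip(uv[:, 0] * (w - 1), 0, w - 1)
    y = np.clip(uv[:, 1] * (h - 1), 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    top = (1 - fx) * param_texture[y0, x0] + fx * param_texture[y0, x1]
    bottom = (1 - fx) * param_texture[y1, x0] + fx * param_texture[y1, x1]
    return (1 - fy) * top + fy * bottom           # (N, C) condition vectors

# Example: a 4-channel texture storing e.g. (curliness, length, hue, saturation);
# each anchor then conditions the shared NeRF texture differently.
params = lookup_parameters(np.random.rand(256, 256, 4), np.random.rand(1000, 2))
```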
Related Work
Mesoscale appearance; Neural rendering