Authors
Aakash KT, Parikshit Sakurikar, Saurabh Saini, P. J. Narayanan
IIIT Hyderabad; DreamVu Inc.
Abstract
Photorealism in computer-generated imagery depends crucially on how well an artist can recreate real-world materials in the scene. The typical workflow for material modeling and editing involves manual tweaking of material parameters, with a standard path-tracing engine providing visual feedback. Considerable time may be spent iteratively selecting and rendering materials at an appropriate quality. In this work, we propose a convolutional neural network based workflow that quickly generates high-quality ray-traced material visualizations on a shaderball. Our novel architecture assists material selection, renders spatially-varying materials, and provides control over environment lighting, which gives an artist more freedom and better visualization of the rendered material. Comparisons with state-of-the-art denoising and neural rendering techniques suggest that our neural renderer performs faster and better. We provide an interactive visualization tool and release our training dataset to foster further research in this area.
Contributions
- A neural renderer to aid in visualization of uniform and spatially-varying materials
- An architectural enhancement to provide control over the environment lighting, thereby increasing visualization capability and freedom
- An interactive tool for material visualization and editing and a large-scale dataset of uniform and spatially-varying material parameters with corresponding ground truth ray-traced images
Related Work
Material modeling; Material acquisition; Rendering as denoising; Image-based relighting; Neural rendering
Overview
An overview of our proposed workflow. From input SVBRDF maps (a), we create screen-space maps (b) by UV-mapping each map onto the shaderball and rasterizing the scene with that map as the base texture. We provide the sun direction s and turbidity c as inputs, along with the concatenation of the screen-space maps. (c) shows the architecture of our proposed neural renderer. (d) shows the results of our network under different environment lighting.
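The caption above describes the input assembly in enough detail to sketch it in code. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the screen-space SVBRDF maps are concatenated with the lighting parameters (broadcast over the image plane) and passed through a generic encoder-decoder CNN. The class name, channel layout, lighting-injection scheme, and network depth are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): assembling the network input from
# screen-space SVBRDF maps plus broadcast lighting parameters, and running it
# through a generic encoder-decoder CNN. Channel counts are assumptions.
import torch
import torch.nn as nn

class NeuralMaterialRenderer(nn.Module):
    def __init__(self, map_channels=10, light_channels=4, out_channels=3):
        # map_channels: e.g. diffuse (3) + normals (3) + roughness (1) + specular (3),
        # rasterized on the shaderball in screen space (hypothetical layout).
        # light_channels: sun direction (3) + turbidity (1), broadcast spatially.
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(map_channels + light_channels, 64, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, svbrdf_maps, sun_dir, turbidity):
        # svbrdf_maps: (B, 10, H, W); sun_dir: (B, 3); turbidity: (B, 1)
        B, _, H, W = svbrdf_maps.shape
        light = torch.cat([sun_dir, turbidity], dim=1)            # (B, 4)
        light_planes = light.view(B, -1, 1, 1).expand(B, 4, H, W)  # broadcast
        x = torch.cat([svbrdf_maps, light_planes], dim=1)         # (B, 14, H, W)
        return self.decoder(self.encoder(x))

# Usage example with random tensors standing in for rasterized screen-space maps.
renderer = NeuralMaterialRenderer()
maps = torch.rand(1, 10, 256, 256)
image = renderer(maps, torch.tensor([[0.2, 0.8, 0.5]]), torch.tensor([[3.0]]))
print(image.shape)  # torch.Size([1, 3, 256, 256])
```

Changing the sun direction or turbidity values at inference time corresponds to the lighting control shown in (d); the sketch simply re-runs the forward pass with new lighting inputs while the material maps stay fixed.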