Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.
In computer graphics, shading refers to the process of altering the color of an object, surface, or polygon in the 3D scene based on factors such as (but not limited to) the surface's angle to lights, its distance from lights, its angle to the camera, and its material properties (e.g., its bidirectional reflectance distribution function), in order to create a photorealistic effect. Shading is performed during the rendering process by a program called a shader.
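As a minimal illustration of this per-point computation, the sketch below (in Python, with illustrative names; no particular shading language or API is assumed) darkens a diffuse surface color based on its angle to a point light and the inverse-square falloff with distance.

```python
import numpy as np

def shade_point(position, normal, light_pos, light_color, albedo):
    """Minimal per-point diffuse shading sketch (illustrative names only).

    The surface color is scaled by the cosine of the angle between the
    surface normal and the direction to the light, and by the inverse
    square of the distance to the light.
    """
    to_light = light_pos - position
    distance = np.linalg.norm(to_light)
    light_dir = to_light / distance

    # Lambert's cosine term: surfaces facing away from the light receive nothing.
    cos_theta = max(np.dot(normal, light_dir), 0.0)

    # Inverse-square attenuation with distance.
    attenuation = 1.0 / (distance * distance)

    return albedo * light_color * cos_theta * attenuation

# Example: a point one unit below a white light, facing straight up.
color = shade_point(position=np.array([0.0, 0.0, 0.0]),
                    normal=np.array([0.0, 1.0, 0.0]),
                    light_pos=np.array([0.0, 1.0, 0.0]),
                    light_color=np.array([1.0, 1.0, 1.0]),
                    albedo=np.array([0.8, 0.6, 0.5]))
```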
Computer Graphics: Principles and Practice, Section 27.5.3:
The computation of the amount of light reflected from a surface was sometimes called “lighting” or “illumination,” although the standard interpretation of these words as descriptions of the light ARRIVING at the surface was also common. The “lighting model” was typically evaluated at the vertices of a triangular mesh and then interpolated in some way to give values at points in the interior of the triangle. This latter interpolation process was known as shading, and you’ll sometimes read of Gouraud shading (barycentric interpolation of values at the vertices) or Phong shading, in which, rather than interpolating the values, the component parts were interpolated so that the normal vector was reestimated for each internal point of each triangle, and then the inner product with the incoming light vector was computed, etc.
Nowadays we refer to shading and lighting differently: The description of the outgoing light in response to the incoming light is called a reflection model or scattering model, and the program fragment that computes this is called a shader. Because of the highly parallel nature of most graphics processing, the scattering model is usually evaluated at every pixel, often multiple times, and the “shading” process (i.e., interpolation across triangles) is no longer necessary.
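The two interpolation strategies mentioned in the excerpt can be sketched as follows; the per-vertex data and function names are hypothetical, and the lighting term is a plain Lambertian one for brevity. Gouraud shading interpolates colors already computed at the vertices, whereas Phong shading interpolates the normals and re-evaluates the reflection model at each interior point, which is essentially what per-pixel evaluation does today.

```python
import numpy as np

def lambert(normal, light_dir, albedo):
    """Simple diffuse lighting term used by both examples below."""
    n = normal / np.linalg.norm(normal)
    return albedo * max(np.dot(n, light_dir), 0.0)

def gouraud_shade(bary, vertex_colors):
    """Gouraud: barycentric interpolation of colors computed at the vertices."""
    return sum(w * c for w, c in zip(bary, vertex_colors))

def phong_shade(bary, vertex_normals, light_dir, albedo):
    """Phong: interpolate the normals, renormalize, then evaluate lighting per point."""
    n = sum(w * nv for w, nv in zip(bary, vertex_normals))
    return lambert(n, light_dir, albedo)

# Hypothetical triangle data.
light_dir = np.array([0.0, 0.0, 1.0])
albedo = np.array([0.9, 0.7, 0.6])
normals = [np.array([0.0, 0.0, 1.0]),
           np.array([0.0, 0.7, 0.7]),
           np.array([0.7, 0.0, 0.7])]
vertex_colors = [lambert(n, light_dir, albedo) for n in normals]

bary = (1 / 3, 1 / 3, 1 / 3)   # triangle centroid
print(gouraud_shade(bary, vertex_colors))
print(phong_shade(bary, normals, light_dir, albedo))
```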
Recent research efforts in image synthesis have been directed toward the rendering of believable and predictable images of biological materials. This course addresses an important topic in this area, namely the predictive simulation of skin's appearance. The modeling approaches, algorithms, and data examined during this course can also be applied to the rendering of other organic materials such as hair and ocular tissues.
This paper introduces a shading model for light diffusion in multi-layered translucent materials. Previous work on diffusion in translucent materials has assumed smooth semi-infinite homogeneous materials and solved for the scattering of light using a dipole diffusion approximation. This approximation breaks down in the case of thin translucent slabs and multi-layered materials. We present a new efficient technique based on multiple dipoles to account for diffusion in thin slabs. We enhance this multipole theory to account for mismatching indices of refraction at the top and bottom of translucent slabs, and to model the effects of rough surfaces. To model multiple layers, we extend this single slab theory by convolving the diffusion profiles of the individual slabs. We account for multiple scattering between slabs by using a variant of Kubelka-Munk theory in frequency space. Our results demonstrate diffusion of light in thin slabs and multi-layered materials such as paint, paper, and human skin.
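For context, the sketch below evaluates the classical dipole diffusion profile for a semi-infinite medium, the approximation this work identifies as breaking down for thin slabs; the multipole model replaces the single real/virtual source pair with a sum of pairs mirrored across the slab boundaries. The constants follow the commonly published dipole formulation, and the code is an illustration rather than the paper's implementation.

```python
import numpy as np

def dipole_profile(r, sigma_a, sigma_s_prime, eta=1.3):
    """Classical dipole diffusion profile R_d(r) for a semi-infinite medium.

    r             : distance on the surface from the point of illumination
    sigma_a       : absorption coefficient
    sigma_s_prime : reduced scattering coefficient
    eta           : relative index of refraction
    """
    sigma_t_prime = sigma_a + sigma_s_prime
    alpha_prime = sigma_s_prime / sigma_t_prime
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)

    # Boundary condition from the diffuse Fresnel reflectance.
    f_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    a = (1.0 + f_dr) / (1.0 - f_dr)

    # Real source below the surface and mirrored virtual source above it.
    z_r = 1.0 / sigma_t_prime
    z_v = z_r * (1.0 + 4.0 / 3.0 * a)
    d_r = np.sqrt(r * r + z_r * z_r)
    d_v = np.sqrt(r * r + z_v * z_v)

    return (alpha_prime / (4.0 * np.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * np.exp(-sigma_tr * d_r) / d_r**3 +
        z_v * (sigma_tr * d_v + 1.0) * np.exp(-sigma_tr * d_v) / d_v**3)

# Example: sample the profile over a range of radii (arbitrary units).
radii = np.linspace(0.01, 5.0, 50)
profile = dipole_profile(radii, sigma_a=0.01, sigma_s_prime=1.0)
```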
We have developed an analytical spectral shading model for human skin. Our model accounts for both subsurface and surface scattering. To simulate the interaction of light with human skin, we have narrowed the number of necessary parameters down to just four, controlling the amount of oil, melanin, and hemoglobin, which makes it possible to match specific skin types. Using these physically based parameters we generate custom spectral diffusion profiles for a two-layer skin model that account for subsurface scattering within the skin. We use the diffusion profiles in combination with a Torrance-Sparrow model for surface scattering to simulate the reflectance of the specific skin type.
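The sketch below illustrates the general structure of such a model: a microfacet specular term (Torrance-Sparrow style) for surface scattering added to a diffuse term standing in for the subsurface contribution from the diffusion profiles. The distribution and Fresnel choices, parameter names, and default values here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def torrance_sparrow_specular(n, l, v, roughness=0.3, f0=0.028):
    """Microfacet specular term (Torrance-Sparrow style) with a Beckmann
    distribution and Schlick Fresnel; parameter choices are illustrative."""
    h = normalize(l + v)
    n_l = max(np.dot(n, l), 1e-6)
    n_v = max(np.dot(n, v), 1e-6)
    n_h = max(np.dot(n, h), 1e-6)
    v_h = max(np.dot(v, h), 1e-6)

    # Beckmann microfacet distribution.
    tan2 = (1.0 - n_h * n_h) / (n_h * n_h)
    d = np.exp(-tan2 / roughness**2) / (np.pi * roughness**2 * n_h**4)

    # Geometric shadowing/masking term.
    g = min(1.0, 2.0 * n_h * n_v / v_h, 2.0 * n_h * n_l / v_h)

    # Schlick approximation to Fresnel reflectance.
    f = f0 + (1.0 - f0) * (1.0 - v_h) ** 5

    return d * g * f / (4.0 * n_l * n_v)

def skin_reflectance(n, l, v, subsurface_rgb):
    """Total reflectance: a subsurface term (stand-in for the diffusion
    profiles) plus the surface specular term, weighted by the cosine term."""
    n_l = max(np.dot(n, l), 0.0)
    specular = torrance_sparrow_specular(n, l, v)
    return (subsurface_rgb + specular) * n_l

# Example evaluation with unit vectors and a reddish subsurface color.
n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.3, 0.2, 1.0]))
v = normalize(np.array([-0.2, 0.1, 1.0]))
print(skin_reflectance(n, l, v, subsurface_rgb=np.array([0.8, 0.5, 0.45])))
```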
Decomposing a scene into its shape, reflectance, and illumination is a challenging but important problem in computer vision and graphics. This problem is inherently more challenging when the illumination is not a single light source under laboratory conditions but is instead an unconstrained environmental illumination. Though recent work has shown that implicit representations can be used to model the radiance field of an object, most of these techniques only enable view synthesis and not relighting. Additionally, evaluating these radiance fields is resource and time-intensive. We propose a neural reflectance decomposition (NeRD) technique that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties. In contrast to existing techniques, our input images can be captured under different illumination conditions. In addition, we also propose techniques to convert the learned reflectance volume into a relightable textured mesh enabling fast real-time rendering with novel illuminations. We demonstrate the potential of the proposed approach with experiments on both synthetic and real datasets, where we are able to obtain high-quality relightable 3D assets from image collections. The datasets and code are available on the project page: https://markboss.me/publication/2021-nerd/
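As a rough, hedged sketch of the kind of physically-based forward model such decompositions optimize through (not NeRD's actual network or renderer), the code below predicts a pixel's radiance from per-point spatially varying BRDF parameters and sampled environment illumination; fitting those parameters, together with geometry and illumination, so the prediction matches the observed images is the inverse problem being solved. All names and the simple BRDF are assumptions for illustration.

```python
import numpy as np

def sv_brdf(albedo, spec_strength, shininess, n, l, v):
    """Tiny analytic SV-BRDF stand-in: Lambertian diffuse plus a normalized
    Blinn-Phong specular lobe. The per-point parameters play the role of the
    spatially varying material properties a decomposition method recovers."""
    h = (l + v) / np.linalg.norm(l + v)
    n_h = max(np.dot(n, h), 0.0)
    specular = spec_strength * (shininess + 2.0) / (2.0 * np.pi) * n_h ** shininess
    return albedo / np.pi + specular

def render_point(params, n, v, light_dirs, light_radiance):
    """Forward model: predicted radiance = sum over sampled illumination
    directions of BRDF * incident radiance * cos(theta)."""
    albedo, spec_strength, shininess = params
    total = np.zeros(3)
    for l, radiance in zip(light_dirs, light_radiance):
        cos_theta = max(np.dot(n, l), 0.0)
        total += sv_brdf(albedo, spec_strength, shininess, n, l, v) * radiance * cos_theta
    # Assumes light_dirs are uniform samples of the hemisphere around n.
    return total * (2.0 * np.pi / len(light_dirs))
```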