Authors
Joshua Weir, Junhong Zhao, Andrew Chalmers, Taehyun Rhee
Victoria University of Wellington
Summary
Given a portrait image, we perform delighting: removing undesirable lighting characteristics and reconstructing the image under uniform lighting.
Abstract
We present a deep neural network for removing undesirable shading features from an unconstrained portrait image, recovering the underlying texture. Our training scheme incorporates three regularization strategies: masked loss, to emphasize high-frequency shading features; soft-shadow loss, which improves sensitivity to subtle changes in lighting; and shading-offset estimation, to supervise separation of shading and texture. Our method demonstrates improved delighting quality and generalization when compared with the state-of-the-art. We further demonstrate how our delighting method can enhance the performance of light-sensitive computer vision tasks such as face relighting and semantic parsing, allowing them to handle extreme lighting conditions.
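The two supervised regularizers above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the function names, the up-weighting scheme for the masked loss, and the additive shading model (input = texture + shading offset) are all assumptions made here for clarity.

```python
import numpy as np

def masked_l1(pred, target, mask, eps=1e-8):
    """L1 loss re-weighted by a shading mask so that pixels flagged as
    high-frequency shading (mask near 1) contribute more to the loss.
    All arrays share the same shape; mask values lie in [0, 1]."""
    w = 1.0 + mask  # base weight everywhere, doubled inside the mask
    return float(np.sum(w * np.abs(pred - target)) / (np.sum(w) + eps))

def shading_offset_loss(pred_offset, input_img, target_texture):
    """Supervise a predicted shading offset under an assumed additive
    model: input_img = target_texture + shading offset. The network's
    predicted offset is pulled toward the ground-truth residual."""
    true_offset = input_img - target_texture
    return float(np.mean(np.abs(pred_offset - true_offset)))
```

A perfect texture prediction drives `masked_l1` to zero, and a perfectly estimated shading residual drives `shading_offset_loss` to zero; during training the two terms would be summed (with weighting hyperparameters) alongside the soft-shadow term.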
Contribution
- A novel portrait delighting method that can recover the underlying texture of portraits illuminated under a wide range of complex lighting environments
- Three novel loss functions: shading-offset loss, soft-shadow loss, and masked loss, which improve our model's robustness to unseen lighting environments while preserving image detail
- Our delighting method can serve as a data normalization tool for improving light-sensitive computer vision tasks such as face relighting and semantic parsing