Authors
Junho Kim, Minjae Kim, Hyeonwoo Kang, Kwanghee Lee
Clova AI Research, NAVER; NCSOFT; Boeing Korea Engineering and Technology Center
Abstract
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on the regions that most distinguish the source and target domains, based on the attention map obtained from the auxiliary classifier. Unlike previous attention-based methods, which cannot handle geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model flexibly control the amount of change in shape and texture via parameters learned from the dataset. Experimental results show the superiority of the proposed method over existing state-of-the-art models with a fixed network architecture and hyper-parameters. Our code and datasets are available at https://github.com/taki0112/UGATIT or https://github.com/znxlwm/UGATIT-pytorch.
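The attention map described above follows the Class Activation Map (CAM) idea: the auxiliary domain classifier operates on globally average-pooled encoder features, and its per-channel weights indicate which feature maps are domain-discriminative. A minimal sketch of this weighting (numpy stand-in; the function name, shapes, and normalization are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def cam_attention(features, fc_weights, eps=1e-8):
    """CAM-style attention sketch.
    features: (C, H, W) feature maps from the encoder.
    fc_weights: (C,) weights of the auxiliary domain classifier acting on
    globally average-pooled features. The weighted sum of feature maps
    highlights domain-discriminative regions."""
    cam = np.einsum('c,chw->hw', fc_weights, features)  # (H, W)
    # Normalize to [0, 1] so the map can be used as an attention mask
    # (a simplifying assumption for this sketch).
    cam = cam - cam.min()
    return cam / (cam.max() + eps)
```

In the paper the resulting map multiplies the encoder features so that later layers concentrate on the discriminative regions; the sketch only shows how the map itself is formed.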
Contribution
- We propose a novel method for unsupervised image-to-image translation with a new attention module and a new normalization function, AdaLIN
- Our attention module guides the model to focus its transformation on the regions that distinguish the source and target domains, based on the attention map obtained from the auxiliary classifier
- The AdaLIN function helps our attention-guided model flexibly control the amount of change in shape and texture without modifying the model architecture or the hyper-parameters
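AdaLIN interpolates between Instance Normalization (per-channel statistics, which preserve texture-level style) and Layer Normalization (whole-layer statistics, which allow larger shape changes) with a learned ratio rho clipped to [0, 1]. A minimal sketch, assuming a single image of shape (C, H, W) and per-channel gamma/beta (in U-GAT-IT, gamma and beta are actually produced by fully connected layers from the attention features; here they are plain parameters for illustration):

```python
import numpy as np

def adalin(x, gamma, beta, rho, eps=1e-5):
    """AdaLIN sketch: x has shape (C, H, W).
    a_IN uses per-channel (instance) statistics, a_LN uses statistics
    over all of C, H, W. rho in [0, 1] mixes the two; gamma and beta
    are per-channel affine parameters, shape (C,)."""
    rho = np.clip(rho, 0.0, 1.0)
    # Instance-norm branch: normalize each channel independently.
    mu_in = x.mean(axis=(1, 2), keepdims=True)
    var_in = x.var(axis=(1, 2), keepdims=True)
    a_in = (x - mu_in) / np.sqrt(var_in + eps)
    # Layer-norm branch: normalize over the whole layer.
    mu_ln = x.mean(keepdims=True)
    var_ln = x.var(keepdims=True)
    a_ln = (x - mu_ln) / np.sqrt(var_ln + eps)
    mixed = rho * a_in + (1.0 - rho) * a_ln
    return gamma[:, None, None] * mixed + beta[:, None, None]
```

With rho = 1 this reduces to Adaptive Instance Normalization; with rho = 0 it reduces to Layer Normalization, which is why a single learned scalar lets the model adapt the shape/texture trade-off per dataset.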
Related Works
Generative Adversarial Networks; Image-to-image translation; Class Activation Map; Normalization
Comparisons
CycleGAN, UNIT, MUNIT, DRIT, AGGAN, CartoonGAN