Authors
Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, Jiwon Kim
SK T-Brain
Summary
Our proposed GAN model for relation discovery – DiscoGAN – couples two of the previously proposed models (Figure 2c). Each of the two coupled models learns the mapping from one domain to the other, as well as the reverse mapping used for reconstruction.
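A minimal sketch of this coupled objective in PyTorch, not the official SKTBrain implementation: the network sizes, the 3x64x64 image assumption, and the names G_AB, G_BA, D_A, D_B are illustrative. Each direction contributes an adversarial term plus a reconstruction term from the reverse mapping.

```python
import torch
import torch.nn as nn

def generator():
    # Encoder-decoder mapping a 3x64x64 image to a 3x64x64 image.
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
    )

def discriminator():
    # Outputs one real/fake probability per image.
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Flatten(),
        nn.Linear(128 * 16 * 16, 1), nn.Sigmoid(),
    )

G_AB, G_BA = generator(), generator()    # A -> B and B -> A mappings
D_A, D_B = discriminator(), discriminator()

bce, mse = nn.BCELoss(), nn.MSELoss()

def generator_loss(x_A, x_B):
    """Adversarial + reconstruction loss for both coupled directions."""
    x_AB, x_BA = G_AB(x_A), G_BA(x_B)        # translated images
    x_ABA, x_BAB = G_BA(x_AB), G_AB(x_BA)    # reconstructions
    gan = (bce(D_B(x_AB), torch.ones_like(D_B(x_AB))) +
           bce(D_A(x_BA), torch.ones_like(D_A(x_BA))))
    recon = mse(x_ABA, x_A) + mse(x_BAB, x_B)
    return gan + recon
```

The reconstruction terms are what tie the two directions together: without them, each generator could map its inputs to arbitrary realistic-looking images in the other domain with no consistent relation.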
Abstract
While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for the official implementation is publicly available at https://github.com/SKTBrain/DiscoGAN
Overview
Our GAN-based model trains with two independently collected sets of images and learns how to map between the two domains without any extra labels. In this paper, we reduce this problem to generating a new image in one domain given an image from the other domain.
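A hedged sketch of one alternating training step, continuing the block above and under the same assumptions: x_A and x_B are unpaired batches drawn independently from the two image sets, and the optimizer settings are illustrative.

```python
import itertools
import torch

g_opt = torch.optim.Adam(itertools.chain(G_AB.parameters(), G_BA.parameters()), lr=2e-4)
d_opt = torch.optim.Adam(itertools.chain(D_A.parameters(), D_B.parameters()), lr=2e-4)

def train_step(x_A, x_B):
    # Discriminators: push real images toward 1, translated images toward 0.
    x_AB, x_BA = G_AB(x_A).detach(), G_BA(x_B).detach()
    d_loss = (bce(D_A(x_A), torch.ones_like(D_A(x_A))) +
              bce(D_A(x_BA), torch.zeros_like(D_A(x_BA))) +
              bce(D_B(x_B), torch.ones_like(D_B(x_B))) +
              bce(D_B(x_AB), torch.zeros_like(D_B(x_AB))))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generators: fool both discriminators while reconstructing both inputs.
    g_loss = generator_loss(x_A, x_B)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Once trained, relation discovery reduces to plain feed-forward translation:
# x_AB = G_AB(x_A) produces the corresponding domain-B image for a domain-A input.
```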