Authors
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer
Ludwig Maximilian University of Munich; IWR, Heidelberg University; Runway ML
Summary
Our latent diffusion models (LDMs) achieve new state-of-the-art scores for image inpainting and class-conditional image synthesis and highly competitive performance on various tasks, including text-to-image synthesis, unconditional image generation and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.
Abstract
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.
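To make the core idea concrete, the sketch below shows one training step of a diffusion model run in the latent space of a frozen autoencoder. This is a minimal PyTorch-style illustration, not the released implementation: `Encoder` and `DenoisingNet` are toy stand-ins for the paper's pretrained autoencoder and time-conditional UNet, and the schedule and dimensions are assumptions chosen for brevity.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins: in the paper, the encoder comes from a pretrained, frozen
# autoencoder (KL- or VQ-regularized) and the denoiser is a time-conditional UNet.
class Encoder(torch.nn.Module):
    def __init__(self, latent_channels=4):
        super().__init__()
        # Downsample 3-channel images by a factor of 8 into a low-dimensional latent.
        self.conv = torch.nn.Conv2d(3, latent_channels, kernel_size=8, stride=8)

    def forward(self, x):
        return self.conv(x)

class DenoisingNet(torch.nn.Module):
    def __init__(self, latent_channels=4):
        super().__init__()
        self.net = torch.nn.Conv2d(latent_channels, latent_channels, 3, padding=1)

    def forward(self, z_t, t):
        return self.net(z_t)  # predicts the noise added to the clean latent

def ldm_training_step(encoder, denoiser, optimizer, x, alphas_cumprod):
    """One diffusion training step carried out entirely in latent space."""
    with torch.no_grad():                                  # autoencoder stays frozen
        z0 = encoder(x)                                    # (B, 3, H, W) -> (B, C, H/8, W/8)
    t = torch.randint(0, alphas_cumprod.numel(), (x.shape[0],))
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(z0)
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps   # forward diffusion on latents
    loss = F.mse_loss(denoiser(z_t, t), eps)               # standard eps-prediction objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage with random data and a linear beta schedule.
encoder, denoiser = Encoder(), DenoisingNet()
alphas_cumprod = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, 1000), dim=0)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
loss = ldm_training_step(encoder, denoiser, opt, torch.randn(2, 3, 256, 256), alphas_cumprod)
```

Because the latent is much smaller than the pixel grid, every forward/backward pass of the denoiser is correspondingly cheaper, which is where the computational savings over pixel-based DMs come from.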
Contribution
- In contrast to purely transformer-based approaches, our method scales more gracefully to higher-dimensional data and can thus (a) work on a compression level that provides more faithful and detailed reconstructions than previous work and (b) be efficiently applied to high-resolution synthesis of megapixel images
- We achieve competitive performance on multiple tasks (unconditional image synthesis, inpainting, stochastic super-resolution) and datasets while significantly lowering computational costs. Compared to pixel-based diffusion approaches, we also significantly decrease inference costs
- We show that, in contrast to previous work which learns both an encoder/decoder architecture and a score-based prior simultaneously, our approach does not require a delicate weighting of reconstruction and generative abilities. This ensures extremely faithful reconstructions and requires very little regularization of the latent space
- We find that for densely conditioned tasks such as super-resolution, inpainting and semantic synthesis, our model can be applied in a convolutional fashion and render large, consistent images of ~1024 x 1024 px
- Moreover, we design a general-purpose conditioning mechanism based on cross-attention, enabling multi-modal training. We use it to train class-conditional, text-to-image and layout-to-image models (see the sketch after this list)
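As a rough illustration of that cross-attention conditioning mechanism, the following PyTorch sketch shows the basic pattern: queries come from the UNet's spatial feature map, while keys and values come from an encoding of the conditioning input (text tokens, layout boxes, class labels). The dimensions and the use of `nn.MultiheadAttention` are assumptions made for the example, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Illustrative cross-attention conditioning layer: queries from the UNet's
    spatial features, keys/values from the conditioning tokens."""
    def __init__(self, feat_dim=320, cond_dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=heads,
                                          kdim=cond_dim, vdim=cond_dim,
                                          batch_first=True)
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats, cond):
        # feats: (B, C, h, w) intermediate UNet features; cond: (B, T, cond_dim) tokens
        b, c, h, w = feats.shape
        q = feats.flatten(2).transpose(1, 2)        # (B, h*w, C): one query per latent pixel
        out, _ = self.attn(q, cond, cond)           # spatial queries attend to conditioning
        out = self.proj(out).transpose(1, 2).reshape(b, c, h, w)
        return feats + out                          # residual connection back into the UNet

# Illustrative usage: 77 text-token embeddings conditioning a 16x16 feature map.
block = CrossAttentionBlock()
feats = torch.randn(2, 320, 16, 16)
text_tokens = torch.randn(2, 77, 768)
conditioned = block(feats, text_tokens)             # (2, 320, 16, 16)
```

Because the conditioning only enters through these attention layers, the same backbone can be trained with different conditioning encoders (text, layouts, class labels) without changing the rest of the architecture.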
Related Works
Generative Models for Image Synthesis; Diffusion Probabilistic Models; Two-Stage Image Synthesis