Abstract
The use of coarse-grained layouts for controllable synthesis of complex scene images via deep generative models has recently gained popularity. However, results of current approaches still fall short of their promise of high-resolution synthesis. We hypothesize that this is mostly due to the highly engineered nature of these approaches, which often rely on auxiliary losses and intermediate steps such as mask generators. In this note, we present an orthogonal approach to this task, where the generative model is based on pure likelihood training without additional objectives. To do so, we first optimize a powerful compression model with adversarial training, which learns to reconstruct its inputs via a discrete latent bottleneck and thereby effectively strips the latent representation of high-frequency details such as texture. Subsequently, we train an autoregressive transformer model to learn the distribution of the discrete image representations conditioned on a tokenized version of the layouts. Our experiments show that the resulting system is able to synthesize high-quality images consistent with the given layouts. In particular, we improve the state-of-the-art FID score on COCO-Stuff and on Visual Genome by up to 19% and 53%, respectively, and demonstrate the synthesis of images up to 512×512 px on COCO and Open Images.
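The second stage described above can be sketched as follows: object layouts are quantized into discrete tokens (class id plus binned bounding-box coordinates), prefixed to the sequence of discrete image tokens produced by the compression model, and modeled autoregressively with a causally masked transformer. This is a minimal illustrative sketch in PyTorch, not the paper's implementation; all names, sizes, and the 5-tokens-per-object layout encoding are assumptions.

```python
# Hypothetical sketch of a layout-conditioned autoregressive transformer
# over discrete image tokens. Dimensions and tokenization are illustrative.
import torch
import torch.nn as nn


def tokenize_layout(boxes, num_bins=32, num_classes=10):
    # Each object = (class_id, x0, y0, x1, y1) with coordinates in [0, 1].
    # Coordinates are quantized into num_bins bins, so each object yields
    # 5 tokens; coordinate tokens are offset past the class-token range.
    tokens = []
    for cls, x0, y0, x1, y1 in boxes:
        tokens.append(cls)
        for c in (x0, y0, x1, y1):
            tokens.append(num_classes + min(int(c * num_bins), num_bins - 1))
    return torch.tensor(tokens, dtype=torch.long)


class LayoutToImageTransformer(nn.Module):
    def __init__(self, vocab_size=1024, d_model=128, n_layers=2,
                 n_heads=4, max_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, layout_tokens, image_tokens):
        # Condition by prefixing the layout sequence to the image-token
        # sequence; a causal mask restricts attention to earlier positions,
        # giving next-token logits over the discrete codebook.
        seq = torch.cat([layout_tokens, image_tokens], dim=1)
        pos = torch.arange(seq.size(1), device=seq.device)
        x = self.tok_emb(seq) + self.pos_emb(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
        return self.head(self.blocks(x, mask=mask))
```

At sampling time, one would feed only the layout-token prefix and decode image tokens one by one, then map them back to pixels with the compression model's decoder.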
| Item Type | Conference or Workshop Item (Paper) |
|---|---|
| Faculties | History and Art History > Department of Art History > Art History |
| Subjects | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 700 Arts and recreation > 700 Arts |
| Annotation | The AI for Content Creation Workshop took place during the Conference on Computer Vision and Pattern Recognition (CVPR) |
| Language | English |
| Item ID | 109948 |
| Date Deposited | 03 Apr 2024, 11:43 |
| Last Modified | 28 May 2024, 12:33 |