Abstract
The use of coarse-grained layouts for the controllable synthesis of complex scene images via deep generative models has recently gained popularity. However, the results of current approaches still fall short of their promise of high-resolution synthesis. We hypothesize that this is mostly due to the highly engineered nature of these approaches, which often rely on auxiliary losses and intermediate steps such as mask generators. In this note, we present an orthogonal approach to this task, where the generative model is based on pure likelihood training without additional objectives. To do so, we first optimize a powerful compression model with adversarial training, which learns to reconstruct its inputs via a discrete latent bottleneck and thereby effectively strips the latent representation of high-frequency details such as texture. Subsequently, we train an autoregressive transformer model to learn the distribution of the discrete image representations conditioned on a tokenized version of the layouts. Our experiments show that the resulting system is able to synthesize high-quality images consistent with the given layouts. In particular, we improve the state-of-the-art FID score on COCO-Stuff and on Visual Genome by up to 19% and 53%, respectively, and demonstrate the synthesis of images up to 512×512 px on COCO and Open Images.
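The abstract describes a two-stage pipeline: an adversarially trained compression model that maps images to discrete codes, followed by an autoregressive transformer that models those codes conditioned on a tokenized layout. The sketch below illustrates the second stage only, i.e. layout tokens used as a conditioning prefix for next-token prediction over image codes. It is a minimal PyTorch sketch under assumed hyperparameters; all names (`LayoutConditionedTransformer`, `image_vocab`, `layout_len`, ...) are illustrative and not taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class LayoutConditionedTransformer(nn.Module):
    """Autoregressive transformer over discrete image tokens, conditioned on a
    tokenized layout used as a prefix (a sketch, not the authors' code)."""

    def __init__(self, image_vocab=1024, layout_vocab=512,
                 d_model=512, n_head=8, n_layer=6,
                 seq_len=256, layout_len=32):
        super().__init__()
        self.img_emb = nn.Embedding(image_vocab, d_model)
        self.lay_emb = nn.Embedding(layout_vocab, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, layout_len + seq_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, image_vocab)
        self.layout_len = layout_len

    def forward(self, layout_tokens, image_tokens):
        # Layout tokens form a conditioning prefix; image tokens follow.
        x = torch.cat([self.lay_emb(layout_tokens),
                       self.img_emb(image_tokens)], dim=1)
        x = x + self.pos_emb[:, :x.size(1)]
        # Causal mask: each position may only attend to earlier positions.
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        h = self.blocks(x, mask=mask)
        # Each image position is predicted from the position before it,
        # starting at the last layout token.
        return self.head(h[:, self.layout_len - 1:-1])  # (B, seq_len, image_vocab)

# Illustrative usage: the discrete codes would come from the first-stage
# compression model (here replaced by random indices of a 16x16 latent grid).
model = LayoutConditionedTransformer()
layout = torch.randint(0, 512, (2, 32))    # tokenized layout (e.g. boxes + classes)
codes = torch.randint(0, 1024, (2, 256))   # discrete image-token indices
logits = model(layout, codes)
loss = nn.functional.cross_entropy(logits.reshape(-1, 1024), codes.reshape(-1))
```

This matches the "pure likelihood training" framing in the abstract: the only objective is the cross-entropy of the next image token given the layout prefix and the preceding tokens.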
| Document type: | Conference contribution (paper) |
|---|---|
| Faculty: | History and the Arts > Department of Arts > Art History |
| Subject areas: | 000 Computer science, information & general works > 000 Computer science, knowledge, systems; 700 Arts & recreation > 700 The arts |
| Remark: | The AI for Content Creation Workshop took place during the Conference on Computer Vision and Pattern Recognition (CVPR) |
| Language: | English |
| Document ID: | 109948 |
| Date published on Open Access LMU: | 03 Apr 2024, 11:43 |
| Last modified: | 28 May 2024, 12:33 |