
Farshad, Azade; Yeganeh, Yousef; Chi, Yu; Shen, Chengzhi; Ommer, Björn (ORCID: https://orcid.org/0000-0003-0766-120X) and Navab, Nassir (2023): SceneGenie: Scene Graph Guided Diffusion Models for Image Synthesis. International Conference on Computer Vision Workshops (ICCV Workshops), Paris, France, 02-06 October 2023. In: Jurie, Frédéric and Sharma, Gaurav (eds.): 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Piscataway, NJ: IEEE, pp. 88-98.

Full text is not available on 'Open Access LMU'.

Abstract

Text-conditioned image generation has made significant progress in recent years with generative adversarial networks and, more recently, diffusion models. While diffusion models conditioned on text prompts have produced impressive, high-quality images, accurately representing complex text prompts, such as the number of instances of a specific object, remains challenging. To address this limitation, we propose a novel guidance approach for the sampling process of the diffusion model that leverages bounding box and segmentation map information at inference time, without additional training data. Through a novel loss in the sampling process, our approach guides the model with semantic features from CLIP embeddings and enforces geometric constraints, leading to high-resolution images that accurately represent the scene. To obtain bounding box and segmentation map information, we structure the text prompt as a scene graph and enrich its nodes with CLIP embeddings. Our proposed model achieves state-of-the-art performance on two public benchmarks for image generation from scene graphs, surpassing both scene-graph-to-image and text-based diffusion models on various metrics. Our results demonstrate the effectiveness of incorporating bounding box and segmentation map guidance into the diffusion sampling process for more accurate text-to-image generation.
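The abstract describes gradient-based guidance applied during sampling, in the spirit of classifier guidance: a loss over bounding box regions and a segmentation map is differentiated with respect to the noisy sample, and the gradient steers each reverse step. The sketch below illustrates that general pattern in PyTorch under assumed interfaces; `denoiser`, `boxes`, `box_embeds`, and `seg_map` are hypothetical names, and the mean-pooled stand-in for a CLIP image encoder is a placeholder, not the paper's actual loss.

```python
# Minimal sketch of guided diffusion sampling with region losses.
# Assumes a DDPM-style denoiser eps = denoiser(x_t, t); this is an
# illustrative stand-in, not the authors' released implementation.
import torch
import torch.nn.functional as F

def guided_step(x_t, t, denoiser, alpha_bar, boxes, box_embeds, seg_map, scale=1.0):
    """One reverse step nudged down the gradient of a guidance loss.

    x_t        : (B, C, H, W) noisy sample at step t
    alpha_bar  : 1-D tensor of cumulative noise schedule values
    boxes      : list of (x1, y1, x2, y2) pixel coordinates
    box_embeds : list of unit-norm vectors, one per box; in the real
                 method these would come from CLIP text/node embeddings
    seg_map    : (B, 1, H, W) target foreground map with values in [0, 1]
    """
    x_t = x_t.detach().requires_grad_(True)

    # Predict noise and form the usual estimate of the clean image x0.
    eps = denoiser(x_t, t)
    x0_hat = (x_t - torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alpha_bar[t])

    # Semantic term: pull each box crop toward its target embedding.
    loss = x0_hat.new_zeros(())
    for (x1, y1, x2, y2), e in zip(boxes, box_embeds):
        crop = x0_hat[..., y1:y2, x1:x2]
        # Placeholder image "encoder": mean-pool the crop channels.
        # A real implementation would run a CLIP image encoder here.
        feat = F.normalize(F.adaptive_avg_pool2d(crop, 1).flatten(1), dim=-1)
        loss = loss + (1 - (feat * e).sum(-1)).mean()  # cosine distance

    # Geometric term: match a coarse foreground map to the segmentation.
    fg = x0_hat.mean(dim=1, keepdim=True).sigmoid()
    loss = loss + F.binary_cross_entropy(fg, seg_map)

    # Classifier-guidance-style update: shift x_t against the loss gradient.
    grad = torch.autograd.grad(loss, x_t)[0]
    return (x_t - scale * grad).detach()
```

Because the loss is evaluated on the denoised estimate x0_hat rather than the raw noisy sample, the same guidance can be applied at every timestep without retraining the diffusion model, which matches the abstract's claim of inference-time guidance with no additional training data.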
