Generating Compositional Scenes via Text-to-image RGBA Instance Generation
Abstract
Text-to-image diffusion generative models can generate high-quality images at the cost of tedious prompt engineering. Controllability can be improved by introducing layout conditioning; however, existing methods lack layout editing ability and fine-grained control over object attributes. The concept of multi-layer generation holds great potential to address these limitations; however, generating image instances concurrently with scene composition limits control over fine-grained object attributes, relative positioning in 3D space, and scene manipulation abilities. In this work, we propose a novel multi-stage generation paradigm that is designed for fine-grained control, flexibility, and interactivity. To ensure control over instance attributes, we devise a novel training paradigm to adapt a diffusion model to generate isolated scene components as RGBA images with transparency information. To build complex images, we employ these pre-generated instances and introduce a multi-layer composite generation process that smoothly assembles components into realistic scenes. Our experiments show that our RGBA diffusion model is capable of generating diverse and high-quality instances with precise control over object attributes. Through multi-layer composition, we demonstrate that our approach can build and manipulate images from highly complex prompts with fine-grained control over object appearance and location, granting a higher degree of control than competing methods.
Community
We propose an approach for layered text-to-image generation, designed for scenes with complex layouts and multiple objects with fine-grained attributes.
First, we fine-tune a diffusion model to generate individual objects as RGBA images with transparency. These objects are then composited into multi-instance scenes using a noise-blending technique, where each instance is associated with a specific image layer.
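Below is a minimal sketch of the layer-wise noise-blending idea described above, assuming a diffusers-style UNet and scheduler and alpha masks taken from the pre-generated RGBA instances (downsampled to latent resolution). The function and variable names are illustrative and are not the paper's actual code or API.

```python
import torch

def blended_denoising_step(unet, scheduler, latents, t,
                           layer_embeds, layer_alphas, bg_embed):
    """One sampling step in which each instance layer contributes its own noise
    prediction, merged back to front according to its RGBA alpha mask.

    Illustrative sketch only: exact blending used in the paper may differ.
    """
    # Background noise prediction over the full canvas.
    eps = unet(latents, t, encoder_hidden_states=bg_embed).sample

    # Blend in the prediction conditioned on each instance's prompt, restricted
    # to that instance's region via its alpha mask.
    for embed, alpha in zip(layer_embeds, layer_alphas):
        eps_layer = unet(latents, t, encoder_hidden_states=embed).sample
        alpha = alpha.to(latents.dtype)  # (1, 1, h, w) at latent resolution, in [0, 1]
        eps = alpha * eps_layer + (1.0 - alpha) * eps

    # Standard scheduler update with the blended noise estimate.
    return scheduler.step(eps, t, latents).prev_sample
```

Because each instance keeps its own layer and mask, individual objects can be repositioned or swapped and the scene re-composited without regenerating the other layers.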
Project page: https://mulanrgba.github.io/
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- 3DIS: Depth-Driven Decoupled Instance Synthesis for Text-to-Image Generation (2024)
- DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models (2024)
- Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation (2024)
- HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation (2024)
- MCGM: Mask Conditional Text-to-Image Generative Model (2024)