FluxSpace: Disentangled Semantic Editing in Rectified Flow Transformers
Abstract
Rectified flow models have emerged as a dominant approach in image generation, demonstrating impressive capabilities in high-quality image synthesis. However, despite their effectiveness in visual generation, rectified flow models often struggle with disentangled image editing: precise, attribute-specific modifications that leave unrelated aspects of the image unchanged. In this paper, we introduce FluxSpace, a domain-agnostic image editing method that leverages a representation space to control the semantics of images generated by rectified flow transformers such as Flux. Building on the representations learned by the transformer blocks within these models, we propose a set of semantically interpretable representations that enable a wide range of editing tasks, from fine-grained attribute edits to artistic creation. This work offers a scalable and effective approach to disentangled image editing.
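Since the paper's code is not reproduced on this page, the sketch below only illustrates the general idea in a hedged way: capture attention outputs from a chosen Flux transformer block for a base prompt and an attribute prompt, take their difference as a linear semantic direction, and add a scaled copy of that direction back into the attention output while regenerating the base image. Everything beyond the public diffusers FluxPipeline API (the block index, the token pooling, the edit scale, the prompt pair) is an illustrative assumption, not the authors' released FluxSpace implementation.

```python
# Minimal sketch of attention-output editing in a Flux transformer block,
# assuming the public diffusers FluxPipeline. The block index, pooling, and
# edit scale are illustrative guesses, NOT the authors' released method.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
block = pipe.transformer.transformer_blocks[18].attn  # hypothetical choice
gen = lambda: torch.Generator("cuda").manual_seed(0)  # same noise every pass

captured = []  # pooled attention outputs, one entry per denoising step

def capture_hook(module, args, output):
    # Dual-stream Flux blocks return (image_stream, text_stream); pool the
    # image-stream tokens into one feature vector per call.
    feats = output[0] if isinstance(output, tuple) else output
    captured.append(feats.float().mean(dim=1).squeeze(0).cpu())

def make_edit_hook(direction, scale=4.0):
    def edit_hook(module, args, output):
        # Shift the image-stream attention output along the semantic direction.
        feats = output[0] if isinstance(output, tuple) else output
        shifted = feats + scale * direction.to(feats)
        return (shifted, *output[1:]) if isinstance(output, tuple) else shifted
    return edit_hook

# 1) Capture features for a base and an attribute prompt.
handle = block.register_forward_hook(capture_hook)
_ = pipe("a portrait of a person", num_inference_steps=4, generator=gen())
base = torch.stack(captured).mean(dim=0); captured.clear()
_ = pipe("a portrait of a smiling person", num_inference_steps=4, generator=gen())
attr = torch.stack(captured).mean(dim=0)
handle.remove()

# 2) Regenerate the base prompt with the edit direction injected.
handle = block.register_forward_hook(make_edit_hook(attr - base))
image = pipe("a portrait of a person", num_inference_steps=4, generator=gen()).images[0]
handle.remove()
image.save("smiling_edit.png")
```

The paper defines its editing directions within the attention outputs of the joint transformer blocks; the hook mechanics above only show where such an intervention can be attached in a standard Flux pipeline, not how FluxSpace itself constructs its representations.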
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Stable Flow: Vital Layers for Training-Free Image Editing (2024)
- SLayR: Scene Layout Generation with Rectified Flow (2024)
- SeedEdit: Align Image Re-Generation to Image Editing (2024)
- Latent Space Disentanglement in Diffusion Transformers Enables Precise Zero-shot Semantic Editing (2024)
- FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models (2024)
- Taming Rectified Flow for Inversion and Editing (2024)
- Learning Flow Fields in Attention for Controllable Person Image Generation (2024)