Abstract
Multimodal generative models that can understand and generate across multiple modalities are dominated by autoregressive (AR) approaches, which process tokens sequentially from left to right or top to bottom. These models jointly handle images, text, video, and audio for tasks such as image captioning, question answering, and image generation. In this work, we explore discrete diffusion models as a unified generative formulation in the joint text and image domain, building on their recent success in text generation. Discrete diffusion models offer several advantages over AR models, including improved control over the quality-versus-diversity trade-off of generated samples, the ability to perform joint multimodal inpainting (across both text and image domains), and greater controllability in generation through guidance. Leveraging these benefits, we present the first Unified Multimodal Discrete Diffusion (UniDisc) model, which is capable of jointly understanding and generating text and images for a variety of downstream tasks. We compare UniDisc to multimodal AR models in a scaling analysis, demonstrating that UniDisc outperforms them in both performance and inference-time compute while offering enhanced controllability, editability, inpainting, and a flexible trade-off between inference time and generation quality. Code and additional visualizations are available at https://unidisc.github.io.
Community
We trained a diffusion model to jointly model image and text pairs. This enables joint image-text inpainting, editing, and related tasks.
Our code is open-sourced: https://github.com/alexanderswerdlow/unidisc
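As a rough illustration of how masked discrete-diffusion inpainting can work, the sketch below starts a sequence with the regions to be regenerated replaced by a mask token and iteratively reveals the model's highest-confidence predictions (a MaskGIT-style schedule). Everything here is an assumption for illustration — the function names, the toy denoiser, and the confidence-based unmasking rule are not UniDisc's actual implementation:

```python
import numpy as np

MASK = -1  # sentinel id for a masked (noised) token; a real model reserves a vocab entry


def inpaint(tokens, keep, denoise_fn, steps=8):
    """Joint inpainting sketch: positions where `keep` is False start masked and
    are revealed over `steps` iterations, highest-confidence first.

    tokens:     (seq_len,) int array of token ids (text and image tokens concatenated)
    keep:       (seq_len,) bool array; True = condition on this token, False = inpaint it
    denoise_fn: callable mapping a (partially masked) token array to (seq_len, vocab) logits
    """
    tokens = np.where(keep, tokens, MASK)
    for s in range(steps):
        masked = tokens == MASK
        if not masked.any():
            break
        logits = denoise_fn(tokens)
        # softmax over the vocabulary to get per-position confidences
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        pred = probs.argmax(-1)
        conf = probs.max(-1)
        conf[~masked] = -np.inf  # only currently masked slots may be revealed
        # reveal a growing fraction of the remaining masked tokens each step
        n_reveal = max(1, int(np.ceil(masked.sum() / (steps - s))))
        idx = np.argsort(-conf)[:n_reveal]
        tokens[idx] = pred[idx]
    return tokens
```

With a toy random denoiser standing in for the trained model, the masked half of a sequence is filled in while the conditioned half is left untouched; because text and image tokens share one sequence, the same loop performs inpainting across both modalities.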
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Unified Autoregressive Visual Generation and Understanding with Continuous Tokens (2025)
- D2C: Unlocking the Potential of Continuous Autoregressive Image Generation with Discrete Tokens (2025)
- Unlocking Pretrained LLMs for Motion-Related Multimodal Generation: A Fine-Tuning Approach to Unify Diffusion and Next-Token Prediction (2025)
- Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation (2025)
- FlowTok: Flowing Seamlessly Across Text and Image Tokens (2025)
- Aligning Text to Image in Diffusion Models is Easier Than You Think (2025)
- UniForm: A Unified Diffusion Transformer for Audio-Video Generation (2025)