Error Analyses of Auto-Regressive Video Diffusion Models: A Unified Framework
Abstract
A variety of Auto-Regressive Video Diffusion Models (ARVDMs) have achieved remarkable successes in generating realistic long-form videos. However, theoretical analyses of these models remain scant. In this work, we develop theoretical underpinnings for these models and use our insights to improve the performance of existing models. We first develop Meta-ARVDM, a unified framework of ARVDMs that subsumes most existing methods. Using Meta-ARVDM, we analyze the KL divergence between the videos generated by Meta-ARVDM and the true videos. Our analysis uncovers two important phenomena inherent to ARVDMs -- error accumulation and the memory bottleneck. By deriving an information-theoretic impossibility result, we show that the memory bottleneck phenomenon cannot be avoided. To mitigate the memory bottleneck, we design various network structures that explicitly use more past frames. By compressing the past frames, we also achieve a significantly improved trade-off between mitigating the memory bottleneck and maintaining inference efficiency. Experimental results on DMLab and Minecraft validate the efficacy of our methods. Our experiments also demonstrate a Pareto frontier between error accumulation and the memory bottleneck across different methods.
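To make the autoregressive setup concrete, here is a minimal sketch of a chunk-by-chunk generation loop in the spirit of Meta-ARVDM. Everything in it is an illustrative assumption rather than the paper's implementation: `eps_model` stands in for a trained denoiser, and `NUM_STEPS`, `WINDOW`, and the frame shape are made-up constants.

```python
# Minimal sketch of an autoregressive video diffusion rollout (assumptions
# throughout; not the paper's actual code). Each new frame is sampled by a
# reverse-diffusion loop conditioned on a fixed window of past frames.
import torch

NUM_STEPS = 50        # reverse-diffusion steps per frame (assumed)
WINDOW = 4            # number of past frames used as context (assumed)
C, H, W = 3, 64, 64   # frame shape (assumed)

def eps_model(x_t, t, context):
    # Placeholder for a trained noise-prediction network conditioned on
    # the timestep and the past-frame context.
    return torch.zeros_like(x_t)

@torch.no_grad()
def generate_frame(past_frames):
    """Sample one new frame conditioned on the last WINDOW past frames."""
    context = torch.cat(past_frames[-WINDOW:], dim=0)  # (WINDOW, C, H, W)
    x = torch.randn(1, C, H, W)                        # start from pure noise
    for t in reversed(range(NUM_STEPS)):
        eps = eps_model(x, t, context)
        # Crudely simplified update; a real sampler follows the noise schedule.
        x = x - eps / NUM_STEPS
    return x

# Autoregressive rollout: imperfect samples re-enter the context of later
# steps (error accumulation), and frames older than WINDOW are forgotten
# (memory bottleneck) -- the two phenomena the paper analyzes.
frames = [torch.zeros(1, C, H, W) for _ in range(WINDOW)]
for _ in range(8):
    frames.append(generate_frame(frames))
```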
Community
🚶‍♂️ Ever gotten lost, only to find yourself back at the same spot, struggling to recall what it looked like?
We challenge video generation models with the same problem! 🤯
🔍 Key takeaways from our work:
1️⃣ We first pinpoint error accumulation & memory bottlenecks in autoregressive video diffusion—both theoretically & experimentally.
2️⃣ We reveal a surprising link: better memory retrieval ↔ faster error accumulation.
3️⃣ A simple memory module compresses the context into a limited number of tokens while preserving retrieval ability (see the sketch after this list).
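As one way such a memory module could look, below is an illustrative Perceiver-style sketch that pools an arbitrarily long stream of past-frame tokens into a fixed budget of memory tokens via cross-attention. The class name, dimensions, and token counts are assumptions for illustration, not the paper's exact architecture.

```python
# Illustrative memory module (assumed design, not the paper's architecture):
# learned query tokens cross-attend to all retained past-frame tokens, so
# the context is compressed to a fixed number of memory tokens.
import torch
import torch.nn as nn

class CompressedMemory(nn.Module):
    def __init__(self, dim=256, num_mem_tokens=16, num_heads=4):
        super().__init__()
        # A fixed budget of learned queries: no matter how long the past
        # is, only num_mem_tokens summaries are handed to the denoiser.
        self.queries = nn.Parameter(torch.randn(1, num_mem_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, past_tokens):
        # past_tokens: (batch, num_past_tokens, dim), pooled over all frames
        q = self.queries.expand(past_tokens.size(0), -1, -1)
        mem, _ = self.attn(q, past_tokens, past_tokens)
        return mem  # (batch, num_mem_tokens, dim)

# Usage: 1024 past-frame tokens are compressed into 16 memory tokens,
# trading retrieval fidelity for inference efficiency.
module = CompressedMemory()
memory = module(torch.randn(2, 1024, 256))  # -> torch.Size([2, 16, 256])
```

The fixed token budget is what makes the trade-off explicit: more memory tokens improve retrieval of the past but cost more at inference time, matching the Pareto frontier the paper reports.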
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion (2025)
- Dynamical Diffusion: Learning Temporal Dynamics with Diffusion Models (2025)
- MALT Diffusion: Memory-Augmented Latent Transformers for Any-Length Video Generation (2025)
- Understanding Representation Dynamics of Diffusion Models via Low-Dimensional Modeling (2025)
- History-Guided Video Diffusion (2025)
- FreqPrior: Improving Video Diffusion Models with Frequency Filtering Gaussian Noise (2025)
- TPDiff: Temporal Pyramid Video Diffusion Model (2025)