arXiv:2310.15111

Matryoshka Diffusion Models

Published on Oct 23, 2023
· Featured in Daily Papers on Oct 24, 2023

Abstract

Diffusion models are the de facto approach for generating high-quality images and videos, but learning high-dimensional models remains a formidable task due to computational and optimization challenges. Existing methods often resort to training cascaded models in pixel space or using a downsampled latent space of a separately trained auto-encoder. In this paper, we introduce Matryoshka Diffusion Models (MDM), an end-to-end framework for high-resolution image and video synthesis. We propose a diffusion process that denoises inputs at multiple resolutions jointly and uses a NestedUNet architecture where features and parameters for small-scale inputs are nested within those of large scales. In addition, MDM enables a progressive training schedule from lower to higher resolutions, which leads to significant improvements in optimization for high-resolution generation. We demonstrate the effectiveness of our approach on various benchmarks, including class-conditioned image generation, high-resolution text-to-image, and text-to-video applications. Remarkably, we can train a single pixel-space model at resolutions of up to 1024x1024 pixels, demonstrating strong zero-shot generalization using the CC12M dataset, which contains only 12 million images.
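
To make the joint multi-resolution idea concrete, below is a minimal sketch, not the authors' code: a toy two-level denoiser where the low-resolution branch's features are reused ("nested") inside the high-resolution branch, and both resolutions are denoised and optimized together. The names `TinyNestedUNet` and `joint_denoising_loss`, and the simple linear corruption in place of a real noise schedule, are illustrative assumptions; the paper's NestedUNet is far larger and conditions on text.

```python
# Illustrative sketch only (hypothetical names, simplified schedule), loosely
# following the paper's description of joint multi-resolution denoising.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyNestedUNet(nn.Module):
    """Low-resolution branch nested inside the high-resolution branch."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Inner (low-resolution) denoiser; its features are reused by the outer branch.
        self.inner_in = nn.Conv2d(3, channels, 3, padding=1)
        self.inner_mid = nn.Conv2d(channels, channels, 3, padding=1)
        self.inner_out = nn.Conv2d(channels, 3, 3, padding=1)
        # Outer (high-resolution) layers wrap around the inner ones.
        self.outer_in = nn.Conv2d(3, channels, 3, padding=1)
        self.outer_out = nn.Conv2d(2 * channels, 3, 3, padding=1)

    def forward(self, x_lo: torch.Tensor, x_hi: torch.Tensor):
        # Low-resolution path: plain conv stack producing a noise estimate.
        h_lo = F.silu(self.inner_in(x_lo))
        h_lo = F.silu(self.inner_mid(h_lo))
        eps_lo = self.inner_out(h_lo)
        # High-resolution path reuses the (upsampled) low-resolution features,
        # i.e. the small-scale computation is nested inside the large-scale one.
        h_hi = F.silu(self.outer_in(x_hi))
        h_lo_up = F.interpolate(h_lo, size=x_hi.shape[-2:], mode="nearest")
        eps_hi = self.outer_out(torch.cat([h_hi, h_lo_up], dim=1))
        return eps_lo, eps_hi


def joint_denoising_loss(model, img_hi, t_frac):
    """One joint multi-resolution denoising step with a shared noise level."""
    img_lo = F.interpolate(img_hi, scale_factor=0.5, mode="bilinear", align_corners=False)
    noise_lo, noise_hi = torch.randn_like(img_lo), torch.randn_like(img_hi)
    # Simple linear-interpolation corruption as a stand-in for the true schedule.
    a = (1.0 - t_frac).view(-1, 1, 1, 1)
    x_lo = a * img_lo + (1 - a) * noise_lo
    x_hi = a * img_hi + (1 - a) * noise_hi
    pred_lo, pred_hi = model(x_lo, x_hi)
    # Losses at both resolutions are optimized together.
    return F.mse_loss(pred_lo, noise_lo) + F.mse_loss(pred_hi, noise_hi)


if __name__ == "__main__":
    model = TinyNestedUNet()
    imgs = torch.randn(2, 3, 64, 64)   # stand-in for a batch of 64x64 training images
    t = torch.rand(2)                  # per-sample noise levels in [0, 1]
    loss = joint_denoising_loss(model, imgs, t)
    loss.backward()
    print(float(loss))
```

The progressive schedule described in the abstract would, under these assumptions, amount to first training only the inner (low-resolution) layers and later enabling the outer layers and the high-resolution loss term.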

Community

The model's grasp of form and structure seems remarkably strong for being trained on such a small dataset. It's on par with, if not better than, SDXL in that regard! I imagine this has partly to do with the T5 encoder, but the architecture and progressive training certainly make a big difference.

I feel like if we combined this paper's architectural/training advancements with DALLE 3's strategy of training on highly detailed machine-generated captions, and scaled all of this up to something like LAION-2B, it could result in a very strong model.
