arxiv:2312.02696

Analyzing and Improving the Training Dynamics of Diffusion Models

Published on Dec 5, 2023
· Featured in Daily Papers on Dec 6, 2023
Authors: Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, Samuli Laine

Abstract

Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets. In this paper, we identify and rectify several causes for uneven and ineffective training in the popular ADM diffusion model architecture, without altering its high-level structure. Observing uncontrolled magnitude changes and imbalances in both the network activations and weights over the course of training, we redesign the network layers to preserve activation, weight, and update magnitudes on expectation. We find that systematic application of this philosophy eliminates the observed drifts and imbalances, resulting in considerably better networks at equal computational complexity. Our modifications improve the previous record FID of 2.41 in ImageNet-512 synthesis to 1.81, achieved using fast deterministic sampling. As an independent contribution, we present a method for setting the exponential moving average (EMA) parameters post-hoc, i.e., after completing the training run. This allows precise tuning of EMA length without the cost of performing several training runs, and reveals its surprising interactions with network architecture, training time, and guidance.
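The post-hoc EMA idea mentioned in the abstract can be summarized as: during training, keep a small number of running weight averages with known averaging profiles; after training, approximate any desired EMA length as the least-squares optimal linear combination of those stored averages. Below is a minimal NumPy sketch of that general idea only; the function names, the toy power-function profiles, and the two-snapshot setup are illustrative assumptions, not the paper's code or API.

```python
import numpy as np

def posthoc_coefficients(profiles, target):
    # profiles: (T, K) matrix; column k holds the per-step weights of the
    # averaging profile that produced stored average k over T saved steps.
    # target: (T,) per-step weights of the EMA profile chosen after training.
    coeffs, *_ = np.linalg.lstsq(profiles, target, rcond=None)
    return coeffs  # (K,) mixing coefficients

def posthoc_ema(snapshots, coeffs):
    # snapshots: list of K averaged parameter vectors saved during training.
    return sum(c * s for c, s in zip(coeffs, snapshots))

# Toy usage (hypothetical profiles): two stored averages, one emphasizing
# recent steps and one flatter, combined to approximate a medium-length EMA.
T = 1000
t = np.arange(1, T + 1, dtype=np.float64)
short = t ** 4.0 / (t ** 4.0).sum()    # strongly recency-weighted average
flat = t ** 0.5 / (t ** 0.5).sum()     # nearly uniform average
target = t ** 2.0 / (t ** 2.0).sum()   # profile requested post hoc
coeffs = posthoc_coefficients(np.stack([short, flat], axis=1), target)
reconstructed = posthoc_ema([np.ones(8) * 0.1, np.ones(8) * 0.2], coeffs)  # dummy snapshots
```

Because the reconstruction is a cheap linear solve over already-saved snapshots, the EMA length can be swept densely after training instead of rerunning training once per candidate length, which is what makes the interactions with architecture, training time, and guidance practical to study.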

Community

I am curious about Algorithm 1. As stated in the Forced weight normalization section, the goal is to make the weights have a norm of sqrt(N). Why do we scale the normalized weights by 1/sqrt(N) instead?

@kinyugo if I'm not mistaken, it's because normalize() already rescales each output channel's weight vector to norm sqrt(N), so the stored weights satisfy the sqrt(N) goal. The extra division by sqrt(N) is applied only to the effective weights used in the forward pass, giving them unit norm so that activation magnitudes are preserved.
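For anyone else puzzling over the same step, here is a rough paraphrase of how Algorithm 1 appears to work (a hedged PyTorch sketch, not the authors' exact code): normalize() rescales each output channel's weight vector to norm sqrt(N), where N is the fan-in, and forced weight normalization copies that back into the stored parameters during training; the subsequent division by sqrt(N) then turns the effective weight used for the convolution into a unit-norm vector.

```python
import numpy as np
import torch
import torch.nn.functional as F

def normalize(x, dim=None, eps=1e-4):
    # Rescale each slice along `dim` to norm sqrt(N), i.e. unit magnitude
    # per element on expectation (N = number of normalized elements).
    if dim is None:
        dim = list(range(1, x.ndim))
    n = torch.linalg.vector_norm(x, dim=dim, keepdim=True, dtype=torch.float32)
    n = eps + n * np.sqrt(n.numel() / x.numel())
    return x / n.to(x.dtype)

class MPConv(torch.nn.Module):  # sketch of a magnitude-preserving conv layer
    def __init__(self, in_channels, out_channels, kernel=3):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_channels, in_channels, kernel, kernel))

    def forward(self, x, gain=1.0):
        w = self.weight.to(torch.float32)
        if self.training:
            with torch.no_grad():
                # Forced weight normalization: the *stored* weights keep
                # norm sqrt(N) per output channel (N = fan-in).
                self.weight.copy_(normalize(w))
        w = normalize(w)                        # norm sqrt(N) per output channel
        w = w * (gain / np.sqrt(w[0].numel()))  # divide by sqrt(N): unit-norm effective weight
        return F.conv2d(x, w.to(x.dtype), padding='same')
```

So the sqrt(N) goal refers to the stored parameters (which keeps Adam's relative update size uniform across layers), while the 1/sqrt(N) factor is the magnitude-preserving scaling of the weights actually applied to the activations.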

