Unconditional image generation

Unconditional image generation is not conditioned on any text or images, unlike text-to-image or image-to-image models. It can only generate images that resemble its training data distribution.
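
To get a feel for what this means in practice, here is a minimal sketch (using google/ddpm-celebahq-256 as an example pretrained checkpoint) that samples an image without any prompt or conditioning input:

from diffusers import DDPMPipeline

# Load a pretrained unconditional DDPM pipeline from the Hub
pipeline = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256").to("cuda")

# No prompt or conditioning input is passed -- the pipeline simply draws
# an image from the distribution it learned during training
image = pipeline().images[0]
image.save("sample.png")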

This guide will show you how to train an unconditional image generation model on existing datasets as well as your own custom dataset. All the training scripts for unconditional image generation can be found here if you’re interested in learning more about the training details.

Before running the script, make sure you install the library’s training dependencies:

pip install diffusers[training] accelerate datasets

Next, initialize an 🤗 Accelerate environment with:

accelerate config

To set up a default 🤗 Accelerate environment without choosing any configurations:

accelerate config default

Or if your environment doesn’t support an interactive shell (like a notebook), you can use:

from accelerate.utils import write_basic_config

write_basic_config()

Upload model to Hub

You can upload your model to the Hub by adding the following argument to the training script:

--push_to_hub
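
With this flag, the final pipeline is also pushed to a model repository under your account once training completes, so it can be loaded back later with from_pretrained. A minimal sketch, where the repository id is a placeholder for the one your run creates:

from diffusers import DDPMPipeline

# "your-username/ddpm-ema-flowers-64" is a placeholder -- replace it with
# the repository id created by your own training run
pipeline = DDPMPipeline.from_pretrained("your-username/ddpm-ema-flowers-64")
image = pipeline().images[0]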

Save and load checkpoints

It is a good idea to regularly save checkpoints in case anything happens during training. To save a checkpoint, pass the following argument to the training script:

--checkpointing_steps=500

The full training state is saved in a subfolder of the output_dir every 500 steps, allowing you to resume training from a saved checkpoint by passing the --resume_from_checkpoint argument to the training script:

--resume_from_checkpoint="checkpoint-1500"
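
Checkpoints are written as checkpoint-<step> subfolders inside the output directory. If you want to find the most recent one programmatically before resuming, a small sketch like this works (it assumes the flowers example below, which uses --output_dir="ddpm-ema-flowers-64"):

import os

# Output directory from the flowers example below (an assumption for illustration)
output_dir = "ddpm-ema-flowers-64"

# The training script saves the full training state as checkpoint-<step> subfolders
checkpoints = [d for d in os.listdir(output_dir) if d.startswith("checkpoint")]
latest = max(checkpoints, key=lambda name: int(name.split("-")[1]))
print(latest)  # e.g. checkpoint-1500, pass this to --resume_from_checkpoint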

Finetuning

You’re ready to launch the training script now! Specify the dataset to finetune on with the --dataset_name argument and the path to save the trained model to with the --output_dir argument. To use your own dataset, take a look at the Create a dataset for training guide.

The training script creates and saves a diffusion_pytorch_model.bin file in your repository.

💡 A full training run takes 2 hours on 4xV100 GPUs.

For example, to finetune on the Oxford Flowers dataset:

accelerate launch train_unconditional.py \
  --dataset_name="huggan/flowers-102-categories" \
  --resolution=64 \
  --output_dir="ddpm-ema-flowers-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision=no \
  --push_to_hub

Or if you want to train your model on the Pokemon dataset:

accelerate launch train_unconditional.py \
  --dataset_name="huggan/pokemon" \
  --resolution=64 \
  --output_dir="ddpm-ema-pokemon-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision=no \
  --push_to_hub
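
When training finishes, the script saves the final pipeline to --output_dir, and you can load it from that folder to sample from your finetuned model. A minimal sketch, assuming the flowers run above completed and saved to ddpm-ema-flowers-64:

import torch
from diffusers import DDPMPipeline

# Load the pipeline the training script saved to its output directory
pipeline = DDPMPipeline.from_pretrained("ddpm-ema-flowers-64").to("cuda")

# Fix the seed so the samples are reproducible
generator = torch.Generator(device="cuda").manual_seed(0)
images = pipeline(batch_size=4, generator=generator).images

for i, image in enumerate(images):
    image.save(f"flower-{i}.png")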

Training with multiple GPUs

🤗 Accelerate allows for seamless multi-GPU training. Follow the instructions here for running distributed training with 🤗 Accelerate. Here is an example command:

accelerate launch --mixed_precision="fp16" --multi_gpu train_unconditional.py \
  --dataset_name="huggan/pokemon" \
  --resolution=64 --center_crop --random_flip \
  --output_dir="ddpm-ema-pokemon-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --use_ema \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision="fp16" \
  --logger="wandb" \
  --push_to_hub