---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---

# ddpm-ema-butterflies-64

## Model description

This diffusion model was trained with the 🤗 Diffusers library on the `huggan/smithsonian_butterflies_subset` dataset, using the library's unconditional image generation training script.
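
That script follows the usual DDPM training recipe: add noise to a clean image at a randomly sampled timestep and train a UNet to predict the added noise. Below is a minimal sketch of one such training step using Diffusers building blocks; the UNet and scheduler configurations are illustrative assumptions, not the exact settings of the script.

```python
import torch
import torch.nn.functional as F
from diffusers import UNet2DModel, DDPMScheduler

# Illustrative sketch of a single DDPM training step.
# The model/scheduler configs below are assumptions, not the exact ones used for this checkpoint.
model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)
noise_scheduler = DDPMScheduler(num_train_timesteps=1000)

clean_images = torch.randn(4, 3, 64, 64)  # stand-in for a preprocessed batch of butterfly images
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (clean_images.shape[0],))

# forward-diffuse the clean images to the sampled timesteps
noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

# the UNet is trained to predict the noise that was added
noise_pred = model(noisy_images, timesteps).sample
loss = F.mse_loss(noise_pred, noise)
loss.backward()
```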

## Intended uses & limitations

### How to use

```python
from diffusers import DDPMPipeline

model_id = "ceyda/ddpm-ema-butterflies-64"

# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id)  # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference

# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]

# save image
image.save("ddpm_generated_image.png")
```
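
As the comment above notes, the same checkpoint can be loaded into a DDIM pipeline to trade a little sample quality for much faster generation. A hedged example follows; the step count is just a common choice, not a recommendation from the model author.

```python
from diffusers import DDIMPipeline

# load the same weights with a DDIM sampler for faster inference
ddim = DDIMPipeline.from_pretrained("ceyda/ddpm-ema-butterflies-64")

# 50 denoising steps instead of the full 1000-step DDPM schedule
image = ddim(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```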

### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

The model was trained on the `huggan/smithsonian_butterflies_subset` dataset, a subset of butterfly images from the Smithsonian open-access collections, preprocessed to the model's 64×64 resolution.
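
For a quick look at the data, the dataset can be loaded directly from the Hub with 🤗 Datasets; the `train` split name below is an assumption based on the usual layout of HuggAN datasets.

```python
from datasets import load_dataset

# inspect the training data; the split name "train" is assumed
ds = load_dataset("huggan/smithsonian_butterflies_subset", split="train")
print(ds)               # number of examples and column names
print(ds.column_names)  # the image column is what gets resized to 64x64 during training
```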

## Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they typically map onto code):

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 16
  • gradient_accumulation_steps: 1
  • optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
  • lr_scheduler: cosine
  • lr_warmup_steps: 500
  • ema_inv_gamma: 1.0
  • ema_power: 0.75
  • ema_max_decay: 0.9999
  • mixed_precision: no
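
As referenced above, here is a sketch of how these settings typically translate into code with PyTorch and Diffusers. The UNet instantiation and the total step count are placeholders, and the EMA warm-up formula shown is the one commonly paired with inv_gamma/power/max_decay style parameters.

```python
import torch
from diffusers import UNet2DModel
from diffusers.optimization import get_cosine_schedule_with_warmup

model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)  # placeholder config
total_training_steps = 10_000  # placeholder; the actual number of steps is not listed above

# AdamW with the betas / weight decay / epsilon listed above
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.95, 0.999),
    weight_decay=1e-6,
    eps=1e-8,
)

# cosine learning-rate schedule with 500 warm-up steps
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,
    num_training_steps=total_training_steps,
)

# EMA decay warm-up as typically computed from ema_inv_gamma / ema_power / ema_max_decay
def ema_decay(step, inv_gamma=1.0, power=0.75, max_decay=0.9999):
    return min(max_decay, 1 - (1 + step / inv_gamma) ** -power)
```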

## Training results

See the 📈 TensorBoard logs for the training curves of this run.