---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-ema-butterflies-64
## Model description
This diffusion model was trained with the [πŸ€— Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset, using this [training script](https://github.com/huggingface/diffusers/blob/cde0ed162a127b17f1b4d4b16ff7f736cf04e690/examples/train_unconditional.py).
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline

model_id = "ceyda/ddpm-ema-butterflies-64"

# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id)  # you can swap DDPMPipeline for DDIMPipeline or PNDMPipeline for faster inference

# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]

# save image
image.save("ddpm_generated_image.png")
```
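The comment above mentions DDIM and PNDM as faster alternatives. A minimal sketch of that route, assuming a recent diffusers version where a DDIM scheduler can be built from the trained scheduler's config; the 50-step count is illustrative, not a tuned value:

```python
from diffusers import DDPMPipeline, DDIMScheduler

model_id = "ceyda/ddpm-ema-butterflies-64"

pipeline = DDPMPipeline.from_pretrained(model_id)
# reuse the trained noise schedule, but step through it with DDIM
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)

# DDIM can sample in far fewer steps than the 1000 DDPM defaults to
image = pipeline(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```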
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
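
These values map directly onto standard PyTorch and diffusers utilities. Below is a minimal sketch of that wiring, assuming diffusers' `get_cosine_schedule_with_warmup` helper; `model` and `num_training_steps` are placeholders, not values from this run:

```python
import torch
from diffusers import UNet2DModel
from diffusers.optimization import get_cosine_schedule_with_warmup

model = UNet2DModel(sample_size=64)  # placeholder config; the trained one ships with the repo
num_training_steps = 10_000          # placeholder; the true step count is not reported here

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.95, 0.999),
    weight_decay=1e-6,
    eps=1e-8,
)
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,
    num_training_steps=num_training_steps,
)

# EMA decay at a given step, following the warm-up schedule used by the
# training script: decay = min(max_decay, 1 - (1 + step / inv_gamma) ** -power)
def ema_decay(step, inv_gamma=1.0, power=0.75, max_decay=0.9999):
    return min(max_decay, 1 - (1 + step / inv_gamma) ** -power)
```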
### Training results
πŸ“ˆ [TensorBoard logs](https://huggingface.co/ceyda/ddpm-ema-butterflies-64/tensorboard?#scalars)