
ddpm-ema-anime-v2-128

Model description

This diffusion model was trained on the huggan/selfie2anime dataset using the 🤗 Diffusers library.

Intended uses & limitations

How to use

from diffusers import DDPMPipeline

model_id = "mrm8488/ddpm-ema-anime-v2-128"

# load model and scheduler
pipeline = DDPMPipeline.from_pretrained(model_id)

# run pipeline in inference (sample random noise and denoise)
image = pipeline().images[0]

# save the generated 128x128 image
image.save("anime_face_128.png")
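For reproducible or batched sampling, the pipeline call also accepts batch_size, a torch generator, and num_inference_steps. A minimal sketch assuming a recent diffusers release; the batch size, seed, and device handling are illustrative choices, not taken from the card:

import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("mrm8488/ddpm-ema-anime-v2-128")

# optional: move to GPU if one is available
pipeline = pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# fixed seed for reproducible samples (seed value is arbitrary)
generator = torch.Generator(device=pipeline.device).manual_seed(42)

# generate a small batch of 128x128 faces
images = pipeline(batch_size=4, generator=generator, num_inference_steps=1000).images

for i, img in enumerate(images):
    img.save(f"anime_face_128_{i}.png")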

Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

Training data

[TODO: describe the data used to train the model]
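The model description above names huggan/selfie2anime as the training set. A hedged sketch of loading it for inspection with 🤗 Datasets; the split name "train" and the field layout are assumptions, so check the dataset card:

from datasets import load_dataset

# load the dataset used for training (split name "train" is an assumption)
dataset = load_dataset("huggan/selfie2anime", split="train")

# inspect one example to see the available fields
print(dataset[0])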

Training hyperparameters

The following hyperparameters were used during training (a rough mapping to optimizer and scheduler code is sketched after the list):

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 4
  • gradient_accumulation_steps: 1
  • optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
  • lr_scheduler: cosine
  • lr_warmup_steps: 500
  • ema_inv_gamma: 1.0
  • ema_power: 0.75
  • ema_max_decay: 0.9999
  • mixed_precision: fp16
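As an illustration of how those values translate into code (not the exact training script; the total step count and the use of pipeline.unet as the trained network are assumptions), the optimizer and learning-rate schedule could be built like this:

import torch
from diffusers import DDPMPipeline
from diffusers.optimization import get_cosine_schedule_with_warmup

pipeline = DDPMPipeline.from_pretrained("mrm8488/ddpm-ema-anime-v2-128")
unet = pipeline.unet  # the denoising network

# AdamW with the betas, weight decay and epsilon listed above
optimizer = torch.optim.AdamW(
    unet.parameters(),
    lr=1e-4,
    betas=(0.95, 0.999),
    weight_decay=1e-6,
    eps=1e-8,
)

# cosine schedule with 500 warmup steps; the total step count is a placeholder
num_training_steps = 100_000  # hypothetical value, not taken from the card
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,
    num_training_steps=num_training_steps,
)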

Training results

📈 TensorBoard logs

Created by Manuel Romero/@mrm8488 with the support of Q Blocks

