
Training data

huggan/few-shot-pokemon
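
For reference, a minimal sketch of loading this dataset with the `datasets` library (assuming a single `train` split with an `image` column, which is not stated on this card):

```python
# Sketch: inspect the training dataset used for this model.
# The split name and column name are assumptions, not taken from this card.
from datasets import load_dataset

dataset = load_dataset("huggan/few-shot-pokemon", split="train")
print(dataset)              # number of rows and column names
print(dataset[0]["image"])  # each example is expected to hold a PIL image
```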

Training hyperparameters

The following hyperparameters were used during training (an example launch command is sketched after the list):

--checkpointing_steps=1000
--dataset_name="huggan/few-shot-pokemon"
--resolution=128
--output_dir="ddpm-ema-pokemon-128"
--train_batch_size=16
--eval_batch_size=16
--num_epochs=100
--gradient_accumulation_steps=1
--learning_rate=1e-4
--lr_warmup_steps=800
--mixed_precision="fp16"
--push_to_hub
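
A minimal sketch of reproducing a run with these flags, assuming diffusers' `train_unconditional.py` example script and a configured `accelerate` environment (neither is stated on this card; only the flag values above come from it):

```python
# Sketch: launch the training script with the hyperparameters listed above.
# "train_unconditional.py" and the use of `accelerate launch` are assumptions.
import subprocess

subprocess.run(
    [
        "accelerate", "launch", "train_unconditional.py",
        "--checkpointing_steps=1000",
        "--dataset_name=huggan/few-shot-pokemon",
        "--resolution=128",
        "--output_dir=ddpm-ema-pokemon-128",
        "--train_batch_size=16",
        "--eval_batch_size=16",
        "--num_epochs=100",
        "--gradient_accumulation_steps=1",
        "--learning_rate=1e-4",
        "--lr_warmup_steps=800",
        "--mixed_precision=fp16",
        "--push_to_hub",
    ],
    check=True,  # raise if the training process exits with an error
)
```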

Training results

📈 TensorBoard logs
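
A minimal sketch of sampling from the trained checkpoint with diffusers' `DDPMPipeline` (the repository id is the one referenced on this card; the choice of pipeline class and the output handling are assumptions):

```python
# Sketch: generate samples from the published checkpoint.
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("miluELK/ddpm-ema-pokemonv2-128")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# Returns a list of PIL images; batch size and step count are illustrative.
images = pipeline(batch_size=4, num_inference_steps=1000).images
images[0].save("pokemon_sample.png")
```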
