
Stable Diffusion fine-tuned on Pokémon by Lambda Labs.

Put in a text prompt and generate your own Pokémon character, no "prompt engineering" required!

If you want to find out how to train your own Stable Diffusion variants, see this example from Lambda Labs.


Example outputs for the prompts: "Girl with a pearl earring", "Cute Obama creature", "Donald Trump", "Boris Johnson", "Totoro", "Hello Kitty".


Make sure you have set up the Stable Diffusion repo and downloaded `ema-only-epoch=000142.ckpt`:

```bash
python scripts/txt2img.py \
    --prompt 'robotic cat with wings' \
    --outdir 'outputs/generated_pokemon' \
    --H 512 --W 512 \
    --n_samples 4 \
    --config 'configs/stable-diffusion/pokemon.yaml' \
    --ckpt ema-only-epoch=000142.ckpt
```

You can also use the normal Stable Diffusion inference config.
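If you want to generate samples for several prompts in one go, the CLI invocation above can be built programmatically and handed to `subprocess.run`. This is a minimal sketch: the `txt2img_command` helper is hypothetical, and the checkpoint, config path, and flags are simply copied from the command shown above.

```python
import shlex

def txt2img_command(prompt, outdir="outputs/generated_pokemon", n_samples=4):
    """Build the scripts/txt2img.py argument list for one prompt.

    Mirrors the example command in this card; checkpoint and config
    names are assumptions taken from the text above.
    """
    return [
        "python", "scripts/txt2img.py",
        "--prompt", prompt,
        "--outdir", outdir,
        "--H", "512", "--W", "512",
        "--n_samples", str(n_samples),
        "--config", "configs/stable-diffusion/pokemon.yaml",
        "--ckpt", "ema-only-epoch=000142.ckpt",
    ]

# Sweep a few prompts; each list can be passed to subprocess.run(cmd)
# from the root of the Stable Diffusion repo.
for prompt in ["robotic cat with wings", "a dragon made of water"]:
    print(shlex.join(txt2img_command(prompt)))
```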

Model description

Trained on BLIP-captioned Pokémon images using 2x A6000 GPUs on Lambda GPU Cloud for around 15,000 steps (about 6 hours, at a cost of about $10).
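As a rough sanity check on those figures, a back-of-the-envelope calculation (the step count, wall time, and cost are taken directly from the paragraph above; nothing else is measured):

```python
# Training figures reported above
steps = 15_000
hours = 6
cost_usd = 10.0

# Implied throughput and unit cost across the 2x A6000 setup
steps_per_second = steps / (hours * 3600)    # ~0.69 optimizer steps/s
usd_per_1k_steps = cost_usd / steps * 1000   # ~$0.67 per 1,000 steps

print(f"{steps_per_second:.2f} steps/s, ${usd_per_1k_steps:.2f} per 1k steps")
```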


Trained by Justin Pinkney (@Buntworthy) at Lambda Labs.
