
Zelda Diffusion Model Card

SDZelda is a latent text-to-image diffusion model that generates images of Zelda from The Legend of Zelda. For more information about how Stable Diffusion works, have a look at 🤗's Stable Diffusion blog.

You can use this model with the 🧨 Diffusers library from Hugging Face.

So pretty, right?

Diffusers

from diffusers import StableDiffusionPipeline
import torch

# Load the model in half precision and move the pipeline to the GPU.
pipeline = StableDiffusionPipeline.from_pretrained(
    "nroggendorff/zelda-diffusion",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# Generate a single image from a text prompt and save it to disk.
image = pipeline(prompt="a drawing of a woman in a blue dress and gold crown").images[0]
image.save("zelda.png")
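
The call above uses the pipeline's default sampling settings. Standard Stable Diffusion arguments such as num_inference_steps, guidance_scale, and a seeded generator can be passed as well; the values below are illustrative, not settings recommended by this card.

generator = torch.Generator("cuda").manual_seed(42)
image = pipeline(
    prompt="a drawing of a woman in a blue dress and gold crown",
    num_inference_steps=50,   # number of denoising steps (pipeline default)
    guidance_scale=7.5,       # classifier-free guidance strength (pipeline default)
    generator=generator,      # fixed seed for reproducible outputs
).images[0]
image.save("zelda-seeded.png")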

Model Details

  • train_batch_size: 1
  • gradient_accumulation_steps: 4
  • learning_rate: 1e-2
  • lr_warmup_steps: 500
  • mixed_precision: "fp16"
  • eval_metric: "mean_squared_error"
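
To show what these hyperparameters control, here is a minimal sketch of the standard latent-diffusion training step: the UNet predicts the noise added to the latents and is scored with mean squared error, gradients are accumulated over 4 steps, and the optimizer uses the listed learning rate (the warmup schedule is omitted). The dummy latents and text embeddings, and the assumption of an SD v1-style 77×768 text encoder output, are illustrative only and not taken from this model's actual training run.

import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DConditionModel

# Load the UNet from this repository in full precision for training.
unet = UNet2DConditionModel.from_pretrained(
    "nroggendorff/zelda-diffusion", subfolder="unet", torch_dtype=torch.float32
)
noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-2)  # learning_rate

gradient_accumulation_steps = 4
latents = torch.randn(1, 4, 64, 64)              # dummy VAE latents (batch size 1)
encoder_hidden_states = torch.randn(1, 77, 768)  # dummy CLIP text embeddings

for step in range(gradient_accumulation_steps):
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,))
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # The UNet predicts the added noise; training minimizes the MSE between
    # prediction and true noise (the eval_metric listed above).
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    loss = F.mse_loss(noise_pred, noise) / gradient_accumulation_steps
    loss.backward()

# Step the optimizer once every gradient_accumulation_steps micro-batches.
optimizer.step()
optimizer.zero_grad()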

Limitations

  • The model does not achieve perfect photorealism
  • The model cannot render legible text
  • The model was trained on a small-scale dataset: nroggendorff/zelda

Developed by

  • Noa Linden Roggendorff

This model card was written by Noa Roggendorff and is based on the Stable Diffusion v1-5 Model Card.
