---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://huggingface.co/Norod78/sd-simpsons-model/raw/main/examples/00496-2202810362-A%20beautiful%20hungry%20demon%20girl,%20John%20Philip%20Falter,%20Very%20detailed%20painting,%20Mark%20Ryden.jpg"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
datasets:
- Norod78/simpsons-blip-captions
inference: true
---

# Simpsons diffusion

*Stable Diffusion fine-tuned on images related to "The Simpsons".*

If you want more details on how to generate your own BLIP-captioned dataset, see this [colab](https://colab.research.google.com/gist/Norod/ee6ee3c4bf11c2d2be531d728ec30824/buildimagedatasetwithblipcaptionsanduploadtohf.ipynb).

Training was done using a slightly modified version of Hugging Face's text-to-image training [example script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py).

## About

Enter a text prompt to generate cartoony/Simpsons-style images.

**A beautiful hungry demon girl, John Philip Falter, Very detailed painting, Mark Ryden**

![A beautiful hungry demon girl, John Philip Falter, Very detailed painting, Mark Ryden](https://huggingface.co/Norod78/sd-simpsons-model/raw/main/examples/00496-2202810362-A%20beautiful%20hungry%20demon%20girl,%20John%20Philip%20Falter,%20Very%20detailed%20painting,%20Mark%20Ryden.jpg)

**Gal Gadot, cartoon**

![Gal Gadot, cartoon](https://huggingface.co/Norod78/sd-simpsons-model/raw/main/examples/00323-2574793241-Gal%20Gadot,%20cartoon.jpg)

## More examples

The [examples](https://huggingface.co/Norod78/sd-simpsons-model/tree/main/examples) folder contains a few images generated with this model's ckpt file using [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), so their EXIF metadata contains the parameters used to generate them.

## Sample code

```py
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
import torch

# Substitute the default PNDM scheduler with K-LMS
lms = LMSDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear"
)

guidance_scale = 9
seed = 7777
steps = 100

model_id = "Norod78/sd-simpsons-model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=lms, torch_dtype=torch.float16)
pipe.to("cuda")

# Seed the generator for reproducible results
generator = torch.Generator("cuda").manual_seed(seed)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(
    prompt,
    guidance_scale=guidance_scale,
    num_inference_steps=steps,
    generator=generator,
).images[0]
image.save("astronaut_rides_horse.png")
```

## Dataset and Training

Fine-tuned for 10,000 iterations on top of [Runway ML's Stable-Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) using [BLIP-captioned Simpsons images](https://huggingface.co/datasets/Norod78/simpsons-blip-captions), trained on a single A5000 GPU in my home desktop computer.

Trained by [@Norod78](https://twitter.com/Norod78)
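
To get a feel for the training data, the dataset can be loaded directly with the `datasets` library. This is a minimal sketch, assuming the dataset exposes the `image` and `text` columns expected by the diffusers text-to-image example script:

```py
from datasets import load_dataset

# Load the BLIP-captioned Simpsons dataset used for fine-tuning
dataset = load_dataset("Norod78/simpsons-blip-captions", split="train")

print(dataset)  # number of rows and column names
sample = dataset[0]
print(sample["text"])               # BLIP caption (assumes the caption column is named "text")
sample["image"].save("sample.png")  # PIL image (assumes the image column is named "image")
```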