---
license: creativeml-openrail-m
---
**Spider-Verse Diffusion**
This is a fine-tuned Stable Diffusion model trained on movie stills from Sony's *Into the Spider-Verse*.
Use the tokens **_spiderverse style_** in your prompts for the effect.
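The trigger token simply needs to appear somewhere in the prompt. As a small illustration, appending it can be automated (the `add_style` helper below is hypothetical, for illustration only, and not part of diffusers):

```python
STYLE_TOKEN = "spiderverse style"

def add_style(prompt: str) -> str:
    """Append the trigger token to a prompt unless it is already present.
    (Hypothetical helper for illustration; not part of diffusers.)"""
    if STYLE_TOKEN in prompt:
        return prompt
    return f"{prompt}, {STYLE_TOKEN}"

print(add_style("a magical princess with golden hair"))
# → a magical princess with golden hair, spiderverse style
```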
If you enjoy this model, please check out my other models on [Hugging Face](https://huggingface.co/nitrosocke).
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), run it on Apple Silicon via [MPS](https://huggingface.co/docs/diffusers/optimization/mps), or use FLAX/JAX.
```python
# !pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch

# Load the fine-tuned weights in half precision and move them to the GPU.
model_id = "nitrosocke/spider-verse-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Include the "spiderverse style" trigger token in the prompt.
prompt = "a magical princess with golden hair, spiderverse style"
image = pipe(prompt).images[0]
image.save("./magical_princess.png")
```
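Generations are stochastic by default. To reproduce an image, you can pass a seeded `torch.Generator` through the pipeline's `generator` argument, e.g. `pipe(prompt, generator=generator)`. A minimal sketch of the underlying determinism, using random latents in place of an actual pipeline call:

```python
import torch

def seeded_latents(seed: int, device: str = "cpu") -> torch.Tensor:
    # The same seed produces the same initial latent noise, which is
    # what makes seeded pipeline runs reproducible.
    generator = torch.Generator(device).manual_seed(seed)
    return torch.randn((1, 4, 64, 64), generator=generator, device=device)

# Identical seeds give identical latents; different seeds do not.
assert torch.equal(seeded_latents(42), seeded_latents(42))
assert not torch.equal(seeded_latents(42), seeded_latents(43))
```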
**Portraits rendered with the model:**
![Portrait Samples](https://huggingface.co/nitrosocke/spider-verse-diffusion/resolve/main/spiderverse-portraits-small.jpg)
**Sample images used for training:**
![Training Samples](https://huggingface.co/nitrosocke/spider-verse-diffusion/resolve/main/spiderverse-training-small.jpg)
This model was trained with the diffusers-based DreamBooth training script, using prior-preservation loss, for 3,000 steps.