Update README.md

README.md CHANGED
![image](https://huggingface.co/stabilityai/stable-diffusion-2-depth/resolve/main/depth2image.png)

- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-depth-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-depth/resolve/main/512-depth-ema.ckpt).
- Use it with 🧨 [`diffusers`](#examples)

## Model Details

- **Developed by:** Robin Rombach, Patrick Esser
  pages = {10684-10695}
}

## Examples

Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner:

```bash
pip install -U git+https://github.com/huggingface/transformers.git
pip install -U git+https://github.com/huggingface/diffusers.git accelerate ftfy scipy
```

Running the pipeline (if you don't swap the scheduler it will run with the default DDIM; this example keeps the default):

```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)

prompt = "two tigers"
n_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
```
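The `strength` argument (0.7 above) controls how far the input image is pushed through the denoising schedule. A rough pure-Python sketch of the timestep truncation that img2img-style pipelines in diffusers perform — the helper name below is ours, for illustration only, not part of the API:

```python
def steps_actually_run(num_inference_steps: int, strength: float) -> int:
    """Approximate how many denoising steps an img2img-style pipeline
    runs for a given strength (illustrative helper, not a diffusers API)."""
    # Roughly the first (1 - strength) fraction of the schedule is skipped,
    # so higher strength means more denoising and less of the input image.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start

print(steps_actually_run(50, 0.5))  # half the schedule: 25 steps
```

Lower `strength` preserves more of the input image's layout; `strength=1.0` runs the full schedule.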

**Notes**:
- Although it is not a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance).
- If you have low GPU RAM available, add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` to reduce VRAM usage (at the cost of speed).
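If no `depth_map` is supplied, the pipeline estimates one from the input image; you can also pass your own. Depth conditioning generally expects values scaled to [-1, 1] — a minimal pure-Python sketch of that normalization (the helper name is ours and the real pipeline works on tensors per image, so treat this as illustrative, and check the pipeline docs for the exact preprocessing):

```python
def normalize_depth(depth_values):
    """Scale raw depth readings to the [-1, 1] range that depth
    conditioning typically expects (illustrative helper)."""
    lo, hi = min(depth_values), max(depth_values)
    if hi == lo:  # flat depth map: avoid division by zero
        return [0.0 for _ in depth_values]
    return [2.0 * (d - lo) / (hi - lo) - 1.0 for d in depth_values]

# e.g. three raw depth readings spanning the full range
print(normalize_depth([10.0, 55.0, 100.0]))  # [-1.0, 0.0, 1.0]
```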

# Uses

## Direct Use