Commit 2959e4d
Parent(s): c7f4129
model card (#1)
- model card (f92555a1721fa5725f85f03a9076214b8ebf2284)
- metadata fix (3037b78987e9128f1df765a155319dee21dd296c)
Co-authored-by: Will Berman <williamberman@users.noreply.huggingface.co>
- README.md +143 -1
- images/zoedepth.png +0 -0
- images/zoedepth_in.png +0 -0
- images/zoedepth_out.png +0 -0
README.md CHANGED
@@ -1,3 +1,145 @@
---
license: apache-2.0
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- t2i-adapter
- controlnet
- stable-diffusion
- image-to-image
---

# T2I Adapter - Zoedepth

T2I-Adapter is a network that provides additional conditioning to Stable Diffusion. Each T2I-Adapter checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint.

This checkpoint provides conditioning on [ZoeDepth](https://github.com/isl-org/ZoeDepth) depth estimation for the Stable Diffusion 1.5 checkpoint.
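
In practice, the adapter and its base model are loaded as a pair; a minimal sketch, condensed from the full example later in this card:

```python
import torch
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline

# Load this zoedepth adapter together with its Stable Diffusion 1.5 base model.
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_zoedepth_sd15v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
)
```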

## Model Details
- **Developed by:** TencentARC (the authors of [T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.08453))
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453).
- **Cite as:**

      @misc{mou2023t2iadapter,
        title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
        author={Chong Mou and Xintao Wang and Liangbin Xie and Yanze Wu and Jian Zhang and Zhongang Qi and Ying Shan and Xiaohu Qie},
        year={2023},
        eprint={2302.08453},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
      }

### Checkpoints

| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>|
|[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>|
|[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>|
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>|
|[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | An [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>|
|[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> |
|[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)||||
|[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)||||
|[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)||||
|[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)||||
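
Any checkpoint in the table loads the same way; only the repository name and the matching base model change. A sketch, assuming the `*_sd14v1` adapters pair with Stable Diffusion v1.4 (`CompVis/stable-diffusion-v1-4`) and the `*_sd15*` adapters with v1.5:

```python
import torch
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline

# Swap in any adapter name from the table above; pair it with
# the base checkpoint its name suffix indicates (sd14 vs sd15).
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
)
```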

## Example

1. Dependencies

```sh
pip install diffusers transformers matplotlib
# assumption: the ZoeDepth torch.hub model used below also requires timm
pip install timm
```

2. Run code:

```python
from PIL import Image
import torch
import numpy as np
import matplotlib.cm
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline

def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None):
    """Converts a depth map to a color image.

    Args:
        value (torch.Tensor, numpy.ndarray): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed
        vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None.
        vmax (float, optional): vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None.
        cmap (str, optional): matplotlib colormap to use. Defaults to 'gray_r'.
        invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99.
        invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None.
        background_color (tuple[int], optional): 4-tuple RGBA color to give to invalid pixels. Defaults to (128, 128, 128, 255).
        gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False.
        value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None.

    Returns:
        numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4)
    """
    if isinstance(value, torch.Tensor):
        value = value.detach().cpu().numpy()

    value = value.squeeze()
    if invalid_mask is None:
        invalid_mask = value == invalid_val
    mask = np.logical_not(invalid_mask)

    # normalize to the 2nd..85th percentile of the valid depths
    vmin = np.percentile(value[mask], 2) if vmin is None else vmin
    vmax = np.percentile(value[mask], 85) if vmax is None else vmax
    if vmin != vmax:
        value = (value - vmin) / (vmax - vmin)  # vmin..vmax
    else:
        # Avoid 0-division
        value = value * 0.

    # grey out the invalid values
    value[invalid_mask] = np.nan
    cmapper = matplotlib.cm.get_cmap(cmap)  # on matplotlib >= 3.9, use matplotlib.colormaps[cmap]
    if value_transform:
        value = value_transform(value)
    value = cmapper(value, bytes=True)  # (n x m x 4)

    img = value
    img[invalid_mask] = background_color

    if gamma_corrected:
        img = img / 255
        img = np.power(img, 2.2)
        img = img * 255
        img = img.astype(np.uint8)
    return img

# Estimate metric depth with ZoeDepth (NYU-trained variant) via torch.hub.
model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True)

img = Image.open('./images/zoedepth_in.png')

out = model.infer_pil(img)

# Colorize the depth map and convert to RGB for the adapter.
zoedepth_image = Image.fromarray(colorize(out)).convert('RGB')

zoedepth_image.save('images/zoedepth.png')

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_zoedepth_sd15v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16"
)

pipe.to('cuda')
zoedepth_image_out = pipe(prompt="motorcycle", image=zoedepth_image).images[0]

zoedepth_image_out.save('images/zoedepth_out.png')
```
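
The pipeline call also accepts the usual diffusers generation arguments; a hedged sketch of a seeded, reproducible run (the seed and step count here are illustrative, not from the original card):

```python
# Illustrative seeded run; `generator` and `num_inference_steps` are standard
# diffusers pipeline arguments, and the specific values are arbitrary.
generator = torch.Generator("cuda").manual_seed(0)
zoedepth_image_out = pipe(
    prompt="motorcycle",
    image=zoedepth_image,
    num_inference_steps=50,
    generator=generator,
).images[0]
```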

![zoedepth_in](./images/zoedepth_in.png)
![zoedepth](./images/zoedepth.png)
![zoedepth_out](./images/zoedepth_out.png)
images/zoedepth.png ADDED
images/zoedepth_in.png ADDED
images/zoedepth_out.png ADDED