|
--- |
|
license: apache-2.0 |
|
base_model: CompVis/stable-diffusion-v1-4
|
tags: |
|
- art |
|
- t2i-adapter |
|
- controlnet |
|
- stable-diffusion |
|
- image-to-image |
|
--- |
|
|
|
# T2I-Adapter - Segmentation
|
|
|
T2I-Adapter is a lightweight network that provides additional conditioning to Stable Diffusion. Each T2I-Adapter checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint.
|
|
|
This checkpoint provides conditioning on semantic segmentation maps for the Stable Diffusion 1.4 checkpoint.
|
|
|
## Model Details |
|
- **Developed by:** TencentARC (see the paper *T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models*)
|
- **Model type:** Diffusion-based text-to-image generation model |
|
- **Language(s):** English |
|
- **License:** Apache 2.0 |
|
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). |
|
- **Cite as:** |
|
|
|
    @misc{mou2023t2iadapter,
      title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
      author={Chong Mou and Xintao Wang and Liangbin Xie and Yanze Wu and Jian Zhang and Zhongang Qi and Ying Shan and Xiaohu Qie},
      year={2023},
      eprint={2302.08453},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }
|
|
|
### Checkpoints |
|
|
|
| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|
|---|---|---|---| |
|
|[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>|
|
|[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |
|
|[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| |
|
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |
|
|[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>|
|
|[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |
|
|[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a>|
|
|[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)||||
|[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)||||
|[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)||||
|[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)||||
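
The checkpoints in the last four rows target the Stable Diffusion 1.5 base model, as their `sd15` suffix indicates. A minimal sketch of pairing one of them with that base; the pipeline API is the same as in the example below, and the choice of the canny adapter here is illustrative:

```python
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

# The "sd15v2" suffix marks adapters trained against Stable Diffusion 1.5,
# so they are paired with the 1.5 base checkpoint rather than 1.4.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_canny_sd15v2", torch_dtype=torch.float16
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
)
pipe.to("cuda")
```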
|
|
|
## Example |
|
|
|
1. Install the dependencies:
|
|
|
```sh |
|
pip install torch diffusers transformers
|
``` |
|
|
|
2. Run the code:
|
|
|
```python |
|
import torch |
|
from PIL import Image |
|
import numpy as np |
|
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation |
|
|
|
from diffusers import ( |
|
T2IAdapter, |
|
StableDiffusionAdapterPipeline |
|
) |
|
|
|
# Color palette mapping each semantic class label to an RGB color
ada_palette = np.asarray([
|
[0, 0, 0], |
|
[120, 120, 120], |
|
[180, 120, 120], |
|
[6, 230, 230], |
|
[80, 50, 50], |
|
[4, 200, 3], |
|
[120, 120, 80], |
|
[140, 140, 140], |
|
[204, 5, 255], |
|
[230, 230, 230], |
|
[4, 250, 7], |
|
[224, 5, 255], |
|
[235, 255, 7], |
|
[150, 5, 61], |
|
[120, 120, 70], |
|
[8, 255, 51], |
|
[255, 6, 82], |
|
[143, 255, 140], |
|
[204, 255, 4], |
|
[255, 51, 7], |
|
[204, 70, 3], |
|
[0, 102, 200], |
|
[61, 230, 250], |
|
[255, 6, 51], |
|
[11, 102, 255], |
|
[255, 7, 71], |
|
[255, 9, 224], |
|
[9, 7, 230], |
|
[220, 220, 220], |
|
[255, 9, 92], |
|
[112, 9, 255], |
|
[8, 255, 214], |
|
[7, 255, 224], |
|
[255, 184, 6], |
|
[10, 255, 71], |
|
[255, 41, 10], |
|
[7, 255, 255], |
|
[224, 255, 8], |
|
[102, 8, 255], |
|
[255, 61, 6], |
|
[255, 194, 7], |
|
[255, 122, 8], |
|
[0, 255, 20], |
|
[255, 8, 41], |
|
[255, 5, 153], |
|
[6, 51, 255], |
|
[235, 12, 255], |
|
[160, 150, 20], |
|
[0, 163, 255], |
|
[140, 140, 140], |
|
[250, 10, 15], |
|
[20, 255, 0], |
|
[31, 255, 0], |
|
[255, 31, 0], |
|
[255, 224, 0], |
|
[153, 255, 0], |
|
[0, 0, 255], |
|
[255, 71, 0], |
|
[0, 235, 255], |
|
[0, 173, 255], |
|
[31, 0, 255], |
|
[11, 200, 200], |
|
[255, 82, 0], |
|
[0, 255, 245], |
|
[0, 61, 255], |
|
[0, 255, 112], |
|
[0, 255, 133], |
|
[255, 0, 0], |
|
[255, 163, 0], |
|
[255, 102, 0], |
|
[194, 255, 0], |
|
[0, 143, 255], |
|
[51, 255, 0], |
|
[0, 82, 255], |
|
[0, 255, 41], |
|
[0, 255, 173], |
|
[10, 0, 255], |
|
[173, 255, 0], |
|
[0, 255, 153], |
|
[255, 92, 0], |
|
[255, 0, 255], |
|
[255, 0, 245], |
|
[255, 0, 102], |
|
[255, 173, 0], |
|
[255, 0, 20], |
|
[255, 184, 184], |
|
[0, 31, 255], |
|
[0, 255, 61], |
|
[0, 71, 255], |
|
[255, 0, 204], |
|
[0, 255, 194], |
|
[0, 255, 82], |
|
[0, 10, 255], |
|
[0, 112, 255], |
|
[51, 0, 255], |
|
[0, 194, 255], |
|
[0, 122, 255], |
|
[0, 255, 163], |
|
[255, 153, 0], |
|
[0, 255, 10], |
|
[255, 112, 0], |
|
[143, 255, 0], |
|
[82, 0, 255], |
|
[163, 255, 0], |
|
[255, 235, 0], |
|
[8, 184, 170], |
|
[133, 0, 255], |
|
[0, 255, 92], |
|
[184, 0, 255], |
|
[255, 0, 31], |
|
[0, 184, 255], |
|
[0, 214, 255], |
|
[255, 0, 112], |
|
[92, 255, 0], |
|
[0, 224, 255], |
|
[112, 224, 255], |
|
[70, 184, 160], |
|
[163, 0, 255], |
|
[153, 0, 255], |
|
[71, 255, 0], |
|
[255, 0, 163], |
|
[255, 204, 0], |
|
[255, 0, 143], |
|
[0, 255, 235], |
|
[133, 255, 0], |
|
[255, 0, 235], |
|
[245, 0, 255], |
|
[255, 0, 122], |
|
[255, 245, 0], |
|
[10, 190, 212], |
|
[214, 255, 0], |
|
[0, 204, 255], |
|
[20, 0, 255], |
|
[255, 255, 0], |
|
[0, 153, 255], |
|
[0, 41, 255], |
|
[0, 255, 204], |
|
[41, 0, 255], |
|
[41, 255, 0], |
|
[173, 0, 255], |
|
[0, 245, 255], |
|
[71, 0, 255], |
|
[122, 0, 255], |
|
[0, 255, 184], |
|
[0, 92, 255], |
|
[184, 255, 0], |
|
[0, 133, 255], |
|
[255, 214, 0], |
|
[25, 194, 194], |
|
[102, 255, 0], |
|
[92, 0, 255], |
|
]) |
|
|
|
|
|
# Load the semantic segmentation model (UperNet with a ConvNeXt backbone, trained on ADE20K)
image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
|
image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small") |
|
|
|
|
|
|
# Load the input photograph to be segmented
image = Image.open('./images/seg_input.jpeg')
|
|
|
# Preprocess the image and run the segmentor
pixel_values = image_processor(image, return_tensors="pt").pixel_values
|
with torch.no_grad(): |
|
outputs = image_segmentor(pixel_values) |
|
|
|
# Recover a per-pixel label map at the input resolution (PIL size is (width, height), hence the reversal)
seg = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
|
|
|
color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) # height, width, 3 |
|
|
|
# Paint each predicted class label with its palette color
for label, color in enumerate(ada_palette):
|
color_seg[seg == label, :] = color |
|
|
|
color_seg = color_seg.astype(np.uint8) |
|
control_image = Image.fromarray(color_seg) |
|
|
|
control_image.save("./images/segment_image.png") |
|
|
|
# Load the segmentation adapter in fp16 and pair it with the Stable Diffusion 1.4 base model
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_seg_sd14v1", torch_dtype=torch.float16)
|
pipe = StableDiffusionAdapterPipeline.from_pretrained( |
|
"CompVis/stable-diffusion-v1-4", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" |
|
) |
|
|
|
pipe.to('cuda') |
|
|
|
# Fix the random seed for reproducible outputs
generator = torch.Generator().manual_seed(0)
|
|
|
seg_image_out = pipe(prompt="motorcycles driving", image=control_image, generator=generator).images[0]
|
|
|
seg_image_out.save('./images/seg_image_out.png')
|
``` |
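
The pipeline call also accepts an `adapter_conditioning_scale` argument (default `1.0`) that weights how strongly the adapter features steer generation. A short sketch reusing the pipeline above; `0.8` is an illustrative value, not a tuned one:

```python
# Lower adapter_conditioning_scale to follow the segmentation map less strictly;
# the default of 1.0 adheres to it most closely.
loose_image_out = pipe(
    prompt="motorcycles driving",
    image=control_image,
    adapter_conditioning_scale=0.8,
    generator=generator,
).images[0]
```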
|
|
|
![seg_input](./images/seg_input.jpeg) |
|
![segment_image](./images/segment_image.png) |
|
![seg_image_out](./images/seg_image_out.png) |