---
license: apache-2.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- art
- t2i-adapter
- image-to-image
- stable-diffusion-xl-diffusers
- stable-diffusion-xl
---

# T2I-Adapter-SDXL - Openpose

T2I-Adapter is a network that provides additional conditioning to Stable Diffusion. Each T2I-Adapter checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint.

This checkpoint provides OpenPose conditioning for the Stable Diffusion XL checkpoint. It was a collaboration between **Tencent ARC** and [**Hugging Face**](https://huggingface.co/).

## Model Details
- **Developed by:** Tencent ARC, in collaboration with Hugging Face (paper: *T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models*)
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453)
- **Model complexity:**
| | SD-V1.4/1.5 | SD-XL | T2I-Adapter | T2I-Adapter-SDXL |
| --- | --- | --- | --- | --- |
| Parameters | 860M | 2.6B | 77M | 77/79M |
- **Cite as:**

    @misc{
      title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
      author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie},
      year={2023},
      eprint={2302.08453},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

### Checkpoints

| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with Canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>|
|[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>|
|[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>|
|[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with MiDaS depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>|
|[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>|
|[Adapter/t2iadapter_openpose_sdxlv1](https://huggingface.co/Adapter/t2iadapter_openpose_sdxlv1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>|

## Example

To get started, first install the required dependencies:

```bash
pip install git+https://github.com/huggingface/diffusers.git@t2iadapterxl # for now
pip install -U controlnet_aux==0.0.7 # for conditioning models and detectors
pip install transformers accelerate safetensors
```

1. Images are first converted into the appropriate *control image* format.
2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125).

Let's have a look at a simple example using the [Depth-MiDaS Adapter](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0).

- Setup

```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
from diffusers.utils import load_image
from controlnet_aux.midas import MidasDetector
import torch

# load adapter
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# load euler_a scheduler, the fp16-fixed VAE, and build the pipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

# load the MiDaS depth estimator used to produce the control image
midas_depth = MidasDetector.from_pretrained(
    "valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
).to("cuda")
```

- Condition Image

```py
# download the input image and run MiDaS to obtain the depth control image
url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_mid.jpg"
image = load_image(url)
image = midas_depth(
    image, detect_resolution=512, image_resolution=1024
)
```

<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>

- Generation

```py
prompt = "A photo of a room, 4k photo, highly detailed"
negative_prompt = "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured"

# generate an image guided by the prompt and the depth control image
gen_images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=image,
    num_inference_steps=30,
    adapter_conditioning_scale=1,
    guidance_scale=7.5,
).images[0]
gen_images.save("out_mid.png")
```

<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>
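
- OpenPose variant

Since this card is for the OpenPose checkpoint, the same workflow applies with a pose image as the control. The snippet below is a minimal sketch: it assumes the `Adapter/t2iadapter_openpose_sdxlv1` checkpoint from the table above loads the same way as the depth adapter (no `fp16` variant is requested for it here), reuses the bone image from the table as the control, and uses an illustrative prompt and output filename.

```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
from diffusers.utils import load_image
import torch

# load the OpenPose adapter listed in the checkpoints table
adapter = T2IAdapter.from_pretrained(
    "Adapter/t2iadapter_openpose_sdxlv1", torch_dtype=torch.float16
).to("cuda")

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# the bone image from the checkpoints table; for your own photos you would first
# extract a pose with a detector such as OpenposeDetector from controlnet_aux
pose_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png")

# example prompt (not taken from this card)
gen_image = pipe(
    prompt="A person dancing on the beach, 4k photo, highly detailed",
    negative_prompt="anime, cartoon, graphic, deformed, disfigured",
    image=pose_image,
    num_inference_steps=30,
    adapter_conditioning_scale=1,
    guidance_scale=7.5,
).images[0]
gen_image.save("out_pose.png")
```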

### Training

Our training script is based on the official training script documented [here](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/README_sdxl.md).

The model was trained on 3M high-resolution image-text pairs from LAION-Aesthetics V2 with:

- Training steps: 35000
- Batch size: Data-parallel with a per-GPU batch size of `16`, for a total batch size of `256`.
- Learning rate: Constant learning rate of `1e-5`.
- Mixed precision: fp16
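
If you train your own adapter with that script, the resulting weights can be loaded back into the same pipeline. A minimal sketch, assuming a hypothetical local output directory named `my-t2i-adapter-sdxl`:

```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
import torch

# "my-t2i-adapter-sdxl" is a hypothetical local directory produced by a training run
adapter = T2IAdapter.from_pretrained("my-t2i-adapter-sdxl", torch_dtype=torch.float16).to("cuda")
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
```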