ChongMou committed on
Commit c038923
1 Parent(s): fb6c2da

Update README.md

Files changed (1)
  1. README.md +9 -214
README.md CHANGED
@@ -1,214 +1,9 @@
- <p align="center">
- <img src="assets/logo2.png" height=65>
- </p>
-
- <div align="center">
-
- ⏬[**Download Models**](#-download-models) **|** 💻[**How to Test**](#-how-to-test)
-
- </div>
-
- Official implementation of T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models.
-
- #### [Paper](https://arxiv.org/abs/2302.08453)
-
- <p align="center">
- <img src="assets/overview1.png" height=250>
- </p>
-
- We propose T2I-Adapter, a **simple and small (~70M parameters, ~300M storage space)** network that can provide extra guidance to pre-trained text-to-image models while **freezing** the original large text-to-image models.
-
- T2I-Adapter aligns the internal knowledge of T2I models with external control signals. We can train various adapters for different conditions and achieve rich control and editing effects.
-
- <p align="center">
- <img src="assets/teaser.png" height=500>
- </p>
-
28
- ### ⏬ Download Models
-
- Put the downloaded models in the `T2I-Adapter/models` folder.
-
- 1. The **T2I-Adapters** can be downloaded from <https://huggingface.co/TencentARC/T2I-Adapter>.
- 2. The pretrained **Stable Diffusion v1.4** model can be downloaded from <https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/tree/main>. You need to download the `sd-v1-4.ckpt` file.
- 3. [Optional] If you want to use the **Anything v4.0** model, you can download the pretrained model from <https://huggingface.co/andite/anything-v4.0/tree/main>. You need to download the `anything-v4.0-pruned.ckpt` file.
- 4. The pretrained **clip-vit-large-patch14** folder can be downloaded from <https://huggingface.co/openai/clip-vit-large-patch14/tree/main>. Remember to download the whole folder!
- 5. The pretrained keypose detection models include FasterRCNN (human detection) from <https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth> and HRNet (pose detection) from <https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth>. A command-line download sketch follows the list.
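-
- The following is a minimal command-line sketch for fetching these files (assuming `wget` and `git-lfs` are available; the adapter checkpoint names vary, so pick the files you need from the pages above):
-
- ```bash
- mkdir -p models
- # 2. Stable Diffusion v1.4 checkpoint
- wget -P models https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
- # 4. CLIP text encoder -- clone the whole folder
- git lfs install
- git clone https://huggingface.co/openai/clip-vit-large-patch14 models/clip-vit-large-patch14
- # 5. Keypose detection models (human detection + pose estimation)
- wget -P models https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
- wget -P models https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth
- ```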
38
-
- After downloading, the folder structure should look like this:
-
- <p align="center">
- <img src="assets/downloaded_models.png" height=100>
- </p>
-
- ### 🔧 Dependencies and Installation
-
- - Python >= 3.6 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- - [PyTorch >= 1.4](https://pytorch.org/)
- ```bash
- pip install -r requirements.txt
- ```
- - If you want the full functionality of keypose-guided generation, you need to install MMPose; for details, refer to <https://github.com/open-mmlab/mmpose>.
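-
- One possible MMPose installation route is via OpenMMLab's `mim` tool (a sketch only; check the MMPose documentation for versions compatible with your PyTorch build):
-
- ```bash
- # Install the OpenMMLab package manager, then the MMPose stack
- pip install -U openmim
- mim install mmcv-full
- pip install mmdet mmpose
- ```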
53
-
- ### 💻 How to Test
-
- - Results are saved in the `experiments` folder.
- - To use `Anything v4.0`, add `--ckpt models/anything-v4.0-pruned.ckpt` to the commands below.
-
- #### **For Simple Experience**
-
- > python app.py
62
-
- #### **Sketch Adapter**
-
- - Sketch to Image Generation
-
- > python test_sketch.py --plms --auto_resume --prompt "A car with flying wings" --path_cond examples/sketch/car.png --ckpt models/sd-v1-4.ckpt --type_in sketch
-
- - Image to Image Generation
-
- > python test_sketch.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/anything_sketch/human.png --ckpt models/sd-v1-4.ckpt --type_in image
-
- - Generation with **Anything** setting
-
- > python test_sketch.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/anything_sketch/human.png --ckpt models/anything-v4.0-pruned.ckpt --type_in image
-
- ##### Gradio Demo
- <p align="center">
- <img src="assets/gradio_sketch.png">
- </p>
- You can use Gradio to try all three functions at once. CPU is also supported by setting the device to `'cpu'`.
-
- ```bash
- python gradio_sketch.py
- ```
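-
- If no GPU is available, one generic way to force CPU execution is to hide the GPUs from PyTorch (a sketch; this assumes the script falls back to CPU when no GPU is visible -- otherwise set the device to `'cpu'` in the script, as noted above):
-
- ```bash
- # An empty CUDA_VISIBLE_DEVICES makes torch.cuda.is_available() return False
- CUDA_VISIBLE_DEVICES="" python gradio_sketch.py
- ```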
86
-
- #### **Keypose Adapter**
-
- - Keypose to Image Generation
-
- > python test_keypose.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/keypose/iron.png --type_in pose
-
- - Image to Image Generation
-
- > python test_keypose.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/sketch/human.png --type_in image
-
- - Generation with **Anything** setting
-
- > python test_keypose.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/sketch/human.png --ckpt models/anything-v4.0-pruned.ckpt --type_in image
-
- ##### Gradio Demo
- <p align="center">
- <img src="assets/gradio_keypose.png">
- </p>
- You can use Gradio to try all three functions at once. CPU is also supported by setting the device to `'cpu'`.
-
- ```bash
- python gradio_keypose.py
- ```
110
-
- #### **Segmentation Adapter**
-
- > python test_seg.py --plms --auto_resume --prompt "A black Honda motorcycle parked in front of a garage" --path_cond examples/seg/motor.png
-
- #### **Two Adapters: Segmentation and Sketch**
-
- > python test_seg_sketch.py --plms --auto_resume --prompt "An all white kitchen with an electric stovetop" --path_cond examples/seg_sketch/mask.png --path_cond2 examples/seg_sketch/edge.png
-
- #### **Local editing with adapters**
-
- > python test_sketch_edit.py --plms --auto_resume --prompt "A white cat" --path_cond examples/edit_cat/edge_2.png --path_x0 examples/edit_cat/im.png --path_mask examples/edit_cat/mask.png
-
- ## Stable Diffusion + T2I-Adapters (only ~70M parameters, ~300M storage space)
-
- The following is the detailed structure of a **Stable Diffusion** model with the **T2I-Adapter**.
- <p align="center">
- <img src="assets/overview2.png" height=300>
- </p>
129
-
- <!-- ## Web Demo
-
- * The usage of all three T2I-Adapters (i.e., sketch, keypose, and segmentation) is integrated into [Huggingface Spaces]() 🤗 using [Gradio](). Have fun with the Web Demo. -->
-
- ## 🚀 Interesting Applications
-
- ### Stable Diffusion results guided with the sketch T2I-Adapter
-
- The corresponding edge maps are predicted by PiDiNet. The sketch T2I-Adapter generalizes well to other similar sketch types, for example, sketches from the Internet and user scribbles.
-
- <p align="center">
- <img src="assets/sketch_base.png" height=800>
- </p>
143
-
- ### Stable Diffusion results guided with the keypose T2I-Adapter
-
- The keypose results are predicted by [MMPose](https://github.com/open-mmlab/mmpose).
- With keypose guidance, the keypose T2I-Adapter can also help generate animals with the same keypose, for example, pandas and tigers.
-
- <p align="center">
- <img src="assets/keypose_base.png" height=600>
- </p>
-
- ### T2I-Adapter with Anything-v4.0
-
- Once the T2I-Adapter is trained, it can act as a **plug-and-play module** and can be seamlessly integrated into finetuned diffusion models **without re-training**, for example, Anything-4.0.
-
- #### ✨ Anything results with the plug-and-play sketch T2I-Adapter (no extra training)
-
- <p align="center">
- <img src="assets/sketch_anything.png" height=600>
- </p>
-
- #### ✨ Anything results with the plug-and-play keypose T2I-Adapter (no extra training)
-
- <p align="center">
- <img src="assets/keypose_anything.png" height=600>
- </p>
168
-
- ### Local editing with the sketch adapter
-
- When combined with the inpainting mode of Stable Diffusion, we can realize local editing with user-specific guidance.
-
- #### ✨ Change the head direction of the cat
-
- <p align="center">
- <img src="assets/local_editing_cat.png" height=300>
- </p>
-
- #### ✨ Add rabbit ears to Iron Man's head
-
- <p align="center">
- <img src="assets/local_editing_ironman.png" height=400>
- </p>
184
-
- ### Combine different concepts with the adapter
-
- The adapter can be used to enhance Stable Diffusion's ability to combine different concepts.
-
- #### ✨ A car with flying wings. / A doll in the shape of letter ‘A’.
-
- <p align="center">
- <img src="assets/enhance_SD2.png" height=600>
- </p>
-
- ### Sequential editing with the sketch adapter
-
- We can realize sequential editing with adapter guidance.
-
- <p align="center">
- <img src="assets/sequential_edit.png">
- </p>
-
- ### Composable guidance with multiple adapters
-
- Stable Diffusion results guided with the segmentation and sketch adapters together.
-
- <p align="center">
- <img src="assets/multiple_adapters.png">
- </p>
-
- ![visitors](https://visitor-badge.glitch.me/badge?page_id=TencentARC/T2I-Adapter)
-
- Logo materials: [adapter](https://www.flaticon.com/free-icon/adapter_4777242), [lightbulb](https://www.flaticon.com/free-icon/lightbulb_3176369)
 
+ ---
+ license: openrail
+ title: T2I-Adapter
+ sdk: gradio
+ emoji: 😻
+ colorFrom: pink
+ colorTo: blue
+ pinned: false
+ ---