Update README.md
README.md

---
license: apache-2.0
---

# IterComp

Official repository of the paper *[IterComp](https://arxiv.org)*.

<img src="./itercomp.png" style="zoom:50%;" />

## News🔥🔥🔥

* Oct. 9, 2024: Our checkpoints are publicly available on the [HuggingFace Repo](https://huggingface.co/comin/IterComp).

## Introduction

IterComp is a new state-of-the-art compositional generation method. In this repository, we release the model trained from [SDXL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).

## Text-to-Image Usage

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("comin/IterComp", torch_dtype=torch.float16, use_safetensors=True)
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()

prompt = "An astronaut riding a green horse"
image = pipe(prompt=prompt).images[0]
image.save("output.png")
```
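
If GPU memory is tight, the standard diffusers offloading helper can be used in place of `pipe.to("cuda")`; this is a general diffusers note rather than anything specific to IterComp, continuing from the `pipe` object above:

```python
# Optional, for smaller GPUs: offload submodules to CPU between forward passes.
# Use this instead of pipe.to("cuda"); requires the `accelerate` package.
pipe.enable_model_cpu_offload()
```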

IterComp can **serve as a powerful backbone for various compositional generation methods**, such as [RPG](https://github.com/YangLing0818/RPG-DiffusionMaster) and [Omost](https://github.com/lllyasviel/Omost). We recommend integrating IterComp into these approaches to achieve more advanced compositional generation results; a minimal sketch of this drop-in use follows below.

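The snippet below is our own illustrative sketch, not part of the RPG or Omost codebases; the `base_model_id` variable, prompt, and sampler settings are assumptions. The point is simply that any diffusers-based framework that loads an SDXL backbone with `from_pretrained` can typically be pointed at `comin/IterComp` instead of SDXL Base 1.0:

```python
# Illustrative sketch (not an RPG/Omost API): IterComp is distributed in SDXL format,
# so it can stand in wherever a framework expects an SDXL Base 1.0 checkpoint.
import torch
from diffusers import StableDiffusionXLPipeline

# Hypothetical variable name; the key step is just swapping the backbone id.
base_model_id = "comin/IterComp"  # instead of "stabilityai/stable-diffusion-xl-base-1.0"

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# A composition-heavy prompt of the kind RPG/Omost target (illustrative).
prompt = "a red cube on top of a blue sphere, beside a yellow cone on a wooden table"
image = pipe(prompt=prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("itercomp_backbone_demo.png")
```

For RPG or Omost themselves, replace whatever reference they make to the SDXL Base 1.0 checkpoint with `comin/IterComp`, following each project's own configuration instructions.
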
## Citation

```
@article{zhang2024itercomp,
  title={IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation},
  author={Zhang, Xinchen and Yang, Ling and Li, Guohao and Cai, Yaqi and Xie, Jiake and Tang, Yong and Yang, Yujiu and Wang, Mengdi and Cui, Bin},
  journal={arXiv preprint arXiv:},
  year={2024}
}
```