---
tags:
- text-to-image
- KOALA
---

<div align="center">
  <img src="https://dl.dropboxusercontent.com/scl/fi/yosvi68jvyarbvymxc4hm/github_logo.png?rlkey=r9ouwcd7cqxjbvio43q9b3djd&dl=1" width="1024px" />
</div>



<div style="display:flex;justify-content: center">
    <a href="https://youngwanlee.github.io/KOALA/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github-pages"></a> &ensp;
    <a href="https://github.com/youngwanLEE/sdxl-koala"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a> &ensp;
    <a href="https://arxiv.org/abs/2312.04005"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv:KOALA&color=red&logo=arxiv"></a> &ensp;
</div>



# KOALA-700M-LLaVA-Caption Model Card

The KOALA-700M and -1B models were originally trained on [LAION-aesthetics-V2 6+](https://laion.ai/blog/laion-aesthetics/), which contains some noisy or very short captions.
We therefore synthesized new captions for LAION-aesthetics-V2 6+ using a large multimodal model, [LLaVA](https://llava-vl.github.io/).
KOALA-700M-LLaVA-Caption and KOALA-1B-LLaVA-Caption are trained on these synthesized caption-image pairs.

## KOALA Model Cards

|Model|link|
|:--|:--|
|koala-700m | https://huggingface.co/etri-vilab/koala-700m|
|koala-700m-llava-cap | https://huggingface.co/etri-vilab/koala-700m-llava-cap|
|koala-1b | https://huggingface.co/etri-vilab/koala-1b|
|koala-1b-llava-cap | https://huggingface.co/etri-vilab/koala-1b-llava-cap|

## Abstract
### TL;DR
> We propose a fast text-to-image model, called KOALA, by compressing SDXL's U-Net and distilling knowledge from SDXL into our model. KOALA-700M can generate a 1024x1024 image in less than 1.5 seconds on an NVIDIA 4090 GPU, more than 2x faster than SDXL. KOALA-700M can serve as a decent alternative between SDM and SDXL in resource-constrained environments.

<details><summary>FULL abstract</summary>
Stable diffusion is the mainstay of the text-to-image (T2I) synthesis in the community due to its generation performance and open-source nature. 
Recently, Stable Diffusion XL (SDXL), the successor of stable diffusion, has received a lot of attention due to its significant performance improvements with a higher resolution of 1024x1024 and a larger model. 
However, its increased computation cost and model size require higher-end hardware (e.g., bigger VRAM GPU) for end-users, incurring higher costs of operation. 
To address this problem, in this work, we propose an efficient latent diffusion model for text-to-image synthesis obtained by distilling the knowledge of SDXL. 
To this end, we first perform an in-depth analysis of the denoising U-Net in SDXL, which is the main bottleneck of the model, and then design a more efficient U-Net based on the analysis. 
Secondly, we explore how to effectively distill the generation capability of SDXL into an efficient U-Net and eventually identify four essential factors, the core of which is that self-attention is the most important part. 
With our efficient U-Net and self-attention-based knowledge distillation strategy, we build our efficient T2I models, called KOALA-1B & -700M, while reducing the model size by up to 54% and 69% relative to the original SDXL model. 
In particular, the KOALA-700M is more than twice as fast as SDXL while still retaining a decent generation quality. 
We hope that due to its balanced speed-performance tradeoff, our KOALA models can serve as a cost-effective alternative to SDXL in resource-constrained environments.
</details>

<br>


The following 1024x1024 samples were generated by KOALA-700M with 25 denoising steps.

<div align="center">
  <img src="https://dl.dropboxusercontent.com/scl/fi/rjsqqgfney7be069y2yr7/teaser.png?rlkey=7lq0m90xpjcoqclzl4tieajpo&dl=1" width="1024px" />
</div>


## Architecture 
There are two types of compressed U-Net, KOALA-1B and KOALA-700M, realized by reducing the number of residual blocks and transformer blocks.

<div align="center">
  <img src="https://dl.dropboxusercontent.com/scl/fi/5ydeywgiyt1d3njw63dpk/arch.png?rlkey=1p6imbjs4lkmfpcxy153i1a2t&dl=1" width="1024px" />
</div>

### U-Net comparison

| U-Net | SDM-v2.0 | SDXL-Base-1.0 | KOALA-1B | KOALA-700M |
|-------|:----------:|:-----------:|:-----------:|:-------------:|
| Param.       | 865M  | 2,567M | 1,161M | 782M  |
| CKPT size    | 3.46GB | 10.3GB | 4.4GB  | 3.0GB |
| Tx blocks    | [1, 1, 1, 1] | [0, 2, 10] | [0, 2, 6] | [0, 2, 5] |
| Mid block    | ✓ | ✓ | ✓ | ✗ |
| Latency      | 1.131s | 3.133s | 1.604s | 1.257s |

- Tx denotes a transformer block, and CKPT denotes the trained checkpoint file. 
- Latency is measured with FP16 precision and 25 denoising steps on an NVIDIA 4090 GPU (24GB).
- SDM-v2.0 uses 768x768 resolution, while the SDXL and KOALA models use 1024x1024 resolution.
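
The parameter counts above can be checked for any checkpoint by loading just the U-Net and counting its weights. A minimal sketch, assuming the repository follows the standard Diffusers layout with a `unet/` subfolder:

```python
import torch
from diffusers import UNet2DConditionModel

# Load only the denoising U-Net (FP16, matching the latency setup above).
unet = UNet2DConditionModel.from_pretrained(
    "etri-vilab/koala-700m-llava-cap", subfolder="unet", torch_dtype=torch.float16
)

# Count parameters; roughly 782M is expected for KOALA-700M per the table.
num_params = sum(p.numel() for p in unet.parameters())
print(f"U-Net parameters: {num_params / 1e6:.0f}M")
```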


## Latency and memory usage comparison on different GPUs

We measure the inference time of SDM-v2.0 at 768x768 resolution and the other models at 1024x1024, using a variety of consumer-grade GPUs: an NVIDIA 3060Ti (8GB), 2080Ti (11GB), and 4090 (24GB). We use 25 denoising steps and FP16/FP32 precision. OOM means Out-of-Memory. Note that SDXL-Base cannot run on the 8GB GPU.


<div align="center">
  <img src="https://dl.dropboxusercontent.com/scl/fi/u1az20y0zfww1l5lhbcyd/latency_gpu.svg?rlkey=vjn3gpkmywmp7jpilar4km7sd&dl=1" width="1024px" />
</div>
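
The timing setup above can be reproduced with a simple CUDA-synchronized timer. The sketch below is illustrative, not the authors' benchmarking script:

```python
import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "etri-vilab/koala-700m-llava-cap", torch_dtype=torch.float16
).to("cuda")

prompt = "A portrait painting of a Golden Retriever like Leonardo da Vinci"

# Warm-up run so one-time CUDA initialization does not skew the timing.
pipe(prompt=prompt, num_inference_steps=25)

torch.cuda.synchronize()
start = time.perf_counter()
pipe(prompt=prompt, num_inference_steps=25)
torch.cuda.synchronize()
print(f"Latency: {time.perf_counter() - start:.3f}s")
```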





## Key Features
- **Efficient U-Net Architecture**: KOALA models use a simplified U-Net architecture that reduces the model size by up to 54% (KOALA-1B) and 69% (KOALA-700M) compared to its predecessor, Stable Diffusion XL (SDXL).
- **Self-Attention-Based Knowledge Distillation**: The core technique in KOALA is the distillation of self-attention features, which proves crucial for maintaining image generation quality (a conceptual sketch follows below).
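
To make the second point concrete, here is one way such a self-attention feature-distillation loss could look. The layer pairing and plain-MSE formulation are illustrative assumptions, not the paper's exact training objective:

```python
import torch
import torch.nn.functional as F

def self_attn_distill_loss(student_feats, teacher_feats):
    """Feature-matching loss over paired self-attention outputs (conceptual).

    Both arguments are lists of tensors captured, e.g., with forward hooks on
    corresponding self-attention layers of the student and teacher U-Nets.
    """
    loss = torch.zeros((), device=student_feats[0].device)
    for s, t in zip(student_feats, teacher_feats):
        loss = loss + F.mse_loss(s, t.detach())  # no gradients to the teacher
    return loss / len(student_feats)
```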
  


## Model Description

- Developed by [ETRI Visual Intelligence Lab](https://huggingface.co/etri-vilab)
- Developer: [Youngwan Lee](https://youngwanlee.github.io/), [Kwanyong Park](https://pkyong95.github.io/), [Yoorhim Cho](https://ofzlo.github.io/), [Young-Ju Lee](https://scholar.google.com/citations?user=6goOQh8AAAAJ&hl=en), [Sung Ju Hwang](http://www.sungjuhwang.com/)
- Model Description: Latent-diffusion-based text-to-image generative model. KOALA models use the same text encoders as [SDXL-Base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and replace only the denoising U-Net with the compressed one (see the sketch below).
- Training data: [LAION-aesthetics-V2 6+](https://laion.ai/blog/laion-aesthetics/)
- Resources for more information: Check out [KOALA report on arXiv](https://arxiv.org/abs/2312.04005) and [project page](https://youngwanlee.github.io/KOALA/).
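
Because only the U-Net differs from SDXL-Base-1.0, the compressed U-Net can also be loaded on its own and paired explicitly with the SDXL components. A minimal sketch, assuming the standard Diffusers `unet/` subfolder layout:

```python
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel

# Load the compressed KOALA U-Net on its own...
unet = UNet2DConditionModel.from_pretrained(
    "etri-vilab/koala-700m-llava-cap", subfolder="unet", torch_dtype=torch.float16
)

# ...and pair it with the SDXL-Base-1.0 text encoders and VAE.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    torch_dtype=torch.float16,
).to("cuda")
```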




## Usage with 🤗[Diffusers library](https://github.com/huggingface/diffusers)
Inference code with 25 denoising steps:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the pipeline in FP16 and move it to the GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "etri-vilab/koala-700m-llava-cap", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "A portrait painting of a Golden Retriever like Leonardo da Vinci"
negative = "worst quality, low quality, illustration, low resolution"

# Generate a 1024x1024 image with 25 denoising steps.
image = pipe(prompt=prompt, negative_prompt=negative, num_inference_steps=25).images[0]
image.save("koala_sample.png")
```
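
On smaller GPUs where the full FP16 pipeline does not fit (see the OOM note above), Diffusers' model offloading can trade some speed for lower peak VRAM. A minimal sketch (requires the `accelerate` package):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "etri-vilab/koala-700m-llava-cap", torch_dtype=torch.float16
)
# Keep submodules on the CPU and move each to the GPU only while it runs,
# reducing peak VRAM at some cost in latency.
pipe.enable_model_cpu_offload()

image = pipe(prompt="A cute koala in a eucalyptus tree", num_inference_steps=25).images[0]
image.save("koala_offload.png")
```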



## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include:

- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.

Excluded uses are described below.

### Out-of-Scope Use

The model was not trained to produce factual or true representations of people or events; using it to generate such content is therefore out of scope for its abilities.


## Limitations and Bias
- Text Rendering: The models face challenges in rendering long, legible text within images.
- Complex Prompts: KOALA sometimes struggles with complex prompts involving multiple attributes.
- Dataset Dependencies: The current limitations are partially attributed to the characteristics of the training dataset (LAION-aesthetics-V2 6+).



## Citation
```bibtex
@misc{lee2023koala,
    title={KOALA: Self-Attention Matters in Knowledge Distillation of Latent Diffusion Models for Memory-Efficient and Fast Image Synthesis}, 
    author={Youngwan Lee and Kwanyong Park and Yoorhim Cho and Yong-Ju Lee and Sung Ju Hwang},
    year={2023},
    eprint={2312.04005},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```