bokyeong1015 committed 87f973c (1 parent: 490404f)

Create README.md

Files changed (1): README.md (+140 lines, new file)

---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
datasets:
- ChristophSchuhmann/improved_aesthetics_6.25plus
library_name: diffusers
pipeline_tag: text-to-image
extra_gated_prompt: >-
  This model is open access and available to all, with a CreativeML OpenRAIL-M
  license further specifying rights and usage.

  The CreativeML OpenRAIL License specifies:


  1. You can't use the model to deliberately produce nor share illegal or
  harmful outputs or content

  2. The authors claim no rights on the outputs you generate, you are free to
  use them and are accountable for their use which must not go against the
  provisions set in the license

  3. You may re-distribute the weights and use the model commercially and/or as
  a service. If you do, please be aware you have to include the same use
  restrictions as the ones in the license and share a copy of the CreativeML
  OpenRAIL-M to all your users (please read the license entirely and carefully)

  Please read the full license carefully here:
  https://huggingface.co/spaces/CompVis/stable-diffusion-license

extra_gated_heading: Please read the LICENSE to access this model

---

# BK-SDM-v2 Model Card

BK-SDM-{[**v2-Base**](https://huggingface.co/nota-ai/bk-sdm-v2-base), [**v2-Small**](https://huggingface.co/nota-ai/bk-sdm-v2-small), [**v2-Tiny**](https://huggingface.co/nota-ai/bk-sdm-v2-tiny)} are obtained by compressing [SD-v2.1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base).
- Block-removed Knowledge-distilled Stable Diffusion Models (BK-SDMs) are developed for efficient text-to-image (T2I) synthesis:
  - Certain residual & attention blocks are eliminated from the U-Net of SD (the parameter-count sketch below illustrates the effect).
  - Despite the use of very limited data, distillation retraining remains surprisingly effective.
- Resources for more information: [Paper](https://arxiv.org/abs/2305.15798), [GitHub](https://github.com/Nota-NetsPresso/BK-SDM).
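
As a quick check of the block-removal claim, the U-Net sizes can be inspected directly. The snippet below is a minimal sketch, assuming the checkpoints follow the standard Diffusers multi-folder layout with the U-Net stored under a `unet` subfolder:

```python
from diffusers import UNet2DConditionModel

# Compare U-Net sizes of the original SD-v2.1-base and a compressed BK-SDM-v2 variant.
for repo_id in ["stabilityai/stable-diffusion-2-1-base", "nota-ai/bk-sdm-v2-base"]:
    unet = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet")
    n_params = sum(p.numel() for p in unet.parameters())
    print(f"{repo_id}: {n_params / 1e9:.2f}B U-Net parameters")
```

The printed counts should roughly match the `# Params, U-Net` column in the result tables below.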


## Examples with 🤗[Diffusers library](https://github.com/huggingface/diffusers)

The following inference code uses the default PNDM scheduler with 50 denoising steps.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-v2-base", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a black vase holding a bouquet of roses"
image = pipe(prompt).images[0]

image.save("example.png")
```
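The scheduler and sampling settings can also be set explicitly. The snippet below is a small sketch that keeps the PNDM scheduler but lowers the step count to 25 (the setting used for the evaluation further down); `guidance_scale=7.5` is simply the Diffusers default spelled out.

```python
import torch
from diffusers import StableDiffusionPipeline, PNDMScheduler

pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-v2-base", torch_dtype=torch.float16)
# Rebuild the scheduler from the pipeline's own config; the same pattern lets you
# swap in a different scheduler class if desired.
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "a black vase holding a bouquet of roses"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("example_25steps.png")
```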


## Compression Method

Based on the [U-Net architecture](https://huggingface.co/nota-ai/bk-sdm-base#u-net-architecture) and [distillation retraining](https://huggingface.co/nota-ai/bk-sdm-base#distillation-pretraining) of BK-SDM, BK-SDM-v2 uses a reduced batch size (from 256 to 128) for faster training. The settings are listed below and summarized in the sketch after this list.

- **Training Data:** 212,776 image-text pairs (i.e., 0.22M pairs) from [LAION-Aesthetics V2 6.5+](https://laion.ai/blog/laion-aesthetics/).
- **Hardware:** a single NVIDIA A100 80GB GPU
- **Gradient Accumulations:** 4
- **Batch:** 128 (= 4×32)
- **Optimizer:** AdamW
- **Learning Rate:** constant at 5e-5 throughout the 50K-iteration pretraining
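
For reference, the hyperparameters above can be gathered into a single configuration object. This is only an illustrative sketch of the values stated in this card; the variable and key names are hypothetical and not taken from the BK-SDM training scripts.

```python
# Hypothetical summary of the BK-SDM-v2 distillation-retraining setup described above;
# see the BK-SDM GitHub repository for the actual training scripts.
bk_sdm_v2_train_config = {
    "train_data": "LAION-Aesthetics V2 6.5+ subset (212,776 image-text pairs)",
    "hardware": "1x NVIDIA A100 80GB",
    "per_step_batch_size": 32,          # samples processed per micro-step
    "gradient_accumulation_steps": 4,   # 4 x 32 = effective batch size of 128
    "effective_batch_size": 128,
    "optimizer": "AdamW",
    "learning_rate": 5e-5,              # constant schedule
    "max_train_steps": 50_000,
}
```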


## Experimental Results

The following tables show zero-shot results on 30K samples from the MS-COCO validation split. After generating 512×512 images with the PNDM scheduler and 25 denoising steps, we downsampled them to 256×256 for computing the generation scores (see the sketch below).

- Checkpoints of our models were taken at the 50K-th training iteration.
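
The snippet below is a minimal sketch of the generate-and-resize step for a single caption; the actual evaluation iterates over 30K MS-COCO captions and passes the 256×256 images to FID, IS, and CLIP-score tooling, which is omitted here. The caption string is only a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-v2-base", torch_dtype=torch.float16
).to("cuda")

# One placeholder caption stands in for the 30K MS-COCO validation captions.
caption = "a person riding a horse on a beach"
image_512 = pipe(caption, num_inference_steps=25).images[0]  # 512x512 PIL image

# Downsample to 256x256 before computing FID, IS, and CLIP score.
image_256 = image_512.resize((256, 256))
image_256.save("eval_sample_256.png")
```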

#### Compression of SD-v2.1-base
| Model | FID↓ | IS↑ | CLIP Score↑<br>(ViT-g/14) | # Params,<br>U-Net | # Params,<br>Whole SDM |
|---|:---:|:---:|:---:|:---:|:---:|
| [Stable Diffusion v2.1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) | 13.93 | 35.93 | 0.3075 | 0.87B | 1.26B |
| [BK-SDM-v2-Base](https://huggingface.co/nota-ai/bk-sdm-v2-base) (Ours) | 15.85 | 31.70 | 0.2868 | 0.59B | 0.98B |
| [BK-SDM-v2-Small](https://huggingface.co/nota-ai/bk-sdm-v2-small) (Ours) | 16.61 | 31.73 | 0.2901 | 0.49B | 0.88B |
| [BK-SDM-v2-Tiny](https://huggingface.co/nota-ai/bk-sdm-v2-tiny) (Ours) | 15.68 | 31.64 | 0.2897 | 0.33B | 0.72B |

#### Compression of SD-v1.4
| Model | FID↓ | IS↑ | CLIP Score↑<br>(ViT-g/14) | # Params,<br>U-Net | # Params,<br>Whole SDM |
|---|:---:|:---:|:---:|:---:|:---:|
| [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) | 13.05 | 36.76 | 0.2958 | 0.86B | 1.04B |
| [BK-SDM-Base](https://huggingface.co/nota-ai/bk-sdm-base) (Ours) | 15.76 | 33.79 | 0.2878 | 0.58B | 0.76B |
| [BK-SDM-Base-2M](https://huggingface.co/nota-ai/bk-sdm-base-2m) (Ours) | 14.81 | 34.17 | 0.2883 | 0.58B | 0.76B |
| [BK-SDM-Small](https://huggingface.co/nota-ai/bk-sdm-small) (Ours) | 16.98 | 31.68 | 0.2677 | 0.49B | 0.66B |
| [BK-SDM-Small-2M](https://huggingface.co/nota-ai/bk-sdm-small-2m) (Ours) | 17.05 | 33.10 | 0.2734 | 0.49B | 0.66B |
| [BK-SDM-Tiny](https://huggingface.co/nota-ai/bk-sdm-tiny) (Ours) | 17.12 | 30.09 | 0.2653 | 0.33B | 0.50B |
| [BK-SDM-Tiny-2M](https://huggingface.co/nota-ai/bk-sdm-tiny-2m) (Ours) | 17.53 | 31.32 | 0.2690 | 0.33B | 0.50B |

#### Visual Analysis: Image Areas Affected by Each Word
Knowledge distillation (KD) enables our models to mimic the original SDM, yielding similar per-word attribution maps. The model trained without KD behaves differently, producing dissimilar maps and inaccurate generations (e.g., two sheep and unusual bird shapes).

<center>
<img alt="cross-attn-maps" src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/assets-bk-sdm/fig_cross-attn-maps_bk-sd-v2.png" width="100%">
</center>


# Uses
Please follow [the usage guidelines of Stable Diffusion v1](https://huggingface.co/CompVis/stable-diffusion-v1-4#uses).


# Acknowledgments
- [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) and [Gwangju AICA](http://www.aica-gj.kr/main.php) for generously providing GPU resources.
- [CompVis](https://github.com/CompVis/latent-diffusion), [Runway](https://runwayml.com/), and [Stability AI](https://stability.ai/) for the pioneering research on Stable Diffusion.
- [LAION](https://laion.ai/), [Diffusers](https://github.com/huggingface/diffusers), [PEFT](https://github.com/huggingface/peft), [DreamBooth](https://dreambooth.github.io/), [Gradio](https://www.gradio.app/), and [Core ML Stable Diffusion](https://github.com/apple/ml-stable-diffusion) for their valuable contributions.


# Citation
```bibtex
@article{kim2023architectural,
  title={BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion},
  author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
  journal={arXiv preprint arXiv:2305.15798},
  year={2023},
  url={https://arxiv.org/abs/2305.15798}
}
```
```bibtex
@article{kim2023bksdm,
  title={BK-SDM: Architecturally Compressed Stable Diffusion for Efficient Text-to-Image Generation},
  author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
  journal={ICML Workshop on Efficient Systems for Foundation Models (ES-FoMo)},
  year={2023},
  url={https://openreview.net/forum?id=bOVydU0XKC}
}
```

*This model card is based on the [Stable Diffusion v1 model card](https://huggingface.co/CompVis/stable-diffusion-v1-4).*