bokyeong1015 committed
Commit 105640e
1 Parent(s): 5f10bb6

Shorten Readme V2

Files changed (1)
  1. docs/description.md +11 -14
docs/description.md CHANGED
@@ -1,5 +1,5 @@
  This demo showcases a lightweight Stable Diffusion model (SDM) for general-purpose text-to-image synthesis. Our model [**BK-SDM-Small**](https://huggingface.co/nota-ai/bk-sdm-small) achieves **36% reduced** parameters and latency. This model is built by (i) removing several residual and attention blocks from the U-Net of [SDM-v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and (ii) distillation pretraining on only 0.22M LAION pairs (fewer than 0.1% of the full training set). Despite very limited training resources, our model can imitate the original SDM by benefiting from transferred knowledge.
- - **For more information and acknowledgments**, please visit [GitHub](https://github.com/Nota-NetsPresso/BK-SDM) and [Paper](https://arxiv.org/abs/2305.15798).
+ - **For more information & acknowledgments**, please see [Paper](https://arxiv.org/abs/2305.15798), [GitHub](https://github.com/Nota-NetsPresso/BK-SDM), and the BK-SDM-{[Base](https://huggingface.co/nota-ai/bk-sdm-base), [Small](https://huggingface.co/nota-ai/bk-sdm-small), [Tiny](https://huggingface.co/nota-ai/bk-sdm-tiny)} model cards.
  
  <center>
  <img alt="U-Net architectures and KD-based pretraining" img src="https://huggingface.co/spaces/nota-ai/compressed-stable-diffusion/resolve/91f349ab3b900cbfec20163edd6a312d1e8c8193/docs/fig_model.png" width="65%">
@@ -7,17 +7,14 @@ This demo showcases a lightweight Stable Diffusion model (SDM) for general-purpo
  
  <br/>
  
+ - This research was accepted to [**ICCV 2023 Demo Track**](https://iccv2023.thecvf.com/) & [**ICML 2023 Workshop on Efficient Systems for Foundation Models** (ES-FoMo)](https://es-fomo.com/).
+ - Please be aware that your prompts are logged, _without_ any personally identifiable information.
+ - For different images with the same prompt, please change _Random Seed_ in Advanced Settings (because the first latent code sampled for each seed is reused).
  
- ### Notice
- - The model weights are available at BK-SDM-{[Base](https://huggingface.co/nota-ai/bk-sdm-base), [Small](https://huggingface.co/nota-ai/bk-sdm-small), [Tiny](https://huggingface.co/nota-ai/bk-sdm-tiny)} and can be easily used with 🤗 Diffusers.
- - This research was accepted to
-   - [**ICML 2023 Workshop on Efficient Systems for Foundation Models** (ES-FoMo)](https://es-fomo.com/)
-   - [**ICCV 2023 Demo Track**](https://iccv2023.thecvf.com/)
- - Please be aware that your prompts are logged, _without_ any personally identifiable information.
- - For different images with the same prompt, please change _Random Seed_ in Advanced Settings (because the first latent code sampled for each seed is reused).
- 
- ### Demo Environment
- - Regardless of machine types, our compressed model achieves speedups while preserving visually compelling results.
- - [July/27/2023] **NVIDIA T4-small** (4 vCPU · 15 GB RAM · 16 GB VRAM) — 5~10 sec inference of the original SDM (for a 512×512 image with 25 denoising steps).
- - [June/30/2023] **Free CPU-basic** (2 vCPU · 16 GB RAM) — 7~10 min slow inference of the original SDM.
- - [May/31/2023] **NVIDIA T4-small**
+ **Demo Environment**: [July/27/2023] NVIDIA T4-small (4 vCPU · 15 GB RAM · 16 GB VRAM) — 5~10 sec inference of the original SDM (for a 512×512 image with 25 denoising steps).
+ <details>
+ <summary>Previous Env Setup:</summary>
+ [June/30/2023] Free CPU-basic (2 vCPU · 16 GB RAM): 7~10 min slow inference of the original SDM.
+ <br/>
+ [May/31/2023] NVIDIA T4-small (4 vCPU · 15 GB RAM · 16 GB VRAM)
+ </details>
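
The removed "Notice" bullet above states that the BK-SDM-{Base, Small, Tiny} weights can be used directly with 🤗 Diffusers, and the demo notes describe 25-step 512×512 generation plus the per-seed initial latent behind the _Random Seed_ setting. A minimal sketch of that usage follows; the model IDs come from the diff, while the prompt, dtype, and device choices are illustrative assumptions rather than the demo's exact code.

```python
# Minimal sketch (assumptions noted inline), not the demo's exact implementation:
# load a BK-SDM checkpoint with 🤗 Diffusers and generate one 512x512 image
# in 25 denoising steps.
import torch
from diffusers import StableDiffusionPipeline

# Any of the checkpoints from the diff works here:
# nota-ai/bk-sdm-base, nota-ai/bk-sdm-small, nota-ai/bk-sdm-tiny.
pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small",
    torch_dtype=torch.float16,  # assumes a CUDA GPU; use torch.float32 on CPU
)
pipe = pipe.to("cuda")

# A fixed seed fixes the initial latent, so the same prompt + seed reproduces
# the same image; changing the seed (cf. "Random Seed" in the demo) changes it.
generator = torch.Generator(device="cuda").manual_seed(42)

image = pipe(
    "a tropical bird sitting on a branch of a tree",  # example prompt (assumption)
    num_inference_steps=25,  # matches the "25 denoising steps" noted above
    height=512,
    width=512,
    generator=generator,
).images[0]
image.save("bk_sdm_small_sample.png")
```

This only illustrates the loading and sampling API; the latency figures in the diff (5~10 sec on a T4-small for the original SDM, with the compressed model's ~36% reduction) refer to the demo environment, not to this sketch.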