bokyeong1015 committed • Commit 5f10bb6 • Parent: 42d2d5d

Shorten Readme

docs/description.md CHANGED (+2 −8)
```diff
@@ -1,5 +1,6 @@
 This demo showcases a lightweight Stable Diffusion model (SDM) for general-purpose text-to-image synthesis. Our model [**BK-SDM-Small**](https://huggingface.co/nota-ai/bk-sdm-small) achieves **36% reduced** parameters and latency. This model is built by (i) removing several residual and attention blocks from the U-Net of [SDM-v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and (ii) distillation pretraining on only 0.22M LAION pairs (fewer than 0.1% of the full training set). Despite very limited training resources, our model can imitate the original SDM by benefiting from transferred knowledge.
 
+- **For more information and acknowledgments**, please visit [GitHub](https://github.com/Nota-NetsPresso/BK-SDM) and [Paper](https://arxiv.org/abs/2305.15798).
+
 <center>
 <img alt="U-Net architectures and KD-based pretraining" src="https://huggingface.co/spaces/nota-ai/compressed-stable-diffusion/resolve/91f349ab3b900cbfec20163edd6a312d1e8c8193/docs/fig_model.png" width="65%">
 </center>
@@ -14,16 +15,9 @@ This demo showcases a lightweight Stable Diffusion model (SDM) for general-purpo
 - [**ICCV 2023 Demo Track**](https://iccv2023.thecvf.com/)
 - Please be aware that your prompts are logged, _without_ any personally identifiable information.
 - For different images with the same prompt, please change _Random Seed_ in Advanced Settings (because the first sampled latent code per seed is reused).
-
-### Acknowledgments
-- We thank [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) for generously providing the Azure credits used during pretraining.
-- We appreciate the pioneering research on Latent/Stable Diffusion conducted by [CompVis](https://github.com/CompVis/latent-diffusion), [Runway](https://runwayml.com/), and [Stability AI](https://stability.ai/).
-- Special thanks to the contributors to [LAION](https://laion.ai/), [Diffusers](https://github.com/huggingface/diffusers), and [Gradio](https://www.gradio.app/) for their valuable support.
-- Some demo code was borrowed from the repos of Stability AI ([stabilityai/stable-diffusion](https://huggingface.co/spaces/stabilityai/stable-diffusion)) and AK ([akhaliq/small-stable-diffusion-v0](https://huggingface.co/spaces/akhaliq/small-stable-diffusion-v0)). Thanks!
 
 ### Demo Environment
 - Regardless of machine type, our compressed model achieves speedups while preserving visually compelling results.
 - [July/27/2023] **NVIDIA T4-small** (4 vCPU · 15 GB RAM · 16 GB VRAM) — 5~10 sec inference with the original SDM (for a 512×512 image with 25 denoising steps).
 - [June/30/2023] **Free CPU-basic** (2 vCPU · 16 GB RAM) — 7~10 min slow inference with the original SDM.
-  - Because free CPU resources are dynamically allocated across demos, inference may take much longer, depending on the server load.
 - [May/31/2023] **NVIDIA T4-small**
```
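The _Random Seed_ note in the README above can be illustrated with a minimal sketch. This is plain-stdlib Python standing in for the latent sampling Stable Diffusion pipelines do with `torch.randn` and a seeded generator (the function name `sample_latent` is hypothetical, not from the demo code): the initial latent noise is a pure function of the seed, so the same seed reproduces the same image for a fixed prompt, and only changing the seed yields a new one.

```python
import random

def sample_latent(seed: int, n: int = 8) -> list[float]:
    # Stand-in for seeding a generator and drawing Gaussian noise,
    # as a diffusion pipeline does for its initial latent code.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Same seed -> same latent code -> same image for a fixed prompt.
assert sample_latent(42) == sample_latent(42)
# Different seed -> different latent code -> a different image.
assert sample_latent(42) != sample_latent(43)
```

This is why the demo asks users to change the seed rather than re-click generate: with the seed fixed, the denoising process starts from an identical latent and is deterministic.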