bokyeong1015 committed
Commit
25230d9
1 Parent(s): 271d743

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -34,7 +34,7 @@ extra_gated_heading: Please read the LICENSE to access this model
 
 # BK-SDM Model Card
 Block-removed Knowledge-distilled Stable Diffusion Model (BK-SDM) is an architecturally compressed SDM for efficient general-purpose text-to-image synthesis. This model is built by (i) removing several residual and attention blocks from the U-Net of [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and (ii) distillation pretraining on only 0.22M LAION pairs (fewer than 0.1% of the full training set). Despite being trained with very limited resources, our compact model can imitate the original SDM by benefiting from transferred knowledge.
-- **Resources for more information**: [Paper](https://arxiv.org/abs/2305.15798), [Demo](https://huggingface.co/spaces/nota-ai/compressed-stable-diffusion).
+- **Resources for more information**: [Paper](https://arxiv.org/abs/2305.15798), [GitHub](https://github.com/Nota-NetsPresso/BK-SDM), [Demo](https://huggingface.co/spaces/nota-ai/compressed-stable-diffusion).
 
 
 
@@ -189,8 +189,8 @@ The intended use of this model is with the [Safety Checker](https://github.com/h
 
 # Acknowledgments
 - We express our gratitude to [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) for generously providing the Azure credits used during pretraining.
-- We deeply appreciate the pioneering research on Latent/Stable Diffusion conducted by [CompVis](https://github.com/CompVis/latent-diffusion) and [Runway](https://runwayml.com/).
-- Special thanks to the contributors to [Diffusers](https://github.com/huggingface/diffusers) for their valuable support.
+- We deeply appreciate the pioneering research on Latent/Stable Diffusion conducted by [CompVis](https://github.com/CompVis/latent-diffusion), [Runway](https://runwayml.com/), and [Stability AI](https://stability.ai/).
+- Special thanks to the contributors to [LAION](https://laion.ai/), [Diffusers](https://github.com/huggingface/diffusers), and [Gradio](https://www.gradio.app/) for their valuable support.
 
 
 # Citation
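
For context on the model card edited above: since the card credits the Diffusers library, a minimal text-to-image sketch is shown below. The repo id `nota-ai/bk-sdm-base` is an assumption for illustration only; the actual released checkpoint ids are listed in the GitHub repository added in this commit.

```python
# Minimal sketch: text-to-image with a BK-SDM checkpoint via Diffusers.
# Assumption: the weights are published under "nota-ai/bk-sdm-base";
# substitute the real repo id from the project's GitHub page.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-base",      # hypothetical id; check the released checkpoints
    torch_dtype=torch.float16,  # half precision to reduce GPU memory
).to("cuda")

image = pipe("a tropical bird sitting on a branch of a tree").images[0]
image.save("bk_sdm_sample.png")
```

Because the compression removes U-Net blocks rather than changing the model's interface, the standard `StableDiffusionPipeline` should load it without custom code.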