Tags: Text-to-Image · Diffusers · Safetensors · StableDiffusionPipeline · stable-diffusion · stable-diffusion-diffusers · Inference Endpoints
Commit e3d045f, committed by bokyeong1015
1 Parent(s): 0bb7744

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -34,7 +34,7 @@ extra_gated_heading: Please read the LICENSE to access this model
 
 # BK-SDM Model Card
 Block-removed Knowledge-distilled Stable Diffusion Model (BK-SDM) is an architecturally compressed SDM for efficient general-purpose text-to-image synthesis. This model is built by (i) removing several residual and attention blocks from the U-Net of [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and (ii) distillation pretraining on only 0.22M LAION pairs (fewer than 0.1% of the full training set). Despite being trained with very limited resources, our compact model can imitate the original SDM by benefiting from transferred knowledge.
-- **Resources for more information**: [Paper](https://arxiv.org/abs/2305.15798), [Demo](https://huggingface.co/spaces/nota-ai/compressed-stable-diffusion).
+- **Resources for more information**: [Paper](https://arxiv.org/abs/2305.15798), [GitHub](https://github.com/Nota-NetsPresso/BK-SDM), [Demo](https://huggingface.co/spaces/nota-ai/compressed-stable-diffusion).
 
 
 
@@ -188,8 +188,8 @@ The intended use of this model is with the [Safety Checker](https://github.com/h
 
 # Acknowledgments
 - We express our gratitude to [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) for generously providing the Azure credits used during pretraining.
-- We deeply appreciate the pioneering research on Latent/Stable Diffusion conducted by [CompVis](https://github.com/CompVis/latent-diffusion) and [Runway](https://runwayml.com/).
-- Special thanks to the contributors to [Diffusers](https://github.com/huggingface/diffusers) for their valuable support.
+- We deeply appreciate the pioneering research on Latent/Stable Diffusion conducted by [CompVis](https://github.com/CompVis/latent-diffusion), [Runway](https://runwayml.com/), and [Stability AI](https://stability.ai/).
+- Special thanks to the contributors to [LAION](https://laion.ai/), [Diffusers](https://github.com/huggingface/diffusers), and [Gradio](https://www.gradio.app/) for their valuable support.
 
 
 # Citation
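
Since the card tags the model with StableDiffusionPipeline, the compressed checkpoint should drop straight into the standard Diffusers workflow. Below is a minimal usage sketch, assuming the checkpoint id `nota-ai/bk-sdm-small`, an fp16-capable CUDA device, and an illustrative prompt; none of these specifics appear in the diff above.

```python
# Minimal sketch, not the card's own example: load BK-SDM through Diffusers'
# standard StableDiffusionPipeline. The repo id "nota-ai/bk-sdm-small" is an
# assumption; substitute the checkpoint this card actually describes.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small",     # assumed checkpoint id
    torch_dtype=torch.float16,  # half precision to reduce memory (optional)
)
pipe = pipe.to("cuda")          # assumes a CUDA-capable GPU

# The block-removed U-Net is where the compression lives; its parameter count
# should come in well under the roughly 860M of Stable Diffusion v1.4's U-Net.
print(f"U-Net parameters: {sum(p.numel() for p in pipe.unet.parameters()):,}")

prompt = "a golden vase with different flowers"  # illustrative prompt
image = pipe(prompt).images[0]
image.save("bk_sdm_result.png")
```

Because the compression described above only removes U-Net blocks, the text encoder, VAE, and scheduler should load and behave as in the original pipeline, which is why no BK-SDM-specific code is needed.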