This demo showcases a compressed Stable Diffusion model (SDM) for general-purpose text-to-image synthesis. Our lightest model (**BK-SDM-Small**) achieves a **36% reduction** in parameters and latency. It is built by (i) removing several residual and attention blocks from the U-Net of SDM and (ii) distillation pretraining on only 0.22M LAION pairs (fewer than 0.1% of the full training set). Despite these very limited training resources, our model imitates the original SDM by benefiting from transferred knowledge.