patrickvonplaten committed
Commit b4aec14
1 Parent(s): c97b41c

Update README.md

Files changed (1): README.md (+5 -1)
README.md CHANGED
@@ -13,7 +13,7 @@ inference: false
  Latent Consistency Model (LCM) LoRA was proposed in [LCM-LoRA: A universal Stable-Diffusion Acceleration Module](TODO:)
  by *Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu et al.*
 
- It is a distilled consistency adapter for [`stable-diffusion-xl-base-1.0`](stabilityai/stable-diffusion-xl-base-1.0) that allows
+ It is a distilled consistency adapter for [`stable-diffusion-xl-base-1.0`](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) that allows
  the number of inference steps to be reduced to only **2 - 8 steps**.
 
  | Model | Params / M |
@@ -76,6 +76,10 @@ Works as well! TODO docs
 
  Works as well! TODO docs
 
+ ## Speed Benchmark
+
+ TODO
+
  ## Training
 
  TODO
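
For context, this is roughly how the few-step inference described in the diff is typically run with `diffusers`. It is a minimal sketch, not part of the commit: the adapter repo id `latent-consistency/lcm-lora-sdxl`, the 4-step schedule, and `guidance_scale=1.0` are assumed, illustrative choices rather than values taken from this README.

```python
# Minimal sketch: few-step SDXL inference with an LCM-LoRA adapter via diffusers.
# The adapter repo id and sampling settings are assumptions for illustration.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Swap in the LCM scheduler and load the distilled consistency adapter.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")  # assumed repo id

# The README quotes a 2 - 8 step range; 4 steps are used here as an example.
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_lora_sdxl.png")
```

LCM sampling is usually run with little or no classifier-free guidance, which is why the sketch keeps `guidance_scale` low; the 2 - 8 step range quoted in the README is the main knob for trading speed against quality.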