docs: readme
README.md
CHANGED
@@ -9,6 +9,13 @@ inference: false
 **Original model**: [sd-turbo](https://huggingface.co/stabilityai/sd-turbo)<br/>
 **GGUF quantization**: based on stable-diffusion.cpp [ac54e](https://github.com/leejet/stable-diffusion.cpp/commit/ac54e0076052a196b7df961eb1f792c9ff4d7f22), as patched by llama-box.
 
+| Quantization | OpenCLIP ViT-H/14 Quantization | VAE Quantization |
+| --- | --- | --- |
+| FP16 | FP16 | FP16 |
+| Q8_0 | FP16 | FP16 |
+| Q4_1 | FP16 | FP16 |
+| Q4_0 | FP16 | FP16 |
+
 ---
 
 # SD-Turbo Model Card
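Not part of the commit itself, but as a hedged sketch of how one could verify the per-component quantization the new table documents: the `gguf` Python package (from the llama.cpp project) can list each tensor's quantization type. The file name below is a placeholder, and the component prefixes are assumptions about SD-style tensor naming.

```python
# Minimal sketch: count tensors per quantization type, grouped by a rough
# component prefix, to cross-check the table above (e.g. diffusion weights
# in Q8_0 while the CLIP text encoder and VAE stay in F16).
# Assumes the `gguf` Python package is installed; "sd-turbo-Q8_0.gguf" is a
# placeholder file name. Prefixes such as "model", "cond_stage_model",
# "first_stage_model" are typical of SD-style checkpoints but not guaranteed.
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("sd-turbo-Q8_0.gguf")

counts = Counter()
for tensor in reader.tensors:
    component = tensor.name.split(".")[0]
    counts[(component, tensor.tensor_type.name)] += 1

for (component, qtype), n in sorted(counts.items()):
    print(f"{component:25s} {qtype:6s} {n} tensors")
```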