gpustack committed
Commit 937e5a9 · verified · Parent: da53a4d

docs: readme

Files changed (1): README.md (+9, -0)
README.md CHANGED
@@ -45,6 +45,15 @@ pipeline_tag: text-to-image
 **Original model**: [stable-diffusion-3.5-large](https://huggingface.co/stabilityai/stable-diffusion-3.5-large)<br/>
 **GGUF quantization**: based on stable-diffusion.cpp [ac54e](https://github.com/leejet/stable-diffusion.cpp/commit/ac54e0076052a196b7df961eb1f792c9ff4d7f22), as patched by llama-box.
 
+| Quantization | OpenAI CLIP ViT-L/14 Quantization | OpenCLIP ViT-G/14 Quantization | Google T5-xxl Quantization | VAE Quantization |
+| --- | --- | --- | --- | --- |
+| FP16 | FP16 | FP16 | FP16 | FP16 |
+| Q8_0 | FP16 | FP16 | Q8_0 | FP16 |
+| (pure) Q8_0 | Q8_0 | Q8_0 | Q8_0 | FP16 |
+| Q4_1 | FP16 | FP16 | Q8_0 | FP16 |
+| Q4_0 | FP16 | FP16 | Q8_0 | FP16 |
+| (pure) Q4_0 | Q4_0 | Q4_0 | Q4_0 | FP16 |
+
 ---
 
 # Stable Diffusion 3.5 Large
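
The table added in this commit can also be read as a small lookup structure mapping each model quantization to the quantizations used for the text encoders and the VAE. The sketch below is illustrative only: the `COMPONENT_QUANTS` dict and `component_quants` helper are hypothetical names, not part of this repository, stable-diffusion.cpp, or llama-box.

```python
# Minimal sketch of the per-component quantization mapping from the table above.
# COMPONENT_QUANTS and component_quants are hypothetical names used for illustration.

COMPONENT_QUANTS = {
    # model quant: (OpenAI CLIP ViT-L/14, OpenCLIP ViT-G/14, Google T5-xxl, VAE)
    "FP16":        ("FP16", "FP16", "FP16", "FP16"),
    "Q8_0":        ("FP16", "FP16", "Q8_0", "FP16"),
    "(pure) Q8_0": ("Q8_0", "Q8_0", "Q8_0", "FP16"),
    "Q4_1":        ("FP16", "FP16", "Q8_0", "FP16"),
    "Q4_0":        ("FP16", "FP16", "Q8_0", "FP16"),
    "(pure) Q4_0": ("Q4_0", "Q4_0", "Q4_0", "FP16"),
}


def component_quants(model_quant: str) -> dict:
    """Return the text-encoder and VAE quantizations paired with a given model quant."""
    clip_l, clip_g, t5xxl, vae = COMPONENT_QUANTS[model_quant]
    return {"clip_l": clip_l, "clip_g": clip_g, "t5xxl": t5xxl, "vae": vae}


if __name__ == "__main__":
    # e.g. the Q4_0 variant keeps both CLIP encoders and the VAE in FP16, with T5-xxl at Q8_0
    print(component_quants("Q4_0"))
```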