mradermacher committed (verified)
Commit 490526f · Parent(s): 12175a3

auto-patch README.md

Files changed (1): README.md (+4 -0)
README.md CHANGED
@@ -34,9 +34,13 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.9 | |
 | [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
 | [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
 | [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
 | [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
+| [GGUF](https://huggingface.co/mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF/resolve/main/MT2-Gen4-MAGBMM-gemma-2-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |

 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
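
For reference, one way to fetch one of the quants added in this commit is with the huggingface_hub Python client; the repo id and filename below come from the table above, and the choice of Q4_K_M ("fast, recommended") is only an example, not part of the commit itself:

```python
# Minimal sketch: download one of the newly added GGUF quants.
# Any filename listed in the table works; Q4_K_M is used here as an example.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/MT2-Gen4-MAGBMM-gemma-2-9B-GGUF",
    filename="MT2-Gen4-MAGBMM-gemma-2-9B.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded .gguf file
```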