mradermacher committed (verified)
Commit 73e7bf7 · 1 Parent(s): ce86d51

auto-patch README.md

Files changed (1): README.md (+8 −1)
README.md CHANGED
@@ -7,7 +7,7 @@ library_name: transformers
 license: other
 license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
 license_name: yi-license
-no_imatrix: "GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0"
+no_imatrix: 'GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0'
 quantized_by: mradermacher
 ---
 ## About
@@ -35,7 +35,14 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.8 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.6 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q3_K_L.gguf) | i1-Q3_K_L | 19.1 | IQ3_M probably better |
 | [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.6 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.7 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q5_K_S.gguf) | i1-Q5_K_S | 25.0 | |
+| [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.6 | |
+| [GGUF](https://huggingface.co/mradermacher/Pallas-0.5-frankenmerge-i1-GGUF/resolve/main/Pallas-0.5-frankenmerge.i1-Q6_K.gguf) | i1-Q6_K | 29.7 | practically like static Q6_K |
 
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant