Transformers
GGUF
Finnish
English
Inference Endpoints
mradermacher committed
Commit 6edfbb7
1 Parent(s): 0642c56

auto-patch README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -6,6 +6,7 @@ datasets:
 - mc4
 - allenai/dolma
 language:
+- fi
 - en
 library_name: transformers
 license: apache-2.0
@@ -34,7 +35,7 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/Poro-34B-i1-GGUF/resolve/main/Poro-34B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.9 | for the desperate |
-| [GGUF](https://huggingface.co/mradermacher/Poro-34B-i1-GGUF/resolve/main/Poro-34B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.6 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/Poro-34B-i1-GGUF/resolve/main/Poro-34B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.6 | mostly desperate |
 | [GGUF](https://huggingface.co/mradermacher/Poro-34B-i1-GGUF/resolve/main/Poro-34B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.7 | |
 | [GGUF](https://huggingface.co/mradermacher/Poro-34B-i1-GGUF/resolve/main/Poro-34B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.7 | |
 | [GGUF](https://huggingface.co/mradermacher/Poro-34B-i1-GGUF/resolve/main/Poro-34B.i1-IQ2_S.gguf) | i1-IQ2_S | 11.3 | |
@@ -55,7 +56,6 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Poro-34B-i1-GGUF/resolve/main/Poro-34B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 26.2 | |
 | [GGUF](https://huggingface.co/mradermacher/Poro-34B-i1-GGUF/resolve/main/Poro-34B.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
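As a usage note for the quant table in the diff above, here is a minimal sketch of fetching one of the listed files with the huggingface_hub Python client. The repo id and filename are taken directly from the table links; choosing Q6_K is only an example, not a recommendation from the card.

```python
# Minimal sketch: download one of the GGUF quants listed in the table above.
# Repo id and filename come from the table links; pick whichever quant fits your hardware.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Poro-34B-i1-GGUF",
    filename="Poro-34B.i1-Q6_K.gguf",  # ~28.9 GB per the table; "practically like static Q6_K"
)
print(f"GGUF file saved to: {path}")
```

The downloaded .gguf file is intended for llama.cpp-based runtimes that support the corresponding quant type.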