mradermacher committed
Commit 27b860c
Parent(s): 8d00c0a
auto-patch README.md

README.md CHANGED
@@ -24,7 +24,6 @@ tags:
 <!-- ### vocab_type: -->
 weighted/imatrix quants of https://huggingface.co/Virt-io/Llama-3-8B-Irene-v0.1
 
-
 <!-- provided-files -->
 static quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Irene-v0.1-GGUF
 ## Usage
@@ -61,7 +60,6 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Irene-v0.1-i1-GGUF/resolve/main/Llama-3-8B-Irene-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
 | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Irene-v0.1-i1-GGUF/resolve/main/Llama-3-8B-Irene-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):