mradermacher committed
auto-patch README.md
README.md CHANGED
@@ -11,7 +11,6 @@ quantized_by: mradermacher
 
 weighted/imatrix quants of https://huggingface.co/tiiuae/falcon-40b-instruct
 
-
 <!-- provided-files -->
 static quants are available at https://huggingface.co/mradermacher/falcon-40b-instruct-GGUF
 ## Usage
@@ -27,6 +26,8 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/falcon-40b-instruct-i1-GGUF/resolve/main/falcon-40b-instruct.i1-Q2_K.gguf) | i1-Q2_K | 16.4 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/falcon-40b-instruct-i1-GGUF/resolve/main/falcon-40b-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 20.7 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/falcon-40b-instruct-i1-GGUF/resolve/main/falcon-40b-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 26.1 | fast, medium quality |
 
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
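The GGUF links in the table all follow Hugging Face's `https://huggingface.co/<repo>/resolve/main/<filename>` pattern. As a minimal sketch, a hypothetical helper (`gguf_url` is not part of the README or any library) that assembles such a link:

```python
# Hypothetical helper (not from the README): build the "resolve" download URL
# that the quant table's GGUF links follow.
def gguf_url(repo: str, filename: str) -> str:
    # Hugging Face serves raw repo files from /<repo>/resolve/<revision>/<filename>;
    # the table links pin the "main" revision.
    return f"https://huggingface.co/{repo}/resolve/main/{filename}"

print(gguf_url("mradermacher/falcon-40b-instruct-i1-GGUF",
               "falcon-40b-instruct.i1-Q4_K_M.gguf"))
```

Running this reproduces the i1-Q4_K_M link exactly as it appears in the table above.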