Update README.md
README.md CHANGED
@@ -35,12 +35,10 @@ All quants made using imatrix option with dataset from [here](https://gist.githu
| [Meta-Llama-3-8B-Instruct-abliterated-v3-Q5_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
-| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ4_NL.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF//main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ4_NL.gguf) | IQ4_NL | | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ4_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_L.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ3_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
-| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ3_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF//main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ3_S.gguf) | IQ3_S | | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ3_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ3_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
@@ -48,9 +46,6 @@ All quants made using imatrix option with dataset from [here](https://gist.githu
| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ2_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ2_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ2_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
-| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ2_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF//main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ2_XXS.gguf) | IQ2_XXS | | Lower quality, uses SOTA techniques to be usable. |
-| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF//main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ1_M.gguf) | IQ1_M | | Extremely low quality, *not* recommended. |
-| [Meta-Llama-3-8B-Instruct-abliterated-v3-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF//main/Meta-Llama-3-8B-Instruct-abliterated-v3-IQ1_S.gguf) | IQ1_S | | Extremely low quality, *not* recommended. |

## Downloading using huggingface-cli
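
A minimal sketch of the usual flow, assuming the `huggingface_hub` CLI is installed; the quant file and target directory below are only examples, substitute whichever file from the table above fits your hardware:

```bash
# Install (or update) the Hugging Face CLI -- assumes pip is available
pip install -U "huggingface_hub[cli]"

# Download a single quant file from this repo into the current directory.
# The Q4_K_M file is just an example; any file name from the table above works.
huggingface-cli download bartowski/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF \
  Meta-Llama-3-8B-Instruct-abliterated-v3-Q4_K_M.gguf \
  --local-dir ./
```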