Update README.md
README.md CHANGED
@@ -39,15 +39,7 @@ All quants made using imatrix option with dataset from [here](https://gist.githu
 | [Qwen2-0.5B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q4_K_S.gguf) | Q4_K_S | .38GB | Slightly lower quality with more space savings, *recommended*. |
 | [Qwen2-0.5B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ4_XS.gguf) | IQ4_XS | .34GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
 | [Qwen2-0.5B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q3_K_L.gguf) | Q3_K_L | .36GB | Lower quality but usable, good for low RAM availability. |
-| [Qwen2-0.5B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q3_K_M.gguf) | Q3_K_M | | Even lower quality. |
 | [Qwen2-0.5B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ3_M.gguf) | IQ3_M | .34GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
-| [Qwen2-0.5B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q3_K_S.gguf) | Q3_K_S | | Low quality, not recommended. |
-| [Qwen2-0.5B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ3_XS.gguf) | IQ3_XS | | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
-| [Qwen2-0.5B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | | Lower quality, new method with decent performance, comparable to Q3 quants. |
-| [Qwen2-0.5B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q2_K.gguf) | Q2_K | | Very low quality but surprisingly usable. |
-| [Qwen2-0.5B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ2_M.gguf) | IQ2_M | | Very low quality, uses SOTA techniques to also be surprisingly usable. |
-| [Qwen2-0.5B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ2_S.gguf) | IQ2_S | | Very low quality, uses SOTA techniques to be usable. |
-| [Qwen2-0.5B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ2_XS.gguf) | IQ2_XS | | Very low quality, uses SOTA techniques to be usable. |
 
 ## Downloading using huggingface-cli
 
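The `## Downloading using huggingface-cli` section kept by this hunk still applies to the quants that remain. As a minimal sketch (assuming the stock `huggingface_hub` CLI; the chosen file and `--local-dir` target are illustrative), fetching one of the *recommended* quants looks like:

```shell
# Make sure the Hugging Face CLI is installed
pip install -U "huggingface_hub[cli]"

# Download a single quant from this repo, e.g. the recommended Q4_K_S file.
# --include filters which repo files to fetch; --local-dir sets the destination.
huggingface-cli download bartowski/Qwen2-0.5B-Instruct-GGUF \
  --include "Qwen2-0.5B-Instruct-Q4_K_S.gguf" \
  --local-dir ./
```

Any other file listed in the table works the same way by swapping the `--include` pattern.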