mradermacher committed
Commit 561be08
1 Parent(s): 0578c10

auto-patch README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -40,14 +40,13 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
 | [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
 | [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
-| [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.IQ4_NL.gguf) | IQ4_NL | 4.4 | slightly worse than Q4_K_S |
+| [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
 | [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
 | [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
 | [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
 | [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/NeuralContext-7b-v2-GGUF/resolve/main/NeuralContext-7b-v2.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
 
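For reference, each row in the patched table links directly to a single GGUF file, so any quant can be fetched on its own. A minimal sketch, assuming huggingface_hub is installed; the repo id and filename are copied verbatim from the table (the Q4_K_S row marked "fast, recommended"):

# Sketch: fetch one quant listed in the table above from the Hugging Face Hub.
# Assumes `pip install huggingface_hub`; returns the local cache path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/NeuralContext-7b-v2-GGUF",
    filename="NeuralContext-7b-v2.Q4_K_S.gguf",  # "fast, recommended" row
)
print(path)  # local path of the downloaded GGUF file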