This is [Nous Hermes 2 Mistral 7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO), quantized with the help of imatrix so it could run on lower-memory devices. [Kalomaze's "groups_merged.txt"](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) was used for the importance matrix, with context set to 8,192.
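The quantization workflow described above can be sketched with llama.cpp's `imatrix` and `quantize` tools. This is only an illustration, not the exact commands used: the model and output file names here are placeholders, and binary names/paths depend on how llama.cpp was built.

```shell
# Compute an importance matrix from Kalomaze's groups_merged.txt
# at 8,192-token context (file names are placeholders).
./imatrix -m nous-hermes-2-mistral-7b-dpo.f16.gguf \
    -f groups_merged.txt -c 8192 -o imatrix.dat

# Quantize with the importance matrix, e.g. to IQ2_XS.
./quantize --imatrix imatrix.dat \
    nous-hermes-2-mistral-7b-dpo.f16.gguf \
    nous-hermes-2-mistral-7b-dpo.iq2_XS.gguf IQ2_XS
```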
Here's a chart that provides an approximation of the HellaSwag score (out of 1,000 tasks) and the RAM usage (with `--no-mmap`) with llama.cpp.

|Quantization|HellaSwag|256 ctx RAM|512 ctx RAM|1024 ctx RAM|2048 ctx RAM|4096 ctx RAM|8192 ctx RAM|
|--------|--------|--------|--------|--------|--------|--------|--------|
|iq1_S |51.7% |1.6 GiB |1.6 GiB |1.7 GiB |1.8 GiB |2.0 GiB |2.5 GiB |
|iq2_XXS |72.5% |1.9 GiB |1.9 GiB |2.0 GiB |2.1 GiB |2.4 GiB |2.9 GiB |
|iq2_XS |74.2% |2.1 GiB |2.1 GiB |2.2 GiB |2.3 GiB |2.6 GiB |3.1 GiB |
|iq2_S |76.8% |2.2 GiB |2.2 GiB |2.3 GiB |2.4 GiB |2.7 GiB |3.2 GiB |
|q2_K |77.4% |2.6 GiB |2.6 GiB |2.7 GiB |2.8 GiB |3.1 GiB |3.6 GiB |
|q3_K_M |80.0% |3.3 GiB |3.4 GiB |3.4 GiB |3.6 GiB |3.8 GiB |4.3 GiB |
|q4_K_M |81.8% |4.1 GiB |4.2 GiB |4.2 GiB |4.3 GiB |4.6 GiB |5.1 GiB |
|q5_K_M |82.1% |4.8 GiB |4.9 GiB |4.9 GiB |5.1 GiB |5.3 GiB |5.8 GiB |
|q6_K |81.7% |5.6 GiB |5.6 GiB |5.7 GiB |5.8 GiB |6.1 GiB |6.6 GiB |

I don't recommend using iq1_S: it causes a big drop in quality, scoring worse than lighter models at Q4_K_M such as TinyDolphin-2.8B (HellaSwag: 59.0%) and Dolphin 2.6 Phi-2 (HellaSwag: 71.6%).

The rest of the quants may be added to the chart later.
Other GGUFs can be found at [NousResearch/Nous-Hermes-2-Mistral-7B-DPO-GGUF](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO-GGUF). Original model card below.
***