---
license: llama3.1
language:
- en
base_model:
- nvidia/OpenMath2-Llama3.1-8B
pipeline_tag: text-generation
tags:
- math
- nvidia
- llama
---

## GGUF quantized version of OpenMath2-Llama3.1-8B

Original project [source](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B) (base model)

- Q_2_K (not great)
- Q_3_K_S (acceptable)
- Q_3_K_M (acceptable; good for running on CPU)
- Q_3_K_L (acceptable)
- Q_4_K_S (okay)
- Q_4_K_M (recommended; balanced size and quality)
- Q_5_K_S (good)
- Q_5_K_M (good in general)
- Q_6_K (also good; if you want a better result, take this one instead of Q_5_K_M)
- Q_8_0 (very good; needs a reasonable amount of RAM, otherwise expect a long wait)
- f16 (similar to the original hf model; choosing this one or the hf original is also fine, but make sure you have a capable machine)
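
To fetch one of the quantized files, here is a minimal download sketch using `huggingface_hub`; the `repo_id` and `filename` below are hypothetical placeholders for the actual quantized repo and whichever quant you picked from the list above:

```python
from huggingface_hub import hf_hub_download

# repo_id and filename are placeholders -- substitute the actual
# quantized repo and the quant file you chose from the list above
model_path = hf_hub_download(
    repo_id="your-username/OpenMath2-Llama3.1-8B-GGUF",   # hypothetical repo id
    filename="OpenMath2-Llama3.1-8B.Q4_K_M.gguf",          # hypothetical filename
)
print(model_path)  # local path to the downloaded GGUF file
```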
### how to run it

Use any connector for interacting with GGUF, e.g. [gguf-connector](https://pypi.org/project/gguf-connector/).
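
As one concrete option, here is a minimal sketch using `llama-cpp-python` as the connector (an assumption, since any GGUF-compatible connector works; the model path is a placeholder for whichever quant you downloaded):

```python
from llama_cpp import Llama

# load the quantized model; the filename is a hypothetical placeholder
llm = Llama(model_path="OpenMath2-Llama3.1-8B.Q4_K_M.gguf")

# run a short completion and print the generated text
out = llm("What is 12 * 7? Answer briefly.", max_tokens=64)
print(out["choices"][0]["text"])
```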