---
base_model:
- allenai/OLMo-7B-0724-Instruct-hf
---

Original project [source](https://huggingface.co/allenai/OLMo-7B-0724-Instruct-hf)

Q_2_K (not great)

Q_3_K_M (acceptable)

Q_4_K_M is recommended (good for running on CPU as well)

Q_5_K_M (good in general)

Q_6_K is also good; if you want a better result, take this one instead of Q_5_K_M

Q_8_0 is very good; it needs a reasonable amount of RAM, otherwise expect a long wait

f16 is close to the original hf model; opting for this or the hf model is also fine; make sure you have a good machine
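If you only want a single quant rather than the whole repository, a minimal sketch with `huggingface_hub` is below; the repo id and the .gguf filename are placeholders, so substitute the actual values shown on this model page.

```python
# Minimal sketch: download one GGUF quant file with huggingface_hub.
# NOTE: repo_id and filename are hypothetical placeholders -- replace them
# with the actual repo id and the exact .gguf filename listed in this repo.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="your-username/OLMo-7B-0724-Instruct-gguf",   # hypothetical repo id
    filename="olmo-7b-0724-instruct-q4_k_m.gguf",         # hypothetical filename (Q_4_K_M pick)
)
print(model_path)  # local path to the downloaded quant
```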
### how to run it

Use any connector for interacting with gguf files, e.g., [gguf-connector](https://pypi.org/project/gguf-connector/).
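For reference, here is a rough sketch using `llama-cpp-python`, another commonly used gguf connector (not the tool linked above); the model path is a placeholder for whichever quant you downloaded, and the sampling settings are only examples, not values recommended by this repo.

```python
# Rough sketch with llama-cpp-python (an alternative gguf connector);
# the .gguf path below is a hypothetical local filename.
from llama_cpp import Llama

llm = Llama(
    model_path="olmo-7b-0724-instruct-q4_k_m.gguf",  # placeholder path to your quant
    n_ctx=2048,  # context window; adjust to your RAM budget
)

# Simple one-shot completion; tweak max_tokens/temperature as you like.
out = llm("Write a haiku about open language models.", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```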