tinybiggames committed
Commit 787b4d5 · verified · 1 Parent(s): 383c05d

Update README.md

Files changed (1): README.md (+25 −29)
README.md CHANGED
@@ -16,6 +16,8 @@ tags:
 - axolotl
 - llama-cpp
 - gguf-my-repo
+- Dllama
+- Infero
 base_model: NousResearch/Meta-Llama-3-8B
 datasets:
 - teknium/OpenHermes-2.5
@@ -23,10 +25,12 @@ widget:
 - example_title: Hermes 2 Pro
   messages:
   - role: system
-    content: You are a sentient, superintelligent artificial general intelligence,
-      here to teach and assist me.
+    content: >-
+      You are a sentient, superintelligent artificial general intelligence, here
+      to teach and assist me.
   - role: user
-    content: Write a short story about Goku discovering kirby has teamed up with Majin
+    content: >-
+      Write a short story about Goku discovering kirby has teamed up with Majin
       Buu to destroy the world.
 model-index:
 - name: Hermes-2-Pro-Llama-3-8B
@@ -36,29 +40,21 @@ model-index:
 # tinybiggames/Hermes-2-Pro-Llama-3-8B-Q4_K_M-GGUF
 This model was converted to GGUF format from [`NousResearch/Hermes-2-Pro-Llama-3-8B`](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) for more details on the model.
-## Use with llama.cpp
-
-Install llama.cpp through brew.
-
-```bash
-brew install ggerganov/ggerganov/llama.cpp
-```
-Invoke the llama.cpp server or the CLI.
-
-CLI:
-
-```bash
-llama-cli --hf-repo tinybiggames/Hermes-2-Pro-Llama-3-8B-Q4_K_M-GGUF --model hermes-2-pro-llama-3-8b.Q4_K_M.gguf -p "The meaning to life and the universe is"
-```
-
-Server:
-
-```bash
-llama-server --hf-repo tinybiggames/Hermes-2-Pro-Llama-3-8B-Q4_K_M-GGUF --model hermes-2-pro-llama-3-8b.Q4_K_M.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
-
-```
-git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m hermes-2-pro-llama-3-8b.Q4_K_M.gguf -n 128
-```
+## Use with tinyBigGAMES's Local LLM Inference Libraries
+
+Add to **config.json**
+
+```json
+{
+  "filename": "hermes-2-pro-llama-3-8b.Q4_K_M.gguf",
+  "name": "hermes2pro:8B:Q4_K_M",
+  "max_context": 8000,
+  "template": "<|im_start|>%s\\n%s<|im_end|>\\n",
+  "template_end": "<|im_start|>assistant",
+  "stop": [
+    "<|im_start|>",
+    "<|im_end|>",
+    "assistant"
+  ]
+}
+```
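The `template` and `template_end` fields added in this commit use printf-style `%s` placeholders to build a ChatML prompt. A minimal sketch of how such a config is presumably expanded (the substitution order of role then content, and the `build_prompt` helper itself, are assumptions for illustration, not the library's actual API):

```python
# Hypothetical sketch: expand a ChatML-style "template"/"template_end" pair
# (as in the config.json above) into a full prompt string. The role-then-content
# substitution order is an assumption inferred from the %s placeholders.

def build_prompt(messages, template, template_end):
    """Render each {role, content} message through the template, then
    append template_end so the model continues as the assistant."""
    parts = [template % (m["role"], m["content"]) for m in messages]
    return "".join(parts) + template_end

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Note: the JSON file stores the newline escaped ("\\n"); here we use a
# literal newline, assuming the loader unescapes it.
prompt = build_prompt(
    messages,
    template="<|im_start|>%s\n%s<|im_end|>\n",
    template_end="<|im_start|>assistant",
)
print(prompt)
```

The `stop` strings in the config would then cut generation at the next `<|im_start|>` / `<|im_end|>` token, keeping the model from continuing past its own turn.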