tinybiggames committed
Commit 7a0412d • 1 Parent(s): 787b4d5
Update README.md

README.md CHANGED
@@ -40,21 +40,54 @@ model-index:

# tinybiggames/Hermes-2-Pro-Llama-3-8B-Q4_K_M-GGUF

This model was converted to GGUF format from [`NousResearch/Hermes-2-Pro-Llama-3-8B`](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) for more details on the model.

## Use with tinyBigGAMES's LMEngine Inference Library

How to configure LMEngine:

```Delphi
Config_Init(
  'C:/LLM/gguf', // path to model files
  -1             // number of GPU layers; -1 to use all available layers
);
```

How to define the model:

```Delphi
Model_Define('hermes-2-pro-llama-3-8b.Q4_K_M.gguf', 'hermes2pro:8B:Q4KM', 8000,
  '<|im_start|>{role}\n{content}<|im_end|>\n', '<|im_start|>assistant');
```
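
The arguments, as read from this example: the GGUF filename inside the folder passed to `Config_Init`, the reference name used later with `Inference_Run`, what appears to be the maximum context size in tokens, the per-message prompt template, and the cue that opens the assistant's turn. This reading is inferred from the example itself rather than from LMEngine's documentation.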

How to add a message:

```Delphi
Message_Add(
  ROLE_USER,    // role
  'What is AI?' // content
);
```

`{role}` will be substituted with the message "role".
`{content}` will be substituted with the message "content".
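
As a concrete illustration of that substitution, the `Message_Add` call above would expand the template from `Model_Define` into a ChatML-style prompt roughly as sketched below. This is plain Delphi for illustration only, not LMEngine internals; it assumes `ROLE_USER` maps to the string `user` and renders the template's `\n` as a literal line feed (`#10`).

```Delphi
program TemplateExpansionDemo;

{$APPTYPE CONSOLE}

uses
  System.SysUtils;

var
  LTemplate: string;
  LPrompt: string;
begin
  // Template passed to Model_Define, with \n written as a line feed
  LTemplate := '<|im_start|>{role}' + #10 + '{content}<|im_end|>' + #10;

  // Substitute the role and content supplied via Message_Add
  LPrompt := StringReplace(LTemplate, '{role}', 'user', [rfReplaceAll]);
  LPrompt := StringReplace(LPrompt, '{content}', 'What is AI?', [rfReplaceAll]);

  // Append the assistant cue from Model_Define so the model answers next
  LPrompt := LPrompt + '<|im_start|>assistant';

  WriteLn(LPrompt);
end.
```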

How to do inference:

```Delphi
var
  LTokenOutputSpeed: Single;
  LInputTokens: Int32;
  LOutputTokens: Int32;
  LTotalTokens: Int32;

if Inference_Run('hermes2pro:8B:Q4KM', 1024) then
begin
  Inference_GetUsage(nil, @LTokenOutputSpeed, @LInputTokens, @LOutputTokens, @LTotalTokens);
  Console_PrintLn('', FG_WHITE);
  Console_PrintLn('Tokens :: Input: %d, Output: %d, Total: %d, Speed: %3.1f t/s',
    FG_BRIGHTYELLOW, LInputTokens, LOutputTokens, LTotalTokens, LTokenOutputSpeed);
end
else
begin
  Console_PrintLn('', FG_WHITE);
  Console_PrintLn('Error: %s', FG_RED, Error_Get());
end;
```
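
Putting the steps together, a minimal console program might look like the sketch below. It only recombines the calls shown above; the `LMEngine` unit name in the `uses` clause is an assumption (check your LMEngine installation for the actual unit), and streaming/output callbacks are omitted.

```Delphi
program HermesDemo;

{$APPTYPE CONSOLE}

uses
  LMEngine; // assumed unit name; adjust to match your LMEngine installation

var
  LTokenOutputSpeed: Single;
  LInputTokens: Int32;
  LOutputTokens: Int32;
  LTotalTokens: Int32;
begin
  // Point LMEngine at the folder that holds the GGUF file; -1 = all GPU layers
  Config_Init('C:/LLM/gguf', -1);

  // Register the model under the reference name used below
  Model_Define('hermes-2-pro-llama-3-8b.Q4_K_M.gguf', 'hermes2pro:8B:Q4KM', 8000,
    '<|im_start|>{role}\n{content}<|im_end|>\n', '<|im_start|>assistant');

  // Queue a user message
  Message_Add(ROLE_USER, 'What is AI?');

  // Run inference, then report token usage or the error
  if Inference_Run('hermes2pro:8B:Q4KM', 1024) then
  begin
    Inference_GetUsage(nil, @LTokenOutputSpeed, @LInputTokens, @LOutputTokens, @LTotalTokens);
    Console_PrintLn('Tokens :: Input: %d, Output: %d, Total: %d, Speed: %3.1f t/s',
      FG_BRIGHTYELLOW, LInputTokens, LOutputTokens, LTotalTokens, LTokenOutputSpeed);
  end
  else
    Console_PrintLn('Error: %s', FG_RED, Error_Get());
end.
```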