This model was converted to GGUF format from [`vicgalle/Roleplay-Hermes-3-Llama-3.1-8B`](https://huggingface.co/vicgalle/Roleplay-Hermes-3-Llama-3.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/vicgalle/Roleplay-Hermes-3-Llama-3.1-8B) for more details on the model.
---

Model details:

A DPO-tuned Hermes-3-Llama-3.1-8B that behaves more "humanish", i.e., avoids AI-assistant slop. It also works for role-play (RP). To achieve this, the model was fine-tuned over a series of datasets:

- Undi95/Weyaxi-humanish-dpo-project-noemoji, to make the model react like a human, rejecting assistant-like or overly neutral responses.
- ResplendentAI/NSFW_RP_Format_DPO, to steer the model towards using the *action* format in RP settings. This works best if you also use the format naturally in your first message (see the usage example below).
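The datasets above were applied with Direct Preference Optimization (DPO). As a rough illustration only (not the actual training code, and with an assumed default `beta=0.1`), the per-pair DPO objective can be sketched as:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-pair DPO loss, given summed log-probs of the chosen and rejected
    responses under the policy (pi_*) and a frozen reference model (ref_*)."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# With zero preference margin the loss is log(2) ≈ 0.693
print(dpo_loss(0.0, 0.0, 0.0, 0.0))
```

Training pushes the policy to assign relatively more probability to the human-preferred ("humanish") response than the reference model does, which drives the loss below log(2).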
Usage example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model/tokenizer loading added for completeness (original Transformers checkpoint)
tokenizer = AutoTokenizer.from_pretrained("vicgalle/Roleplay-Hermes-3-Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained("vicgalle/Roleplay-Hermes-3-Llama-3.1-8B")

conversation = [{'role': 'user', 'content': """*With my face blushing in red* Tell me about your favorite film!"""}]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The response is:

*blushing* Aw, that's a tough one! There are so many great films out there. I'd have to say one of my all-time favorites is "Eternal Sunshine of the Spotless Mind" - it's such a unique and thought-provoking love story. But really, there are so many amazing films! What's your favorite? *I hope mine is at least somewhat decent!*
Note: for better results, you can use a system prompt describing the persona.
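For instance, a system message can set the persona before the first user turn. The persona text below is invented for illustration, not taken from the model card:

```python
# Hypothetical persona system prompt (illustrative only)
conversation = [
    {"role": "system", "content": "You are Mia, a cheerful film student who speaks casually and narrates actions in *asterisks*."},
    {"role": "user", "content": "*waves shyly* Seen anything good lately?"},
]
# Then proceed as in the usage example:
# prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
```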
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
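The standard install command (a setup step, from the Homebrew formula for llama.cpp) is:

```shell
brew install llama.cpp
```

After installing, the GGUF file from this repo can be run with the `llama-cli` or `llama-server` binaries that the formula provides.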