Update README.md
README.md
CHANGED
@@ -21,6 +21,9 @@ This Hermes model uses the exact same dataset as Hermes on Llama-1. This is to e
This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x a100 80GB DGX machine.

+## Example Outputs:
+
+
## Model Training

The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.
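The card text above states the model is a causal LM fine-tuned at a 4096-token sequence length. Below is a minimal inference sketch, assuming the checkpoint is published as a standard Llama-2-style model on the Hugging Face Hub; the repo id is a placeholder and the Alpaca-style prompt format is an assumption not confirmed by this diff.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual Hermes checkpoint this card describes.
model_id = "NousResearch/Nous-Hermes-Llama2-13b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the weights fit on a single GPU
    device_map="auto",
)

# Assumed instruction format; check the full model card for the documented prompt template.
prompt = (
    "### Instruction:\n"
    "Explain why curated synthetic instruction data can improve response quality.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The card says fine-tuning used a 4096 sequence length, so prompts up to that
# length should be supported.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```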