Update README.md
README.md
CHANGED
@@ -23,16 +23,17 @@ tags:
Autolycus is a son of Hermes.

Autolycus-Mistral is a language/content refinement of OpenHermes 2.5 Mistral, intended to take its output from stilted, robotic GPT-4 gobbledygook to something approaching natural English - at the cost of only a very slight increase in prevarication, exaggeration and downright BS.

7-billion models are not known for their complete honesty.

The most brazen examples of 'making things up' were those occasions where Autolycus actually quoted a source - usually a book title or author, sometimes a date - which turns out to be nothing more than a load of hogwash when you check it for yourself.

## Example

Compare this example (Llama-precise preset, with low top_p), where Autolycus (bottom image) improves on the response by adding extra material - making it more informative, more relevant and personal ('Visit Japan') - and at the same time gives the whole thing an earthy, almost human touch.

The OpenHermes Mistral (top image) responds, rather impersonally, in the dry tones of GPT-4.

- Original model: [OpenHermes 2.5 Mistral 7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
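For readers unfamiliar with the "low top_p" setting mentioned above, here is a minimal, self-contained sketch of nucleus (top-p) sampling. The token names and probabilities are made up for illustration; the point is that a low top_p restricts generation to only the most probable tokens (nearly greedy, hence "precise"), while a higher top_p keeps a wider, more varied candidate pool.

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalise the kept probabilities to sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        total += p
        if total >= top_p:
            break
    return {token: p / total for token, p in kept}

# Hypothetical next-token distribution for illustration only.
probs = {"Tokyo": 0.55, "Kyoto": 0.25, "Osaka": 0.15, "Narnia": 0.05}

# Low top_p: only the single most probable token survives.
print(top_p_filter(probs, 0.1))
# Higher top_p: several plausible candidates remain in the pool.
print(top_p_filter(probs, 0.9))
```

A sampler then draws the next token from the filtered, renormalised distribution, which is why a low top_p produces drier, more deterministic output and a higher one allows looser, more "human" phrasing.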