---
license: apache-2.0
language:
- en
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
---

<!-- header start -->

<div style="width: 100%;">

<img src="https://huggingface.co/FPHam/OpenAutolycus-Mistral_7B/resolve/main/openautolycustitle.jpg" alt="Open Autolycus" style="width: 40%; min-width: 200px; display: block; margin: auto;">

</div>

<div style="display: flex; flex-direction: column; align-items: center;">

<p><a href="https://ko-fi.com/Q5Q5MOB4M">Support me at Ko-fi</a></p>

</div>

<!-- header end -->

Autolycus is a son of Hermes.

Autolycus-Mistral is a language and content refinement of OpenHermes 2.5 Mistral, intended to take its output from stilted, robotic GPT-4 gobbledygook to something approaching natural English, at the cost of only a very slight increase in prevarication, exaggeration and downright BS.

7-billion-parameter models are not known for their complete honesty.

The most brazen examples of 'making things up' were those occasions where Autolycus actually quoted a source, usually a book title or author, sometimes a date, which turns out to be nothing more than a load of hogwash when you check it for yourself.

## Example

Compare this example (Llama-precise preset, with low top_p), where Autolycus (bottom image) improves on the response by adding extra material, making it more informative, more relevant and personal ('Visit Japan'), and at the same time gives the whole thing an earthy, almost human touch.

The OpenHermes Mistral (top image) responds, rather impersonally, in the dry tones of GPT-4.

- Original model: [OpenHermes 2.5 Mistral 7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)

<img src="https://huggingface.co/FPHam/OpenAutolycus-Mistral_7B/resolve/main/openautolycus.jpg" alt="Comparison of OpenHermes Mistral (top) and Open Autolycus (bottom) responses">
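## Prompt format

Like its OpenHermes 2.5 base, the model uses the ChatML prompt format (hence the `chatml` tag above). As a minimal sketch, a single-turn prompt can be assembled by hand like this; the helper name is illustrative, and in practice the tokenizer's chat template in `transformers` does the same job:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt.

    Each turn is wrapped in <|im_start|>role ... <|im_end|> markers,
    and the string ends with an opened assistant turn for the model
    to complete.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful assistant.",
    "What should I see in Japan?",
)
print(prompt)
```

The resulting string is what gets tokenized and fed to the model; generation is then stopped on the `<|im_end|>` token.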