---
license: other
language:
- en
---
## Model details

MythoLogic-Mini-7b can be considered the little brother in my Mytho series of models: [MythoLogic-13b](https://huggingface.co/Gryphe/MythoLogic-13b) and [MythoBoros-13b](https://huggingface.co/Gryphe/MythoBoros-13b).

Its Llama-2 core is powered by [Nous Hermes-2](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b), which is further augmented by [Stable Beluga](https://huggingface.co/stabilityai/StableBeluga-7B) and a carefully distilled [Kimiko LoRA](https://huggingface.co/nRuaif/Kimiko_7B).

Since 7B models tend to be less capable all-rounders, this gradient merge puts extra emphasis on the roleplaying aspects; various gradient configurations were benchmarked before settling on the one shown below.

![](MythoLogic-Mini-7b.png)

In technical terms, the Hermes-2 core starts at 90% strength and fades away completely by the 12th layer, after which Stable Beluga (and Kimiko) handle the more intricate linguistic aspects.
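
As a rough illustration of the idea (not the exact merge recipe used here), a layer-wise gradient merge can be sketched as per-layer linear interpolation between two state dicts, with the Hermes-2 weight ramping from 0.9 at layer 0 down to 0 by layer 12. The linear ramp shape, the layer count, and the treatment of non-layer parameters below are all assumptions for illustration:

```python
import torch

def hermes_weight(layer: int, fade_end: int = 12, start: float = 0.9) -> float:
    """Blend weight for the Hermes-2 core at a given layer: 90% at layer 0,
    fading to 0 by layer `fade_end` (linear ramp is an assumption)."""
    if layer >= fade_end:
        return 0.0
    return start * (1.0 - layer / fade_end)

def gradient_merge(hermes_sd: dict, beluga_sd: dict) -> dict:
    """Interpolate two Llama state dicts layer by layer."""
    merged = {}
    for name, h_tensor in hermes_sd.items():
        b_tensor = beluga_sd[name]
        if ".layers." in name:
            # Parse the layer index from names like "model.layers.11.self_attn..."
            layer = int(name.split(".layers.")[1].split(".")[0])
            w = hermes_weight(layer)
        else:
            # Embeddings/LM head taken fully from Beluga here (an assumption).
            w = 0.0
        merged[name] = w * h_tensor + (1.0 - w) * b_tensor
    return merged
```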

Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoLogic-Mini-7b-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoLogic-Mini-7b-GPTQ) (You're the best!)
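
As an example, one of TheBloke's GGML builds can be run locally with the `ctransformers` library. The repo name is real, but the specific quantization filename below is an assumption; pick an actual file from the repo's file list:

```python
from ctransformers import AutoModelForCausalLM

# The model_file name is assumed for illustration -- check the repo's
# "Files" tab for the quantization you want.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MythoLogic-Mini-7b-GGML",
    model_file="mythologic-mini-7b.ggmlv3.q4_K_M.bin",  # assumed filename
    model_type="llama",
)

print(llm("### Instruction:\nSay hello.\n\n### Response:\n", max_new_tokens=64))
```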

## Prompt Format

Due to its Hermes-2 core, this model works best with Alpaca formatting. For optimal performance, use:
```
<System prompt/Character Card>

### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.

### Response:
```
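
A minimal sketch of assembling this template in Python; the `build_prompt` helper and the placeholder names are illustrative, not part of the model card:

```python
def build_prompt(system: str, user_name: str, char_name: str, history: str) -> str:
    """Assemble an Alpaca-style roleplay prompt following the template above."""
    instruction = (
        f"Write {char_name}'s next reply in a chat between "
        f"{user_name} and {char_name}. Write a single reply only.\n"
        f"{history}"
    )
    return f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

# Example usage with hypothetical character details:
prompt = build_prompt(
    system="You are Aria, a witty starship engineer.",  # character card
    user_name="Alex",
    char_name="Aria",
    history="Alex: How do we fix the warp core?",
)
print(prompt)
```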