hasnonname's trashpanda baby is getting a sequel. More JLLM-ish than ever, too.
## Recommended settings
<p><b>Context/instruct template</b>: Mistral V7 or V3 Tekken, <b>but for some godforsaken reason, we found that this is (arguably) better with Llama 3 context/instruct.</b> It's funny, stupid, and insane; we don't know why this is the case. Trying Llama 3 instruct/context on base MS24B also gave coherent output in 4/5 responses, but those responses were no better than the Mistral-template ones. As for why V1 seems to benefit from it more than base MS24B does, we don't really know.</p>
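If your frontend doesn't ship a Llama 3 preset, here's a minimal sketch of what the Llama 3 instruct format looks like, built by hand. The special tokens follow the published Llama 3 chat template; the system/user strings below are placeholder examples, not part of this model's card.

```python
# Sketch of the Llama 3 instruct prompt format (tokens per the published
# Llama 3 chat template). The system and user strings are placeholders.

def llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 instruct prompt."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_prompt("You are {{char}}.", "Hello!")
```

Most frontends (SillyTavern included) will build this for you once the Llama 3 context/instruct templates are selected, so this is just to show what's being sent under the hood.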
<p><b>Samplers</b>: temperature at 0.8 - 1.1, min_p at 0.05, top_a at 0.2. Some optional settings include smoothing_factor at 0.2 and repetition_penalty at 1.03, maybe DRY if you have access to it.</p>
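To illustrate what the min_p value above actually does, here's a toy sketch of min_p filtering: tokens whose probability falls below min_p times the top token's probability are cut before sampling. The probabilities here are made-up toy numbers, not model output.

```python
# Toy sketch of min_p filtering with the recommended min_p = 0.05.
# Tokens with prob < min_p * max(probs) are dropped, then the
# remainder is renormalized to sum to 1.

def min_p_filter(probs, min_p=0.05):
    """Zero out tokens below the min_p threshold and renormalize."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

probs = [0.6, 0.3, 0.07, 0.02, 0.01]
# threshold is 0.05 * 0.6 = 0.03, so the last two tokens are dropped
filtered = min_p_filter(probs)
```

The other knobs (temperature, top_a, smoothing_factor, repetition_penalty, DRY) are applied by your backend in a similar pre-sampling pass; which ones are available depends on the backend you're running.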