---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
# This model isn't particularly great. It's just an undercooked experiment.

Releasing it anyway, just in case it accidentally makes good merge meat.

# It also has a tendency to produce mature content without warning.

This model is fine-tuned from the base Llama-3-8B model.

I adapted the leaked Undi dataset into training samples using a custom format. The model pretty much only functions properly in SillyTavern.

The format uses two pairs of pseudotokens:

```
[EGO]Name: Character name, followed by everything that forms the personality and speech patterns (i.e. scenario, sample dialogue, character definitions, etc.)[/EGO]
[SEEN]User message.[/SEEN]
Character Name:
```
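
For example, a fully assembled prompt might look like this (the character and messages here are made up purely for illustration):

```
[EGO]Name: Mira. Mira is a gruff blacksmith in a small mountain town. She is blunt, speaks in short sentences, and only warms up when talking about her craft.
Scenario: A traveler has brought a broken sword to Mira's forge.
Mira: "Set it on the bench. I'll tell you if it's worth saving."[/EGO]
[SEEN]Can you have it repaired by tomorrow?[/SEEN]
Mira:
```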

The self-attention modules were fine-tuned separately on this dataset. The pseudotokens were chosen because they make logical sense with respect to the character giving a reply, without letting the model 'connect the dots' during training and figure out that it is indeed an AI language model.
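
The card doesn't specify how that attention-only pass was implemented. Purely as a hypothetical illustration (not the author's actual recipe), restricting a LoRA fine-tune to Llama-3's attention projections with peft would look roughly like this:

```python
# Hypothetical sketch only: limit trainable parameters to the self-attention
# projections by targeting just those modules with LoRA. The real training
# setup used for this model is not documented.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

attn_only = LoraConfig(
    r=16,                      # illustrative rank, not the author's value
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Llama attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, attn_only)
model.print_trainable_parameters()  # only the attention adapters should be trainable
```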

After that, all modules were fine-tuned together on the dendrite dataset in order to integrate the changes made to the attention modules with the rest of the model.

So, with regard to building a SillyTavern prompt template: you basically want the entire story string and any additional stylistic instructions enclosed in [EGO] tags, and the user messages enclosed in [SEEN] tags.
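
As a rough sketch, an Advanced Formatting Story String along these lines keeps everything the character needs inside the [EGO] tags (the {{...}} macros and exact field names vary between SillyTavern versions, so treat this as a starting point rather than a verified template):

```
[EGO]Name: {{char}}
{{#if description}}{{description}}
{{/if}}{{#if personality}}{{personality}}
{{/if}}{{#if scenario}}Scenario: {{scenario}}
{{/if}}[/EGO]
```

User messages then need to be wrapped as [SEEN]...[/SEEN] (for example via instruct-style input prefix/suffix fields), with "{{char}}:" as the final output prefix so the reply starts the same way as in the format above.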

It doesn't give particularly verbose replies unless you're continuing a roleplay that already has verbose messages. Otherwise it's pretty bad.
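
For a quick sanity check outside SillyTavern, the raw [EGO]/[SEEN] format can be fed to the model directly with transformers. A minimal sketch, assuming the placeholder below is replaced with this model's actual repo id or a local path (the character is the same made-up example as above):

```python
# Minimal sketch for poking at the raw [EGO]/[SEEN] format outside SillyTavern.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-model"  # placeholder, not a real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assemble a prompt in the format described above.
prompt = (
    "[EGO]Name: Mira. Mira is a gruff blacksmith in a small mountain town. "
    "She is blunt and speaks in short sentences.[/EGO]\n"
    "[SEEN]Can you have the sword repaired by tomorrow?[/SEEN]\n"
    "Mira:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
# Print only the newly generated reply, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```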