---
license: cc-by-nc-4.0
language:
- en
---

A roleplay-based model intended for multiple AI characters / group-based roleplaying sessions.

This is an experimental model, trained entirely on human data, with none taken from LLMs or other AI models. The data comes from roleplaying forum scrapes, among other sources.

Built on Llama-3-Instruct.

This is a Test / Alpha model. A proof of concept.

Make sure to adapt your cards to a group-chat-friendly style.

---

Notes:
```
- Don't expect this to beat Stheno or other mature models. It won't.
- Works best in group-chat scenarios with properly defined cards. I think it is a successful test.
- A very small dataset of varying quality (human data) was used. Does not work well outside of its specified scenario.
```

---

Training Details:
```
- Uses L3-Instruct Format.
- One designated character per entry takes the Human turns, while all other characters are assigned distinct GPT turns. Turns are not in a fixed order, in order to simulate real group chats (see the sketch below).
- Each entry usually contains 2-5 unique characters.
- There are only roughly ~3K sample entries.
- May not be the smartest due to all samples being roleplay / conversational data.
```
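
For illustration, here is a rough sketch of how one entry could be rendered into the L3-Instruct format with names, assuming the speaking character's name sits in the role header in place of the stock user / assistant roles (that placement is my reading of the notes above, not a confirmed training script):

```
# Rough sketch (not the actual training script): render one dataset entry
# into Llama-3-Instruct text, with the speaking character's name used in
# place of the usual "user" / "assistant" role headers. Header placement
# is an assumption based on the notes above.

def render_entry(entry: dict) -> str:
    parts = ["<|begin_of_text|>"]
    for turn in entry["conversations"]:
        # System turns keep the standard "system" role; chat turns use the
        # character's name as the role.
        role = "system" if turn["from"] == "system" else turn["name"]
        parts.append(
            f"<|start_header_id|>{role}<|end_header_id|>\n\n{turn['value']}<|eot_id|>"
        )
    return "".join(parts)
```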

---

Potential Issues:
```
- 1-on-1 RP performance might be affected, as the focus is solely on group chats.
- Character names may be multiple tokens instead of one, as they replace User / Assistant -> may affect output quality -> another idea is in the works (see the sketch below).
- Dataset quality? While it has been filtered a few times... there is still the occasional low-quality sample in there. I have not done a manual pass; this is a proof of concept.
```
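
To see why the name tokenization matters, a quick check like the one below (assuming you have access to the gated Llama-3 tokenizer; the character name is made up) shows how many tokens a name takes compared to the stock roles:

```
# Quick tokenization check (requires access to the gated Llama-3 tokenizer).
# "Seraphina" is just an example character name, not from the dataset.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
for role in ["user", "assistant", "Seraphina"]:
    ids = tok.encode(role, add_special_tokens=False)
    print(role, "->", len(ids), "token(s):", ids)
```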

SillyTavern Settings:
```
Llama-3-Instruct-With-Names ->> Remove the square brackets around `[{{name}}]`, `[{{char}}]`, or `[{{user}}]` within the instruct template to match the format used for training (illustrated below).
```
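
In other words, wherever the instruct template contains the bracketed placeholder, drop the brackets so the rendered names match the training data:

```
[{{name}}]  ->  {{name}}
[{{char}}]  ->  {{char}}
[{{user}}]  ->  {{user}}
```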

---

Example Dataset Entry:

```
    {
        "token_length": x,
        "Unique_chars": 3,
        "conversations": [
            {
                "from": "system",
                "value": "text"
            },
            {
                "from": "human-chat",
                "name": "User-1",
                "value": "text"
            },
            {
                "from": "gpt-chat",
                "name": "User-2",
                "value": "text"
            },
            {
                "from": "human-chat",
                "name": "User-1",
                "value": "text"
            },
            {
                "from": "gpt-chat",
                "name": "User-3",
                "value": "text"
            },
            {
                "from": "human-chat",
                "name": "User-1",
                "value": "text"
            },
            {
                "from": "gpt-chat",
                "name": "User-2",
                "value": "text"
            },
            {
                "from": "gpt-chat",
                "name": "User-3",
                "value": "text"
            }
        ]
    },
```
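
As a sanity check, an entry of the shape above can be inspected roughly like this (the field names come straight from the example; everything else is illustrative):

```
# Rough sanity check for entries shaped like the example above.
def inspect_entry(entry: dict) -> dict:
    chat_turns = [t for t in entry["conversations"] if t["from"] != "system"]
    speakers = {t["name"] for t in chat_turns}
    human = {t["name"] for t in chat_turns if t["from"] == "human-chat"}
    assert len(speakers) == entry["Unique_chars"], "Unique_chars mismatch"
    assert len(human) == 1, "expected exactly one designated human-turn character"
    return {"human_turn": next(iter(human)), "gpt_turns": sorted(speakers - human)}
```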