---
license: cc-by-nc-4.0
---
This dataset comprises roleplay chat conversations scraped from several Discord RP fandom servers. The conversations are split by day, on the assumption that most long-form roleplays are started, continued, and completed within a single day.
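The day-splitting described above can be sketched as follows. This is a minimal illustration, not the actual scraping code; the message schema (an ISO-8601 `timestamp` field and a `content` field) is an assumption about what a Discord export might look like.

```python
from collections import defaultdict
from datetime import datetime

def split_by_day(messages):
    """Group a flat message log into per-day conversations.

    Assumes each message is a dict with an ISO-8601 `timestamp`
    field (hypothetical schema; the real export may differ).
    """
    days = defaultdict(list)
    for msg in messages:
        day = datetime.fromisoformat(msg["timestamp"]).date()
        days[day].append(msg)
    # One conversation per calendar day, in chronological order.
    return [days[d] for d in sorted(days)]

log = [
    {"timestamp": "2023-06-01T09:15:00", "content": "The knight draws his sword."},
    {"timestamp": "2023-06-01T09:17:30", "content": "The dragon rears back."},
    {"timestamp": "2023-06-02T20:05:00", "content": "A new day dawns over the keep."},
]
conversations = split_by_day(log)
# → two conversations: two messages on June 1st, one on June 2nd
```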
The original dataset consisted of ~30K samples. Light filtering stripped that down to ~6K samples. Stricter filtering stripped it down to ~2K samples.
Some effort was made to remove OOC (out-of-character) chatter, links, and other miscellaneous fluff, but more work still needs to be done. This isn't a "completed" dataset so much as a test of whether the data gathered is conducive to training LLMs for roleplay purposes. If it proves useful, I will continue to scrape more data.
This repository contains several files:
* `125_tokens_10_messages_discord_rp.json` - Original dataset filtered for an average token length of 125 and a minimum conversation length of 10 messages. Unprocessed.
* `80_tokens_6_messages_discord_rp.json` - Original dataset filtered for an average token length of 80 and a minimum conversation length of 6 messages. Unprocessed. This file is a superset of the 125-token file above, so use one or the other, but not both.
* `opencai_rp.json` - The 80-token/6-message dataset after processing. Contains character descriptions, a summary, a scene description, and genre tags provided by `gpt-3.5-turbo-16k`.
* `opencai_rp_metharme` - The 80-token/6-message dataset after processing, filtered down to 4800 samples and converted to the Metharme format.
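The filtering criteria in the filenames above can be sketched as a simple predicate. This is a hedged illustration, not the script actually used: whitespace splitting stands in for whatever tokenizer produced the counts, and the per-conversation list-of-messages layout with a `content` field is an assumed schema.

```python
import json

def passes_filter(convo, min_avg_tokens=80, min_messages=6):
    """Keep a conversation only if it has at least `min_messages`
    messages and an average token count per message of at least
    `min_avg_tokens`. Whitespace split approximates tokenization."""
    if len(convo) < min_messages:
        return False
    avg = sum(len(m["content"].split()) for m in convo) / len(convo)
    return avg >= min_avg_tokens

# Usage sketch (file layout and field names assumed):
# with open("80_tokens_6_messages_discord_rp.json") as f:
#     data = json.load(f)
# kept = [c for c in data if passes_filter(c)]
```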