---
license: cc-by-nc-4.0
language:
  - en
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files: discord_logs.json
  - config_name: unsquashed
    data_files: discord_logs_unsquashed.json
  - config_name: two_users
    data_files: discord_logs_two_users.json
  - config_name: split_threads
    data_files: discord_logs_split_threads.json
  - config_name: anonymized
    data_files: discord_logs_anonymized.json
---

This dataset comprises roleplay chat conversations scraped from several Discord RP fandom servers. The conversations are split by day, on the assumption that most long-form roleplays are started/continued and completed within a single day.

The original dataset consists of ~14K samples. Light filtering stripped that down to ~10K samples, stricter filtering to ~5K samples, and the strictest filtering to ~4K samples.

Effort was taken to remove emotes, links, reactions, OOC, channel mentions, user mentions, and other superfluous content that could be destructive to finetuning. Still, there may be imperfections. The dataset is in a continuous state of improvement and is occasionally updated with additional training data as I find more servers to scrape from.

The dataset includes several files:

  • discord_logs_unsquashed.json - The original dataset without squashing consecutive messages from the same author. All subsequent files are squashed.
  • discord_logs.json - The original dataset and default option.
  • discord_logs_two_users.json - The original dataset limited to conversations with only two users. I recommend using this file (see the loading example after this list).
  • discord_logs_split_threads.json - The original dataset with threads split by timestamp in the same way as channels.
  • discord_logs_anonymized.json - The original dataset with usernames replaced with randomized substitutes.
  • 125_tokens_6_messages.json (Strictest) - Original dataset filtered for an average and median token length of 125 tokens and a minimum conversation length of 6 messages.
  • 80_tokens_6_messages.json (Stricter) - Original dataset filtered for an average and median token length of 80 tokens and a minimum conversation length of 6 messages. This file is a superset of the strictest file above, so use one or the other, but not both.
  • 80_tokens_3_messages.json (Light) - Original dataset filtered for an average and median token length of 80 tokens and a minimum conversation length of 3 messages. This file is a superset of the two stricter files above, so use only one of the three filtered files.
  • opencai_rp.json - Original dataset filtered for an average and median token length of 125 tokens and a minimum conversation length of 6 messages, then processed. Contains character descriptions, a summary, a scenario, and genre tags generated by gpt-4o, along with the chat itself.
  • opencai_rp_metharme.json - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 6 messages, then processed and converted to metharme format.
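
The named configs in the metadata above should be loadable with the `datasets` library. The sketch below assumes the repository id `Norquinal/OpenCAI` and that each config's single JSON file is exposed as a `train` split (the usual default for a lone data file); adjust if your setup differs.

```python
from datasets import load_dataset

# Load the recommended two-user config; config names come from the
# `configs` block in the metadata above.
ds = load_dataset("Norquinal/OpenCAI", "two_users", split="train")

# Each record carries the properties documented below, including the
# nested list of {message, author} turns.
sample = ds[0]
print(sample["timestamp"], sample["token_length"])
for turn in sample["conversations"][:3]:
    print(f'{turn["author"]}: {turn["message"]}')
```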

Explanation of Properties:

  • timestamp: Date of the interaction in YYYY-MM-DD format
  • type: Whether the interaction originated from a channel (GuildTextChat) or thread (GuildPublicThread). Threads were parsed differently than channels and use a static timestamp of 1776-07-04 to differentiate them.
  • token_length: The total token length of all messages in the conversation, calculated using tiktoken.
  • average_token_length: The average token length of all messages in the conversation.
  • median_token_length: The median token length of all messages in the conversation.
  • conversations: The conversation between the users in the chat. This is represented as a list of dictionaries, each dictionary representing a single utterance and containing two key-value pairs: message, referring to the utterance itself, and author, referring to the author's Discord username (see the sketch below).
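
As a rough illustration, the sketch below uses the precomputed properties to approximate the "80 tokens / 6 messages" (Stricter) filter on the default file. The top-level layout (a JSON list of conversation records) and the exact threshold semantics (>= rather than >) are assumptions, not guarantees about how the filtered files were actually produced.

```python
import json

# Assumes discord_logs.json is a list of conversation records with the
# properties documented above.
with open("discord_logs.json", "r", encoding="utf-8") as f:
    conversations = json.load(f)

# Approximate the "80_tokens_6_messages" filter using the precomputed
# average/median token lengths and the number of turns.
filtered = [
    c for c in conversations
    if c["average_token_length"] >= 80
    and c["median_token_length"] >= 80
    and len(c["conversations"]) >= 6
]

print(f"{len(filtered)} of {len(conversations)} conversations pass the filter")

# Each turn in a record is a {"message": ..., "author": ...} dictionary.
for turn in filtered[0]["conversations"][:2]:
    print(f'{turn["author"]}: {turn["message"]}')
```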