---
dataset_info:
  features:
    - name: index
      dtype: int64
    - name: thread_id
      dtype: int64
    - name: message_id
      dtype: int64
    - name: author_id
      dtype: int64
    - name: author_num_posts
      dtype: int64
    - name: message
      dtype: string
    - name: character
      dtype: string
  splits:
    - name: train
      num_bytes: 216610832
      num_examples: 26401
  download_size: 58427511
  dataset_size: 216610832
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Deep Dungeons and Dragons


## Dataset Details

### Dataset Description


A dataset of long-form multi-turn and multi-character collaborative RPG stories, complete with associated character cards.

This dataset comprises 56,000 turns across 1,544 stories following 9,771 characters: a total of 50M Llama tokens. Each turn is a multi-paragraph continuation of a story from the perspective of a defined character, including both dialogue and prose.
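Each row in the train split is a single turn, so stories have to be reassembled from the `thread_id` and `message_id` columns. The sketch below shows one way to do this with the `datasets` library; the repository id `BakrAsskali/DDD_French_version` is an assumption based on this repo's name and may need adjusting.

```python
from collections import defaultdict

from datasets import load_dataset

# Load the train split (repository id assumed from the repo name).
ds = load_dataset("BakrAsskali/DDD_French_version", split="train")

# Each row is a single turn; group rows by thread_id to rebuild a story and
# sort by message_id to restore the original posting order.
stories = defaultdict(list)
for row in ds:
    stories[row["thread_id"]].append(row)

for turns in stories.values():
    turns.sort(key=lambda turn: turn["message_id"])

# Peek at the first turn of one story: the character card and the start of the post.
first_story = next(iter(stories.values()))
print(first_story[0]["character"], first_story[0]["message"][:200])
```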

This dataset is a cleaned and reformatted version of Deep Dungeons and Dragons, originally released in 2018 by Annie Louis and Charles Sutton and comprising transcripts collected from public games at roleplayerguild.com. We've removed images and links (as well as references to them) from posts to make this a text-only dataset, and anonymised usernames, although they are still available in the original dataset.
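The exact cleaning scripts are not included here. Purely for illustration, a minimal sketch of this kind of preprocessing might look like the following; the markdown-style patterns and helper names are assumptions for the example, not the pipeline actually used to build the dataset.

```python
import re

def strip_images_and_links(text: str) -> str:
    # Remove image tags, keep the display text of links, and drop bare URLs.
    text = re.sub(r"!\[[^\]]*\]\([^)]*\)", "", text)      # images: ![alt](url)
    text = re.sub(r"\[([^\]]*)\]\([^)]*\)", r"\1", text)  # links: keep link text
    text = re.sub(r"https?://\S+", "", text)              # bare URLs
    return text

_author_ids: dict[str, int] = {}

def anonymise(username: str) -> int:
    # Map each username to a stable integer id, as in the author_id column.
    return _author_ids.setdefault(username, len(_author_ids))
```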

Citation for source dataset:

```bibtex
@inproceedings{ddd2018,
  author       = {Louis, Annie and Sutton, Charles},
  title        = {{Deep Dungeons and Dragons: Learning Character-Action Interactions from Role-Playing Game Transcripts}},
  booktitle    = {The 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  year         = {2018},
  pages        = {--},
  organization = {ACL}
}
```

Credits to IconicAI/DDD for the original dataset in English.