# Dataset Card for "crd3"

## Dataset Summary
Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game. The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail, and semantic ties to the previous dialogues.
## Supported Tasks and Leaderboards

- `summarization`: The dataset can be used to train a model for abstractive summarization. A fast abstractive summarization RL model was presented as a baseline, achieving a ROUGE-L F1 of 25.18.
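A minimal loading sketch (assuming the canonical hub id `crd3`). Note that streaming mode is not available for this dataset, because the data host does not support HTTP range requests, so keep the default `streaming=False`:

```python
from datasets import load_dataset

# Streaming (streaming=True) fails here because the data host does not
# support HTTP range requests; use the default non-streaming load instead.
dataset = load_dataset("crd3")
print(dataset)  # DatasetDict with "train", "validation", and "test" splits
```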
## Languages

The text in the dataset is in English, as spoken by the cast of Critical Role, a weekly unscripted live stream in which a fixed group of people play Dungeons and Dragons, a popular role-playing game.
## Dataset Structure

We show detailed information for the single configuration of the dataset, `default`.

### Data Instances

#### default
- Size of downloaded dataset files: 279.93 MB
- Size of the generated dataset: 4020.33 MB
- Total amount of disk used: 4300.25 MB
An example of 'train' looks as follows.
```json
{
    "alignment_score": 3.679936647415161,
    "chunk": "Wish them a Happy Birthday on their Facebook and Twitter pages! Also, as a reminder: D&D Beyond streams their weekly show (\"And Beyond\") every Wednesday on twitch.tv/dndbeyond.",
    "chunk_id": 1,
    "turn_end": 6,
    "turn_num": 4,
    "turn_start": 4,
    "turns": {
        "names": ["SAM"],
        "utterances": [
            "Yesterday, guys, was D&D Beyond's first one--",
            "first one-year anniversary. Take two. Hey guys,",
            "yesterday was D&D Beyond's one-year anniversary.",
            "Wish them a happy birthday on their Facebook and",
            "Twitter pages."
        ]
    }
}
```
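As an illustrative sketch (not from the dataset authors), a row can be turned into a source/target pair for summarization; the field access below assumes the layout of the instance above:

```python
from datasets import load_dataset

dataset = load_dataset("crd3")

# Pair the dialogue turns (model input) with the aligned summary chunk
# (model target), following the field layout of the example instance above.
row = dataset["train"][0]
source = " ".join(row["turns"]["utterances"])  # spoken dialogue lines
target = row["chunk"]                          # aligned abstractive summary text
speakers = row["turns"]["names"]               # e.g. ["SAM"]
```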
### Data Fields

The data fields are the same among all splits.

#### default

- `chunk`: a `string` feature.
- `chunk_id`: a `int32` feature.
- `turn_start`: a `int32` feature.
- `turn_end`: a `int32` feature.
- `alignment_score`: a `float32` feature.
- `turn_num`: a `int32` feature.
- `turns`: a dictionary feature containing:
  - `names`: a `string` feature.
  - `utterances`: a `string` feature.
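For reference, this field list corresponds roughly to the following `datasets.Features` schema. This is a reconstruction from the list above, not a copy of the loading script, and the exact nesting of `turns` is an assumption:

```python
from datasets import Features, Sequence, Value

# Schema reconstructed from the field list above; the loading script may
# declare the "turns" feature with a different nesting.
features = Features({
    "chunk": Value("string"),
    "chunk_id": Value("int32"),
    "turn_start": Value("int32"),
    "turn_end": Value("int32"),
    "alignment_score": Value("float32"),
    "turn_num": Value("int32"),
    "turns": {
        "names": Sequence(Value("string")),
        "utterances": Sequence(Value("string")),
    },
})
```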
### Data Splits

| name    | train  | validation | test  |
|---------|--------|------------|-------|
| default | 26,232 | 3,470      | 4,541 |
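Once the dataset is downloaded, the split sizes can be checked against the table above:

```python
from datasets import load_dataset

ds = load_dataset("crd3")
for split in ("train", "validation", "test"):
    print(split, len(ds[split]))
# Per the table above: train 26,232 / validation 3,470 / test 4,541
```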
## Dataset Creation

### Curation Rationale
Dialogue understanding and abstractive summarization remain important and challenging problems for computational linguistics. Current summarization models exhibit specific failures in capturing semantics and pragmatics, in content selection, in rewriting, and in evaluation when applied to long, storytelling dialogue. CRD3 offers a linguistically rich dataset for exploring these problems.
### Source Data

#### Initial Data Collection and Normalization
Dungeons and Dragons is a popular role-playing game driven by structured storytelling. Critical Role is an unscripted, live-streamed show in which a fixed group of people play Dungeons and Dragons. The dataset consists of transcripts of 159 episodes of the show; inconsistencies (e.g., the spelling of speaker names) were manually resolved.

The abstractive summaries were collected from the Critical Role Fandom wiki.
#### Who are the source language producers?

The language producers are the cast of Critical Role, a weekly unscripted live stream in which a fixed group of people play Dungeons and Dragons, a popular role-playing game.
### Annotations

#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations
## Additional Information

### Dataset Curators
CRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.
### Licensing Information

This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/), matching the license of the Critical Role Wiki (https://criticalrole.fandom.com/).
### Citation Information

```
@inproceedings{rameshkumar-bailey-2020-storytelling,
    title = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},
    author = {Rameshkumar, Revanth and Bailey, Peter},
    booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
    publisher = {Association for Computational Linguistics},
    year = {2020},
}
```
### Contributions
Thanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun for adding this dataset.