
Dataset Card for "trivia_qa"
Dataset Summary
TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions.
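For orientation, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library; `rc` is one of the four configurations described under Data Instances below.

```python
from datasets import load_dataset

# "rc" is one of four configurations; the others are "rc.nocontext",
# "unfiltered", and "unfiltered.nocontext" (see Data Instances below).
dataset = load_dataset("trivia_qa", "rc")

print(dataset)  # DatasetDict with train/validation/test splits
print(dataset["train"][0]["question"])
```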
Supported Tasks and Leaderboards
Languages
English.
Dataset Structure
Data Instances
rc
- Size of downloaded dataset files: 2542.29 MB
- Size of the generated dataset: 15275.31 MB
- Total amount of disk used: 17817.60 MB
An example of 'train' looks as follows.
rc.nocontext
- Size of downloaded dataset files: 2542.29 MB
- Size of the generated dataset: 120.42 MB
- Total amount of disk used: 2662.71 MB
An example of 'train' looks as follows.
unfiltered
- Size of downloaded dataset files: 3145.53 MB
- Size of the generated dataset: 27884.47 MB
- Total amount of disk used: 31030.00 MB
An example of 'validation' looks as follows.
unfiltered.nocontext
- Size of downloaded dataset files: 603.25 MB
- Size of the generated dataset: 71.11 MB
- Total amount of disk used: 674.35 MB
An example of 'train' looks as follows.
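The example records themselves are easiest to inspect directly; a minimal sketch, assuming the standard `datasets` API:

```python
from datasets import load_dataset

# "rc.nocontext" is the smallest rc variant (~120 MB generated per the
# sizes above), so it is a convenient config for a first look.
ds = load_dataset("trivia_qa", "rc.nocontext", split="train")

example = ds[0]
print(sorted(example.keys()))
print(example["question"])
```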
Data Fields
The data fields are the same among all splits.
rc
- `question`: a `string` feature.
- `question_id`: a `string` feature.
- `question_source`: a `string` feature.
- `entity_pages`: a dictionary feature containing:
  - `doc_source`: a `string` feature.
  - `filename`: a `string` feature.
  - `title`: a `string` feature.
  - `wiki_context`: a `string` feature.
- `search_results`: a dictionary feature containing:
  - `description`: a `string` feature.
  - `filename`: a `string` feature.
  - `rank`: an `int32` feature.
  - `title`: a `string` feature.
  - `url`: a `string` feature.
  - `search_context`: a `string` feature.
- `aliases`: a `list` of `string` features.
- `normalized_aliases`: a `list` of `string` features.
- `matched_wiki_entity_name`: a `string` feature.
- `normalized_matched_wiki_entity_name`: a `string` feature.
- `normalized_value`: a `string` feature.
- `type`: a `string` feature.
- `value`: a `string` feature.
rc.nocontext
The fields are identical to those of `rc`.
unfiltered
The fields are identical to those of `rc`.
unfiltered.nocontext
The fields are identical to those of `rc`.
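A small access sketch for the nested fields, assuming the dictionary features materialize as dicts of aligned lists (one entry per evidence document), as the `datasets` library does for sequences of dicts:

```python
from datasets import load_dataset

ds = load_dataset("trivia_qa", "rc", split="validation")
example = ds[0]

# Under the assumption above, index i of each list in `entity_pages`
# describes the i-th evidence document for this question.
pages = example["entity_pages"]
for title, filename in zip(pages["title"], pages["filename"]):
    print(title, "->", filename)
```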
Data Splits
| name                 | train  | validation | test  |
|----------------------|--------|------------|-------|
| rc                   | 138384 | 18669      | 17210 |
| rc.nocontext         | 138384 | 18669      | 17210 |
| unfiltered           | 87622  | 11313      | 10832 |
| unfiltered.nocontext | 87622  | 11313      | 10832 |
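As a quick sanity check, the split sizes in the table can be reproduced with a short loop (a sketch using the standard `datasets` API):

```python
from datasets import load_dataset

ds = load_dataset("trivia_qa", "rc.nocontext")
for name, split in ds.items():
    print(name, len(split))
# Per the table above: train 138384, validation 18669, test 17210
```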
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
The University of Washington does not own the copyright of the questions and documents included in TriviaQA.
Citation Information
@article{2017arXivtriviaqa,
  author = {{Joshi}, Mandar and {Choi}, Eunsol and {Weld}, Daniel and {Zettlemoyer}, Luke},
  title = "{TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension}",
  journal = {arXiv e-prints},
  year = 2017,
  eid = {arXiv:1705.03551},
  pages = {arXiv:1705.03551},
  archivePrefix = {arXiv},
  eprint = {1705.03551},
}
Contributions
Thanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.
