# TriviaQA
### Paper

Title: `TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension`

Abstract: https://arxiv.org/abs/1705.03551

TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence
triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts
and independently gathered evidence documents, six per question on average, that provide
high quality distant supervision for answering the questions.

Homepage: https://nlp.cs.washington.edu/triviaqa/
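
To get a feel for the data, the underlying dataset can be inspected with the Hugging Face `datasets` library. This is a minimal sketch, assuming the `trivia_qa` hub id and the `rc.nocontext` config; the exact config consumed by the harness may differ:

```python
# Sketch: inspect a TriviaQA example via Hugging Face `datasets`.
# The hub id "trivia_qa" and config "rc.nocontext" are assumptions; adjust as needed.
from datasets import load_dataset

dataset = load_dataset("trivia_qa", "rc.nocontext", split="validation")

example = dataset[0]
print(example["question"])           # the trivia question
print(example["answer"]["value"])    # canonical answer string
print(example["answer"]["aliases"])  # accepted answer aliases
```

Each record pairs a question with its answer, including the aliases typically used when scoring generated answers.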
### Citation

```
@InProceedings{JoshiTriviaQA2017,
    author = {Joshi, Mandar and Choi, Eunsol and Weld, Daniel S. and Zettlemoyer, Luke},
    title = {TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension},
    booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
    month = {July},
    year = {2017},
    address = {Vancouver, Canada},
    publisher = {Association for Computational Linguistics},
}
```
### Groups and Tasks

#### Groups

* Not part of a group yet.

#### Tasks
* `triviaqa`: Generate a free-form answer to each trivia question; see the usage sketch below.
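
As a usage sketch, the task can be run through the harness's Python API (assuming a recent `lm-evaluation-harness`, v0.4 or later; the model name below is a placeholder):

```python
# Sketch: evaluate a model on `triviaqa` via the harness's Python API.
# "gpt2" is a placeholder model; swap in the model you want to evaluate.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # Hugging Face transformers backend
    model_args="pretrained=gpt2",  # placeholder checkpoint
    tasks=["triviaqa"],
)
print(results["results"]["triviaqa"])
```

The same run can be launched from the command line with `lm_eval --model hf --model_args pretrained=gpt2 --tasks triviaqa`.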
### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
  * [ ] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?