Datasets


adversarial_qa

AdversarialQA is a Reading Comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles using an adversarial model-in-the-loop. We use three different models, BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019), in the annotation loop and construct three datasets;...

aquamuse

AQuaMuSe is a novel scalable approach to automatically mine dual query-based multi-document summarization datasets for extractive and abstractive summaries, using a question answering dataset (Google Natural Questions) and large document corpora (Common Crawl).

kilt_tasks

KILT tasks training and evaluation data.
- [FEVER](https://fever.ai) | Fact Checking | fever
- [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) | Entity Linking | aidayago2
- [WNED-WIKI](https://github.com/U-Alberta/wned) | Entity Linking | wned
- [WNED-CWEB](https:...


med_hop

MedHop is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining information from a chain of reactions of drugs and proteins.

mrqa

The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples — a significantly harder challenge. The dat...

msr_sqa

Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We created...

multi_re_qa

MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, includi...

neural_code_search

Neural-Code-Search-Evaluation-Dataset presents an evaluation dataset consisting of natural language query and code snippet pairs and a search corpus consisting of code snippets collected from the most popular Android repositories on GitHub.

newsqa

NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.

qed

QED is a linguistically informed, extensible framework for explanations in question answering. A QED explanation specifies the relationship between a question and answer according to formal semantic notions such as referential equality, sentencehood, and entailment. It is an expert-annotated dataset of QED explanations built upon a subset of the...

quac

Question Answering in Context is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the ...

ropes

ROPES (Reasoning Over Paragraph Effects in Situations) is a QA dataset which tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented a background passage containing a causal or qualitative relation(s) (e.g., "animal pollinators increase efficiency of fertilization in flowers"), a novel situati...

sharc

ShARC is a Conversational Question Answering dataset focusing on question answering from texts containing rules. The goal is to answer questions by possibly asking follow-up questions first. It is assumed that the question is often underspecified, in the sense that the question does not provide enough information to be answered directly....

sharc_modified

ShARC, a conversational QA task, requires a system to answer user questions based on rules expressed in natural language text. However, it is found that in the ShARC dataset there are multiple spurious patterns that could be exploited by neural models. SharcModified is a new dataset which reduces the patterns identified in the original dataset. ...

squad

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
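The span-based answer format can be sketched with a minimal, hypothetical SQuAD-style record (field names follow the common `context`/`question`/`answers` layout; the record content here is invented for illustration):

```python
# A hypothetical SQuAD-style record: each answer is a character-indexed
# span of the context. Unanswerable questions (SQuAD 2.0) simply carry
# an empty answer list.
record = {
    "context": "SQuAD was created by researchers at Stanford University.",
    "question": "Who created SQuAD?",
    "answers": {
        "text": ["researchers at Stanford University"],
        "answer_start": [21],
    },
}

def answer_is_span(rec):
    """Verify each answer text actually occurs at its claimed offset."""
    texts = rec["answers"]["text"]
    starts = rec["answers"]["answer_start"]
    return all(rec["context"][s:s + len(t)] == t
               for t, s in zip(texts, starts))
```

Running `answer_is_span(record)` confirms the span constraint holds for this example.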

squad_adversarial

Here are two different adversaries, each of which uses a different procedure to pick the sentence it adds to the paragraph: AddSent: Generates up to five candidate adversarial sentences that don't answer the question, but have a lot of words in common with the question. Picks the one that most confuses the model. AddOneSent: Similar to AddSent, ...
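The core selection idea, picking the candidate with the most word overlap with the question, can be sketched as below. This is only a toy illustration: the actual AddSent pipeline also generates the candidates via entity/antonym perturbations of the question and filters them with crowdworkers.

```python
# Toy sketch of AddSent-style distractor selection (not the full
# algorithm): among candidate sentences that do not answer the
# question, pick the one sharing the most words with the question.
def overlap_score(question, sentence):
    q = set(question.lower().split())
    s = set(sentence.lower().split())
    return len(q & s)

def pick_distractor(question, candidates):
    return max(candidates, key=lambda c: overlap_score(question, c))
```

In the real setting, the chosen sentence is then appended to the paragraph and the adversary keeps the candidate that most degrades the model's answer.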

squad_kor_v1

KorQuAD 1.0 is a large-scale Korean dataset for the machine reading comprehension task, consisting of human-generated questions for Wikipedia articles. We benchmarked the data collection process of SQuAD v1.0 and crowdsourced 70,000+ question-answer pairs. 1,637 articles and 70,079 pairs of question answers were collected. 1,420 articles are used for th...

squad_kor_v2

KorQuAD 2.0 is a Korean question answering dataset consisting of a total of 100,000+ pairs. There are three major differences from KorQuAD 1.0, which is the standard Korean Q&A dataset. The first is that a given document is a whole Wikipedia page, not just one or two paragraphs. Second, because the document also contains tables and lists, it ...

thaiqa_squad

`thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) for...

wiki_hop

WikiHop is open-domain and based on Wikipedia articles; the goal is to recover Wikidata information by hopping through documents, answering text understanding queries by combining multiple facts that are spread across different documents.

wiki_summary

The dataset was extracted from Persian Wikipedia in the form of articles and highlights, cleaned into pairs of articles and highlights, and the lengths of the articles (only in version 1.0.0) and highlights were reduced to a maximum of 512 and 128 tokens, respectively, suitable for ParsBERT.

xglue

XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained models with respect to cross-lingual natural language understanding and generation. The benchmark is composed of the following 11 tasks: - NER - POS Tagging (POS) - News Classification (NC) - MLQA - XNLI - PAWS-X - Query-Ad Matching (QADSM) - Web Page Ranki...


xquad_r

XQuAD-R is a retrieval version of the XQuAD dataset (a cross-lingual extractive QA dataset). Like XQuAD, XQuAD-R is an 11-way parallel dataset, where each question appears in 11 different languages and has 11 parallel correct answers across the languages.
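The retrieval setup, ranking candidate answer sentences against a question, can be sketched as follows. Real XQuAD-R systems score candidates with multilingual sentence encoders; a plain bag-of-words cosine similarity stands in here purely for illustration.

```python
# Illustrative retrieval sketch: rank candidate answer sentences by
# bag-of-words cosine similarity to the question and return the best.
import math
from collections import Counter

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, candidates):
    return max(candidates, key=lambda c: cosine(question, c))
```

Swapping the similarity function for a learned multilingual encoder turns this into the cross-lingual setting the dataset targets, where question and answer sentence need not share surface vocabulary.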

zest

ZEST tests whether NLP systems can perform unseen tasks in a zero-shot way, given a natural language description of the task. It is an instantiation of our proposed framework "learning from task descriptions". The tasks include classification, typed entity extraction and relationship extraction, and each task is paired with 20 different annotate...