Datasets


adversarial_qa

AdversarialQA is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles using an adversarial model-in-the-loop. We use three different models, BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019), in the annotation loop and construct three datasets...

ambig_qa

AmbigNQ is a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle; many are only apparent after examining evidence provided by a very large text corpus...

kilt_tasks

KILT tasks training and evaluation data.
- [FEVER](https://fever.ai) | Fact Checking | fever
- [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) | Entity Linking | aidayago2
- [WNED-WIKI](https://github.com/U-Alberta/wned) | Entity Linking | wned
- [WNED-CWEB](https:...

annotations_creators: crowdsourced, found, machine-generated
language_creators: crowdsourced, found
languages: en
licenses: mit
multilinguality: monolingual
size_categories: 1K<n<10K, 10K<n<100K, 100K<n<1M, n>1M
source_datasets: original, extended|natural_questions, extended|other-aidayago, extended|other-fever, extended|other-hotpotqa, extended|other-trex, extended|other-triviaqa, extended|other-wizardsofwikipedia, extended|other-wned-cweb, extended|other-wned-wiki, extended|other-zero-shot-re
task_categories: question-answering, sequence-modeling, text-classification, text-retrieval
task_ids: abstractive-qa, dialogue-modeling, document-retrieval, entity-linking-retrieval, extractive-qa, fact-checking, fact-checking-retrieval, open-domain-qa, slot-filling

mkqa

We introduce MKQA, an open-domain question answering evaluation set comprising 10k question-answer pairs sampled from the Google Natural Questions dataset, aligned across 26 typologically diverse languages (260k question-answer pairs in total). For each query we collected new passage-independent answers. These queries and answers were then human...
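The key property of MKQA is that one query carries answers aligned across all 26 languages. A minimal sketch of such an aligned record might look like the following; the field names (`queries`, `answers`) and the example content are illustrative assumptions, not the official schema.

```python
# Hypothetical MKQA-style record: one NQ-derived query with
# passage-independent answers aligned across languages.
record = {
    "example_id": 1,
    "queries": {"en": "who wrote the novel dracula",
                "de": "wer schrieb den roman dracula"},
    "answers": {"en": [{"type": "entity", "text": "Bram Stoker"}],
                "de": [{"type": "entity", "text": "Bram Stoker"}]},
}

def answer_texts(record, lang):
    """Return the answer strings for one language of an aligned record."""
    return [a["text"] for a in record["answers"].get(lang, [])]

print(answer_texts(record, "en"))  # ['Bram Stoker']
```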

multi_re_qa

MultiReQA contains sentence boundary annotations from eight publicly available QA datasets: SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these datasets (SearchQA, TriviaQA, HotpotQA, NaturalQuestions, and SQuAD) contain both training and test data, and three, includi...

nq_open

The NQ-Open task, introduced by Lee et al. (2019), is an open-domain question answering benchmark derived from Natural Questions. The goal is to predict an English answer string for an input English question. All questions can be answered using the contents of English Wikipedia.
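Since the target is a free-form answer string, NQ-Open is typically scored by normalized exact match against a set of gold answers. A minimal sketch of that metric, assuming the usual open-domain QA normalization (lowercasing, stripping punctuation and articles):

```python
import re
import string

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    """True if the normalized prediction equals any normalized gold answer."""
    return normalize(prediction) in {normalize(g) for g in gold_answers}

print(exact_match("The Eiffel Tower", ["Eiffel Tower"]))  # True
```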

proto_qa

This dataset is for studying computational models trained to reason about prototypical situations. It was built from a larger set of all transcriptions using deterministic filtering and sampling. It contains 9,789 instances, where each instance represents a survey question from the Family Feud game. Each instance is exactly a question, a set of answers, an...

qa_srl

The dataset contains question-answer pairs modeling verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, When, etc.) and contain a verb predicate from the sentence; the answers are phrases in the sentence. Two datasets were used in the paper, newswire and Wikipedia. Unfortunately, the newswire dataset is built...
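To make the predicate-argument framing concrete, here is an illustrative QA-SRL-style annotation: a sentence, a verb predicate, and wh-questions whose answers are spans of the sentence. The field names and example sentence are assumptions for illustration, not the dataset's exact schema.

```python
# Illustrative QA-SRL-style annotation for the predicate "approved".
annotation = {
    "sentence": "The committee approved the budget on Friday .".split(),
    "predicate": {"index": 2, "verb": "approved"},
    "qa_pairs": [
        {"question": "Who approved something ?", "answer": "The committee"},
        {"question": "What did someone approve ?", "answer": "the budget"},
        {"question": "When did someone approve something ?", "answer": "on Friday"},
    ],
}

def answer_is_span(annotation, answer):
    """Check that an answer phrase is a contiguous span of the sentence."""
    return answer in " ".join(annotation["sentence"])

# Every answer must be recoverable from the sentence itself.
assert all(answer_is_span(annotation, qa["answer"])
           for qa in annotation["qa_pairs"])
```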

simple_questions_v2

SimpleQuestions is a dataset for simple QA, consisting of a total of 108,442 questions written in natural language by human English-speaking annotators, each paired with a corresponding fact, formatted as (subject, relationship, object), that provides the answer as well as a complete explanation. Facts have been extracted from the Knowledge Bas...
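The pairing above can be sketched in a few lines: each question comes with one supporting (subject, relationship, object) triple, and the triple's object is the answer. The field names and KB identifiers below are invented for illustration.

```python
# Minimal sketch of a SimpleQuestions-style example: a question paired
# with the single KB fact that answers it.
example = {
    "question": "What city was Alan Turing born in?",
    "fact": ("alan_turing", "place_of_birth", "london"),
}

def answer_from_fact(example):
    """The object of the supporting fact is the answer to the question."""
    subject, relationship, obj = example["fact"]
    return obj

print(answer_from_fact(example))  # london
```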

so_stacksample

Dataset with the text of 10% of the questions and answers from the Stack Overflow programming Q&A website, organized as three tables: Questions contains the title, body, creation date, closed date (if applicable), score, and owner ID for all non-deleted Stack Overflow questions whose Id is a multiple of 10. Answers contains the body, creat...
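The three tables relate the way Stack Overflow's own schema does: answers point back at their question via a parent id. A hedged sketch of that join (the exact column name, `ParentId`, and the example rows are assumptions, not taken from the description above):

```python
# Toy Questions and Answers tables, joined on the answers' parent id.
questions = [
    {"Id": 10, "Title": "How do I reverse a list in Python?", "Score": 42},
    {"Id": 20, "Title": "What does 'static' mean in C?", "Score": 7},
]
answers = [
    {"Id": 101, "ParentId": 10, "Score": 55, "Body": "Use list.reverse() or slicing."},
    {"Id": 102, "ParentId": 10, "Score": 3, "Body": "reversed(xs) returns an iterator."},
]

def answers_for(question_id):
    """All answers attached to one question, best-scored first."""
    matches = [a for a in answers if a["ParentId"] == question_id]
    return sorted(matches, key=lambda a: a["Score"], reverse=True)

print([a["Id"] for a in answers_for(10)])  # [101, 102]
```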

thaiqa_squad

`thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) for...
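SQuAD format means each answer is an extractive span, located in the context by a character offset, so a record's answers can be validated mechanically. A sketch with invented example content (the `answers` structure with parallel `text` and `answer_start` lists follows the SQuAD convention):

```python
# A SQuAD-format record; the content is invented for illustration.
record = {
    "context": "NECTEC is a Thai research agency founded in 1986.",
    "question": "When was NECTEC founded?",
    "answers": {"text": ["1986"], "answer_start": [44]},
}

def span_is_consistent(record):
    """Verify each answer string actually occurs at its stated offset."""
    ctx = record["context"]
    return all(ctx[start:start + len(text)] == text
               for text, start in zip(record["answers"]["text"],
                                      record["answers"]["answer_start"]))

print(span_is_consistent(record))  # True
```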

wiki_summary

The dataset was extracted from Persian Wikipedia in the form of articles and highlights, then cleaned into pairs of articles and highlights, with article length (only in version 1.0.0) and highlight length reduced to a maximum of 512 and 128 tokens, respectively, making it suitable for parsBERT.

xglue

XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained models with respect to cross-lingual natural language understanding and generation. The benchmark is composed of the following 11 tasks:
- NER
- POS Tagging (POS)
- News Classification (NC)
- MLQA
- XNLI
- PAWS-X
- Query-Ad Matching (QADSM)
- Web Page Ranki...

annotations_creators: crowdsourced, expert-generated, found, machine-generated
language_creators: crowdsourced, expert-generated, found, machine-generated
languages: ar, bg, de, el, en, es, fr, hi, it, nl, pl, pt, ru, sw, th, tr, ur, vi, zh
licenses: cc-by-sa-4.0, cc-by-nc-4.0, other-Licence Universal Dependencies v2.5, unknown
multilinguality: multilingual, translation
size_categories: 10K<n<100K, 100K<n<1M
source_datasets: original, extended|conll2003, extended|squad, extended|xnli
task_categories: conditional-text-generation, question-answering, structure-prediction, text-classification
task_ids: acceptability-classification, conditional-text-generation-other-question-answering, extractive-qa, named-entity-recognition, natural-language-inference, open-domain-qa, parsing, summarization, text-classification-other-paraphrase identification, topic-classification

xor_tydi_qa

XOR-TyDi QA brings together for the first time information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and a...

yahoo_answers_qa

Yahoo Non-Factoid Question Dataset is derived from Yahoo's Webscope L6 collection using machine learning techniques such that the questions would contain non-factoid answers. The dataset contains 87,361 questions and their corresponding answers. Each question contains its best answer along with additional other answers submitted by users. Only the...