Datasets


ambig_qa

AmbigNQ is a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle; many are only apparent after examining evidence provided by a very large text corpus.

aquamuse

AQuaMuSe is a novel, scalable approach to automatically mining query-based multi-document summarization datasets, for both extractive and abstractive summaries, from a question answering dataset (Google Natural Questions) and large document corpora (Common Crawl).

kilt_tasks

KILT tasks training and evaluation data.

- [FEVER](https://fever.ai) | Fact Checking | fever
- [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) | Entity Linking | aidayago2
- [WNED-WIKI](https://github.com/U-Alberta/wned) | Entity Linking | wned
- [WNED-CWEB](https:...

Dataset card metadata (deduplicated across the KILT configurations):

- annotations_creators: crowdsourced, found, machine-generated
- language_creators: crowdsourced, found
- languages: en
- licenses: mit
- multilinguality: monolingual
- size_categories: 1K<n<10K, 10K<n<100K, 100K<n<1M, n>1M
- source_datasets: original, extended|other-aidayago, extended|other-wned-cweb, extended|other-wned-wiki, extended|other-hotpotqa, extended|other-fever, extended|natural_questions, extended|other-zero-shot-re, extended|other-trex, extended|other-triviaqa, extended|other-wizardsofwikipedia
- task_categories: text-retrieval, question-answering, text-classification, sequence-modeling
- task_ids: document-retrieval, entity-linking-retrieval, abstractive-qa, open-domain-qa, fact-checking, fact-checking-retrieval, extractive-qa, slot-filling, dialogue-modeling

mkqa

We introduce MKQA, an open-domain question answering evaluation set comprising 10k question-answer pairs sampled from the Google Natural Questions dataset, aligned across 26 typologically diverse languages (260k question-answer pairs in total). For each query we collected new passage-independent answers. These queries and answers were then human...

mrqa

The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples — a significantly harder challenge. The dat...

nq_open

The NQ-Open task, introduced by Lee et al. (2019), is an open-domain question answering benchmark derived from Natural Questions. The goal is to predict an English answer string for an input English question. All questions can be answered using the contents of English Wikipedia.
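Since NQ-Open asks systems to predict a free-form answer string, predictions are typically scored by normalized exact match against a set of acceptable gold answers. A minimal sketch of that scoring, assuming each example is a dict with `question` and `answer` fields (the example record below is hypothetical, not drawn from the dataset):

```python
import re
import string

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    the articles a/an/the, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the normalized prediction equals any normalized gold answer."""
    return any(normalize(prediction) == normalize(g) for g in gold_answers)

# Hypothetical NQ-Open-style record: one question, several acceptable strings.
example = {
    "question": "who wrote the opera carmen",
    "answer": ["Georges Bizet", "Bizet"],
}
print(exact_match("The Bizet", example["answer"]))  # True
print(exact_match("Verdi", example["answer"]))      # False
```

Normalization matters here: "The Bizet" and "bizet." should count as the same answer, which is why casing, punctuation, and articles are stripped before comparison.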

qed

QED is a linguistically informed, extensible framework for explanations in question answering. A QED explanation specifies the relationship between a question and answer according to formal semantic notions such as referential equality, sentencehood, and entailment. It is an expert-annotated dataset of QED explanations built upon a subset of the...