
---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for Natural Questions

## Dataset Summary
The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets.
## Supported Tasks and Leaderboards

## Languages

## Dataset Structure

### Data Instances

#### default
- Size of downloaded dataset files: 42981.34 MB
- Size of the generated dataset: 95175.86 MB
- Total amount of disk used: 138157.19 MB
An example of 'train' looks as follows.
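The actual example record did not survive extraction. As a stand-in, the sketch below is a hypothetical, truncated record whose shape follows the field list in this card; all values are invented for illustration, and the `document`/`question` container nesting is an assumption (the flat field list in this card omits the container names).

```python
# Hypothetical, truncated 'train' record. Values are invented; the
# document/question nesting is assumed, since the card's flat field
# list repeats names like `tokens` that cannot share one namespace.
example = {
    "id": "1234567890123456789",  # invented id
    "document": {
        "title": "Trade winds",
        "url": "https://en.wikipedia.org/wiki/Trade_winds",
        "html": "<html><body><p>The trade winds are east winds.</p></body></html>",
        "tokens": {
            "token":   ["<p>", "The", "trade", "winds", "are", "east", "winds", ".", "</p>"],
            "is_html": [True, False, False, False, False, False, False, False, True],
        },
    },
    "question": {
        "text": "what are the trade winds",
        "tokens": ["what", "are", "the", "trade", "winds"],
    },
    "annotations": {
        "id": ["9876543210987654321"],
        "start_token": [0],   # long-answer span over document.tokens
        "end_token": [9],
        "start_byte": [12],
        "end_byte": [58],
        "short_answers": [
            {"start_token": [1], "end_token": [4],
             "start_byte": [16], "end_byte": [31],
             "text": ["The trade winds"]}
        ],
        "yes_no_answer": [0],  # 0 = NO, 1 = YES (label mapping below)
    },
}
```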
### Data Fields

The data fields are the same among all splits.

#### default
- `id`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `html`: a `string` feature.
- `tokens`: a dictionary feature containing:
  - `token`: a `string` feature.
  - `is_html`: a `bool` feature.
- `text`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `annotations`: a dictionary feature containing:
  - `id`: a `string` feature.
  - `start_token`: a `int64` feature.
  - `end_token`: a `int64` feature.
  - `start_byte`: a `int64` feature.
  - `end_byte`: a `int64` feature.
  - `short_answers`: a dictionary feature containing:
    - `start_token`: a `int64` feature.
    - `end_token`: a `int64` feature.
    - `start_byte`: a `int64` feature.
    - `end_byte`: a `int64` feature.
    - `text`: a `string` feature.
  - `yes_no_answer`: a classification label, with possible values including `NO` (0) and `YES` (1).
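The `start_token`/`end_token` indices above refer to positions in the document's token sequence, so an answer span can be turned back into text by slicing the tokens and dropping HTML markup via `is_html`. A minimal sketch — the helper name and sample data are invented for illustration:

```python
def span_text(tokens, is_html, start_token, end_token):
    """Join the tokens in [start_token, end_token), skipping HTML markup tokens."""
    words = [t for t, h in zip(tokens[start_token:end_token],
                               is_html[start_token:end_token]) if not h]
    return " ".join(words)

# Illustrative data shaped like the document `tokens` feature above.
tokens  = ["<p>", "Paris", "is", "the", "capital", "of", "France", ".", "</p>"]
is_html = [True, False, False, False, False, False, False, False, True]

print(span_text(tokens, is_html, 1, 7))  # -> "Paris is the capital of France"
```

The same slice-and-filter works for both long-answer and short-answer spans, since both index into the one document token sequence.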
### Data Splits

| name    | train  | validation |
|---------|--------|------------|
| default | 307373 | 7830       |
| dev     | N/A    | 7830       |
## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information
Creative Commons Attribution-ShareAlike 3.0 Unported.
### Citation Information

```
@article{47761,
  title   = {Natural Questions: a Benchmark for Question Answering Research},
  author  = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year    = {2019},
  journal = {Transactions of the Association for Computational Linguistics}
}
```
### Contributions

Homepage: ai.google.com