parquet-converter committed
Commit 29332c3 · Parent: 3d1d4a0

Update parquet files
README.md DELETED
@@ -1,333 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - cc-by-sa-3.0
- multilinguality:
- - monolingual
- pretty_name: Natural Questions
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids:
- - open-domain-qa
- paperswithcode_id: natural-questions
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: document
-     struct:
-     - name: title
-       dtype: string
-     - name: url
-       dtype: string
-     - name: html
-       dtype: string
-     - name: tokens
-       sequence:
-       - name: token
-         dtype: string
-       - name: is_html
-         dtype: bool
-   - name: question
-     struct:
-     - name: text
-       dtype: string
-     - name: tokens
-       sequence: string
-   - name: annotations
-     sequence:
-     - name: id
-       dtype: string
-     - name: long_answer
-       struct:
-       - name: start_token
-         dtype: int64
-       - name: end_token
-         dtype: int64
-       - name: start_byte
-         dtype: int64
-       - name: end_byte
-         dtype: int64
-     - name: short_answers
-       sequence:
-       - name: start_token
-         dtype: int64
-       - name: end_token
-         dtype: int64
-       - name: start_byte
-         dtype: int64
-       - name: end_byte
-         dtype: int64
-       - name: text
-         dtype: string
-     - name: yes_no_answer
-       dtype:
-         class_label:
-           names:
-             0: 'NO'
-             1: 'YES'
-     - name: long_answer_candidates
-       sequence:
-       - name: start_token
-         dtype: int64
-       - name: end_token
-         dtype: int64
-       - name: start_byte
-         dtype: int64
-       - name: end_byte
-         dtype: int64
-       - name: top_label
-         dtype: bool
-   splits:
-   - name: train
-     num_bytes: 97445142568
-     num_examples: 307373
-   - name: validation
-     num_bytes: 2353975312
-     num_examples: 7830
-   download_size: 45069199013
-   dataset_size: 99799117880
- ---
-
- # Dataset Card for Natural Questions
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://ai.google.com/research/NaturalQuestions/dataset](https://ai.google.com/research/NaturalQuestions/dataset)
- - **Repository:** [https://github.com/google-research-datasets/natural-questions](https://github.com/google-research-datasets/natural-questions)
- - **Paper:** [https://research.google/pubs/pub47761/](https://research.google/pubs/pub47761/)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 42981.34 MB
- - **Size of the generated dataset:** 95175.86 MB
- - **Total amount of disk used:** 138157.19 MB
-
- ### Dataset Summary
-
- The NQ corpus contains questions from real users, and it requires QA systems to
- read and comprehend an entire Wikipedia article that may or may not contain the
- answer to the question. The inclusion of real user questions, and the
- requirement that solutions should read an entire page to find the answer, cause
- NQ to be a more realistic and challenging task than prior QA datasets.
-
- ### Supported Tasks and Leaderboards
-
- The dataset supports open-domain question answering. An official leaderboard is maintained at [https://ai.google.com/research/NaturalQuestions](https://ai.google.com/research/NaturalQuestions).
-
- ### Languages
-
- en
-
- ## Dataset Structure
-
- ### Data Instances
-
- - **Size of downloaded dataset files:** 42981.34 MB
- - **Size of the generated dataset:** 95175.86 MB
- - **Total amount of disk used:** 138157.19 MB
-
- A toy example from the 'train' split looks as follows:
- ```
- {
-   "id": "797803103760793766",
-   "document": {
-     "title": "Google",
-     "url": "http://www.wikipedia.org/Google",
-     "html": "<html><body><h1>Google Inc.</h1><p>Google was founded in 1998 By:<ul><li>Larry</li><li>Sergey</li></ul></p></body></html>",
-     "tokens": [
-       {"token": "<h1>", "start_byte": 12, "end_byte": 16, "is_html": True},
-       {"token": "Google", "start_byte": 16, "end_byte": 22, "is_html": False},
-       {"token": "inc", "start_byte": 23, "end_byte": 26, "is_html": False},
-       {"token": ".", "start_byte": 26, "end_byte": 27, "is_html": False},
-       {"token": "</h1>", "start_byte": 27, "end_byte": 32, "is_html": True},
-       {"token": "<p>", "start_byte": 32, "end_byte": 35, "is_html": True},
-       {"token": "Google", "start_byte": 35, "end_byte": 41, "is_html": False},
-       {"token": "was", "start_byte": 42, "end_byte": 45, "is_html": False},
-       {"token": "founded", "start_byte": 46, "end_byte": 53, "is_html": False},
-       {"token": "in", "start_byte": 54, "end_byte": 56, "is_html": False},
-       {"token": "1998", "start_byte": 57, "end_byte": 61, "is_html": False},
-       {"token": "by", "start_byte": 62, "end_byte": 64, "is_html": False},
-       {"token": ":", "start_byte": 64, "end_byte": 65, "is_html": False},
-       {"token": "<ul>", "start_byte": 65, "end_byte": 69, "is_html": True},
-       {"token": "<li>", "start_byte": 69, "end_byte": 73, "is_html": True},
-       {"token": "Larry", "start_byte": 73, "end_byte": 78, "is_html": False},
-       {"token": "</li>", "start_byte": 78, "end_byte": 83, "is_html": True},
-       {"token": "<li>", "start_byte": 83, "end_byte": 87, "is_html": True},
-       {"token": "Sergey", "start_byte": 87, "end_byte": 92, "is_html": False},
-       {"token": "</li>", "start_byte": 92, "end_byte": 97, "is_html": True},
-       {"token": "</ul>", "start_byte": 97, "end_byte": 102, "is_html": True},
-       {"token": "</p>", "start_byte": 102, "end_byte": 106, "is_html": True}
-     ]
-   },
-   "question": {
-     "text": "who founded google",
-     "tokens": ["who", "founded", "google"]
-   },
-   "long_answer_candidates": [
-     {"start_byte": 32, "end_byte": 106, "start_token": 5, "end_token": 22, "top_level": True},
-     {"start_byte": 65, "end_byte": 102, "start_token": 13, "end_token": 21, "top_level": False},
-     {"start_byte": 69, "end_byte": 83, "start_token": 14, "end_token": 17, "top_level": False},
-     {"start_byte": 83, "end_byte": 92, "start_token": 17, "end_token": 20, "top_level": False}
-   ],
-   "annotations": [{
-     "id": "6782080525527814293",
-     "long_answer": {"start_byte": 32, "end_byte": 106, "start_token": 5, "end_token": 22, "candidate_index": 0},
-     "short_answers": [
-       {"start_byte": 73, "end_byte": 78, "start_token": 15, "end_token": 16, "text": "Larry"},
-       {"start_byte": 87, "end_byte": 92, "start_token": 18, "end_token": 19, "text": "Sergey"}
-     ],
-     "yes_no_answer": -1
-   }]
- }
- ```
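
The `default` config generates roughly 95 GB on disk, so the quickest way to look at real records is streaming. A minimal sketch, assuming an installed `datasets` version that can stream this repo's Parquet conversion (the files added in this commit):

```python
# A minimal sketch, assuming streaming of the converted Parquet files works
# with the installed `datasets` version; this avoids the ~42 GB download.
from datasets import load_dataset

nq = load_dataset("natural_questions", split="validation", streaming=True)
example = next(iter(nq))
print(example["question"]["text"])

# `document.tokens` is a sequence of structs, surfaced as a struct of lists;
# dropping the HTML tokens recovers the visible article text.
tokens = example["document"]["tokens"]
words = [t for t, is_html in zip(tokens["token"], tokens["is_html"]) if not is_html]
print(" ".join(words[:25]))
```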
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `id`: a `string` feature.
- - `document`: a dictionary feature containing:
-   - `title`: a `string` feature.
-   - `url`: a `string` feature.
-   - `html`: a `string` feature.
-   - `tokens`: a dictionary feature containing:
-     - `token`: a `string` feature.
-     - `is_html`: a `bool` feature.
-     - `start_byte`: an `int64` feature.
-     - `end_byte`: an `int64` feature.
- - `question`: a dictionary feature containing:
-   - `text`: a `string` feature.
-   - `tokens`: a `list` of `string` features.
- - `long_answer_candidates`: a dictionary feature containing:
-   - `start_token`: an `int64` feature.
-   - `end_token`: an `int64` feature.
-   - `start_byte`: an `int64` feature.
-   - `end_byte`: an `int64` feature.
-   - `top_level`: a `bool` feature.
- - `annotations`: a dictionary feature containing:
-   - `id`: a `string` feature.
-   - `long_answer`: a dictionary feature containing:
-     - `start_token`: an `int64` feature.
-     - `end_token`: an `int64` feature.
-     - `start_byte`: an `int64` feature.
-     - `end_byte`: an `int64` feature.
-     - `candidate_index`: an `int64` feature.
-   - `short_answers`: a dictionary feature containing:
-     - `start_token`: an `int64` feature.
-     - `end_token`: an `int64` feature.
-     - `start_byte`: an `int64` feature.
-     - `end_byte`: an `int64` feature.
-     - `text`: a `string` feature.
-   - `yes_no_answer`: a classification label, with possible values including `NO` (0), `YES` (1).
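
The annotations take some care to unpack: a `Sequence` of structs is surfaced as a struct of parallel lists. A hedged sketch of decoding one example's annotations under that convention (the `-1` sentinels follow the field descriptions above):

```python
# A hedged sketch, assuming the streaming setup from the earlier example and
# the `datasets` "sequence of structs becomes struct of lists" convention.
from datasets import load_dataset

example = next(iter(load_dataset("natural_questions", split="validation", streaming=True)))
ann = example["annotations"]  # dict of parallel lists, one entry per annotator
for i, ann_id in enumerate(ann["id"]):
    la = ann["long_answer"][i]   # dict of token/byte offsets for this annotator
    if la["start_token"] >= 0:   # -1 means this annotator found no long answer
        print(f"{ann_id}: long answer spans tokens {la['start_token']}..{la['end_token']}")
    texts = ann["short_answers"][i]["text"]  # list of short-answer strings
    if texts:
        print("short answers:", texts)
    yn = ann["yes_no_answer"][i]  # ClassLabel id: 0 = NO, 1 = YES, -1 = not applicable
    if yn in (0, 1):
        print("yes/no answer:", ["NO", "YES"][yn])
```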
-
- ### Data Splits
-
- | name    |  train | validation |
- |---------|-------:|-----------:|
- | default | 307373 |       7830 |
- | dev     |    N/A |       7830 |
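
The split sizes can be checked from metadata alone, without downloading any data. A small sketch, assuming `load_dataset_builder` resolves this dataset:

```python
# A small sketch (not from the original card): read split metadata only.
from datasets import load_dataset_builder

builder = load_dataset_builder("natural_questions")
for name, split in builder.info.splits.items():
    print(name, split.num_examples)  # expected: train 307373, validation 7830
```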
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [Creative Commons Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/).
-
- ### Citation Information
-
- ```
- @article{47761,
-   title   = {Natural Questions: a Benchmark for Question Answering Research},
-   author  = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
-   year    = {2019},
-   journal = {Transactions of the Association for Computational Linguistics}
- }
- ```
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "\nThe NQ corpus contains questions from real users, and it requires QA systems to\nread and comprehend an entire Wikipedia article that may or may not contain the\nanswer to the question. The inclusion of real user questions, and the\nrequirement that solutions should read an entire page to find the answer, cause\nNQ to be a more realistic and challenging task than prior QA datasets.\n", "citation": "\n@article{47761,\ntitle\t= {Natural Questions: a Benchmark for Question Answering Research},\nauthor\t= {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},\nyear\t= {2019},\njournal\t= {Transactions of the Association of Computational Linguistics}\n}\n", "homepage": "https://ai.google.com/research/NaturalQuestions/dataset", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"title": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "html": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"token": {"dtype": "string", "id": null, "_type": "Value"}, "is_html": {"dtype": "bool", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "question": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "annotations": {"feature": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "long_answer": {"start_token": {"dtype": "int64", "id": null, "_type": "Value"}, "end_token": {"dtype": "int64", "id": null, "_type": "Value"}, "start_byte": {"dtype": "int64", "id": null, "_type": "Value"}, "end_byte": {"dtype": "int64", "id": null, "_type": "Value"}}, "short_answers": {"feature": {"start_token": {"dtype": "int64", "id": null, "_type": "Value"}, "end_token": {"dtype": "int64", "id": null, "_type": "Value"}, "start_byte": {"dtype": "int64", "id": null, "_type": "Value"}, "end_byte": {"dtype": "int64", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "yes_no_answer": {"num_classes": 2, "names": ["NO", "YES"], "names_file": null, "id": null, "_type": "ClassLabel"}, "long_answer_candidates": {"feature": {"start_token": {"dtype": "int64", "id": null, "_type": "Value"}, "end_token": {"dtype": "int64", "id": null, "_type": "Value"}, "start_byte": {"dtype": "int64", "id": null, "_type": "Value"}, "end_byte": {"dtype": "int64", "id": null, "_type": "Value"}, "top_label": {"dtype": "bool", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "supervised_keys": null, "builder_name": "natural_questions", "config_name": "default", "version": {"version_str": "0.0.2", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 97445142568, "num_examples": 307373, "dataset_name": "natural_questions"}, "validation": {"name": "validation", "num_bytes": 2353975312, "num_examples": 7830, "dataset_name": "natural_questions"}}, "download_checksums": 
{"https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-00.jsonl.gz": {"num_bytes": 858728609, "checksum": "fb63ed2a5af2921898d566a4e8e514ed17bd079735f5a37f9b0c5e83ce087106"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-01.jsonl.gz": {"num_bytes": 891498165, "checksum": "bbccdbc261ced6ee6351ede78c8be5af43d1024c72a60070ea658767d4c3023a"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-02.jsonl.gz": {"num_bytes": 885374316, "checksum": "923afd3c645b0bd887f7b6a43c03889936226708ec7a66d83e5e5fa9cee98f4e"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-03.jsonl.gz": {"num_bytes": 885313666, "checksum": "272b2fcdc37cf23ab4bcdf831a84e3b755da066ad4727cdded57a383a18f45de"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-04.jsonl.gz": {"num_bytes": 890873425, "checksum": "8a9eb2dcf818ab7a44c4fa4b73112547e7f250ec85bdf83d2a3f32542fc3e8c2"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-05.jsonl.gz": {"num_bytes": 873023109, "checksum": "2566560a3ad89300552385c3aba0cb51f9968083f01f04c494623542619cdaca"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-06.jsonl.gz": {"num_bytes": 866509301, "checksum": "8ae5491a1d86fea5025e9ec27fed574fe5886fb36a7b3567ab0dba498603728d"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-07.jsonl.gz": {"num_bytes": 838940867, "checksum": "7d1ee955d5a8dee1dc024e7b6a278314c85514f046d40d56ad5f1c2bb1fd794a"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-08.jsonl.gz": {"num_bytes": 902610214, "checksum": "233ab07737289b4122d0fd2d2278dd4d7de3ef44d5b7d7e2e5abb79dbae55541"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-09.jsonl.gz": {"num_bytes": 883494801, "checksum": "a1e546ee7db94117804c41c5fe80af91c78ee5b10878fc2714adb5322f56bb9b"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-10.jsonl.gz": {"num_bytes": 876311133, "checksum": "0d27b7682c4ebc655e18eb9f8dcbb800ae1d5b09ef1183e29faa10168a015724"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-11.jsonl.gz": {"num_bytes": 878127326, "checksum": "9b457cc0d4021da388c1322538b2b2140f0b2439c8eb056b5247c39ecb0de198"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-12.jsonl.gz": {"num_bytes": 889257016, "checksum": "e3078d51686869be12343e1d02ae656577b290355d540870a370c58baeb89bc6"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-13.jsonl.gz": {"num_bytes": 891769129, "checksum": "ff898b89d8423e4b5c9b35996fed80c8e1ddcc5f8a57c9af2a760d408bfa5df4"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-14.jsonl.gz": {"num_bytes": 892523839, "checksum": "7f28f63e565bfa3b9013a62000da6e070c2cdd2aa6f9fc5cfb14365a1a98ab0f"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-15.jsonl.gz": {"num_bytes": 910660095, "checksum": "64db3145b5021e52611f8aedf49bbd0b5f648fef43acc8b1a4481b3dfe96c248"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-16.jsonl.gz": {"num_bytes": 878177689, "checksum": "c12de70e57943288511596b5ebbf5c914a5f99e8fb50d74286274021e8a18fb7"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-17.jsonl.gz": {"num_bytes": 872805189, "checksum": "2beb6c9f24c650c60354b6b513634e1a209cba28c6f204df4e9e2efc8b7ca59e"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-18.jsonl.gz": {"num_bytes": 875275428, "checksum": 
"2420b73b47cfbb04bca2b1352371dc893879634956b98446bdbde3090556556c"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-19.jsonl.gz": {"num_bytes": 862034169, "checksum": "c514885fc1bff8f4e6291813debbc3a9568b538781eb17e273ac9e88b0b16f80"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-20.jsonl.gz": {"num_bytes": 887586358, "checksum": "59cd4abad74a38265d8e506afd29e3ea498e2f39fe0ee70e9b733810286b3959"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-21.jsonl.gz": {"num_bytes": 890472815, "checksum": "c8d0b1f4cdf78fd658185e92bf1ece16fd16cdde4d27da5221d1a37688ee935e"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-22.jsonl.gz": {"num_bytes": 888396337, "checksum": "6e1ca3851f138e75cc0bab36f5cad83db2e6ae126fac7c6fdc4ce71ad8f410ca"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-23.jsonl.gz": {"num_bytes": 900331594, "checksum": "d34bd25d0b7b8af8aa27b6b9fad8b7febdca6f0c4c1f5779dfc9b4ccbbec6ed2"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-24.jsonl.gz": {"num_bytes": 871216444, "checksum": "40972a44f50c460bcd8fa90a9a0794a2bc169504dc04dbee2a4896c88536f51d"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-25.jsonl.gz": {"num_bytes": 871166814, "checksum": "7028865d9a77d8f0b4b06a1291ff75a488578879ba87e9e679b2d68e8e1accd4"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-26.jsonl.gz": {"num_bytes": 903385811, "checksum": "e4fd4bdc5c63fa1d1310c0ab573601ca87b3809ce1346fc912b398a6bed7f205"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-27.jsonl.gz": {"num_bytes": 842966594, "checksum": "54b8cccea4799351259c3264d077b8df1f291332c0b93f08e66aa78f83a58d18"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-28.jsonl.gz": {"num_bytes": 876393409, "checksum": "a8ee205427dcf3be03759d44de276741f855892d76338ca26a72c76bc07cd3c4"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-29.jsonl.gz": {"num_bytes": 872982425, "checksum": "cb3c96df23bbb9097b61ce1a524c3eb375165404da72d9f0a51eff9744d75643"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-30.jsonl.gz": {"num_bytes": 899739217, "checksum": "e64447543e83b66b725686af6c753f8b08bb6bc9adbe8db36ab31cba11bfcd5b"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-31.jsonl.gz": {"num_bytes": 875703668, "checksum": "7f6195da4b45887d56563924a8741d9db64b4cca32cf50c9d07f8836a761ab09"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-32.jsonl.gz": {"num_bytes": 895840703, "checksum": "5c6574f0f8a157d585bef31fb79a53b1e1b37fdf638b475c92adbb83812b64db"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-33.jsonl.gz": {"num_bytes": 874713497, "checksum": "4d75fd17b0b6ee3133b405b7a90867b0b0b49a51659a5e1eb8bd1d70d0181473"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-34.jsonl.gz": {"num_bytes": 872620262, "checksum": "b70c517e40b7283f10b291f44e6a61a9c9f6dacb9de89ae37e2a7e92a96eec01"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-35.jsonl.gz": {"num_bytes": 854439473, "checksum": "c6e3615fb8753dd3ffe0890a99793847c99b364b50136c8e0430007023bd5506"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-36.jsonl.gz": {"num_bytes": 866233094, "checksum": "dbf6f9227c3558e5195690ace9ec1ccfc84c705eecdd2557d7ead73b88e264ff"}, 
"https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-37.jsonl.gz": {"num_bytes": 894411832, "checksum": "bcbf932a71ef07f0217a2620ec395854c2f200e18829c2f28400e52ad9799aaf"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-38.jsonl.gz": {"num_bytes": 879967719, "checksum": "6518d41f6a205a4551358a154e16e795a40d4d0cd164fa6556f367a7652e3a0d"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-39.jsonl.gz": {"num_bytes": 887056754, "checksum": "f82ba5c7bd19c853e34b2dfdee9c458ef7e9b55f022aed08c3753ebf93034293"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-40.jsonl.gz": {"num_bytes": 873720601, "checksum": "9a6a19e4c408858935bd5456d08e155b9418aa2c1e4fe5ea81d227e57bd6517f"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-41.jsonl.gz": {"num_bytes": 880452966, "checksum": "c3d3ba79c0f6bb718fa58e473dbc70b2064c8168fc59e3b8ef8df2dbea6bfa37"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-42.jsonl.gz": {"num_bytes": 856217171, "checksum": "1d6921d56ff4143e3c189c95e4ab506b70dc569fa4d91f94f9cf29052d253eb6"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-43.jsonl.gz": {"num_bytes": 908184635, "checksum": "595a069528f5988b4808821d1dc81bb8c6dfbd672e69f991bd4004b9e1c02736"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-44.jsonl.gz": {"num_bytes": 891701874, "checksum": "9a290d4d9c9c9507aeec304e1340a3a02e969f17021f02c969aa90b30a970a0d"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-45.jsonl.gz": {"num_bytes": 870559738, "checksum": "40f16e923391fca5f1a30eeacc39ca6c87fc522b9d7b86b7308683ed39c51d5d"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-46.jsonl.gz": {"num_bytes": 883791796, "checksum": "0a5425ac0b9800fb492f0199f358846fd63a10a377a80b7ce784fb715a1d5f90"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-47.jsonl.gz": {"num_bytes": 882109720, "checksum": "65c230069c85c8c74d1ff562c62c443e69e1e93869ecbdb0a2c673faaf4a184e"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-48.jsonl.gz": {"num_bytes": 882241605, "checksum": "df613f0496b7d5f7a49d837b914d1ea80e15c925bb3cf91720ec5b2a25710245"}, "https://storage.googleapis.com/natural_questions/v1.0/train/nq-train-49.jsonl.gz": {"num_bytes": 863247626, "checksum": "ff023c8380d2e9a8c23a1babb24ab6fe2eb5c174f35d74e025bbe0961ea706ec"}, "https://storage.googleapis.com/natural_questions/v1.0/dev/nq-dev-00.jsonl.gz": {"num_bytes": 219593373, "checksum": "78a7f7899aa7d0bc9a29878cdb90daabbeda21a93e3730d8861f20ec736790b2"}, "https://storage.googleapis.com/natural_questions/v1.0/dev/nq-dev-01.jsonl.gz": {"num_bytes": 200209706, "checksum": "9cebaa5eb69cf4ce067079370456b2939d4154a17da88faf73844d8c418cfb9e"}, "https://storage.googleapis.com/natural_questions/v1.0/dev/nq-dev-02.jsonl.gz": {"num_bytes": 210446574, "checksum": "7b82aa74a35025ed91f514ad21e05c4a66cdec56ac1f6b77767a578156ff3bfc"}, "https://storage.googleapis.com/natural_questions/v1.0/dev/nq-dev-03.jsonl.gz": {"num_bytes": 216859801, "checksum": "c7d45bb464bda3da7788c985b07def313ab5bed69bcc258acbe6f0918050bf6e"}, "https://storage.googleapis.com/natural_questions/v1.0/dev/nq-dev-04.jsonl.gz": {"num_bytes": 220929521, "checksum": "00969275e9fb6a5dcc7e20ec9589c23ac00de61c979c8b957f4180b5b9a3043a"}}, "download_size": 45069199013, "dataset_size": 99799117880, "size_in_bytes": 144868316893}}
 
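The `download_checksums` table above pairs each source shard with its SHA-256 digest. A sketch of how one might verify a manually downloaded shard against it (the local filename is an assumption):

```python
# A hedged sketch: verify a downloaded shard against the checksum table above.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest for nq-train-00.jsonl.gz, copied from the table above.
expected = "fb63ed2a5af2921898d566a4e8e514ed17bd079735f5a37f9b0c5e83ce087106"
assert sha256_of("nq-train-00.jsonl.gz") == expected
```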
 
dev/validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8b2b5eec67ec9bc255354e93a762e435e51691a4ba2b586073519827071c498
+ size 352699699
dev/validation/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05d4b036e8cb8972919a441cac7a7c4689103d7ccde011d577ec729f461ad2b7
+ size 207081362
dev/validation/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2ddcb7273937ca66570fa567366e4bc581117187161dd2f7acfde9f0b9e5e13
+ size 187850634
dev/validation/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f9c390dd93e75ebcf1bbea424445f07e3e18a25138c10be7bf66f3d5d612b42
+ size 353050916
dev/validation/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a703eb3856618ca1145f60282a1b057e730d572625510fb34e794450b1c29948
+ size 205861342
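
Each entry above is a Git LFS pointer to a real Parquet shard. Once a shard has been fetched (e.g. with `huggingface_hub.hf_hub_download`), it can be inspected directly; a sketch assuming `pyarrow` is installed and a local copy is named `0000.parquet`:

```python
# A minimal sketch: inspect one converted Parquet shard without datasets.
import pyarrow.parquet as pq

pf = pq.ParquetFile("0000.parquet")
print(pf.metadata.num_rows)    # rows in this shard
print(pf.schema_arrow)         # should mirror the feature schema above

# Read a single row group rather than the whole (multi-hundred-MB) file.
first_group = pf.read_row_group(0)
print(first_group.column("id")[0])
```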
natural_questions.py DELETED
@@ -1,222 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Natural Questions: A Benchmark for Question Answering Research."""
-
-
- import html
- import json
- import re
-
- import datasets
-
-
- _CITATION = """
- @article{47761,
-   title   = {Natural Questions: a Benchmark for Question Answering Research},
-   author  = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
-   year    = {2019},
-   journal = {Transactions of the Association for Computational Linguistics}
- }
- """
-
- _DESCRIPTION = """
- The NQ corpus contains questions from real users, and it requires QA systems to
- read and comprehend an entire Wikipedia article that may or may not contain the
- answer to the question. The inclusion of real user questions, and the
- requirement that solutions should read an entire page to find the answer, cause
- NQ to be a more realistic and challenging task than prior QA datasets.
- """
-
- _URL = "https://ai.google.com/research/NaturalQuestions/dataset"
-
- _BASE_DOWNLOAD_URL = "https://storage.googleapis.com/natural_questions/v1.0"
- _DOWNLOAD_URLS = {
-     "train": ["%s/train/nq-train-%02d.jsonl.gz" % (_BASE_DOWNLOAD_URL, i) for i in range(50)],
-     "validation": ["%s/dev/nq-dev-%02d.jsonl.gz" % (_BASE_DOWNLOAD_URL, i) for i in range(5)],
- }
-
- _VERSION = datasets.Version("0.0.4")
-
-
- class NaturalQuestions(datasets.BeamBasedBuilder):
-     """Natural Questions: A Benchmark for Question Answering Research."""
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="default", version=_VERSION),
-         datasets.BuilderConfig(name="dev", version=_VERSION, description="Only dev split"),
-     ]
-     DEFAULT_CONFIG_NAME = "default"
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "document": {
-                         "title": datasets.Value("string"),
-                         "url": datasets.Value("string"),
-                         "html": datasets.Value("string"),
-                         "tokens": datasets.features.Sequence(
-                             {
-                                 "token": datasets.Value("string"),
-                                 "is_html": datasets.Value("bool"),
-                                 "start_byte": datasets.Value("int64"),
-                                 "end_byte": datasets.Value("int64"),
-                             }
-                         ),
-                     },
-                     "question": {
-                         "text": datasets.Value("string"),
-                         "tokens": datasets.features.Sequence(datasets.Value("string")),
-                     },
-                     "long_answer_candidates": datasets.features.Sequence(
-                         {
-                             "start_token": datasets.Value("int64"),
-                             "end_token": datasets.Value("int64"),
-                             "start_byte": datasets.Value("int64"),
-                             "end_byte": datasets.Value("int64"),
-                             "top_level": datasets.Value("bool"),
-                         }
-                     ),
-                     "annotations": datasets.features.Sequence(
-                         {
-                             "id": datasets.Value("string"),
-                             "long_answer": {
-                                 "start_token": datasets.Value("int64"),
-                                 "end_token": datasets.Value("int64"),
-                                 "start_byte": datasets.Value("int64"),
-                                 "end_byte": datasets.Value("int64"),
-                                 "candidate_index": datasets.Value("int64"),
-                             },
-                             "short_answers": datasets.features.Sequence(
-                                 {
-                                     "start_token": datasets.Value("int64"),
-                                     "end_token": datasets.Value("int64"),
-                                     "start_byte": datasets.Value("int64"),
-                                     "end_byte": datasets.Value("int64"),
-                                     "text": datasets.Value("string"),
-                                 }
-                             ),
-                             "yes_no_answer": datasets.features.ClassLabel(
-                                 names=["NO", "YES"]
-                             ),  # Can also be -1 for NONE.
-                         }
-                     ),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_URL,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager, pipeline):
-         """Returns SplitGenerators."""
-         urls = _DOWNLOAD_URLS
-         if self.config.name == "dev":
-             urls = {"validation": urls["validation"]}
-         files = dl_manager.download(urls)
-         if not pipeline.is_local():
-             files = dl_manager.ship_files_with_pipeline(files, pipeline)
-         return [
-             datasets.SplitGenerator(
-                 name=split,
-                 gen_kwargs={"filepaths": files[split]},
-             )
-             for split in [datasets.Split.TRAIN, datasets.Split.VALIDATION]
-             if split in files
-         ]
-
-     def _build_pcollection(self, pipeline, filepaths):
-         """Build PCollection of examples."""
-         try:
-             import apache_beam as beam
-         except ImportError as err:
-             raise ImportError(
-                 "To be able to load natural_questions, you need to install apache_beam: 'pip install apache_beam'"
-             ) from err
-
-         def _parse_example(line):
-             """Parse a single json line and emit an example dict."""
-             ex_json = json.loads(line)
-             html_bytes = ex_json["document_html"].encode("utf-8")
-
-             def _parse_short_answer(short_ans):
-                 """Extract the text of a short answer from the raw document bytes."""
-                 ans_bytes = html_bytes[short_ans["start_byte"] : short_ans["end_byte"]]
-                 # Remove non-breaking spaces.
-                 ans_bytes = ans_bytes.replace(b"\xc2\xa0", b" ")
-                 text = ans_bytes.decode("utf-8")
-                 # Remove HTML markup.
-                 text = re.sub("<([^>]*)>", "", html.unescape(text))
-                 return {
-                     "start_token": short_ans["start_token"],
-                     "end_token": short_ans["end_token"],
-                     "start_byte": short_ans["start_byte"],
-                     "end_byte": short_ans["end_byte"],
-                     "text": text,
-                 }
-
-             def _parse_annotation(an_json):
-                 return {
-                     # Convert to str since some IDs cannot be represented by datasets.Value('int64').
-                     "id": str(an_json["annotation_id"]),
-                     "long_answer": {
-                         "start_token": an_json["long_answer"]["start_token"],
-                         "end_token": an_json["long_answer"]["end_token"],
-                         "start_byte": an_json["long_answer"]["start_byte"],
-                         "end_byte": an_json["long_answer"]["end_byte"],
-                         "candidate_index": an_json["long_answer"]["candidate_index"],
-                     },
-                     "short_answers": [_parse_short_answer(ans) for ans in an_json["short_answers"]],
-                     "yes_no_answer": (-1 if an_json["yes_no_answer"] == "NONE" else an_json["yes_no_answer"]),
-                 }
-
-             beam.metrics.Metrics.counter("nq", "examples").inc()
-             # Convert to str since some IDs cannot be represented by datasets.Value('int64').
-             id_ = str(ex_json["example_id"])
-             return (
-                 id_,
-                 {
-                     "id": id_,
-                     "document": {
-                         "title": ex_json["document_title"],
-                         "url": ex_json["document_url"],
-                         "html": ex_json["document_html"],
-                         "tokens": [
-                             {
-                                 "token": t["token"],
-                                 "is_html": t["html_token"],
-                                 "start_byte": t["start_byte"],
-                                 "end_byte": t["end_byte"],
-                             }
-                             for t in ex_json["document_tokens"]
-                         ],
-                     },
-                     "question": {"text": ex_json["question_text"], "tokens": ex_json["question_tokens"]},
-                     "long_answer_candidates": [lac_json for lac_json in ex_json["long_answer_candidates"]],
-                     "annotations": [_parse_annotation(an_json) for an_json in ex_json["annotations"]],
-                 },
-             )
-
-         return (
-             pipeline
-             | beam.Create(filepaths)
-             | beam.io.ReadAllFromText(compression_type=beam.io.textio.CompressionTypes.GZIP)
-             | beam.Map(_parse_example)
-         )
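
Because the deleted script is a `BeamBasedBuilder`, preparing the dataset from it required an Apache Beam runner, which is the motivation for the Parquet conversion in this commit. A hedged usage sketch, assuming a `datasets` version that still supports Beam-based scripts:

```python
# A usage sketch: Beam's DirectRunner is single-machine and very slow at
# this dataset's scale, hence the converted Parquet files above.
from datasets import load_dataset

# The "dev" config downloads and prepares only the 5 validation shards.
nq_dev = load_dataset("natural_questions", "dev", beam_runner="DirectRunner")
print(nq_dev["validation"].num_rows)  # expected: 7830
```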