Modalities: Text
Formats: parquet
Sub-tasks: extractive-qa
Libraries: Datasets, Dask
License: apache-2.0
parquet-converter committed on
Commit c7dc915
1 parent: 6a70798

Update parquet files

README.md DELETED
@@ -1,307 +0,0 @@
- ---
- pretty_name: TyDi QA
- annotations_creators:
- - crowdsourced
- language_creators:
- - crowdsourced
- language:
- - ar
- - bn
- - en
- - fi
- - id
- - ja
- - ko
- - ru
- - sw
- - te
- - th
- license:
- - apache-2.0
- multilinguality:
- - multilingual
- size_categories:
- - unknown
- source_datasets:
- - extended|wikipedia
- task_categories:
- - question-answering
- task_ids:
- - extractive-qa
- paperswithcode_id: tydi-qa
- dataset_info:
- - config_name: primary_task
-   features:
-   - name: passage_answer_candidates
-     sequence:
-     - name: plaintext_start_byte
-       dtype: int32
-     - name: plaintext_end_byte
-       dtype: int32
-   - name: question_text
-     dtype: string
-   - name: document_title
-     dtype: string
-   - name: language
-     dtype: string
-   - name: annotations
-     sequence:
-     - name: passage_answer_candidate_index
-       dtype: int32
-     - name: minimal_answers_start_byte
-       dtype: int32
-     - name: minimal_answers_end_byte
-       dtype: int32
-     - name: yes_no_answer
-       dtype: string
-   - name: document_plaintext
-     dtype: string
-   - name: document_url
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 5550574617
-     num_examples: 166916
-   - name: validation
-     num_bytes: 484380443
-     num_examples: 18670
-   download_size: 1953887429
-   dataset_size: 6034955060
- - config_name: secondary_task
-   features:
-   - name: id
-     dtype: string
-   - name: title
-     dtype: string
-   - name: context
-     dtype: string
-   - name: question
-     dtype: string
-   - name: answers
-     sequence:
-     - name: text
-       dtype: string
-     - name: answer_start
-       dtype: int32
-   splits:
-   - name: train
-     num_bytes: 52948607
-     num_examples: 49881
-   - name: validation
-     num_bytes: 5006461
-     num_examples: 5077
-   download_size: 1953887429
-   dataset_size: 57955068
- ---
- 
- # Dataset Card for "tydiqa"
- 
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
- 
- ## Dataset Description
- 
- - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 3726.74 MB
- - **Size of the generated dataset:** 5812.92 MB
- - **Total amount of disk used:** 9539.67 MB
- 
- ### Dataset Summary
- 
- TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
- The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
- expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
- in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
- information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
- don’t know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without
- the use of translation (unlike MLQA and XQuAD).
- 
- ### Supported Tasks and Leaderboards
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Languages
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ## Dataset Structure
- 
- ### Data Instances
- 
- #### primary_task
- 
- - **Size of downloaded dataset files:** 1863.37 MB
- - **Size of the generated dataset:** 5757.59 MB
- - **Total amount of disk used:** 7620.96 MB
- 
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
- 
- {
-     "annotations": {
-         "minimal_answers_end_byte": [-1, -1, -1],
-         "minimal_answers_start_byte": [-1, -1, -1],
-         "passage_answer_candidate_index": [-1, -1, -1],
-         "yes_no_answer": ["NONE", "NONE", "NONE"]
-     },
-     "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
-     "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
-     "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
-     "language": "thai",
-     "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
-     "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
- }
- ```
- 
- #### secondary_task
- 
- - **Size of downloaded dataset files:** 1863.37 MB
- - **Size of the generated dataset:** 55.34 MB
- - **Total amount of disk used:** 1918.71 MB
- 
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
- 
- {
-     "answers": {
-         "answer_start": [394],
-         "text": ["بطولتين"]
-     },
-     "context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
-     "id": "arabic-2387335860751143628-1",
-     "question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
-     "title": "قائمة نهائيات كأس العالم"
- }
- ```
- 
- ### Data Fields
- 
- The data fields are the same among all splits.
- 
- #### primary_task
- - `passage_answer_candidates`: a dictionary feature containing:
-   - `plaintext_start_byte`: an `int32` feature.
-   - `plaintext_end_byte`: an `int32` feature.
- - `question_text`: a `string` feature.
- - `document_title`: a `string` feature.
- - `language`: a `string` feature.
- - `annotations`: a dictionary feature containing:
-   - `passage_answer_candidate_index`: an `int32` feature.
-   - `minimal_answers_start_byte`: an `int32` feature.
-   - `minimal_answers_end_byte`: an `int32` feature.
-   - `yes_no_answer`: a `string` feature.
- - `document_plaintext`: a `string` feature.
- - `document_url`: a `string` feature.
- 
- #### secondary_task
- - `id`: a `string` feature.
- - `title`: a `string` feature.
- - `context`: a `string` feature.
- - `question`: a `string` feature.
- - `answers`: a dictionary feature containing:
-   - `text`: a `string` feature.
-   - `answer_start`: an `int32` feature.
- 
- ### Data Splits
- 
- | name           |  train | validation |
- | -------------- | -----: | ---------: |
- | primary_task   | 166916 |      18670 |
- | secondary_task |  49881 |       5077 |
- 
- ## Dataset Creation
- 
- ### Curation Rationale
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Source Data
- 
- #### Initial Data Collection and Normalization
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- #### Who are the source language producers?
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Annotations
- 
- #### Annotation process
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- #### Who are the annotators?
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Personal and Sensitive Information
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ## Considerations for Using the Data
- 
- ### Social Impact of Dataset
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Discussion of Biases
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Other Known Limitations
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ## Additional Information
- 
- ### Dataset Curators
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Licensing Information
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Citation Information
- 
- ```
- @article{tydiqa,
- title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
- author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
- year = {2020},
- journal = {Transactions of the Association for Computational Linguistics}
- }
- 
- ```
- 
- 
- ### Contributions
- 
- Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"primary_task": {"description": "TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.\nThe languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language\nexpresses -- such that we expect models performing well on this set to generalize across a large number of the languages\nin the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic\ninformation-seeking task and avoid priming effects, questions are written by people who want to know the answer, but\ndon\u2019t know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without\nthe use of translation (unlike MLQA and XQuAD).\n", "citation": "@article{tydiqa,\ntitle = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},\nauthor = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},\nyear = {2020},\njournal = {Transactions of the Association for Computational Linguistics}\n}\n", "homepage": "https://github.com/google-research-datasets/tydiqa", "license": "", "features": {"passage_answer_candidates": {"feature": {"plaintext_start_byte": {"dtype": "int32", "id": null, "_type": "Value"}, "plaintext_end_byte": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "question_text": {"dtype": "string", "id": null, "_type": "Value"}, "document_title": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "annotations": {"feature": {"passage_answer_candidate_index": {"dtype": "int32", "id": null, "_type": "Value"}, "minimal_answers_start_byte": {"dtype": "int32", "id": null, "_type": "Value"}, "minimal_answers_end_byte": {"dtype": "int32", "id": null, "_type": "Value"}, "yes_no_answer": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "document_plaintext": {"dtype": "string", "id": null, "_type": "Value"}, "document_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "tydiqa", "config_name": "primary_task", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5550574617, "num_examples": 166916, "dataset_name": "tydiqa"}, "validation": {"name": "validation", "num_bytes": 484380443, "num_examples": 18670, "dataset_name": "tydiqa"}}, "download_checksums": {"https://storage.googleapis.com/tydiqa/v1.0/tydiqa-v1.0-train.jsonl.gz": {"num_bytes": 1729651634, "checksum": "8eeedfee7593db7c3637d65a3d5c67b82486137ac6ac3ea7d08be9a64d71b629"}, "https://storage.googleapis.com/tydiqa/v1.0/tydiqa-v1.0-dev.jsonl.gz": {"num_bytes": 160614310, "checksum": "b52b8d4db1850b1549e960219e6056d8139986f8caf1b5e8b4eecadabed24413"}, "https://storage.googleapis.com/tydiqa/v1.1/tydiqa-goldp-v1.1-train.json": {"num_bytes": 58004076, "checksum": "cefc8e09ff2548d9b10a678d3a6bbbe5bc036be543f92418819ea676c97be23b"}, "https://storage.googleapis.com/tydiqa/v1.1/tydiqa-goldp-v1.1-dev.json": {"num_bytes": 5617409, "checksum": "b286e0f34bc7f52259359989716f369b160565bd12ad8f3a3e311f9b0dbad1c0"}}, "download_size": 1953887429, "post_processing_size": null, "dataset_size": 6034955060, "size_in_bytes": 7988842489}, "secondary_task": {"description": "TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.\nThe languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language\nexpresses -- such that we expect models performing well on this set to generalize across a large number of the languages\nin the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic\ninformation-seeking task and avoid priming effects, questions are written by people who want to know the answer, but\ndon\u2019t know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without\nthe use of translation (unlike MLQA and XQuAD).\n", "citation": "@article{tydiqa,\ntitle = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},\nauthor = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},\nyear = {2020},\njournal = {Transactions of the Association for Computational Linguistics}\n}\n", "homepage": "https://github.com/google-research-datasets/tydiqa", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "answer_start": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "question-answering-extractive", "question_column": "question", "context_column": "context", "answers_column": "answers"}], "builder_name": "tydiqa", "config_name": "secondary_task", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 52948607, "num_examples": 49881, "dataset_name": "tydiqa"}, "validation": {"name": "validation", "num_bytes": 5006461, "num_examples": 5077, "dataset_name": "tydiqa"}}, "download_checksums": {"https://storage.googleapis.com/tydiqa/v1.0/tydiqa-v1.0-train.jsonl.gz": {"num_bytes": 1729651634, "checksum": "8eeedfee7593db7c3637d65a3d5c67b82486137ac6ac3ea7d08be9a64d71b629"}, "https://storage.googleapis.com/tydiqa/v1.0/tydiqa-v1.0-dev.jsonl.gz": {"num_bytes": 160614310, "checksum": "b52b8d4db1850b1549e960219e6056d8139986f8caf1b5e8b4eecadabed24413"}, "https://storage.googleapis.com/tydiqa/v1.1/tydiqa-goldp-v1.1-train.json": {"num_bytes": 58004076, "checksum": "cefc8e09ff2548d9b10a678d3a6bbbe5bc036be543f92418819ea676c97be23b"}, "https://storage.googleapis.com/tydiqa/v1.1/tydiqa-goldp-v1.1-dev.json": {"num_bytes": 5617409, "checksum": "b286e0f34bc7f52259359989716f369b160565bd12ad8f3a3e311f9b0dbad1c0"}}, "download_size": 1953887429, "post_processing_size": null, "dataset_size": 57955068, "size_in_bytes": 2011842497}}
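The `download_checksums` entries in dataset_infos.json record a sha256 digest for each source file. A minimal sketch of verifying a downloaded file against such a digest, using only the standard library (the helper name `sha256_of` is my own):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its hex sha256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Usage against a downloaded copy (path is hypothetical); compare the result
# to the "checksum" value recorded for that URL in dataset_infos.json:
# sha256_of("tydiqa-v1.0-dev.jsonl.gz")
```

Streaming in chunks keeps memory flat even for the multi-gigabyte training file.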
 
 
secondary_task/tydiqa-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd851c0fc40ecd7e13ef610b86fc628cc28d8158c380f91487aa6dc56c19217d
+ size 26917474
secondary_task/tydiqa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20ed613224aad40738db65e537f8766d735c7a45fe1aa5fe2a6be7de6012252e
+ size 2484104
tydiqa.py DELETED
@@ -1,268 +0,0 @@
- """TODO(tydiqa): Add a description here."""
- 
- 
- import json
- import textwrap
- 
- import datasets
- from datasets.tasks import QuestionAnsweringExtractive
- 
- 
- # TODO(tydiqa): BibTeX citation
- _CITATION = """\
- @article{tydiqa,
- title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
- author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
- year = {2020},
- journal = {Transactions of the Association for Computational Linguistics}
- }
- """
- 
- # TODO(tydiqa):
- _DESCRIPTION = """\
- TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
- The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
- expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
- in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
- information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
- don’t know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without
- the use of translation (unlike MLQA and XQuAD).
- """
- 
- _URL = "https://storage.googleapis.com/tydiqa/"
- _PRIMARY_URLS = {
-     "train": _URL + "v1.0/tydiqa-v1.0-train.jsonl.gz",
-     "dev": _URL + "v1.0/tydiqa-v1.0-dev.jsonl.gz",
- }
- _SECONDARY_URLS = {
-     "train": _URL + "v1.1/tydiqa-goldp-v1.1-train.json",
-     "dev": _URL + "v1.1/tydiqa-goldp-v1.1-dev.json",
- }
- 
- 
- class TydiqaConfig(datasets.BuilderConfig):
- 
-     """BuilderConfig for Tydiqa"""
- 
-     def __init__(self, **kwargs):
-         """
- 
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(TydiqaConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
- 
- 
- class Tydiqa(datasets.GeneratorBasedBuilder):
-     """TODO(tydiqa): Short description of my dataset."""
- 
-     # TODO(tydiqa): Set up version.
-     VERSION = datasets.Version("0.1.0")
-     BUILDER_CONFIGS = [
-         TydiqaConfig(
-             name="primary_task",
-             description=textwrap.dedent(
-                 """\
-                 Passage selection task (SelectP): Given a list of the passages in the article, return either (a) the index of
-                 the passage that answers the question or (b) NULL if no such passage exists.
-                 Minimal answer span task (MinSpan): Given the full text of an article, return one of (a) the start and end
-                 byte indices of the minimal span that completely answers the question; (b) YES or NO if the question requires
-                 a yes/no answer and we can draw a conclusion from the passage; (c) NULL if it is not possible to produce a
-                 minimal answer for this question."""
-             ),
-         ),
-         TydiqaConfig(
-             name="secondary_task",
-             description=textwrap.dedent(
-                 """Gold passage task (GoldP): Given a passage that is guaranteed to contain the
-                 answer, predict the single contiguous span of characters that answers the question. This is more similar to
-                 existing reading comprehension datasets (as opposed to the information-seeking task outlined above).
-                 This task is constructed with two goals in mind: (1) more directly comparing with prior work and (2) providing
-                 a simplified way for researchers to use TyDi QA by providing compatibility with existing code for SQuAD 1.1,
-                 XQuAD, and MLQA. Toward these goals, the gold passage task differs from the primary task in several ways:
-                 only the gold answer passage is provided rather than the entire Wikipedia article;
-                 unanswerable questions have been discarded, similar to MLQA and XQuAD;
-                 we evaluate with the SQuAD 1.1 metrics like XQuAD; and
-                 Thai and Japanese are removed since the lack of whitespace breaks some tools.
-                 """
-             ),
-         ),
-     ]
- 
-     def _info(self):
-         # TODO(tydiqa): Specifies the datasets.DatasetInfo object
-         if self.config.name == "primary_task":
-             return datasets.DatasetInfo(
-                 # This is the description that will appear on the datasets page.
-                 description=_DESCRIPTION,
-                 # datasets.features.FeatureConnectors
-                 features=datasets.Features(
-                     {
-                         "passage_answer_candidates": datasets.features.Sequence(
-                             {
-                                 "plaintext_start_byte": datasets.Value("int32"),
-                                 "plaintext_end_byte": datasets.Value("int32"),
-                             }
-                         ),
-                         "question_text": datasets.Value("string"),
-                         "document_title": datasets.Value("string"),
-                         "language": datasets.Value("string"),
-                         "annotations": datasets.features.Sequence(
-                             {
-                                 # 'annotation_id': datasets.Value('variant'),
-                                 "passage_answer_candidate_index": datasets.Value("int32"),
-                                 "minimal_answers_start_byte": datasets.Value("int32"),
-                                 "minimal_answers_end_byte": datasets.Value("int32"),
-                                 "yes_no_answer": datasets.Value("string"),
-                             }
-                         ),
-                         "document_plaintext": datasets.Value("string"),
-                         # 'example_id': datasets.Value('variant'),
-                         "document_url": datasets.Value("string")
-                         # These are the features of your dataset like images, labels ...
-                     }
-                 ),
-                 # If there's a common (input, target) tuple from the features,
-                 # specify them here. They'll be used if as_supervised=True in
-                 # builder.as_dataset.
-                 supervised_keys=None,
-                 # Homepage of the dataset for documentation
-                 homepage="https://github.com/google-research-datasets/tydiqa",
-                 citation=_CITATION,
-             )
-         elif self.config.name == "secondary_task":
-             return datasets.DatasetInfo(
-                 description=_DESCRIPTION,
-                 features=datasets.Features(
-                     {
-                         "id": datasets.Value("string"),
-                         "title": datasets.Value("string"),
-                         "context": datasets.Value("string"),
-                         "question": datasets.Value("string"),
-                         "answers": datasets.features.Sequence(
-                             {
-                                 "text": datasets.Value("string"),
-                                 "answer_start": datasets.Value("int32"),
-                             }
-                         ),
-                     }
-                 ),
-                 # No default supervised_keys (as we have to pass both question
-                 # and context as input).
-                 supervised_keys=None,
-                 homepage="https://github.com/google-research-datasets/tydiqa",
-                 citation=_CITATION,
-                 task_templates=[
-                     QuestionAnsweringExtractive(
-                         question_column="question", context_column="context", answers_column="answers"
-                     )
-                 ],
-             )
- 
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(tydiqa): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         primary_downloaded = dl_manager.download_and_extract(_PRIMARY_URLS)
-         secondary_downloaded = dl_manager.download_and_extract(_SECONDARY_URLS)
-         if self.config.name == "primary_task":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": primary_downloaded["train"]},
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.VALIDATION,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": primary_downloaded["dev"]},
-                 ),
-             ]
-         elif self.config.name == "secondary_task":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": secondary_downloaded["train"]},
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.VALIDATION,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": secondary_downloaded["dev"]},
-                 ),
-             ]
- 
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(tydiqa): Yields (key, example) tuples from the dataset
-         if self.config.name == "primary_task":
-             with open(filepath, encoding="utf-8") as f:
-                 for id_, row in enumerate(f):
-                     data = json.loads(row)
-                     passages = data["passage_answer_candidates"]
-                     end_byte = [passage["plaintext_end_byte"] for passage in passages]
-                     start_byte = [passage["plaintext_start_byte"] for passage in passages]
-                     title = data["document_title"]
-                     lang = data["language"]
-                     question = data["question_text"]
-                     annotations = data["annotations"]
-                     # annot_ids = [annotation["annotation_id"] for annotation in annotations]
-                     yes_no_answers = [annotation["yes_no_answer"] for annotation in annotations]
-                     min_answers_end_byte = [
-                         annotation["minimal_answer"]["plaintext_end_byte"] for annotation in annotations
-                     ]
-                     min_answers_start_byte = [
-                         annotation["minimal_answer"]["plaintext_start_byte"] for annotation in annotations
-                     ]
-                     passage_cand_answers = [
-                         annotation["passage_answer"]["candidate_index"] for annotation in annotations
-                     ]
-                     doc = data["document_plaintext"]
-                     # example_id = data["example_id"]
-                     url = data["document_url"]
-                     yield id_, {
-                         "passage_answer_candidates": {
-                             "plaintext_start_byte": start_byte,
-                             "plaintext_end_byte": end_byte,
-                         },
-                         "question_text": question,
-                         "document_title": title,
-                         "language": lang,
-                         "annotations": {
-                             # 'annotation_id': annot_ids,
-                             "passage_answer_candidate_index": passage_cand_answers,
-                             "minimal_answers_start_byte": min_answers_start_byte,
-                             "minimal_answers_end_byte": min_answers_end_byte,
-                             "yes_no_answer": yes_no_answers,
-                         },
-                         "document_plaintext": doc,
-                         # 'example_id': example_id,
-                         "document_url": url,
-                     }
-         elif self.config.name == "secondary_task":
-             with open(filepath, encoding="utf-8") as f:
-                 data = json.load(f)
-                 for article in data["data"]:
-                     title = article.get("title", "").strip()
-                     for paragraph in article["paragraphs"]:
-                         context = paragraph["context"].strip()
-                         for qa in paragraph["qas"]:
-                             question = qa["question"].strip()
-                             id_ = qa["id"]
- 
-                             answer_starts = [answer["answer_start"] for answer in qa["answers"]]
-                             answers = [answer["text"].strip() for answer in qa["answers"]]
- 
-                             # Features currently used are "context", "question", and "answers".
-                             # Others are extracted here for the ease of future expansions.
-                             yield id_, {
-                                 "title": title,
-                                 "context": context,
-                                 "question": question,
-                                 "id": id_,
-                                 "answers": {
-                                     "answer_start": answer_starts,
-                                     "text": answers,
-                                 },
-                             }
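The primary_task annotations in the deleted script locate answers by offsets (`minimal_answers_start_byte` / `minimal_answers_end_byte`) rather than character positions, with -1 marking an unanswerable annotation. A minimal sketch of recovering an answer string from those offsets (the helper name and example text are my own): the offsets index into the UTF-8 encoding of `document_plaintext`, so the text must be sliced as bytes before decoding.

```python
def minimal_answer(document_plaintext, start_byte, end_byte):
    """Return the minimal answer span, or None when the offsets are -1/-1."""
    if start_byte == -1 or end_byte == -1:
        return None
    # Slice the UTF-8 bytes, not the string: byte and character
    # positions diverge for non-ASCII documents.
    return document_plaintext.encode("utf-8")[start_byte:end_byte].decode("utf-8")


doc = "TyDi QA covers 11 typologically diverse languages."
print(minimal_answer(doc, 15, 17))  # the byte span covering "11"
print(minimal_answer(doc, -1, -1))  # unanswerable annotation
```

Slicing a string directly would silently return the wrong span for Thai, Arabic, or any other non-ASCII document in the corpus.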