Nadav committed on
Commit 93d2d36
1 Parent(s): 5c195a0

add json files

.gitattributes CHANGED
@@ -49,3 +49,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ /Users/knf792/PycharmProjects/NLP_course/nlp_course_tydiqa/tidyqa_answerable/train/dataset_info.json filter=lfs diff=lfs merge=lfs -text
+ /Users/knf792/PycharmProjects/NLP_course/nlp_course_tydiqa/tidyqa_answerable/train/state.json filter=lfs diff=lfs merge=lfs -text
+ /Users/knf792/PycharmProjects/NLP_course/nlp_course_tydiqa/tidyqa_answerable/validation/dataset_info.json filter=lfs diff=lfs merge=lfs -text
+ /Users/knf792/PycharmProjects/NLP_course/nlp_course_tydiqa/tidyqa_answerable/validation/state.json filter=lfs diff=lfs merge=lfs -text
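Rules like the four added above are normally appended with `git lfs track` rather than written by hand. A minimal sketch, assuming the `git-lfs` CLI is installed and the command runs from the repository root (the patterns below are illustrative placeholders, not the exact paths in this commit):

```
import subprocess

# Illustrative patterns only; `git lfs track <pattern>` appends a
# "filter=lfs diff=lfs merge=lfs -text" rule for that pattern to .gitattributes.
for pattern in ["tidyqa_answerable/*/dataset_info.json",
                "tidyqa_answerable/*/state.json"]:
    subprocess.run(["git", "lfs", "track", pattern], check=True)

# Stage the updated attributes file alongside the tracked data.
subprocess.run(["git", "add", ".gitattributes"], check=True)
```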
README.md ADDED
@@ -0,0 +1,244 @@
+ ---
+ pretty_name: TyDi QA
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ language:
+ - ar
+ - bn
+ - en
+ - fi
+ - id
+ - ja
+ - ko
+ - ru
+ - sw
+ - te
+ - th
+ license:
+ - apache-2.0
+ multilinguality:
+ - multilingual
+ size_categories:
+ - unknown
+ source_datasets:
+ - extended|wikipedia
+ task_categories:
+ - question-answering
+ task_ids:
+ - extractive-qa
+ paperswithcode_id: tydi-qa
+ ---
+
+ # Dataset Card for "tydiqa"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of downloaded dataset files:** 3726.74 MB
+ - **Size of the generated dataset:** 5812.92 MB
+ - **Total amount of disk used:** 9539.67 MB
+
+ ### Dataset Summary
+
+ TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
+ The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
+ expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
+ in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
+ information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
+ don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without
+ the use of translation (unlike MLQA and XQuAD).
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Languages
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### primary_task
+
+ - **Size of downloaded dataset files:** 1863.37 MB
+ - **Size of the generated dataset:** 5757.59 MB
+ - **Total amount of disk used:** 7620.96 MB
+
+ An example of 'validation' looks as follows.
+ ```
+ This example was too long and was cropped:
+
+ {
+     "annotations": {
+         "minimal_answers_end_byte": [-1, -1, -1],
+         "minimal_answers_start_byte": [-1, -1, -1],
+         "passage_answer_candidate_index": [-1, -1, -1],
+         "yes_no_answer": ["NONE", "NONE", "NONE"]
+     },
+     "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
+     "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
+     "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
+     "language": "thai",
+     "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
+     "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
+ }
+ ```
+
+ #### secondary_task
+
+ - **Size of downloaded dataset files:** 1863.37 MB
+ - **Size of the generated dataset:** 55.34 MB
+ - **Total amount of disk used:** 1918.71 MB
+
+ An example of 'validation' looks as follows.
+ ```
+ This example was too long and was cropped:
+
+ {
+     "answers": {
+         "answer_start": [394],
+         "text": ["بطولتين"]
+     },
+     "context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
+     "id": "arabic-2387335860751143628-1",
+     "question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
+     "title": "قائمة نهائيات كأس العالم"
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ #### primary_task
+ - `passage_answer_candidates`: a dictionary feature containing:
+   - `plaintext_start_byte`: a `int32` feature.
+   - `plaintext_end_byte`: a `int32` feature.
+ - `question_text`: a `string` feature.
+ - `document_title`: a `string` feature.
+ - `language`: a `string` feature.
+ - `annotations`: a dictionary feature containing:
+   - `passage_answer_candidate_index`: a `int32` feature.
+   - `minimal_answers_start_byte`: a `int32` feature.
+   - `minimal_answers_end_byte`: a `int32` feature.
+   - `yes_no_answer`: a `string` feature.
+ - `document_plaintext`: a `string` feature.
+ - `document_url`: a `string` feature.
+
+ #### secondary_task
+ - `id`: a `string` feature.
+ - `title`: a `string` feature.
+ - `context`: a `string` feature.
+ - `question`: a `string` feature.
+ - `answers`: a dictionary feature containing:
+   - `text`: a `string` feature.
+   - `answer_start`: a `int32` feature.
+
+ ### Data Splits
+
+ | name | train | validation |
+ | -------------- | -----: | ---------: |
+ | primary_task | 166916 | 18670 |
+ | secondary_task | 49881 | 5077 |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Citation Information
+
+ ```
+ @article{tydiqa,
+ title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
+ author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}
+ year = {2020},
+ journal = {Transactions of the Association for Computational Linguistics}
+ }
+
+ ```
+
+
+ ### Contributions
+
+ Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
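As a usage sketch for the card above: the two configurations (`primary_task` and `secondary_task`) and the fields listed under Data Fields can be inspected with the `datasets` library, assuming it is installed and the `tydiqa` dataset on the Hub is reachable:

```
from datasets import load_dataset

# Configuration names come from the dataset card above.
secondary = load_dataset("tydiqa", "secondary_task")

print(secondary)  # shows the train/validation splits and their row counts

# One SQuAD-style record: id, title, context, question, answers{text, answer_start}
example = secondary["validation"][0]
print(example["question"])
print(example["answers"]["text"], example["answers"]["answer_start"])
```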
tidyqa_answerable/.DS_Store ADDED
Binary file (6.15 kB).
 
tidyqa_answerable/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "validation"]}
tidyqa_answerable/train/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f11d6ec19d1be525da433ffc8050d7b03cd8b5e8c0903463c0fb5af12136b307
+ size 124768536
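As stored in the repository, the Arrow file is replaced by the three-line Git LFS pointer shown above (spec v1: a version URL, an `oid`, and a `size` in bytes); the actual ~124 MB payload lives in LFS storage. A small illustrative parser (hypothetical helper, just to show the pointer format):

```
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS v1 pointer into its key/value fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # byte size of the real file
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:f11d6ec19d1be525da433ffc8050d7b03cd8b5e8c0903463c0fb5af12136b307\n"
    "size 124768536\n"
)
print(parse_lfs_pointer(pointer))
```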
tidyqa_answerable/train/dataset_info.json ADDED
@@ -0,0 +1,67 @@
+ {
+   "builder_name": null,
+   "citation": "@article{tydiqa,\ntitle = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},\nauthor = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}\nyear = {2020},\njournal = {Transactions of the Association for Computational Linguistics}\n}",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.\nThe languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language\nexpresses -- such that we expect models performing well on this set to generalize across a large number of the languages\nin the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic\ninformation-seeking task and avoid priming effects, questions are written by people who want to know the answer, but\ndon\u2019t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without\nthe use of translation (unlike MLQA and XQuAD).",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "question_text": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "document_title": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "language": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "annotations": {
+       "answer_start": {
+         "feature": {
+           "dtype": "int64",
+           "id": null,
+           "_type": "Value"
+         },
+         "length": -1,
+         "id": null,
+         "_type": "Sequence"
+       },
+       "answer_text": {
+         "feature": {
+           "dtype": "string",
+           "id": null,
+           "_type": "Value"
+         },
+         "length": -1,
+         "id": null,
+         "_type": "Sequence"
+       }
+     },
+     "document_plaintext": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "document_url": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     }
+   },
+   "homepage": "https://github.com/google-research-datasets/tydiqa",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": null,
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
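The `features` block in this `dataset_info.json` corresponds to a `datasets.Features` schema: plain string columns plus an `annotations` dict holding two sequences. A rough Python equivalent of the same schema (a sketch, not taken from this repository's code):

```
from datasets import Features, Sequence, Value

features = Features({
    "question_text": Value("string"),
    "document_title": Value("string"),
    "language": Value("string"),
    "annotations": {
        "answer_start": Sequence(Value("int64")),   # "_type": "Sequence" of int64
        "answer_text": Sequence(Value("string")),   # "_type": "Sequence" of string
    },
    "document_plaintext": Value("string"),
    "document_url": Value("string"),
})

print(features)
```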
tidyqa_answerable/train/state.json ADDED
@@ -0,0 +1,21 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "7b2bb503a95f9504",
+   "_format_columns": [
+     "annotations",
+     "document_plaintext",
+     "document_title",
+     "document_url",
+     "language",
+     "question_text"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }
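`_format_columns`, `_format_type`, and `_output_all_columns` in `state.json` record the output format that was active when the split was saved; they map onto `Dataset.set_format` / `Dataset.with_format`. A minimal sketch of restricting the returned columns in the same way, assuming the split has been loaded with `load_from_disk` as above:

```
from datasets import load_from_disk

train = load_from_disk("tidyqa_answerable")["train"]

# Return only these columns from __getitem__; type=None keeps plain Python
# objects, matching "_format_type": null in state.json.
train.set_format(type=None, columns=["question_text", "annotations"])
print(train[0])
```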
tidyqa_answerable/validation/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35efea14cc86a4e9f2f0443534e04ee989bef903a0de7526d307670a96940e02
+ size 13574904
tidyqa_answerable/validation/dataset_info.json ADDED
@@ -0,0 +1,67 @@
+ {
+   "builder_name": null,
+   "citation": "@article{tydiqa,\ntitle = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},\nauthor = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}\nyear = {2020},\njournal = {Transactions of the Association for Computational Linguistics}\n}",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.\nThe languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language\nexpresses -- such that we expect models performing well on this set to generalize across a large number of the languages\nin the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic\ninformation-seeking task and avoid priming effects, questions are written by people who want to know the answer, but\ndon\u2019t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without\nthe use of translation (unlike MLQA and XQuAD).",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "question_text": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "document_title": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "language": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "annotations": {
+       "answer_start": {
+         "feature": {
+           "dtype": "int64",
+           "id": null,
+           "_type": "Value"
+         },
+         "length": -1,
+         "id": null,
+         "_type": "Sequence"
+       },
+       "answer_text": {
+         "feature": {
+           "dtype": "string",
+           "id": null,
+           "_type": "Value"
+         },
+         "length": -1,
+         "id": null,
+         "_type": "Sequence"
+       }
+     },
+     "document_plaintext": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "document_url": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     }
+   },
+   "homepage": "https://github.com/google-research-datasets/tydiqa",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": null,
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
tidyqa_answerable/validation/state.json ADDED
@@ -0,0 +1,21 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "a775741166635784",
+   "_format_columns": [
+     "annotations",
+     "document_plaintext",
+     "document_title",
+     "document_url",
+     "language",
+     "question_text"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }