shivangibithel committed
Commit 8d1b163
1 Parent(s): 519a53e

Upload 3 files

Files changed (3):
  1. README.md +336 -0
  2. dataset_infos.json +1 -0
  3. sotab.py +184 -0
README.md ADDED
@@ -0,0 +1,336 @@
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: WikiTableQuestions
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
tags:
- table-question-answering
dataset_info:
- config_name: random-split-1
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: table
    struct:
    - name: header
      sequence: string
    - name: rows
      sequence:
        sequence: string
    - name: name
      dtype: string
  splits:
  - name: train
    num_bytes: 30364389
    num_examples: 11321
  - name: test
    num_bytes: 11423506
    num_examples: 4344
  - name: validation
    num_bytes: 7145768
    num_examples: 2831
  download_size: 29267445
  dataset_size: 48933663
- config_name: random-split-2
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: table
    struct:
    - name: header
      sequence: string
    - name: rows
      sequence:
        sequence: string
    - name: name
      dtype: string
  splits:
  - name: train
    num_bytes: 30098954
    num_examples: 11314
  - name: test
    num_bytes: 11423506
    num_examples: 4344
  - name: validation
    num_bytes: 7411203
    num_examples: 2838
  download_size: 29267445
  dataset_size: 48933663
- config_name: random-split-3
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: table
    struct:
    - name: header
      sequence: string
    - name: rows
      sequence:
        sequence: string
    - name: name
      dtype: string
  splits:
  - name: train
    num_bytes: 28778697
    num_examples: 11314
  - name: test
    num_bytes: 11423506
    num_examples: 4344
  - name: validation
    num_bytes: 8731460
    num_examples: 2838
  download_size: 29267445
  dataset_size: 48933663
- config_name: random-split-4
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: table
    struct:
    - name: header
      sequence: string
    - name: rows
      sequence:
        sequence: string
    - name: name
      dtype: string
  splits:
  - name: train
    num_bytes: 30166421
    num_examples: 11321
  - name: test
    num_bytes: 11423506
    num_examples: 4344
  - name: validation
    num_bytes: 7343736
    num_examples: 2831
  download_size: 29267445
  dataset_size: 48933663
- config_name: random-split-5
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: table
    struct:
    - name: header
      sequence: string
    - name: rows
      sequence:
        sequence: string
    - name: name
      dtype: string
  splits:
  - name: train
    num_bytes: 30333964
    num_examples: 11316
  - name: test
    num_bytes: 11423506
    num_examples: 4344
  - name: validation
    num_bytes: 7176193
    num_examples: 2836
  download_size: 29267445
  dataset_size: 48933663
---

# Dataset Card for WikiTableQuestions

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [WikiTableQuestions homepage](https://nlp.stanford.edu/software/sempre/wikitable)
- **Repository:** [WikiTableQuestions repository](https://github.com/ppasupat/WikiTableQuestions)
- **Paper:** [Compositional Semantic Parsing on Semi-Structured Tables](https://arxiv.org/abs/1508.00305)
- **Leaderboard:** [WikiTableQuestions leaderboard on Papers with Code](https://paperswithcode.com/dataset/wikitablequestions)
- **Point of Contact:** [Needs More Information]

### Dataset Summary

The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.

### Supported Tasks and Leaderboards

question-answering, table-question-answering
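
A minimal loading sketch with the `datasets` library (the assumption here is that the builder script from this commit is saved locally as `sotab.py`; the path and config name are illustrative):

```
from datasets import load_dataset

# "random-split-1" is one of the five configurations declared in this card's metadata.
dataset = load_dataset("./sotab.py", "random-split-1")

print(dataset)  # DatasetDict with "train", "validation", and "test" splits
```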

### Languages

en

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 29.27 MB
- **Size of the generated dataset:** 48.93 MB
- **Total amount of disk used:** 78.20 MB

An example of 'validation' looks as follows:
```
{
    "id": "nt-0",
    "question": "what was the last year where this team was a part of the usl a-league?",
    "answers": ["2004"],
    "table": {
        "header": ["Year", "Division", "League", ...],
        "name": "csv/204-csv/590.csv",
        "rows": [
            ["2001", "2", "USL A-League", ...],
            ["2002", "2", "USL A-League", ...],
            ...
        ]
    }
}
```

### Data Fields

The data fields are the same among all splits.

#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a `list` of `string` features.
- `table`: a dictionary feature containing:
  - `header`: a `list` of `string` features.
  - `rows`: a `list` of `list` of `string` features.
  - `name`: a `string` feature.
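
Because `table` is a plain dict of `header` and `rows`, it is straightforward to materialize as a DataFrame for inspection. A sketch, assuming `pandas` is installed and `dataset` was loaded as in the sketch above:

```
import pandas as pd

example = dataset["validation"][0]
# Build a DataFrame from the table's rows, labeled with its header.
table = pd.DataFrame(example["table"]["rows"], columns=example["table"]["header"])

print(example["question"], example["answers"])
print(table.head())
```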

### Data Splits

| name           | train | validation | test |
|----------------|------:|-----------:|-----:|
| random-split-1 | 11321 |       2831 | 4344 |
| random-split-2 | 11314 |       2838 | 4344 |
| random-split-3 | 11314 |       2838 | 4344 |
| random-split-4 | 11321 |       2831 | 4344 |
| random-split-5 | 11316 |       2836 | 4344 |

All five configurations share the same test set (`pristine-unseen-tables.tsv`); they differ only in how the remaining examples are divided between train and validation.
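
The shared test split can be verified directly; an illustrative check, reusing the local-script assumption from the loading sketch above:

```
from datasets import load_dataset

test_1 = load_dataset("./sotab.py", "random-split-1", split="test")
test_2 = load_dataset("./sotab.py", "random-split-2", split="test")

# Both configs read pristine-unseen-tables.tsv, so the test sets coincide.
assert test_1.num_rows == test_2.num_rows == 4344
```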

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

Panupong Pasupat and Percy Liang

### Licensing Information

Creative Commons Attribution Share Alike 4.0 International

### Citation Information

```
@inproceedings{pasupat-liang-2015-compositional,
    title = "Compositional Semantic Parsing on Semi-Structured Tables",
    author = "Pasupat, Panupong and Liang, Percy",
    booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = jul,
    year = "2015",
    address = "Beijing, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P15-1142",
    doi = "10.3115/v1/P15-1142",
    pages = "1470--1480",
}
```

### Contributions

Thanks to [@SivilTaram](https://github.com/SivilTaram) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"random-split-1": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-1", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 30364389, "num_examples": 11321, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 7145768, "num_examples": 2831, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}, "random-split-2": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-2", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 30098954, "num_examples": 11314, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 7411203, "num_examples": 2838, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}, "random-split-3": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-3", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 28778697, "num_examples": 11314, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 8731460, "num_examples": 2838, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}, "random-split-4": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-4", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 30166421, "num_examples": 11321, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 7343736, "num_examples": 2831, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}, "random-split-5": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-5", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 30333964, "num_examples": 11316, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 7176193, "num_examples": 2836, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}}
sotab.py ADDED
@@ -0,0 +1,184 @@
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables."""

import os

import datasets


# Find for instance the citation on arxiv or on the dataset repo/website
_CITATION = """\
@inproceedings{pasupat-liang-2015-compositional,
    title = "Compositional Semantic Parsing on Semi-Structured Tables",
    author = "Pasupat, Panupong and Liang, Percy",
    booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = jul,
    year = "2015",
    address = "Beijing, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P15-1142",
    doi = "10.3115/v1/P15-1142",
    pages = "1470--1480",
}
"""

# You can copy an official description
_DESCRIPTION = """\
This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
"""

_HOMEPAGE = "https://nlp.stanford.edu/software/sempre/wikitable"

_LICENSE = "Creative Commons Attribution Share Alike 4.0 International"

# The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
# This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
_DATA_URL = (
    "https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip"
)


class WikiTableQuestions(datasets.GeneratorBasedBuilder):
    """WikiTableQuestions: a large-scale dataset for the task of question answering on semi-structured tables."""

    VERSION = datasets.Version("1.0.2")

    # This is an example of a dataset with multiple configurations.
    # If you don't want/need to define several sub-sets in your dataset,
    # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.

    # If you need to make complex sub-parts in the datasets with configurable options,
    # you can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig
    # BUILDER_CONFIG_CLASS = MyBuilderConfig

    # You will be able to load one or the other configuration in the following list with
    # data = datasets.load_dataset('my_dataset', 'first_domain')
    # data = datasets.load_dataset('my_dataset', 'second_domain')
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="random-split-1",
            version=VERSION,
            description="The random-split-1-train/dev.tsv and pristine-unseen-tables.tsv",
        ),
        datasets.BuilderConfig(
            name="random-split-2",
            version=VERSION,
            description="The random-split-2-train/dev.tsv and pristine-unseen-tables.tsv",
        ),
        datasets.BuilderConfig(
            name="random-split-3",
            version=VERSION,
            description="The random-split-3-train/dev.tsv and pristine-unseen-tables.tsv",
        ),
        datasets.BuilderConfig(
            name="random-split-4",
            version=VERSION,
            description="The random-split-4-train/dev.tsv and pristine-unseen-tables.tsv",
        ),
        datasets.BuilderConfig(
            name="random-split-5",
            version=VERSION,
            description="The random-split-5-train/dev.tsv and pristine-unseen-tables.tsv",
        ),
    ]

    DEFAULT_CONFIG_NAME = (
        "random-split-1"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
    )

    def _info(self):
        features = datasets.Features(
            {
                "id": datasets.Value("string"),
                "question": datasets.Value("string"),
                "answers": datasets.features.Sequence(datasets.Value("string")),
                "table": {
                    "header": datasets.features.Sequence(datasets.Value("string")),
                    "rows": datasets.features.Sequence(datasets.features.Sequence(datasets.Value("string"))),
                    "name": datasets.Value("string"),
                },
            }
        )
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types
            features=features,  # Defined above; identical across all five configurations.
            # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
            # specify them. They'll be used if as_supervised=True in builder.as_dataset.
            # supervised_keys=("sentence", "label"),
            # Homepage of the dataset for documentation
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        train_file = "{}-train.tsv".format(self.config.name)
        dev_file = "{}-dev.tsv".format(self.config.name)
        test_file = "pristine-unseen-tables.tsv"
        # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
        urls = _DATA_URL
        root_dir = os.path.join(dl_manager.download_and_extract(urls), "WikiTableQuestions")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={"main_filepath": os.path.join(root_dir, "data", train_file), "root_dir": root_dir},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={"main_filepath": os.path.join(root_dir, "data", test_file), "root_dir": root_dir},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={"main_filepath": os.path.join(root_dir, "data", dev_file), "root_dir": root_dir},
            ),
        ]

    def _read_table_from_file(self, table_name: str, root_dir: str):
        def _extract_table_content(_line: str):
            _vals = [_.replace("\n", " ").strip() for _ in _line.strip("\n").split("\t")]
            return _vals

        rows = []
        # assert ".csv" in table_name
        # use the normalized table file
        table_name = table_name.replace(".csv", ".tsv")
        with open(os.path.join(root_dir, table_name), "r", encoding="utf8") as table_f:
            table_lines = table_f.readlines()
            # the first line is the header
            header = _extract_table_content(table_lines[0])
            for line in table_lines[1:]:
                rows.append(_extract_table_content(line))
        return {"header": header, "rows": rows, "name": table_name}

    # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
    def _generate_examples(self, main_filepath, root_dir):
        # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
        with open(main_filepath, encoding="utf-8") as f:
            # skip the first line since it is the tsv header
            next(f)
            for idx, line in enumerate(f):
                example_id, question, table_name, answer = line.strip("\n").split("\t")
                answer = answer.split("|")
                # must contain "rows" and "header" keys
                table_content = self._read_table_from_file(table_name, root_dir)

                yield idx, {"id": example_id, "question": question, "answers": answer, "table": table_content}
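
End to end, the builder downloads the compact release archive once, then yields each question bundled with its fully materialized table. A usage sketch (again assuming the script is saved locally as `sotab.py`):

```
from datasets import load_dataset

dataset = load_dataset("./sotab.py", "random-split-5", split="validation")

for example in dataset.select(range(3)):
    # `answers` is always a list: _generate_examples splits the raw field on "|".
    print(example["id"], example["question"], example["answers"])
```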