Commit 3ff337e (1 parent: 3adfe29) by Qian

Add wikitablequestions dataset (#3870)

* Add wikitablequestions dataset

* Use TSV instead of CSV files for better support.

* Fix checksum for dataset wikitablequestions - pass all tests.

* Fix the answer as a sequence instead of a string.

* Reduce the dummy data size.

* Fix the answer name and the table example JSON.

* Fix the answer as a sequence instead of a string.

* Fix the dummy data files.

* Fix the skip on streaming mode.

* Remove other dummy data.

Commit from https://github.com/huggingface/datasets/commit/b2af98ca83f4509a0c885c3187bfe97f38c9d99c
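The changelog above mentions a fix for streaming mode. As a quick sanity check, a minimal sketch (assuming the dataset is published on the Hub under the id `wikitablequestions` and that the installed `datasets` version supports streaming):

```python
from datasets import load_dataset

# Stream examples without materializing the full ~28 MB archive up front
# (hypothetical usage; assumes the Hub id "wikitablequestions").
ds = load_dataset("wikitablequestions", "random-split-1", streaming=True)
example = next(iter(ds["train"]))
print(example["question"], example["answers"])
```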

README.md ADDED
@@ -0,0 +1,189 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-4.0
+ multilinguality:
+ - monolingual
+ paperswithcode_id: null
+ pretty_name: WikiTableQuestions
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - question-answering-other-table-question-answering
+ ---
+
+ # Dataset Card for WikiTableQuestions
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [WikiTableQuestions homepage](https://nlp.stanford.edu/software/sempre/wikitable)
+ - **Repository:** [WikiTableQuestions repository](https://github.com/ppasupat/WikiTableQuestions)
+ - **Paper:** [Compositional Semantic Parsing on Semi-Structured Tables](https://arxiv.org/abs/1508.00305)
+ - **Leaderboard:** [WikiTableQuestions leaderboard on Papers with Code](https://paperswithcode.com/dataset/wikitablequestions)
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
+
+ ### Supported Tasks and Leaderboards
+
+ question-answering, table-question-answering
+
+ ### Languages
+
+ en
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### default
+
+ - **Size of downloaded dataset files:** 27.91 MB
+ - **Size of the generated dataset:** 45.68 MB
+ - **Total amount of disk used:** 73.60 MB
+
+ An example of 'validation' looks as follows:
+ ```
+ {
+   "id": "nt-0",
+   "question": "what was the last year where this team was a part of the usl a-league?",
+   "answers": ["2004"],
+   "table": {
+     "header": ["Year", "Division", "League", ...],
+     "name": "csv/204-csv/590.csv",
+     "rows": [
+       ["2001", "2", "USL A-League", ...],
+       ["2002", "2", "USL A-League", ...],
+       ...
+     ]
+   }
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ #### default
+ - `id`: a `string` feature.
+ - `question`: a `string` feature.
+ - `answers`: a `list` of `string` features.
+ - `table`: a dictionary feature containing:
+   - `header`: a `list` of `string` features.
+   - `rows`: a `list` of `list`s of `string` features.
+   - `name`: a `string` feature.
+
+ ### Data Splits
+
+ | name    | train | validation | test |
+ |---------|------:|-----------:|-----:|
+ | default | 11321 |       2831 | 4344 |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Panupong Pasupat and Percy Liang
+
+ ### Licensing Information
+
+ Creative Commons Attribution Share Alike 4.0 International
+
+ ### Citation Information
+
+ ```
+ @inproceedings{pasupat-liang-2015-compositional,
+     title = "Compositional Semantic Parsing on Semi-Structured Tables",
+     author = "Pasupat, Panupong and Liang, Percy",
+     booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
+     month = jul,
+     year = "2015",
+     address = "Beijing, China",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/P15-1142",
+     doi = "10.3115/v1/P15-1142",
+     pages = "1470--1480",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@SivilTaram](https://github.com/SivilTaram) for adding this dataset.
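To make the schema above concrete, here is a minimal loading sketch (assuming the `datasets` library and the Hub id `wikitablequestions`; field names follow the Data Fields section):

```python
from datasets import load_dataset

# Loads the default configuration, "random-split-1".
ds = load_dataset("wikitablequestions")

print(ds)  # DatasetDict with train (11321), validation (2831), and test (4344) examples

ex = ds["validation"][0]
print(ex["id"])                # e.g. "nt-0"
print(ex["question"])          # the natural-language question
print(ex["answers"])           # a list of answer strings, e.g. ["2004"]
print(ex["table"]["header"])   # column names of the supporting table
print(ex["table"]["rows"][0])  # first data row, a list of strings
```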
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"random-split-1": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-1", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 30364389, "num_examples": 11321, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 7145768, "num_examples": 2831, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}, "random-split-2": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, 
"_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-2", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 30098954, "num_examples": 11314, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 7411203, "num_examples": 2838, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}, "random-split-3": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-3", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 28778697, "num_examples": 11314, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 8731460, 
"num_examples": 2838, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}, "random-split-4": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-4", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 30166421, "num_examples": 11321, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 7343736, "num_examples": 2831, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}, "random-split-5": {"description": "This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.\n", "citation": "@inproceedings{pasupat-liang-2015-compositional,\n title = \"Compositional Semantic Parsing on Semi-Structured Tables\",\n author = \"Pasupat, Panupong and Liang, Percy\",\n booktitle = \"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n month = jul,\n year = \"2015\",\n address = \"Beijing, China\",\n publisher = 
\"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P15-1142\",\n doi = \"10.3115/v1/P15-1142\",\n pages = \"1470--1480\",\n}\n", "homepage": "https://nlp.stanford.edu/software/sempre/wikitable", "license": "Creative Commons Attribution Share Alike 4.0 International", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "table": {"header": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "rows": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "name": {"dtype": "string", "id": null, "_type": "Value"}}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_table_questions", "config_name": "random-split-5", "version": {"version_str": "1.0.2", "description": null, "major": 1, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 30333964, "num_examples": 11316, "dataset_name": "wiki_table_questions"}, "test": {"name": "test", "num_bytes": 11423506, "num_examples": 4344, "dataset_name": "wiki_table_questions"}, "validation": {"name": "validation", "num_bytes": 7176193, "num_examples": 2836, "dataset_name": "wiki_table_questions"}}, "download_checksums": {"https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip": {"num_bytes": 29267445, "checksum": "7c9ca7cc1ccd75fe4be0255b44be63f7b566761005f4ee6ce67e51c129d8b085"}}, "download_size": 29267445, "post_processing_size": null, "dataset_size": 48933663, "size_in_bytes": 78201108}}
dummy/random-split-1/1.0.2/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:117d768167b66026421067e7ca8c9ad2bc5d32a4c7e622e77d5fbbb2843c4ef5
+ size 56682
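The dummy archive is tracked with Git LFS, so the repository stores only the pointer above (spec version, sha256 oid, byte size); the actual zip lives in LFS storage. A small sketch for checking a local copy against those pointer fields (the file path is hypothetical):

```python
import hashlib
import os

def matches_lfs_pointer(path: str, oid: str, size: int) -> bool:
    """Return True if the file at `path` matches a git-lfs pointer's sha256 oid and size."""
    if os.path.getsize(path) != size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == oid

# Hypothetical local path; oid and size come from the pointer file above.
# matches_lfs_pointer("dummy_data.zip",
#                     "117d768167b66026421067e7ca8c9ad2bc5d32a4c7e622e77d5fbbb2843c4ef5",
#                     56682)
```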
wikitablequestions.py ADDED
@@ -0,0 +1,184 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables."""
+
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{pasupat-liang-2015-compositional,
+     title = "Compositional Semantic Parsing on Semi-Structured Tables",
+     author = "Pasupat, Panupong and Liang, Percy",
+     booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
+     month = jul,
+     year = "2015",
+     address = "Beijing, China",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/P15-1142",
+     doi = "10.3115/v1/P15-1142",
+     pages = "1470--1480",
+ }
+ """
+
+ _DESCRIPTION = """\
+ This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
+ """
+
+ _HOMEPAGE = "https://nlp.stanford.edu/software/sempre/wikitable"
+
+ _LICENSE = "Creative Commons Attribution Share Alike 4.0 International"
+
+ # The HuggingFace Datasets library doesn't host the dataset but only points to the original files:
+ # a single zip archive, released upstream, that contains all questions, splits, and tables.
+ _DATA_URL = (
+     "https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip"
+ )
+
+
+ class WikiTableQuestions(datasets.GeneratorBasedBuilder):
+     """WikiTableQuestions: a large-scale dataset for the task of question answering on semi-structured tables."""
+
+     VERSION = datasets.Version("1.0.2")
+
+     # The upstream release ships five random train/dev partitions of the same data; each is exposed
+     # as its own configuration, loadable with e.g.
+     #     data = datasets.load_dataset("wikitablequestions", "random-split-1")
+     # All five configurations share the same test file (pristine-unseen-tables.tsv).
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="random-split-1",
+             version=VERSION,
+             description="The random-split-1-train/dev.tsv and pristine-unseen-tables.tsv",
+         ),
+         datasets.BuilderConfig(
+             name="random-split-2",
+             version=VERSION,
+             description="The random-split-2-train/dev.tsv and pristine-unseen-tables.tsv",
+         ),
+         datasets.BuilderConfig(
+             name="random-split-3",
+             version=VERSION,
+             description="The random-split-3-train/dev.tsv and pristine-unseen-tables.tsv",
+         ),
+         datasets.BuilderConfig(
+             name="random-split-4",
+             version=VERSION,
+             description="The random-split-4-train/dev.tsv and pristine-unseen-tables.tsv",
+         ),
+         datasets.BuilderConfig(
+             name="random-split-5",
+             version=VERSION,
+             description="The random-split-5-train/dev.tsv and pristine-unseen-tables.tsv",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = (
+         "random-split-1"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+     )
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("string"),
+                 "question": datasets.Value("string"),
+                 "answers": datasets.features.Sequence(datasets.Value("string")),
+                 "table": {
+                     "header": datasets.features.Sequence(datasets.Value("string")),
+                     "rows": datasets.features.Sequence(datasets.features.Sequence(datasets.Value("string"))),
+                     "name": datasets.Value("string"),
+                 },
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types;
+             # the features are identical across all five configurations.
+             features=features,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         train_file = "{}-train.tsv".format(self.config.name)
+         dev_file = "{}-dev.tsv".format(self.config.name)
+         test_file = "pristine-unseen-tables.tsv"
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
+         urls = _DATA_URL
+         root_dir = os.path.join(dl_manager.download_and_extract(urls), "WikiTableQuestions")
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"main_filepath": os.path.join(root_dir, "data", train_file), "root_dir": root_dir},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"main_filepath": os.path.join(root_dir, "data", test_file), "root_dir": root_dir},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"main_filepath": os.path.join(root_dir, "data", dev_file), "root_dir": root_dir},
+             ),
+         ]
+
+     def _read_table_from_file(self, table_name: str, root_dir: str):
+         def _extract_table_content(_line: str):
+             _vals = [_.replace("\n", " ").strip() for _ in _line.strip("\n").split("\t")]
+             return _vals
+
+         rows = []
+         # use the normalized (tab-separated) table file rather than the raw CSV
+         table_name = table_name.replace(".csv", ".tsv")
+         with open(os.path.join(root_dir, table_name), "r", encoding="utf8") as table_f:
+             table_lines = table_f.readlines()
+             # the first line is the header
+             header = _extract_table_content(table_lines[0])
+             for line in table_lines[1:]:
+                 rows.append(_extract_table_content(line))
+         return {"header": header, "rows": rows, "name": table_name}
+
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, main_filepath, root_dir):
+         # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
+         with open(main_filepath, encoding="utf-8") as f:
+             # skip the first line since it is the tsv header
+             next(f)
+             for idx, line in enumerate(f):
+                 example_id, question, table_name, answer = line.strip("\n").split("\t")
+                 # multiple answers are separated by "|"
+                 answer = answer.split("|")
+                 # the returned dict must contain the header, rows, and name keys
+                 table_content = self._read_table_from_file(table_name, root_dir)
+
+                 yield idx, {"id": example_id, "question": question, "answers": answer, "table": table_content}
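For reference, the per-line parsing done in `_generate_examples` can be exercised standalone. The sketch below mirrors the loader's logic on a fabricated TSV row (reusing the validation example shown in the dataset card):

```python
# Format of one row in random-split-1-train.tsv:
# id <TAB> question <TAB> table file <TAB> answers joined by "|"
line = "nt-0\twhat was the last year where this team was a part of the usl a-league?\tcsv/204-csv/590.csv\t2004\n"

example_id, question, table_name, answer = line.strip("\n").split("\t")
answers = answer.split("|")                      # multi-answer questions yield several items
table_file = table_name.replace(".csv", ".tsv")  # the loader reads the normalized TSV copy

print(example_id)  # nt-0
print(answers)     # ['2004']
print(table_file)  # csv/204-csv/590.tsv
```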