system (HF staff) committed on
Commit
319045e
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,309 @@
---
annotations_creators:
- crowdsourced
language_creators:
- found
languages:
- en
licenses:
- apache-2-0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
---

# Dataset Card for SelQA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://github.com/emorynlp/selqa
- **Repository:** https://github.com/emorynlp/selqa
- **Paper:** https://arxiv.org/abs/1606.00851
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Tomasz Jurczyk <http://tomaszjurczyk.com/>, Jinho D. Choi <http://www.mathcs.emory.edu/~choi/home.html>

### Dataset Summary

SelQA: A New Benchmark for Selection-Based Question Answering. The dataset provides crowdsourced annotations for two selection-based question answering tasks: answer sentence selection and answer triggering.

### Supported Tasks and Leaderboards

Question answering, in two settings: answer sentence selection and answer triggering.

### Languages

English

## Dataset Structure

### Data Instances

An example from the `answer selection` set:
```
{
    "section": "Museums",
    "question": "Where are Rockefeller Museum and LA Mayer Institute for Islamic Art?",
    "article": "Israel",
    "is_paraphrase": true,
    "topic": "COUNTRY",
    "answers": [5],
    "candidates": [
        "The Israel Museum in Jerusalem is one of Israel's most important cultural institutions and houses the Dead Sea scrolls, along with an extensive collection of Judaica and European art.",
        "Israel's national Holocaust museum, Yad Vashem, is the world central archive of Holocaust-related information.",
        "Beth Hatefutsoth (the Diaspora Museum), on the campus of Tel Aviv University, is an interactive museum devoted to the history of Jewish communities around the world.",
        "Apart from the major museums in large cities, there are high-quality artspaces in many towns and \"kibbutzim\".",
        "\"Mishkan Le'Omanut\" on Kibbutz Ein Harod Meuhad is the largest art museum in the north of the country.",
        "Several Israeli museums are devoted to Islamic culture, including the Rockefeller Museum and the L. A. Mayer Institute for Islamic Art, both in Jerusalem.",
        "The Rockefeller specializes in archaeological remains from the Ottoman and other periods of Middle East history.",
        "It is also the home of the first hominid fossil skull found in Western Asia called Galilee Man.",
        "A cast of the skull is on display at the Israel Museum."
    ],
    "q_types": ["where"]
}
```
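In this format, `answers` holds integer indices into `candidates`. A minimal sketch of recovering the gold answer sentence(s) from such a record (plain Python; the shortened `record` here is a hypothetical stand-in for a full dataset row):

```python
def gold_sentences(record):
    """Return the candidate sentences flagged as answers.

    `answers` lists integer indices into `candidates`, as in the
    answer-selection analysis format shown above.
    """
    return [record["candidates"][i] for i in record["answers"]]


# Abbreviated record for illustration; real rows carry full sentences.
record = {
    "answers": [5],
    "candidates": [
        "s0", "s1", "s2", "s3", "s4",
        "Several Israeli museums are devoted to Islamic culture, "
        "including the Rockefeller Museum and the L. A. Mayer "
        "Institute for Islamic Art, both in Jerusalem.",
    ],
}
print(gold_sentences(record)[0][:23])  # → "Several Israeli museums"
```

An empty `answers` list simply yields no gold sentences, which is how the answer-triggering task represents unanswerable candidate sections.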

An example from the `answer triggering` set:
```
{
    "section": "Museums",
    "question": "Where are Rockefeller Museum and LA Mayer Institute for Islamic Art?",
    "article": "Israel",
    "is_paraphrase": true,
    "topic": "COUNTRY",
    "candidate_list": [
        {
            "article": "List of places in Jerusalem",
            "section": "List_of_places_in_Jerusalem-Museums",
            "answers": [],
            "candidates": [
                " Israel Museum *Shrine of the Book *Rockefeller Museum of Archeology Bible Lands Museum Jerusalem Yad Vashem Holocaust Museum L.A. Mayer Institute for Islamic Art Bloomfield Science Museum Natural History Museum Museum of Italian Jewish Art Ticho House Tower of David Jerusalem Tax Museum Herzl Museum Siebenberg House Museums.",
                "Museum on the Seam "
            ]
        },
        {
            "article": "Israel",
            "section": "Israel-Museums",
            "answers": [5],
            "candidates": [
                "The Israel Museum in Jerusalem is one of Israel's most important cultural institutions and houses the Dead Sea scrolls, along with an extensive collection of Judaica and European art.",
                "Israel's national Holocaust museum, Yad Vashem, is the world central archive of Holocaust-related information.",
                "Beth Hatefutsoth (the Diaspora Museum), on the campus of Tel Aviv University, is an interactive museum devoted to the history of Jewish communities around the world.",
                "Apart from the major museums in large cities, there are high-quality artspaces in many towns and \"kibbutzim\".",
                "\"Mishkan Le'Omanut\" on Kibbutz Ein Harod Meuhad is the largest art museum in the north of the country.",
                "Several Israeli museums are devoted to Islamic culture, including the Rockefeller Museum and the L. A. Mayer Institute for Islamic Art, both in Jerusalem.",
                "The Rockefeller specializes in archaeological remains from the Ottoman and other periods of Middle East history.",
                "It is also the home of the first hominid fossil skull found in Western Asia called Galilee Man.",
                "A cast of the skull is on display at the Israel Museum."
            ]
        },
        {
            "article": "L. A. Mayer Institute for Islamic Art",
            "section": "L._A._Mayer_Institute_for_Islamic_Art-Abstract",
            "answers": [],
            "candidates": [
                "The L.A. Mayer Institute for Islamic Art (Hebrew: \u05de\u05d5\u05d6\u05d9\u05d0\u05d5\u05df \u05dc.",
                "\u05d0.",
                "\u05de\u05d0\u05d9\u05e8 \u05dc\u05d0\u05de\u05e0\u05d5\u05ea \u05d4\u05d0\u05e1\u05dc\u05d0\u05dd) is a museum in Jerusalem, Israel, established in 1974.",
                "It is located in Katamon, down the road from the Jerusalem Theater.",
                "The museum houses Islamic pottery, textiles, jewelry, ceremonial objects and other Islamic cultural artifacts.",
                "It is not to be confused with the Islamic Museum, Jerusalem. "
            ]
        },
        {
            "article": "Islamic Museum, Jerusalem",
            "section": "Islamic_Museum,_Jerusalem-Abstract",
            "answers": [],
            "candidates": [
                "The Islamic Museum is a museum on the Temple Mount in the Old City section of Jerusalem.",
                "On display are exhibits from ten periods of Islamic history encompassing several Muslim regions.",
                "The museum is located adjacent to al-Aqsa Mosque.",
                "It is not to be confused with the L. A. Mayer Institute for Islamic Art, also a museum in Jerusalem. "
            ]
        },
        {
            "article": "L. A. Mayer Institute for Islamic Art",
            "section": "L._A._Mayer_Institute_for_Islamic_Art-Contemporary_Arab_art",
            "answers": [],
            "candidates": [
                "In 2008, a group exhibit of contemporary Arab art opened at L.A. Mayer Institute, the first show of local Arab art in an Israeli museum and the first to be mounted by an Arab curator.",
                "Thirteen Arab artists participated in the show. "
            ]
        }
    ],
    "q_types": ["where"]
}
```

An example from any of the `experiments` data:
```
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	The Israel Museum in Jerusalem is one of Israel 's most important cultural institutions and houses the Dead Sea scrolls , along with an extensive collection of Judaica and European art .	0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	Israel 's national Holocaust museum , Yad Vashem , is the world central archive of Holocaust - related information .	0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	Beth Hatefutsoth ( the Diaspora Museum ) , on the campus of Tel Aviv University , is an interactive museum devoted to the history of Jewish communities around the world .	0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	Apart from the major museums in large cities , there are high - quality artspaces in many towns and " kibbutzim " .	0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	" Mishkan Le'Omanut " on Kibbutz Ein Harod Meuhad is the largest art museum in the north of the country .	0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	Several Israeli museums are devoted to Islamic culture , including the Rockefeller Museum and the L. A. Mayer Institute for Islamic Art , both in Jerusalem .	1
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	The Rockefeller specializes in archaeological remains from the Ottoman and other periods of Middle East history .	0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	It is also the home of the first hominid fossil skull found in Western Asia called Galilee Man .	0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?	A cast of the skull is on display at the Israel Museum .	0
```
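The `experiments` files are released as `.tsv`. Assuming each row is tab-separated into the three columns documented below (tokenized question, tokenized candidate, 0/1 label), a row can be parsed with plain Python; the sample `row` here is an abbreviated, hypothetical line:

```python
def parse_row(row):
    """Split one experiments row into (question, candidate, label).

    Assumes tab-separated columns, as suggested by the released
    .tsv files; the label is 1 when the candidate contains the answer.
    """
    question, candidate, label = row.rstrip("\n").split("\t")
    return question, candidate, int(label)


row = (
    "Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ?"
    "\tSeveral Israeli museums are devoted to Islamic culture , both in Jerusalem ."
    "\t1"
)
q, c, y = parse_row(row)
print(y)  # → 1
```

Because the text columns are pre-tokenized (tokens separated by spaces), `q.split(" ")` recovers the token sequence without running a tokenizer.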

### Data Fields

#### Answer Selection
##### Data for Analysis

For analysis, the fields are:

* `question`: the question.
* `article`: the Wikipedia article related to this question.
* `section`: the section of the Wikipedia article related to this question.
* `topic`: the topic of this question; one of *MUSIC*, *TV*, *TRAVEL*, *ART*, *SPORT*, *COUNTRY*, *MOVIES*, *HISTORICAL EVENTS*, *SCIENCE*, *FOOD*.
* `q_types`: the list of question types; possible types are *what*, *why*, *when*, *who*, *where*, and *how*. If empty, none of these types was recognized in the question.
* `is_paraphrase`: *True* if this question is a paraphrase of some other question in this dataset; otherwise, *False*.
* `candidates`: the list of sentences in the related section.
* `answers`: the list of indices into `candidates` pointing at the sentences that contain the answer to this question.

##### Data for Experiments

For experiments, the three columns are:

* `0`: the question, with all tokens separated by spaces.
* `1`: a candidate sentence for the question, with all tokens separated by spaces.
* `2`: the label: `0` means the candidate does not contain the answer, `1` means it does.

#### Answer Triggering
##### Data for Analysis

For analysis, the fields are:

* `question`: the question.
* `article`: the Wikipedia article related to this question.
* `section`: the section of the Wikipedia article related to this question.
* `topic`: the topic of this question; one of *MUSIC*, *TV*, *TRAVEL*, *ART*, *SPORT*, *COUNTRY*, *MOVIES*, *HISTORICAL EVENTS*, *SCIENCE*, *FOOD*.
* `q_types`: the list of question types; possible types are *what*, *why*, *when*, *who*, *where*, and *how*. If empty, none of these types was recognized in the question.
* `is_paraphrase`: *True* if this question is a paraphrase of some other question in this dataset; otherwise, *False*.
* `candidate_list`: the list of 5 candidate sections, each with:
  * `article`: the title of the candidate article.
  * `section`: the section in the candidate article.
  * `candidates`: the list of sentences in this candidate section.
  * `answers`: the list of indices into `candidates` pointing at the sentences that contain the answer to this question (can be empty).
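The analysis and experiments formats are related by flattening: every sentence in every candidate section becomes one (question, sentence, label) row. A minimal sketch (plain Python; field names as documented above, with a tiny made-up record for illustration):

```python
def flatten(record):
    """Turn one answer-triggering analysis record into
    (question, sentence, label) rows, one per candidate sentence."""
    rows = []
    for sec in record["candidate_list"]:
        answers = set(sec["answers"])
        for i, sent in enumerate(sec["candidates"]):
            rows.append((record["question"], sent, int(i in answers)))
    return rows


# Toy record: two candidate sections, one answer sentence overall.
record = {
    "question": "Where is X?",
    "candidate_list": [
        {"answers": [], "candidates": ["a", "b"]},
        {"answers": [1], "candidates": ["c", "d"]},
    ],
}
rows = flatten(record)
print(len(rows), [y for _, _, y in rows])  # → 4 [0, 0, 0, 1]
```

Sections with empty `answers` contribute only negative rows, which is what makes answer triggering harder than answer selection: a system must tolerate questions whose candidate sections contain no answer at all.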

##### Data for Experiments

For experiments, the three columns are:

* `0`: the question, with all tokens separated by spaces.
* `1`: a candidate sentence for the question, with all tokens separated by spaces.
* `2`: the label: `0` means the candidate does not contain the answer, `1` means it does.

### Data Splits

|                   | Train | Valid | Test |
| ----------------- | ----- | ----- | ---- |
| Answer Selection  | 5529  | 785   | 1590 |
| Answer Triggering | 27645 | 3925  | 7950 |

## Dataset Creation

### Curation Rationale

To encourage research on, and to provide an initial benchmark for, the selection-based question answering and answer triggering tasks.

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

Crowdsourced

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop better selection-based question answering systems.

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

Apache License 2.0

### Citation Information

@InProceedings{7814688,
    author={T. {Jurczyk} and M. {Zhai} and J. D. {Choi}},
    booktitle={2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)},
    title={SelQA: A New Benchmark for Selection-Based Question Answering},
    year={2016},
    volume={},
    number={},
    pages={820-827},
    doi={10.1109/ICTAI.2016.0128}
}
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"answer_selection_analysis": {"description": "The SelQA dataset provides crowdsourced annotation for two selection-based question answer tasks, \nanswer sentence selection and answer triggering.\n", "citation": "@InProceedings{7814688,\n author={T. {Jurczyk} and M. {Zhai} and J. D. {Choi}},\n booktitle={2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)}, \n title={SelQA: A New Benchmark for Selection-Based Question Answering}, \n year={2016},\n volume={},\n number={},\n pages={820-827},\n doi={10.1109/ICTAI.2016.0128}\n}\n", "homepage": "", "license": "", "features": {"section": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"dtype": "string", "id": null, "_type": "Value"}, "is_paraphrase": {"dtype": "bool", "id": null, "_type": "Value"}, "topic": {"num_classes": 10, "names": ["MUSIC", "TV", "TRAVEL", "ART", "SPORT", "COUNTRY", "MOVIES", "HISTORICAL EVENTS", "SCIENCE", "FOOD"], "names_file": null, "id": null, "_type": "ClassLabel"}, "answers": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "candidates": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "q_types": {"feature": {"num_classes": 7, "names": ["what", "why", "when", "who", "where", "how", ""], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "selqa", "config_name": "answer_selection_analysis", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9676758, "num_examples": 5529, "dataset_name": "selqa"}, "test": {"name": "test", "num_bytes": 2798537, "num_examples": 1590, "dataset_name": "selqa"}, "validation": {"name": "validation", "num_bytes": 1378407, 
"num_examples": 785, "dataset_name": "selqa"}}, "download_checksums": {"https://raw.githubusercontent.com/emorynlp/selqa/master/ass/selqa-ass-train.json": {"num_bytes": 10320158, "checksum": "30622b7820bb2fa8e766d0ad3c7cf29dac658772cd763a9dabf81d9cab1fd534"}, "https://raw.githubusercontent.com/emorynlp/selqa/master/ass/selqa-ass-dev.json": {"num_bytes": 1470163, "checksum": "b4e6687e44a30b486e24d2b06aa3012ec07d61145f3521f35b7d49daae3e0ca4"}, "https://raw.githubusercontent.com/emorynlp/selqa/master/ass/selqa-ass-test.json": {"num_bytes": 2983123, "checksum": "ca1184d94cc9030883723fab76ef8180b3cf5fb142549a5648d22f59fe7c6fc6"}}, "download_size": 14773444, "post_processing_size": null, "dataset_size": 13853702, "size_in_bytes": 28627146}, "answer_selection_experiments": {"description": "The SelQA dataset provides crowdsourced annotation for two selection-based question answer tasks, \nanswer sentence selection and answer triggering.\n", "citation": "@InProceedings{7814688,\n author={T. {Jurczyk} and M. {Zhai} and J. D. 
{Choi}},\n booktitle={2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)}, \n title={SelQA: A New Benchmark for Selection-Based Question Answering}, \n year={2016},\n volume={},\n number={},\n pages={820-827},\n doi={10.1109/ICTAI.2016.0128}\n}\n", "homepage": "", "license": "", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "candidate": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "selqa", "config_name": "answer_selection_experiments", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 13782826, "num_examples": 66438, "dataset_name": "selqa"}, "test": {"name": "test", "num_bytes": 4008077, "num_examples": 19435, "dataset_name": "selqa"}, "validation": {"name": "validation", "num_bytes": 1954877, "num_examples": 9377, "dataset_name": "selqa"}}, "download_checksums": {"https://raw.githubusercontent.com/emorynlp/selqa/master/ass/selqa-ass-train.tsv": {"num_bytes": 12985514, "checksum": "9f40017c0bf97f2f5816fba5ac18c7eafb847a9e351d85584afaecd1296010db"}, "https://raw.githubusercontent.com/emorynlp/selqa/master/ass/selqa-ass-dev.tsv": {"num_bytes": 1842345, "checksum": "0f0d73b379bb4efc6e678e36b122ea17c957998a1d002e3c480b3bc7854f77a9"}, "https://raw.githubusercontent.com/emorynlp/selqa/master/ass/selqa-ass-test.tsv": {"num_bytes": 3774841, "checksum": "4129ffa31237eb7f673baf6313bdd7d01658000c253b45e195d235493a435b91"}}, "download_size": 18602700, "post_processing_size": null, "dataset_size": 19745780, "size_in_bytes": 38348480}, "answer_triggering_analysis": {"description": "The SelQA dataset provides crowdsourced annotation for two selection-based question answer tasks, \nanswer sentence selection and answer triggering.\n", "citation": 
"@InProceedings{7814688,\n author={T. {Jurczyk} and M. {Zhai} and J. D. {Choi}},\n booktitle={2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)}, \n title={SelQA: A New Benchmark for Selection-Based Question Answering}, \n year={2016},\n volume={},\n number={},\n pages={820-827},\n doi={10.1109/ICTAI.2016.0128}\n}\n", "homepage": "", "license": "", "features": {"section": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"dtype": "string", "id": null, "_type": "Value"}, "is_paraphrase": {"dtype": "bool", "id": null, "_type": "Value"}, "topic": {"num_classes": 10, "names": ["MUSIC", "TV", "TRAVEL", "ART", "SPORT", "COUNTRY", "MOVIES", "HISTORICAL EVENTS", "SCIENCE", "FOOD"], "names_file": null, "id": null, "_type": "ClassLabel"}, "q_types": {"feature": {"num_classes": 7, "names": ["what", "why", "when", "who", "where", "how", ""], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "candidate_list": {"feature": {"article": {"dtype": "string", "id": null, "_type": "Value"}, "section": {"dtype": "string", "id": null, "_type": "Value"}, "candidates": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answers": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "selqa", "config_name": "answer_triggering_analysis", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 30176650, "num_examples": 5529, "dataset_name": "selqa"}, "test": {"name": "test", "num_bytes": 8766787, "num_examples": 1590, "dataset_name": "selqa"}, "validation": {"name": "validation", "num_bytes": 4270904, 
"num_examples": 785, "dataset_name": "selqa"}}, "download_checksums": {"https://raw.githubusercontent.com/emorynlp/selqa/master/at/selqa-at-train.json": {"num_bytes": 32230643, "checksum": "6af1e82dbec94d2c87c0cd6463a0d7eba1dd746cbdc72f481697843c466f4952"}, "https://raw.githubusercontent.com/emorynlp/selqa/master/at/selqa-at-dev.json": {"num_bytes": 4562321, "checksum": "8cf266e9b8404e9ba1c062a1dbf43c79ae9bd2da929cb11351872c4f221815ac"}, "https://raw.githubusercontent.com/emorynlp/selqa/master/at/selqa-at-test.json": {"num_bytes": 9356712, "checksum": "38971e74506b74c808756fefb1816453eb1a3c3989f2feb77d864c93da468905"}}, "download_size": 46149676, "post_processing_size": null, "dataset_size": 43214341, "size_in_bytes": 89364017}, "answer_triggering_experiments": {"description": "The SelQA dataset provides crowdsourced annotation for two selection-based question answer tasks, \nanswer sentence selection and answer triggering.\n", "citation": "@InProceedings{7814688,\n author={T. {Jurczyk} and M. {Zhai} and J. D. 
{Choi}},\n booktitle={2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)}, \n title={SelQA: A New Benchmark for Selection-Based Question Answering}, \n year={2016},\n volume={},\n number={},\n pages={820-827},\n doi={10.1109/ICTAI.2016.0128}\n}\n", "homepage": "", "license": "", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "candidate": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "selqa", "config_name": "answer_triggering_experiments", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 42956518, "num_examples": 205075, "dataset_name": "selqa"}, "test": {"name": "test", "num_bytes": 12504961, "num_examples": 59845, "dataset_name": "selqa"}, "validation": {"name": "validation", "num_bytes": 6055616, "num_examples": 28798, "dataset_name": "selqa"}}, "download_checksums": {"https://raw.githubusercontent.com/emorynlp/selqa/master/at/selqa-at-train.tsv": {"num_bytes": 40495450, "checksum": "9cf58039e30583187e7e93e19043dceb2540d72fc13eb4eb09fd8147b3022346"}, "https://raw.githubusercontent.com/emorynlp/selqa/master/at/selqa-at-dev.tsv": {"num_bytes": 5710016, "checksum": "76466b282ab62353e029af4292acb658c0659860c716c637c3e5f3faa9c693d1"}, "https://raw.githubusercontent.com/emorynlp/selqa/master/at/selqa-at-test.tsv": {"num_bytes": 11786773, "checksum": "4151fa580983f7d3903ea70e71d5d86f20abe75cb975b7d77434ea2e978fc132"}}, "download_size": 57992239, "post_processing_size": null, "dataset_size": 61517095, "size_in_bytes": 119509334}}
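`dataset_infos.json` is one JSON object keyed by config name; each config records its features, split sizes, and download checksums. A small sketch of reading split sizes out of that structure (the fragment below mirrors the real file's shape but carries only the `num_examples` fields):

```python
import json

# Fragment mirroring dataset_infos.json: config name → "splits" →
# split name → metadata including "num_examples".
infos = json.loads("""
{
  "answer_selection_analysis": {
    "splits": {
      "train": {"num_examples": 5529},
      "test": {"num_examples": 1590},
      "validation": {"num_examples": 785}
    }
  }
}
""")

splits = infos["answer_selection_analysis"]["splits"]
total = sum(s["num_examples"] for s in splits.values())
print(total)  # → 7904
```

Loading the real file with `json.load(open("dataset_infos.json"))` gives the same structure with all four configs, so the same two lines report the size of any split without downloading the data itself.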
dummy/answer_selection_analysis/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b02f9623b26b7ba5f2905fd8bdd4b1a45206b232ad9b6bf7bc57b1855ca6b067
size 14661
dummy/answer_selection_experiments/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7fe2c6bdea312179a138bf5c1048a86d8f9e0419f6b2e4491a9213da092fe39f
size 1948
dummy/answer_triggering_analysis/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ff41c905f1027b837386f4e1a1ce46c7f6b82fd419f4eced087580c969587f6
size 40685
dummy/answer_triggering_experiments/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b6f52a492342820072a0435fa73dc4ac344e5cf7a084eecb493f3fa22b21d70c
size 1926
selqa.py ADDED
@@ -0,0 +1,301 @@
1
+ # coding=utf-8
2
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """SelQA: A New Benchmark for Selection-Based Question Answering"""
16
+
17
+ from __future__ import absolute_import, division, print_function
18
+
19
+ import csv
20
+ import json
21
+
22
+ import datasets
23
+
24
+
25
+ # TODO: Add BibTeX citation
26
+ # Find for instance the citation on arxiv or on the dataset repo/website
27
+ _CITATION = """\
28
+ @InProceedings{7814688,
29
+ author={T. {Jurczyk} and M. {Zhai} and J. D. {Choi}},
30
+ booktitle={2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)},
31
+ title={SelQA: A New Benchmark for Selection-Based Question Answering},
32
+ year={2016},
33
+ volume={},
34
+ number={},
35
+ pages={820-827},
36
+ doi={10.1109/ICTAI.2016.0128}
37
+ }
38
+ """
39
+
40
+ # TODO: Add description of the dataset here
41
+ # You can copy an official description
42
+ _DESCRIPTION = """\
43
+ The SelQA dataset provides crowdsourced annotation for two selection-based question answer tasks,
44
+ answer sentence selection and answer triggering.
45
+ """
46
+
47
+ # TODO: Add a link to an official homepage for the dataset here
48
+ _HOMEPAGE = ""
49
+
50
+ # TODO: Add the licence for the dataset here if you can find it
51
+ _LICENSE = ""
52
+
53
+ # TODO: Add link to the official dataset URLs here
54
+ # The HuggingFace dataset library don't host the datasets but only point to the original files
55
+ # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
56
+ types = {
57
+ "answer_selection": "ass",
58
+ "answer_triggering": "at",
59
+ }
60
+
61
+ modes = {"analysis": "json", "experiments": "tsv"}
62
+
63
+
64
+ class SelqaConfig(datasets.BuilderConfig):
65
+ """"BuilderConfig for SelQA Dataset"""
66
+
67
+ def __init__(self, mode, type_, **kwargs):
68
+ super(SelqaConfig, self).__init__(**kwargs)
69
+ self.mode = mode
70
+ self.type_ = type_
71
+
72
+
73
+ # TODO: Name of the dataset usually match the script name with CamelCase instead of snake_case
74
+ class Selqa(datasets.GeneratorBasedBuilder):
75
+ """A New Benchmark for Selection-based Question Answering."""
76
+
77
+ VERSION = datasets.Version("1.1.0")
78
+
79
+ # This is an example of a dataset with multiple configurations.
80
+ # If you don't want/need to define several sub-sets in your dataset,
81
+ # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
82
+
83
+ # If you need to make complex sub-parts in the datasets with configurable options
84
+ # You can create your own builder configuration class to store attribute, inheriting from datasets.BuilderConfig
85
+ BUILDER_CONFIG_CLASS = SelqaConfig
86
+
87
+ # You will be able to load one or the other configurations in the following list with
88
+ # data = datasets.load_dataset('my_dataset', 'first_domain')
89
+ # data = datasets.load_dataset('my_dataset', 'second_domain')
90
+     BUILDER_CONFIGS = [
+         SelqaConfig(
+             name="answer_selection_analysis",
+             mode="analysis",
+             type_="answer_selection",
+             version=VERSION,
+             description="This part covers answer selection analysis",
+         ),
+         SelqaConfig(
+             name="answer_selection_experiments",
+             mode="experiments",
+             type_="answer_selection",
+             version=VERSION,
+             description="This part covers answer selection experiments",
+         ),
+         SelqaConfig(
+             name="answer_triggering_analysis",
+             mode="analysis",
+             type_="answer_triggering",
+             version=VERSION,
+             description="This part covers answer triggering analysis",
+         ),
+         SelqaConfig(
+             name="answer_triggering_experiments",
+             mode="experiments",
+             type_="answer_triggering",
+             version=VERSION,
+             description="This part covers answer triggering experiments",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "answer_selection_analysis"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+
+     def _info(self):
+         if (
+             self.config.mode == "experiments"
+         ):  # `mode` comes from the configuration selected in BUILDER_CONFIGS above
+             features = datasets.Features(
+                 {
+                     "question": datasets.Value("string"),
+                     "candidate": datasets.Value("string"),
+                     "label": datasets.ClassLabel(names=["0", "1"]),
+                 }
+             )
+         else:
+             if self.config.type_ == "answer_selection":
+                 features = datasets.Features(
+                     {
+                         "section": datasets.Value("string"),
+                         "question": datasets.Value("string"),
+                         "article": datasets.Value("string"),
+                         "is_paraphrase": datasets.Value("bool"),
+                         "topic": datasets.ClassLabel(
+                             names=[
+                                 "MUSIC",
+                                 "TV",
+                                 "TRAVEL",
+                                 "ART",
+                                 "SPORT",
+                                 "COUNTRY",
+                                 "MOVIES",
+                                 "HISTORICAL EVENTS",
+                                 "SCIENCE",
+                                 "FOOD",
+                             ]
+                         ),
+                         "answers": datasets.Sequence(datasets.Value("int32")),
+                         "candidates": datasets.Sequence(datasets.Value("string")),
+                         "q_types": datasets.Sequence(
+                             datasets.ClassLabel(names=["what", "why", "when", "who", "where", "how", ""])
+                         ),
+                     }
+                 )
+             else:
+                 features = datasets.Features(
+                     {
+                         "section": datasets.Value("string"),
+                         "question": datasets.Value("string"),
+                         "article": datasets.Value("string"),
+                         "is_paraphrase": datasets.Value("bool"),
+                         "topic": datasets.ClassLabel(
+                             names=[
+                                 "MUSIC",
+                                 "TV",
+                                 "TRAVEL",
+                                 "ART",
+                                 "SPORT",
+                                 "COUNTRY",
+                                 "MOVIES",
+                                 "HISTORICAL EVENTS",
+                                 "SCIENCE",
+                                 "FOOD",
+                             ]
+                         ),
+                         "q_types": datasets.Sequence(
+                             datasets.ClassLabel(names=["what", "why", "when", "who", "where", "how", ""])
+                         ),
+                         "candidate_list": datasets.Sequence(
+                             {
+                                 "article": datasets.Value("string"),
+                                 "section": datasets.Value("string"),
+                                 "candidates": datasets.Sequence(datasets.Value("string")),
+                                 "answers": datasets.Sequence(datasets.Value("int32")),
+                             }
+                         ),
+                     }
+                 )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types;
+             # they are defined above because they differ between the configurations.
+             features=features,
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # This method is tasked with downloading/extracting the data and defining the splits, depending on the configuration.
+         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name.
+
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs.
+         # It can accept any type of nested list/dict and will give back the same structure with the URLs replaced by paths to local files.
+         # By default the archives will be extracted, and a path to a cached folder where they are extracted is returned instead of the archive.
+         urls = {
+             "train": f"https://raw.githubusercontent.com/emorynlp/selqa/master/{types[self.config.type_]}/selqa-{types[self.config.type_]}-train.{modes[self.config.mode]}",
+             "dev": f"https://raw.githubusercontent.com/emorynlp/selqa/master/{types[self.config.type_]}/selqa-{types[self.config.type_]}-dev.{modes[self.config.mode]}",
+             "test": f"https://raw.githubusercontent.com/emorynlp/selqa/master/{types[self.config.type_]}/selqa-{types[self.config.type_]}-test.{modes[self.config.mode]}",
+         }
+         data_dir = dl_manager.download_and_extract(urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": data_dir["train"],
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": data_dir["test"], "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": data_dir["dev"],
+                     "split": "dev",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         # This method receives as arguments the `gen_kwargs` defined in the `_split_generators` method above.
+         # It is in charge of opening the given file and yielding (key, example) tuples from the dataset.
+         # The key is not important; it's here mostly for legacy reasons (inherited from tfds).
+         with open(filepath, encoding="utf-8") as f:
+             if self.config.mode == "experiments":
+                 csv_reader = csv.DictReader(
+                     f, delimiter="\t", quoting=csv.QUOTE_NONE, fieldnames=["question", "candidate", "label"]
+                 )
+                 for id_, row in enumerate(csv_reader):
+                     yield id_, row
+             else:
+                 if self.config.type_ == "answer_selection":
+                     for row in f:
+                         data = json.loads(row)
+                         for id_, item in enumerate(data):
+                             yield id_, {
+                                 "section": item["section"],
+                                 "question": item["question"],
+                                 "article": item["article"],
+                                 "is_paraphrase": item["is_paraphrase"],
+                                 "topic": item["topic"],
+                                 "answers": item["answers"],
+                                 "candidates": item["candidates"],
+                                 "q_types": item["q_types"],
+                             }
+                 else:
+                     for row in f:
+                         data = json.loads(row)
+                         for id_, item in enumerate(data):
+                             candidate_list = []
+                             for entity in item["candidate_list"]:
+                                 candidate_list.append(
+                                     {
+                                         "article": entity["article"],
+                                         "section": entity["section"],
+                                         "answers": entity["answers"],
+                                         "candidates": entity["candidates"],
+                                     }
+                                 )
+                             yield id_, {
+                                 "section": item["section"],
+                                 "question": item["question"],
+                                 "article": item["article"],
+                                 "is_paraphrase": item["is_paraphrase"],
+                                 "topic": item["topic"],
+                                 "q_types": item["q_types"],
+                                 "candidate_list": candidate_list,
+                             }
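
The URL scheme used in `_split_generators` can be sanity-checked standalone. The sketch below mirrors the `types` and `modes` mappings from the script; `selqa_url` is a hypothetical helper introduced here for illustration, not part of the dataset script:

```python
# Standalone sketch of the URL construction from _split_generators.
# `selqa_url` is a hypothetical helper, not part of the selqa script itself.
types = {"answer_selection": "ass", "answer_triggering": "at"}
modes = {"analysis": "json", "experiments": "tsv"}


def selqa_url(type_: str, mode: str, split: str) -> str:
    """Build the raw GitHub URL for one SelQA data file."""
    t = types[type_]
    return (
        "https://raw.githubusercontent.com/emorynlp/selqa/master/"
        f"{t}/selqa-{t}-{split}.{modes[mode]}"
    )


print(selqa_url("answer_selection", "analysis", "train"))
# https://raw.githubusercontent.com/emorynlp/selqa/master/ass/selqa-ass-train.json
```

Note how the same three-split pattern (`train`, `dev`, `test`) applies to all four configurations; only the directory (`ass`/`at`) and the extension (`json`/`tsv`) vary.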