system HF staff committed on
Commit
539215f
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
5
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.model filter=lfs diff=lfs merge=lfs -text
12
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
13
+ *.onnx filter=lfs diff=lfs merge=lfs -text
14
+ *.ot filter=lfs diff=lfs merge=lfs -text
15
+ *.parquet filter=lfs diff=lfs merge=lfs -text
16
+ *.pb filter=lfs diff=lfs merge=lfs -text
17
+ *.pt filter=lfs diff=lfs merge=lfs -text
18
+ *.pth filter=lfs diff=lfs merge=lfs -text
19
+ *.rar filter=lfs diff=lfs merge=lfs -text
20
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
21
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
22
+ *.tflite filter=lfs diff=lfs merge=lfs -text
23
+ *.tgz filter=lfs diff=lfs merge=lfs -text
24
+ *.xz filter=lfs diff=lfs merge=lfs -text
25
+ *.zip filter=lfs diff=lfs merge=lfs -text
26
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
27
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
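The patterns above route matching files through Git LFS. As a rough illustration only (git's own `.gitattributes` matching, e.g. for `saved_model/**/*`, differs in detail from Python's `fnmatch`), a quick check of a few of the patterns:

```python
# Illustrative sketch: approximate a few of the LFS patterns above with fnmatch.
import fnmatch

lfs_patterns = ["*.arrow", "*.bin", "*.zip", "*tfevents*"]

def tracked_by_lfs(path: str) -> bool:
    # True if any of the (approximated) patterns matches the path.
    return any(fnmatch.fnmatch(path, pattern) for pattern in lfs_patterns)

print(tracked_by_lfs("dummy/proto_qa/1.0.0/dummy_data.zip"))  # True
print(tracked_by_lfs("proto_qa.py"))                          # False
```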
README.md ADDED
@@ -0,0 +1,215 @@
1
+ ---
2
+ annotations_creators:
3
+ - crowdsourced
4
+ language_creators:
5
+ - crowdsourced
6
+ - other
7
+ languages:
8
+ - en
9
+ licenses:
10
+ - cc-by-4-0
11
+ multilinguality:
12
+ - monolingual
13
+ size_categories:
14
+ - 1K<n<10K
15
+ source_datasets:
16
+ - original
17
+ task_categories:
18
+ - question-answering
19
+ task_ids:
20
+ - multiple-choice-qa
21
+ - open-domain-qa
22
+ ---
23
+
24
+ # Dataset Card for ProtoQA
25
+
26
+ ## Table of Contents
27
+ - [Dataset Description](#dataset-description)
28
+ - [Dataset Summary](#dataset-summary)
29
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
30
+ - [Languages](#languages)
31
+ - [Dataset Structure](#dataset-structure)
32
+ - [Data Instances](#data-instances)
33
+ - [Data Fields](#data-fields)
34
+ - [Data Splits](#data-splits)
35
+ - [Dataset Creation](#dataset-creation)
36
+ - [Curation Rationale](#curation-rationale)
37
+ - [Source Data](#source-data)
38
+ - [Annotations](#annotations)
39
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
40
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
41
+ - [Social Impact of Dataset](#social-impact-of-dataset)
42
+ - [Discussion of Biases](#discussion-of-biases)
43
+ - [Other Known Limitations](#other-known-limitations)
44
+ - [Additional Information](#additional-information)
45
+ - [Dataset Curators](#dataset-curators)
46
+ - [Licensing Information](#licensing-information)
47
+ - [Citation Information](#citation-information)
48
+
49
+ ## Dataset Description
50
+
51
+ - **Interactive Demo:** [Interactive demo](http://protoqa.com)
52
+ - **Repository:** [proto_qa repository](https://github.com/iesl/protoqa-data)
53
+ - **Paper:** [proto_qa paper](https://arxiv.org/pdf/2005.00771.pdf)
54
+ - **Point of Contact:** [Michael Boratko](mailto:mboratko@cs.umass.edu)
55
+ [Xiang Lorraine Li](mailto:xiangl@cs.umass.edu)
56
+ [Tim O’Gorman](mailto:togorman@cs.umass.edu)
57
+ [Rajarshi Das](mailto:rajarshi@cs.umass.edu)
58
+ [Dan Le](mailto:dhle@cs.umass.edu)
59
+ [Andrew McCallum](mailto:mccallum@cs.umass.edu)
60
+
61
+
62
+ ### Dataset Summary
63
+
64
+ This dataset is for studying computational models trained to reason about prototypical situations. It is anticipated that it will not be used directly for a downstream task, but rather as a way of studying the knowledge (and biases) about prototypical situations already contained in pre-trained models. The data is partially based on the game show Family Feud.
65
+ The dataset was built by deterministically filtering and sampling from a larger set of all transcriptions. Scraped data was acquired through fan transcriptions at [family feud](https://www.familyfeudinfo.com) and [family feud friends](http://familyfeudfriends.arjdesigns.com/); crowdsourced data was acquired with Figure Eight (now Appen).
66
+
67
+ ### Supported Tasks and Leaderboards
68
+
69
+ [More Information Needed]
70
+
71
+ ### Languages
72
+
73
+ The text in the dataset is in English.
74
+
75
+ ## Dataset Structure
76
+
77
+ ### Data Instances
78
+
79
+ **What do the instances that comprise the dataset represent?**<br>
80
+ Each instance represents a survey question from the Family Feud game together with its reported answer clusters.
81
+
82
+ **How many instances are there in total?**<br>
83
+ 9789 instances
84
+
85
+ **What data does each instance consist of?**<br>
86
+ Each instance is a question, a set of answers, and a count associated with each answer.
87
+
88
+
89
+ ### Data Fields
90
+
91
+ **Data Files**<br>
92
+ Each line is a JSON dictionary, in which:<br>
93
+ **question** contains the question (in original and a normalized form)<br>
94
+ **answerstrings** contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight.<br>
95
+ **answer-clusters** is a list of clusters, with the count of each cluster and the strings included in that cluster. Each cluster is given a unique ID that can be linked to in the assessment files.
96
+
97
+ The simplified configuration includes the following fields (a short loading sketch follows this list):
98
+ - `question`: contains the original question
99
+ - `normalized-question`: contains the question in normalized form
100
+ - `totalcount`: the total count of answers collected for the question
101
+ - `id`: unique identifier of the question (can be used to look up the entry in the raw dataset)
102
+ - `source`: the source the question was collected from
103
+ - `answerstrings`: the original answer strings provided by survey respondents (when available), with their counts
104
+ - `answer-clusters` (named `answers-cleaned` in the `proto_qa_cs` configuration): a list of answer clusters, each with:
105
+ * `clusterid`: Each cluster is given a unique ID that can be linked to in the assessment files
106
+ * `count`: the count of each cluster
107
+ * `answers`: the strings included in that cluster
108
+
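A minimal sketch of reading these fields with the `datasets` library (assuming the dataset is available on the Hub under the name `proto_qa`; note that the library exposes the cluster list as a dict of parallel lists rather than a list of dicts):

```python
# Sketch: inspect one validation example of the default "proto_qa" configuration.
from datasets import load_dataset

dataset = load_dataset("proto_qa", "proto_qa", split="validation")
example = dataset[0]
print(example["normalized-question"], "| total count:", example["totalcount"])

# "answer-clusters" is a Sequence of {clusterid, count, answers}; it is returned
# as a dict of parallel lists, so iterate the columns together.
clusters = example["answer-clusters"]
for cid, count, answers in zip(clusters["clusterid"], clusters["count"], clusters["answers"]):
    print(cid, count, answers)
```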
109
+
110
+ In addition to the above, there is a crowdsourced assessments file. The `proto_qa_cs_assessments` config provides mappings from additional human and model answers to clusters, to evaluate different assessment methods.
111
+
112
+
113
+ **Assessment files**<br>
114
+
115
+ The file **data/dev/crowdsource_dev.assessments.jsonl** contains mappings from additional human and model answers to clusters, to evaluate different assessment methods.
116
+ Each line contains:<br>
117
+ * `question`: contains the ID of the question
118
+ * `assessments`: maps individual strings to one of three options: the answer cluster id, "invalid" if the answer is judged to be bad, or "valid_new_cluster" if the answer is valid but does not match any existing clusters. A short loading sketch follows this list.
119
+
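A sketch of loading the assessments configuration (field names as described above; the exact string format of each assessment entry is best checked against the raw file):

```python
# Sketch: load the crowdsourced assessments and peek at the first entry.
from datasets import load_dataset

assessments = load_dataset("proto_qa", "proto_qa_cs_assessments", split="validation")
row = assessments[0]
print(row["question"])         # question ID
print(row["assessments"][:5])  # assessment strings (cluster id / "invalid" / "valid_new_cluster")
```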
120
+ ### Data Splits
121
+
122
+ * proto_qa `Train`: 8781 instances for training or fine-tuning, scraped from Family Feud fan sites (see paper). Scraped data has answer clusters with sizes, but only a single string per cluster (corresponding to the original cluster name).
123
+ * proto_qa `Validation`: 979 instances sampled from the same Family Feud data, for use in model validation and development.
124
+
125
+ * proto_qa_cs `Validation`: 51 questions collected with exhaustive answer collection and manual clustering, matching the details of the eval test set (roughly 100 human answers per question).
126
+
127
+ * proto_qa_cs_assessments `Validation`: **data/dev/crowdsource_dev.assessments.jsonl**, the assessment file (format described above) for the study of assessment methods (a sketch for checking these split sizes follows this list).
128
+
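The sizes above can be checked with a short sketch (same assumptions as the earlier snippets):

```python
# Sketch: report the number of examples per split for the Family Feud configurations.
from datasets import load_dataset

proto_qa = load_dataset("proto_qa", "proto_qa")
print({name: split.num_rows for name, split in proto_qa.items()})  # train / validation

proto_qa_cs = load_dataset("proto_qa", "proto_qa_cs", split="validation")
print(proto_qa_cs.num_rows)  # crowdsourced validation questions
```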
129
+ ## Dataset Creation
130
+
131
+ ### Curation Rationale
132
+
133
+ [More Information Needed]
134
+
135
+ ### Source Data
136
+
137
+ #### Initial Data Collection and Normalization
138
+
139
+ **How was the data associated with each instance acquired?**<br>
140
+ Scraped data was acquired through fan transcriptions at https://www.familyfeudinfo.com and http://familyfeudfriends.arjdesigns.com/ ; crowdsourced data was acquired with FigureEight (now Appen)
141
+
142
+ **If the dataset is a sample from a larger set, what was the sampling strategy?**<br>
143
+ Deterministic filtering was used (noted elsewhere), but no probabilistic sampling was used.
144
+
145
+ **Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated?**<br>
146
+ Crowdworkers were used for the evaluation dataset. Time per task was calculated, and the per-task payment was set to attempt to provide a living wage.
147
+
148
+ **Over what timeframe was the data collected?**<br>
149
+ Crowdsourced answers were collected between Fall of 2018 and Spring of 2019. Scraped data covers question-answer pairs collected since the origin of the show in 1976.
150
+
151
+
152
+ #### Who are the source language producers?
153
+
154
+ [More Information Needed]
155
+
156
+ ### Annotations
157
+
158
+ #### Annotation process
159
+
160
+ **Was any preprocessing/cleaning/labeling of the data done?**<br>
161
+ Obvious typos in the crowdsourced answer set were corrected.
162
+
163
+ #### Who are the annotators?
164
+
165
+ The original question-answer pairs were generated by surveys of US English speakers in the period from 1976 to the present day. Crowdsourced evaluation was constrained geographically to US English speakers but not otherwise constrained. Additional demographic data was not collected.
166
+
167
+ ### Personal and Sensitive Information
168
+
169
+ **Does the dataset contain data that might be considered sensitive in any way?**<br>
170
+ As the questions address prototypical/stereotypical activities, models trained on more offensive material (such as large language models) may provide offensive answers to such questions. While we found a few questions which we worried would actually encourage models to provide offensive answers, we cannot guarantee that the data is free of such questions. Even a perfectly innocent version of this dataset would encourage models to express generalizations about situations, and may therefore surface offensive material that is contained in language models.
171
+
172
+ **Does the dataset contain data that might be considered confidential?**<br>
173
+ The data does not concern individuals and thus does not contain any information to identify persons. Crowdsourced answers do not provide any user identifiers.
174
+
175
+ ## Considerations for Using the Data
176
+
177
+ ### Social Impact of Dataset
178
+
179
+ **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**<br>
180
+ Not egregiously so (questions are all designed to be shown on television, or replications thereof).
181
+
182
+ ### Discussion of Biases
183
+
184
+ **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?**
185
+ <br>All original questions were written with US television audiences in mind, and therefore characterize prototypical situations with a specific lens. Any usages which deploy this to actually model prototypical situations globally will carry that bias.
186
+
187
+ **Are there tasks for which the dataset should not be used?**
188
+ <br>We caution regarding free-form use of this dataset for interactive "commonsense question answering" purposes without more study of the biases and stereotypes learned by such models.
189
+
190
+ ### Other Known Limitations
191
+
192
+ [More Information Needed]
193
+
194
+ ## Additional Information
195
+
196
+ ### Dataset Curators
197
+
198
+ The listed authors are maintaining/supporting the dataset. They pledge to help support issues, but cannot guarantee long-term support.
199
+
200
+ ### Licensing Information
201
+
202
+ The ProtoQA dataset is licensed under the [Creative Commons Attribution 4.0 International](https://github.com/iesl/protoqa-data/blob/master/LICENSE) license.
203
+
204
+ ### Citation Information
205
+ ```
206
+ @InProceedings{
207
+ huggingface:dataset,
208
+ title = {ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning},
209
+ authors = {Michael Boratko, Xiang Lorraine Li, Tim O’Gorman, Rajarshi Das, Dan Le, Andrew McCallum},
210
+ year = {2020},
211
+ publisher = {GitHub},
212
+ journal = {GitHub repository},
213
+ howpublished = {https://github.com/iesl/protoqa-data},
214
+ }
215
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
1
+ {"proto_qa": {"description": "This dataset is for studying computational models trained to reason about prototypical situations. Using deterministic filtering a sampling from a larger set of all transcriptions was built. It contains 9789 instances where each instance represents a survey question from Family Feud game. Each instance exactly is a question, a set of answers, and a count associated with each answer.\nEach line is a json dictionary, in which:\n1. question - contains the question (in original and a normalized form)\n2. answerstrings - contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight.\n3. answer-clusters - lists clusters, with the count of each cluster and the strings included in that cluster. Each cluster is given a unique ID that can be linked to in the assessment files.\n\n", "citation": "@InProceedings{huggingface:dataset,\ntitle = {ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning},\nauthors={Michael Boratko, Xiang Lorraine Li, Tim O\u2019Gorman, Rajarshi Das, Dan Le, Andrew McCallum},\nyear={2020},\npublisher = {GitHub},\njournal = {GitHub repository},\nhowpublished={\\url{https://github.com/iesl/protoqa-data}},\n}\n", "homepage": "https://github.com/iesl/protoqa-data", "license": "cc-by-4.0", "features": {"normalized-question": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answer-clusters": {"feature": {"count": {"dtype": "int32", "id": null, "_type": "Value"}, "clusterid": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerstrings": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "totalcount": {"dtype": "int32", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "proto_qa", "config_name": "proto_qa", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3943484, "num_examples": 8782, "dataset_name": "proto_qa"}, "validation": {"name": "validation", "num_bytes": 472121, "num_examples": 980, "dataset_name": "proto_qa"}}, "download_checksums": {"https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl": {"num_bytes": 6587901, "checksum": "3387c658053ceca6eec3261d2d0b03da4109eb05fa6480b6d02a577714f867e2"}, "https://raw.githubusercontent.com/iesl/protoqa-data/master/data/dev/protoqa_scraped_dev.jsonl": {"num_bytes": 765031, "checksum": "906385430e473ce7b63e82caa9db34e1f55571a6afcbccfb518308f009bc8af7"}}, "download_size": 7352932, "post_processing_size": null, "dataset_size": 4415605, "size_in_bytes": 11768537}, "proto_qa_cs": {"description": "This dataset is for studying computational models trained to reason about prototypical situations. Using deterministic filtering a sampling from a larger set of all transcriptions was built. It contains 9789 instances where each instance represents a survey question from Family Feud game. 
Each instance exactly is a question, a set of answers, and a count associated with each answer.\nEach line is a json dictionary, in which:\n1. question - contains the question (in original and a normalized form)\n2. answerstrings - contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight.\n3. answer-clusters - lists clusters, with the count of each cluster and the strings included in that cluster. Each cluster is given a unique ID that can be linked to in the assessment files.\n\n", "citation": "@InProceedings{huggingface:dataset,\ntitle = {ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning},\nauthors={Michael Boratko, Xiang Lorraine Li, Tim O\u2019Gorman, Rajarshi Das, Dan Le, Andrew McCallum},\nyear={2020},\npublisher = {GitHub},\njournal = {GitHub repository},\nhowpublished={\\url{https://github.com/iesl/protoqa-data}},\n}\n", "homepage": "https://github.com/iesl/protoqa-data", "license": "cc-by-4.0", "features": {"normalized-question": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers-cleaned": {"feature": {"count": {"dtype": "int32", "id": null, "_type": "Value"}, "clusterid": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerstrings": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "totalcount": {"dtype": "int32", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "proto_qa", "config_name": "proto_qa_cs", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 84466, "num_examples": 52, "dataset_name": "proto_qa"}}, "download_checksums": {"https://raw.githubusercontent.com/iesl/protoqa-data/master/data/dev/crowdsource_dev.jsonl": {"num_bytes": 115704, "checksum": "bbf9113ad57d68937de9367a48bc4994f39d14f4e7a5cd1114b1de0509de4434"}}, "download_size": 115704, "post_processing_size": null, "dataset_size": 84466, "size_in_bytes": 200170}, "proto_qa_cs_assessments": {"description": "This dataset is for studying computational models trained to reason about prototypical situations. Using deterministic filtering a sampling from a larger set of all transcriptions was built. It contains 9789 instances where each instance represents a survey question from Family Feud game. Each instance exactly is a question, a set of answers, and a count associated with each answer.\nEach line is a json dictionary, in which:\n1. question - contains the question (in original and a normalized form)\n2. answerstrings - contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight.\n3. answer-clusters - lists clusters, with the count of each cluster and the strings included in that cluster. 
Each cluster is given a unique ID that can be linked to in the assessment files.\n\n", "citation": "@InProceedings{huggingface:dataset,\ntitle = {ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning},\nauthors={Michael Boratko, Xiang Lorraine Li, Tim O\u2019Gorman, Rajarshi Das, Dan Le, Andrew McCallum},\nyear={2020},\npublisher = {GitHub},\njournal = {GitHub repository},\nhowpublished={\\url{https://github.com/iesl/protoqa-data}},\n}\n", "homepage": "https://github.com/iesl/protoqa-data", "license": "cc-by-4.0", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "assessments": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "proto_qa", "config_name": "proto_qa_cs_assessments", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 12473, "num_examples": 52, "dataset_name": "proto_qa"}}, "download_checksums": {"https://raw.githubusercontent.com/iesl/protoqa-data/master/data/dev/crowdsource_dev.assessments.jsonl": {"num_bytes": 24755, "checksum": "2abcf5f7d7ae55847898ac0a76becaaa9a0e72aeecb78c44eeadcec01263e71a"}}, "download_size": 24755, "post_processing_size": null, "dataset_size": 12473, "size_in_bytes": 37228}}
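The split sizes and download checksums recorded above can also be read back programmatically; a small sketch, assuming the file is present locally:

```python
# Sketch: summarize the configurations recorded in dataset_infos.json.
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for config_name, info in infos.items():
    sizes = {name: split["num_examples"] for name, split in info["splits"].items()}
    print(config_name, sizes, "download_size:", info["download_size"])
```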
dummy/proto_qa/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:192e5c3a2fea066f533ab9c8e1a4f347e390ec6ace9573138e04ab5c3bff7f76
3
+ size 2366
dummy/proto_qa_cs/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6fdaad8854f556c9170f5b1be5211f633fe1d959b7a72c3027190fba4bbb9c0a
3
+ size 2843
dummy/proto_qa_cs_assessments/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:932506638ea2b3eb2978934e5d8749d57fbc2da72ef44d8ba62ec4b144e7b7f6
3
+ size 910
proto_qa.py ADDED
@@ -0,0 +1,209 @@
1
+ # coding=utf-8
2
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """Dataset for ProtoQA ("family feud") data. The dataset is gathered from an existing set of questions played in a long-running international game show – FAMILY-FEUD."""
16
+
17
+ from __future__ import absolute_import, division, print_function
18
+
19
+ import json
20
+
21
+ import datasets
22
+
23
+
24
+ _CITATION = """\
25
+ @InProceedings{huggingface:dataset,
26
+ title = {ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning},
27
+ authors={Michael Boratko, Xiang Lorraine Li, Tim O’Gorman, Rajarshi Das, Dan Le, Andrew McCallum},
28
+ year={2020},
29
+ publisher = {GitHub},
30
+ journal = {GitHub repository},
31
+ howpublished={\\url{https://github.com/iesl/protoqa-data}},
32
+ }
33
+ """
34
+
35
+ _DESCRIPTION = """\
36
+ This dataset is for studying computational models trained to reason about prototypical situations. Using deterministic filtering a sampling from a larger set of all transcriptions was built. It contains 9789 instances where each instance represents a survey question from Family Feud game. Each instance exactly is a question, a set of answers, and a count associated with each answer.
37
+ Each line is a json dictionary, in which:
38
+ 1. question - contains the question (in original and a normalized form)
39
+ 2. answerstrings - contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight.
40
+ 3. answer-clusters - lists clusters, with the count of each cluster and the strings included in that cluster. Each cluster is given a unique ID that can be linked to in the assessment files.
41
+
42
+ """
43
+
44
+ _HOMEPAGE = "https://github.com/iesl/protoqa-data"
45
+
46
+ _LICENSE = "cc-by-4.0"
47
+
48
+ _URLs = {
49
+ "proto_qa": {
50
+ "dev": "https://raw.githubusercontent.com/iesl/protoqa-data/master/data/dev/protoqa_scraped_dev.jsonl",
51
+ "train": "https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl",
52
+ },
53
+ "proto_qa_cs": "https://raw.githubusercontent.com/iesl/protoqa-data/master/data/dev/crowdsource_dev.jsonl",
54
+ "proto_qa_cs_assessments": "https://raw.githubusercontent.com/iesl/protoqa-data/master/data/dev/crowdsource_dev.assessments.jsonl",
55
+ }
56
+
57
+
58
+ class ProtoQA(datasets.GeneratorBasedBuilder):
59
+ """This is a question answering dataset for Prototypical Common-Sense Reasoning"""
60
+
61
+ VERSION = datasets.Version("1.0.0")
62
+
63
+ BUILDER_CONFIGS = [
64
+ datasets.BuilderConfig(
65
+ name="proto_qa",
66
+ version=VERSION,
67
+ description="This is a question answering dataset for Prototypical Common-Sense Reasoning",
68
+ ),
69
+ datasets.BuilderConfig(
70
+ name="proto_qa_cs",
71
+ version=VERSION,
72
+ description="Prototypical Common-Sense Reasoning, 51 questions collected with exhaustive answer collection and manual clustering, matching the details of the eval test set",
73
+ ),
74
+ datasets.BuilderConfig(
75
+ name="proto_qa_cs_assessments",
76
+ version=VERSION,
77
+ description="Prototypical Common-Sense Reasoning, assessment file for study of assessment methods",
78
+ ),
79
+ ]
80
+
81
+ DEFAULT_CONFIG_NAME = "proto_qa"
82
+
83
+ def _info(self):
84
+ if self.config.name == "proto_qa_cs_assessments":
85
+ features = datasets.Features(
86
+ {
87
+ "question": datasets.Value("string"),
88
+ "assessments": datasets.Sequence(datasets.Value("string")),
89
+ }
90
+ )
91
+ else:
92
+
93
+ if self.config.name == "proto_qa_cs":
94
+ label = "answers-cleaned"
95
+ else:
96
+ label = "answer-clusters"
97
+ features = datasets.Features(
98
+ {
99
+ "normalized-question": datasets.Value("string"),
100
+ "question": datasets.Value("string"),
101
+ label: datasets.Sequence(
102
+ {
103
+ "count": datasets.Value("int32"),
104
+ "clusterid": datasets.Value("string"),
105
+ "answers": datasets.Sequence(datasets.Value("string")),
106
+ }
107
+ ),
108
+ "answerstrings": datasets.Sequence(datasets.Value("string")),
109
+ "totalcount": datasets.Value("int32"),
110
+ "id": datasets.Value("string"),
111
+ "source": datasets.Value("string"),
112
+ }
113
+ )
114
+ return datasets.DatasetInfo(
115
+ # This is the description that will appear on the datasets page.
116
+ description=_DESCRIPTION,
117
+ # This defines the different columns of the dataset and their types
118
+ features=features, # Here we define them above because they are different between the two configurations
119
+ supervised_keys=None,
120
+ # Homepage of the dataset for documentation
121
+ homepage=_HOMEPAGE,
122
+ # License for the dataset if available
123
+ license=_LICENSE,
124
+ # Citation for the dataset
125
+ citation=_CITATION,
126
+ )
127
+
128
+ def _split_generators(self, dl_manager):
129
+ """Returns SplitGenerators."""
130
+
131
+ if self.config.name == "proto_qa":
132
+ train_fpath = dl_manager.download(_URLs[self.config.name]["train"])
133
+ dev_fpath = dl_manager.download(_URLs[self.config.name]["dev"])
134
+
135
+ return [
136
+ datasets.SplitGenerator(
137
+ name=datasets.Split.TRAIN,
138
+ # These kwargs will be passed to _generate_examples
139
+ gen_kwargs={
140
+ "filepath": train_fpath,
141
+ },
142
+ ),
143
+ datasets.SplitGenerator(
144
+ name=datasets.Split.VALIDATION,
145
+ # These kwargs will be passed to _generate_examples
146
+ gen_kwargs={
147
+ "filepath": dev_fpath,
148
+ },
149
+ ),
150
+ ]
151
+ else:
152
+ filepath = dl_manager.download(_URLs[self.config.name])
153
+ return [
154
+ datasets.SplitGenerator(
155
+ name=datasets.Split.VALIDATION,
156
+ # These kwargs will be passed to _generate_examples
157
+ gen_kwargs={
158
+ "filepath": filepath,
159
+ },
160
+ )
161
+ ]
162
+
163
+ def _generate_examples(self, filepath):
164
+ """ Yields examples. """
165
+
166
+ if self.config.name == "proto_qa_cs_assessments":
167
+ with open(filepath, encoding="utf-8") as f:
168
+ for id_, row in enumerate(f):
169
+ data = json.loads(row)
170
+ question = data["question"]
171
+ assessments = data["assessments"]
172
+
173
+ yield id_, {"question": question, "assessments": assessments}
174
+
175
+ else:
176
+ if self.config.name == "proto_qa_cs":
177
+ label = "answers-cleaned"
178
+ else:
179
+ label = "answer-clusters"
180
+
181
+ with open(filepath, encoding="utf-8") as f:
182
+
183
+ for id_, row in enumerate(f):
184
+
185
+ data = json.loads(row)
186
+
187
+ normalized_question = data["question"]["normalized-question"]
188
+ question = data["question"]["question"]
189
+
190
+ answer_clusters = data[label]
191
+
192
+ details = []
193
+ for answer_cluster in answer_clusters:
194
+ count = answer_cluster["count"]
195
+ answers = answer_cluster["answers"]
196
+ clusterid = answer_cluster["clusterid"]
197
+ details.append({"count": count, "answers": answers, "clusterid": clusterid})
198
+
199
+ answerstrings = data["answerstrings"]
200
+ metadata = data["metadata"]
201
+ yield id_, {
202
+ "normalized-question": normalized_question,
203
+ "question": question,
204
+ label: details,
205
+ "answerstrings": answerstrings,
206
+ "totalcount": metadata["totalcount"],
207
+ "id": metadata["id"],
208
+ "source": metadata["source"],
209
+ }
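A brief usage sketch for the loading script above, assuming it is saved locally as `proto_qa.py`; passing the script path to `load_dataset` builds the selected configuration:

```python
# Sketch: build each configuration from the local script and show its splits and features.
from datasets import load_dataset

for config in ["proto_qa", "proto_qa_cs", "proto_qa_cs_assessments"]:
    dataset_dict = load_dataset("./proto_qa.py", config)
    print(config, list(dataset_dict.keys()))           # available splits
    print(next(iter(dataset_dict.values())).features)  # feature schema for this configuration
```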