system (HF staff) committed on
Commit a464bd8
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,180 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ - found
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - multiple-choice-qa
+ ---
+
+ # Dataset Card for PIQA
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [PIQA homepage](https://yonatanbisk.com/piqa/)
+ - **Paper:** [PIQA: Reasoning about Physical Commonsense in Natural Language](https://arxiv.org/abs/1911.11641)
+ - **Leaderboard:** [Official leaderboard](https://yonatanbisk.com/piqa/) *Note that there is a [second leaderboard](https://leaderboard.allenai.org/physicaliqa) featuring a different (blind) test set with 3,446 examples as part of the Machine Commonsense DARPA project.*
+ - **Point of Contact:** [Yonatan Bisk](https://yonatanbisk.com/piqa/)
+
+ ### Dataset Summary
+
+ *To apply eyeshadow without a brush, should I use a cotton swab or a toothpick?*
+ Questions requiring this kind of physical commonsense pose a challenge to state-of-the-art
+ natural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning
+ and a corresponding benchmark dataset, Physical Interaction: Question Answering, or PIQA.
+
+ Physical commonsense knowledge is a major challenge on the road to true AI-completeness,
+ which includes robots that interact with the world and understand natural language.
+
+ PIQA focuses on everyday situations with a preference for atypical solutions.
+ The dataset is inspired by instructables.com, which provides users with instructions on how to build, craft,
+ bake, or manipulate objects using everyday materials.
+
+ ### Supported Tasks and Leaderboards
+
+ The underlying task is formulated as multiple-choice question answering: given a question `q` and two possible solutions `s1`, `s2`, a model or a human must choose the most appropriate solution, of which exactly one is correct.
+
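+ For intuition, here is a minimal sketch of the evaluation protocol (the `score` function is a hypothetical stand-in for any model that assigns a plausibility score to a goal/solution pair; it is not part of the dataset):
+
+ ```python
+ def choose(goal, sol1, sol2, score):
+     """Pick the solution the model scores as more plausible: 0 -> sol1, 1 -> sol2."""
+     return 0 if score(goal, sol1) >= score(goal, sol2) else 1
+
+
+ def accuracy(examples, score):
+     """Fraction of examples where the chosen solution matches the gold label."""
+     hits = sum(choose(ex["goal"], ex["sol1"], ex["sol2"], score) == ex["label"] for ex in examples)
+     return hits / len(examples)
+ ```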
+ ### Languages
+
+ The text in the dataset is in English. The associated BCP-47 code is `en`.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example looks like this:
+
+ ```
+ {
+     "goal": "How do I ready a guinea pig cage for it's new occupants?",
+     "sol1": "Provide the guinea pig with a cage full of a few inches of bedding made of ripped paper strips, you will also need to supply it with a water bottle and a food dish.",
+     "sol2": "Provide the guinea pig with a cage full of a few inches of bedding made of ripped jeans material, you will also need to supply it with a water bottle and a food dish.",
+     "label": 0
+ }
+ ```
+
+ Note that the test set contains no labels. Predictions need to be submitted to the leaderboard.
+
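+ The dataset can be loaded with the 🤗 `datasets` library (a usage sketch; the split names follow the loading script below):
+
+ ```python
+ from datasets import load_dataset
+
+ piqa = load_dataset("piqa")   # splits: train, validation, test
+ example = piqa["train"][0]
+ print(example["goal"])
+ print(example["sol1"])
+ print(example["sol2"])
+ print(example["label"])       # 0 or 1; -1 in the unlabeled test split
+ ```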
+ ### Data Fields
+
+ - `goal`: the question which requires physical commonsense to be answered correctly
+ - `sol1`: the first solution
+ - `sol2`: the second solution
+ - `label`: the correct solution. `0` refers to `sol1` and `1` refers to `sol2`
+
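+ Since `label` is stored as a `ClassLabel` feature, the integer value can be mapped back to the corresponding solution (a short sketch, reusing `piqa` from above):
+
+ ```python
+ print(piqa["train"].features["label"].names)   # ['0', '1']
+ ex = piqa["train"][0]
+ correct_solution = ex["sol1"] if ex["label"] == 0 else ex["sol2"]
+ ```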
+ ### Data Splits
+
+ The dataset contains 16,113 examples for training, 1,838 for validation (development), and 3,084 for testing.
+
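+ The split sizes can be verified programmatically (reusing `piqa` from above):
+
+ ```python
+ print({split: ds.num_rows for split, ds in piqa.items()})
+ # e.g. {'train': 16113, 'test': 3084, 'validation': 1838}
+ ```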
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The goal of the dataset is to construct a resource that requires concrete physical reasoning.
+
+ ### Source Data
+
+ The authors provide the annotators with a prompt derived from instructables.com. The Instructables website is a crowdsourced collection of instructions for doing everything from cooking to car repair. In most cases, users provide images or videos detailing each step and a list of tools that will be required. Most goals are simultaneously rare and unsurprising. While an annotator is unlikely to have built a UV-fluorescent steampunk lamp or made a backpack out of duct tape, it is not surprising that someone interested in home crafting would create these, nor will the tools and materials be unfamiliar to the average person. Using these examples as the seed for their annotation helps remind annotators about the less prototypical uses of everyday objects. Second, and equally important, instructions build on one another. This means that any QA pair inspired by an instructable is more likely to explicitly state assumptions about what preconditions need to be met to start the task and what postconditions define success.
+
+ Annotators were asked to glance at the instructions of an instructable and either pull out two component tasks directly or let the instructable inspire them to construct two component tasks. They would then articulate the goal (often centered on atypical materials) and how to achieve it. In addition, annotators were asked to provide a permutation of their own solution which makes it invalid (the negative solution), often subtly.
+
+ #### Initial Data Collection and Normalization
+
+ During validation, examples with low agreement were removed from the data.
+
+ The dataset was further cleaned to remove stylistic artifacts and trivial examples, which have been shown to artificially inflate model performance on previous NLI benchmarks, using the AFLite algorithm introduced in ([Sakaguchi et al. 2020](https://arxiv.org/abs/1907.10641); [Sap et al. 2019](https://arxiv.org/abs/1904.09728)), which is an improvement on adversarial filtering ([Zellers et al., 2018](https://arxiv.org/abs/1808.05326)).
+
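+ For reference, here is a heavily simplified sketch of AFLite-style filtering (an illustration of the idea only, not the authors' implementation; `X` is assumed to be a matrix of precomputed example embeddings and `y` the labels):
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+
+ def aflite_keep_mask(X, y, n_partitions=16, train_frac=0.8, threshold=0.75):
+     """Flag examples that simple linear models solve too reliably as dataset artifacts."""
+     hits = np.zeros(len(y))
+     counts = np.zeros(len(y))
+     for _ in range(n_partitions):
+         idx = np.random.permutation(len(y))
+         cut = int(train_frac * len(y))
+         train_idx, eval_idx = idx[:cut], idx[cut:]
+         clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
+         hits[eval_idx] += clf.predict(X[eval_idx]) == y[eval_idx]
+         counts[eval_idx] += 1
+     predictability = np.divide(hits, counts, out=np.zeros_like(hits), where=counts > 0)
+     return predictability < threshold  # keep the examples linear models find hard
+ ```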
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ Annotations are obtained by construction: crowdworkers author both the correct and the incorrect solution when they complete the prompt, so labels come for free.
+
+ #### Who are the annotators?
+
+ Paid crowdworkers.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Unknown
+
+ ### Citation Information
+
+ ```
+ @inproceedings{Bisk2020,
+   author = {Yonatan Bisk and Rowan Zellers and
+             Ronan Le Bras and Jianfeng Gao
+             and Yejin Choi},
+   title = {PIQA: Reasoning about Physical Commonsense in
+            Natural Language},
+   booktitle = {Thirty-Fourth AAAI Conference on
+                Artificial Intelligence},
+   year = {2020},
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"plain_text": {"description": "To apply eyeshadow without a brush, should I use a cotton swab or a toothpick?\nQuestions requiring this kind of physical commonsense pose a challenge to state-of-the-art\nnatural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning\nand a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA.\n\nPhysical commonsense knowledge is a major challenge on the road to true AI-completeness,\nincluding robots that interact with the world and understand natural language.\n\nThe dataset focuses on everyday situations with a preference for atypical solutions.\nThe dataset is inspired by instructables.com, which provides users with instructions on how to build, craft,\nbake, or manipulate objects using everyday materials.\n\nThe underlying task is formualted as multiple choice question answering:\ngiven a question `q` and two possible solutions `s1`, `s2`, a model or\na human must choose the most appropriate solution, of which exactly one is correct.\nThe dataset is further cleaned of basic artifacts using the AFLite algorithm which is an improvement of\nadversarial filtering. The dataset contains 16,000 examples for training, 2,000 for development and 3,000 for testing.\n", "citation": "@inproceedings{Bisk2020,\n author = {Yonatan Bisk and Rowan Zellers and\n Ronan Le Bras and Jianfeng Gao\n and Yejin Choi},\n title = {PIQA: Reasoning about Physical Commonsense in\n Natural Language},\n booktitle = {Thirty-Fourth AAAI Conference on\n Artificial Intelligence},\n year = {2020},\n}\n", "homepage": "https://yonatanbisk.com/piqa/", "license": "", "features": {"goal": {"dtype": "string", "id": null, "_type": "Value"}, "sol1": {"dtype": "string", "id": null, "_type": "Value"}, "sol2": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["0", "1"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "piqa", "config_name": "plain_text", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4104026, "num_examples": 16113, "dataset_name": "piqa"}, "test": {"name": "test", "num_bytes": 761521, "num_examples": 3084, "dataset_name": "piqa"}, "validation": {"name": "validation", "num_bytes": 464321, "num_examples": 1838, "dataset_name": "piqa"}}, "download_checksums": {"https://storage.googleapis.com/ai2-mosaic/public/physicaliqa/physicaliqa-train-dev.zip": {"num_bytes": 1824009, "checksum": "54d32a04f59a7e354396f321723c8d7ec35cc6b08506563d8d1ffcc15ce98ddd"}, "https://yonatanbisk.com/piqa/data/tests.jsonl": {"num_bytes": 814616, "checksum": "402f1e2e61347db773e6e5e0a6b24f97396b59f6fd046dcdcbc12f483ac8553b"}}, "download_size": 2638625, "post_processing_size": null, "dataset_size": 5329868, "size_in_bytes": 7968493}}
dummy/plain_text/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed3cac71efd6b3779315062c58a1fb8870994f90345615ba1e558d74fe5391cd
+ size 3412
piqa.py ADDED
@@ -0,0 +1,137 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PIQA dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{Bisk2020,
+   author = {Yonatan Bisk and Rowan Zellers and
+             Ronan Le Bras and Jianfeng Gao
+             and Yejin Choi},
+   title = {PIQA: Reasoning about Physical Commonsense in
+            Natural Language},
+   booktitle = {Thirty-Fourth AAAI Conference on
+                Artificial Intelligence},
+   year = {2020},
+ }
+ """
+
+ _DESCRIPTION = """\
+ To apply eyeshadow without a brush, should I use a cotton swab or a toothpick?
+ Questions requiring this kind of physical commonsense pose a challenge to state-of-the-art
+ natural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning
+ and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA.
+
+ Physical commonsense knowledge is a major challenge on the road to true AI-completeness,
+ including robots that interact with the world and understand natural language.
+
+ PIQA focuses on everyday situations with a preference for atypical solutions.
+ The dataset is inspired by instructables.com, which provides users with instructions on how to build, craft,
+ bake, or manipulate objects using everyday materials.
+
+ The underlying task is formulated as multiple choice question answering:
+ given a question `q` and two possible solutions `s1`, `s2`, a model or
+ a human must choose the most appropriate solution, of which exactly one is correct.
+ The dataset is further cleaned of basic artifacts using the AFLite algorithm which is an improvement of
+ adversarial filtering. The dataset contains 16,000 examples for training, 2,000 for development and 3,000 for testing.
+ """
+
+ _URLs = {
+     "train-dev": "https://storage.googleapis.com/ai2-mosaic/public/physicaliqa/physicaliqa-train-dev.zip",
+     "test": "https://yonatanbisk.com/piqa/data/tests.jsonl",
+ }
+
+
+ class Piqa(datasets.GeneratorBasedBuilder):
+     """PIQA dataset."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="plain_text",
+             description="Plain text",
+             version=VERSION,
+         )
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "goal": datasets.Value("string"),
+                     "sol1": datasets.Value("string"),
+                     "sol2": datasets.Value("string"),
+                     "label": datasets.ClassLabel(names=["0", "1"]),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://yonatanbisk.com/piqa/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
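+         # `download_and_extract` accepts the `_URLs` dict and returns a dict with the
+         # same keys, mapping to the local paths of the downloaded/extracted files.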
+         data_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "input_filepath": os.path.join(data_dir["train-dev"], "physicaliqa-train-dev", "train.jsonl"),
+                     "label_filepath": os.path.join(data_dir["train-dev"], "physicaliqa-train-dev", "train-labels.lst"),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "input_filepath": data_dir["test"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "input_filepath": os.path.join(data_dir["train-dev"], "physicaliqa-train-dev", "dev.jsonl"),
+                     "label_filepath": os.path.join(data_dir["train-dev"], "physicaliqa-train-dev", "dev-labels.lst"),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, input_filepath, label_filepath=None):
+         """Yields examples."""
+         with open(input_filepath, encoding="utf-8") as input_file:
+             inputs = input_file.read().splitlines()
+
+         if label_filepath is not None:
+             with open(label_filepath, encoding="utf-8") as label_file:
+                 labels = label_file.read().splitlines()
+         else:
+             # Labels are not available for the test set.
+             # Filling the `label` column with -1 by default
+             labels = [-1] * len(inputs)
+
+         for idx, (row, lab) in enumerate(zip(inputs, labels)):
+             data = json.loads(row)
+             goal = data["goal"]
+             sol1 = data["sol1"]
+             sol2 = data["sol2"]
+             yield idx, {"goal": goal, "sol1": sol1, "sol2": sol2, "label": lab}