Commit 99da0f1 (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,192 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ - machine-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - natural-language-inference
+ ---
+
+ # Dataset Card for SWAG
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [SWAG AF](https://rowanzellers.com/swag/)
+ - **Repository:** [Github repository](https://github.com/rowanz/swagaf/tree/master/data)
+ - **Paper:** [SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference](https://arxiv.org/abs/1808.05326)
+ - **Leaderboard:** [SWAG Leaderboard](https://leaderboard.allenai.org/swag)
+ - **Point of Contact:** [Rowan Zellers](https://rowanzellers.com/#contact)
+
+ ### Dataset Summary
+
+ Given a partial description like "she opened the hood of the car,"
+ humans can reason about the situation and anticipate what might come
+ next ("then, she examined the engine"). SWAG (Situations With Adversarial Generations)
+ is a large-scale dataset for this task of grounded commonsense
+ inference, unifying natural language inference and physically grounded reasoning.
+
+ The dataset consists of 113k multiple choice questions about grounded situations
+ (73k training, 20k validation, 20k test).
+ Each question is a video caption from LSMDC or ActivityNet Captions,
+ with four answer choices about what might happen next in the scene.
+ The correct answer is the (real) video caption for the next event in the video;
+ the three incorrect answers are adversarially generated and human verified,
+ so as to fool machines but not humans. SWAG aims to be a benchmark for
+ evaluating grounded commonsense NLI and for learning representations.
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset introduces the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning.
+
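The task amounts to picking the most plausible of four endings for a given context. As a purely illustrative sketch (not the authors' models), the snippet below shows how a generic multiple-choice head from the `transformers` library can score the four endings of one instance; the `bert-base-uncased` checkpoint is an arbitrary choice and its multiple-choice head is untrained, so the prediction is meaningless until the model is fine-tuned on SWAG.

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

# Any encoder checkpoint works here; bert-base-uncased is only an example.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")

# Context and endings taken from the example instance shown under Data Instances below.
context = "He rides the motorcycle down the hall and into the elevator. He"
endings = [
    "looks at a mirror in the mirror as he watches someone walk through a door.",
    "stops, listening to a cup of coffee with the seated woman, who's standing.",
    "exits the building and rides the motorcycle into a casino where he performs several tricks as people watch.",
    "pulls the bag out of his pocket and hands it to someone's grandma.",
]

# Pair the context with each ending and shape the batch as (1, num_choices, seq_len).
enc = tokenizer([context] * 4, endings, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 4), one score per ending
print("predicted ending:", logits.argmax(-1).item())
```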
+ ### Languages
+
+ The text in the dataset is in English. The associated BCP-47 code is `en`.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The `regular` configuration should be used for modeling. An example looks like this:
+
+ ```
+ {
+   "video-id": "anetv_dm5WXFiQZUQ",
+   "fold-ind": "18419",
+   "startphrase": "He rides the motorcycle down the hall and into the elevator. He",
+   "sent1": "He rides the motorcycle down the hall and into the elevator.",
+   "sent2": "He",
+   "gold-source": "gold",
+   "ending0": "looks at a mirror in the mirror as he watches someone walk through a door.",
+   "ending1": "stops, listening to a cup of coffee with the seated woman, who's standing.",
+   "ending2": "exits the building and rides the motorcycle into a casino where he performs several tricks as people watch.",
+   "ending3": "pulls the bag out of his pocket and hands it to someone's grandma.",
+   "label": 2
+ }
+ ```
+
+ Note that the test set is reserved for blind submission on the leaderboard, so its labels are not distributed.
+
+ The full train and validation sets provide more information regarding the collection process.
+
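A minimal way to obtain instances like the one above is through the `datasets` library, loading the dataset by its Hub identifier `swag` with the `regular` configuration:

```python
from datasets import load_dataset

# Load the "regular" configuration (the one intended for modeling).
ds = load_dataset("swag", "regular")

# Prints the three splits and their sizes
# (per dataset_infos.json: 73,546 / 20,006 / 20,005 examples).
print(ds)

# Inspect one training example; `label` indexes into ending0..ending3.
example = ds["train"][0]
print(example["startphrase"])
print([example[f"ending{i}"] for i in range(4)], example["label"])
```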
+ ### Data Fields
+
+ - `video-id`: identifier of the source video
+ - `fold-ind`: identifier of the example
+ - `startphrase`: the context to be completed (`sent1` followed by the start of `sent2`)
+ - `sent1`: the first, complete sentence
+ - `sent2`: the start of the second sentence (to be completed)
+ - `gold-source`: whether the correct ending comes from the found completion (`gold`) or was generated
+ - `ending0`: first candidate ending
+ - `ending1`: second candidate ending
+ - `ending2`: third candidate ending
+ - `ending3`: fourth candidate ending
+ - `label`: index of the correct ending (0-3)
+
+ More info concerning the fields can be found [on the original repo](https://github.com/rowanz/swagaf/tree/master/data).
+
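As a quick sanity check of this schema, the features and the `label` mapping can be inspected directly; this is a small sketch using the same `swag` Hub identifier:

```python
from datasets import load_dataset

ds = load_dataset("swag", "regular", split="validation")

# The schema mirrors the field list above; `label` is a ClassLabel
# with names "0".."3" that indexes into ending0..ending3.
print(ds.features)

ex = ds[0]
print("correct ending:", ex[f"ending{ex['label']}"])
```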
+ ### Data Splits
+
+ The dataset consists of 113k multiple choice questions about grounded situations: 73k for training, 20k for validation, and 20k for (blind) test.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The authors seek dataset diversity while minimizing annotation artifacts, i.e., conditional stylistic patterns such as length and word-preference biases. To avoid introducing easily "gamed" patterns, they introduce Adversarial Filtering (AF), a generally applicable treatment involving the iterative refinement of a set of assignments to increase the entropy under a chosen model family. The dataset is then human-verified by paid crowdworkers.
+
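The following is a minimal, illustrative sketch of an Adversarial Filtering loop in the spirit of the description above, not the authors' implementation: on each iteration a discriminator is trained on a random half of the data, and on the held-out half any distractor the model finds too easy is swapped for a harder candidate. The data layout (`context`, `gold`, `distractors`, `id`) and the caller-supplied `train_discriminator` and `score` callables are assumptions made for the sketch.

```python
import random
from typing import Callable, Dict, List


def adversarial_filtering(
    examples: List[Dict],
    candidate_pool: Dict[str, List[str]],
    train_discriminator: Callable[[List[Dict]], object],
    score: Callable[[object, str, str], float],
    num_iters: int = 10,
) -> List[Dict]:
    """Illustrative AF loop: repeatedly replace distractors that a freshly
    trained discriminator separates too easily from the gold ending."""
    for _ in range(num_iters):
        random.shuffle(examples)
        mid = len(examples) // 2
        model = train_discriminator(examples[:mid])

        for ex in examples[mid:]:
            gold_score = score(model, ex["context"], ex["gold"])
            for i, distractor in enumerate(ex["distractors"]):
                # A distractor scored well below the gold ending is "easy";
                # swap it for the pool candidate the current model likes best.
                if score(model, ex["context"], distractor) < gold_score:
                    ex["distractors"][i] = max(
                        candidate_pool[ex["id"]],
                        key=lambda c: score(model, ex["context"], c),
                    )
    return examples
```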
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The dataset is derived from pairs of consecutive video captions from [ActivityNet Captions](https://cs.stanford.edu/people/ranjaykrishna/densevid/) and the [Large Scale Movie Description Challenge](https://sites.google.com/site/describingmovies/). The two datasets are slightly different in nature and allow for broader coverage: ActivityNet contains 20k YouTube clips containing one of 203 activity types (such as doing gymnastics or playing guitar); LSMDC consists of 128k movie captions (audio descriptions and scripts).
+
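In other words, each (context, gold ending) pair comes from two consecutive captions of the same video. A trivial sketch of that pairing step (the list-of-captions input format is an assumption):

```python
from typing import List, Tuple


def make_pairs(captions: List[str]) -> List[Tuple[str, str]]:
    """For one video, turn its ordered captions into (context, gold_ending)
    pairs, where the gold ending is simply the next caption."""
    return [(captions[i], captions[i + 1]) for i in range(len(captions) - 1)]
```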
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ Annotations are first machine-generated and then adversarially filtered. Finally, the remaining examples are human-verified by paid crowdworkers.
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Unknown
+
+ ### Citation Information
+
+ ```
+ @inproceedings{zellers2018swagaf,
+   title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},
+   author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},
+   booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+   year={2018}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"regular": {"description": "Given a partial description like \"she opened the hood of the car,\"\nhumans can reason about the situation and anticipate what might come\nnext (\"then, she examined the engine\"). SWAG (Situations With Adversarial Generations)\nis a large-scale dataset for this task of grounded commonsense\ninference, unifying natural language inference and physically grounded reasoning.\n\nThe dataset consists of 113k multiple choice questions about grounded situations\n(73k training, 20k validation, 20k test).\nEach question is a video caption from LSMDC or ActivityNet Captions,\nwith four answer choices about what might happen next in the scene.\nThe correct answer is the (real) video caption for the next event in the video;\nthe three incorrect answers are adversarially generated and human verified,\nso as to fool machines but not humans. SWAG aims to be a benchmark for\nevaluating grounded commonsense NLI and for learning representations.\n\nThe full data contain more information,\nbut the regular configuration will be more interesting for modeling\n(note that the regular data are shuffled). The test set for leaderboard submission\nis under the regular configuration.\n", "citation": "@inproceedings{zellers2018swagaf,\n title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},\n author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},\n booktitle = \"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n year={2018}\n}\n", "homepage": "https://rowanzellers.com/swag/", "license": "Unknown", "features": {"video-id": {"dtype": "string", "id": null, "_type": "Value"}, "fold-ind": {"dtype": "string", "id": null, "_type": "Value"}, "startphrase": {"dtype": "string", "id": null, "_type": "Value"}, "sent1": {"dtype": "string", "id": null, "_type": "Value"}, "sent2": {"dtype": "string", "id": null, "_type": "Value"}, "gold-source": {"dtype": "string", "id": null, "_type": "Value"}, "ending0": {"dtype": "string", "id": null, "_type": "Value"}, "ending1": {"dtype": "string", "id": null, "_type": "Value"}, "ending2": {"dtype": "string", "id": null, "_type": "Value"}, "ending3": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 4, "names": ["0", "1", "2", "3"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "swag", "config_name": "regular", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 30274672, "num_examples": 73546, "dataset_name": "swag"}, "validation": {"name": "validation", "num_bytes": 8451771, "num_examples": 20006, "dataset_name": "swag"}, "test": {"name": "test", "num_bytes": 8417644, "num_examples": 20005, "dataset_name": "swag"}}, "download_checksums": {"https://raw.githubusercontent.com/rowanz/swagaf/master/data/train.csv": {"num_bytes": 28243333, "checksum": "5748b51126ac255c5a6f26e1ba473b51116d6c822aeb25e63ecba282c9d0e610"}, "https://raw.githubusercontent.com/rowanz/swagaf/master/data/val.csv": {"num_bytes": 7893588, "checksum": "c0497b2cd7f3e6b7df995524b1853f62285d60d110d659b19545ca80b2903234"}, "https://raw.githubusercontent.com/rowanz/swagaf/master/data/test.csv": {"num_bytes": 7817885, "checksum": "a689a1a4e892a65ca625c1f0fcf77bcce004b59ad1caeb134ca5ec080a711cb6"}}, "download_size": 43954806, "post_processing_size": null, "dataset_size": 47144087, "size_in_bytes": 91098893}, 
"full": {"description": "Given a partial description like \"she opened the hood of the car,\"\nhumans can reason about the situation and anticipate what might come\nnext (\"then, she examined the engine\"). SWAG (Situations With Adversarial Generations)\nis a large-scale dataset for this task of grounded commonsense\ninference, unifying natural language inference and physically grounded reasoning.\n\nThe dataset consists of 113k multiple choice questions about grounded situations\n(73k training, 20k validation, 20k test).\nEach question is a video caption from LSMDC or ActivityNet Captions,\nwith four answer choices about what might happen next in the scene.\nThe correct answer is the (real) video caption for the next event in the video;\nthe three incorrect answers are adversarially generated and human verified,\nso as to fool machines but not humans. SWAG aims to be a benchmark for\nevaluating grounded commonsense NLI and for learning representations.\n\nThe full data contain more information,\nbut the regular configuration will be more interesting for modeling\n(note that the regular data are shuffled). The test set for leaderboard submission\nis under the regular configuration.\n", "citation": "@inproceedings{zellers2018swagaf,\n title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},\n author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},\n booktitle = \"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n year={2018}\n}\n", "homepage": "https://rowanzellers.com/swag/", "license": "Unknown", "features": {"video-id": {"dtype": "string", "id": null, "_type": "Value"}, "fold-ind": {"dtype": "string", "id": null, "_type": "Value"}, "startphrase": {"dtype": "string", "id": null, "_type": "Value"}, "gold-ending": {"dtype": "string", "id": null, "_type": "Value"}, "distractor-0": {"dtype": "string", "id": null, "_type": "Value"}, "distractor-1": {"dtype": "string", "id": null, "_type": "Value"}, "distractor-2": {"dtype": "string", "id": null, "_type": "Value"}, "distractor-3": {"dtype": "string", "id": null, "_type": "Value"}, "gold-source": {"dtype": "string", "id": null, "_type": "Value"}, "gold-type": {"dtype": "string", "id": null, "_type": "Value"}, "distractor-0-type": {"dtype": "string", "id": null, "_type": "Value"}, "distractor-1-type": {"dtype": "string", "id": null, "_type": "Value"}, "distractor-2-type": {"dtype": "string", "id": null, "_type": "Value"}, "distractor-3-type": {"dtype": "string", "id": null, "_type": "Value"}, "sent1": {"dtype": "string", "id": null, "_type": "Value"}, "sent2": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "swag", "config_name": "full", "version": "0.0.0", "splits": {"train": {"name": "train", "num_bytes": 34941649, "num_examples": 73546, "dataset_name": "swag"}, "validation": {"name": "validation", "num_bytes": 9832603, "num_examples": 20006, "dataset_name": "swag"}}, "download_checksums": {"https://raw.githubusercontent.com/rowanz/swagaf/master/data/train_full.csv": {"num_bytes": 31608559, "checksum": "2353de255a79d4e699f478a42454758062d9d36aac75a4035948915877e1a248"}, "https://raw.githubusercontent.com/rowanz/swagaf/master/data/val_full.csv": {"num_bytes": 8929065, "checksum": "59f4905390446352ffbdbb1ebcd88ae790df91fd59661c626eeddd7a4b184502"}}, "download_size": 40537624, "post_processing_size": null, "dataset_size": 44774252, "size_in_bytes": 85311876}}
dummy/full/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0ba1c3a052dc32b120e75abd87119eaf9bdc4adb2e61d2f6f706ed68a7510ed2
+ size 1977
dummy/regular/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd896129d9bbeba2a6688e35c57b641fd8a20f73ff0112cda34aaaa05c3b7c07
+ size 2386
swag.py ADDED
@@ -0,0 +1,198 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """SWAG dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{zellers2018swagaf,
+ title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},
+ author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},
+ booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+ year={2018}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Given a partial description like "she opened the hood of the car,"
+ humans can reason about the situation and anticipate what might come
+ next ("then, she examined the engine"). SWAG (Situations With Adversarial Generations)
+ is a large-scale dataset for this task of grounded commonsense
+ inference, unifying natural language inference and physically grounded reasoning.
+
+ The dataset consists of 113k multiple choice questions about grounded situations
+ (73k training, 20k validation, 20k test).
+ Each question is a video caption from LSMDC or ActivityNet Captions,
+ with four answer choices about what might happen next in the scene.
+ The correct answer is the (real) video caption for the next event in the video;
+ the three incorrect answers are adversarially generated and human verified,
+ so as to fool machines but not humans. SWAG aims to be a benchmark for
+ evaluating grounded commonsense NLI and for learning representations.
+
+ The full data contain more information,
+ but the regular configuration will be more interesting for modeling
+ (note that the regular data are shuffled). The test set for leaderboard submission
+ is under the regular configuration.
+ """
+
+ _LICENSE = "Unknown"
+
+ _URLs = {
+     "full": {
+         "train": "https://raw.githubusercontent.com/rowanz/swagaf/master/data/train_full.csv",
+         "val": "https://raw.githubusercontent.com/rowanz/swagaf/master/data/val_full.csv",
+     },
+     "regular": {
+         "train": "https://raw.githubusercontent.com/rowanz/swagaf/master/data/train.csv",
+         "val": "https://raw.githubusercontent.com/rowanz/swagaf/master/data/val.csv",
+         "test": "https://raw.githubusercontent.com/rowanz/swagaf/master/data/test.csv",
+     },
+ }
+
+
+ class Swag(datasets.GeneratorBasedBuilder):
+     """SWAG dataset"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="regular", description="The configuration to use for modeling."),
+         datasets.BuilderConfig(name="full", description="The full data."),
+     ]
+
+     DEFAULT_CONFIG_NAME = "regular"
+
+     def _info(self):
+         if self.config.name == "regular":
+             features = datasets.Features(
+                 {
+                     "video-id": datasets.Value("string"),
+                     "fold-ind": datasets.Value("string"),
+                     "startphrase": datasets.Value("string"),
+                     "sent1": datasets.Value("string"),
+                     "sent2": datasets.Value("string"),
+                     "gold-source": datasets.Value("string"),
+                     "ending0": datasets.Value("string"),
+                     "ending1": datasets.Value("string"),
+                     "ending2": datasets.Value("string"),
+                     "ending3": datasets.Value("string"),
+                     "label": datasets.ClassLabel(names=["0", "1", "2", "3"]),
+                 }
+             )
+         else:
+             features = datasets.Features(
+                 {
+                     "video-id": datasets.Value("string"),
+                     "fold-ind": datasets.Value("string"),
+                     "startphrase": datasets.Value("string"),
+                     "gold-ending": datasets.Value("string"),
+                     "distractor-0": datasets.Value("string"),
+                     "distractor-1": datasets.Value("string"),
+                     "distractor-2": datasets.Value("string"),
+                     "distractor-3": datasets.Value("string"),
+                     "gold-source": datasets.Value("string"),
+                     "gold-type": datasets.Value("string"),
+                     "distractor-0-type": datasets.Value("string"),
+                     "distractor-1-type": datasets.Value("string"),
+                     "distractor-2-type": datasets.Value("string"),
+                     "distractor-3-type": datasets.Value("string"),
+                     "sent1": datasets.Value("string"),
+                     "sent2": datasets.Value("string"),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage="https://rowanzellers.com/swag/",
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+
+         splits = [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": data_dir["train"],
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": data_dir["val"],
+                     "split": "val",
+                 },
+             ),
+         ]
+         if self.config.name == "regular":
+             splits.append(
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={"filepath": data_dir["test"], "split": "test"},
+                 )
+             )
+
+         return splits
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         with open(filepath, "r", encoding="utf-8") as f:
+             lines = list(csv.reader(f, delimiter=","))
+
+             # The first CSV row is a header; skip it.
+             for id_, row in enumerate(lines[1:]):
+                 if self.config.name == "regular":
+                     # The regular CSVs carry an extra leading column, so fields start at row[1].
+                     yield id_, {
+                         "video-id": row[1],
+                         "fold-ind": row[2],
+                         "startphrase": row[3],
+                         "sent1": row[4],
+                         "sent2": row[5],
+                         "gold-source": row[6],
+                         "ending0": row[7],
+                         "ending1": row[8],
+                         "ending2": row[9],
+                         "ending3": row[10],
+                         # Test labels are withheld for the blind leaderboard evaluation.
+                         "label": -1 if split == "test" else row[11],
+                     }
+                 else:
+                     yield id_, {
+                         "video-id": row[0],
+                         "fold-ind": row[1],
+                         "startphrase": row[2],
+                         "gold-ending": row[3],
+                         "distractor-0": row[4],
+                         "distractor-1": row[5],
+                         "distractor-2": row[6],
+                         "distractor-3": row[7],
+                         "gold-source": row[8],
+                         "gold-type": row[9],
+                         "distractor-0-type": row[10],
+                         "distractor-1-type": row[11],
+                         "distractor-2-type": row[12],
+                         "distractor-3-type": row[13],
+                         "sent1": row[14],
+                         "sent2": row[15],
+                     }
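For completeness, a small usage sketch for the `full` configuration built by this script, loading it by the Hub identifier `swag`. Per the features defined above, `full` exposes the distractor metadata (`gold-ending`, `distractor-*`, and the `*-type` columns) and has only train and validation splits.

```python
from datasets import load_dataset

full = load_dataset("swag", "full")

ex = full["train"][0]
print(ex["startphrase"])
print(ex["gold-ending"])
print([ex[f"distractor-{i}"] for i in range(4)])
```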