Modalities: Text
Formats: parquet
Size: n<1K
parquet-converter committed
Commit: 7338a56 (1 parent: af90817)

Update parquet files

README.md DELETED
@@ -1,205 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language:
- - en
- language_creators:
- - expert-generated
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- pretty_name: Modified Winograd Schema Challenge (MWSC)
- size_categories:
- - n<1K
- source_datasets:
- - extended|winograd_wsc
- task_categories:
- - multiple-choice
- task_ids:
- - multiple-choice-coreference-resolution
- paperswithcode_id: null
- dataset_info:
-   features:
-   - name: sentence
-     dtype: string
-   - name: question
-     dtype: string
-   - name: options
-     sequence: string
-   - name: answer
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 11022
-     num_examples: 80
-   - name: test
-     num_bytes: 15220
-     num_examples: 100
-   - name: validation
-     num_bytes: 13109
-     num_examples: 82
-   download_size: 19197
-   dataset_size: 39351
- ---
-
- # Dataset Card for the Modified Winograd Schema Challenge (MWSC)
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [http://decanlp.com](http://decanlp.com)
- - **Repository:** https://github.com/salesforce/decaNLP
- - **Paper:** [The Natural Language Decathlon: Multitask Learning as Question Answering](https://arxiv.org/abs/1806.08730)
- - **Point of Contact:** [Bryan McCann](mailto:bmccann@salesforce.com), [Nitish Shirish Keskar](mailto:nkeskar@salesforce.com)
- - **Size of downloaded dataset files:** 19.20 kB
- - **Size of the generated dataset:** 39.35 kB
- - **Total amount of disk used:** 58.55 kB
-
- ### Dataset Summary
-
- Examples taken from the Winograd Schema Challenge, modified to ensure that answers are a single word from the context.
- This Modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 0.02 MB
- - **Size of the generated dataset:** 0.04 MB
- - **Total amount of disk used:** 0.06 MB
-
- An example looks as follows:
- ```json
- {
-     "sentence": "The city councilmen refused the demonstrators a permit because they feared violence.",
-     "question": "Who feared violence?",
-     "options": ["councilmen", "demonstrators"],
-     "answer": "councilmen"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `sentence`: a `string` feature.
- - `question`: a `string` feature.
- - `options`: a `list` of `string` features.
- - `answer`: a `string` feature.
-
- ### Data Splits
-
- | name    | train | validation | test |
- |---------|------:|-----------:|-----:|
- | default |    80 |         82 |  100 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- Our code for running decaNLP has been open-sourced under the BSD-3-Clause license.
-
- We chose to restrict decaNLP to datasets that are free and publicly accessible for research, but you should check their individual terms if you deviate from this use case.
-
- From the [Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html):
- > Both versions of the collections are licenced under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
-
- ### Citation Information
-
- If you use this dataset in your work, please cite:
- ```bibtex
- @article{McCann2018decaNLP,
-   title={The Natural Language Decathlon: Multitask Learning as Question Answering},
-   author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher},
-   journal={arXiv preprint arXiv:1806.08730},
-   year={2018}
- }
- ```
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
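
Although this commit deletes the card in favor of the parquet layout below, the card's structure section still documents everything needed to consume the dataset. A minimal sketch of loading it with the `datasets` library (the Hub name `mwsc` is an assumption based on the builder name in this repo):

```python
from datasets import load_dataset

# Load the three splits described in the card: train (80), validation (82), test (100).
mwsc = load_dataset("mwsc")

example = mwsc["train"][0]
print(example)  # keys: sentence, question, options, answer

# Sanity check: the gold answer is always one of the two options.
assert example["answer"] in example["options"]
```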
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "Examples taken from the Winograd Schema Challenge modified to ensure that answers are a single word from the context.\nThis modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing.\n", "citation": "@article{McCann2018decaNLP,\n title={The Natural Language Decathlon: Multitask Learning as Question Answering},\n author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher},\n journal={arXiv preprint arXiv:1806.08730},\n year={2018}\n}\n", "homepage": "http://decanlp.com", "license": "", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "options": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "mwsc", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11022, "num_examples": 80, "dataset_name": "mwsc"}, "test": {"name": "test", "num_bytes": 15220, "num_examples": 100, "dataset_name": "mwsc"}, "validation": {"name": "validation", "num_bytes": 13109, "num_examples": 82, "dataset_name": "mwsc"}}, "download_checksums": {"https://raw.githubusercontent.com/salesforce/decaNLP/1e9605f246b9e05199b28bde2a2093bc49feeeaa/local_data/schema.txt": {"num_bytes": 19197, "checksum": "31da9bee05796bbe0f6c957f54d1eb82eb5c644a8ee59f2ff1fa890eff3885dd"}}, "download_size": 19197, "dataset_size": 39351, "size_in_bytes": 58548}}
 
 
default/test/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bde7894c28d6854442831fd7b440cc4056e025163014f1d679903ff35884b6b
+ size 10962
default/train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1623e6894519a6f0f706ffd7cbe72f55a94b7f4f0494934f10a431a4aec716ba
+ size 8368
default/validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4af3bfac03b678fc4fd4156e2e6dfd273a5554b0a0fd4503a591497ac9ceb066
+ size 9380
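
Each block above is a Git LFS pointer, not the data itself: `oid` is the SHA-256 of the real file and `size` its byte count, and `git lfs pull` materializes the actual Parquet shards. Once pulled, they read as ordinary Parquet files. A minimal sketch with pandas (assuming pandas plus a Parquet engine such as pyarrow is installed):

```python
import pandas as pd

# Path follows the layout introduced by this commit.
test_df = pd.read_parquet("default/test/0000.parquet")

print(test_df.shape)          # expect (100, 4) per the dataset card
print(list(test_df.columns))  # ['sentence', 'question', 'options', 'answer']
print(test_df.iloc[0])
```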
mwsc.py DELETED
@@ -1,121 +0,0 @@
- """A modification of the Winograd Schema Challenge to ensure answers are a single context word"""
-
- import os
- import re
-
- import datasets
-
-
- _CITATION = """\
- @article{McCann2018decaNLP,
-   title={The Natural Language Decathlon: Multitask Learning as Question Answering},
-   author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher},
-   journal={arXiv preprint arXiv:1806.08730},
-   year={2018}
- }
- """
-
- _DESCRIPTION = """\
- Examples taken from the Winograd Schema Challenge modified to ensure that answers are a single word from the context.
- This modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing.
- """
-
- _DATA_URL = "https://raw.githubusercontent.com/salesforce/decaNLP/1e9605f246b9e05199b28bde2a2093bc49feeeaa/local_data/schema.txt"
- # Alternate: https://s3.amazonaws.com/research.metamind.io/decaNLP/data/schema.txt
-
-
- class MWSC(datasets.GeneratorBasedBuilder):
-     """MWSC: modified Winograd Schema Challenge"""
-
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "sentence": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "options": datasets.features.Sequence(datasets.Value("string")),
-                     "answer": datasets.Value("string"),
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="http://decanlp.com",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         schemas_file = dl_manager.download_and_extract(_DATA_URL)
-
-         if os.path.isdir(schemas_file):
-             # During testing the download manager mock gives us a directory
-             schemas_file = os.path.join(schemas_file, "schema.txt")
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"filepath": schemas_file, "split": "train"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={"filepath": schemas_file, "split": "test"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={"filepath": schemas_file, "split": "dev"},
-             ),
-         ]
-
-     def _get_both_schema(self, context):
-         """Split [option1/option2] into 2 sentences.
-         From https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L815-L827"""
-         pattern = r"\[.*\]"
-         variations = [x[1:-1].split("/") for x in re.findall(pattern, context)]
-         splits = re.split(pattern, context)
-         results = []
-         for which_schema in range(2):
-             vs = [v[which_schema] for v in variations]
-             context = ""
-             for idx in range(len(splits)):
-                 context += splits[idx]
-                 if idx < len(vs):
-                     context += vs[idx]
-             results.append(context)
-         return results
-
-     def _generate_examples(self, filepath, split):
-         """Yields examples."""
-
-         schemas = []
-         with open(filepath, encoding="utf-8") as schema_file:
-             schema = []
-             for line in schema_file:
-                 if len(line.split()) == 0:
-                     schemas.append(schema)
-                     schema = []
-                     continue
-                 else:
-                     schema.append(line.strip())
-
-         # Train/test/dev split from decaNLP code
-         splits = {}
-         traindev = schemas[:-50]
-         splits["test"] = schemas[-50:]
-         splits["train"] = traindev[:40]
-         splits["dev"] = traindev[40:]
-
-         idx = 0
-         for schema in splits[split]:
-             sentence, question, answers = schema
-             sentence = self._get_both_schema(sentence)
-             question = self._get_both_schema(question)
-             answers = answers.split("/")
-             for i in range(2):
-                 yield idx, {"sentence": sentence[i], "question": question[i], "options": answers, "answer": answers[i]}
-                 idx += 1
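
The heart of the deleted loader is `_get_both_schema`, which expands a bracketed template into the two Winograd variants. A standalone sketch of the same expansion, for intuition; the sample sentence is illustrative, in the style of `schema.txt`, not a verbatim entry:

```python
import re

def expand_schema(context: str) -> list[str]:
    """Expand "[a/b]" slots into the two Winograd variants.

    Mirrors _get_both_schema above; illustrative, not the loader itself.
    """
    pattern = r"\[.*?\]"  # non-greedy; equivalent to the loader's greedy match
                          # whenever a string contains a single bracketed slot
    variations = [m[1:-1].split("/") for m in re.findall(pattern, context)]
    pieces = re.split(pattern, context)
    results = []
    for which in range(2):  # variant 0 and variant 1
        chosen = [v[which] for v in variations]
        out = ""
        for i, piece in enumerate(pieces):
            out += piece
            if i < len(chosen):
                out += chosen[i]
        results.append(out)
    return results

print(expand_schema("The councilmen [feared/advocated] violence."))
# -> ['The councilmen feared violence.', 'The councilmen advocated violence.']
```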