Commit cdb2890
Parent(s): d2448cb
Fix missing tags in dataset cards (#4891)
Commit from https://github.com/huggingface/datasets/commit/9d800051052660a723fe12538afb388b64b9ca50
README.md CHANGED

@@ -1,8 +1,24 @@
 ---
+annotations_creators:
+- expert-generated
 language:
 - en
+language_creators:
+- expert-generated
+license:
+- cc-by-4.0
+multilinguality:
+- monolingual
+pretty_name: Modified Winograd Schema Challenge (MWSC)
+size_categories:
+- n<1K
+source_datasets:
+- extended|winograd_wsc
+task_categories:
+- multiple-choice
+task_ids:
+- multiple-choice-coreference-resolution
 paperswithcode_id: null
-pretty_name: The modified Winograd Schema Challenge (MWSC)
 ---
 
 # Dataset Card for The modified Winograd Schema Challenge (MWSC)
@@ -34,9 +50,9 @@ pretty_name: The modified Winograd Schema Challenge (MWSC)
 ## Dataset Description
 
 - **Homepage:** [http://decanlp.com](http://decanlp.com)
-- **Repository:**
-- **Paper:** [
-- **Point of Contact:** [
+- **Repository:** https://github.com/salesforce/decaNLP
+- **Paper:** [The Natural Language Decathlon: Multitask Learning as Question Answering](https://arxiv.org/abs/1806.08730)
+- **Point of Contact:** [Bryan McCann](mailto:bmccann@salesforce.com), [Nitish Shirish Keskar](mailto:nkeskar@salesforce.com)
 - **Size of downloaded dataset files:** 0.02 MB
 - **Size of the generated dataset:** 0.04 MB
 - **Total amount of disk used:** 0.06 MB
@@ -44,7 +60,7 @@ pretty_name: The modified Winograd Schema Challenge (MWSC)
 ### Dataset Summary
 
 Examples taken from the Winograd Schema Challenge modified to ensure that answers are a single word from the context.
-This modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing.
+This Modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing.
 
 ### Supported Tasks and Leaderboards
 
@@ -64,13 +80,13 @@ This modified Winograd Schema Challenge (MWSC) ensures that scores are neither i
 - **Size of the generated dataset:** 0.04 MB
 - **Total amount of disk used:** 0.06 MB
 
-An example
+An example looks as follows:
 ```
 {
-"
-"
-"
-"
+    "sentence": "The city councilmen refused the demonstrators a permit because they feared violence.",
+    "question": "Who feared violence?",
+    "options": [ "councilmen", "demonstrators" ],
+    "answer": "councilmen"
 }
 ```
 
@@ -142,10 +158,16 @@ The data fields are the same among all splits.
 
 ### Licensing Information
 
-
+Our code for running decaNLP has been open sourced under BSD-3-Clause.
+
+We chose to restrict decaNLP to datasets that were free and publicly accessible for research, but you should check their individual terms if you deviate from this use case.
+
+From the [Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html):
+> Both versions of the collections are licenced under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
 
 ### Citation Information
 
+If you use this in your work, please cite:
 ```
 @article{McCann2018decaNLP,
 title={The Natural Language Decathlon: Multitask Learning as Question Answering},
@@ -153,7 +175,6 @@ The data fields are the same among all splits.
 journal={arXiv preprint arXiv:1806.08730},
 year={2018}
 }
-
 ```
 
 
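As a side note, the example instance added by this card can be checked against the summary's claim that every answer is a single word drawn from the context. The sketch below is not part of the dataset loader; the field names simply follow the card's example, and the punctuation-stripping tokenization is an assumption for illustration:

```python
# Check the MWSC invariant described in the card: the answer is one of the
# options, is a single word, and appears verbatim in the sentence (context).

def is_valid_instance(ex: dict) -> bool:
    """Return True if the example satisfies the single-word-from-context invariant."""
    # Naive tokenization: split on whitespace and strip common punctuation.
    words = {w.strip(".,!?;:\"'").lower() for w in ex["sentence"].split()}
    return (
        ex["answer"] in ex["options"]      # answer is one of the listed choices
        and " " not in ex["answer"]        # answer is a single word
        and ex["answer"].lower() in words  # answer is drawn from the context
    )

example = {
    "sentence": "The city councilmen refused the demonstrators a permit because they feared violence.",
    "question": "Who feared violence?",
    "options": ["councilmen", "demonstrators"],
    "answer": "councilmen",
}

print(is_valid_instance(example))  # prints True
```

A check like this is useful when regenerating card examples, since the modified challenge differs from the original WSC precisely in guaranteeing that the answer string occurs in the sentence.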