system (HF staff) committed on
Commit 830b6a8
0 Parent(s):

Update files from the datasets library (from 1.16.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.16.0

Files changed (36)
  1. .gitattributes +27 -0
  2. README.md +243 -0
  3. dataset_infos.json +0 -0
  4. dummy/X-CODAH-ar/1.1.0/dummy_data.zip +3 -0
  5. dummy/X-CODAH-de/1.1.0/dummy_data.zip +3 -0
  6. dummy/X-CODAH-en/1.1.0/dummy_data.zip +3 -0
  7. dummy/X-CODAH-es/1.1.0/dummy_data.zip +3 -0
  8. dummy/X-CODAH-fr/1.1.0/dummy_data.zip +3 -0
  9. dummy/X-CODAH-hi/1.1.0/dummy_data.zip +3 -0
  10. dummy/X-CODAH-it/1.1.0/dummy_data.zip +3 -0
  11. dummy/X-CODAH-jap/1.1.0/dummy_data.zip +3 -0
  12. dummy/X-CODAH-nl/1.1.0/dummy_data.zip +3 -0
  13. dummy/X-CODAH-pl/1.1.0/dummy_data.zip +3 -0
  14. dummy/X-CODAH-pt/1.1.0/dummy_data.zip +3 -0
  15. dummy/X-CODAH-ru/1.1.0/dummy_data.zip +3 -0
  16. dummy/X-CODAH-sw/1.1.0/dummy_data.zip +3 -0
  17. dummy/X-CODAH-ur/1.1.0/dummy_data.zip +3 -0
  18. dummy/X-CODAH-vi/1.1.0/dummy_data.zip +3 -0
  19. dummy/X-CODAH-zh/1.1.0/dummy_data.zip +3 -0
  20. dummy/X-CSQA-ar/1.1.0/dummy_data.zip +3 -0
  21. dummy/X-CSQA-de/1.1.0/dummy_data.zip +3 -0
  22. dummy/X-CSQA-en/1.1.0/dummy_data.zip +3 -0
  23. dummy/X-CSQA-es/1.1.0/dummy_data.zip +3 -0
  24. dummy/X-CSQA-fr/1.1.0/dummy_data.zip +3 -0
  25. dummy/X-CSQA-hi/1.1.0/dummy_data.zip +3 -0
  26. dummy/X-CSQA-it/1.1.0/dummy_data.zip +3 -0
  27. dummy/X-CSQA-jap/1.1.0/dummy_data.zip +3 -0
  28. dummy/X-CSQA-nl/1.1.0/dummy_data.zip +3 -0
  29. dummy/X-CSQA-pl/1.1.0/dummy_data.zip +3 -0
  30. dummy/X-CSQA-pt/1.1.0/dummy_data.zip +3 -0
  31. dummy/X-CSQA-ru/1.1.0/dummy_data.zip +3 -0
  32. dummy/X-CSQA-sw/1.1.0/dummy_data.zip +3 -0
  33. dummy/X-CSQA-ur/1.1.0/dummy_data.zip +3 -0
  34. dummy/X-CSQA-vi/1.1.0/dummy_data.zip +3 -0
  35. dummy/X-CSQA-zh/1.1.0/dummy_data.zip +3 -0
  36. xcsr.py +278 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,243 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ - machine-generated
+ languages:
+ - en
+ - zh
+ - de
+ - es
+ - fr
+ - it
+ - ja
+ - nl
+ - pl
+ - pt
+ - ru
+ - ar
+ - vi
+ - hi
+ - sw
+ - ur
+ licenses:
+ - mit
+ multilinguality:
+ - multilingual
+ pretty_name: X-CSR
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - extended|codah
+ - extended|commonsense_qa
+ task_categories:
+ - question-answering
+ task_ids:
+ - multiple-choice-qa
+ ---
+
+ # Dataset Card for X-CSR
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://inklab.usc.edu//XCSR/
+ - **Repository:** https://github.com/INK-USC/XCSR
+ - **Paper:** https://arxiv.org/abs/2106.06937
+ - **Leaderboard:** https://inklab.usc.edu//XCSR/leaderboard
+ - **Point of Contact:** https://yuchenlin.xyz/
+
+ ### Dataset Summary
+
+ To evaluate multilingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, into 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for obtaining meaningful analysis until more human-translated datasets become available.
+
+ ### Supported Tasks and Leaderboards
+
+ Multiple-choice question answering; a leaderboard is maintained at https://inklab.usc.edu//XCSR/leaderboard.
+
+ ### Languages
+
+ X-CSR covers 16 languages in total: {en, zh, de, es, fr, it, jap, nl, pl, pt, ru, ar, vi, hi, sw, ur}.
+
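+ The 32 configuration names in this repository combine a benchmark prefix (`X-CSQA` or `X-CODAH`) with one of these codes. A minimal sketch that enumerates them (note that this dataset uses `jap` for Japanese, while the YAML metadata above uses the ISO code `ja`):
+
+ ```
+ # The 16 language codes used by the configuration names.
+ LANGUAGES = ["en", "zh", "de", "es", "fr", "it", "jap", "nl", "pl", "pt",
+              "ru", "ar", "vi", "hi", "sw", "ur"]
+
+ # e.g. "X-CSQA-en", "X-CODAH-zh", ...
+ CONFIGS = [f"{bench}-{lang}" for bench in ("X-CSQA", "X-CODAH") for lang in LANGUAGES]
+ assert len(CONFIGS) == 32
+ ```
+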
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example of the X-CSQA dataset:
+ ```
+ {
+   "id": "be1920f7ba5454ad",  # an id shared by all languages
+   "lang": "en",  # one of the 16 language codes
+   "question": {
+     "stem": "What will happen to your knowledge with more learning?",  # question text
+     "choices": [
+       {"label": "A", "text": "headaches"},
+       {"label": "B", "text": "bigger brain"},
+       {"label": "C", "text": "education"},
+       {"label": "D", "text": "growth"},
+       {"label": "E", "text": "knowing more"}
+     ]
+   },
+   "answerKey": "D"  # hidden for test data
+ }
+ ```
+
+ An example of the X-CODAH dataset:
+ ```
+ {
+   "id": "b8eeef4a823fcd4b",  # an id shared by all languages
+   "lang": "en",  # one of the 16 language codes
+   "question_tag": "o",  # one of 6 question types
+   "question": {
+     "stem": " ",  # always a blank as a dummy question
+     "choices": [
+       {"label": "A", "text": "Jennifer loves her school very much, she plans to drop every courses."},
+       {"label": "B", "text": "Jennifer loves her school very much, she is never absent even when she's sick."},
+       {"label": "C", "text": "Jennifer loves her school very much, she wants to get a part-time job."},
+       {"label": "D", "text": "Jennifer loves her school very much, she quits school happily."}
+     ]
+   },
+   "answerKey": "B"  # hidden for test data
+ }
+ ```
+
+ ### Data Fields
+
+ - id: an id shared by all languages
+ - lang: one of the 16 language codes
+ - question_tag: one of 6 question types (X-CODAH only)
+ - question:
+   - stem: the question text (always a single blank used as a dummy question in X-CODAH)
+   - choices: a list of answers, where each answer has:
+     - label: a string identifier for the answer
+     - text: the answer text
+ - answerKey: the label of the correct answer (hidden for test data)
+
+ ### Data Splits
+
+ - X-CSQA: There are 8,888 examples for training in English, 1,000 for development in each language, and 1,074 examples for testing in each language.
+ - X-CODAH: There are 8,476 examples for training in English, 300 for development in each language, and 1,000 examples for testing in each language.
+
+ Note that this repository exposes only the development and test splits; the English training data comes from the original CSQA and CODAH datasets.
+
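+ A minimal sketch of loading one configuration with the `datasets` library (assuming this dataset is referenced by its Hub id, `xcsr`):
+
+ ```
+ from datasets import load_dataset
+
+ # Each configuration yields only "validation" and "test" splits.
+ xcsqa_es = load_dataset("xcsr", "X-CSQA-es")
+ print(xcsqa_es["validation"][0])  # a single multiple-choice example
+ ```
+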
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ To evaluate multilingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH.
+
+ The details of the dataset construction, especially the translation procedures, can be found in section A of the appendix of the [paper](https://inklab.usc.edu//XCSR/XCSR_paper.pdf).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ MIT (per the `licenses` field in the YAML metadata above).
+
+ ### Citation Information
+ ```
+ # X-CSR
+ @inproceedings{lin-etal-2021-xcsr,
+     title = "Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning",
+     author = "Lin, Bill Yuchen and Lee, Seyeon and Qiao, Xiaoyang and Ren, Xiang",
+     booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP 2021)",
+     year = "2021",
+     note = {to appear}
+ }
+
+ # CSQA
+ @inproceedings{Talmor2019commonsenseqaaq,
+     address = {Minneapolis, Minnesota},
+     author = {Talmor, Alon and Herzig, Jonathan and Lourie, Nicholas and Berant, Jonathan},
+     booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
+     doi = {10.18653/v1/N19-1421},
+     pages = {4149--4158},
+     publisher = {Association for Computational Linguistics},
+     title = {CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge},
+     url = {https://www.aclweb.org/anthology/N19-1421},
+     year = {2019}
+ }
+
+ # CODAH
+ @inproceedings{Chen2019CODAHAA,
+     address = {Minneapolis, USA},
+     author = {Chen, Michael and D{'}Arcy, Mike and Liu, Alisa and Fernandez, Jared and Downey, Doug},
+     booktitle = {Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for {NLP}},
+     doi = {10.18653/v1/W19-2008},
+     pages = {63--69},
+     publisher = {Association for Computational Linguistics},
+     title = {CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense},
+     url = {https://www.aclweb.org/anthology/W19-2008},
+     year = {2019}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [Bill Yuchen Lin](https://yuchenlin.xyz/), [Seyeon Lee](https://seyeon-lee.github.io/), [Xiaoyang Qiao](https://www.linkedin.com/in/xiaoyang-qiao/), [Xiang Ren](http://www-bcf.usc.edu/~xiangren/) for adding this dataset.
dataset_infos.json ADDED
The diff for this file is too large to render. See raw diff
dummy/X-CODAH-ar/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1cce650a5013cc08672214364479196c18c0e9d455b0fe49d29179c19716880e
+ size 2966
dummy/X-CODAH-de/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1efb0af331c806873ccf1a376851ffc3194b63755ea6f65b30a5223a76c3dd6d
+ size 2639
dummy/X-CODAH-en/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc8dff924a689266d6a679a7ba6c5db081ac9db824f605872b4a20874518bfb4
+ size 3380
dummy/X-CODAH-es/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8438a08e8446ed818a344216290c9525e0c14925029d87aba7ed0fed1af46eee
+ size 2394
dummy/X-CODAH-fr/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb3f45d60324c8007ec6d142330f078a5f6609dda1b1914a6187e5f8cd263f47
+ size 2675
dummy/X-CODAH-hi/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9135a1d7c1a3842b5dacacd28674a871489fb4e8a96763c3d61ab01e9a5fa5c6
+ size 3307
dummy/X-CODAH-it/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d857ca059fc0a8ae507b939f03ffd6a18fe5ae9ad2214ee5e798d5981cdd6508
+ size 2377
dummy/X-CODAH-jap/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40d29e19e003b0922113cbf42794871995b75a9e975124b53a29fb64d73de018
+ size 2842
dummy/X-CODAH-nl/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a496aaa1ebd696131d280463c73469f4bd01e8734737c94f1583a1bf960ec883
+ size 2468
dummy/X-CODAH-pl/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c015733775caf0ce1f029fabdf3f4237ea750015a1e8ce0c2ffa1ced9e74e8ab
+ size 2505
dummy/X-CODAH-pt/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8b7a90550091cd0b528c44b21a4a06d0d6ae5c61665df1a85f8f8ca32c79560
+ size 2331
dummy/X-CODAH-ru/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:582bb66dbef05a664f8db770344ba80afab37bc5b1017b6b14815b65ac93b133
+ size 3314
dummy/X-CODAH-sw/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:502cf7ad72d0e0ffdabad90c9067dcba88a0a9af524e8a8a8180abfe5684b9be
+ size 2451
dummy/X-CODAH-ur/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6507bb369ce86acf6e21f44c58d12cac6c3002c35fc824baf09753b0969c50a7
+ size 3064
dummy/X-CODAH-vi/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1a003d6efa7f5bd02b103d6f666e3100c91fe2af68fe387548b1a292a7b9a06
+ size 2611
dummy/X-CODAH-zh/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34bce0c015867a197ad3ee0d993be64bbf80cad341ac9cbc2a0b4a27eac709af
+ size 2760
dummy/X-CSQA-ar/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-de/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-en/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-es/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-fr/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-hi/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-it/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-jap/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-nl/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-pl/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-pt/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-ru/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-sw/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-ur/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-vi/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
dummy/X-CSQA-zh/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f61bc452f500da8af0ce4c9939b58e61f0c4337aee3f131bdbb9660199c71496
+ size 2916
xcsr.py ADDED
@@ -0,0 +1,278 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """XCSR: A dataset for cross-lingual commonsense reasoning."""
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ # X-CSR
+ @inproceedings{lin-etal-2021-xcsr,
+     title = "Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning",
+     author = "Lin, Bill Yuchen and Lee, Seyeon and Qiao, Xiaoyang and Ren, Xiang",
+     booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP 2021)",
+     year = "2021",
+     note = {to appear}
+ }
+
+ # CSQA
+ @inproceedings{Talmor2019commonsenseqaaq,
+     address = {Minneapolis, Minnesota},
+     author = {Talmor, Alon and Herzig, Jonathan and Lourie, Nicholas and Berant, Jonathan},
+     booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
+     doi = {10.18653/v1/N19-1421},
+     pages = {4149--4158},
+     publisher = {Association for Computational Linguistics},
+     title = {CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge},
+     url = {https://www.aclweb.org/anthology/N19-1421},
+     year = {2019}
+ }
+
+ # CODAH
+ @inproceedings{Chen2019CODAHAA,
+     address = {Minneapolis, USA},
+     author = {Chen, Michael and D{'}Arcy, Mike and Liu, Alisa and Fernandez, Jared and Downey, Doug},
+     booktitle = {Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for {NLP}},
+     doi = {10.18653/v1/W19-2008},
+     pages = {63--69},
+     publisher = {Association for Computational Linguistics},
+     title = {CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense},
+     url = {https://www.aclweb.org/anthology/W19-2008},
+     year = {2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ To evaluate multilingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, into 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for obtaining meaningful analysis until more human-translated datasets become available.
+ """
+
+ _HOMEPAGE = "https://inklab.usc.edu//XCSR/"
+
+ # The datasets library does not host the data; this URL points to the original files.
+ _URL = "https://inklab.usc.edu/XCSR/xcsr_datasets.zip"
+
+ _LANGUAGES = ("en", "zh", "de", "es", "fr", "it", "jap", "nl", "pl", "pt", "ru", "ar", "vi", "hi", "sw", "ur")
+
+
+ class XcsrConfig(datasets.BuilderConfig):
+     """BuilderConfig for XCSR."""
+
+     def __init__(self, name: str, language: str, **kwargs):
+         """BuilderConfig for XCSR.
+
+         Args:
+             name: the configuration name, "X-CSQA-<lang>" or "X-CODAH-<lang>".
+             language: one of {en, zh, de, es, fr, it, jap, nl, pl, pt, ru, ar, vi, hi, sw, ur}.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(XcsrConfig, self).__init__(**kwargs)
+         self.name = name
+         self.language = language
+
+
+ class Xcsr(datasets.GeneratorBasedBuilder):
+     """XCSR: A benchmark for evaluating multilingual language models (ML-LMs) for commonsense reasoning in a
+     cross-lingual zero-shot transfer setting."""
+
+     VERSION = datasets.Version("1.1.0", "")
+     BUILDER_CONFIG_CLASS = XcsrConfig
+     BUILDER_CONFIGS = [
+         XcsrConfig(
+             name="X-CSQA-" + lang,
+             language=lang,
+             version=datasets.Version("1.1.0", ""),
+             description=f"Plain text import of X-CSQA for the {lang} language",
+         )
+         for lang in _LANGUAGES
+     ] + [
+         XcsrConfig(
+             name="X-CODAH-" + lang,
+             language=lang,
+             version=datasets.Version("1.1.0", ""),
+             description=f"Plain text import of X-CODAH for the {lang} language",
+         )
+         for lang in _LANGUAGES
+     ]
+
+     def _info(self):
+         # The two benchmarks share one schema, except that X-CODAH carries an extra `question_tag` field.
+         if self.config.name.startswith("X-CSQA"):
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "lang": datasets.Value("string"),
+                     "question": datasets.features.Sequence(
+                         {
+                             "stem": datasets.Value("string"),
+                             "choices": datasets.features.Sequence(
+                                 {
+                                     "label": datasets.Value("string"),
+                                     "text": datasets.Value("string"),
+                                 }
+                             ),
+                         }
+                     ),
+                     "answerKey": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name.startswith("X-CODAH"):
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "lang": datasets.Value("string"),
+                     "question_tag": datasets.Value("string"),
+                     "question": datasets.features.Sequence(
+                         {
+                             "stem": datasets.Value("string"),
+                             "choices": datasets.features.Sequence(
+                                 {
+                                     "label": datasets.Value("string"),
+                                     "text": datasets.Value("string"),
+                                 }
+                             ),
+                         }
+                     ),
+                     "answerKey": datasets.Value("string"),
+                 }
+             )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             # The features differ between the two benchmarks, so they are built above.
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # dl_manager downloads and extracts the archive and returns the path to the cached folder.
+         data_dir = dl_manager.download_and_extract(_URL)
+         if self.config.name.startswith("X-CSQA"):
+             sub_test_path = "X-CSR_datasets/X-CSQA/" + self.config.language + "/test.jsonl"
+             sub_dev_path = "X-CSR_datasets/X-CSQA/" + self.config.language + "/dev.jsonl"
+         elif self.config.name.startswith("X-CODAH"):
+             sub_test_path = "X-CSR_datasets/X-CODAH/" + self.config.language + "/test.jsonl"
+             sub_dev_path = "X-CSR_datasets/X-CODAH/" + self.config.language + "/dev.jsonl"
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, sub_test_path),
+                     "split": "test",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, sub_dev_path),
+                     "split": "dev",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples as (key, example) tuples; parameters are unpacked from `gen_kwargs`."""
+         key = 0
+         if self.config.name.startswith("X-CSQA"):
+             with open(filepath, encoding="utf-8") as f:
+                 for row in f:
+                     data = json.loads(row)
+                     question = data["question"]
+                     # The answer key is hidden for the test split.
+                     answerkey = "" if split == "test" else data["answerKey"]
+
+                     yield key, {
+                         "id": data["id"],
+                         "lang": data["lang"],
+                         "question": {
+                             "stem": question["stem"],
+                             "choices": [
+                                 {"label": choice["label"], "text": choice["text"]} for choice in question["choices"]
+                             ],
+                         },
+                         "answerKey": answerkey,
+                     }
+                     key += 1
+         elif self.config.name.startswith("X-CODAH"):
+             with open(filepath, encoding="utf-8") as f:
+                 for row in f:
+                     data = json.loads(row)
+                     question = data["question"]
+                     # The answer key is hidden for the test split.
+                     answerkey = "" if split == "test" else data["answerKey"]
+
+                     yield key, {
+                         "id": data["id"],
+                         "lang": data["lang"],
+                         "question_tag": data["question_tag"],
+                         "question": {
+                             "stem": question["stem"],
+                             "choices": [
+                                 {"label": choice["label"], "text": choice["text"]} for choice in question["choices"]
+                             ],
+                         },
+                         "answerKey": answerkey,
+                     }
+                     key += 1
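A minimal sketch of exercising this script locally with the `datasets` library (the local path is illustrative):

```
from datasets import load_dataset

# Load one configuration directly from the script file; only the
# "validation" and "test" splits are defined by _split_generators.
ds = load_dataset("./xcsr.py", "X-CODAH-zh")
print(ds["validation"][0])
```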