Multilinguality: multilingual
Size Categories: 1K<n<10K
Annotations Creators: crowdsourced
albertvillanova (HF staff) committed
Commit 30535c5
Parent: cc36266

Fix string features of xcsr dataset (#5024)
* Fix string features to avoid character splitting

* Simplify _generate_examples

* Simplify _split_generators

* Clean code

* Update metadata JSON

* Update citation information

* Update metadata JSON

Commit from https://github.com/huggingface/datasets/commit/0a0e8858922be122e0eecf2fe9d0a1f1cd9b9f6d
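The headline fix targets a classic pitfall: when code that expects a sequence receives a bare string, iterating it silently yields single characters instead of the intact value. A minimal pure-Python sketch of that failure mode (the example stem is invented for illustration):

```python
# Illustration of the bug class this commit fixes: a string handed to
# sequence-oriented code gets split into one-character "elements".
stem = "What do people aim to do at work?"  # hypothetical question stem

broken = list(stem)  # string treated as a sequence: split into characters
assert broken[:4] == ["W", "h", "a", "t"]

intact = [stem]  # string kept as a single value inside a real sequence
assert intact[0].startswith("What do people")
```

The commit applies the same idea at the feature-schema level: `question` is declared as a plain dict of string features rather than being wrapped in a `Sequence`, so the stem stays whole.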

Files changed (3):
1. README.md +16 -6
2. dataset_infos.json +0 -0
3. xcsr.py +51 -137
README.md CHANGED
@@ -201,14 +201,24 @@ The details of the dataset construction, especially the translation procedures,
 [Needs More Information]
 
 ### Citation Information
+
 ```
 # X-CSR
-@inproceedings{lin-etal-2021-xcsr,
-    title = "Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning",
-    author = "Lin, Bill Yuchen and Lee, Seyeon and Qiao, Xiaoyang and Ren, Xiang",
-    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP 2021)",
+@inproceedings{lin-etal-2021-common,
+    title = "Common Sense Beyond {E}nglish: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning",
+    author = "Lin, Bill Yuchen and
+      Lee, Seyeon and
+      Qiao, Xiaoyang and
+      Ren, Xiang",
+    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
+    month = aug,
     year = "2021",
-    note={to appear}
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2021.acl-long.102",
+    doi = "10.18653/v1/2021.acl-long.102",
+    pages = "1274--1287",
+    abstract = "Commonsense reasoning research has so far been limited to English. We aim to evaluate and improve popular multilingual language models (ML-LMs) to help advance commonsense reasoning (CSR) beyond English. We collect the Mickey corpus, consisting of 561k sentences in 11 different languages, which can be used for analyzing and improving ML-LMs. We propose Mickey Probe, a language-general probing task for fairly evaluating the common sense of popular ML-LMs across different languages. In addition, we also create two new datasets, X-CSQA and X-CODAH, by translating their English versions to 14 other languages, so that we can evaluate popular ML-LMs for cross-lingual commonsense reasoning. To improve the performance beyond English, we propose a simple yet effective method {---} multilingual contrastive pretraining (MCP). It significantly enhances sentence representations, yielding a large performance gain on both benchmarks (e.g., +2.7{\%} accuracy for X-CSQA over XLM-R{\_}L).",
 }
 
 # CSQA
@@ -240,4 +250,4 @@ The details of the dataset construction, especially the translation procedures,
 
 ### Contributions
 
-Thanks to [Bill Yuchen Lin](https://yuchenlin.xyz/), [Seyeon Lee](https://seyeon-lee.github.io/), [Xiaoyang Qiao](https://www.linkedin.com/in/xiaoyang-qiao/), [Xiang Ren](http://www-bcf.usc.edu/~xiangren/) for adding this dataset.
+Thanks to [Bill Yuchen Lin](https://yuchenlin.xyz/), [Seyeon Lee](https://seyeon-lee.github.io/), [Xiaoyang Qiao](https://www.linkedin.com/in/xiaoyang-qiao/), [Xiang Ren](http://www-bcf.usc.edu/~xiangren/) for adding this dataset.
dataset_infos.json CHANGED
The diff for this file is too large to render. See raw diff
 
xcsr.py CHANGED
@@ -21,16 +21,22 @@ import os
 import datasets
 
 
-# TODO: Add BibTeX citation
-# Find for instance the citation on arxiv or on the dataset repo/website
 _CITATION = """\
 # X-CSR
-@inproceedings{lin-etal-2021-xcsr,
-    title = "Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning",
-    author = "Lin, Bill Yuchen and Lee, Seyeon and Qiao, Xiaoyang and Ren, Xiang",
-    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP 2021)",
+@inproceedings{lin-etal-2021-common,
+    title = "Common Sense Beyond {E}nglish: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning",
+    author = "Lin, Bill Yuchen and
+      Lee, Seyeon and
+      Qiao, Xiaoyang and
+      Ren, Xiang",
+    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
+    month = aug,
     year = "2021",
-    note={to appear}
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2021.acl-long.102",
+    doi = "10.18653/v1/2021.acl-long.102",
+    pages = "1274--1287",
 }
 
 # CSQA
@@ -60,19 +66,15 @@ _CITATION = """\
 }
 """
 
-# TODO: Add description of the dataset here
-# You can copy an official description
 _DESCRIPTION = """\
 To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
 """
 
-# TODO: Add a link to an official homepage for the dataset here
 _HOMEPAGE = "https://inklab.usc.edu//XCSR/"
 
 # TODO: Add the licence for the dataset here if you can find it
 # _LICENSE = ""
 
-# TODO: Add link to the official dataset URLs here
 # The HuggingFace dataset library don't host the datasets but only point to the original files
 # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
 
@@ -84,26 +86,25 @@ _LANGUAGES = ("en", "zh", "de", "es", "fr", "it", "jap", "nl", "pl", "pt", "ru",
 class XcsrConfig(datasets.BuilderConfig):
     """BuilderConfig for XCSR."""
 
-    def __init__(self, name: str, language: str, languages=None, **kwargs):
+    def __init__(self, subset: str, language: str, **kwargs):
         """BuilderConfig for XCSR.
         Args:
             language: One of {en, zh, de, es, fr, it, jap, nl, pl, pt, ru, ar, vi, hi, sw, ur}, or all_languages
             **kwargs: keyword arguments forwarded to super.
         """
-        super(XcsrConfig, self).__init__(**kwargs)
-        self.name = name
+        super().__init__(name=f"{subset}-{language}", **kwargs)
+        self.subset = subset
         self.language = language
 
 
-# TODO: Name of the dataset usually match the script name with CamelCase instead of snake_case
 class Xcsr(datasets.GeneratorBasedBuilder):
-    """XCSR: A dataset for evaluating multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting"""
+    """XCSR: A dataset for evaluating multi-lingual language models (ML-LMs) for commonsense reasoning in a
+    cross-lingual zero-shot transfer setting"""
 
-    VERSION = datasets.Version("1.1.0", "")
     BUILDER_CONFIG_CLASS = XcsrConfig
     BUILDER_CONFIGS = [
         XcsrConfig(
-            name="X-CSQA-" + lang,
+            subset="X-CSQA",
             language=lang,
             version=datasets.Version("1.1.0", ""),
             description=f"Plain text import of X-CSQA for the {lang} language",
@@ -111,7 +112,7 @@ class Xcsr(datasets.GeneratorBasedBuilder):
         for lang in _LANGUAGES
     ] + [
         XcsrConfig(
-            name="X-CODAH-" + lang,
+            subset="X-CODAH",
             language=lang,
             version=datasets.Version("1.1.0", ""),
             description=f"Plain text import of X-CODAH for the {lang} language",
@@ -120,159 +121,72 @@ class Xcsr(datasets.GeneratorBasedBuilder):
     ]
 
     def _info(self):
-        # TODO: This method specifies the datasets.DatasetInfo object which contains informations and typings for the dataset
-        if self.config.name.startswith("X-CSQA"):
+        if self.config.subset == "X-CSQA":
             features = datasets.Features(
                 {
                     "id": datasets.Value("string"),
                     "lang": datasets.Value("string"),
-                    "question": datasets.features.Sequence(
-                        {
-                            "stem": datasets.Value("string"),
-                            "choices": datasets.features.Sequence(
-                                {
-                                    "label": datasets.Value("string"),
-                                    "text": datasets.Value("string"),
-                                }
-                            ),
-                        }
-                    ),
+                    "question": {
+                        "stem": datasets.Value("string"),
+                        "choices": datasets.features.Sequence(
+                            {
+                                "label": datasets.Value("string"),
+                                "text": datasets.Value("string"),
+                            }
+                        ),
+                    },
                     "answerKey": datasets.Value("string"),
                 }
             )
-        elif self.config.name.startswith("X-CODAH"):
+        elif self.config.subset == "X-CODAH":
             features = datasets.Features(
                 {
                     "id": datasets.Value("string"),
                     "lang": datasets.Value("string"),
                     "question_tag": datasets.Value("string"),
-                    "question": datasets.features.Sequence(
-                        {
-                            "stem": datasets.Value("string"),
-                            "choices": datasets.features.Sequence(
-                                {
-                                    "label": datasets.Value("string"),
-                                    "text": datasets.Value("string"),
-                                }
-                            ),
-                        }
-                    ),
+                    "question": {
+                        "stem": datasets.Value("string"),
+                        "choices": datasets.features.Sequence(
+                            {
+                                "label": datasets.Value("string"),
+                                "text": datasets.Value("string"),
+                            }
+                        ),
+                    },
                     "answerKey": datasets.Value("string"),
                 }
             )
 
         return datasets.DatasetInfo(
-            # This is the description that will appear on the datasets page.
             description=_DESCRIPTION,
-            # This defines the different columns of the dataset and their types
-            features=features,  # Here we define them above because they are different between the two configurations
-            # If there's a common (input, target) tuple from the features,
-            # specify them here. They'll be used if as_supervised=True in
-            # builder.as_dataset.
-            supervised_keys=None,
-            # Homepage of the dataset for documentation
+            features=features,
             homepage=_HOMEPAGE,
-            # License for the dataset if available
-            # license=_LICENSE,
-            # Citation for the dataset
            citation=_CITATION,
        )
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        # TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
-        # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
-
-        # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
-        # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
-        # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
-        my_urls = _URL
-        data_dir = dl_manager.download_and_extract(my_urls)
-        if self.config.name.startswith("X-CSQA"):
-            sub_test_path = "X-CSR_datasets/X-CSQA/" + self.config.language + "/test.jsonl"
-            sub_dev_path = "X-CSR_datasets/X-CSQA/" + self.config.language + "/dev.jsonl"
-        elif self.config.name.startswith("X-CODAH"):
-            sub_test_path = "X-CSR_datasets/X-CODAH/" + self.config.language + "/test.jsonl"
-            sub_dev_path = "X-CSR_datasets/X-CODAH/" + self.config.language + "/dev.jsonl"
-
+        data_dir = dl_manager.download_and_extract(_URL)
+        filepath = os.path.join(data_dir, "X-CSR_datasets", self.config.subset, self.config.language, "{split}.jsonl")
         return [
             datasets.SplitGenerator(
                 name=datasets.Split.TEST,
                 gen_kwargs={
-                    "filepath": os.path.join(data_dir, sub_test_path),
-                    "split": "test",
+                    "filepath": filepath.format(split="test"),
                 },
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.VALIDATION,
                 gen_kwargs={
-                    "filepath": os.path.join(data_dir, sub_dev_path),
-                    "split": "dev",
+                    "filepath": filepath.format(split="dev"),
                 },
             ),
         ]
 
-    def _generate_examples(
-        self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-    ):
+    def _generate_examples(self, filepath):
         """Yields examples as (key, example) tuples."""
-        # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
-        # The `key` is here for legacy reason (tfds) and is not important in itself.
-        key = 0
-        if self.config.name.startswith("X-CSQA"):
-            with open(filepath, encoding="utf-8") as f:
-                for _id, row in enumerate(f):
-                    data = json.loads(row)
-
-                    ID = data["id"]
-                    lang = data["lang"]
-                    question = data["question"]
-                    stem = question["stem"]
-                    choices = question["choices"]
-                    labels = [label["label"] for label in choices]
-                    texts = [text["text"] for text in choices]
-
-                    if split == "test":
-                        answerkey = ""
-                    else:
-                        answerkey = data["answerKey"]
-
-                    yield key, {
-                        "id": ID,
-                        "lang": lang,
-                        "question": {
-                            "stem": stem,
-                            "choices": [{"label": label, "text": text} for label, text in zip(labels, texts)],
-                        },
-                        "answerKey": answerkey,
-                    }
-                    key += 1
-        elif self.config.name.startswith("X-CODAH"):
-            with open(filepath, encoding="utf-8") as f:
-                for _id, row in enumerate(f):
-                    data = json.loads(row)
-                    ID = data["id"]
-                    lang = data["lang"]
-                    question_tag = data["question_tag"]
-                    question = data["question"]
-                    stem = question["stem"]
-                    choices = question["choices"]
-                    labels = [label["label"] for label in choices]
-                    texts = [text["text"] for text in choices]
-
-                    if split == "test":
-                        answerkey = ""
-                    else:
-                        answerkey = data["answerKey"]
-
-                    yield key, {
-                        "id": ID,
-                        "lang": lang,
-                        "question_tag": question_tag,
-                        "question": {
-                            "stem": stem,
-                            "choices": [{"label": label, "text": text} for label, text in zip(labels, texts)],
-                        },
-                        "answerKey": answerkey,
-                    }
-                    key += 1
+        with open(filepath, encoding="utf-8") as f:
+            for key, row in enumerate(f):
+                data = json.loads(row)
+                _ = data.setdefault("answerKey", "")
+                yield key, data
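The simplified `_generate_examples` works because each JSONL row already matches the declared features, so it can be yielded almost verbatim; `setdefault` supplies the empty `answerKey` for unlabeled test rows. A self-contained sketch of the same loop over in-memory rows (both rows are invented for illustration):

```python
import json

# Two hypothetical JSONL rows: a dev-style row with a label, and a
# test-style row without one.
jsonl_rows = [
    '{"id": "q1", "lang": "en", "question": {"stem": "Where is glue used?", '
    '"choices": [{"label": "A", "text": "office"}, {"label": "B", "text": "art class"}]}, "answerKey": "A"}',
    '{"id": "q2", "lang": "en", "question": {"stem": "What comes before shopping?", '
    '"choices": [{"label": "A", "text": "get money"}, {"label": "B", "text": "sleep"}]}}',
]

def generate_examples(rows):
    # Mirrors the simplified generator: parse each row, default the missing
    # answerKey to "" (the test split carries no labels), yield (key, example).
    for key, row in enumerate(rows):
        data = json.loads(row)
        data.setdefault("answerKey", "")
        yield key, data

examples = dict(generate_examples(jsonl_rows))
assert examples[0]["answerKey"] == "A"
assert examples[1]["answerKey"] == ""  # test-style row gets the empty default
```

Because the old per-field unpack-and-repack was a no-op once the schema matched the raw JSON, this collapses 137 lines into a single loop without changing the yielded examples.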