albertvillanova (HF staff) committed
Commit 0d03fdb (1 parent: f45cb5a)

Support streaming XGLUE dataset (#4249)


* Support streaming XGLUE dataset

* Fix dataset card

* Fix dataset card by adding task ID for ntg config

Commit from https://github.com/huggingface/datasets/commit/e74d69c1d41dd320e77ca7244c624592f1a9fa3d
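The core of the change is replacing `dl_manager.download_and_extract` plus `open()` on extracted paths with `dl_manager.download` plus `dl_manager.iter_archive`, which yields `(path, file-like)` pairs straight out of the archive so nothing has to be extracted to disk. A minimal sketch of that pattern, using the standard-library `tarfile` module to stand in for the `datasets` download manager (this mimics the interface; it is not the actual `datasets` implementation):

```python
import io
import tarfile


def iter_archive(archive_bytes):
    """Yield (path, file-like) pairs from a tar archive, mimicking the
    dl_manager.iter_archive interface this commit switches to."""
    with tarfile.open(fileobj=io.BytesIO(archive_bytes), mode="r") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member)


def generate_examples(archive, data_path):
    # Match on the member path instead of opening a file on disk,
    # as the rewritten _generate_examples does.
    for path, file in archive:
        if path == data_path:
            for idx, line in enumerate(file):
                yield idx, line.decode("utf-8").strip()


# Build a tiny in-memory archive for demonstration.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello\nworld\n"
    info = tarfile.TarInfo(name="folder/data.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

examples = list(generate_examples(iter_archive(buf.getvalue()), "folder/data.txt"))
```

Because members are matched by path while streaming, the loader never needs the extracted directory layout, which is why the `os.path.join` calls below become plain `/`-joined archive paths.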

Files changed (2)
  1. README.md +21 -8
  2. xglue.py +84 -74
README.md CHANGED
@@ -262,7 +262,8 @@ task_ids:
   - topic-classification
   ner:
   - named-entity-recognition
-  ntg: []
+  ntg:
+  - news-articles-headline-generation
   paws-x:
   - text-classification-other-paraphrase-identification
   pos:
@@ -284,6 +285,7 @@ pretty_name: XGLUE
 # Dataset Card for XGLUE
 
 ## Table of Contents
+- [Table of Contents](#table-of-contents)
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
@@ -323,11 +325,28 @@ The following table shows which languages are present as validation and test dat
 
 Therefore, for each config, a cross-lingual pre-trained model should be fine-tuned on the English training data, and evaluated on all languages.
 
-### Leaderboards
+### Supported Tasks and Leaderboards
 
 The XGLUE leaderboard can be found on the [homepage](https://microsoft.github.io/XGLUE/) and
 consists of an XGLUE-Understanding Score (the average of the tasks `ner`, `pos`, `mlqa`, `nc`, `xnli`, `paws-x`, `qadsm`, `wpr`, `qam`) and an XGLUE-Generation Score (the average of the tasks `qg`, `ntg`).
 
+### Languages
+
+For all tasks (configurations), the "train" split is in English (`en`).
+
+For each task, the "validation" and "test" splits are present in these languages:
+- ner: `en`, `de`, `es`, `nl`
+- pos: `en`, `de`, `es`, `nl`, `bg`, `el`, `fr`, `pl`, `tr`, `vi`, `zh`, `ur`, `hi`, `it`, `ar`, `ru`, `th`
+- mlqa: `en`, `de`, `ar`, `es`, `hi`, `vi`, `zh`
+- nc: `en`, `de`, `es`, `fr`, `ru`
+- xnli: `en`, `ar`, `bg`, `de`, `el`, `es`, `fr`, `hi`, `ru`, `sw`, `th`, `tr`, `ur`, `vi`, `zh`
+- paws-x: `en`, `de`, `es`, `fr`
+- qadsm: `en`, `de`, `fr`
+- wpr: `en`, `de`, `es`, `fr`, `it`, `pt`, `zh`
+- qam: `en`, `de`, `fr`
+- qg: `en`, `de`, `es`, `fr`, `it`, `pt`
+- ntg: `en`, `de`, `es`, `fr`, `ru`
+
 ## Dataset Structure
 
 ### Data Instances
@@ -720,12 +739,6 @@ The following table shows the number of data samples/number of rows for each spl
 |----|-----:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
 |xnli|392702| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010|
 
-The following table shows the number of data samples/number of rows for each split in mlqa.
-
-| |train|validation.en|validation.de|validation.ar|validation.es|validation.hi|validation.vi|validation.zh|test.en|test.de|test.ar|test.es|test.hi|test.vi|test.zh|
-|----|----:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|------:|------:|
-|mlqa|87599| 1148| 512| 517| 500| 507| 511| 504| 11590| 4517| 5335| 5253| 4918| 5495| 5137|
-
 
 #### nc
 
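The "Languages" section added to the card maps each config to its validation/test languages; in the loader these surface as one English `train` split plus per-language `validation.{lang}` and `test.{lang}` splits. A small sketch of that naming scheme (the `split_names` helper is hypothetical, shown only to illustrate the convention):

```python
# Per-config language lists from the new "Languages" section of the card
# (only two configs shown here).
LANGUAGES = {
    "ner": ["en", "de", "es", "nl"],
    "ntg": ["en", "de", "es", "fr", "ru"],
}


def split_names(config):
    """Build the split names the loader exposes: a single English train
    split plus per-language validation and test splits."""
    langs = LANGUAGES[config]
    return (
        ["train"]
        + [f"validation.{lang}" for lang in langs]
        + [f"test.{lang}" for lang in langs]
    )
```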
xglue.py CHANGED
@@ -18,7 +18,6 @@
 
 
 import json
-import os
 import textwrap
 
 import datasets
@@ -75,15 +74,15 @@ _LANGUAGES = {
 
 _PATHS = {
     "mlqa": {
-        "train": os.path.join("squad1.1", "train-v1.1.json"),
-        "dev": os.path.join("MLQA_V1", "dev", "dev-context-{0}-question-{0}.json"),
-        "test": os.path.join("MLQA_V1", "test", "test-context-{0}-question-{0}.json"),
+        "train": "squad1.1/train-v1.1.json",
+        "dev": "MLQA_V1/dev/dev-context-{0}-question-{0}.json",
+        "test": "MLQA_V1/test/test-context-{0}-question-{0}.json",
     },
     "xnli": {"train": "multinli.train.en.tsv", "dev": "{}.dev", "test": "{}.test"},
     "paws-x": {
-        "train": os.path.join("en", "train.tsv"),
-        "dev": os.path.join("{}", "dev_2k.tsv"),
-        "test": os.path.join("{}", "test_2k.tsv"),
+        "train": "en/train.tsv",
+        "dev": "{}/dev_2k.tsv",
+        "test": "{}/test_2k.tsv",
     },
 }
 for name in ["ner", "pos"]:
@@ -473,8 +472,8 @@ Portuguese. BLEU-4 score should be used as the metric.
         )
 
     def _split_generators(self, dl_manager):
-        all_data_folder = dl_manager.download_and_extract(_XGLUE_ALL_DATA)
-        data_folder = os.path.join(all_data_folder, "xglue_full_dataset", self.config.data_dir)
+        archive = dl_manager.download(_XGLUE_ALL_DATA)
+        data_folder = f"xglue_full_dataset/{self.config.data_dir}"
         name = self.config.name
 
         languages = _LANGUAGES[name]
@@ -482,14 +481,19 @@ Portuguese. BLEU-4 score should be used as the metric.
             [
                 datasets.SplitGenerator(
                     name=datasets.Split.TRAIN,
-                    gen_kwargs={"data_file": os.path.join(data_folder, _PATHS[name]["train"]), "split": "train"},
+                    gen_kwargs={
+                        "archive": dl_manager.iter_archive(archive),
+                        "data_path": f"{data_folder}/{_PATHS[name]['train']}",
+                        "split": "train",
+                    },
                 ),
             ]
             + [
                 datasets.SplitGenerator(
                     name=datasets.Split(f"validation.{lang}"),
                     gen_kwargs={
-                        "data_file": os.path.join(data_folder, _PATHS[name]["dev"].format(lang)),
+                        "archive": dl_manager.iter_archive(archive),
+                        "data_path": f"{data_folder}/{_PATHS[name]['dev'].format(lang)}",
                         "split": "dev",
                     },
                 )
@@ -499,7 +503,8 @@ Portuguese. BLEU-4 score should be used as the metric.
                 datasets.SplitGenerator(
                     name=datasets.Split(f"test.{lang}"),
                     gen_kwargs={
-                        "data_file": os.path.join(data_folder, _PATHS[name]["test"].format(lang)),
+                        "archive": dl_manager.iter_archive(archive),
+                        "data_path": f"{data_folder}/{_PATHS[name]['test'].format(lang)}",
                         "split": "test",
                     },
                 )
@@ -507,68 +512,73 @@ Portuguese. BLEU-4 score should be used as the metric.
             ]
         )
 
-    def _generate_examples(self, data_file, split=None):
+    def _generate_examples(self, archive, data_path, split=None):
         keys = list(self._info().features.keys())
-
-        if self.config.name == "mlqa":
-            with open(data_file, encoding="utf-8") as f:
-                data = json.load(f)
-            for examples in data["data"]:
-                for example in examples["paragraphs"]:
-                    context = example["context"]
-                    for qa in example["qas"]:
-                        question = qa["question"]
-                        id_ = qa["id"]
-                        answers = qa["answers"]
-                        answers_start = [answer["answer_start"] for answer in answers]
-                        answers_text = [answer["text"] for answer in answers]
-                        yield id_, {
-                            "context": context,
-                            "question": question,
-                            "answers": {"answer_start": answers_start, "text": answers_text},
-                        }
-        elif self.config.name in ["ner", "pos"]:
-            words = []
-            result = []
-            idx = -1
-            with open(data_file, encoding="utf-8") as f:
-                for line in f:
-                    if line.strip() == "":
-                        if len(words) > 0:
-                            out_dict = {keys[0]: words, keys[1]: result}
-                            words = []
-                            result = []
-                            idx += 1
-                            yield idx, out_dict
-                    else:
-                        splits = line.strip().split(" ")
-                        words.append(splits[0])
-                        result.append(splits[1])
-        elif self.config.name in ["ntg", "qg"]:
-            with open(data_file + ".src." + split, encoding="utf-8") as src_f, open(
-                data_file + ".tgt." + split, encoding="utf-8"
-            ) as tgt_f:
-                for idx, (src_line, tgt_line) in enumerate(zip(src_f, tgt_f)):
-                    yield idx, {keys[0]: src_line.strip(), keys[1]: tgt_line.strip()}
-        else:
-            _process_dict = {
-                "paws-x": {"0": "different", "1": "same"},
-                "xnli": {"contradictory": "contradiction"},
-                "qam": {"0": "False", "1": "True"},
-                "wpr": {"0": "Bad", "1": "Fair", "2": "Good", "3": "Excellent", "4": "Perfect"},
-            }
-
-            def _process(value):
-                if self.config.name in _process_dict and value in _process_dict[self.config.name]:
-                    return _process_dict[self.config.name][value]
-                return value
-
-            with open(data_file, encoding="utf-8") as f:
-                for idx, line in enumerate(f):
-                    if data_file.split(".")[-1] == "tsv" and idx == 0:
-                        continue
-                    items = line.strip().split("\t")
-                    yield idx, {
-                        key: _process(value)
-                        for key, value in zip(keys, items[1:] if self.config.name == "paws-x" else items)
-                    }
+        src_f = tgt_f = None
+        for path, file in archive:
+            if self.config.name == "mlqa":
+                if path == data_path:
+                    data = json.load(file)
+                    for examples in data["data"]:
+                        for example in examples["paragraphs"]:
+                            context = example["context"]
+                            for qa in example["qas"]:
+                                question = qa["question"]
+                                id_ = qa["id"]
+                                answers = qa["answers"]
+                                answers_start = [answer["answer_start"] for answer in answers]
+                                answers_text = [answer["text"] for answer in answers]
+                                yield id_, {
+                                    "context": context,
+                                    "question": question,
+                                    "answers": {"answer_start": answers_start, "text": answers_text},
+                                }
+            elif self.config.name in ["ner", "pos"]:
+                if path == data_path:
+                    words = []
+                    result = []
+                    idx = -1
+                    for line in file:
+                        line = line.decode("utf-8")
+                        if line.strip() == "":
+                            if len(words) > 0:
+                                out_dict = {keys[0]: words, keys[1]: result}
+                                words = []
+                                result = []
+                                idx += 1
+                                yield idx, out_dict
+                        else:
+                            splits = line.strip().split(" ")
+                            words.append(splits[0])
+                            result.append(splits[1])
+            elif self.config.name in ["ntg", "qg"]:
+                if path == data_path + ".src." + split:
+                    src_f = [line.decode("utf-8") for line in file]
+                elif path == data_path + ".tgt." + split:
+                    tgt_f = [line.decode("utf-8") for line in file]
+                if src_f and tgt_f:
+                    for idx, (src_line, tgt_line) in enumerate(zip(src_f, tgt_f)):
+                        yield idx, {keys[0]: src_line.strip(), keys[1]: tgt_line.strip()}
+            else:
+                _process_dict = {
+                    "paws-x": {"0": "different", "1": "same"},
+                    "xnli": {"contradictory": "contradiction"},
+                    "qam": {"0": "False", "1": "True"},
+                    "wpr": {"0": "Bad", "1": "Fair", "2": "Good", "3": "Excellent", "4": "Perfect"},
+                }
+
+                def _process(value):
+                    if self.config.name in _process_dict and value in _process_dict[self.config.name]:
+                        return _process_dict[self.config.name][value]
+                    return value
+
+                if path == data_path:
+                    for idx, line in enumerate(file):
+                        line = line.decode("utf-8")
+                        if data_path.split(".")[-1] == "tsv" and idx == 0:
+                            continue
+                        items = line.strip().split("\t")
+                        yield idx, {
+                            key: _process(value)
+                            for key, value in zip(keys, items[1:] if self.config.name == "paws-x" else items)
+                        }