
I generated the dataset following mewsli-x.md#getting-started and converted it into several parts (see process.py; a sketch of the conversion follows this list):

  • wikinews_mentions dev and test splits for ar/de/en/es/fa/ja/pl/ro/ta/tr/uk (from wikinews_mentions-dev.jsonl and wikinews_mentions-test.jsonl)
  • candidate entities covering 50 languages (from candidate_set_entities.jsonl)
  • English wikipedia_pairs for fine-tuning models (from wikipedia_pairs-dev.jsonl and wikipedia_pairs-train.jsonl)
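
For reference, here is a minimal sketch of what such a conversion can look like. It assumes JSON Lines input, pandas with pyarrow for the Parquet output, and a per-record "language" field on the mention files; the actual process.py may differ.

```python
import json
from pathlib import Path

import pandas as pd


def read_jsonl(path):
    """Load a JSON Lines file into a DataFrame, one record per line."""
    with open(path, encoding="utf-8") as f:
        return pd.DataFrame([json.loads(line) for line in f])


def write_parquet(df, path):
    """Write a DataFrame to Parquet, creating parent directories first."""
    path = Path(path)
    path.parent.mkdir(parents=True, exist_ok=True)
    df.to_parquet(path, index=False)


# Mention splits, partitioned by language so each part stays small.
# The "language" field name is an assumption, not taken from process.py.
for split in ("dev", "test"):
    mentions = read_jsonl(f"wikinews_mentions-{split}.jsonl")
    for lang, part in mentions.groupby("language"):
        write_parquet(part, f"wikinews_mentions/{lang}/{split}.parquet")

# Candidate entities and the English fine-tuning pairs are converted whole.
write_parquet(read_jsonl("candidate_set_entities.jsonl"),
              "candidate_set_entities.parquet")
for split in ("dev", "train"):
    write_parquet(read_jsonl(f"wikipedia_pairs-{split}.jsonl"),
                  f"wikipedia_pairs/{split}.parquet")
```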

Raw data files are in raw.tar.gz, which contains:

[...] 535M Feb 24 22:06 candidate_set_entities.jsonl
[...] 9.8M Feb 24 22:06 wikinews_mentions-dev.jsonl
[...]  35M Feb 24 22:06 wikinews_mentions-test.jsonl
[...]  24M Feb 24 22:06 wikipedia_pairs-dev.jsonl
[...] 283M Feb 24 22:06 wikipedia_pairs-train.jsonl
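
To work with the archive directly, something like the following unpacks it and sanity-checks the record counts (the file layout is assumed from the listing above):

```python
import tarfile
from pathlib import Path

# Unpack the archive, then count lines (= records) per JSONL file.
with tarfile.open("raw.tar.gz", "r:gz") as tar:
    tar.extractall("raw")

for path in sorted(Path("raw").rglob("*.jsonl")):
    with open(path, encoding="utf-8") as f:
        print(f"{path.name}: {sum(1 for _ in f)} records")
```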

The section below is reproduced from the original readme.

Mewsli-X

Mewsli-X is a multilingual dataset of entity mentions in WikiNews and Wikipedia articles, automatically linked to WikiData entries.

The primary use case is to evaluate transfer learning in the zero-shot cross-lingual setting of the XTREME-R benchmark suite:

  1. Fine-tune a pretrained model on English Wikipedia examples;
  2. Evaluate on WikiNews in other languages: given an entity mention in a WikiNews article, retrieve the correct entity from the predefined candidate set by means of its textual description (sketched below).

Mewsli-X constitutes a doubly zero-shot task by construction: at test time, a model has to contend with different languages and a different set of entities from those observed during fine-tuning.
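
The retrieval step reduces to nearest-neighbour search over encoded descriptions. A minimal sketch, assuming a fine-tuned dual encoder has already produced one vector per mention context and per candidate description (the encoder itself is not part of this card); XTREME-R scores this with mean reciprocal rank over the truncated ranking:

```python
import numpy as np


def retrieve(mention_vec, entity_vecs, k=20):
    """Rank all candidates by cosine similarity; return top-k indices."""
    m = mention_vec / np.linalg.norm(mention_vec)
    e = entity_vecs / np.linalg.norm(entity_vecs, axis=1, keepdims=True)
    return np.argsort(-(e @ m))[:k]  # k=20 follows the XTREME-R setup


def reciprocal_rank(gold_idx, ranked):
    """1/rank of the gold entity if retrieved, else 0; average for MRR."""
    hits = np.flatnonzero(ranked == gold_idx)
    return 1.0 / (hits[0] + 1) if hits.size else 0.0
```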

👉 For data examples and other editions of Mewsli, see README.md.

👉 Consider submitting to the XTREME-R leaderboard. The XTREME-R repository includes code for getting started with training and evaluating a baseline model in PyTorch.

👉 Please cite this paper if you use the data/code in your work: XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation (Ruder et al., 2021).

NOTE: New evaluation results on Mewsli-X are not directly comparable to those reported in the paper because the dataset required further updates, as detailed below. This does not affect the overall findings of the paper.

@inproceedings{ruder-etal-2021-xtreme,
    title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation",
    author = "Ruder, Sebastian  and
      Constant, Noah  and
      Botha, Jan  and
      Siddhant, Aditya  and
      Firat, Orhan  and
      Fu, Jinlan  and
      Liu, Pengfei  and
      Hu, Junjie  and
      Garrette, Dan  and
      Neubig, Graham  and
      Johnson, Melvin",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.802",
    doi = "10.18653/v1/2021.emnlp-main.802",
    pages = "10215--10245",
}