---
license: cc-by-4.0
language:
- ar
- zh
- cs
- en
- fr
- el
- he
- hi
- ro
- es
task_categories:
- summarization
tags:
- multi-document
- single-document
- abstractive
- multilingual
configs:
- config_name: single
  description: Single-document summarisation pairs (article + summary).
- config_name: multi
  description: Multi-document summarisation clusters with 3 human summaries.
---
# MultiLing Multilingual Summarisation Corpus
The MultiLing Multilingual Summarisation Corpus is a multilingual benchmark for single-document and multi-document abstractive summarisation, originally created for the MultiLing 2011 pilot and the MultiLing 2013 shared task (run as a workshop at ACL 2013).
This release consolidates, cleans, and reformats the original resources into a standard, machine-readable dataset suitable for modern sequence-to-sequence and large language model research.
The corpus covers ten languages:
Arabic, Chinese, Czech, English, French, Greek, Hebrew, Hindi, Romanian, Spanish
and contains both:
- Single-document summarisation pairs (one source article with one gold summary)
- Multi-document summarisation clusters (10 related articles with 3 human-written abstractive summaries)
All texts originate from WikiNews and are distributed under a Creative Commons Attribution (CC BY) licence. See http://multiling.iit.demokritos.gr/ for the original project pages.
## Key Features

### Single-Document Summarisation
- The original collection spans 40 languages; the present release includes the ten main languages used for MultiLing 2013.
- For each language, every document has:
  - One source article
  - One human abstractive summary
- Fully parallel across languages.
### Multi-Document Summarisation
- Each topic consists of 10 articles describing the same event sequence.
- Topics appear consistently across all languages that contributed to that year's task.
- Each cluster includes three human-written abstractive summaries, produced independently.
- Human summaries were constrained to 240–250 words (or equivalent byte limits for Chinese).
### Parallel & Comparable Structure
The corpus was originally designed to allow:
- Cross-lingual and multilingual summarisation
- Comparative analyses of summarisation difficulty across languages
- Multilingual evaluation of automatic summarisation metrics (ROUGE, AutoSummENG-MeMoG, NPowER)
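
As a minimal sketch of metric-based evaluation (assuming the Hugging Face `evaluate` and `rouge_score` packages; AutoSummENG-MeMoG and NPowER are not covered here), ROUGE can be computed over model outputs like this:

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")

predictions = ["A joint climate agreement was signed at the summit."]
references = ["Leaders signed a joint agreement on climate policy at the summit."]

# Returns rouge1 / rouge2 / rougeL / rougeLsum F-scores.
# Note: the default tokenizer is English-oriented; for languages such as
# Chinese, language-appropriate tokenisation is needed for meaningful scores.
print(rouge.compute(predictions=predictions, references=references))
```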
## Source and Citation
This dataset is derived from the corpus described in:
Li, L., Forăscu, C., El-Haj, M., & Giannakopoulos, G. (2013).
Multi-Document Multilingual Summarization Corpus Preparation, Part 1: Arabic, English, Greek, Chinese, Romanian.
In Proceedings of the MultiLing 2013 Workshop on Multilingual Multi-Document Summarization, pp. 1–12.
ACL 2013, Sofia, Bulgaria.
PDF: https://aclanthology.org/W13-3101.pdf
Please cite the paper above when using this dataset.
## Dataset Structure

### Single-Document Format
Each sample includes:
- `language` – ISO language code / folder name (`ar`, `en`, `fr`, etc.)
- `doc_id` – document identifier
- `document_text` – the source article
- `summary` – the human abstractive summary
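
For orientation, a record in the `single` configuration might look like the following (all values here are hypothetical placeholders, not actual corpus content):

```python
# Hypothetical record shape for the "single" configuration
example = {
    "language": "en",
    "doc_id": "en-0042",  # placeholder identifier
    "document_text": "Full WikiNews article text ...",
    "summary": "Human-written abstractive summary ...",
}
```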
### Multi-Document Format
Each sample includes:
- `cluster_id` – e.g. `M000`, `M014`, `M103`
- `language`
- `doc_ids` – list of the ten document identifiers
- `documents_text` – the ten articles concatenated, each wrapped in `<DOC id=…>` tags
- `summary_1`, `summary_2`, `summary_3` – the three reference human summaries
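
If the individual articles are needed separately, a sketch along these lines can split `documents_text` back into per-document texts. The exact wrapper syntax (quoting of the id, presence of a closing `</DOC>` tag) is an assumption here; check it against the released files:

```python
import re

# Assumed wrapper format: <DOC id="...">...</DOC>; adjust the pattern
# if ids are unquoted or the closing tag differs in the actual data.
DOC_PATTERN = re.compile(r'<DOC id="?([^">]+)"?>(.*?)</DOC>', re.DOTALL)

def split_documents(documents_text: str) -> dict:
    """Split the concatenated multi-document field into {doc_id: article_text}."""
    return {doc_id: text.strip() for doc_id, text in DOC_PATTERN.findall(documents_text)}
```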
All files are provided in CSV and JSONL, with train/dev/test splits.
## Recommended Use Cases
- Multilingual abstractive summarisation
- Cross-lingual evaluation of LLMs
- Multi-document summarisation research
- Training summarisation models on parallel news texts
- Research on multilingual evaluation metrics
- Cross-lingual transfer learning
- Low-resource summarisation investigations
## Splits
The dataset is released with deterministic `train`, `validation`, and `test` splits for both the single-document and multi-document subsets.
For multi-document summarisation, splits are cluster-based to prevent data leakage.
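
The intuition can be sketched as a deterministic hash of the cluster ID, so every article and reference summary of a cluster lands in the same split. This is an illustrative sketch, not the exact procedure used to produce the released splits:

```python
import hashlib

def assign_split(cluster_id: str) -> str:
    """Deterministically map a whole cluster to one split.

    Keeping clusters intact prevents near-duplicate articles from the
    same topic leaking between train and test. Illustrative only.
    """
    bucket = int(hashlib.sha256(cluster_id.encode("utf-8")).hexdigest(), 16) % 10
    if bucket < 8:
        return "train"  # ~80% of clusters
    return "validation" if bucket == 8 else "test"  # ~10% each
```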
## Loading the Dataset
```python
from datasets import load_dataset

# Multi-document configuration
ds = load_dataset("YOUR_DATASET_NAME", "multi")
# or the single-document configuration
ds = load_dataset("YOUR_DATASET_NAME", "single")
```
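
Once loaded, splits and fields can be inspected directly. The field names below follow the structure section and assume the `multi` configuration:

```python
print(ds)  # DatasetDict with train / validation / test splits

sample = ds["train"][0]
print(sample["cluster_id"], sample["language"])
print(sample["summary_1"][:300])  # beginning of the first reference summary

# Restrict to one language, e.g. for monolingual experiments
en_only = ds.filter(lambda ex: ex["language"] == "en")
```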
## Licence
All texts originate from WikiNews under Creative Commons BY 2.5/3.0 licences. This consolidated dataset is released under CC-BY-4.0.
## Acknowledgements
MultiLing is the result of a large international community effort involving contributors from more than ten universities and research centres. This cleaned and repackaged release builds on that original work to make the corpus more accessible for modern NLP research.