Multilinguality: multilingual
Size Categories: 1M<n<10M
Language Creators: found
Annotations Creators: found
Source Datasets: original

Dataset Card for "XL-Sum"

Dataset Summary

We present XL-Sum, a comprehensive and diverse dataset comprising 1.35 million professionally annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics. The dataset covers 45 languages, ranging from low- to high-resource, for many of which no public dataset is currently available. XL-Sum is highly abstractive, concise, and of high quality, as indicated by human and intrinsic evaluation.

Supported Tasks and Leaderboards

More information needed

Languages

  • amharic
  • arabic
  • azerbaijani
  • bengali
  • burmese
  • chinese_simplified
  • chinese_traditional
  • english
  • french
  • gujarati
  • hausa
  • hindi
  • igbo
  • indonesian
  • japanese
  • kirundi
  • korean
  • kyrgyz
  • marathi
  • nepali
  • oromo
  • pashto
  • persian
  • pidgin
  • portuguese
  • punjabi
  • russian
  • scottish_gaelic
  • serbian_cyrillic
  • serbian_latin
  • sinhala
  • somali
  • spanish
  • swahili
  • tamil
  • telugu
  • thai
  • tigrinya
  • turkish
  • ukrainian
  • urdu
  • uzbek
  • vietnamese
  • welsh
  • yoruba
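
Each entry above doubles as a configuration name when loading the dataset with the Hugging Face `datasets` library. A minimal sketch (the `load_dataset` call in the comment needs a network connection, so only the name list is exercised here):

```python
# The 45 language configurations listed above. With the Hugging Face
# `datasets` library installed, any of them can be passed as the config name:
#   from datasets import load_dataset
#   ds = load_dataset("csebuetnlp/xlsum", "english")
XLSUM_LANGUAGES = [
    "amharic", "arabic", "azerbaijani", "bengali", "burmese",
    "chinese_simplified", "chinese_traditional", "english", "french",
    "gujarati", "hausa", "hindi", "igbo", "indonesian", "japanese",
    "kirundi", "korean", "kyrgyz", "marathi", "nepali", "oromo", "pashto",
    "persian", "pidgin", "portuguese", "punjabi", "russian",
    "scottish_gaelic", "serbian_cyrillic", "serbian_latin", "sinhala",
    "somali", "spanish", "swahili", "tamil", "telugu", "thai", "tigrinya",
    "turkish", "ukrainian", "urdu", "uzbek", "vietnamese", "welsh", "yoruba",
]

# One configuration per covered language.
assert len(XLSUM_LANGUAGES) == 45
```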

Dataset Structure

Data Instances

One example from the English dataset is given below in JSON format.

{
  "id": "technology-17657859",
  "url": "",
  "title": "Yahoo files e-book advert system patent applications",
  "summary": "Yahoo has signalled it is investigating e-book adverts as a way to stimulate its earnings.",
  "text": "Yahoo's patents suggest users could weigh the type of ads against the sizes of discount before purchase. It says in two US patent applications that ads for digital book readers have been \"less than optimal\" to date. The filings suggest that users could be offered titles at a variety of prices depending on the ads' prominence They add that the products shown could be determined by the type of book being read, or even the contents of a specific chapter, phrase or word. The paperwork was published by the US Patent and Trademark Office late last week and relates to work carried out at the firm's headquarters in Sunnyvale, California. \"Greater levels of advertising, which may be more valuable to an advertiser and potentially more distracting to an e-book reader, may warrant higher discounts,\" it states. Free books It suggests users could be offered ads as hyperlinks based within the book's text, in-laid text or even \"dynamic content\" such as video. Another idea suggests boxes at the bottom of a page could trail later chapters or quotes saying \"brought to you by Company A\". It adds that the more willing the customer is to see the ads, the greater the potential discount. \"Higher frequencies... may even be great enough to allow the e-book to be obtained for free,\" it states. The authors write that the type of ad could influence the value of the discount, with \"lower class advertising... such as teeth whitener advertisements\" offering a cheaper price than \"high\" or \"middle class\" adverts, for things like pizza. The inventors also suggest that ads could be linked to the mood or emotional state the reader is in as a they progress through a title. For example, they say if characters fall in love or show affection during a chapter, then ads for flowers or entertainment could be triggered. The patents also suggest this could applied to children's books - giving the Tom Hanks animated film Polar Express as an example. It says a scene showing a waiter giving the protagonists hot drinks \"may be an excellent opportunity to show an advertisement for hot cocoa, or a branded chocolate bar\". Another example states: \"If the setting includes young characters, a Coke advertisement could be provided, inviting the reader to enjoy a glass of Coke with his book, and providing a graphic of a cool glass.\" It adds that such targeting could be further enhanced by taking account of previous titles the owner has bought. 'Advertising-free zone' At present, several Amazon and Kobo e-book readers offer full-screen adverts when the device is switched off and show smaller ads on their menu screens, but the main text of the titles remains free of marketing. Yahoo does not currently provide ads to these devices, and a move into the area could boost its shrinking revenues. However, Philip Jones, deputy editor of the Bookseller magazine, said that the internet firm might struggle to get some of its ideas adopted. \"This has been mooted before and was fairly well decried,\" he said. \"Perhaps in a limited context it could work if the merchandise was strongly related to the title and was kept away from the text. \"But readers - particularly parents - like the fact that reading is an advertising-free zone. Authors would also want something to say about ads interrupting their narrative flow.\""
}
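
The underlying data files store one JSON object per line, so a single record can be decoded with the standard `json` module. A minimal sketch, using an abbreviated version of the example above (the real `text` field carries the full article):

```python
import json

# One record, abbreviated from the English example above.
line = (
    '{"id": "technology-17657859",'
    ' "url": "",'
    ' "title": "Yahoo files e-book advert system patent applications",'
    ' "summary": "Yahoo has signalled it is investigating e-book adverts'
    ' as a way to stimulate its earnings.",'
    ' "text": "..."}'
)

record = json.loads(line)

# Every record exposes exactly the five documented fields.
assert set(record) == {"id", "url", "title", "summary", "text"}
print(record["title"])
```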

Data Fields

  • 'id': A string representing the article ID.
  • 'url': A string representing the article URL.
  • 'title': A string containing the article title.
  • 'summary': A string containing the article summary.
  • 'text': A string containing the article text.

Data Splits

We used an 80%-10%-10% train-dev-test split for all languages, with a few exceptions. English was split 93%-3.5%-3.5% so that its evaluation set size resembles those of CNN/DailyMail and XSum; Scottish Gaelic, Kyrgyz, and Sinhala had relatively few samples, so their evaluation sets were enlarged to 500 samples each for more reliable evaluation. The same articles were used for evaluation in both variants of Chinese and of Serbian to prevent data leakage in multilingual training. Per-language train-dev-test example counts are given below:

| Language | ISO 639-1 Code | Train | Dev | Test | Total |
|---|---|---|---|---|---|
| Amharic | am | 5761 | 719 | 719 | 7199 |
| Arabic | ar | 37519 | 4689 | 4689 | 46897 |
| Azerbaijani | az | 6478 | 809 | 809 | 8096 |
| Bengali | bn | 8102 | 1012 | 1012 | 10126 |
| Burmese | my | 4569 | 570 | 570 | 5709 |
| Chinese (Simplified) | zh-CN | 37362 | 4670 | 4670 | 46702 |
| Chinese (Traditional) | zh-TW | 37373 | 4670 | 4670 | 46713 |
| English* | en | 306522 | 11535 | 11535 | 329592 |
| French | fr | 8697 | 1086 | 1086 | 10869 |
| Gujarati | gu | 9119 | 1139 | 1139 | 11397 |
| Hausa | ha | 6418 | 802 | 802 | 8022 |
| Hindi | hi | 70778 | 8847 | 8847 | 88472 |
| Igbo | ig | 4183 | 522 | 522 | 5227 |
| Indonesian | id | 38242 | 4780 | 4780 | 47802 |
| Japanese | ja | 7113 | 889 | 889 | 8891 |
| Kirundi | rn | 5746 | 718 | 718 | 7182 |
| Korean | ko | 4407 | 550 | 550 | 5507 |
| Kyrgyz | ky | 2266 | 500 | 500 | 3266 |
| Marathi | mr | 10903 | 1362 | 1362 | 13627 |
| Nepali | np | 5808 | 725 | 725 | 7258 |
| Oromo | om | 6063 | 757 | 757 | 7577 |
| Pashto | ps | 14353 | 1794 | 1794 | 17941 |
| Persian | fa | 47251 | 5906 | 5906 | 59063 |
| Pidgin** | n/a | 9208 | 1151 | 1151 | 11510 |
| Portuguese | pt | 57402 | 7175 | 7175 | 71752 |
| Punjabi | pa | 8215 | 1026 | 1026 | 10267 |
| Russian* | ru | 62243 | 7780 | 7780 | 77803 |
| Scottish Gaelic | gd | 1313 | 500 | 500 | 2313 |
| Serbian (Cyrillic) | sr | 7275 | 909 | 909 | 9093 |
| Serbian (Latin) | sr | 7276 | 909 | 909 | 9094 |
| Sinhala | si | 3249 | 500 | 500 | 4249 |
| Somali | so | 5962 | 745 | 745 | 7452 |
| Spanish | es | 38110 | 4763 | 4763 | 47636 |
| Swahili | sw | 7898 | 987 | 987 | 9872 |
| Tamil | ta | 16222 | 2027 | 2027 | 20276 |
| Telugu | te | 10421 | 1302 | 1302 | 13025 |
| Thai | th | 6616 | 826 | 826 | 8268 |
| Tigrinya | ti | 5451 | 681 | 681 | 6813 |
| Turkish | tr | 27176 | 3397 | 3397 | 33970 |
| Ukrainian | uk | 43201 | 5399 | 5399 | 53999 |
| Urdu | ur | 67665 | 8458 | 8458 | 84581 |
| Uzbek | uz | 4728 | 590 | 590 | 5908 |
| Vietnamese | vi | 32111 | 4013 | 4013 | 40137 |
| Welsh | cy | 9732 | 1216 | 1216 | 12164 |
| Yoruba | yo | 6350 | 793 | 793 | 7936 |

* Many articles on BBC Sinhala and BBC Ukrainian were written in English and Russian respectively. They were identified using fastText and moved to the corresponding datasets.

** West African Pidgin English
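
The split rule described above can be sketched as follows. This is a reconstruction consistent with the counts in the table (10% each for dev and test, floored, raised to a 500-example minimum for the low-resource languages), not the authors' published code; English, with its 93%-3.5%-3.5% split, is the one exception it does not cover:

```python
def split_sizes(total: int, min_eval: int = 500) -> tuple[int, int, int]:
    """Return (train, dev, test) counts for a language with `total` articles."""
    # Dev and test each take ~10% (floored), but never fewer than `min_eval`.
    eval_size = max(total // 10, min_eval)
    # Train takes whatever remains.
    return total - 2 * eval_size, eval_size, eval_size

# Figures match the table above.
assert split_sizes(7199) == (5761, 719, 719)    # Amharic
assert split_sizes(3266) == (2266, 500, 500)    # Kyrgyz
assert split_sizes(2313) == (1313, 500, 500)    # Scottish Gaelic
```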

Dataset Creation

Curation Rationale

More information needed

Source Data

BBC News

Initial Data Collection and Normalization

Detailed in the paper

Who are the source language producers?

Detailed in the paper

Annotations

Annotation process

Detailed in the paper

Who are the annotators?

Detailed in the paper

Personal and Sensitive Information

More information needed

Considerations for Using the Data

Social Impact of Dataset

More information needed

Discussion of Biases

More information needed

Other Known Limitations

More information needed

Additional Information

Dataset Curators

More information needed

Licensing Information

The contents of this repository are restricted to non-commercial research purposes only, under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders.

Citation Information

If you use any of the datasets, models or code modules, please cite the following paper:

@inproceedings{hasan-etal-2021-xl,
    title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
    author = "Hasan, Tahmid  and
      Bhattacharjee, Abhik  and
      Islam, Md. Saiful  and
      Mubasshir, Kazi  and
      Li, Yuan-Fang  and
      Kang, Yong-Bin  and
      Rahman, M. Sohel  and
      Shahriyar, Rifat",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "",
    pages = "4693--4703",
}

Thanks to @abhik1505040 and @Tahmid for adding this dataset.
