Sebastian Gehrmann committed on
Commit 6ca4f82
1 Parent(s): 858bb92

data card.

Files changed (1)
  1. README.md +492 -185

README.md CHANGED
@@ -1,119 +1,232 @@
  ---
- task_categories:
- - conditional-text-generation
- task_ids:
- - summarization
  languages:
- - am
- - ar
- - az
- - bn
- - my
- - zh
- - en
- - fr
- - gu
- - ha
- - hi
- - ig
- - id
- - ja
- - rn
- - ko
- - ky
- - mr
- - ne
- - om
- - ps
- - fa
- - pcm
- - pt
- - pa
- - ru
- - gd
- - sr
- - si
- - so
- - es
- - sw
- - ta
- - te
- - th
- - ti
- - tr
- - uk
- - ur
- - uz
- - vi
- - cy
- - yo
- size_categories:
- - 1M<n<10M
  licenses:
  - cc-by-nc-sa-4.0
  multilinguality:
- - multilingual
  source_datasets:
  - original
- paperswithcode_id: xl-sum
- annotations_creators:
- - found
- language_creators:
- - found
- pretty_name: XL-Sum
  ---

- # Dataset Card for "XL-Sum"
-
- ## Table of Contents
- - [Dataset Card Creation Guide](#dataset-card-creation-guide)
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- - [Who are the source language producers?](#who-are-the-source-language-producers)
- - [Annotations](#annotations)
- - [Annotation process](#annotation-process)
- - [Who are the annotators?](#who-are-the-annotators)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)

  ## Dataset Description

- - **Repository:** [https://github.com/csebuetnlp/xl-sum](https://github.com/csebuetnlp/xl-sum)
- - **Paper:** [XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages](https://aclanthology.org/2021.findings-acl.413/)
- - **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)

- ### Dataset Summary

- We present XLSum, a comprehensive and diverse dataset comprising 1.35 million professionally annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics. The dataset covers 45 languages ranging from low to high-resource, for many of which no public dataset is currently available. XL-Sum is highly abstractive, concise, and of high quality, as indicated by human and intrinsic evaluation.

- ### Supported Tasks and Leaderboards

- **Tasks:** Summarization

- **Leaderboards:** [ExplainaBoard](http://explainaboard.nlpedia.ai/leaderboard/task_xlsum/)

- ### Languages

  - `amharic`
  - `arabic`
  - `azerbaijani`
@@ -160,45 +273,10 @@ We present XLSum, a comprehensive and diverse dataset comprising 1.35 million pr
  - `welsh`
  - `yoruba`

- ## Dataset Structure
-
- ### Data Instances
-
- One example from the `English` dataset is given below in JSON format.
- ```
- {
- "gem_id": "GEM-xlsum_english-train-1589",
- "url": "https://www.bbc.com/news/technology-17657859",
- "title": "Yahoo files e-book advert system patent applications",
- "summary": "Yahoo has signalled it is investigating e-book adverts as a way to stimulate its earnings.",
- "text": "Yahoo's patents suggest users could weigh the type of ads against the sizes of discount before purchase. It says in two US patent applications that ads for digital book readers have been \"less than optimal\" to date. The filings suggest that users could be offered titles at a variety of prices depending on the ads' prominence They add that the products shown could be determined by the type of book being read, or even the contents of a specific chapter, phrase or word. The paperwork was published by the US Patent and Trademark Office late last week and relates to work carried out at the firm's headquarters in Sunnyvale, California. \"Greater levels of advertising, which may be more valuable to an advertiser and potentially more distracting to an e-book reader, may warrant higher discounts,\" it states. Free books It suggests users could be offered ads as hyperlinks based within the book's text, in-laid text or even \"dynamic content\" such as video. Another idea suggests boxes at the bottom of a page could trail later chapters or quotes saying \"brought to you by Company A\". It adds that the more willing the customer is to see the ads, the greater the potential discount. \"Higher frequencies... may even be great enough to allow the e-book to be obtained for free,\" it states. The authors write that the type of ad could influence the value of the discount, with \"lower class advertising... such as teeth whitener advertisements\" offering a cheaper price than \"high\" or \"middle class\" adverts, for things like pizza. The inventors also suggest that ads could be linked to the mood or emotional state the reader is in as a they progress through a title. For example, they say if characters fall in love or show affection during a chapter, then ads for flowers or entertainment could be triggered. The patents also suggest this could applied to children's books - giving the Tom Hanks animated film Polar Express as an example. 
It says a scene showing a waiter giving the protagonists hot drinks \"may be an excellent opportunity to show an advertisement for hot cocoa, or a branded chocolate bar\". Another example states: \"If the setting includes young characters, a Coke advertisement could be provided, inviting the reader to enjoy a glass of Coke with his book, and providing a graphic of a cool glass.\" It adds that such targeting could be further enhanced by taking account of previous titles the owner has bought. 'Advertising-free zone' At present, several Amazon and Kobo e-book readers offer full-screen adverts when the device is switched off and show smaller ads on their menu screens, but the main text of the titles remains free of marketing. Yahoo does not currently provide ads to these devices, and a move into the area could boost its shrinking revenues. However, Philip Jones, deputy editor of the Bookseller magazine, said that the internet firm might struggle to get some of its ideas adopted. \"This has been mooted before and was fairly well decried,\" he said. \"Perhaps in a limited context it could work if the merchandise was strongly related to the title and was kept away from the text. \"But readers - particularly parents - like the fact that reading is an advertising-free zone. Authors would also want something to say about ads interrupting their narrative flow.\""
- }
- ```
- The `text` field maintains newlines and paragraph boundaries; these extra whitespaces can be collapsed with the following function:
- ```
- import re
- WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))
- ```
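For illustration, the same collapsing logic can be written as a small self-contained function with an example (an editor's sketch for this card, not part of the XL-Sum codebase):

```python
import re

# Equivalent to the WHITESPACE_HANDLER lambda above: fold newline runs,
# then any remaining whitespace runs, into single spaces.
def collapse_whitespace(text):
    return re.sub(r"\s+", " ", re.sub(r"\n+", " ", text.strip()))

print(collapse_whitespace("First paragraph.\n\nSecond   paragraph."))
# First paragraph. Second paragraph.
```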
-
- When downloading the dataset, the intended language name is required. For instance:
-
- ```
- from datasets import load_dataset
- ds = load_dataset("GEM/xlsum", "english")
- ```
-
-
- ### Data Fields
- - `gem_id`: A string representing the article ID.
- - `url`: A string representing the article URL.
- - `title`: A string containing the article title.
- - `summary`: A string containing the article summary.
- - `text`: A string containing the article text.
-
-
- ### Data Splits

  We used an 80%-10%-10% split for all languages, with a few exceptions. `English` was split 93%-3.5%-3.5% so that its evaluation set size resembles those of `CNN/DM` and `XSum`; `Scottish Gaelic`, `Kyrgyz` and `Sinhala` had relatively few samples, so their evaluation sets were increased to 500 samples each for more reliable evaluation. The same articles were used for evaluation in the two variants of Chinese and Serbian to prevent data leakage in multilingual training. Individual dataset download links with train-dev-test example counts are given below:

  Language | ISO 639-1 Code | BBC subdomain(s) | Train | Dev | Test | Total |
@@ -250,23 +328,175 @@ Welsh | cy | https://www.bbc.com/cymrufyw | 9732 | 1216 | 1216 | 12164 |
  Yoruba | yo | https://www.bbc.com/yoruba | 6350 | 793 | 793 | 7936 |

  `*` Many articles in BBC Sinhala and BBC Ukrainian were written in English and Russian respectively. They were identified using [Fasttext](https://arxiv.org/abs/1607.01759) and moved accordingly.
-
  `**` West African Pidgin English

- ## Dataset Creation

- ### Curation Rationale

  State-of-the-art text summarization models are heavily data-driven, i.e., a large number of article-summary pairs are required to train them effectively. As a result, abstractive summarization has centered around the English language, as most large abstractive summarization datasets are available in English only. Though there have been some recent efforts for curating multilingual abstractive summarization datasets, they are limited in terms of the number of languages covered, the number of training samples, or both. To this end, we curate **XL-Sum**, a large-scale abstractive summarization dataset of 1.35 million news articles from 45 languages crawled from the British Broadcasting Corporation website.

- ### Source Data

- [BBC News](https://www.bbc.co.uk/ws/languages)

- #### Initial Data Collection and Normalization

- We designed a crawler that recursively crawls pages starting from the homepage, following the article links present on each page it visits. Since all BBC sites have somewhat similar structures, we were able to scrape articles from all of them. We discarded pages with no textual content (mostly pages consisting of multimedia content) before further processing. By carefully examining the HTML structures of the crawled pages, we designed a number of heuristics to make the extraction effective:

  1. The desired summary must be present within the first two paragraphs of an article.
  2. The summary paragraph must have some portion of its text in bold format.
  3. The summary paragraph may contain some hyperlinks that may not be bold. The proportion of bold and hyperlinked text to the total length of the paragraph under consideration must be at least 95%.
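To make the 95% threshold in heuristic 3 concrete, the bold/hyperlink proportion of a paragraph could be computed along these lines with Python's standard-library HTML parser (an editor's hypothetical sketch; the actual XL-Sum extraction code may differ):

```python
from html.parser import HTMLParser

class BoldLinkRatio(HTMLParser):
    """Tracks what fraction of a paragraph's text sits inside <b>/<strong>/<a> tags."""
    def __init__(self):
        super().__init__()
        self.depth = 0    # current nesting inside bold/hyperlink tags
        self.marked = 0   # characters inside bold/hyperlink spans
        self.total = 0    # all text characters

    def handle_starttag(self, tag, attrs):
        if tag in ("b", "strong", "a"):
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in ("b", "strong", "a") and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        self.total += len(data)
        if self.depth:
            self.marked += len(data)

def bold_link_ratio(paragraph_html):
    parser = BoldLinkRatio()
    parser.feed(paragraph_html)
    return parser.marked / parser.total if parser.total else 0.0

# Heuristic 3: accept the paragraph as a summary candidate only if at least
# 95% of its text is bold or hyperlinked.
print(bold_link_ratio('<b>A fully bold summary.</b>') >= 0.95)                     # True
print(bold_link_ratio('<b>Bold lead</b> followed by plain body text.') >= 0.95)   # False
```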
@@ -275,74 +505,151 @@ We designed a crawler to recursively crawl pages starting from the homepage by v
-
- #### Who are the source language producers?

- [BBC News Editorial Team](https://www.bbc.co.uk/ws/languages)

- ### Annotations

- #### Annotation process

- BBC typically provides a summary of a whole article in the form of a bold paragraph containing one or two sentences at the beginning of the article. These summaries are written professionally by the authors of the articles to convey the main story within one small paragraph. This is in contrast to the headline, which serves to draw readers' attention to the article. We used the bold text as the summary and the rest of the article as the input.

- #### Who are the annotators?

- [BBC News Editorial Team](https://www.bbc.co.uk/ws/languages)

- ### Personal and Sensitive Information

- Meta-information such as author names is discarded. However, we cannot guarantee removal of all personal information.

- ## Considerations for Using the Data

- ### Social Impact of Dataset

- We believe that our efforts in this work will encourage the community to push the boundaries of abstractive text summarization beyond the English language, especially for low- and mid-resource languages, bringing technological advances to communities of these languages that have been traditionally under-served.

  ### Discussion of Biases

- Human evaluation showed that most languages had a high percentage of good summaries (in the upper nineties), and almost none of the summaries contained conflicting information, while about one-third on average contained information that was not directly inferable from the source article.

- ### Other Known Limitations

- The dataset is limited to the news domain only.

- ## Additional Information

- ### Dataset Curators

- [Authors of this paper](https://aclanthology.org/2021.findings-acl.413)

- ### Licensing Information

- Contents of this repository are restricted to non-commercial research purposes only, under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.

- ### Citation Information

- If you use any of the datasets, models or code modules, please cite the following paper:
- ```
- @inproceedings{hasan-etal-2021-xl,
-     title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
-     author = "Hasan, Tahmid and
-       Bhattacharjee, Abhik and
-       Islam, Md. Saiful and
-       Mubasshir, Kazi and
-       Li, Yuan-Fang and
-       Kang, Yong-Bin and
-       Rahman, M. Sohel and
-       Shahriyar, Rifat",
-     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.findings-acl.413",
-     pages = "4693--4703",
- }
- ```

- ### Contributions

- Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
  ---
+ annotations_creators:
+ - none
+ language_creators:
+ - unknown
  languages:
+ - unknown
  licenses:
  - cc-by-nc-sa-4.0
  multilinguality:
+ - unknown
+ pretty_name: xlsum
+ size_categories:
+ - unknown
  source_datasets:
  - original
+ task_categories:
+ - summarization
+ task_ids:
+ - unknown
  ---

+ # Dataset Card for GEM/xlsum

  ## Dataset Description

+ - **Homepage:** https://github.com/csebuetnlp/xl-sum
+ - **Repository:** https://huggingface.co/datasets/csebuetnlp/xlsum/tree/main/data
+ - **Paper:** https://aclanthology.org/2021.findings-acl.413/
+ - **Leaderboard:** http://explainaboard.nlpedia.ai/leaderboard/task_xlsum/
+ - **Point of Contact:** Tahmid Hasan
+
+ ### Link to Main Data Card
+
+ You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xlsum).
+
+ ### Dataset Summary
+
+ XLSum is a highly multilingual summarization dataset supporting 44 languages. The data stems from BBC news articles.
+
+ You can load the dataset via:
+ ```
+ import datasets
+ data = datasets.load_dataset('GEM/xlsum')
+ ```
+ The data loader can be found [here](https://huggingface.co/datasets/GEM/xlsum).
+
+ #### website
+ [Github](https://github.com/csebuetnlp/xl-sum)
+
+ #### paper
+ [ACL Anthology](https://aclanthology.org/2021.findings-acl.413/)
+
+ ## Dataset Overview
+
+ ### Where to find the Data and its Documentation
+
+ #### Webpage
+
+ <!-- info: What is the webpage for the dataset (if it exists)? -->
+ <!-- scope: telescope -->
+ [Github](https://github.com/csebuetnlp/xl-sum)
+
+ #### Download
+
+ <!-- info: What is the link to where the original dataset is hosted? -->
+ <!-- scope: telescope -->
+ [Huggingface](https://huggingface.co/datasets/csebuetnlp/xlsum/tree/main/data)
+
+ #### Paper
+
+ <!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
+ <!-- scope: telescope -->
+ [ACL Anthology](https://aclanthology.org/2021.findings-acl.413/)
+
+ #### BibTex
+
+ <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
+ <!-- scope: microscope -->
+ ```
+ @inproceedings{hasan-etal-2021-xl,
+     title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
+     author = "Hasan, Tahmid and
+       Bhattacharjee, Abhik and
+       Islam, Md. Saiful and
+       Mubasshir, Kazi and
+       Li, Yuan-Fang and
+       Kang, Yong-Bin and
+       Rahman, M. Sohel and
+       Shahriyar, Rifat",
+     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
+     month = aug,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.findings-acl.413",
+     pages = "4693--4703",
+ }
+ ```
+
+ #### Contact Name
+
+ <!-- quick -->
+ <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
+ <!-- scope: periscope -->
+ Tahmid Hasan
+
+ #### Contact Email
+
+ <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
+ <!-- scope: periscope -->
+ tahmidhasan@cse.buet.ac.bd
+
+ #### Has a Leaderboard?
+
+ <!-- info: Does the dataset have an active leaderboard? -->
+ <!-- scope: telescope -->
+ yes
+
+ #### Leaderboard Link
+
+ <!-- info: Provide a link to the leaderboard. -->
+ <!-- scope: periscope -->
+ [Explainaboard](http://explainaboard.nlpedia.ai/leaderboard/task_xlsum/)
+
+ #### Leaderboard Details
+
+ <!-- info: Briefly describe how the leaderboard evaluates models. -->
+ <!-- scope: microscope -->
+ The leaderboard ranks models based on ROUGE scores (R1/R2/RL) of the generated summaries.
+
+ ### Languages and Intended Use
+
+ #### Multilingual?
+
+ <!-- quick -->
+ <!-- info: Is the dataset multilingual? -->
+ <!-- scope: telescope -->
+ yes
+
+ #### Covered Languages
+
+ <!-- quick -->
+ <!-- info: What languages/dialects are covered in the dataset? -->
+ <!-- scope: telescope -->
+ `Amharic`, `Arabic`, `Azerbaijani`, `Bengali, Bangla`, `Burmese`, `Chinese (family)`, `English`, `French`, `Gujarati`, `Hausa`, `Hindi`, `Igbo`, `Indonesian`, `Japanese`, `Rundi`, `Korean`, `Kirghiz, Kyrgyz`, `Marathi`, `Nepali (individual language)`, `Oromo`, `Pushto, Pashto`, `Persian`, `Ghanaian Pidgin English`, `Portuguese`, `Panjabi, Punjabi`, `Russian`, `Scottish Gaelic, Gaelic`, `Serbian`, `Romano-Serbian`, `Sinhala, Sinhalese`, `Somali`, `Spanish, Castilian`, `Swahili (individual language), Kiswahili`, `Tamil`, `Telugu`, `Thai`, `Tigrinya`, `Turkish`, `Ukrainian`, `Urdu`, `Uzbek`, `Vietnamese`, `Welsh`, `Yoruba`
+
+ #### License
+
+ <!-- quick -->
+ <!-- info: What is the license of the dataset? -->
+ <!-- scope: telescope -->
+ cc-by-nc-sa-4.0: Creative Commons Attribution Non Commercial Share Alike 4.0 International
+
+ #### Intended Use
+
+ <!-- info: What is the intended use of the dataset? -->
+ <!-- scope: microscope -->
+ Abstractive summarization has centered around the English language, as most large abstractive summarization datasets are available in English only. Though there have been some recent efforts for curating multilingual abstractive summarization datasets, they are limited in terms of the number of languages covered, the number of training samples, or both. To this end, **XL-Sum** presents a large-scale abstractive summarization dataset of 1.35 million news articles from 45 languages crawled from the British Broadcasting Corporation website. It is intended to be used for both multilingual and per-language summarization tasks.
+
+ #### Primary Task
+
+ <!-- info: What primary task does the dataset support? -->
+ <!-- scope: telescope -->
+ Summarization
+
+ #### Communicative Goal
+
+ <!-- quick -->
+ <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
+ <!-- scope: periscope -->
+ Summarize news-like text in one of 45 languages.
+
+
+ ### Credit
+
+ #### Curation Organization Type(s)
+
+ <!-- info: In what kind of organization did the dataset curation happen? -->
+ <!-- scope: telescope -->
+ `academic`
+
+ #### Curation Organization(s)
+
+ <!-- info: Name the organization(s). -->
+ <!-- scope: periscope -->
+ Bangladesh University of Engineering and Technology
+
+ #### Who added the Dataset to GEM?
+
+ <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
+ <!-- scope: microscope -->
+ Tahmid Hasan (Bangladesh University of Engineering and Technology), Abhik Bhattacharjee (Bangladesh University of Engineering and Technology)
+
+
+ ### Dataset Structure
+
+ #### Data Fields
+
+ <!-- info: List and describe the fields present in the dataset. -->
+ <!-- scope: telescope -->
+ - `gem_id`: A string representing the article ID.
+ - `url`: A string representing the article URL.
+ - `title`: A string containing the article title.
+ - `summary`: A string containing the article summary.
+ - `text`: A string containing the article text.
+
+
+ #### Example Instance
+
+ <!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
+ <!-- scope: periscope -->
+ ```
+ {
+ "gem_id": "GEM-xlsum_english-train-1589",
+ "url": "https://www.bbc.com/news/technology-17657859",
+ "title": "Yahoo files e-book advert system patent applications",
+ "summary": "Yahoo has signalled it is investigating e-book adverts as a way to stimulate its earnings.",
+ "text": "Yahoo's patents suggest users could weigh the type of ads against the sizes of discount before purchase. It says in two US patent applications that ads for digital book readers have been \"less than optimal\" to date. The filings suggest that users could be offered titles at a variety of prices depending on the ads' prominence They add that the products shown could be determined by the type of book being read, or even the contents of a specific chapter, phrase or word. The paperwork was published by the US Patent and Trademark Office late last week and relates to work carried out at the firm's headquarters in Sunnyvale, California. \"Greater levels of advertising, which may be more valuable to an advertiser and potentially more distracting to an e-book reader, may warrant higher discounts,\" it states. Free books It suggests users could be offered ads as hyperlinks based within the book's text, in-laid text or even \"dynamic content\" such as video. Another idea suggests boxes at the bottom of a page could trail later chapters or quotes saying \"brought to you by Company A\". It adds that the more willing the customer is to see the ads, the greater the potential discount. \"Higher frequencies... may even be great enough to allow the e-book to be obtained for free,\" it states. The authors write that the type of ad could influence the value of the discount, with \"lower class advertising... such as teeth whitener advertisements\" offering a cheaper price than \"high\" or \"middle class\" adverts, for things like pizza. The inventors also suggest that ads could be linked to the mood or emotional state the reader is in as a they progress through a title. For example, they say if characters fall in love or show affection during a chapter, then ads for flowers or entertainment could be triggered. The patents also suggest this could applied to children's books - giving the Tom Hanks animated film Polar Express as an example. 
It says a scene showing a waiter giving the protagonists hot drinks \"may be an excellent opportunity to show an advertisement for hot cocoa, or a branded chocolate bar\". Another example states: \"If the setting includes young characters, a Coke advertisement could be provided, inviting the reader to enjoy a glass of Coke with his book, and providing a graphic of a cool glass.\" It adds that such targeting could be further enhanced by taking account of previous titles the owner has bought. 'Advertising-free zone' At present, several Amazon and Kobo e-book readers offer full-screen adverts when the device is switched off and show smaller ads on their menu screens, but the main text of the titles remains free of marketing. Yahoo does not currently provide ads to these devices, and a move into the area could boost its shrinking revenues. However, Philip Jones, deputy editor of the Bookseller magazine, said that the internet firm might struggle to get some of its ideas adopted. \"This has been mooted before and was fairly well decried,\" he said. \"Perhaps in a limited context it could work if the merchandise was strongly related to the title and was kept away from the text. \"But readers - particularly parents - like the fact that reading is an advertising-free zone. Authors would also want something to say about ads interrupting their narrative flow.\""
+ }
+ ```
+
+ #### Data Splits
+
+ <!-- info: Describe and name the splits in the dataset if there are more than one. -->
+ <!-- scope: periscope -->
+ The splits in the dataset are specified by the language names, which are as follows:
  - `amharic`
  - `arabic`
  - `azerbaijani`

  - `welsh`
  - `yoruba`

+ #### Splitting Criteria
+
+ <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
+ <!-- scope: microscope -->
  We used an 80%-10%-10% split for all languages, with a few exceptions. `English` was split 93%-3.5%-3.5% so that its evaluation set size resembles those of `CNN/DM` and `XSum`; `Scottish Gaelic`, `Kyrgyz` and `Sinhala` had relatively few samples, so their evaluation sets were increased to 500 samples each for more reliable evaluation. The same articles were used for evaluation in the two variants of Chinese and Serbian to prevent data leakage in multilingual training. Individual dataset download links with train-dev-test example counts are given below:

  Language | ISO 639-1 Code | BBC subdomain(s) | Train | Dev | Test | Total |

  Yoruba | yo | https://www.bbc.com/yoruba | 6350 | 793 | 793 | 7936 |

  `*` Many articles in BBC Sinhala and BBC Ukrainian were written in English and Russian respectively. They were identified using [Fasttext](https://arxiv.org/abs/1607.01759) and moved accordingly.
  `**` West African Pidgin English
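The per-language counts in the table are consistent with flooring the 10% dev/test fractions and keeping the remainder for training; a small sketch (the exact rounding used by the authors is an assumption here, checked against the table rows above):

```python
def split_sizes(total, dev_frac=0.1, test_frac=0.1):
    # Floor the dev/test fractions; the remainder becomes the training set.
    dev = int(total * dev_frac)
    test = int(total * test_frac)
    return total - dev - test, dev, test

print(split_sizes(7936))   # Yoruba row: (6350, 793, 793)
print(split_sizes(12164))  # Welsh row:  (9732, 1216, 1216)
```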
332
 
 
333
 
 
334
 
335
+ ## Dataset in GEM
336
+
337
+ ### Rationale for Inclusion in GEM
338
+
339
+ #### Why is the Dataset in GEM?
340
+
341
+ <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
342
+ <!-- scope: microscope -->
343
+ Traditional abstractive text summarization has been centered around English and other high-resource languages. **XL-Sum** provides a large collection of high-quality article-summary pairs for 45 languages where the languages range from high-resource to extremely low-resource. This enables the research community to explore the summarization capabilities of different models for multiple languages and languages in isolation. We believe the addition of **XL-Sum** to GEM makes the domain of abstractive text summarization more diversified and inclusive to the research community. We hope our efforts in this work will encourage the community to push the boundaries of abstractive text summarization beyond the English language, especially for low and mid-resource languages, bringing technological advances to communities of these languages that have been traditionally under-served.
344
+
345
+
346
+ #### Similar Datasets
347
+
348
+ <!-- info: Do other datasets for the high level task exist? -->
349
+ <!-- scope: telescope -->
350
+ yes
351
+
352
+ #### Unique Language Coverage
353
+
354
+ <!-- info: Does this dataset cover other languages than other datasets for the same task? -->
355
+ <!-- scope: periscope -->
356
+ yes
357
+
358
+ #### Difference from other GEM datasets
359
+
360
+ <!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
361
+ <!-- scope: microscope -->
362
+ The summaries are highly concise and abstractive.
363
+
364
+ #### Ability that the Dataset measures
365
+
366
+ <!-- info: What aspect of model ability can be measured with this dataset? -->
367
+ <!-- scope: periscope -->
368
+ Conciseness, abstractiveness, and overall summarization capability.
369
+
370
+
371
+ ### GEM-Specific Curation
372
+
373
+ #### Modificatied for GEM?

<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no

#### Additional Splits?

<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no


### Getting Started with the Task


## Previous Results

### Previous Results

#### Measured Model Abilities

<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Conciseness, abstractiveness, and overall summarization capability.

#### Metrics

<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`

#### Proposed Evaluation

<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is the de facto evaluation metric for text summarization. However, it was designed specifically for evaluating English text. Because ROUGE scores depend heavily on tokenization, stemming, and the removal of unnecessary characters, the original ROUGE evaluation was modified with punctuation-only removal and language-specific tokenization/stemming to enable reliable comparison of reference and generated summaries across different scripts.
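The tokenization sensitivity can be illustrated with a minimal ROUGE-1 F1 sketch (a simplified re-implementation for illustration only, not the official scorer; the whitespace tokenizer is an assumption that breaks down for unsegmented scripts such as Japanese or Chinese):

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str, tokenize=str.split) -> float:
    """Unigram-overlap F1 between a reference and a candidate summary."""
    ref_counts = Counter(tokenize(reference))
    cand_counts = Counter(tokenize(candidate))
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Whitespace tokenization works for English...
print(rouge1_f1("the cat sat", "the cat sat"))  # 1.0

# ...but treats an unsegmented Japanese sentence as one giant token,
# so two clearly overlapping summaries score zero:
print(rouge1_f1("猫が座った", "猫が寝た"))  # 0.0

# Character-level segmentation recovers the overlap,
# which is why language-specific tokenization matters.
print(round(rouge1_f1("猫が座った", "猫が寝た", tokenize=list), 3))  # 0.667
```

In the actual evaluation the segmentation step would be language-specific (word segmenters, stemmers, etc.) rather than the naive choices shown here.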

#### Previous results available?

<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no


## Dataset Curation

### Original Curation

#### Original Curation Rationale

<!-- info: Original curation rationale -->
<!-- scope: telescope -->
State-of-the-art text summarization models are heavily data-driven, i.e., a large number of article-summary pairs are required to train them effectively. As a result, abstractive summarization has centered around the English language, as most large abstractive summarization datasets are available in English only. Though there have been some recent efforts for curating multilingual abstractive summarization datasets, they are limited in terms of the number of languages covered, the number of training samples, or both. To this end, we curate **XL-Sum**, a large-scale abstractive summarization dataset of 1.35 million news articles in 45 languages crawled from the British Broadcasting Corporation website.


#### Communicative Goal

<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Introduce new languages to the English-centric domain of abstractive text summarization and enable both multilingual and per-language summarization.

#### Sourced from Different Sources

<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes

#### Source Details

<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
British Broadcasting Corporation (BBC) news websites.


### Language Data

#### How was Language Data Obtained?

<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`

#### Where was it found?

<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Multiple websites`

#### Language Producers

<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The language content was written by professional news editors hired by the BBC.

#### Topics Covered

<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
News

#### Data Validation

<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated

#### Data Preprocessing

<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
We applied NFKC normalization to all text instances.
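As a minimal illustration, NFKC normalization via Python's standard `unicodedata` module maps compatibility characters (ligatures, full-width forms, full-width digits, etc.) to their canonical equivalents:

```python
import unicodedata

# Inputs containing the "fi" ligature, full-width Latin letters
# with an ideographic space, and full-width digits.
texts = ["ﬁnancial", "ＢＢＣ Ｎｅｗｓ", "２０２１"]
normalized = [unicodedata.normalize("NFKC", t) for t in texts]
print(normalized)  # ['financial', 'BBC News', '2021']
```

This makes visually identical strings byte-identical, which helps deduplication and tokenization downstream.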

#### Was Data Filtered?

<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically

#### Filter Criteria

<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
We designed a crawler to recursively crawl pages starting from the homepage by visiting the article links present on each page visited. We took advantage of the fact that all BBC sites have somewhat similar structures and were able to scrape articles from all of them. We discarded pages with no textual content (mostly pages consisting of multimedia content) before further processing. By carefully examining the HTML structures of the crawled pages, we designed a number of heuristics to make the extraction effective:
1. The desired summary must be present within the first two paragraphs of an article.
2. The summary paragraph must have some portion of its text in bold format.
3. The summary paragraph may contain some hyperlinks that are not bold. The proportion of bold text and hyperlinked text to the total length of the paragraph in consideration must be at least 95%.
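A rough sketch of the third heuristic — measuring the bold/hyperlink proportion of a paragraph — could look like the following. This is a hypothetical re-implementation using Python's standard `html.parser`; the crawler's actual code is not reproduced here, and the tag set and threshold are assumptions.

```python
from html.parser import HTMLParser

class BoldLinkRatio(HTMLParser):
    """Tracks what fraction of a paragraph's text sits inside <b>/<strong>/<a> tags."""

    def __init__(self):
        super().__init__()
        self.depth = 0    # current nesting level of bold/link tags
        self.marked = 0   # characters inside bold/link tags
        self.total = 0    # all text characters seen

    def handle_starttag(self, tag, attrs):
        if tag in ("b", "strong", "a"):
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in ("b", "strong", "a") and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        text = data.strip()
        self.total += len(text)
        if self.depth > 0:
            self.marked += len(text)

def looks_like_summary(paragraph_html: str, threshold: float = 0.95) -> bool:
    """Apply the >= 95% bold/hyperlink heuristic to one paragraph's HTML."""
    parser = BoldLinkRatio()
    parser.feed(paragraph_html)
    return parser.total > 0 and parser.marked / parser.total >= threshold

print(looks_like_summary("<p><b>A concise bold summary with <a href='#'>a link</a>.</b></p>"))  # True
print(looks_like_summary("<p>Mostly plain text with one <b>bold</b> word.</p>"))  # False
```

In practice such a check would run only on the first two paragraphs of each article, per the first heuristic.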
 

### Structured Annotations

#### Additional Annotations?

<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none

#### Annotation Service?

<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no


### Consent

#### Any Consent Policy?

<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes

#### Consent Policy Details

<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
BBC's policy specifies that the text content within its websites can be used for non-commercial research only.


### Private Identifying Information (PII)

#### Contains PII?

<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
likely

#### Categories of PII

<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`

#### Any PII Identification?

<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification


### Maintenance

#### Any Maintenance Plan?

<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no


## Broader Social Context

### Previous Work on the Social Impact of the Dataset

#### Usage of Models based on the Data

<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no


### Impact on Under-Served Communities

#### Addresses needs of underserved Communities?

<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes

#### Details on how Dataset Addresses the Needs

<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
This dataset introduces a summarization corpus for many languages for which no such datasets had been curated before.


### Discussion of Biases

#### Any Documented Social Biases?

<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no

#### Are the Language Producers Representative of the Language?

<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
Yes


## Considerations for Using the Data

### PII Risks and Liability


### Licenses

#### Copyright Restrictions on the Dataset

<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`research use only`, `non-commercial use only`

#### Copyright Restrictions on the Language Data

<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`research use only`, `non-commercial use only`


### Known Technical Limitations

#### Technical Limitations

<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
Human evaluation showed that for most languages the percentage of good summaries was in the upper nineties, and almost none of the summaries contained conflicting information, while about one-third on average had information that was not directly inferable from the source article. Since multiple articles are generally written about an important event, there could be overlap between the training and evaluation data in terms of content.


#### Unsuited Applications

<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The dataset is limited to the news domain only. Hence it would not be advisable to use a model trained on this dataset to summarize texts from a different domain, e.g., literature or scientific text. Another pitfall could be hallucinations in the model-generated summary.

#### Discouraged Use Cases

<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
ROUGE evaluates the quality of the summary as a whole by considering up to 4-gram overlaps. Therefore, in an article about India, if the word "India" in the generated summary were replaced by "Pakistan" due to model hallucination, the overall score would not be reduced significantly, but the entire meaning could change.