Committed by system (HF staff)
Commit: c4da2f9
Parent: b7f9de3

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1):
  1. README.md +291 -0
README.md ADDED
@@ -0,0 +1,291 @@
---
annotations_creators:
- found
language_creators:
- found
languages:
- de
- es
- fr
- ru
- tr
licenses:
- other-research-only
multilinguality:
- multilingual
size_categories:
- 1M<n<5M
source_datasets:
- extended|cnn_dailymail
- original
task_categories:
- conditional-text-generation
- text-classification
task_ids:
- machine-translation
- multi-class-classification
- multi-label-classification
- summarization
- topic-classification
---

# Dataset Card for "mlsum"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** []()
- **Repository:** https://github.com/recitalAI/MLSUM
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.647/
- **Point of Contact:** thomas@recital.ai
- **Size of downloaded dataset files:** 1748.64 MB
- **Size of the generated dataset:** 4635.42 MB
- **Total amount of disk used:** 6384.06 MB

### [Dataset Summary](#dataset-summary)

We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, and Turkish.
Together with English newspapers from the popular CNN/Daily Mail dataset, the collected data form a large-scale multilingual dataset which can enable new research directions for the text summarization community.
We report cross-lingual comparative analyses based on state-of-the-art systems.
These highlight existing biases which motivate the use of a multilingual dataset.

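Each language is exposed as its own configuration of the dataset. A minimal loading sketch with the `datasets` library, assuming the `de` (German) configuration; the other configurations are `es`, `fr`, `ru`, and `tu`:

```python
from datasets import load_dataset

# Load the German configuration of MLSUM; swap "de" for "es", "fr", "ru", or "tu".
dataset = load_dataset("mlsum", "de")

# The result is a DatasetDict with "train", "validation", and "test" splits.
print(dataset)
print(dataset["validation"][0]["title"])
```
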
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### de

- **Size of downloaded dataset files:** 330.52 MB
- **Size of the generated dataset:** 897.34 MB
- **Total amount of disk used:** 1227.86 MB

An example of 'validation' looks as follows.
```
{
    "date": "01/01/2001",
    "summary": "A text",
    "text": "This is a text",
    "title": "A sample",
    "topic": "football",
    "url": "https://www.google.com"
}
```

#### es

- **Size of downloaded dataset files:** 489.53 MB
- **Size of the generated dataset:** 1274.55 MB
- **Total amount of disk used:** 1764.09 MB

An example of 'validation' looks as follows.
```
{
    "date": "01/01/2001",
    "summary": "A text",
    "text": "This is a text",
    "title": "A sample",
    "topic": "football",
    "url": "https://www.google.com"
}
```

#### fr

- **Size of downloaded dataset files:** 591.27 MB
- **Size of the generated dataset:** 1537.36 MB
- **Total amount of disk used:** 2128.63 MB

An example of 'validation' looks as follows.
```
{
    "date": "01/01/2001",
    "summary": "A text",
    "text": "This is a text",
    "title": "A sample",
    "topic": "football",
    "url": "https://www.google.com"
}
```

#### ru

- **Size of downloaded dataset files:** 101.30 MB
- **Size of the generated dataset:** 263.38 MB
- **Total amount of disk used:** 364.68 MB

An example of 'train' looks as follows.
```
{
    "date": "01/01/2001",
    "summary": "A text",
    "text": "This is a text",
    "title": "A sample",
    "topic": "football",
    "url": "https://www.google.com"
}
```

#### tu

- **Size of downloaded dataset files:** 236.03 MB
- **Size of the generated dataset:** 662.79 MB
- **Total amount of disk used:** 898.82 MB

An example of 'train' looks as follows.
```
{
    "date": "01/01/2001",
    "summary": "A text",
    "text": "This is a text",
    "title": "A sample",
    "topic": "football",
    "url": "https://www.google.com"
}
```

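The snippets above use illustrative placeholder values. A quick sketch for looking at a real record, assuming `dataset` is the `de` configuration loaded in the earlier example:

```python
# Assumes: dataset = load_dataset("mlsum", "de") from the loading sketch above.
# Peek at one real validation record; every field is a plain string.
example = dataset["validation"][0]
for key in ("title", "topic", "date", "url", "summary", "text"):
    print(f"{key}: {example[key][:80]}")
```
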
### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### de
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.

#### es
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.

#### fr
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.

#### ru
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.

#### tu
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.

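The schema can also be confirmed programmatically; a sketch reusing the `dataset` object from the loading example:

```python
# Assumes: dataset = load_dataset("mlsum", "de") from the loading sketch above.
# The schema is identical across configurations and splits:
# six string-valued features (text, summary, topic, url, title, date).
print(dataset["train"].features)
```
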
### [Data Splits Sample Size](#data-splits-sample-size)

|name|train |validation|test |
|----|-----:|---------:|----:|
|de  |220887|     11394|10701|
|es  |266367|     10358|13920|
|fr  |392902|     16059|15828|
|ru  | 25556|       750|  757|
|tu  |249277|     11565|12775|

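The row of this table for a given language can be reproduced from the loaded configuration; a sketch, again assuming the `de` configuration is loaded as `dataset`:

```python
# Assumes: dataset = load_dataset("mlsum", "de") from the loading sketch above.
# Print the number of rows per split; for "de" this should match the table above.
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```
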
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{scialom2020mlsum,
  title={MLSUM: The Multilingual Summarization Corpus},
  author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
  journal={arXiv preprint arXiv:2004.14900},
  year={2020}
}
```

### Contributions

Thanks to [@RachelKer](https://github.com/RachelKer), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset.