Sebastian Gehrmann committed on
Commit 49a62d9
1 Parent(s): 97d6be4

data card.

Files changed (2)
  1. README.md +400 -0
  2. xwikis-05_18_2022_16_42_36.json +0 -153
README.md CHANGED
@@ -0,0 +1,400 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - unknown
+ languages:
+ - unknown
+ licenses:
+ - cc-by-sa-4.0
+ multilinguality:
+ - unknown
+ pretty_name: xwikis
+ size_categories:
+ - unknown
+ source_datasets:
+ - original
+ task_categories:
+ - summarization
+ task_ids:
+ - unknown
+ ---
+
+ # Dataset Card for GEM/xwikis
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/lauhaide/clads
+ - **Repository:** [Needs More Information]
+ - **Paper:** https://arxiv.org/abs/2202.09583
+ - **Leaderboard:** N/A
+ - **Point of Contact:** Laura Perez-Beltrachini
+
+ ### Link to Main Data Card
+
+ You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xwikis).
+
+ ### Dataset Summary
+
+ The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation.
+
+ You can load the dataset via:
+ ```python
+ import datasets
+ data = datasets.load_dataset('GEM/xwikis')
+ ```
+ The data loader can be found [here](https://huggingface.co/datasets/GEM/xwikis).
+
+ #### website
+ https://github.com/lauhaide/clads
+
+ #### paper
+ https://arxiv.org/abs/2202.09583
+
+ #### authors
+ Laura Perez-Beltrachini (University of Edinburgh)
+
+ ## Dataset Overview
+
+ ### Where to find the Data and its Documentation
+
+ #### Webpage
+
+ <!-- info: What is the webpage for the dataset (if it exists)? -->
+ <!-- scope: telescope -->
+ https://github.com/lauhaide/clads
+
+ #### Paper
+
+ <!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
+ <!-- scope: telescope -->
+ https://arxiv.org/abs/2202.09583
+
+ #### BibTex
+
+ <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
+ <!-- scope: microscope -->
+ @InProceedings{clads-emnlp,
+ author = "Laura Perez-Beltrachini and Mirella Lapata",
+ title = "Models and Datasets for Cross-Lingual Summarisation",
+ booktitle = "Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing",
+ year = "2021",
+ address = "Punta Cana, Dominican Republic",
+ }
+
+ #### Contact Name
+
+ <!-- quick -->
+ <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
+ <!-- scope: periscope -->
+ Laura Perez-Beltrachini
+
+ #### Contact Email
+
+ <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
+ <!-- scope: periscope -->
+ lperez@ed.ac.uk
+
+ #### Has a Leaderboard?
+
+ <!-- info: Does the dataset have an active leaderboard? -->
+ <!-- scope: telescope -->
+ no
+
+
+ ### Languages and Intended Use
+
+ #### Multilingual?
+
+ <!-- quick -->
+ <!-- info: Is the dataset multilingual? -->
+ <!-- scope: telescope -->
+ yes
+
+ #### Covered Languages
+
+ <!-- quick -->
+ <!-- info: What languages/dialects are covered in the dataset? -->
+ <!-- scope: telescope -->
+ `German`, `English`, `French`, `Czech`
+
+ #### License
+
+ <!-- quick -->
+ <!-- info: What is the license of the dataset? -->
+ <!-- scope: telescope -->
+ cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
+
+ #### Intended Use
+
+ <!-- info: What is the intended use of the dataset? -->
+ <!-- scope: microscope -->
+ Cross-lingual and multi-lingual abstractive summarisation of single long input documents.
+
+ #### Primary Task
+
+ <!-- info: What primary task does the dataset support? -->
+ <!-- scope: telescope -->
+ Summarization
+
+ #### Communicative Goal
+
+ <!-- quick -->
+ <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
+ <!-- scope: periscope -->
+ Entity descriptive summarisation, that is, generating a summary that conveys the most salient facts of a document related to a given entity.
+
+
+ ### Credit
+
+ #### Curation Organization Type(s)
+
+ <!-- info: In what kind of organization did the dataset curation happen? -->
+ <!-- scope: telescope -->
+ `academic`
+
+ #### Dataset Creators
+
+ <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
+ <!-- scope: microscope -->
+ Laura Perez-Beltrachini (University of Edinburgh)
+
+ #### Who added the Dataset to GEM?
+
+ <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
+ <!-- scope: microscope -->
+ Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)
+
+
+ ### Dataset Structure
+
+ #### Data Splits
+
+ <!-- info: Describe and name the splits in the dataset if there are more than one. -->
+ <!-- scope: periscope -->
+ For each language pair and direction there exists a train/valid/test split.
+ The test split is a sample of size 7k from the intersection of titles existing in the four languages (cs, fr, en, de).
+ Train/valid are randomly split.
+
+
+
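The split construction described above can be sketched in a few lines. The title sets below are toy stand-ins (hypothetical, not the real corpus, which aligns article titles across the four Wikipedias and samples 7k of the shared ones):

```python
import random

# Toy per-language title sets; hypothetical data for illustration only.
titles = {
    "cs": {"Berlin", "Franz Kafka", "Danube", "Prague"},
    "fr": {"Berlin", "Franz Kafka", "Prague"},
    "en": {"Berlin", "Franz Kafka", "Prague", "Danube"},
    "de": {"Berlin", "Franz Kafka", "Prague", "Vltava"},
}

# Test titles are drawn from the intersection across all four languages;
# the corpus samples 7k such titles, train/valid are split randomly.
common = set.intersection(*titles.values())
random.seed(0)
test_titles = random.sample(sorted(common), k=min(7000, len(common)))
print(sorted(common))  # ['Berlin', 'Franz Kafka', 'Prague']
```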
+ ## Dataset in GEM
+
+ ### Rationale for Inclusion in GEM
+
+ #### Similar Datasets
+
+ <!-- info: Do other datasets for the high level task exist? -->
+ <!-- scope: telescope -->
+ no
+
+
+ ### GEM-Specific Curation
+
+ #### Modified for GEM?
+
+ <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
+ <!-- scope: telescope -->
+ no
+
+ #### Additional Splits?
+
+ <!-- info: Does GEM provide additional splits to the dataset? -->
+ <!-- scope: telescope -->
+ no
+
+
+ ### Getting Started with the Task
+
+
+
+
+ ## Previous Results
+
+ ### Previous Results
+
+ #### Measured Model Abilities
+
+ <!-- info: What aspect of model ability can be measured with this dataset? -->
+ <!-- scope: telescope -->
+ - identification of entity salient information
+ - translation
+ - multi-linguality
+ - cross-lingual transfer, zero-shot, few-shot
+
+ #### Metrics
+
+ <!-- info: What metrics are typically used for this task? -->
+ <!-- scope: periscope -->
+ `ROUGE`
+
+ #### Previous results available?
+
+ <!-- info: Are previous results available? -->
+ <!-- scope: telescope -->
+ yes
+
+ #### Other Evaluation Approaches
+
+ <!-- info: What evaluation approaches have others used? -->
+ <!-- scope: periscope -->
+ ROUGE-1/2/L
+
+
+
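ROUGE-N is F1 over n-gram overlap between a candidate summary and a reference. A minimal sketch (whitespace tokens, no stemming or bootstrap resampling, unlike the official ROUGE package that reported results use):

```python
from collections import Counter

def rouge_n_f1(candidate: str, reference: str, n: int = 1) -> float:
    """Minimal ROUGE-N F1 on whitespace tokens (illustrative only)."""
    def ngrams(text: str) -> Counter:
        toks = text.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_n_f1("the cat sat on the mat", "the cat lay on the mat"))  # 5/6 ≈ 0.833
```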
+ ## Dataset Curation
+
+ ### Original Curation
+
+ #### Sourced from Different Sources
+
+ <!-- info: Is the dataset aggregated from different data sources? -->
+ <!-- scope: telescope -->
+ no
+
+
+ ### Language Data
+
+ #### How was Language Data Obtained?
+
+ <!-- info: How was the language data obtained? -->
+ <!-- scope: telescope -->
+ `Found`
+
+ #### Where was it found?
+
+ <!-- info: If found, where from? -->
+ <!-- scope: telescope -->
+ `Single website`
+
+ #### Data Validation
+
+ <!-- info: Was the text validated by a different worker or a data curator? -->
+ <!-- scope: telescope -->
+ other
+
+ #### Was Data Filtered?
+
+ <!-- info: Were text instances selected or filtered? -->
+ <!-- scope: telescope -->
+ not filtered
+
+
+ ### Structured Annotations
+
+ #### Additional Annotations?
+
+ <!-- quick -->
+ <!-- info: Does the dataset have additional annotations for each instance? -->
+ <!-- scope: telescope -->
+ found
+
+ #### Annotation Service?
+
+ <!-- info: Was an annotation service used? -->
+ <!-- scope: telescope -->
+ no
+
+ #### Annotation Values
+
+ <!-- info: Purpose and values for each annotation -->
+ <!-- scope: microscope -->
+ The input documents have section structure information.
+
+ #### Any Quality Control?
+
+ <!-- info: Quality control measures? -->
+ <!-- scope: telescope -->
+ validated by another rater
+
+ #### Quality Control Details
+
+ <!-- info: Describe the quality control measures that were taken. -->
+ <!-- scope: microscope -->
+ Bilingual annotators assessed the content overlap of source document and target summaries.
+
+
+ ### Consent
+
+ #### Any Consent Policy?
+
+ <!-- info: Was there a consent policy involved when gathering the data? -->
+ <!-- scope: telescope -->
+ no
+
+
+ ### Private Identifying Information (PII)
+
+ #### Contains PII?
+
+ <!-- quick -->
+ <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
+ <!-- scope: telescope -->
+ no PII
+
+
+ ### Maintenance
+
+ #### Any Maintenance Plan?
+
+ <!-- info: Does the original dataset have a maintenance plan? -->
+ <!-- scope: telescope -->
+ no
+
+
+
+ ## Broader Social Context
+
+ ### Previous Work on the Social Impact of the Dataset
+
+ #### Usage of Models based on the Data
+
+ <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
+ <!-- scope: telescope -->
+ no
+
+
+ ### Impact on Under-Served Communities
+
+ #### Addresses needs of underserved Communities?
+
+ <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
+ <!-- scope: telescope -->
+ no
+
+
+ ### Discussion of Biases
+
+ #### Any Documented Social Biases?
+
+ <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
+ <!-- scope: telescope -->
+ no
+
+
+
+ ## Considerations for Using the Data
+
+ ### PII Risks and Liability
+
+
+
+ ### Licenses
+
+ #### Copyright Restrictions on the Dataset
+
+ <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
+ <!-- scope: periscope -->
+ `public domain`
+
+ #### Copyright Restrictions on the Language Data
+
+ <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
+ <!-- scope: periscope -->
+ `public domain`
+
+
+ ### Known Technical Limitations
+
+
+
xwikis-05_18_2022_16_42_36.json DELETED
@@ -1,153 +0,0 @@
- {
- "overview": {
- "what": {
- "dataset": "The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation. "
- },
- "where": {
- "has-leaderboard": "no",
- "leaderboard-url": "N/A",
- "leaderboard-description": "N/A",
- "website": "https://github.com/lauhaide/clads",
- "paper-bibtext": "@InProceedings{clads-emnlp,\n author = \"Laura Perez-Beltrachini and Mirella Lapata\",\n title = \"Models and Datasets for Cross-Lingual Summarisation\",\n booktitle = \"Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing \",\n year = \"2021\",\n address = \"Punta Cana, Dominican Republic\",\n}",
- "paper-url": "https://arxiv.org/abs/2202.09583",
- "contact-name": "Laura Perez-Beltrachini",
- "contact-email": "lperez@ed.ac.uk"
- },
- "languages": {
- "is-multilingual": "yes",
- "license": "cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International",
- "task-other": "N/A",
- "language-names": [
- "German",
- "English",
- "French",
- "Czech"
- ],
- "intended-use": "Cross-lingual and Multi-lingual single long input document abstractive summarisation.",
- "license-other": "N/A",
- "task": "Summarization",
- "communicative": "Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity."
- },
- "credit": {
- "organization-type": [
- "academic"
- ],
- "creators": "Laura Perez-Beltrachini (University of Edinburgh)",
- "gem-added-by": "Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)"
- },
- "structure": {
- "structure-splits": "For each language pair and direction there exists a train/valid/test split. \nThe test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de).\nTrain/valid are randomly split."
- }
- },
- "curation": {
- "original": {
- "is-aggregated": "no",
- "aggregated-sources": "N/A"
- },
- "language": {
- "found": [
- "Single website"
- ],
- "crowdsourced": [],
- "created": "N/A",
- "machine-generated": "N/A",
- "validated": "other",
- "is-filtered": "not filtered",
- "filtered-criteria": "N/A",
- "obtained": [
- "Found"
- ]
- },
- "annotations": {
- "origin": "found",
- "rater-number": "N/A",
- "rater-qualifications": "N/A",
- "rater-training-num": "N/A",
- "rater-test-num": "N/A",
- "rater-annotation-service-bool": "no",
- "rater-annotation-service": [],
- "values": "The input documents have section structure information.",
- "quality-control": "validated by another rater",
- "quality-control-details": "Bilingual annotators assessed the content overlap of source document and target summaries."
- },
- "consent": {
- "has-consent": "no",
- "consent-policy": "N/A",
- "consent-other": "N/A"
- },
- "pii": {
- "has-pii": "no PII",
- "no-pii-justification": "N/A",
- "is-pii-identified": "N/A",
- "pii-identified-method": "N/A",
- "is-pii-replaced": "N/A",
- "pii-replaced-method": "N/A",
- "pii-categories": []
- },
- "maintenance": {
- "has-maintenance": "no",
- "description": "N/A",
- "contact": "N/A",
- "contestation-mechanism": "N/A",
- "contestation-link": "N/A",
- "contestation-description": "N/A"
- }
- },
- "gem": {
- "rationale": {
- "sole-task-dataset": "no",
- "sole-language-task-dataset": "N/A",
- "distinction-description": "N/A"
- },
- "curation": {
- "has-additional-curation": "no",
- "modification-types": [],
- "modification-description": "N/A",
- "has-additional-splits": "no",
- "additional-splits-description": "N/A",
- "additional-splits-capacicites": "N/A"
- },
- "starting": {}
- },
- "results": {
- "results": {
- "other-metrics-definitions": "N/A",
- "has-previous-results": "yes",
- "current-evaluation": "ROUGE-1/2/L",
- "previous-results": "N/A",
- "model-abilities": "- identification of entity salient information\n- translation\n- multi-linguality\n- cross-lingual transfer, zero-shot, few-shot",
- "metrics": [
- "ROUGE"
- ]
- }
- },
- "considerations": {
- "pii": {},
- "licenses": {
- "dataset-restrictions-other": "N/A",
- "data-copyright-other": "N/A",
- "dataset-restrictions": [
- "public domain"
- ],
- "data-copyright": [
- "public domain"
- ]
- },
- "limitations": {}
- },
- "context": {
- "previous": {
- "is-deployed": "no",
- "described-risks": "N/A",
- "changes-from-observation": "N/A"
- },
- "underserved": {
- "helps-underserved": "no",
- "underserved-description": "N/A"
- },
- "biases": {
- "has-biases": "no",
- "bias-analyses": "N/A"
- }
- }
- }