---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-wikipedia
task_categories:
- text2text-generation
task_ids:
- text-simplification
pretty_name: WikiAuto
dataset_info:
- config_name: manual
  features:
  - name: alignment_label
    dtype:
      class_label:
        names:
          '0': notAligned
          '1': aligned
          '2': partialAligned
  - name: normal_sentence_id
    dtype: string
  - name: simple_sentence_id
    dtype: string
  - name: normal_sentence
    dtype: string
  - name: simple_sentence
    dtype: string
  - name: gleu_score
    dtype: float32
  splits:
  - name: train
    num_bytes: 110838475
    num_examples: 373801
  - name: dev
    num_bytes: 21112775
    num_examples: 73249
  - name: test
    num_bytes: 33851634
    num_examples: 118074
  download_size: 168957430
  dataset_size: 165802884
- config_name: auto_acl
  features:
  - name: normal_sentence
    dtype: string
  - name: simple_sentence
    dtype: string
  splits:
  - name: full
    num_bytes: 121975414
    num_examples: 488332
  download_size: 118068366
  dataset_size: 121975414
- config_name: auto
  features:
  - name: example_id
    dtype: string
  - name: normal
    struct:
    - name: normal_article_id
      dtype: int32
    - name: normal_article_title
      dtype: string
    - name: normal_article_url
      dtype: string
    - name: normal_article_content
      sequence:
      - name: normal_sentence_id
        dtype: string
      - name: normal_sentence
        dtype: string
  - name: simple
    struct:
    - name: simple_article_id
      dtype: int32
    - name: simple_article_title
      dtype: string
    - name: simple_article_url
      dtype: string
    - name: simple_article_content
      sequence:
      - name: simple_sentence_id
        dtype: string
      - name: simple_sentence
        dtype: string
  - name: paragraph_alignment
    sequence:
    - name: normal_paragraph_id
      dtype: string
    - name: simple_paragraph_id
      dtype: string
  - name: sentence_alignment
    sequence:
    - name: normal_sentence_id
      dtype: string
    - name: simple_sentence_id
      dtype: string
  splits:
  - name: part_1
    num_bytes: 1773240295
    num_examples: 125059
  - name: part_2
    num_bytes: 80417651
    num_examples: 13036
  download_size: 2160638921
  dataset_size: 1853657946
- config_name: auto_full_no_split
  features:
  - name: normal_sentence
    dtype: string
  - name: simple_sentence
    dtype: string
  splits:
  - name: full
    num_bytes: 146310611
    num_examples: 591994
  download_size: 141574179
  dataset_size: 146310611
- config_name: auto_full_with_split
  features:
  - name: normal_sentence
    dtype: string
  - name: simple_sentence
    dtype: string
  splits:
  - name: full
    num_bytes: 124549115
    num_examples: 483801
  download_size: 120678315
  dataset_size: 124549115
config_names:
- auto
- auto_acl
- auto_full_no_split
- auto_full_with_split
- manual
---

# Dataset Card for WikiAuto

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [WikiAuto github repository](https://github.com/chaojiang06/wiki-auto)
- **Paper:** [Neural CRF Model for Sentence Alignment in Text Simplification](https://arxiv.org/abs/2005.02324)
- **Point of Contact:** [Chao Jiang](mailto:jiang.1530@osu.edu)

### Dataset Summary

WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems.

The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config in this version of the dataset), then trained a neural CRF system to predict these alignments.

The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).
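
Each configuration can be loaded separately with the `datasets` library. Below is a minimal sketch, assuming the dataset is available on the Hugging Face Hub under the `wiki_auto` identifier:
```
from datasets import load_dataset

# Crowd-sourced alignment labels with train/dev/test splits
manual = load_dataset("wiki_auto", "manual")

# Automatically aligned sentence pairs used to train the systems in the paper
auto_acl = load_dataset("wiki_auto", "auto_acl")

print(manual["train"][0])
```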

### Supported Tasks and Leaderboards

The dataset was created to support a `text-simplification` task. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
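
As a minimal sketch, SARI can be computed with the Hugging Face [`evaluate`](https://github.com/huggingface/evaluate) library (the sentences below are illustrative only):
```
import evaluate

sari = evaluate.load("sari")

# SARI scores a system output against both the source sentence and the references
sources = ["About 95 species are currently accepted ."]
predictions = ["About 95 you now get in ."]
references = [["About 95 species are currently known .",
               "About 95 species are now accepted ."]]
print(sari.compute(sources=sources, predictions=predictions, references=references))
```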

### Languages

While both the input and output of the proposed task are in English (`en`), the task is presented as translation, with Simple English Wikipedia treated as its own idiom. For a statement of what is intended (but not always observed) to constitute Simple English on this platform, see [Simple English in Wikipedia](https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_English).

## Dataset Structure

### Data Instances

Each configuration has a slightly different instance format.

A `manual` config instance consists of a sentence from a Simple English Wikipedia article, a sentence from the linked English Wikipedia article, IDs for each of them, and a label indicating whether they are aligned. Sentences on either side can be repeated, so that each aligned pair appears in its own instance. For example:
```
{'alignment_label': 1,
 'normal_sentence_id': '0_66252-1-0-0',
 'simple_sentence_id': '0_66252-0-0-0',
 'normal_sentence': 'The Local Government Act 1985 is an Act of Parliament in the United Kingdom.',
 'simple_sentence': 'The Local Government Act 1985 was an Act of Parliament in the United Kingdom',
 'gleu_score': 0.800000011920929}
```
which is followed by:
```
{'alignment_label': 0,
 'normal_sentence_id': '0_66252-1-0-1',
 'simple_sentence_id': '0_66252-0-0-0',
 'normal_sentence': 'Its main effect was to abolish the six county councils of the metropolitan counties that had been set up in 1974, 11 years earlier, by the Local Government Act 1972, along with the Greater London Council that had been established in 1965.',
 'simple_sentence': 'The Local Government Act 1985 was an Act of Parliament in the United Kingdom',
 'gleu_score': 0.08641975373029709}
```

In the `auto` config, an instance is a pair of corresponding English and Simple English Wikipedia articles, together with alignments at the paragraph and sentence level:
```
{'example_id': '0',
 'normal': {'normal_article_content': {'normal_sentence': ["Lata Mondal ( ; born: 16 January 1993, Dhaka) is a Bangladeshi cricketer who plays for the Bangladesh national women's cricket team.",
    'She is a right handed batter.',
    'Mondal was born on January 16, 1993 in Dhaka, Bangladesh.',
    "Mondal made her ODI career against the Ireland women's cricket team on November 26, 2011.",
    "Mondal made her T20I career against the Ireland women's cricket team on August 28, 2012.",
    "In October 2018, she was named in Bangladesh's squad for the 2018 ICC Women's World Twenty20 tournament in the West Indies.",
    "Mondal was a member of the team that won a silver medal in cricket against the China national women's cricket team at the 2010 Asian Games in Guangzhou, China."],
   'normal_sentence_id': ['normal-41918715-0-0',
    'normal-41918715-0-1',
    'normal-41918715-1-0',
    'normal-41918715-2-0',
    'normal-41918715-3-0',
    'normal-41918715-3-1',
    'normal-41918715-4-0']},
  'normal_article_id': 41918715,
  'normal_article_title': 'Lata Mondal',
  'normal_article_url': 'https://en.wikipedia.org/wiki?curid=41918715'},
 'paragraph_alignment': {'normal_paragraph_id': ['normal-41918715-0'],
  'simple_paragraph_id': ['simple-702227-0']},
 'sentence_alignment': {'normal_sentence_id': ['normal-41918715-0-0',
   'normal-41918715-0-1'],
  'simple_sentence_id': ['simple-702227-0-0', 'simple-702227-0-1']},
 'simple': {'simple_article_content': {'simple_sentence': ["Lata Mondal (born: 16 January 1993) is a Bangladeshi cricketer who plays for the Bangladesh national women's cricket team.",
    'She is a right handed bat.'],
   'simple_sentence_id': ['simple-702227-0-0', 'simple-702227-0-1']},
  'simple_article_id': 702227,
  'simple_article_title': 'Lata Mondal',
  'simple_article_url': 'https://simple.wikipedia.org/wiki?curid=702227'}}
```
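
The nested structure can be flattened back into plain sentence pairs by joining the sentence-level alignment with the article contents through the sentence IDs. The `aligned_pairs` helper below is a hypothetical sketch, assuming instances shaped exactly like the one above:
```
def aligned_pairs(article):
    # Map sentence IDs to sentence text on each side
    normal_content = article["normal"]["normal_article_content"]
    simple_content = article["simple"]["simple_article_content"]
    normal = dict(zip(normal_content["normal_sentence_id"], normal_content["normal_sentence"]))
    simple = dict(zip(simple_content["simple_sentence_id"], simple_content["simple_sentence"]))
    # The alignment stores parallel ID lists: position i pairs one
    # normal sentence with one simple sentence
    align = article["sentence_alignment"]
    return [(normal[n], simple[s])
            for n, s in zip(align["normal_sentence_id"], align["simple_sentence_id"])]
```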

Finally, the `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs were obtained by selecting the aligned pairs of sentences from `auto` to provide a ready-to-use aligned dataset for training a sequence-to-sequence system. While `auto_acl` corresponds to the filtered version of the data used to train the systems in the paper, `auto_full_no_split` and `auto_full_with_split` correspond to the unfiltered versions, without and with sentence splits respectively. In the `auto_full_with_split` config, we join the sentences in the simple article mapped to the same sentence in the complex article to capture sentence splitting; split sentences are separated by a `<SEP>` token. In the `auto_full_no_split` config, we do not join the splits and treat them as separate pairs. An instance is a single pair of sentences:
```
{'normal_sentence': 'In early work , Rutherford discovered the concept of radioactive half-life , the radioactive element radon , and differentiated and named alpha and beta radiation .\n',
 'simple_sentence': 'Rutherford discovered the radioactive half-life , and the three parts of radiation which he named Alpha , Beta , and Gamma .\n'}
```
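
The `<SEP>`-joined simple side of an `auto_full_with_split` pair can be broken back into individual sentences. A minimal sketch with made-up sentences (the exact whitespace around `<SEP>` is an assumption):
```
pair = {
    "normal_sentence": "A long sentence stating two separate facts .\n",
    "simple_sentence": "A sentence about the first fact . <SEP> A sentence about the second fact .\n",
}

# Split on the separator token and trim surrounding whitespace
simple_parts = [part.strip() for part in pair["simple_sentence"].split("<SEP>")]
print(simple_parts)
# ['A sentence about the first fact .', 'A sentence about the second fact .']
```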

### Data Fields

The data has the following fields:
- `normal_sentence`: a sentence from English Wikipedia.
- `normal_sentence_id`: a unique ID for each English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph.
- `simple_sentence`: a sentence from Simple English Wikipedia.
- `simple_sentence_id`: a unique ID for each Simple English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph.
- `alignment_label`: indicates whether a pair of sentences is aligned: labels are `2:partialAligned`, `1:aligned`, and `0:notAligned` (the sketch after this list shows how to filter on this label)
- `paragraph_alignment`: a first step of alignment mapping English and Simple English paragraphs from linked articles
- `sentence_alignment`: the full alignment mapping English and Simple English sentences from linked articles
- `gleu_score`: the sentence level GLEU (Google-BLEU) score for each pair.
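
As referenced in the field list above, `alignment_label` can be used to keep only the aligned pairs of the `manual` config. A minimal sketch, again assuming the `wiki_auto` Hub identifier:
```
from datasets import load_dataset

# Keep pairs labeled `aligned` (1) or `partialAligned` (2),
# dropping the `notAligned` (0) majority
manual_train = load_dataset("wiki_auto", "manual", split="train")
aligned_only = manual_train.filter(lambda example: example["alignment_label"] != 0)
print(len(aligned_only))
```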

### Data Splits

In `auto`, the `part_2` split corresponds to the articles used in `manual`, and `part_1` covers the rest of the article pairs.

The `manual` config is provided with a `train`/`dev`/`test` split with the following amounts of data:

|                        |   train |     dev |    test |
|------------------------|--------:|--------:|--------:|
| Total sentence pairs   |  373801 |      73249 |  118074 |
| Aligned sentence pairs |    1889 |        346 |     677 |

## Dataset Creation

### Curation Rationale

Simple English Wikipedia provides a ready source of training data for text simplification systems: (1) articles in the two languages are linked, making it easier to find parallel data, and (2) the Simple English data is written by users for users rather than by professional translators. However, even though articles are linked, finding a good sentence-level alignment remains challenging. This work aims to solve that problem: by manually annotating a subset of the articles, the authors achieve an F1 score of over 88% on predicting alignment, which allows them to create a good-quality sentence-aligned corpus covering all of Simple English Wikipedia.

### Source Data

#### Initial Data Collection and Normalization

The authors mention that they "extracted 138,095 article pairs from the 2019/09 Wikipedia dump [...] using an improved version of the [WikiExtractor](https://github.com/attardi/wikiextractor) library". The [spaCy](https://spacy.io/) library is used for sentence splitting.

#### Who are the source language producers?

The dataset uses language from Wikipedia; some demographic information about its contributors is provided [here](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F).

### Annotations

#### Annotation process

Sentence alignment labels were obtained for 500 randomly sampled document pairs (10,123 sentence pairs total). The authors pre-selected several alignment candidates from English Wikipedia for each Simple Wikipedia sentence based on various similarity metrics, then asked the crowd-workers to annotate these pairs.

#### Who are the annotators?

No demographic annotation is provided for the crowd workers.
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was created by Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu working at Ohio State University.

### Licensing Information

The dataset does not have a license of its own, but the source Wikipedia data is distributed under a `cc-by-sa-3.0` license.

### Citation Information

You can cite the paper presenting the dataset as:
```
@inproceedings{acl/JiangMLZX20,
  author    = {Chao Jiang and
               Mounica Maddela and
               Wuwei Lan and
               Yang Zhong and
               Wei Xu},
  editor    = {Dan Jurafsky and
               Joyce Chai and
               Natalie Schluter and
               Joel R. Tetreault},
  title     = {Neural {CRF} Model for Sentence Alignment in Text Simplification},
  booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational
               Linguistics, {ACL} 2020, Online, July 5-10, 2020},
  pages     = {7943--7960},
  publisher = {Association for Computational Linguistics},
  year      = {2020},
  url       = {https://www.aclweb.org/anthology/2020.acl-main.709/}
}
```

### Contributions

Thanks to [@yjernite](https://github.com/yjernite), [@mounicam](https://github.com/mounicam) for adding this dataset.