Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: found
Commit 94f3afd (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,234 @@
---
annotations_creators:
  auto:
  - machine-generated
  auto_acl:
  - machine-generated
  manual:
  - crowdsourced
language_creators:
- found
languages:
- en
licenses:
- cc-by-sa-3-0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-wikipedia
task_categories:
- conditional-text-generation
task_ids:
- text-simplification
---

# Dataset Card for WikiAuto

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Repository:** [WikiAuto github repository](https://github.com/chaojiang06/wiki-auto)
- **Paper:** [Neural CRF Model for Sentence Alignment in Text Simplification](https://arxiv.org/abs/2005.02324)
- **Point of Contact:** [Chao Jiang](mailto:jiang.1530@osu.edu)

### Dataset Summary

WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems.

The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config in this version of the dataset), then trained a neural CRF system to predict these alignments.

The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).
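
Each configuration can be loaded independently with the `datasets` library. The snippet below is a minimal sketch; config and split names follow the loading script and metadata shipped in this repository:
```python
from datasets import load_dataset

# Crowd-sourced pairs with alignment labels, in train/dev/test splits
manual = load_dataset("wiki_auto", "manual")

# Full automatically aligned article pairs, in part_1/part_2 splits
auto = load_dataset("wiki_auto", "auto")

# Ready-to-use aligned sentence pairs used for the ACL 2020 system, single "full" split
auto_acl = load_dataset("wiki_auto", "auto_acl", split="full")

print(manual["train"][0])
print(auto_acl[0])
```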

### Supported Tasks and Leaderboards

The dataset was created to support a `text-simplification` task. Success in these tasks is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
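
If your installed version of `datasets` ships the `sari` metric, scoring system outputs looks roughly like the sketch below; the toy sentences are made up for illustration and the exact signature may vary by library version:
```python
from datasets import load_metric

# The `sari` metric expects source sentences, system outputs, and a list of
# references per source; availability and signature depend on the library version.
sari = load_metric("sari")

sources = ["About 95 species are currently accepted."]
predictions = ["About 95 species are currently known."]
references = [["About 95 species are currently known.",
               "About 95 species are now accepted."]]

print(sari.compute(sources=sources, predictions=predictions, references=references))
```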

### Languages

While both the input and output of the proposed task are in English (`en`), it should be noted that it is presented as a translation task where the Simple English used on Wikipedia is treated as its own idiom. For a statement of what is intended (but not always observed) to constitute Simple English on this platform, see [Simple English in Wikipedia](https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_English).

## Dataset Structure

### Data Instances

The data looks a little different in each configuration.

A `manual` config instance consists of a sentence from a Simple English Wikipedia article, one from the linked English Wikipedia article, IDs for each of them, and a label indicating whether they are aligned. Sentences on either side can be repeated, so that both members of an aligned pair appear in the same instance. For example:
```
{'alignment_label': 1,
 'normal_sentence': 'The Local Government Act 1985 is an Act of Parliament in the United Kingdom.',
 'normal_sentence_id': '0_66252-1-0-0',
 'simple_sentence': 'The Local Government Act 1985 was an Act of Parliament in the United Kingdom.',
 'simple_sentence_id': '0_66252-0-0-0'}
```
It is followed by:
```
{'alignment_label': 0,
 'normal_sentence': 'Its main effect was to abolish the six county councils of the metropolitan counties that had been set up in 1974, 11 years earlier, by the Local Government Act 1972, along with the Greater London Council that had been established in 1965.',
 'normal_sentence_id': '0_66252-1-0-1',
 'simple_sentence': 'The Local Government Act 1985 was an Act of Parliament in the United Kingdom.',
 'simple_sentence_id': '0_66252-0-0-0'}
```
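
Since most candidate pairs in `manual` are not aligned, a common first step is to keep only the positive pairs. A minimal sketch using standard `datasets` filtering:
```python
from datasets import load_dataset

manual_train = load_dataset("wiki_auto", "manual", split="train")

# `alignment_label` is a ClassLabel with names ["notAligned", "aligned"],
# so aligned pairs carry the integer label 1.
aligned_pairs = manual_train.filter(lambda example: example["alignment_label"] == 1)

print(len(manual_train), len(aligned_pairs))  # 373801 candidate pairs, 1889 aligned
print(aligned_pairs[0]["normal_sentence"])
print(aligned_pairs[0]["simple_sentence"])
```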

An `auto` config instance is a pair of articles, an English Wikipedia article and its corresponding Simple English Wikipedia article, with alignments at the paragraph and sentence level:
```
{'example_id': '0',
 'normal': {'normal_article_content': {'normal_sentence': ["Lata Mondal ( ; born: 16 January 1993, Dhaka) is a Bangladeshi cricketer who plays for the Bangladesh national women's cricket team.",
    'She is a right handed batter.',
    'Mondal was born on January 16, 1993 in Dhaka, Bangladesh.',
    "Mondal made her ODI career against the Ireland women's cricket team on November 26, 2011.",
    "Mondal made her T20I career against the Ireland women's cricket team on August 28, 2012.",
    "In October 2018, she was named in Bangladesh's squad for the 2018 ICC Women's World Twenty20 tournament in the West Indies.",
    "Mondal was a member of the team that won a silver medal in cricket against the China national women's cricket team at the 2010 Asian Games in Guangzhou, China."],
   'normal_sentence_id': ['normal-41918715-0-0',
    'normal-41918715-0-1',
    'normal-41918715-1-0',
    'normal-41918715-2-0',
    'normal-41918715-3-0',
    'normal-41918715-3-1',
    'normal-41918715-4-0']},
  'normal_article_id': 41918715,
  'normal_article_title': 'Lata Mondal',
  'normal_article_url': 'https://en.wikipedia.org/wiki?curid=41918715'},
 'paragraph_alignment': {'normal_paragraph_id': ['normal-41918715-0'],
  'simple_paragraph_id': ['simple-702227-0']},
 'sentence_alignment': {'normal_sentence_id': ['normal-41918715-0-0',
   'normal-41918715-0-1'],
  'simple_sentence_id': ['simple-702227-0-0', 'simple-702227-0-1']},
 'simple': {'simple_article_content': {'simple_sentence': ["Lata Mondal (born: 16 January 1993) is a Bangladeshi cricketer who plays for the Bangladesh national women's cricket team.",
    'She is a right handed bat.'],
   'simple_sentence_id': ['simple-702227-0-0', 'simple-702227-0-1']},
  'simple_article_id': 702227,
  'simple_article_title': 'Lata Mondal',
  'simple_article_url': 'https://simple.wikipedia.org/wiki?curid=702227'}}
```
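
The `sentence_alignment` field is what turns an `auto` instance into sentence pairs. The sketch below maps the aligned IDs back to their texts; it relies only on the schema shown above, where the i-th normal ID is paired with the i-th simple ID:
```python
from datasets import load_dataset

# part_2 is the smaller of the two `auto` splits
auto_part_2 = load_dataset("wiki_auto", "auto", split="part_2")

def aligned_sentence_pairs(example):
    """Return (normal, simple) sentence pairs for one article pair."""
    content = example["normal"]["normal_article_content"]
    normal_by_id = dict(zip(content["normal_sentence_id"], content["normal_sentence"]))
    content = example["simple"]["simple_article_content"]
    simple_by_id = dict(zip(content["simple_sentence_id"], content["simple_sentence"]))
    alignment = example["sentence_alignment"]
    return [
        (normal_by_id[n_id], simple_by_id[s_id])
        for n_id, s_id in zip(alignment["normal_sentence_id"], alignment["simple_sentence_id"])
    ]

for normal_sentence, simple_sentence in aligned_sentence_pairs(auto_part_2[0]):
    print(normal_sentence, "->", simple_sentence)
```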

Finally, the `auto_acl` config was obtained by selecting the aligned sentence pairs from `auto`, providing a ready-to-use aligned dataset for training a sequence-to-sequence system. An instance is a single pair of sentences:
```
{'normal_sentence': 'In early work , Rutherford discovered the concept of radioactive half-life , the radioactive element radon , and differentiated and named alpha and beta radiation .\n',
 'simple_sentence': 'Rutherford discovered the radioactive half-life , and the three parts of radiation which he named Alpha , Beta , and Gamma .\n'}
```
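
Note that `auto_acl` sentences keep the trailing newline from the source files; a small sketch of stripping it with a standard `map` call before training:
```python
from datasets import load_dataset

auto_acl = load_dataset("wiki_auto", "auto_acl", split="full")

# The raw lines end with "\n"; strip whitespace before feeding a model.
auto_acl = auto_acl.map(
    lambda example: {
        "normal_sentence": example["normal_sentence"].strip(),
        "simple_sentence": example["simple_sentence"].strip(),
    }
)
print(auto_acl[0])
```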

### Data Fields

The data has the following fields:
- `normal_sentence`: a sentence from English Wikipedia.
- `normal_sentence_id`: a unique ID for each English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph (see the parsing sketch after this list).
- `simple_sentence`: a sentence from Simple English Wikipedia.
- `simple_sentence_id`: a unique ID for each Simple English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph.
- `alignment_label`: signifies whether a pair of sentences is aligned: labels are `1:aligned` and `0:notAligned`.
- `paragraph_alignment`: a first step of alignment, mapping English and Simple English paragraphs from linked articles.
- `sentence_alignment`: the full alignment, mapping English and Simple English sentences from linked articles.
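
As an illustration of the ID scheme described in the list above, the following hypothetical helper (not part of the dataset) pulls out the paragraph and sentence numbers:
```python
def paragraph_and_sentence(sentence_id):
    """Return (paragraph number, sentence number) from a WikiAuto sentence ID.

    Hypothetical convenience helper: per the field descriptions above, the last two
    dash-separated numbers are the paragraph index within the article and the
    sentence index within the paragraph.
    """
    *_, paragraph_idx, sentence_idx = sentence_id.split("-")
    return int(paragraph_idx), int(sentence_idx)

print(paragraph_and_sentence("normal-41918715-0-1"))  # (0, 1)
print(paragraph_and_sentence("0_66252-1-0-0"))        # (0, 0)
```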

### Data Splits

In `auto`, the `part_2` split corresponds to the articles used in `manual`, and `part_1` contains the rest of the article pairs.

The `manual` config is provided with a `train`/`dev`/`test` split with the following amounts of data:
|                        | Train  | Dev   | Test   |
| ---------------------- | ------ | ----- | ------ |
| Total sentence pairs   | 373801 | 73249 | 118074 |
| Aligned sentence pairs | 1889   | 346   | 677    |

## Dataset Creation

### Curation Rationale

Simple English Wikipedia provides a ready source of training data for text simplification systems, as 1. articles in different languages are linked, making it easier to find parallel data and 2. the Simple English data is written by users for users rather than by professional translators. However, even though articles are aligned, finding a good sentence-level alignment can remain challenging. This work aims to provide a solution to this problem. By manually annotating a subset of the articles, the authors manage to achieve an F1 score of over 88% on predicting alignment, which allows them to create a good-quality sentence-aligned corpus using all of Simple English Wikipedia.

### Source Data

#### Initial Data Collection and Normalization

The authors mention that they "extracted 138,095 article pairs from the 2019/09 Wikipedia dump [...] using an improved version of the [WikiExtractor](https://github.com/attardi/wikiextractor) library". The [SpaCy](https://spacy.io/) library is used for sentence splitting.
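
For readers reproducing the preprocessing on new text, sentence splitting with spaCy looks roughly like this sketch (the paper does not specify the exact spaCy model or settings; `en_core_web_sm` is an assumption):
```python
import spacy

# Assumes the small English pipeline has been installed with:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = (
    "The Local Government Act 1985 is an Act of Parliament in the United Kingdom. "
    "Its main effect was to abolish the six county councils of the metropolitan counties."
)
print([sentence.text for sentence in nlp(text).sents])
```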

#### Who are the source language producers?

The dataset uses language from Wikipedia: some demographic information is provided [here](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F).

### Annotations

#### Annotation process

Sentence alignment labels were obtained for 500 randomly sampled document pairs (10,123 sentence pairs total). The authors pre-selected several alignment candidates from English Wikipedia for each Simple English Wikipedia sentence based on various similarity metrics, then asked the crowd-workers to annotate these pairs.

#### Who are the annotators?

No demographic annotation is provided for the crowd workers.
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was created by Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu at Ohio State University.

### Licensing Information

The dataset does not carry a license of its own, but the source Wikipedia data is under a `cc-by-sa-3.0` license.

### Citation Information

You can cite the paper presenting the dataset as:
```
@inproceedings{acl/JiangMLZX20,
  author    = {Chao Jiang and
               Mounica Maddela and
               Wuwei Lan and
               Yang Zhong and
               Wei Xu},
  editor    = {Dan Jurafsky and
               Joyce Chai and
               Natalie Schluter and
               Joel R. Tetreault},
  title     = {Neural {CRF} Model for Sentence Alignment in Text Simplification},
  booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational
               Linguistics, {ACL} 2020, Online, July 5-10, 2020},
  pages     = {7943--7960},
  publisher = {Association for Computational Linguistics},
  year      = {2020},
  url       = {https://www.aclweb.org/anthology/2020.acl-main.709/}
}
```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"manual": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"alignment_label": {"num_classes": 2, "names": ["notAligned", "aligned"], "names_file": null, "id": null, "_type": "ClassLabel"}, "normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "manual", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 109343271, "num_examples": 373801, "dataset_name": "wiki_auto"}, "dev": {"name": "dev", "num_bytes": 20819779, "num_examples": 73249, "dataset_name": "wiki_auto"}, "test": {"name": "test", "num_bytes": 33379338, "num_examples": 118074, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv": {"num_bytes": 106346588, "checksum": "82fa388de3ded6d303b95fcd11ba70e0b6158d2df1cbf24913bb54503bd32e95"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/dev.tsv": {"num_bytes": 20232621, "checksum": "c56a9d2a739f9da83f90c54e266e1d60dd036cb80c463f118cb55613232e2e41"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/test.tsv": {"num_bytes": 32432523, "checksum": "ab8b818b0eeb7aa7712d244ee0ea16cfd915a896c40f02a34a808b597a5e68a0"}}, "download_size": 159011732, "post_processing_size": null, "dataset_size": 163542388, "size_in_bytes": 322554120}, "auto_acl": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. 
The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "auto_acl", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 121975414, "num_examples": 488332, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.src": {"num_bytes": 70209062, "checksum": "02141edbb735be50c9942f5e0bced4528dc8d844753d46a1f3bdf0b6e550c0e6"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.dst": {"num_bytes": 47859304, "checksum": "d9e2106722e2e29f34d5d9b697c236b38e920724727cefb71f42072dd9fd8807"}}, "download_size": 118068366, "post_processing_size": null, "dataset_size": 121975414, "size_in_bytes": 240043780}, "auto": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. 
Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"example_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal": {"normal_article_id": {"dtype": "int32", "id": null, "_type": "Value"}, "normal_article_title": {"dtype": "string", "id": null, "_type": "Value"}, "normal_article_url": {"dtype": "string", "id": null, "_type": "Value"}, "normal_article_content": {"feature": {"normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "simple": {"simple_article_id": {"dtype": "int32", "id": null, "_type": "Value"}, "simple_article_title": {"dtype": "string", "id": null, "_type": "Value"}, "simple_article_url": {"dtype": "string", "id": null, "_type": "Value"}, "simple_article_content": {"feature": {"simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "paragraph_alignment": {"feature": {"normal_paragraph_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_paragraph_id": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "sentence_alignment": {"feature": {"normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "auto", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"part_1": {"name": "part_1", "num_bytes": 1773240295, "num_examples": 125059, "dataset_name": "wiki_auto"}, "part_2": {"name": "part_2", "num_bytes": 80417651, "num_examples": 13036, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-1-data.json": {"num_bytes": 2067424750, "checksum": "136d8e113a773d3669228a57cae733fca079954daf0b3514505410c66d1a69b6"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-2-data.json": {"num_bytes": 93214171, "checksum": "94b33a11447c121a0ce7293de20fb969c36d8a62b31afc5873a4174ed17a1d4e"}}, "download_size": 2160638921, "post_processing_size": null, "dataset_size": 1853657946, "size_in_bytes": 4014296867}}
dummy/auto/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:454128c017ed0c0d87fac2a0cb79432fa2e73cb426bb29b3ca1119424fdc6267
size 4484
dummy/auto_acl/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:23f694cf207ec93895006089f786a9796327079310c0a4256871805149d478f4
size 1513
dummy/manual/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15067eb8b69bc1ef83aea73549346e45d92f56d41b77b08c06f8b404f0f2de14
size 1801
wiki_auto.py ADDED
@@ -0,0 +1,260 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""WikiAuto dataset for Text Simplification"""

from __future__ import absolute_import, division, print_function

import json

import datasets


_CITATION = """\
@inproceedings{acl/JiangMLZX20,
  author    = {Chao Jiang and
               Mounica Maddela and
               Wuwei Lan and
               Yang Zhong and
               Wei Xu},
  editor    = {Dan Jurafsky and
               Joyce Chai and
               Natalie Schluter and
               Joel R. Tetreault},
  title     = {Neural {CRF} Model for Sentence Alignment in Text Simplification},
  booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational
               Linguistics, {ACL} 2020, Online, July 5-10, 2020},
  pages     = {7943--7960},
  publisher = {Association for Computational Linguistics},
  year      = {2020},
  url       = {https://www.aclweb.org/anthology/2020.acl-main.709/}
}
"""

# TODO: Add description of the dataset here
# You can copy an official description
_DESCRIPTION = """\
WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia
as a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments
between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia
(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.
The trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to
create a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).
"""

# TODO: Add the licence for the dataset here if you can find it
_LICENSE = "CC-BY-SA 3.0"

# TODO: Add link to the official dataset URLs here
# The HuggingFace dataset library don't host the datasets but only point to the original files
# This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
_URLs = {
    "manual": {
        "train": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv",
        "dev": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/dev.tsv",
        "test": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/test.tsv",
    },
    "auto_acl": {
        "normal": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.src",
        "simple": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.dst",
    },
    "auto": {
        "part_1": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-1-data.json",
        "part_2": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-2-data.json",
    },
}


# TODO: Name of the dataset usually match the script name with CamelCase instead of snake_case
class WikiAuto(datasets.GeneratorBasedBuilder):
    """WikiAuto dataset for sentence simplification"""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="manual",
            version=VERSION,
            description="A set of 10K Wikipedia sentence pairs aligned by crowd workers.",
        ),
        datasets.BuilderConfig(
            name="auto_acl", version=VERSION, description="Sentence pairs aligned to train the ACL2020 system."
        ),
        datasets.BuilderConfig(
            name="auto", version=VERSION, description="A large set of automatically aligned sentence pairs."
        ),
    ]

    DEFAULT_CONFIG_NAME = "auto"

    def _info(self):
        if self.config.name == "manual":  # This is the name of the configuration selected in BUILDER_CONFIGS above
            features = datasets.Features(
                {
                    "alignment_label": datasets.ClassLabel(names=["notAligned", "aligned"]),
                    "normal_sentence_id": datasets.Value("string"),
                    "simple_sentence_id": datasets.Value("string"),
                    "normal_sentence": datasets.Value("string"),
                    "simple_sentence": datasets.Value("string"),
                }
            )
        elif self.config.name == "auto_acl":
            features = datasets.Features(
                {
                    "normal_sentence": datasets.Value("string"),
                    "simple_sentence": datasets.Value("string"),
                }
            )
        else:
            features = datasets.Features(
                {
                    "example_id": datasets.Value("string"),
                    "normal": {
                        "normal_article_id": datasets.Value("int32"),
                        "normal_article_title": datasets.Value("string"),
                        "normal_article_url": datasets.Value("string"),
                        "normal_article_content": datasets.Sequence(
                            {
                                "normal_sentence_id": datasets.Value("string"),
                                "normal_sentence": datasets.Value("string"),
                            }
                        ),
                    },
                    "simple": {
                        "simple_article_id": datasets.Value("int32"),
                        "simple_article_title": datasets.Value("string"),
                        "simple_article_url": datasets.Value("string"),
                        "simple_article_content": datasets.Sequence(
                            {
                                "simple_sentence_id": datasets.Value("string"),
                                "simple_sentence": datasets.Value("string"),
                            }
                        ),
                    },
                    "paragraph_alignment": datasets.Sequence(
                        {
                            "normal_paragraph_id": datasets.Value("string"),
                            "simple_paragraph_id": datasets.Value("string"),
                        }
                    ),
                    "sentence_alignment": datasets.Sequence(
                        {
                            "normal_sentence_id": datasets.Value("string"),
                            "simple_sentence_id": datasets.Value("string"),
                        }
                    ),
                }
            )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage="https://github.com/chaojiang06/wiki-auto",
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        my_urls = _URLs[self.config.name]
        data_dir = dl_manager.download_and_extract(my_urls)
        if self.config.name in ["manual", "auto"]:
            return [
                datasets.SplitGenerator(
                    name=spl,
                    gen_kwargs={
                        "filepaths": data_dir,
                        "split": spl,
                    },
                )
                for spl in data_dir
            ]
        else:
            return [
                datasets.SplitGenerator(
                    name="full",
                    gen_kwargs={"filepaths": data_dir, "split": "full"},
                )
            ]

    def _generate_examples(self, filepaths, split):
        if self.config.name == "manual":
            keys = [
                "alignment_label",
                "simple_sentence_id",
                "normal_sentence_id",
                "simple_sentence",
                "normal_sentence",
            ]
            with open(filepaths[split], encoding="utf-8") as f:
                for id_, line in enumerate(f):
                    values = line.strip().split("\t")
                    assert len(values) == 5, f"Not enough fields in ---- {line} --- {values}"
                    yield id_, dict([(k, val) for k, val in zip(keys, values)])
        elif self.config.name == "auto_acl":
            with open(filepaths["normal"], encoding="utf-8") as fi:
                with open(filepaths["simple"], encoding="utf-8") as fo:
                    for id_, (norm_se, simp_se) in enumerate(zip(fi, fo)):
                        yield id_, {
                            "normal_sentence": norm_se,
                            "simple_sentence": simp_se,
                        }
        else:
            dataset_dict = json.load(open(filepaths[split], encoding="utf-8"))
            for id_, (eid, example_dict) in enumerate(dataset_dict.items()):
                res = {
                    "example_id": eid,
                    "normal": {
                        "normal_article_id": example_dict["normal"]["id"],
                        "normal_article_title": example_dict["normal"]["title"],
                        "normal_article_url": example_dict["normal"]["url"],
                        "normal_article_content": {
                            "normal_sentence_id": [
                                sen_id for sen_id, sen_txt in example_dict["normal"]["content"].items()
                            ],
                            "normal_sentence": [
                                sen_txt for sen_id, sen_txt in example_dict["normal"]["content"].items()
                            ],
                        },
                    },
                    "simple": {
                        "simple_article_id": example_dict["simple"]["id"],
                        "simple_article_title": example_dict["simple"]["title"],
                        "simple_article_url": example_dict["simple"]["url"],
                        "simple_article_content": {
                            "simple_sentence_id": [
                                sen_id for sen_id, sen_txt in example_dict["simple"]["content"].items()
                            ],
                            "simple_sentence": [
                                sen_txt for sen_id, sen_txt in example_dict["simple"]["content"].items()
                            ],
                        },
                    },
                    "paragraph_alignment": {
                        "normal_paragraph_id": [
                            norm_id for simp_id, norm_id in example_dict.get("paragraph_alignment", [])
                        ],
                        "simple_paragraph_id": [
                            simp_id for simp_id, norm_id in example_dict.get("paragraph_alignment", [])
                        ],
                    },
                    "sentence_alignment": {
                        "normal_sentence_id": [
                            norm_id for simp_id, norm_id in example_dict.get("sentence_alignment", [])
                        ],
                        "simple_sentence_id": [
                            simp_id for simp_id, norm_id in example_dict.get("sentence_alignment", [])
                        ],
                    },
                }
                yield id_, res