---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
---

# Dataset Card for WikiAuto

## Dataset Structure

### Data Instances

In the `auto_full_with_split` config, split sentences are joined into a single `simple_sentence`, separated by a special token. In the `auto_full_no_split` config, we do not join the splits and treat them as separate pairs. An instance is a single pair of sentences:

```
{'normal_sentence': 'In early work , Rutherford discovered the concept of radioactive half-life , the radioactive element radon , and differentiated and named alpha and beta radiation .\n',
 'simple_sentence': 'Rutherford discovered the radioactive half-life , and the three parts of radiation which he named Alpha , Beta , and Gamma .\n'}
```

### Data Fields

The data has the following fields:
- `normal_sentence`: a sentence from English Wikipedia.
- `normal_sentence_id`: a unique ID for each English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph.
- `simple_sentence`: a sentence from Simple English Wikipedia.
- `simple_sentence_id`: a unique ID for each Simple English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph.
- `alignment_label`: signifies whether a pair of sentences is aligned: labels are `2:partialAligned`, `1:aligned`, and `0:notAligned`
- `paragraph_alignment`: a first step of alignment mapping English and Simple English paragraphs from linked articles
- `sentence_alignment`: the full alignment mapping English and Simple English sentences from linked articles
- `gleu_score`: the sentence-level GLEU (Google-BLEU) score for each pair.

### Data Splits

In `auto`, the `part_2` split corresponds to the articles used in `manual`, and `part_1` has the rest of Wikipedia.

The `manual` config is provided with a `train`/`dev`/`test` split with the following amounts of data:

|                        |   train | validation |   test |
|------------------------|--------:|-----------:|-------:|
| Total sentence pairs   |  373801 |      73249 | 118074 |
| Aligned sentence pairs |    1889 |        346 |    677 |
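As a quick-start reference, the snippet below is a minimal sketch (not part of the original card) of how the configurations and fields described above can be inspected with the `datasets` library. It assumes the dataset is published on the Hugging Face Hub under the `wiki_auto` name with the configuration names used in this card; split names and label encodings may differ slightly, so print the loaded objects to check.

```python
from collections import Counter

from datasets import load_dataset

# Assumption: the dataset is hosted on the Hugging Face Hub as "wiki_auto"
# with the config names described in this card.
manual = load_dataset("wiki_auto", "manual")
print(manual)  # shows the available splits and the number of pairs in each

# Distribution of alignment labels in the train split
# (labels may be stored as class indices or as strings such as "1:aligned",
# depending on the feature definition).
print(Counter(manual["train"]["alignment_label"]))

# The sentence-pair configs expose `normal_sentence` / `simple_sentence` directly.
auto_pairs = load_dataset("wiki_auto", "auto_full_no_split")
split = next(iter(auto_pairs.values()))  # avoid assuming a particular split name
print(split[0]["normal_sentence"])
print(split[0]["simple_sentence"])
```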
## Dataset Creation

### Curation Rationale

Simple English Wikipedia provides a ready source of training data for text simplification systems: (1) articles in different languages are linked, making it easier to find parallel data, and (2) the Simple English data is written by users for users rather than by professional translators. However, even though articles are linked, finding a good sentence-level alignment remains challenging. This work aims to provide a solution for this problem. By manually annotating a subset of the articles, the authors achieve an F1 score of over 88% on predicting alignment, which allows them to create a good-quality sentence-level aligned corpus covering all of Simple English Wikipedia.

### Source Data

#### Initial Data Collection and Normalization

The authors mention that they "extracted 138,095 article pairs from the 2019/09 Wikipedia dump [...] using an improved version of the [WikiExtractor](https://github.com/attardi/wikiextractor) library". The [SpaCy](https://spacy.io/) library is used for sentence splitting.

#### Who are the source language producers?

The dataset uses language from Wikipedia: some demographic information is provided [here](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F).

### Annotations

#### Annotation process

Sentence alignment labels were obtained for 500 randomly sampled document pairs (10,123 sentence pairs total). The authors pre-selected several alignment candidates from English Wikipedia for each Simple English Wikipedia sentence based on various similarity metrics, then asked crowd workers to annotate these pairs.

#### Who are the annotators?

No demographic annotation is provided for the crowd workers.

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was created by Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu at Ohio State University.

### Licensing Information

The dataset is not licensed by itself, but the source Wikipedia data is under a `cc-by-sa-3.0` license.

### Citation Information

You can cite the paper presenting the dataset as:

```
@inproceedings{acl/JiangMLZX20,
  author    = {Chao Jiang and
               Mounica Maddela and
               Wuwei Lan and
               Yang Zhong and
               Wei Xu},
  editor    = {Dan Jurafsky and
               Joyce Chai and
               Natalie Schluter and
               Joel R. Tetreault},
  title     = {Neural {CRF} Model for Sentence Alignment in Text Simplification},
  booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, {ACL} 2020, Online, July 5-10, 2020},
  pages     = {7943--7960},
  publisher = {Association for Computational Linguistics},
  year      = {2020},
  url       = {https://www.aclweb.org/anthology/2020.acl-main.709/}
}
```

### Contributions

Thanks to [@yjernite](https://github.com/yjernite) and [@mounicam](https://github.com/mounicam) for adding this dataset.