Dataset: wiki_auto
Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: found
License: CC-BY-SA 3.0
Commit b266476 (1 parent: 94f3afd), committed by system (HF staff)

Update files from the datasets library (from 1.2.1)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.1

README.md CHANGED
@@ -4,6 +4,10 @@ annotations_creators:
   - machine-generated
   auto_acl:
   - machine-generated
+  auto_full_no_split:
+  - machine-generated
+  auto_full_with_split:
+  - machine-generated
   manual:
   - crowdsourced
 language_creators:
@@ -61,7 +65,7 @@ WikiAuto provides a set of aligned sentences from English Wikipedia and Simple E
 
 The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config in this version of dataset), then trained a neural CRF system to predict these alignments.
 
-The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).
+The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).
 
 ### Supported Tasks and Leaderboards
 
@@ -127,7 +131,7 @@ The `auto` config shows a pair of an English and corresponding Simple English Wi
  'simple_article_url': 'https://simple.wikipedia.org/wiki?curid=702227'}}
 ```
 
-Finally, the `auto_acl` config was obtained by selecting the aligned pairs of sentences from `auto` to provide a ready-to-go aligned dataset to train a sequence-to-sequence system, so an instance is a single pair of sentences:
+Finally, the `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs were obtained by selecting the aligned pairs of sentences from `auto` to provide a ready-to-go aligned dataset to train a sequence-to-sequence system. While `auto_acl` corresponds to the filtered version of the data used to train the systems in the paper, `auto_full_no_split` and `auto_full_with_split` correspond to the unfiltered versions without and with sentence splits respectively. In the `auto_full_with_split` config, we join the sentences in the simple article mapped to the same sentence in the complex article to capture sentence splitting. Split sentences are separated by a `<SEP>` token. In the `auto_full_no_split` config, we do not join the splits and treat them as separate pairs. An instance is a single pair of sentences:
 ```
 {'normal_sentence': 'In early work , Rutherford discovered the concept of radioactive half-life , the radioactive element radon , and differentiated and named alpha and beta radiation .\n',
  'simple_sentence': 'Rutherford discovered the radioactive half-life , and the three parts of radiation which he named Alpha , Beta , and Gamma .\n'}
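
For quick reference, a minimal usage sketch (not part of the dataset card) showing how the configs described above can be loaded with the `datasets` library; it assumes a release that includes this loading script (1.2.1 or later) and uses the single `full` split recorded in the metadata below:

```python
from datasets import load_dataset

# Filtered pairs used to train the ACL 2020 systems; single "full" split.
acl = load_dataset("wiki_auto", "auto_acl", split="full")

# Unfiltered pairs where simple-side sentence splits are joined with "<SEP>".
with_split = load_dataset("wiki_auto", "auto_full_with_split", split="full")

example = with_split[0]
# Recover the individual simple sentences from a joined split, if any.
simple_parts = [part.strip() for part in example["simple_sentence"].split("<SEP>")]
print(example["normal_sentence"].strip(), simple_parts)
```
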
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"manual": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"alignment_label": {"num_classes": 2, "names": ["notAligned", "aligned"], "names_file": null, "id": null, "_type": "ClassLabel"}, "normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "manual", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 109343271, "num_examples": 373801, "dataset_name": "wiki_auto"}, "dev": {"name": "dev", "num_bytes": 20819779, "num_examples": 73249, "dataset_name": "wiki_auto"}, "test": {"name": "test", "num_bytes": 33379338, "num_examples": 118074, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv": {"num_bytes": 106346588, "checksum": "82fa388de3ded6d303b95fcd11ba70e0b6158d2df1cbf24913bb54503bd32e95"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/dev.tsv": {"num_bytes": 20232621, "checksum": "c56a9d2a739f9da83f90c54e266e1d60dd036cb80c463f118cb55613232e2e41"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/test.tsv": {"num_bytes": 32432523, "checksum": "ab8b818b0eeb7aa7712d244ee0ea16cfd915a896c40f02a34a808b597a5e68a0"}}, "download_size": 159011732, "post_processing_size": null, "dataset_size": 163542388, "size_in_bytes": 322554120}, "auto_acl": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. 
The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "auto_acl", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 121975414, "num_examples": 488332, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.src": {"num_bytes": 70209062, "checksum": "02141edbb735be50c9942f5e0bced4528dc8d844753d46a1f3bdf0b6e550c0e6"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.dst": {"num_bytes": 47859304, "checksum": "d9e2106722e2e29f34d5d9b697c236b38e920724727cefb71f42072dd9fd8807"}}, "download_size": 118068366, "post_processing_size": null, "dataset_size": 121975414, "size_in_bytes": 240043780}, "auto": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. 
Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"example_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal": {"normal_article_id": {"dtype": "int32", "id": null, "_type": "Value"}, "normal_article_title": {"dtype": "string", "id": null, "_type": "Value"}, "normal_article_url": {"dtype": "string", "id": null, "_type": "Value"}, "normal_article_content": {"feature": {"normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "simple": {"simple_article_id": {"dtype": "int32", "id": null, "_type": "Value"}, "simple_article_title": {"dtype": "string", "id": null, "_type": "Value"}, "simple_article_url": {"dtype": "string", "id": null, "_type": "Value"}, "simple_article_content": {"feature": {"simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "paragraph_alignment": {"feature": {"normal_paragraph_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_paragraph_id": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "sentence_alignment": {"feature": {"normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "auto", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"part_1": {"name": "part_1", "num_bytes": 1773240295, "num_examples": 125059, "dataset_name": "wiki_auto"}, "part_2": {"name": "part_2", "num_bytes": 80417651, "num_examples": 13036, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-1-data.json": {"num_bytes": 2067424750, "checksum": "136d8e113a773d3669228a57cae733fca079954daf0b3514505410c66d1a69b6"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-2-data.json": {"num_bytes": 93214171, "checksum": "94b33a11447c121a0ce7293de20fb969c36d8a62b31afc5873a4174ed17a1d4e"}}, "download_size": 2160638921, "post_processing_size": null, "dataset_size": 1853657946, "size_in_bytes": 4014296867}}
 
+ {"manual": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"alignment_label": {"num_classes": 2, "names": ["notAligned", "aligned"], "names_file": null, "id": null, "_type": "ClassLabel"}, "normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "manual", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 109343271, "num_examples": 373801, "dataset_name": "wiki_auto"}, "dev": {"name": "dev", "num_bytes": 20819779, "num_examples": 73249, "dataset_name": "wiki_auto"}, "test": {"name": "test", "num_bytes": 33379338, "num_examples": 118074, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv": {"num_bytes": 106346588, "checksum": "82fa388de3ded6d303b95fcd11ba70e0b6158d2df1cbf24913bb54503bd32e95"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/dev.tsv": {"num_bytes": 20232621, "checksum": "c56a9d2a739f9da83f90c54e266e1d60dd036cb80c463f118cb55613232e2e41"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/test.tsv": {"num_bytes": 32432523, "checksum": "ab8b818b0eeb7aa7712d244ee0ea16cfd915a896c40f02a34a808b597a5e68a0"}}, "download_size": 159011732, "post_processing_size": null, "dataset_size": 163542388, "size_in_bytes": 322554120}, "auto_acl": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. 
The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "auto_acl", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 121975414, "num_examples": 488332, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.src": {"num_bytes": 70209062, "checksum": "02141edbb735be50c9942f5e0bced4528dc8d844753d46a1f3bdf0b6e550c0e6"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.dst": {"num_bytes": 47859304, "checksum": "d9e2106722e2e29f34d5d9b697c236b38e920724727cefb71f42072dd9fd8807"}}, "download_size": 118068366, "post_processing_size": null, "dataset_size": 121975414, "size_in_bytes": 240043780}, "auto": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. 
Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"example_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal": {"normal_article_id": {"dtype": "int32", "id": null, "_type": "Value"}, "normal_article_title": {"dtype": "string", "id": null, "_type": "Value"}, "normal_article_url": {"dtype": "string", "id": null, "_type": "Value"}, "normal_article_content": {"feature": {"normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "simple": {"simple_article_id": {"dtype": "int32", "id": null, "_type": "Value"}, "simple_article_title": {"dtype": "string", "id": null, "_type": "Value"}, "simple_article_url": {"dtype": "string", "id": null, "_type": "Value"}, "simple_article_content": {"feature": {"simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "paragraph_alignment": {"feature": {"normal_paragraph_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_paragraph_id": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "sentence_alignment": {"feature": {"normal_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence_id": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "auto", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"part_1": {"name": "part_1", "num_bytes": 1773240295, "num_examples": 125059, "dataset_name": "wiki_auto"}, "part_2": {"name": "part_2", "num_bytes": 80417651, "num_examples": 13036, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-1-data.json": {"num_bytes": 2067424750, "checksum": "136d8e113a773d3669228a57cae733fca079954daf0b3514505410c66d1a69b6"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-2-data.json": {"num_bytes": 93214171, "checksum": "94b33a11447c121a0ce7293de20fb969c36d8a62b31afc5873a4174ed17a1d4e"}}, "download_size": 2160638921, "post_processing_size": null, "dataset_size": 1853657946, "size_in_bytes": 4014296867}, "auto_full_no_split": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. 
The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "auto_full_no_split", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 146310611, "num_examples": 591994, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_no_split/train.src": {"num_bytes": 86028898, "checksum": "48d820fc1e7bd7121edde04a2aedf62bb9069b9caed01b77b7cb3852cc6b33d0"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_no_split/train.dst": {"num_bytes": 55545281, "checksum": "37b373c5ee8c3f4b844c9896f07fb6bb55a20a3d4a8e1a6b06a651b87a653415"}}, "download_size": 141574179, "post_processing_size": null, "dataset_size": 146310611, "size_in_bytes": 287884790}, "auto_full_with_split": {"description": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia\nas a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments\nbetween sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia\n(this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.\nThe trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to\ncreate a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).\n", "citation": "@inproceedings{acl/JiangMLZX20,\n author = {Chao Jiang and\n Mounica Maddela and\n Wuwei Lan and\n Yang Zhong and\n Wei Xu},\n editor = {Dan Jurafsky and\n Joyce Chai and\n Natalie Schluter and\n Joel R. 
Tetreault},\n title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},\n booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational\n Linguistics, {ACL} 2020, Online, July 5-10, 2020},\n pages = {7943--7960},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.acl-main.709/}\n}\n", "homepage": "https://github.com/chaojiang06/wiki-auto", "license": "CC-BY-SA 3.0", "features": {"normal_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "simple_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_auto", "config_name": "auto_full_with_split", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 124549115, "num_examples": 483801, "dataset_name": "wiki_auto"}}, "download_checksums": {"https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_with_split/train.src": {"num_bytes": 67389841, "checksum": "f70dd6083bf679254dfd6c1e14960ad2b526bc35596d9c87b0c636c0285ce8ac"}, "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_with_split/train.dst": {"num_bytes": 53288474, "checksum": "74529cfc316c8a3ed7720f87e25d0758620e0e838a5747ec4a1bbdccef018f75"}}, "download_size": 120678315, "post_processing_size": null, "dataset_size": 124549115, "size_in_bytes": 245227430}}
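
The `download_checksums` entries above record a sha256 digest and byte size for each source file. As a standalone sketch (not part of the loading script), one of the newly added files could be verified against the recorded digest like this, with the URL and checksum copied from the `auto_full_no_split` entry above:

```python
import hashlib
import urllib.request

# URL and sha256 taken from the auto_full_no_split entry in dataset_infos.json above.
URL = "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_no_split/train.src"
EXPECTED = "48d820fc1e7bd7121edde04a2aedf62bb9069b9caed01b77b7cb3852cc6b33d0"

sha = hashlib.sha256()
with urllib.request.urlopen(URL) as resp:
    for chunk in iter(lambda: resp.read(1 << 20), b""):  # stream in 1 MiB chunks
        sha.update(chunk)

print(sha.hexdigest() == EXPECTED)  # True if the download matches the recorded checksum
```
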
dummy/auto_full_no_split/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b0a237dd5adb45e4407eac11274b0da20fda345ad2f2acb0df8cfa725cc682e
+size 983
dummy/auto_full_with_split/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8600e06b2762e11ffcfaf9600edb585fb3a0fbfaadf7be1405281a408c5a344
+size 1521
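
The two `dummy_data.zip` entries above are stored through Git LFS, so what is committed are three-line pointer files (version, oid, size) rather than the archives themselves. A small illustrative helper (hypothetical, not part of this repository) that reads such a pointer into its fields:

```python
# Parse a Git LFS pointer file (version / oid / size lines) into a dict.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:1b0a237dd5adb45e4407eac11274b0da20fda345ad2f2acb0df8cfa725cc682e\n"
    "size 983\n"
)
info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])  # the digest and byte size of the real zip
```
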
wiki_auto.py CHANGED
@@ -50,7 +50,7 @@ as a resource to train sentence simplification systems. The authors first crowd-
 between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia
 (this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments.
 The trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to
-create a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).
+create a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).
 """
 
 # TODO: Add the licence for the dataset here if you can find it
@@ -69,6 +69,14 @@ _URLs = {
         "normal": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.src",
         "simple": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/ACL2020/train.dst",
     },
+    "auto_full_no_split": {
+        "normal": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_no_split/train.src",
+        "simple": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_no_split/train.dst",
+    },
+    "auto_full_with_split": {
+        "normal": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_with_split/train.src",
+        "simple": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/GEM2021/full_with_split/train.dst",
+    },
     "auto": {
         "part_1": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-1-data.json",
         "part_2": "https://github.com/chaojiang06/wiki-auto/raw/master/wiki-auto/all_data/wiki-auto-part-2-data.json",
@@ -89,7 +97,19 @@ class WikiAuto(datasets.GeneratorBasedBuilder):
             description="A set of 10K Wikipedia sentence pairs aligned by crowd workers.",
         ),
         datasets.BuilderConfig(
-            name="auto_acl", version=VERSION, description="Sentence pairs aligned to train the ACL2020 system."
+            name="auto_acl",
+            version=VERSION,
+            description="Automatically aligned and filtered sentence pairs used to train the ACL2020 system.",
+        ),
+        datasets.BuilderConfig(
+            name="auto_full_no_split",
+            version=VERSION,
+            description="All automatically aligned sentence pairs without sentence splitting.",
+        ),
+        datasets.BuilderConfig(
+            name="auto_full_with_split",
+            version=VERSION,
+            description="All automatically aligned sentence pairs with sentence splitting.",
         ),
         datasets.BuilderConfig(
             name="auto", version=VERSION, description="A large set of automatically aligned sentence pairs."
@@ -109,7 +129,11 @@ class WikiAuto(datasets.GeneratorBasedBuilder):
                     "simple_sentence": datasets.Value("string"),
                 }
             )
-        elif self.config.name == "auto_acl":
+        elif (
+            self.config.name == "auto_acl"
+            or self.config.name == "auto_full_no_split"
+            or self.config.name == "auto_full_with_split"
+        ):
             features = datasets.Features(
                 {
                     "normal_sentence": datasets.Value("string"),
@@ -201,7 +225,11 @@ class WikiAuto(datasets.GeneratorBasedBuilder):
                 values = line.strip().split("\t")
                 assert len(values) == 5, f"Not enough fields in ---- {line} --- {values}"
                 yield id_, dict([(k, val) for k, val in zip(keys, values)])
-        elif self.config.name == "auto_acl":
+        elif (
+            self.config.name == "auto_acl"
+            or self.config.name == "auto_full_no_split"
+            or self.config.name == "auto_full_with_split"
+        ):
             with open(filepaths["normal"], encoding="utf-8") as fi:
                 with open(filepaths["simple"], encoding="utf-8") as fo:
                     for id_, (norm_se, simp_se) in enumerate(zip(fi, fo)):
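
Outside the diff, the shared branch added for `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` boils down to zipping the two parallel files line by line. A minimal standalone sketch of that generator (hypothetical file paths; newlines are kept on the sentences, as in the examples above):

```python
from typing import Dict, Iterator, Tuple

def iter_sentence_pairs(normal_path: str, simple_path: str) -> Iterator[Tuple[int, Dict[str, str]]]:
    """Yield (id, example) pairs from two line-aligned files, as the sentence-pair configs do."""
    with open(normal_path, encoding="utf-8") as fi, open(simple_path, encoding="utf-8") as fo:
        for id_, (norm_se, simp_se) in enumerate(zip(fi, fo)):
            yield id_, {"normal_sentence": norm_se, "simple_sentence": simp_se}

# e.g. iter_sentence_pairs("train.src", "train.dst") over the downloaded files
```

The chained `or` comparisons in the diff could equally be written as `self.config.name in ("auto_acl", "auto_full_no_split", "auto_full_with_split")`; the behaviour is the same.
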