Datasets: GEM
Languages: English
Annotations Creators: crowd-sourced
Source Datasets: original
Sebastian Gehrmann committed on
Commit a517534
1 Parent(s): 5ca8de6
Files changed (1)
  1. wiki_auto_asset_turk.json +3 -3
wiki_auto_asset_turk.json CHANGED
@@ -7,7 +7,7 @@
   "website": "n/a",
   "data-url": "[Wiki-Auto repository](https://github.com/chaojiang06/wiki-auto), [ASSET repository](https://github.com/facebookresearch/asset), [TURKCorpus](https://github.com/cocoxu/simplification)",
   "paper-url": "[WikiAuto](https://aclanthology.org/2020.acl-main.709/), [ASSET](https://aclanthology.org/2020.acl-main.424/), [TURK](https://aclanthology.org/Q16-1029/)",
- "paper-bibtext": "WikiAuto: \n```\n@inproceedings{jiang-etal-2020-neural,\n    title = \"Neural {CRF} Model for Sentence Alignment in Text Simplification\",\n    author = \"Jiang, Chao  and\n      Maddela, Mounica  and\n      Lan, Wuwei  and\n      Zhong, Yang  and\n      Xu, Wei\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.709\",\n    doi = \"10.18653/v1/2020.acl-main.709\",\n    pages = \"7943--7960\",\n}\n```\n\nASSET:\n```\n@inproceedings{alva-manchego-etal-2020-asset,\n    title = \"{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations\",\n    author = \"Alva-Manchego, Fernando  and\n      Martin, Louis  and\n      Bordes, Antoine  and\n      Scarton, Carolina  and\n      Sagot, Beno{\\^\\i}t  and\n      Specia, Lucia\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.424\",\n    pages = \"4668--4679\",\n}\n```\n\nTURK:\n```\n @article{Xu-EtAl:2016:TACL,\n author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},\n title = {Optimizing Statistical Machine Translation for Text Simplification},\n journal = {Transactions of the Association for Computational Linguistics},\n volume = {4},\n year = {2016},\n url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf},\n pages = {401--415}\n }\n ```",
+ "paper-bibtext": "WikiAuto: \n```\n@inproceedings{jiang-etal-2020-neural,\n    title = \"Neural {CRF} Model for Sentence Alignment in Text Simplification\",\n    author = \"Jiang, Chao  and\n      Maddela, Mounica  and\n      Lan, Wuwei  and\n      Zhong, Yang  and\n      Xu, Wei\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.709\",\n    doi = \"10.18653/v1/2020.acl-main.709\",\n    pages = \"7943--7960\",\n}\n```\n\nASSET:\n```\n@inproceedings{alva-manchego-etal-2020-asset,\n    title = \"{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations\",\n    author = \"Alva-Manchego, Fernando  and\n      Martin, Louis  and\n      Bordes, Antoine  and\n      Scarton, Carolina  and\n      Sagot, Beno{\\^\\i}t  and\n      Specia, Lucia\",\n    booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.acl-main.424\",\n    pages = \"4668--4679\",\n}\n```\n\nTURK:\n```\n@article{Xu-EtAl:2016:TACL,\n author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},\n title = {Optimizing Statistical Machine Translation for Text Simplification},\n journal = {Transactions of the Association for Computational Linguistics},\n volume = {4},\n year = {2016},\n url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf},\n pages = {401--415}\n }\n ```",
   "contact-name": "WikiAuto: Chao Jiang; ASSET: Fernando Alva-Manchego and Louis Martin; TURK: Wei Xu",
   "contact-email": "jiang.1530@osu.edu, f.alva@sheffield.ac.uk, louismartincs@gmail.com, wei.xu@cc.gatech.edu"
  },
@@ -39,13 +39,13 @@
   "data-fields": "- `source`: A source sentence from one of the datasets\n- `target`: A single simplified sentence corresponding to `source`\n- `references`: In the case of ASSET/TURK, references is a list of strings corresponding to the different references. ",
   "structure-description": "The underlying datasets have extensive secondary annotations that can be used in conjunction with the GEM version. We omit those annotations to simplify the format into one that can be used by seq2seq models. ",
   "structure-labels": "n/a",
- "structure-example": "```\n{\n 'source': 'In early work, Rutherford discovered the concept of radioactive half-life , the radioactive element radon, and differentiated and named alpha and beta radiation .',\n 'target': 'Rutherford discovered the radioactive half-life, and the three parts of radiation which he named Alpha, Beta, and Gamma.'\n}",
+ "structure-example": "```\n{\n 'source': 'In early work, Rutherford discovered the concept of radioactive half-life , the radioactive element radon, and differentiated and named alpha and beta radiation .',\n 'target': 'Rutherford discovered the radioactive half-life, and the three parts of radiation which he named Alpha, Beta, and Gamma.'\n}\n```",
   "structure-splits": "In WikiAuto, which is used as training and validation set, the following splits are provided: \n\n| | Tain | Dev | Test |\n| ----- | ------ | ----- | ---- |\n| Total sentence pairs | 373801 | 73249 | 118074 |\n| Aligned sentence pairs | 1889 | 346 | 677 |\n\nASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training. For GEM, [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) will be used for training the model.\n\nEach input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.\n\n| | Dev | Test | Total |\n| ----- | ------ | ---- | ----- |\n| Input Sentences | 2000 | 359 | 2359 |\n| Reference Simplifications | 20000 | 3590 | 23590 |\n\nThe test and validation sets are the same as those of [TurkCorpus](https://github.com/cocoxu/simplification/). The split was random.\n\nThere are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the referece sentences do not involve sentence splitting.\n\nTURKCorpus does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) or [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) (Jiang et. al 2020) for training.\n\nEach input sentence has 8 associated reference simplified sentences. 2,359 input sentences are randomly split into 2,000 validation and 359 test sentences.\n\n| | Dev | Test | Total |\n| ----- | ------ | ---- | ----- |\n| Input Sentences | 2000 | 359 | 2359 |\n| Reference Simplifications | 16000 | 2872 | 18872 |\n\n\nThere are 21.29 tokens per reference on average.\n\n",
   "structure-splits-criteria": "In our setup, we use WikiAuto as training/validation corpus and ASSET and TURK as test corpora. ASSET and TURK have the same inputs but differ in their reference style. Researchers can thus conduct targeted evaluations based on the strategies that a model should learn. ",
   "structure-outlier": "n/a"
  },
  "what": {
- "dataset": "WikiAuto is an English simplification dataset that we paired with ASSET and TURK, two very high-quality evaluation datasets as test sets. The input is an English sentence taken from Wikipedia and the target a simplified sentence. ASSET and TURK contain the same test examples but have references that are simplified in different ways (splitting sentences vs. rewriting and splitting). "
+ "dataset": "WikiAuto is an English simplification dataset that we paired with ASSET and TURK, two very high-quality evaluation datasets, as test sets. The input is an English sentence taken from Wikipedia and the target a simplified sentence. ASSET and TURK contain the same test examples but have references that are simplified in different ways (splitting sentences vs. rewriting and splitting)."
  },
  "curation": {
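The `data-fields` and `structure-example` entries in the diff above describe a flat seq2seq record format. The sketch below illustrates that format in Python and sanity-checks the arithmetic in the `structure-splits` tables. The two reference strings and the `avg_tokens_per_reference` helper are hypothetical illustrations, not part of the dataset or of any GEM API; the `source`/`target` sentences are copied from the card's own `structure-example`.

```python
# Sketch of one GEM-style record: `source`/`target` as in the WikiAuto
# training data, plus a `references` list as in the ASSET/TURK test sets.
example = {
    "source": (
        "In early work, Rutherford discovered the concept of radioactive "
        "half-life , the radioactive element radon, and differentiated and "
        "named alpha and beta radiation ."
    ),
    "target": (
        "Rutherford discovered the radioactive half-life, and the three "
        "parts of radiation which he named Alpha, Beta, and Gamma."
    ),
    # Hypothetical references for illustration only; in the real data each
    # ASSET input carries 10 references and each TURK input carries 8.
    "references": [
        "Rutherford discovered radioactive half-life.",
        "Rutherford named alpha and beta radiation.",
    ],
}


def avg_tokens_per_reference(records):
    """Mean whitespace-token count over all references, the statistic the
    card reports (19.04 for ASSET, 21.29 for TurkCorpus)."""
    counts = [len(ref.split()) for rec in records for ref in rec["references"]]
    return sum(counts) / len(counts)


# The split tables are consistent with references = inputs x refs-per-input:
assert 2000 * 10 == 20000 and 359 * 10 == 3590  # ASSET dev / test
assert 2000 * 8 == 16000 and 359 * 8 == 2872    # TURK dev / test
```

Note that only ASSET/TURK records carry `references`; for WikiAuto training pairs a helper like this would operate on the single `target` instead.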