Datasets: GEM
Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas
albertvillanova (HF staff) committed
Commit 46628a9
1 Parent(s): dad97a3
README.md CHANGED
@@ -80,8 +80,44 @@ dataset_info:
  - name: challenge_test_turk_nopunc
  num_bytes: 414388
  num_examples: 359
- download_size: 127498860
+ download_size: 93810015
  dataset_size: 175006860
+ configs:
+ - config_name: wiki_auto_asset_turk
+ data_files:
+ - split: train
+ path: wiki_auto_asset_turk/train-*
+ - split: validation
+ path: wiki_auto_asset_turk/validation-*
+ - split: test_asset
+ path: wiki_auto_asset_turk/test_asset-*
+ - split: test_turk
+ path: wiki_auto_asset_turk/test_turk-*
+ - split: test_contract
+ path: wiki_auto_asset_turk/test_contract-*
+ - split: test_wiki
+ path: wiki_auto_asset_turk/test_wiki-*
+ - split: challenge_train_sample
+ path: wiki_auto_asset_turk/challenge_train_sample-*
+ - split: challenge_validation_sample
+ path: wiki_auto_asset_turk/challenge_validation_sample-*
+ - split: challenge_test_asset_backtranslation
+ path: wiki_auto_asset_turk/challenge_test_asset_backtranslation-*
+ - split: challenge_test_asset_bfp02
+ path: wiki_auto_asset_turk/challenge_test_asset_bfp02-*
+ - split: challenge_test_asset_bfp05
+ path: wiki_auto_asset_turk/challenge_test_asset_bfp05-*
+ - split: challenge_test_asset_nopunc
+ path: wiki_auto_asset_turk/challenge_test_asset_nopunc-*
+ - split: challenge_test_turk_backtranslation
+ path: wiki_auto_asset_turk/challenge_test_turk_backtranslation-*
+ - split: challenge_test_turk_bfp02
+ path: wiki_auto_asset_turk/challenge_test_turk_bfp02-*
+ - split: challenge_test_turk_bfp05
+ path: wiki_auto_asset_turk/challenge_test_turk_bfp05-*
+ - split: challenge_test_turk_nopunc
+ path: wiki_auto_asset_turk/challenge_test_turk_nopunc-*
+ default: true
  ---
 
  # Dataset Card for GEM/wiki_auto_asset_turk
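The `data_files` entries added above map each split to a glob over parquet shard names of the form `<split>-<shard>-of-<count>.parquet` (the renamed file at the bottom of this commit follows that convention). As a minimal illustration of how such a pattern selects shards — plain `fnmatch` globbing here, not the Hub's exact resolver:

```python
from fnmatch import fnmatch

# Shard names follow "<config>/<split>-<shard index>-of-<shard count>.parquet".
pattern = "wiki_auto_asset_turk/test_asset-*"
shards = [
    "wiki_auto_asset_turk/test_asset-00000-of-00001.parquet",
    "wiki_auto_asset_turk/test_turk-00000-of-00001.parquet",
]

# Only shards whose names start with the split prefix are selected.
matched = [s for s in shards if fnmatch(s, pattern)]
print(matched)  # only the test_asset shard matches
```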
benchmarks/README.txt DELETED
@@ -1,14 +0,0 @@
- # IBM Split and Rephrase 2019
-
- ## Benchmarks
- This folder includes the two sources for the Split and Rephrase dataset.
-
- `contract-benchmark.tsv`: Contract Benchmark dataset. Contains hundreds of rows of sample text from legal contracts.
-
- `wiki-benchmark.tsv`: Wikipedia Benchmark dataset. Contains hundreds of rows of sample text from Wikipedia.
-
- The `.tsv` files have two columns.
-
- `complex`: The complex sentence given to the crowdworkers.
-
- `simple`: The Split and Rephrase rewrites written by the crowdworkers.
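The deleted README describes a two-column TSV layout (`complex`, `simple`). A minimal sketch of reading such a file with the standard library; since the TSVs are removed in this commit, an in-memory sample row (invented for illustration) stands in for the file:

```python
import csv
import io

# Two-column TSV as described in the deleted README (sample row is invented).
sample = (
    "complex\tsimple\n"
    "He settled in London, devoting himself to teaching.\t"
    "He lived in London. He was a teacher.\n"
)

# DictReader uses the header row as keys, so each row exposes
# the "complex" source and the "simple" rewrite by name.
reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
rows = list(reader)
print(rows[0]["simple"])  # the crowdworkers' Split-and-Rephrase rewrite
```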
benchmarks/contract-benchmark.tsv DELETED
The diff for this file is too large to render. See raw diff
 
benchmarks/wiki-benchmark.tsv DELETED
The diff for this file is too large to render. See raw diff
 
wiki_auto_asset_turk.json DELETED
@@ -1,184 +0,0 @@
- {
-   "overview": {
-     "where": {
-       "has-leaderboard": "no",
-       "leaderboard-url": "N/A",
-       "leaderboard-description": "N/A",
-       "website": "n/a",
-       "data-url": "[Wiki-Auto repository](https://github.com/chaojiang06/wiki-auto), [ASSET repository](https://github.com/facebookresearch/asset), [TURKCorpus](https://github.com/cocoxu/simplification)",
-       "paper-url": "[WikiAuto](https://aclanthology.org/2020.acl-main.709/), [ASSET](https://aclanthology.org/2020.acl-main.424/), [TURK](https://aclanthology.org/Q16-1029/)",
-       "paper-bibtext": "WikiAuto: \n```\n@inproceedings{jiang-etal-2020-neural,\n title = \"Neural {CRF} Model for Sentence Alignment in Text Simplification\",\n author = \"Jiang, Chao and\n Maddela, Mounica and\n Lan, Wuwei and\n Zhong, Yang and\n Xu, Wei\",\n booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.acl-main.709\",\n doi = \"10.18653/v1/2020.acl-main.709\",\n pages = \"7943--7960\",\n}\n```\n\nASSET:\n```\n@inproceedings{alva-manchego-etal-2020-asset,\n title = \"{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations\",\n author = \"Alva-Manchego, Fernando and\n Martin, Louis and\n Bordes, Antoine and\n Scarton, Carolina and\n Sagot, Beno{\\^\\i}t and\n Specia, Lucia\",\n booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.acl-main.424\",\n pages = \"4668--4679\",\n}\n```\n\nTURK:\n```\n@article{Xu-EtAl:2016:TACL,\n author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},\n title = {Optimizing Statistical Machine Translation for Text Simplification},\n journal = {Transactions of the Association for Computational Linguistics},\n volume = {4},\n year = {2016},\n url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf},\n pages = {401--415}\n }\n ```",
-       "contact-name": "WikiAuto: Chao Jiang; ASSET: Fernando Alva-Manchego and Louis Martin; TURK: Wei Xu",
-       "contact-email": "jiang.1530@osu.edu, f.alva@sheffield.ac.uk, louismartincs@gmail.com, wei.xu@cc.gatech.edu"
-     },
-     "languages": {
-       "is-multilingual": "no",
-       "license": "other: Other license",
-       "task-other": "N/A",
-       "language-names": [
-         "English"
-       ],
-       "language-dialects": "n/a",
-       "language-speakers": "Wiki-Auto contains English text only (BCP-47: `en`). It is presented as a translation task where Wikipedia Simple English is treated as its own idiom. For a statement of what is intended (but not always observed) to constitute Simple English on this platform, see [Simple English in Wikipedia](https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_English).\nBoth ASSET and TURK use crowdsourcing to change references, and their language is thus a combination of the WikiAuto data and the language of the demographic on Mechanical Turk.",
-       "intended-use": "WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems.\n\nThe authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config in this version of the dataset), then trained a neural CRF system to predict these alignments.\n\nThe trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).\n\n[ASSET](https://github.com/facebookresearch/asset) [(Alva-Manchego et al., 2020)](https://www.aclweb.org/anthology/2020.acl-main.424.pdf) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from [TurkCorpus](https://github.com/cocoxu/simplification/) [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence splitting in [HSplit](https://www.aclweb.org/anthology/D18-1081.pdf)), the simplifications in ASSET encompass a variety of rewriting transformations.\n\nTURKCorpus is a high quality simplification dataset where each source (not simple) sentence is associated with 8 human-written simplifications that focus on lexical paraphrasing. It is one of the two evaluation datasets for the text simplification task in GEM. It acts as the validation and test set for paraphrasing-based simplification that does not involve sentence splitting and deletion.",
-       "license-other": "WikiAuto: `CC BY-NC 3.0`, ASSET: `CC BY-NC 4.0`, TURK: `GNU General Public License v3.0`",
-       "task": "Simplification",
-       "communicative": "The goal is to communicate the main ideas of the source sentence in a way that is easier to understand by non-native speakers of English.\n"
-     },
-     "credit": {
-       "organization-type": [
-         "academic",
-         "industry"
-       ],
-       "organization-names": "Ohio State University, University of Sheffield, Inria, Facebook AI Research, Imperial College London, University of Pennsylvania, Johns Hopkins University",
-       "creators": "WikiAuto: Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wei Xu; ASSET: Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Beno\u00eet Sagot, and Lucia Specia; TURK: Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch",
-       "funding": "WikiAuto: NSF, ODNI, IARPA, Figure Eight AI, and Criteo. ASSET: PRAIRIE Institute, ANR. TURK: NSF",
-       "gem-added-by": "GEM v1 had separate data cards for WikiAuto, ASSET, and TURK. They were contributed by Dhruv Kumar and Mounica Maddela. The initial data loader was written by Yacine Jernite. Sebastian Gehrmann merged and extended the data cards and migrated the loader to the v2 infrastructure."
-     },
-     "structure": {
-       "data-fields": "- `source`: A source sentence from one of the datasets\n- `target`: A single simplified sentence corresponding to `source`\n- `references`: In the case of ASSET/TURK, `references` is a list of strings corresponding to the different references.",
-       "structure-description": "The underlying datasets have extensive secondary annotations that can be used in conjunction with the GEM version. We omit those annotations to simplify the format into one that can be used by seq2seq models.",
-       "structure-labels": "n/a",
-       "structure-example": "```\n{\n 'source': 'In early work, Rutherford discovered the concept of radioactive half-life , the radioactive element radon, and differentiated and named alpha and beta radiation .',\n 'target': 'Rutherford discovered the radioactive half-life, and the three parts of radiation which he named Alpha, Beta, and Gamma.'\n}\n```",
-       "structure-splits": "In WikiAuto, which is used as training and validation set, the following splits are provided: \n\n| | Train | Dev | Test |\n| ----- | ------ | ----- | ---- |\n| Total sentence pairs | 373801 | 73249 | 118074 |\n| Aligned sentence pairs | 1889 | 346 | 677 |\n\nASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training. For GEM, [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) will be used for training the model.\n\nEach input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.\n\n| | Dev | Test | Total |\n| ----- | ------ | ---- | ----- |\n| Input Sentences | 2000 | 359 | 2359 |\n| Reference Simplifications | 20000 | 3590 | 23590 |\n\nThe test and validation sets are the same as those of [TurkCorpus](https://github.com/cocoxu/simplification/). The split was random.\n\nThere are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting.\n\nTURKCorpus does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) or [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) (Jiang et al., 2020) for training.\n\nEach input sentence has 8 associated reference simplified sentences. 2,359 input sentences are randomly split into 2,000 validation and 359 test sentences.\n\n| | Dev | Test | Total |\n| ----- | ------ | ---- | ----- |\n| Input Sentences | 2000 | 359 | 2359 |\n| Reference Simplifications | 16000 | 2872 | 18872 |\n\n\nThere are 21.29 tokens per reference on average.\n\n",
-       "structure-splits-criteria": "In our setup, we use WikiAuto as training/validation corpus and ASSET and TURK as test corpora. ASSET and TURK have the same inputs but differ in their reference style. Researchers can thus conduct targeted evaluations based on the strategies that a model should learn.",
-       "structure-outlier": "n/a"
-     },
-     "what": {
-       "dataset": "WikiAuto is an English simplification dataset that we paired with ASSET and TURK, two very high-quality evaluation datasets, as test sets. The input is an English sentence taken from Wikipedia and the target a simplified sentence. ASSET and TURK contain the same test examples but have references that are simplified in different ways (splitting sentences vs. rewriting and splitting)."
-     }
-   },
-   "curation": {
-     "original": {
-       "is-aggregated": "yes",
-       "aggregated-sources": "Wikipedia",
-       "rationale": "Wiki-Auto provides a new version of the Wikipedia corpus that is larger, contains 75% fewer defective pairs and has more complex rewrites than the previous WIKILARGE dataset.\n\n\nASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the [TurkCorpus](https://github.com/cocoxu/simplification/) dataset from [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). The 2,359 input sentences of TurkCorpus are a sample of \"standard\" (not simple) sentences from the [Parallel Wikipedia Simplification (PWKP)](https://www.informatik.tu-darmstadt.de/ukp/research_6/data/sentence_simplification/simple_complex_sentence_pairs/index.en.jsp) dataset [(Zhu et al., 2010)](https://www.aclweb.org/anthology/C10-1152.pdf), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). No further information is provided on the sampling strategy.\n\nThe TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit [(Sulem et al., 2018)](https://www.aclweb.org/anthology/D18-1081.pdf), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.\n\nAn example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:\n\n> **Original:** He settled in London, devoting himself chiefly to practical teaching.\n>\n> **TurkCorpus:** He rooted in London, devoting himself mainly to practical teaching.\n>\n> **HSplit:** He settled in London. He devoted himself chiefly to practical teaching.\n>\n> **ASSET:** He lived in London. He was a teacher.\n",
-       "communicative": "The goal is to communicate the same information as the source sentence using simpler words and grammar.\n"
-     },
-     "language": {
-       "found": [
-         "Single website"
-       ],
-       "crowdsourced": [],
-       "created": "N/A",
-       "machine-generated": "N/A",
-       "validated": "not validated",
-       "is-filtered": "algorithmically",
-       "filtered-criteria": "The authors mention that they \"extracted 138,095 article pairs from the 2019/09 Wikipedia dump using an improved version of the [WikiExtractor](https://github.com/attardi/wikiextractor) library\". The [SpaCy](https://spacy.io/) library is used for sentence splitting.\n",
-       "obtained": [
-         "Found"
-       ],
-       "producers-description": "The dataset uses language from Wikipedia: some demographic information is provided [here](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F).\n\n",
-       "topics": "n/a"
-     },
-     "annotations": {
-       "origin": "crowd-sourced",
-       "rater-number": "11<n<50",
-       "rater-qualifications": "WikiAuto (Figure Eight): No information provided.\n\nASSET (MTurk): \n- Having a HIT approval rate over 95%, and over 1000 HITs approved. No other demographic or compensation information is provided.\n- Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.\n- Being a resident of the United States, United Kingdom or Canada.\n\nTURK (MTurk): \n- Reference sentences were written by workers with HIT approval rate over 95%. No other demographic or compensation information is provided.",
-       "rater-training-num": "1",
-       "rater-test-num": ">5",
-       "rater-annotation-service-bool": "yes",
-       "rater-annotation-service": [
-         "Amazon Mechanical Turk",
-         "Appen"
-       ],
-       "values": "WikiAuto: Sentence alignment labels were crowdsourced for 500 randomly sampled document pairs (10,123 sentence pairs total). The authors pre-selected several alignment candidates from English Wikipedia for each Simple Wikipedia sentence based on various similarity metrics, then asked the crowd-workers to annotate these pairs. Finally, they trained their alignment model on this manually annotated dataset to obtain automatically aligned sentences (138,095 document pairs, 488,332 sentence pairs).\nNo demographic annotation is provided for the crowd workers. The [Figure Eight](https://www.figure-eight.com/) platform (now part of Appen) was used for the annotation process.\n\nASSET: The instructions given to the annotators are available [here](https://github.com/facebookresearch/asset/blob/master/crowdsourcing/AMT_AnnotationInstructions.pdf).\n\nTURK: The references are crowdsourced from Amazon Mechanical Turk. The annotators were asked to provide simplifications without losing any information or splitting the input sentence. No other demographic or compensation information is provided in the TURKCorpus paper. The instructions given to the annotators are available in the paper. \n\n\n",
-       "quality-control": "none",
-       "quality-control-details": "N/A"
-     },
-     "consent": {
-       "has-consent": "yes",
-       "consent-policy": "Both Figure Eight and Amazon Mechanical Turk raters forfeit the right to their data as part of their agreements.",
-       "consent-other": "N/A",
-       "no-consent-justification": "N/A"
-     },
-     "pii": {
-       "has-pii": "no PII",
-       "no-pii-justification": "Since the dataset is created from Wikipedia/Simple Wikipedia, all the information contained in the dataset is already in the public domain.\n",
-       "is-pii-identified": "N/A",
-       "pii-identified-method": "N/A",
-       "is-pii-replaced": "N/A",
-       "pii-replaced-method": "N/A",
-       "pii-categories": []
-     },
-     "maintenance": {
-       "has-maintenance": "no",
-       "description": "N/A",
-       "contact": "N/A",
-       "contestation-mechanism": "N/A",
-       "contestation-link": "N/A",
-       "contestation-description": "N/A"
-     }
-   },
-   "gem": {
-     "rationale": {
-       "sole-task-dataset": "yes",
-       "sole-language-task-dataset": "no",
-       "distinction-description": "Its unique setup with multiple test sets makes the task interesting, since it allows for evaluation of multiple generations and systems that simplify in different ways.",
-       "contribution": "WikiAuto is the largest open text simplification dataset currently available. ASSET and TURK are high-quality test sets that are compatible with WikiAuto.\n",
-       "model-ability": "simplification"
-     },
-     "curation": {
-       "has-additional-curation": "yes",
-       "modification-types": [
-         "other"
-       ],
-       "modification-description": "We removed secondary annotations and focus on the simple `input->output` format, but combine the different sub-datasets.",
-       "has-additional-splits": "yes",
-       "additional-splits-description": "We split the original test set according to syntactic complexity of the source sentences. To characterize sentence syntactic complexity, we use the 8-level developmental level (d-level) scale proposed by [Covington et al. (2006)](https://www.researchgate.net/publication/254033869_How_complex_is_that_sentence_A_proposed_revision_of_the_Rosenberg_and_Abbeduto_D-Level_Scale) and the implementation of [Lu, Xiaofei (2010)](https://www.jbe-platform.com/content/journals/10.1075/ijcl.15.4.02lu).\nWe thus split the original test set into 8 subsets corresponding to the 8 d-levels assigned to source sentences. We obtain the following number of instances per level and average d-level of the dataset:\n\n| Total nb. sentences | L0 | L1 | L2 | L3 | L4 | L5 | L6 | L7 | Mean Level |\n|-------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |\n| 359 | 166 | 0 | 58 | 32 | 5 | 28 | 7 | 63 | 2.38 |\n\n",
-       "additional-splits-capacicites": "The goal was to assess performance when simplifying source sentences with different syntactic structure and complexity."
-     },
-     "starting": {
-       "research-pointers": "There are recent supervised ([Martin et al., 2019](https://arxiv.org/abs/1910.02677), [Kriz et al., 2019](https://www.aclweb.org/anthology/N19-1317/), [Dong et al., 2019](https://www.aclweb.org/anthology/P19-1331/), [Zhang and Lapata, 2017](https://www.aclweb.org/anthology/D17-1062/)) and unsupervised ([Martin et al., 2020](https://arxiv.org/abs/2005.00352v1), [Kumar et al., 2020](https://www.aclweb.org/anthology/2020.acl-main.707/), [Surya et al., 2019](https://www.aclweb.org/anthology/P19-1198/)) text simplification models that can be used as baselines.\n\n",
-       "technical-terms": "The common metric used for automatic evaluation is SARI [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029/).\n"
-     }
-   },
-   "results": {
-     "results": {
-       "other-metrics-definitions": "SARI: A simplification metric that considers both input and references to measure the \"goodness\" of words that are added, deleted, and kept.",
-       "has-previous-results": "no",
-       "current-evaluation": "N/A",
-       "previous-results": "N/A",
-       "metrics": [
-         "Other: Other Metrics",
-         "BLEU"
-       ],
-       "model-abilities": "Simplification",
-       "original-evaluation": "The original authors of WikiAuto and ASSET used human evaluation to assess the fluency, adequacy, and simplicity (details provided in the paper). For TURK, the authors measured grammaticality, meaning-preservation, and simplicity gain (details in the paper)."
-     }
-   },
-   "considerations": {
-     "pii": {
-       "risks-description": "All the data is in the public domain."
-     },
-     "licenses": {
-       "dataset-restrictions-other": "N/A",
-       "data-copyright-other": "N/A",
-       "dataset-restrictions": [
-         "open license - commercial use allowed"
-       ],
-       "data-copyright": [
-         "open license - commercial use allowed"
-       ]
-     },
-     "limitations": {
-       "data-technical-limitations": "The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases [(Schmahl et al., 2020)](https://research.tudelft.nl/en/publications/is-wikipedia-succeeding-in-reducing-gender-bias-assessing-changes) and racial biases [(Adams et al., 2019)](https://journals.sagepub.com/doi/pdf/10.1177/2378023118823946).\n",
-       "data-unsuited-applications": "Since the test datasets contain only 2,359 sentences that are derived from Wikipedia, they are limited to a small subset of topics present on Wikipedia.\n"
-     }
-   },
-   "context": {
-     "previous": {
-       "is-deployed": "no",
-       "described-risks": "N/A",
-       "changes-from-observation": "N/A"
-     },
-     "underserved": {
-       "helps-underserved": "no",
-       "underserved-description": "N/A"
-     },
-     "biases": {
-       "has-biases": "yes",
-       "bias-analyses": "The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases [(Schmahl et al., 2020)](https://research.tudelft.nl/en/publications/is-wikipedia-succeeding-in-reducing-gender-bias-assessing-changes) and racial biases [(Adams et al., 2019)](https://journals.sagepub.com/doi/pdf/10.1177/2378023118823946).\n"
-     }
-   }
- }
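The deleted card's d-level table reports per-level sentence counts (L0-L7) for the 359 ASSET test sentences and a mean level of 2.38. A quick arithmetic check that the table is internally consistent:

```python
# Per-level sentence counts from the data card's d-level table (L0..L7).
counts = [166, 0, 58, 32, 5, 28, 7, 63]

total = sum(counts)
# Mean d-level = sum(level * count) / total sentences.
mean_level = sum(level * n for level, n in enumerate(counts)) / total
print(total, round(mean_level, 2))  # 359 sentences, mean d-level 2.38
```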
wiki_auto_asset_turk.py DELETED
@@ -1,246 +0,0 @@
- import csv
- import json
- import os
- import datasets
-
- _CITATION = """\
- @inproceedings{jiang-etal-2020-neural,
-     title = "Neural {CRF} Model for Sentence Alignment in Text Simplification",
-     author = "Jiang, Chao and
-       Maddela, Mounica and
-       Lan, Wuwei and
-       Zhong, Yang and
-       Xu, Wei",
-     booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
-     month = jul,
-     year = "2020",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/2020.acl-main.709",
-     doi = "10.18653/v1/2020.acl-main.709",
-     pages = "7943--7960",
- }
- """
-
- _DESCRIPTION = """\
- WikiAuto provides a set of aligned sentences from English Wikipedia and Simple
- English Wikipedia as a resource to train sentence simplification systems.
-
- The authors first crowd-sourced a set of manual alignments between sentences in
- a subset of the Simple English Wikipedia and their corresponding versions in
- English Wikipedia (this corresponds to the manual config in this version of the
- dataset), then trained a neural CRF system to predict these alignments.
-
- The trained alignment prediction model was then applied to the other articles in
- Simple English Wikipedia with an English counterpart to create a larger corpus
- of aligned sentences (corresponding to the auto and auto_acl configs here).
- """
-
- _URLs = {
-     "train": "train.tsv",
-     "validation": "valid.tsv",
-     "test_turk": "https://storage.googleapis.com/huggingface-nlp/datasets/gem/gem_turk_detokenized.json",
-     "challenge_set": "https://storage.googleapis.com/huggingface-nlp/datasets/gem/gem_challenge_sets/wiki_auto_asset_turk_train_valid.zip",
-     "test_contract": "benchmarks/contract-benchmark.tsv",
-     "test_wiki": "benchmarks/wiki-benchmark.tsv",
- }
-
- # Add Asset files.
- _URLs[
-     "test_asset_orig"
- ] = "https://raw.githubusercontent.com/facebookresearch/asset/main/dataset/asset.test.orig"
- for i in range(10):
-     _URLs[
-         f"test_asset_{i}"
-     ] = f"https://raw.githubusercontent.com/facebookresearch/asset/main/dataset/asset.test.simp.{i}"
-
-
- class WikiAuto(datasets.GeneratorBasedBuilder):
-     VERSION = datasets.Version("1.0.0")
-     DEFAULT_CONFIG_NAME = "wiki_auto_asset_turk"
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "gem_id": datasets.Value("string"),
-                 "gem_parent_id": datasets.Value("string"),
-                 "source": datasets.Value("string"),
-                 "target": datasets.Value("string"),
-                 "references": [datasets.Value("string")],
-             }
-         )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=datasets.info.SupervisedKeysData(
-                 input="source", output="target"
-             ),
-             homepage="",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         dl_dir = dl_manager.download_and_extract(_URLs)
-         challenge_sets = [
-             (
-                 "challenge_train_sample",
-                 "train_wiki_auto_asset_turk_RandomSample500.json",
-             ),
-             (
-                 "challenge_validation_sample",
-                 "validation_wiki_auto_asset_turk_RandomSample500.json",
-             ),
-             (
-                 "challenge_test_asset_backtranslation",
-                 "test_asset_wiki_auto_asset_turk_BackTranslation.json",
-             ),
-             (
-                 "challenge_test_asset_bfp02",
-                 "test_asset_wiki_auto_asset_turk_ButterFingersPerturbation_p=0.02.json",
-             ),
-             (
-                 "challenge_test_asset_bfp05",
-                 "test_asset_wiki_auto_asset_turk_ButterFingersPerturbation_p=0.05.json",
-             ),
-             (
-                 "challenge_test_asset_nopunc",
-                 "test_asset_wiki_auto_asset_turk_WithoutPunctuation.json",
-             ),
-             (
-                 "challenge_test_turk_backtranslation",
-                 "detok_test_turk_wiki_auto_asset_turk_BackTranslation.json",
-             ),
-             (
-                 "challenge_test_turk_bfp02",
-                 "detok_test_turk_wiki_auto_asset_turk_ButterFingersPerturbation_p=0.02.json",
-             ),
-             (
-                 "challenge_test_turk_bfp05",
-                 "detok_test_turk_wiki_auto_asset_turk_ButterFingersPerturbation_p=0.05.json",
-             ),
-             (
-                 "challenge_test_turk_nopunc",
-                 "detok_test_turk_wiki_auto_asset_turk_WithoutPunctuation.json",
-             ),
-         ]
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepath": dl_dir["train"],
-                     "split": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepath": dl_dir["validation"],
-                     "split": "validation",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name="test_asset",
-                 gen_kwargs={
-                     "filepath": "",
-                     "split": "test_asset",
-                     "filepaths": [dl_dir["test_asset_orig"]]
-                     + [dl_dir[f"test_asset_{i}"] for i in range(10)],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name="test_turk",
-                 gen_kwargs={
-                     "filepath": dl_dir["test_turk"],
-                     "split": "test_turk",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name="test_contract",
-                 gen_kwargs={
-                     "filepath": dl_dir["test_contract"],
-                     "split": "test_contract",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name="test_wiki",
-                 gen_kwargs={
-                     "filepath": dl_dir["test_wiki"],
-                     "split": "test_wiki",
-                 },
-             ),
-         ] + [
-             datasets.SplitGenerator(
-                 name=challenge_split,
-                 gen_kwargs={
-                     "filepath": os.path.join(
-                         dl_dir["challenge_set"], "wiki_auto_asset_turk", filename
-                     ),
-                     "split": challenge_split,
-                 },
-             )
-             for challenge_split, filename in challenge_sets
-         ]
-
186
- def _generate_examples(self, filepath, split, filepaths=None, lang=None):
187
- """Yields examples."""
188
- if split in ["train", "validation"]:
189
- keys = [
190
- "source",
191
- "target",
192
- ]
193
- with open(filepath, encoding="utf-8") as f:
194
- for id_, line in enumerate(f):
195
- values = line.strip().split("\t")
196
- assert (
197
- len(values) == 2
198
- ), f"Not enough fields in ---- {line} --- {values}"
199
- example = dict([(k, val) for k, val in zip(keys, values)])
200
- example["gem_id"] = f"wiki_auto_asset_turk-{split}-{id_}"
201
- example["gem_parent_id"] = example["gem_id"]
202
- example["references"] = (
203
- [] if split == "train" else [example["target"]]
204
- )
205
- yield id_, example
206
- elif split == "test_turk":
207
- examples = json.load(open(filepath, encoding="utf-8"))
208
- for id_, example in enumerate(examples):
209
- example["gem_parent_id"] = example["gem_id"]
210
- for k in ["source_id", "target_id"]:
211
- if k in example:
212
- del example[k]
213
- yield id_, example
214
- elif split == "test_asset":
215
- files = [open(f_name, encoding="utf-8") for f_name in filepaths]
216
- for id_, lines in enumerate(zip(*files)):
217
- yield id_, {
218
- "gem_id": f"wiki_auto_asset_turk-{split}-{id_}",
219
- "gem_parent_id": f"wiki_auto_asset_turk-{split}-{id_}",
220
- "target": lines[1].strip(),
221
- "source": lines[0].strip(),
222
- "references": [line.strip() for line in lines[1:]],
223
- }
224
- elif split == "test_wiki" or split == "test_contract":
225
- with open(filepath, 'r') as f:
226
- reader = csv.DictReader(f, delimiter="\t")
227
- for id_, entry in enumerate(reader):
228
- yield id_, {
229
- "gem_id": f"wiki_auto_asset_turk-{split}-{id_}",
230
- "gem_parent_id": f"wiki_auto_asset_turk-{split}-{id_}",
231
- "target": entry["simple"],
232
- "source": entry["complex"],
233
- "references": [entry["simple"]],
234
- }
235
- else:
236
- exples = json.load(open(filepath, encoding="utf-8"))
237
- if isinstance(exples, dict):
238
- assert len(exples) == 1, "multiple entries found"
239
- exples = list(exples.values())[0]
240
- for id_, exple in enumerate(exples):
241
- exple["gem_parent_id"] = exple["gem_id"]
242
- exple["gem_id"] = f"wiki_auto_asset_turk-{split}-{id_}"
243
- for k in ["source_id", "target_id"]:
244
- if k in exple:
245
- del exple[k]
246
- yield id_, exple
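
With the loading script removed, its per-split parsing logic lives only in this diff. For reference, the deleted train/validation branch turned each tab-separated `source\ttarget` line into a GEM example record. A minimal standalone sketch of that mapping (the helper name `line_to_example` is hypothetical; no file I/O, just the per-line logic from the removed code):

```python
def line_to_example(line: str, split: str, id_: int) -> dict:
    # Each TSV line holds exactly two fields: the complex source and
    # the simplified target.
    values = line.strip().split("\t")
    assert len(values) == 2, f"Not enough fields in ---- {line} --- {values}"
    example = dict(zip(["source", "target"], values))
    # GEM ids encode dataset, split, and row index; the parent id equals
    # the id for non-challenge rows.
    example["gem_id"] = f"wiki_auto_asset_turk-{split}-{id_}"
    example["gem_parent_id"] = example["gem_id"]
    # Train rows carry no references; validation references the target.
    example["references"] = [] if split == "train" else [example["target"]]
    return example
```

The parquet shards added below store these already-materialized records, so consumers no longer need this logic.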
valid.tsv → wiki_auto_asset_turk/challenge_test_asset_backtranslation-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6be79b5d014a27facc0f3e892cef35774f48f6e08e4d6eefafb801bcf2ab7b09
- size 4338364
+ oid sha256:84cd27fccc7b47543978ec44b948262bfee56dbde66ef2dd92cb707d9845419a
+ size 186200
train.tsv → wiki_auto_asset_turk/challenge_test_asset_bfp02-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0ed9ea351922ba39a9a2a5a15293619af5f2a94b9ead86b7ef2007bfcb76aadd
- size 120678315
+ oid sha256:81a3ffadbe9cc1df33a44cf3315b097972376a304b7fca60f8c52ca57f77fa17
+ size 187167
wiki_auto_asset_turk/challenge_test_asset_bfp05-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36f3675c5da76331d5c56eea3a1bfa473fa3df75242f89ecc1350c0ab4e19a79
+ size 188180
wiki_auto_asset_turk/challenge_test_asset_nopunc-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc3b720907ca5945bf19e9bb9db85a7709352e61d9f9ab53f4919b50c92f793c
+ size 185666
wiki_auto_asset_turk/challenge_test_turk_backtranslation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1df1cb2fbdd6c9a3692f7c3ddfed668a0b59b8c6cf3e4fae87bfc234fe791475
+ size 174298
wiki_auto_asset_turk/challenge_test_turk_bfp02-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d9126a290fb4b108cb223362f2bb0c3966503f29a17d4fb180e397ff19e37c8
+ size 175961
wiki_auto_asset_turk/challenge_test_turk_bfp05-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d8b1accd537489626b5aaf367b883bb48ef57ebecce79e9fbecb4de25613d14
+ size 177426
wiki_auto_asset_turk/challenge_test_turk_nopunc-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8c260a6a89ba0a7938880731b2c7c96f2553db28d2469e91c5ce7fc77de9c28
+ size 174272
wiki_auto_asset_turk/challenge_train_sample-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46979eba79b0b64bf21a85ea763ef954eb1b3badd2e73c637534090ec50b1d56
+ size 122970
wiki_auto_asset_turk/challenge_validation_sample-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:224bcbdffcd89028e61c4ec9f2c8647acf9aca1aee2655e228099a22e286d85b
+ size 90119
wiki_auto_asset_turk/test_asset-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f22335d1e44a9568b6c20d8fb9e37dce012661965ac1366dfecb6eebd912c8c4
+ size 203593
wiki_auto_asset_turk/test_contract-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bbeb032a5d694b7d480d5abb38bee593b9bd03a445712c15b2ab0a5b4229e7c
+ size 193642
wiki_auto_asset_turk/test_turk-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03652187ed6704ab9156d41db317780f3366f828ceb33ed9056a1689b0fd185c
+ size 174377
wiki_auto_asset_turk/test_wiki-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d999e85b8ebd7e9c3292a183298d0ffc48c302ffd7a1b6103869d4c28a32d253
+ size 179975
wiki_auto_asset_turk/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac2e3810ba66ed24f4c00e35cc9f6019ed85f135f0bb69382a7fd6709c346865
+ size 89483499
wiki_auto_asset_turk/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ebf89244687b51a15aef34a35e72e058a4faca00273a4131c598ac5813834c38
+ size 1912670