system HF staff committed on
Commit
1dd45ac
0 Parent(s):

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,167 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - gnu-gpl-v3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - text-simplification
+ ---
+
+ # Dataset Card for TURK
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** None
+ - **Repository:** [TURK](https://github.com/cocoxu/simplification)
+ - **Paper:** [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029/)
+ - **Leaderboard:** N/A
+ - **Point of Contact:** [Wei Xu](mailto:wei.xu@cc.gatech.edu)
+
+
+ ### Dataset Summary
+
+ TURK is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset consists of 2,359 sentences from the [Parallel Wikipedia Simplification (PWKP) corpus](https://www.aclweb.org/anthology/C10-1152/). Each sentence is associated with 8 crowdsourced simplifications that focus only on lexical paraphrasing (no sentence splitting or deletion).
+
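+ A minimal loading sketch with the `datasets` library (the `turk` identifier and the `simplification` configuration come from the builder script in this repository; the exact call may vary across library versions):
+
+ ```python
+ from datasets import load_dataset
+
+ # "simplification" is the only configuration defined by turk.py.
+ turk = load_dataset("turk", "simplification")
+
+ sample = turk["validation"][0]
+ print(sample["original"])              # source sentence from PWKP
+ print(len(sample["simplifications"]))  # 8 crowdsourced references
+ ```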
+ ### Supported Tasks and Leaderboards
+
+ There is no leaderboard for this task.
+
+ ### Languages
+
+ TURK contains English text only (BCP-47: `en`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An instance consists of an original sentence and 8 possible reference simplifications that focus on lexical paraphrasing.
+
+ ```
+ {'original': 'one side of the armed conflicts is composed mainly of the sudanese military and the janjaweed , a sudanese militia group recruited mostly from the afro-arab abbala tribes of the northern rizeigat region in sudan .',
+ 'simplifications': ['one side of the armed conflicts is made of sudanese military and the janjaweed , a sudanese militia recruited from the afro-arab abbala tribes of the northern rizeigat region in sudan .', 'one side of the armed conflicts consist of the sudanese military and the sudanese militia group janjaweed .', 'one side of the armed conflicts is mainly sudanese military and the janjaweed , which recruited from the afro-arab abbala tribes .', 'one side of the armed conflicts is composed mainly of the sudanese military and the janjaweed , a sudanese militia group recruited mostly from the afro-arab abbala tribes in sudan .', 'one side of the armed conflicts is made up mostly of the sudanese military and the janjaweed , a sudanese militia group whose recruits mostly come from the afro-arab abbala tribes from the northern rizeigat region in sudan .', 'the sudanese military and the janjaweed make up one of the armed conflicts , mostly from the afro-arab abbal tribes in sudan .', 'one side of the armed conflicts is composed mainly of the sudanese military and the janjaweed , a sudanese militia group recruited mostly from the afro-arab abbala tribes of the northern rizeigat regime in sudan .', 'one side of the armed conflicts is composed mainly of the sudanese military and the janjaweed , a sudanese militia group recruited mostly from the afro-arab abbala tribes of the northern rizeigat region in sudan .']}
+ ```
+
+
+ ### Data Fields
+
+ - `original`: an original sentence from the source dataset
+ - `simplifications`: a set of reference simplifications produced by crowd workers.
+
+ ### Data Splits
+
+ TURK does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) or [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) (Jiang et al., 2020) for training.
+
+ Each input sentence has 8 associated reference simplifications. The 2,359 input sentences are randomly split into 2,000 validation and 359 test sentences.
+
+ | | Dev | Test | Total |
+ | ----- | ------ | ---- | ----- |
+ | Input Sentences | 2000 | 359 | 2359 |
+ | Reference Simplifications | 16000 | 2872 | 18872 |
+
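+ Because TURK is meant for multi-reference evaluation, the splits are typically consumed as pairs of a source sentence and its list of references. A short sketch, assuming the dataset was loaded as in the example above:
+
+ ```python
+ # Pair each source sentence with its 8 reference simplifications.
+ validation = turk["validation"]
+ sources = [ex["original"] for ex in validation]
+ references = [ex["simplifications"] for ex in validation]
+
+ assert len(sources) == 2000
+ assert all(len(refs) == 8 for refs in references)
+ ```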
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The TURK dataset was constructed to evaluate the task of text simplification. It contains multiple human-written references that focus only on lexical simplification.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The input sentences in the dataset are extracted from the [Parallel Wikipedia Simplification (PWKP) corpus](https://www.aclweb.org/anthology/C10-1152/).
+
+ #### Who are the source language producers?
+
+ The references are crowdsourced from Amazon Mechanical Turk. The annotators were asked to provide simplifications without losing any information or splitting the input sentence. No other demographic or compensation information is provided in the paper.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The instructions given to the annotators are available in the paper.
+
+ #### Who are the annotators?
+
+ The annotators are Amazon Mechanical Turk workers.
+
+ ### Personal and Sensitive Information
+
+ Since the dataset is created from English Wikipedia (August 22, 2009 version), all the information contained in the dataset is already publicly available.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The dataset helps advance research on text simplification by providing higher-quality validation and test sets. Progress in text simplification in turn has the potential to increase the accessibility of written documents to wider audiences.
+
+ ### Discussion of Biases
+
+ The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases [(Schmahl et al., 2020)](https://research.tudelft.nl/en/publications/is-wikipedia-succeeding-in-reducing-gender-bias-assessing-changes) and racial biases [(Adams et al., 2019)](https://journals.sagepub.com/doi/pdf/10.1177/2378023118823946).
+
+ ### Other Known Limitations
+
+ Since the dataset contains only 2,359 sentences that are derived from Wikipedia, it is limited to a small subset of topics present on Wikipedia.
+
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ TURK was developed by researchers at the University of Pennsylvania. The work was supported by the NSF under grant IIS-1430651 and the NSF GRFP under grant 1232825.
+
+ ### Licensing Information
+
+ [GNU General Public License v3.0](https://github.com/cocoxu/simplification/blob/master/LICENSE)
+
+ ### Citation Information
+ ```
+ @article{Xu-EtAl:2016:TACL,
+ author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},
+ title = {Optimizing Statistical Machine Translation for Text Simplification},
+ journal = {Transactions of the Association for Computational Linguistics},
+ volume = {4},
+ year = {2016},
+ url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf},
+ pages = {401--415}
+ }
+ ```
+ ### Contributions
+
+ Thanks to [@mounicam](https://github.com/mounicam) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"simplification": {"description": "TURKCorpus is a dataset for evaluating sentence simplification systems that focus on lexical paraphrasing,\nas described in \"Optimizing Statistical Machine Translation for Text Simplification\". The corpus is composed of 2000 validation and 359 test original sentences that were each simplified 8 times by different annotators.\n", "citation": " @article{Xu-EtAl:2016:TACL,\n author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},\n title = {Optimizing Statistical Machine Translation for Text Simplification},\n journal = {Transactions of the Association for Computational Linguistics},\n volume = {4},\n year = {2016},\n url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf},\n pages = {401--415}\n }\n}\n", "homepage": "https://github.com/cocoxu/simplification", "license": "GNU General Public License v3.0", "features": {"original": {"dtype": "string", "id": null, "_type": "Value"}, "simplifications": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "turk", "config_name": "simplification", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 2120187, "num_examples": 2000, "dataset_name": "turk"}, "test": {"name": "test", "num_bytes": 396378, "num_examples": 359, "dataset_name": "turk"}}, "download_checksums": {"https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.norm": {"num_bytes": 45291, "checksum": "5a45e4deb23524dbd06fae0bbaf4a547df8c5d982bf4c9867c0f1462ed99ac46"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.norm": {"num_bytes": 242697, "checksum": "1a0a0bf500bac72486eda8816e0a64347e79bd3652daddd1289fd4eec773df00"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.turk.0": {"num_bytes": 227391, "checksum": "fb7c373e88dd188e234c688e6c7ed22012658e06c5c127d4be5f19f0e66a6542"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.turk.1": {"num_bytes": 227362, "checksum": "308fab45b60d36bbd0ff651245cc0ceed82654658679c27ce575c4b487827394"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.turk.2": {"num_bytes": 227046, "checksum": "f428363b156759352c4240a218f5485909961c84554fd20dbcf076a4518c1f13"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.turk.3": {"num_bytes": 228063, "checksum": "22a430a69b348643e4e86e33724ef8a0dc690e948827af9667d21536f7f19981"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.turk.4": {"num_bytes": 226410, "checksum": "a07211cb2a493f8a6c00f3f437c826eb10d01abb354f910d278d74752c306c24"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.turk.5": {"num_bytes": 226117, "checksum": "951a03c67fd726a946a7d303af6edc64b4c3aa351721c7e921bd83c5f8a7e1c6"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.turk.6": {"num_bytes": 226780, "checksum": "2983e016b4a7edff749106865251653d93def0c8f4f6f30ef6800b83cc3becbb"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.turk.7": 
{"num_bytes": 226300, "checksum": "f427962c2fa8aee00911c74b3c2c093e5b50acc70928a619d3f3225ba29f38eb"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.turk.0": {"num_bytes": 37584, "checksum": "33399612ddb7ec4f0cd798508ea2928a3ab9b2ec3a9e524a4d5a0da44bf1425a"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.turk.1": {"num_bytes": 39995, "checksum": "6ea0d23083ce25c7cceb19f4e454ddde7d8b4010243d7af2ab0a96884587e79b"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.turk.2": {"num_bytes": 39854, "checksum": "abe871f586783f6e2273557fbc1ed203b06e5a5c2a52da260113c939ce1e79e3"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.turk.3": {"num_bytes": 42606, "checksum": "b4387233b14c123c7cef8d15c2ee7c68244fedb10e6e37008c0eed782b98897e"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.turk.4": {"num_bytes": 42005, "checksum": "1abf53f4dc075660322be772b40cdd26545902d5a7fa8746a460ea55301dd847"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.turk.5": {"num_bytes": 44149, "checksum": "3bbb08c71bbf692a2b7f2b6421a833397f96574fb9d7ff1dfd2c0f52ea0c52d6"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.turk.6": {"num_bytes": 45780, "checksum": "d100c0a63c9a01cde27694f18275e760d3f77bcd8b46ab9f6f832e8bc37c4857"}, "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.turk.7": {"num_bytes": 47964, "checksum": "e1956804ef69855a83a6c214acd07373533dad31615de0254ec60e3d0dbbedac"}}, "download_size": 2443394, "post_processing_size": null, "dataset_size": 2516565, "size_in_bytes": 4959959}}
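The split metadata recorded in this file can be read back with the standard library; a minimal sketch, assuming a local copy of `dataset_infos.json` in the working directory:

```python
import json

# Read the generated metadata and print the number of examples per split.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for name, split in infos["simplification"]["splits"].items():
    print(name, split["num_examples"])  # validation 2000, test 359
```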
dummy/simplification/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d9c2863f41e8723938db26037e8b1298ccab4e73dd3ba93fa8af24618c6e58ba
+ size 9085
turk.py ADDED
@@ -0,0 +1,118 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """TURKCorpus: a dataset for sentence simplification evaluation"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{Xu-EtAl:2016:TACL,
+ author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},
+ title = {Optimizing Statistical Machine Translation for Text Simplification},
+ journal = {Transactions of the Association for Computational Linguistics},
+ volume = {4},
+ year = {2016},
+ url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf},
+ pages = {401--415}
+ }
+ """
+
+ _DESCRIPTION = """\
+ TURKCorpus is a dataset for evaluating sentence simplification systems that focus on lexical paraphrasing,
+ as described in "Optimizing Statistical Machine Translation for Text Simplification". The corpus is composed of 2000 validation and 359 test original sentences that were each simplified 8 times by different annotators.
+ """
+
+ _HOMEPAGE = "https://github.com/cocoxu/simplification"
+
+ _LICENSE = "GNU General Public License v3.0"
+
+ _URL_LIST = [
+     (
+         "test.8turkers.tok.norm",
+         "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/test.8turkers.tok.norm",
+     ),
+     (
+         "tune.8turkers.tok.norm",
+         "https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/tune.8turkers.tok.norm",
+     ),
+ ]
+ # Reference file i holds the i-th simplification of every sentence, for both splits.
+ _URL_LIST += [
+     (
+         f"{spl}.8turkers.tok.turk.{i}",
+         f"https://raw.githubusercontent.com/cocoxu/simplification/master/data/turkcorpus/{spl}.8turkers.tok.turk.{i}",
+     )
+     for spl in ["tune", "test"]
+     for i in range(8)
+ ]
+
+ _URLs = dict(_URL_LIST)
+
+
+ class Turk(datasets.GeneratorBasedBuilder):
+     """TURKCorpus: original sentences, each paired with 8 crowdsourced simplifications."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="simplification",
+             version=VERSION,
+             description="A set of original sentences aligned with 8 possible simplifications for each.",
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "original": datasets.Value("string"),
+                 "simplifications": datasets.Sequence(datasets.Value("string")),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # Download all 18 files: one source file plus 8 reference files per split.
+         data_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepaths": data_dir,
+                     "split": "valid",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepaths": data_dir, "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, filepaths, split):
+         """Yields examples."""
+         if split == "valid":
+             split = "tune"
+         # The source (norm) file and the 8 reference (turk) files are line-aligned.
+         files = [open(filepaths[f"{split}.8turkers.tok.norm"], encoding="utf-8")] + [
+             open(filepaths[f"{split}.8turkers.tok.turk.{i}"], encoding="utf-8") for i in range(8)
+         ]
+         for id_, lines in enumerate(zip(*files)):
+             yield id_, {"original": lines[0].strip(), "simplifications": [line.strip() for line in lines[1:]]}
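The generator above depends on the TURK distribution keeping the source (`*.norm`) file and the eight reference (`*.turk.{0..7}`) files line-aligned. A standalone sketch of the same pairing logic, assuming the raw `tune.*` files have already been downloaded into a local directory named `turkcorpus/` (the directory name is illustrative):

```python
from pathlib import Path

# Hypothetical local directory holding the raw TURK files; adjust as needed.
data_dir = Path("turkcorpus")

# Source sentences: one per line.
originals = (data_dir / "tune.8turkers.tok.norm").read_text(encoding="utf-8").splitlines()

# Reference file i holds the i-th simplification of every sentence, line-aligned with the norm file.
reference_files = [
    (data_dir / f"tune.8turkers.tok.turk.{i}").read_text(encoding="utf-8").splitlines()
    for i in range(8)
]

examples = [
    {"original": orig, "simplifications": list(refs)}
    for orig, *refs in zip(originals, *reference_files)
]
print(len(examples), len(examples[0]["simplifications"]))  # expected: 2000 8
```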