system HF staff committed on
Commit 6c04ba5
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,201 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ - extended|other-turkcorpus
+ task_categories:
+   ratings:
+   - text-scoring
+   simplification:
+   - conditional-text-generation
+ task_ids:
+   ratings:
+   - text-scoring-other-simplification-evaluation
+   simplification:
+   - text-simplification
+ ---
+
+ # Dataset Card for ASSET
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Repository:** [ASSET GitHub repository](https://github.com/facebookresearch/asset)
+ - **Paper:** [ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations](https://www.aclweb.org/anthology/2020.acl-main.424/)
+ - **Point of Contact:** [Louis Martin](mailto:louismartincs@gmail.com)
+
+ ### Dataset Summary
+
+ [ASSET](https://github.com/facebookresearch/asset) [(Alva-Manchego et al., 2020)](https://www.aclweb.org/anthology/2020.acl-main.424.pdf) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences as [TurkCorpus](https://github.com/cocoxu/simplification/) [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf), and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence splitting in [HSplit](https://www.aclweb.org/anthology/D18-1081.pdf)), the simplifications in ASSET encompass a variety of rewriting transformations.
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
+
+ ### Languages
+
+ The text in this dataset is in English (`en`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ - `simplification` configuration: an instance consists of an original sentence and 10 possible reference simplifications.
+ - `ratings` configuration: an instance consists of an original sentence, a simplification obtained by an automated system, and a quality judgment along one of three axes by a crowd worker.
+
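As a minimal sketch, the two record shapes can be written out as plain Python dicts; the sentence text below is taken from the example later in this card, while the numeric values are invented for illustration:

```python
# Hypothetical records illustrating the two configurations' schemas;
# the ids and the rating value are invented for illustration.
simplification_example = {
    "original": "He settled in London, devoting himself chiefly to practical teaching.",
    "simplifications": [f"reference simplification {i}" for i in range(10)],
}

ratings_example = {
    "original": "He settled in London, devoting himself chiefly to practical teaching.",
    "simplification": "He lived in London. He was a teacher.",
    "original_sentence_id": 0,   # index of the source sentence
    "aspect": "meaning",         # one of "meaning", "fluency", "simplicity"
    "worker_id": 3,              # anonymized annotator id
    "rating": 85,                # quality judgment between 0 and 100
}

assert len(simplification_example["simplifications"]) == 10
assert ratings_example["aspect"] in {"meaning", "fluency", "simplicity"}
assert 0 <= ratings_example["rating"] <= 100
```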
+ ### Data Fields
+
+ - `original`: an original sentence from the source datasets
+ - `simplifications`: in the `simplification` config, a set of reference simplifications produced by crowd workers
+ - `simplification`: in the `ratings` config, a simplification of the original obtained by an automated system
+ - `aspect`: in the `ratings` config, the aspect on which the simplification is evaluated, one of `meaning`, `fluency`, `simplicity`
+ - `rating`: a quality rating between 0 and 100
+
+ ### Data Splits
+
+ ASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training.
+
+ Each input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.
+
+ |                           | Dev   | Test | Total |
+ | ------------------------- | ----- | ---- | ----- |
+ | Input Sentences           | 2000  | 359  | 2359  |
+ | Reference Simplifications | 20000 | 3590 | 23590 |
+
+ The test and validation sets are the same as those of TurkCorpus. The split was random.
+
+ There are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting.
+
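The split sizes are mutually consistent, which can be checked with a couple of lines of arithmetic:

```python
# Counts taken from the statistics table: each input sentence has
# 10 reference simplifications.
inputs = {"dev": 2000, "test": 359}
references_per_input = 10

total_inputs = sum(inputs.values())
total_references = total_inputs * references_per_input

assert total_inputs == 2359        # matches the "Total" column
assert total_references == 23590   # 2359 inputs * 10 references
```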
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ ASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the [TurkCorpus](https://github.com/cocoxu/simplification/) dataset from [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). The 2,359 input sentences of TurkCorpus are a sample of "standard" (not simple) sentences from the [Parallel Wikipedia Simplification (PWKP)](https://www.informatik.tu-darmstadt.de/ukp/research_6/data/sentence_simplification/simple_complex_sentence_pairs/index.en.jsp) dataset [(Zhu et al., 2010)](https://www.aclweb.org/anthology/C10-1152.pdf), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). No further information is provided on the sampling strategy.
+
+ The TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit [(Sulem et al., 2018)](https://www.aclweb.org/anthology/D18-1081.pdf), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.
+
+ An example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:
+
+ > **Original:** He settled in London, devoting himself chiefly to practical teaching.
+ >
+ > **TurkCorpus:** He rooted in London, devoting himself mainly to practical teaching.
+ >
+ > **HSplit:** He settled in London. He devoted himself chiefly to practical teaching.
+ >
+ > **ASSET:** He lived in London. He was a teacher.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ The input sentences are from English Wikipedia (August 22, 2009 version). No demographic information is available for the writers of these sentences. However, most Wikipedia editors are male (Lam, 2011; Graells-Garrido, 2015), which has an impact on the topics covered (see also [the Wikipedia page on Wikipedia gender bias](https://en.wikipedia.org/wiki/Gender_bias_on_Wikipedia)). In addition, Wikipedia editors are mostly white, young, and from the Northern Hemisphere [(Wikipedia: Systemic bias)](https://en.wikipedia.org/wiki/Wikipedia:Systemic_bias).
+
+ Reference sentences were written by 42 workers on Amazon Mechanical Turk (AMT). The requirements for being an annotator were:
+ - Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.
+ - Being a resident of the United States, United Kingdom or Canada.
+ - Having a HIT approval rate over 95%, and over 1000 HITs approved.
+
+ No other demographic or compensation information is provided in the ASSET paper.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The instructions given to the annotators are available [here](https://github.com/facebookresearch/asset/blob/master/crowdsourcing/AMT_AnnotationInstructions.pdf).
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases (Schmahl et al., 2020) and racial biases (Adams et al., 2019).
+
+ > Adams, Julia, Hannah Brückner, and Cambria Naslund. "Who Counts as a Notable Sociologist on Wikipedia? Gender, Race, and the “Professor Test”." Socius 5 (2019): 2378023118823946.
+ >
+ > Schmahl, Katja Geertruida, et al. "Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings." Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020.
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ ASSET was developed by researchers at the University of Sheffield, Inria, Facebook AI Research, and Imperial College London. The work was partly supported by Benoît Sagot's chair in the PRAIRIE institute, funded by the French National Research Agency (ANR) as part of the "Investissements d’avenir" program (reference ANR-19-P3IA-0001).
+
+ ### Licensing Information
+
+ [Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/)
+
+ ### Citation Information
+
+ ```
+ @inproceedings{alva-manchego-etal-2020-asset,
+     title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations",
+     author = "Alva-Manchego, Fernando  and
+       Martin, Louis  and
+       Bordes, Antoine  and
+       Scarton, Carolina  and
+       Sagot, Beno{\^\i}t  and
+       Specia, Lucia",
+     booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
+     month = jul,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.acl-main.424",
+     pages = "4668--4679",
+ }
+ ```
+
+ This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r).
asset.py ADDED
@@ -0,0 +1,161 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ASSET: a dataset for sentence simplification evaluation."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{alva-manchego-etal-2020-asset,
+     title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations",
+     author = "Alva-Manchego, Fernando  and
+       Martin, Louis  and
+       Bordes, Antoine  and
+       Scarton, Carolina  and
+       Sagot, Benoit  and
+       Specia, Lucia",
+     booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
+     month = jul,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.acl-main.424",
+     pages = "4668--4679",
+ }
+ """
+
+ _DESCRIPTION = """\
+ ASSET is a dataset for evaluating Sentence Simplification systems with multiple rewriting transformations,
+ as described in "ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations".
+ The corpus is composed of 2000 validation and 359 test original sentences that were each simplified 10 times by different annotators.
+ The corpus also contains human judgments of meaning preservation, fluency and simplicity for the outputs of several automatic text simplification systems.
+ """
+
+ _HOMEPAGE = "https://github.com/facebookresearch/asset"
+
+ _LICENSE = "Creative Common Attribution-NonCommercial 4.0 International"
+
+ _URL_LIST = [
+     ("human_ratings.csv", "https://github.com/facebookresearch/asset/raw/master/human_ratings/human_ratings.csv"),
+     ("asset.valid.orig", "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.orig"),
+     ("asset.test.orig", "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig"),
+ ]
+ _URL_LIST += [
+     (
+         f"asset.{spl}.simp.{i}",
+         f"https://github.com/facebookresearch/asset/raw/master/dataset/asset.{spl}.simp.{i}",
+     )
+     for spl in ["valid", "test"]
+     for i in range(10)
+ ]
+
+ _URLs = dict(_URL_LIST)
+
+
+ class Asset(datasets.GeneratorBasedBuilder):
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="simplification",
+             version=VERSION,
+             description="A set of original sentences aligned with 10 possible simplifications for each.",
+         ),
+         datasets.BuilderConfig(
+             name="ratings", version=VERSION, description="Human ratings of automatically produced text simplification."
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "simplification"
+
+     def _info(self):
+         if self.config.name == "simplification":
+             features = datasets.Features(
+                 {
+                     "original": datasets.Value("string"),
+                     "simplifications": datasets.Sequence(datasets.Value("string")),
+                 }
+             )
+         else:
+             features = datasets.Features(
+                 {
+                     "original": datasets.Value("string"),
+                     "simplification": datasets.Value("string"),
+                     "original_sentence_id": datasets.Value("int32"),
+                     "aspect": datasets.ClassLabel(names=["meaning", "fluency", "simplicity"]),
+                     "worker_id": datasets.Value("int32"),
+                     "rating": datasets.Value("int32"),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_URLs)
+         if self.config.name == "simplification":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     gen_kwargs={
+                         "filepaths": data_dir,
+                         "split": "valid",
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={"filepaths": data_dir, "split": "test"},
+                 ),
+             ]
+         else:
+             return [
+                 datasets.SplitGenerator(
+                     name="full",
+                     gen_kwargs={
+                         "filepaths": data_dir,
+                         "split": "full",
+                     },
+                 ),
+             ]
+
+     def _generate_examples(self, filepaths, split):
+         """Yields examples."""
+         if self.config.name == "simplification":
+             files = [open(filepaths[f"asset.{split}.orig"], encoding="utf-8")] + [
+                 open(filepaths[f"asset.{split}.simp.{i}"], encoding="utf-8") for i in range(10)
+             ]
+             for id_, lines in enumerate(zip(*files)):
+                 yield id_, {"original": lines[0].strip(), "simplifications": [line.strip() for line in lines[1:]]}
+         else:
+             with open(filepaths["human_ratings.csv"], encoding="utf-8") as f:
+                 reader = csv.reader(f, delimiter=",")
+                 for id_, row in enumerate(reader):
+                     if id_ == 0:
+                         keys = row[:]
+                     else:
+                         res = dict(zip(keys, row))
+                         for k in ["original_sentence_id", "worker_id", "rating"]:
+                             res[k] = int(res[k])
+                         yield (id_ - 1), res
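As a rough, self-contained sketch of the `ratings` branch of `_generate_examples` above: the header row of the CSV supplies the field names, and the integer columns are cast before each row is yielded. The miniature CSV content here is invented; the column names follow the `ratings` features declared in `Asset._info()`.

```python
import csv
import io

# Invented stand-in for human_ratings.csv (one header row, one data row).
csv_text = (
    "original,simplification,original_sentence_id,aspect,worker_id,rating\n"
    "A sentence.,A simpler sentence.,0,meaning,3,85\n"
)

examples = []
reader = csv.reader(io.StringIO(csv_text), delimiter=",")
for id_, row in enumerate(reader):
    if id_ == 0:
        keys = row[:]              # header row supplies the field names
    else:
        res = dict(zip(keys, row))
        for k in ["original_sentence_id", "worker_id", "rating"]:
            res[k] = int(res[k])   # numeric fields are cast, as in the script
        examples.append((id_ - 1, res))
```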
dataset_infos.json ADDED
@@ -0,0 +1 @@
 
+ {"simplification": {"description": "ASSET is a dataset for evaluating Sentence Simplification systems with multiple rewriting transformations,\nas described in \"ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations\".\nThe corpus is composed of 2000 validation and 359 test original sentences that were each simplified 10 times by different annotators.\nThe corpus also contains human judgments of meaning preservation, fluency and simplicity for the outputs of several automatic text simplification systems.\n", "citation": "@inproceedings{alva-manchego-etal-2020-asset,\n title = \"{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations\",\n author = \"Alva-Manchego, Fernando and\n Martin, Louis and\n Bordes, Antoine and\n Scarton, Carolina and\n Sagot, Beno{\\^\\i}t and\n Specia, Lucia\",\n booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.acl-main.424\",\n pages = \"4668--4679\",\n}\n", "homepage": "https://github.com/facebookresearch/asset", "license": "Creative Common Attribution-NonCommercial 4.0 International", "features": {"original": {"dtype": "string", "id": null, "_type": "Value"}, "simplifications": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "asset", "config_name": "simplification", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 2303496, "num_examples": 2000, "dataset_name": "asset"}, "test": {"name": "test", "num_bytes": 411031, "num_examples": 359, "dataset_name": 
"asset"}}, "download_checksums": {"https://github.com/facebookresearch/asset/raw/master/human_ratings/human_ratings.csv": {"num_bytes": 1012140, "checksum": "09ea6ee887af56be380334677ca2f3ba561f67abbb64ef1e6382fadf440d5593"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.orig": {"num_bytes": 234319, "checksum": "83dca90b5c53365a9c4a70222aed129c6df3c3f6b5da82ee94312e179b93fff1"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig": {"num_bytes": 43745, "checksum": "673ceb2672a37168a52040d75e16f9ffd1e3777b9f68e19207f2adf6542723f1"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.0": {"num_bytes": 193975, "checksum": "27b0c4a40c91b875c82a8ed76ff7cf0476b03a3a6998a2e3ef6e18000efda624"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.1": {"num_bytes": 180040, "checksum": "5043e9db5934c3d538b91f56d23466177813896da10f153d2b16c0c415ac5e84"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.2": {"num_bytes": 187445, "checksum": "578dd487cf03f6f66bb4acd2c44b464c3ac9fb42dc64b8e6afd391778ebc7ea7"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.3": {"num_bytes": 207902, "checksum": "4ab95ba6f7a60adde2f57201c0b749384ce64c97f8378fb0ab185367709a8386"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.4": {"num_bytes": 211459, "checksum": "f6f1d4bf9f87b532b37d7f5700ec384f817dba10247c91df629e1f6eee6c3aa9"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.5": {"num_bytes": 194418, "checksum": "182114cfbb2960358b0e2d71737ead9a2abb0c27d3f65281335f079ae4447e3b"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.6": {"num_bytes": 188962, "checksum": "aba27b505dad048982e902a04c4ffc5ab9e926b38d2383920aea798fd42de376"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.7": {"num_bytes": 196950, 
"checksum": "73d7983107eca6b98a9aec62ea75ea8d5adf313755b7e105608f212dead124cd"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.8": {"num_bytes": 213056, "checksum": "58f8e1109d87a4e3c5403705b8782750bb849cb8389a698156e00ca6512dd5c4"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.9": {"num_bytes": 220962, "checksum": "35a861e174ce5458fdcd1866e49aa84297fcb6b51fd98d6633b71932a646832e"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.0": {"num_bytes": 35457, "checksum": "66f36029d0c732eb92886021faefe531c6cfd0a32bdbe7ae4aa97fd45bd1b046"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.1": {"num_bytes": 34096, "checksum": "d323ceb364abbe84c79b14b028aa1ff563cd94955fbab19049612548dbb0f83f"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.2": {"num_bytes": 34348, "checksum": "786b55f8425ce4a993e98be5e2bea9ef87bf536b96dc13f7a57c4733fdb63e06"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.3": {"num_bytes": 37292, "checksum": "e211c9e2ede1dfe315097132dbe4feda76b309bdc636a5394cb5d2664ba5bf52"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.4": {"num_bytes": 35887, "checksum": "37be9cf0592c0f68d87848dc9c442fe62f344518c1993896c00788bf943b755d"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.5": {"num_bytes": 35351, "checksum": "8485210573a3bd76116de8e978b227677c6c207111a4938729397c4e603dfa46"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.6": {"num_bytes": 35846, "checksum": "f0cb3ab823d23203ea044f81bd7e67cc823db0632095e43b78a54a9891a0b0a8"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.7": {"num_bytes": 34560, "checksum": "35cbb8b9964252a1470607634f19ad946c6bc2951b3e500eedd826baf12bd3c8"}, 
"https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.8": {"num_bytes": 35830, "checksum": "047b6419590b88f93b435d3177bba1883dc9c0dc178676e48470b408236446f4"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.9": {"num_bytes": 35313, "checksum": "3f5745e4f2743563b88ea4284ec35fa4ddb68d62de80b63ffb87751b998fe6b8"}}, "download_size": 3639353, "post_processing_size": null, "dataset_size": 2714527, "size_in_bytes": 6353880}, "ratings": {"description": "ASSET is a dataset for evaluating Sentence Simplification systems with multiple rewriting transformations,\nas described in \"ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations\".\nThe corpus is composed of 2000 validation and 359 test original sentences that were each simplified 10 times by different annotators.\nThe corpus also contains human judgments of meaning preservation, fluency and simplicity for the outputs of several automatic text simplification systems.\n", "citation": "@inproceedings{alva-manchego-etal-2020-asset,\n title = \"{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations\",\n author = \"Alva-Manchego, Fernando and\n Martin, Louis and\n Bordes, Antoine and\n Scarton, Carolina and\n Sagot, Beno{\\^\\i}t and\n Specia, Lucia\",\n booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.acl-main.424\",\n pages = \"4668--4679\",\n}\n", "homepage": "https://github.com/facebookresearch/asset", "license": "Creative Common Attribution-NonCommercial 4.0 International", "features": {"original": {"dtype": "string", "id": null, "_type": "Value"}, "simplification": {"dtype": "string", "id": null, "_type": "Value"}, 
"original_sentence_id": {"dtype": "int32", "id": null, "_type": "Value"}, "aspect": {"num_classes": 3, "names": ["meaning", "fluency", "simplicity"], "names_file": null, "id": null, "_type": "ClassLabel"}, "worker_id": {"dtype": "int32", "id": null, "_type": "Value"}, "rating": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "asset", "config_name": "ratings", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 1036853, "num_examples": 4500, "dataset_name": "asset"}}, "download_checksums": {"https://github.com/facebookresearch/asset/raw/master/human_ratings/human_ratings.csv": {"num_bytes": 1012140, "checksum": "09ea6ee887af56be380334677ca2f3ba561f67abbb64ef1e6382fadf440d5593"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.orig": {"num_bytes": 234319, "checksum": "83dca90b5c53365a9c4a70222aed129c6df3c3f6b5da82ee94312e179b93fff1"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig": {"num_bytes": 43745, "checksum": "673ceb2672a37168a52040d75e16f9ffd1e3777b9f68e19207f2adf6542723f1"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.0": {"num_bytes": 193975, "checksum": "27b0c4a40c91b875c82a8ed76ff7cf0476b03a3a6998a2e3ef6e18000efda624"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.1": {"num_bytes": 180040, "checksum": "5043e9db5934c3d538b91f56d23466177813896da10f153d2b16c0c415ac5e84"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.2": {"num_bytes": 187445, "checksum": "578dd487cf03f6f66bb4acd2c44b464c3ac9fb42dc64b8e6afd391778ebc7ea7"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.3": {"num_bytes": 207902, "checksum": "4ab95ba6f7a60adde2f57201c0b749384ce64c97f8378fb0ab185367709a8386"}, 
"https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.4": {"num_bytes": 211459, "checksum": "f6f1d4bf9f87b532b37d7f5700ec384f817dba10247c91df629e1f6eee6c3aa9"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.5": {"num_bytes": 194418, "checksum": "182114cfbb2960358b0e2d71737ead9a2abb0c27d3f65281335f079ae4447e3b"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.6": {"num_bytes": 188962, "checksum": "aba27b505dad048982e902a04c4ffc5ab9e926b38d2383920aea798fd42de376"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.7": {"num_bytes": 196950, "checksum": "73d7983107eca6b98a9aec62ea75ea8d5adf313755b7e105608f212dead124cd"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.8": {"num_bytes": 213056, "checksum": "58f8e1109d87a4e3c5403705b8782750bb849cb8389a698156e00ca6512dd5c4"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.valid.simp.9": {"num_bytes": 220962, "checksum": "35a861e174ce5458fdcd1866e49aa84297fcb6b51fd98d6633b71932a646832e"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.0": {"num_bytes": 35457, "checksum": "66f36029d0c732eb92886021faefe531c6cfd0a32bdbe7ae4aa97fd45bd1b046"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.1": {"num_bytes": 34096, "checksum": "d323ceb364abbe84c79b14b028aa1ff563cd94955fbab19049612548dbb0f83f"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.2": {"num_bytes": 34348, "checksum": "786b55f8425ce4a993e98be5e2bea9ef87bf536b96dc13f7a57c4733fdb63e06"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.3": {"num_bytes": 37292, "checksum": "e211c9e2ede1dfe315097132dbe4feda76b309bdc636a5394cb5d2664ba5bf52"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.4": {"num_bytes": 35887, "checksum": 
"37be9cf0592c0f68d87848dc9c442fe62f344518c1993896c00788bf943b755d"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.5": {"num_bytes": 35351, "checksum": "8485210573a3bd76116de8e978b227677c6c207111a4938729397c4e603dfa46"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.6": {"num_bytes": 35846, "checksum": "f0cb3ab823d23203ea044f81bd7e67cc823db0632095e43b78a54a9891a0b0a8"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.7": {"num_bytes": 34560, "checksum": "35cbb8b9964252a1470607634f19ad946c6bc2951b3e500eedd826baf12bd3c8"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.8": {"num_bytes": 35830, "checksum": "047b6419590b88f93b435d3177bba1883dc9c0dc178676e48470b408236446f4"}, "https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.simp.9": {"num_bytes": 35313, "checksum": "3f5745e4f2743563b88ea4284ec35fa4ddb68d62de80b63ffb87751b998fe6b8"}}, "download_size": 3639353, "post_processing_size": null, "dataset_size": 1036853, "size_in_bytes": 4676206}}
dummy/ratings/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90ebef546959b28e12da25fc66eff7b338d1373119e5e103c59a5d2ec7d79ff1
+ size 10735
dummy/simplification/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90ebef546959b28e12da25fc66eff7b338d1373119e5e103c59a5d2ec7d79ff1
+ size 10735