Commit e25af6f
system (HF staff) committed
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,183 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - machine-generated
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - conditional-text-generation-other-grammatical-error-correction
+ ---
+
+ # Dataset Card for TMU-GFM-Dataset
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [N/A]
+ - **Repository:** https://github.com/tmu-nlp/TMU-GFM-Dataset
+ - **Paper:** [SOME: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction](https://www.aclweb.org/anthology/2020.coling-main.573.pdf)
+ - **Leaderboard:** [N/A]
+ - **Point of Contact:** Check the paper.
+
+ ### Dataset Summary
+
+ The authors collected manual evaluations of grammaticality, fluency, and meaning preservation for the system outputs of 1,381 sentences from CoNLL 2013.
+ To collect manual evaluations for a variety of system outputs, each source sentence was corrected by five typical systems: statistical machine translation (SMT) (Grundkiewicz and Junczys-Dowmunt, 2018), a recurrent neural network (RNN) (Luong et al., 2015), a convolutional neural network (CNN) (Chollampatt and Ng, 2018), a self-attention network (SAN) (Vaswani et al., 2017), and a SAN with a copy mechanism (SAN+Copy) (Zhao et al., 2019).
+ Manual evaluations of grammaticality, fluency, and meaning preservation were assigned to a total of 4,223 sentences.
+
+ ### Supported Tasks and Leaderboards
+
+ Grammatical Error Correction
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the TMU-GFM-Dataset looks as follows:
+
+ ```
+ {'ave_f': 3.4000000953674316,
+  'ave_g': 3.4000000953674316,
+  'ave_m': 3.5999999046325684,
+  'fluency': [3, 4, 3, 4, 3],
+  'grammer': [3, 4, 3, 4, 3],
+  'meaning': [3, 4, 4, 4, 3],
+  'output': 'After all, there will be an endless battle between the technology and human mentality.',
+  'source': 'Afterall there will be an endless battle between the technology and human mentality.',
+  'system': 'lstm,cnn'}
+ ```
+
+ ### Data Fields
+
+ There are 9 columns in the tmu-gfm-dataset:
+
+ - source: the source sentence.
+ - output: the system output sentence.
+ - grammer: grammaticality annotations by 5 annotators (the misspelling is carried over from the original data).
+ - fluency: fluency annotations by 5 annotators.
+ - meaning: meaning-preservation annotations by 5 annotators.
+ - system: which system the output sentence comes from.
+ - ave_g: the average grammaticality score.
+ - ave_f: the average fluency score.
+ - ave_m: the average meaning-preservation score.
+
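+ As a quick check of these fields, the dataset can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is published on the Hub under the name `tmu_gfm_dataset` used by the loading script below:
+
+ ```python
+ from datasets import load_dataset
+
+ # The loading script exposes a single "train" split.
+ dataset = load_dataset("tmu_gfm_dataset", split="train")
+
+ example = dataset[0]
+ print(example["source"])   # uncorrected source sentence
+ print(example["output"])   # a system's corrected output
+ print(example["grammer"])  # five annotator scores, e.g. [3, 4, 3, 4, 3]
+ print(example["ave_g"])    # their average, as a float
+ ```
+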
+ ### Data Splits
+
+ The authors divided the dataset into train/dev/test splits of 3,376/422/423 sentences and used them for fine-tuning BERT in their paper.
+
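+ Note that the loading script below ships only a single train split of 4,221 examples; the paper's partition is not distributed with it. A hypothetical way to carve out splits of matching sizes (not the authors' actual assignment):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("tmu_gfm_dataset", split="train")  # 4,221 examples
+
+ # Reserve 423 examples for test, then 422 for dev, mirroring the paper's
+ # 3,376/422/423 sizes. This seed-based split is a sketch, not the authors'.
+ step1 = dataset.train_test_split(test_size=423, seed=42)
+ step2 = step1["train"].train_test_split(test_size=422, seed=42)
+ train, dev, test = step2["train"], step2["test"], step1["test"]
+ print(len(train), len(dev), len(test))  # 3376 422 423
+ ```
+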
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The authors proposed a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC).
+ They note that previous studies have shown reference-less metrics to be promising; however, existing metrics were not optimized for manual evaluation of system outputs because no dataset of system outputs with manual evaluations existed.
+ To achieve a better correlation with manual evaluation, they created a dataset for optimizing each sub-metric to the manual evaluation of GEC systems. Their annotators evaluated the outputs of five typical GEC systems.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The authors collected manual evaluations of grammaticality, fluency, and meaning preservation for the system outputs of 1,381 sentences from CoNLL 2013.
+ To collect manual evaluations for a variety of system outputs, each source sentence was corrected by five typical systems: statistical machine translation (SMT) (Grundkiewicz and Junczys-Dowmunt, 2018), a recurrent neural network (RNN) (Luong et al., 2015), a convolutional neural network (CNN) (Chollampatt and Ng, 2018), a self-attention network (SAN) (Vaswani et al., 2017), and a SAN with a copy mechanism (SAN+Copy) (Zhao et al., 2019).
+
+ #### Who are the source language producers?
+
+ Machine-generated.
+
+ ### Annotations
+
+ #### Annotation process
+
+ After excluding duplicate corrected sentences, manual evaluations of grammaticality, fluency, and meaning preservation were assigned to a total of 4,223 sentences, as follows:
+ - Grammaticality: Annotators evaluated the grammatical correctness of the system output. The authors followed the five-point scale (4: Perfect, 3: Comprehensible, 2: Somewhat comprehensible, 1: Incomprehensible, and 0: Other) proposed by Heilman et al. (2014).
+ - Fluency: Annotators evaluated how natural the sentence sounds to native speakers. The authors followed the criteria (4: Extremely natural, 3: Somewhat natural, 2: Somewhat unnatural, and 1: Extremely unnatural) proposed by Lau et al. (2015).
+ - Meaning preservation: Annotators evaluated the extent to which the meaning of the source sentence is preserved in the system output. The authors followed the criteria (4: Identical, 3: Minor differences, 2: Moderate differences, 1: Substantially different, and 0: Other) proposed by Xu et al. (2016).
+
+ Finally, the authors created a dataset with manual evaluations for a total of 4,221 sentences, excluding sentences for which three or more annotators answered "0: Other."
+
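+ The sketch below assumes the "0: Other" check applies to the two criteria that offer a 0 label (grammaticality and meaning preservation); the exact procedure is described in the paper, and the released data should already satisfy it:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("tmu_gfm_dataset", split="train")
+
+ def keep(example):
+     # Drop a sentence if three or more annotators answered "0: Other"
+     # on either criterion that has a 0 option.
+     return all(
+         sum(score == 0 for score in example[field]) < 3
+         for field in ("grammer", "meaning")
+     )
+
+ filtered = dataset.filter(keep)
+ print(len(filtered))  # should equal len(dataset) if this matches the paper's rule
+ ```
+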
+ #### Who are the annotators?
+
+ Five native English annotators recruited via Amazon Mechanical Turk.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ @inproceedings{yoshimura-etal-2020-reference,
+     title = "{SOME}: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction",
+     author = "Yoshimura, Ryoma and
+       Kaneko, Masahiro and
+       Kajiwara, Tomoyuki and
+       Komachi, Mamoru",
+     booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
+     month = dec,
+     year = "2020",
+     address = "Barcelona, Spain (Online)",
+     publisher = "International Committee on Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.coling-main.573",
+     pages = "6516--6522",
+     abstract = "We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC). Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluations of the system outputs because no dataset of the system output exists with manual evaluation. This study manually evaluates outputs of GEC systems to optimize the metrics. Experimental results show that the proposed metric improves correlation with the manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available.",
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in Yoshimura et al. (2020).\n", "citation": "@inproceedings{yoshimura-etal-2020-reference,\n title = \"{SOME}: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction\",\n author = \"Yoshimura, Ryoma and\n Kaneko, Masahiro and\n Kajiwara, Tomoyuki and\n Komachi, Mamoru\",\n booktitle = \"Proceedings of the 28th International Conference on Computational Linguistics\",\n month = dec,\n year = \"2020\",\n address = \"Barcelona, Spain (Online)\",\n publisher = \"International Committee on Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.coling-main.573\",\n pages = \"6516--6522\",\n abstract = \"We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC). Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluations of the system outputs because no dataset of the system output exists with manual evaluation. This study manually evaluates outputs of GEC systems to optimize the metrics. Experimental results show that the proposed metric improves correlation with the manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available.\",\n}\n", "homepage": "https://github.com/tmu-nlp/TMU-GFM-Dataset", "license": "", "features": {"source": {"dtype": "string", "id": null, "_type": "Value"}, "output": {"dtype": "string", "id": null, "_type": "Value"}, "grammer": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "fluency": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "meaning": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "system": {"dtype": "string", "id": null, "_type": "Value"}, "ave_g": {"dtype": "float32", "id": null, "_type": "Value"}, "ave_f": {"dtype": "float32", "id": null, "_type": "Value"}, "ave_m": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tmu_gfm_dataset", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1446144, "num_examples": 4221, "dataset_name": "tmu_gfm_dataset"}}, "download_checksums": {"https://raw.githubusercontent.com/tmu-nlp/TMU-GFM-Dataset/main/tmu-gfm-dataset.csv": {"num_bytes": 1270197, "checksum": "4e3ede9107aa4b4180b0912a11d67999060f9257c9be82e60b3e379ca2aac716"}}, "download_size": 1270197, "post_processing_size": null, "dataset_size": 1446144, "size_in_bytes": 2716341}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35600c05e476827892305a91b9f33081a09282c6f5e69bc9c7676a736ae912ef
+ size 641
tmu_gfm_dataset.py ADDED
@@ -0,0 +1,115 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """TMU-GFM-Dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{yoshimura-etal-2020-reference,
+     title = "{SOME}: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction",
+     author = "Yoshimura, Ryoma and
+       Kaneko, Masahiro and
+       Kajiwara, Tomoyuki and
+       Komachi, Mamoru",
+     booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
+     month = dec,
+     year = "2020",
+     address = "Barcelona, Spain (Online)",
+     publisher = "International Committee on Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.coling-main.573",
+     pages = "6516--6522",
+     abstract = "We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC). Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluations of the system outputs because no dataset of the system output exists with manual evaluation. This study manually evaluates outputs of GEC systems to optimize the metrics. Experimental results show that the proposed metric improves correlation with the manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available.",
+ }
+ """
+
+ _DESCRIPTION = """\
+ A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. \
+ More detail about the creation of the dataset can be found in Yoshimura et al. (2020).
+ """
+
+ _HOMEPAGE = "https://github.com/tmu-nlp/TMU-GFM-Dataset"
+
+ _LICENSE = ""
+
+ _URLs = {
+     "default": "https://raw.githubusercontent.com/tmu-nlp/TMU-GFM-Dataset/main/tmu-gfm-dataset.csv",
+ }
+
+
+ class TmuGfmDataset(datasets.GeneratorBasedBuilder):
+     """TMU-GFM-Dataset."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "source": datasets.Value("string"),
+                 "output": datasets.Value("string"),
+                 # "grammer" (sic) matches the column name in the source CSV.
+                 "grammer": datasets.Sequence(datasets.Value("int32")),
+                 "fluency": datasets.Sequence(datasets.Value("int32")),
+                 "meaning": datasets.Sequence(datasets.Value("int32")),
+                 "system": datasets.Value("string"),
+                 "ave_g": datasets.Value("float"),
+                 "ave_f": datasets.Value("float"),
+                 "ave_m": datasets.Value("float"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data_url = dl_manager.download(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": data_url,
+                     "split": "train",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as f:
+             data = csv.reader(f)
+             _ = next(data)  # skip the CSV header row
+             for id_, row in enumerate(data):
+                 yield id_, {
+                     "source": row[0],
+                     "output": row[1],
+                     # Scores are stored as comma-separated strings; the
+                     # int32 Sequence features cast them when encoding.
+                     "grammer": row[2].split(","),
+                     "fluency": row[3].split(","),
+                     "meaning": row[4].split(","),
+                     "system": row[5],
+                     "ave_g": row[6],
+                     "ave_f": row[7],
+                     "ave_m": row[8],
+                 }
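+
+ # Usage sketch (an illustrative addition, not part of the original script):
+ # the loader can be exercised from a local checkout, assuming the CSV URL
+ # above is reachable:
+ #
+ #     from datasets import load_dataset
+ #     ds = load_dataset("./tmu_gfm_dataset.py", split="train")
+ #     print(ds[0]["source"])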