system HF staff committed on
Commit
a54bfe8
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,157 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ - other-language-learner
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - conditional-text-generation-other-grammatical-error-correction
+ ---
+
+ # Dataset Card for Cambridge English Write & Improve + LOCNESS Dataset
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
+ - **Repository:**
+ - **Paper:** https://www.aclweb.org/anthology/W19-4406/
+ - **Leaderboard:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#results
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native English students with their writing. Specifically, students from around the world submit letters, stories, articles and essays in response to various prompts, and the W&I system provides instant feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these submissions and assigned them a CEFR level.
+
+ The LOCNESS corpus (Granger, 1998) consists of essays written by native English students. It was originally compiled by researchers at the Centre for English Corpus Linguistics at the University of Louvain. Since native English students also sometimes make mistakes, we asked the W&I annotators to annotate a subsection of LOCNESS so researchers can test the effectiveness of their systems on the full range of English levels and abilities.
+
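+ The two portions are exposed as separate configurations (`wi` and `locness`) by the loading script in this repository. A minimal loading sketch with the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Write & Improve portion: learner essays at CEFR levels A, B and C
+ wi = load_dataset("wi_locness", "wi")  # train + validation splits
+
+ # LOCNESS portion: native-speaker essays
+ locness = load_dataset("wi_locness", "locness")  # validation split only
+
+ print(wi["train"][0]["text"])
+ ```
+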
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @inproceedings{bryant-etal-2019-bea,
+     title = "The {BEA}-2019 Shared Task on Grammatical Error Correction",
+     author = "Bryant, Christopher and
+       Felice, Mariano and
+       Andersen, {\O}istein E. and
+       Briscoe, Ted",
+     booktitle = "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
+     month = aug,
+     year = "2019",
+     address = "Florence, Italy",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/W19-4406",
+     doi = "10.18653/v1/W19-4406",
+     pages = "52--75",
+     abstract = "This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.",
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native\nEnglish students with their writing. Specifically, students from around the world submit letters,\nstories, articles and essays in response to various prompts, and the W&I system provides instant\nfeedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these\nsubmissions and assigned them a CEFR level.\n", "citation": "@inproceedings{bryant-etal-2019-bea,\n title = \"The {BEA}-2019 Shared Task on Grammatical Error Correction\",\n author = \"Bryant, Christopher and\n Felice, Mariano and\n Andersen, {\\O}istein E. and\n Briscoe, Ted\",\n booktitle = \"Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications\",\n month = aug,\n year = \"2019\",\n address = \"Florence, Italy\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/W19-4406\",\n doi = \"10.18653/v1/W19-4406\",\n pages = \"52--75\",\n abstract = \"This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.\",\n}\n", "homepage": "https://www.cl.cam.ac.uk/research/nl/bea2019st/#data", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "userid": {"dtype": "string", "id": null, "_type": "Value"}, "cefr": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "edits": {"feature": {"start": {"dtype": "int32", "id": null, "_type": "Value"}, "end": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wi_locness", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4375795, "num_examples": 3000, "dataset_name": "wi_locness"}, "validation": {"name": "validation", "num_bytes": 447055, "num_examples": 300, "dataset_name": "wi_locness"}}, "download_checksums": {"https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz": {"num_bytes": 6120469, "checksum": "d5cbf68cda3da0c3af69dd672614d07287bfe996b87da0c75051d5349d76c666"}}, "download_size": 6120469, "post_processing_size": null, "dataset_size": 4822850, "size_in_bytes": 10943319}, "wi": {"description": "Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native\nEnglish students with their writing. Specifically, students from around the world submit letters,\nstories, articles and essays in response to various prompts, and the W&I system provides instant\nfeedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these\nsubmissions and assigned them a CEFR level.\n", "citation": "@inproceedings{bryant-etal-2019-bea,\n title = \"The {BEA}-2019 Shared Task on Grammatical Error Correction\",\n author = \"Bryant, Christopher and\n Felice, Mariano and\n Andersen, {\\O}istein E. and\n Briscoe, Ted\",\n booktitle = \"Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications\",\n month = aug,\n year = \"2019\",\n address = \"Florence, Italy\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/W19-4406\",\n doi = \"10.18653/v1/W19-4406\",\n pages = \"52--75\",\n abstract = \"This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.\",\n}\n", "homepage": "https://www.cl.cam.ac.uk/research/nl/bea2019st/#data", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "userid": {"dtype": "string", "id": null, "_type": "Value"}, "cefr": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "edits": {"feature": {"start": {"dtype": "int32", "id": null, "_type": "Value"}, "end": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wi_locness", "config_name": "wi", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4375795, "num_examples": 3000, "dataset_name": "wi_locness"}, "validation": {"name": "validation", "num_bytes": 447055, "num_examples": 300, "dataset_name": "wi_locness"}}, "download_checksums": {"https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz": {"num_bytes": 6120469, "checksum": "d5cbf68cda3da0c3af69dd672614d07287bfe996b87da0c75051d5349d76c666"}}, "download_size": 6120469, "post_processing_size": null, "dataset_size": 4822850, "size_in_bytes": 10943319}, "locness": {"description": "Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native\nEnglish students with their writing. Specifically, students from around the world submit letters,\nstories, articles and essays in response to various prompts, and the W&I system provides instant\nfeedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these\nsubmissions and assigned them a CEFR level.\n", "citation": "@inproceedings{bryant-etal-2019-bea,\n title = \"The {BEA}-2019 Shared Task on Grammatical Error Correction\",\n author = \"Bryant, Christopher and\n Felice, Mariano and\n Andersen, {\\O}istein E. and\n Briscoe, Ted\",\n booktitle = \"Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications\",\n month = aug,\n year = \"2019\",\n address = \"Florence, Italy\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/W19-4406\",\n doi = \"10.18653/v1/W19-4406\",\n pages = \"52--75\",\n abstract = \"This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.\",\n}\n", "homepage": "https://www.cl.cam.ac.uk/research/nl/bea2019st/#data", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "cefr": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "edits": {"feature": {"start": {"dtype": "int32", "id": null, "_type": "Value"}, "end": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wi_locness", "config_name": "locness", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 138176, "num_examples": 50, "dataset_name": "wi_locness"}}, "download_checksums": {"https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz": {"num_bytes": 6120469, "checksum": "d5cbf68cda3da0c3af69dd672614d07287bfe996b87da0c75051d5349d76c666"}}, "download_size": 6120469, "post_processing_size": null, "dataset_size": 138176, "size_in_bytes": 6258645}}
dummy/locness/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a644cb66e96cc2d828316c563b876789fbc3428565a8404c1578bc94c4632e0e
+ size 3099
dummy/wi/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46f8684af0d145b0b2d27e9ef87f5acab48dc1240877aeabec02c49b8c1b364f
+ size 6691
wi_locness.py ADDED
@@ -0,0 +1,222 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
+ English students with their writing. Specifically, students from around the world submit letters,
+ stories, articles and essays in response to various prompts, and the W&I system provides instant
+ feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these
+ submissions and assigned them a CEFR level.
+
+ The LOCNESS corpus (Granger, 1998) consists of essays written by native English students.
+ It was originally compiled by researchers at the Centre for English Corpus Linguistics at the
+ University of Louvain. Since native English students also sometimes make mistakes, we asked
+ the W&I annotators to annotate a subsection of LOCNESS so researchers can test the effectiveness
+ of their systems on the full range of English levels and abilities."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ from pathlib import Path
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{bryant-etal-2019-bea,
+     title = "The {BEA}-2019 Shared Task on Grammatical Error Correction",
+     author = "Bryant, Christopher and
+       Felice, Mariano and
+       Andersen, {\\O}istein E. and
+       Briscoe, Ted",
+     booktitle = "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
+     month = aug,
+     year = "2019",
+     address = "Florence, Italy",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/W19-4406",
+     doi = "10.18653/v1/W19-4406",
+     pages = "52--75",
+     abstract = "This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.",
+ }
+ """
+
+ _DESCRIPTION = """\
+ Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
+ English students with their writing. Specifically, students from around the world submit letters,
+ stories, articles and essays in response to various prompts, and the W&I system provides instant
+ feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these
+ submissions and assigned them a CEFR level.
+ """
+
+ _HOMEPAGE = "https://www.cl.cam.ac.uk/research/nl/bea2019st/#data"
+
+ _LICENSE = ""
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files
+ # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
+ _URL = "https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz"
+
+
+ class WiLocness(datasets.GeneratorBasedBuilder):
+     """\
+     Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
+     English students with their writing. Specifically, students from around the world submit letters,
+     stories, articles and essays in response to various prompts, and the W&I system provides instant
+     feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these
+     submissions and assigned them a CEFR level."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     # This is an example of a dataset with multiple configurations.
+     # If you don't want/need to define several sub-sets in your dataset,
+     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
+
+     # If you need to make complex sub-parts in the datasets with configurable options
+     # you can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig
+     # BUILDER_CONFIG_CLASS = MyBuilderConfig
+
+     # You will be able to load one or the other configuration in the following list with
+     # data = datasets.load_dataset('my_dataset', 'first_domain')
+     # data = datasets.load_dataset('my_dataset', 'second_domain')
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="wi",
+             version=VERSION,
+             description="This part of the dataset includes the Write & Improve data for levels A, B and C",
+         ),
+         datasets.BuilderConfig(
+             name="locness",
+             version=VERSION,
+             description="This part of the dataset includes the LOCNESS part of the W&I-LOCNESS dataset",
+         ),
+     ]
+
+     # DEFAULT_CONFIG_NAME = "first_domain"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+
+     def _info(self):
+         if self.config.name == "wi":
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "userid": datasets.Value("string"),
+                     "cefr": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "edits": datasets.Sequence(
+                         {
+                             "start": datasets.Value("int32"),
+                             "end": datasets.Value("int32"),
+                             "text": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             )
+         elif self.config.name == "locness":
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "cefr": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "edits": datasets.Sequence(
+                         {
+                             "start": datasets.Value("int32"),
+                             "end": datasets.Value("int32"),
+                             "text": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             )
+         else:
+             assert False
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # Here we define them above because they are different between the two configurations
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
+         # It can accept any type or nested list/dict and will give back the same structure with the URLs replaced with paths to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
+         data_dir = Path(dl_manager.download_and_extract(_URL)) / "wi+locness" / "json"
+
+         if self.config.name == "wi":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     # These kwargs will be passed to _generate_examples
+                     gen_kwargs={"filepath": data_dir, "split": "train"},
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     # These kwargs will be passed to _generate_examples
+                     gen_kwargs={"filepath": data_dir, "split": "validation"},
+                 ),
+             ]
+         elif self.config.name == "locness":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     # These kwargs will be passed to _generate_examples
+                     gen_kwargs={"filepath": data_dir, "split": "validation"},
+                 ),
+             ]
+         else:
+             assert False
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+
+         if split == "validation":
+             split = "dev"
+
+         if self.config.name == "wi":
+             levels = ["A", "B", "C"]
+         elif self.config.name == "locness":
+             levels = ["N"]
+         else:
+             assert False
+
+         for level in levels:
+             with open(filepath / f"{level}.{split}.json", "r", encoding="utf-8") as fp:
+                 for line in fp:
+                     o = json.loads(line)
+
+                     edits = []
+                     for (start, end, text) in o["edits"][0][1:][0]:
+                         edits.append({"start": start, "end": end, "text": text})
+
+                     out = {
+                         "id": o["id"],
+                         "cefr": o["cefr"],
+                         "text": o["text"],
+                         "edits": edits,
+                     }
+
+                     if self.config.name == "wi":
+                         out["userid"] = o.get("userid", "")
+
+                     yield o["id"], out
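
Judging by the indexing in `_generate_examples`, `o["edits"][0]` appears to be an annotator entry of the form `[annotator_id, [[start, end, correction], ...]]`, so `[1:][0]` selects that annotator's list of edit spans. Once loaded, a `Sequence` of dicts comes back as a dict of lists. A minimal sketch of applying the spans to produce a corrected text, assuming `start`/`end` are character offsets into `text` and that a missing correction surfaces as `None` (both assumptions about the source data, not guarantees of the script above):

```python
from datasets import load_dataset


def apply_edits(text, edits):
    """Apply (start, end, correction) spans to `text`, right-to-left so
    that earlier character offsets are not shifted by replacements."""
    spans = sorted(
        zip(edits["start"], edits["end"], edits["text"]),
        key=lambda span: span[0],
        reverse=True,
    )
    for start, end, correction in spans:
        if correction is None:  # span flagged without a correction (assumption)
            continue
        text = text[:start] + correction + text[end:]
    return text


wi = load_dataset("wi_locness", "wi")
example = wi["train"][0]
print(apply_edits(example["text"], example["edits"]))
```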