system (HF staff) committed
Commit
dd29e5c
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,189 @@
+ ---
+ annotations_creators:
+   monolingual:
+   - no-annotation
+   monolingual_raw:
+   - found
+   parallel:
+   - expert-generated
+   parallel_raw:
+   - expert-generated
+ language_creators:
+ - found
+ languages:
+   monolingual:
+   - chr
+   - en
+   monolingual_raw:
+   - chr
+   parallel:
+   - chr
+   - en
+   parallel_raw:
+   - chr
+   - en
+ licenses:
+ - other-different-license-per-source
+ multilinguality:
+   monolingual:
+   - multilingual
+   monolingual_raw:
+   - monolingual
+   parallel:
+   - translation
+   parallel_raw:
+   - translation
+ size_categories:
+   monolingual:
+   - 100K<n<1M
+   monolingual_raw:
+   - 1K<n<10K
+   parallel:
+   - 10K<n<100K
+   parallel_raw:
+   - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+   monolingual:
+   - conditional-text-generation
+   monolingual_raw:
+   - sequence-modeling
+   parallel:
+   - conditional-text-generation
+   parallel_raw:
+   - conditional-text-generation
+ task_ids:
+   monolingual:
+   - machine-translation
+   monolingual_raw:
+   - language-modeling
+   parallel:
+   - machine-translation
+   parallel_raw:
+   - machine-translation
+ ---
+
+ # Dataset Card for ChrEn
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Repository:** [Github repository for ChrEn](https://github.com/ZhangShiyue/ChrEn)
+ - **Paper:** [ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization](https://arxiv.org/abs/2010.04791)
+ - **Point of Contact:** [benfrey@email.unc.edu](mailto:benfrey@email.unc.edu)
+
+ ### Dataset Summary
+
+ ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.
+ ChrEn is extremely low-resource: it contains 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.
+ ChrEn also contains 5k monolingual Cherokee sentences to enable semi-supervised learning.
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset is intended to be used for `machine-translation` between English (`en`) and Cherokee (`chr`). A minimal loading example is sketched below.
+
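+ For illustration, a minimal sketch of loading the default `parallel` configuration with the `datasets` library (this assumes the dataset is available under the `chr_en` name; the split names follow the loading script included in this commit):
+
+ ```python
+ from datasets import load_dataset
+
+ # Text-only sentence pairs; splits are train, dev, out_dev, test, out_test.
+ chr_en = load_dataset("chr_en", "parallel")
+ print(chr_en["train"][0]["sentence_pair"])  # {"en": "...", "chr": "..."}
+ ```
+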
+ ### Languages
+
+ The dataset contains English (`en`) and Cherokee (`chr`) text. The data encompasses both existing dialects of Cherokee: the Overhill dialect, mostly spoken in Oklahoma (OK), and the Middle dialect, mostly used in North Carolina (NC).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
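+ While full instances are not yet documented here, the feature definitions in the loading script imply the following shapes (placeholder values, not actual data): a `parallel` example looks like `{"sentence_pair": {"en": "...", "chr": "..."}}`, a `monolingual` example looks like `{"sentence": "..."}`, and the `*_raw` configurations add source metadata fields such as `text_title`, `speaker`, `date`, `type`, and `dialect`.
+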
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ Many of the source texts were translations of English materials, which means that the Cherokee structures may not be 100% natural in terms of what a speaker might spontaneously produce. Each text was translated by people who speak Cherokee as their first language, which means there is a high probability of grammaticality. These data were originally available in PDF form; we applied Optical Character Recognition (OCR) via the Tesseract OCR engine to extract the Cherokee and English text, along the lines of the sketch below.
+
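+ For illustration only, a minimal sketch of a PDF-to-text OCR pipeline of this kind, assuming the `pdf2image` and `pytesseract` packages and a Tesseract installation with the Cherokee (`chr`) language model; the authors' exact tooling beyond Tesseract is not specified:
+
+ ```python
+ from pdf2image import convert_from_path  # renders PDF pages to PIL images
+ import pytesseract
+
+ # Render each page of a (hypothetical) source PDF and run OCR on it.
+ pages = convert_from_path("source_document.pdf", dpi=300)
+ text = "\n".join(
+     pytesseract.image_to_string(page, lang="chr+eng") for page in pages
+ )
+ print(text[:500])
+ ```
+
+ The OCR output still required extensive manual correction, as described under [Annotations](#annotations).
+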
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ The sentences were manually aligned by Dr. Benjamin Frey, a proficient second-language speaker of Cherokee, who also fixed the errors introduced by OCR. This process was time-consuming and took several months.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was gathered and annotated by Shiyue Zhang, Benjamin Frey, and Mohit Bansal at UNC Chapel Hill.
+
+ ### Licensing Information
+
+ The copyright of the data belongs to the original book/article authors or translators (hence, the data may be used for research purposes only; please contact Dr. Benjamin Frey with other copyright questions).
+
+ ### Citation Information
+
+ ```
+ @inproceedings{zhang2020chren,
+   title={ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization},
+   author={Zhang, Shiyue and Frey, Benjamin and Bansal, Mohit},
+   booktitle={EMNLP2020},
+   year={2020}
+ }
+ ```
chr_en.py ADDED
@@ -0,0 +1,203 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ChrEn: Cherokee-English Machine Translation data"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import openpyxl  # noqa: requires this pandas optional dependency for reading xlsx files
+ import pandas as pd
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{zhang2020chren,
+   title={ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization},
+   author={Zhang, Shiyue and Frey, Benjamin and Bansal, Mohit},
+   booktitle={EMNLP2020},
+   year={2020}
+ }
+ """
+
+ _DESCRIPTION = """\
+ ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.
+ ChrEn is extremely low-resource: it contains 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.
+ ChrEn also contains 5k monolingual Cherokee sentences to enable semi-supervised learning.
+ """
+
+ _HOMEPAGE = "https://github.com/ZhangShiyue/ChrEn"
+
+ _LICENSE = ""
+
+ _URLs = {
+     "monolingual_raw": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/raw/monolingual_data.xlsx",
+     "parallel_raw": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/raw/parallel_data.xlsx",
+     "monolingual_chr": "https://raw.githubusercontent.com/ZhangShiyue/ChrEn/main/data/monolingual/chr",
+     "monolingual_en5000": "https://raw.githubusercontent.com/ZhangShiyue/ChrEn/main/data/monolingual/en5000",
+     "monolingual_en10000": "https://raw.githubusercontent.com/ZhangShiyue/ChrEn/main/data/monolingual/en10000",
+     "monolingual_en20000": "https://raw.githubusercontent.com/ZhangShiyue/ChrEn/main/data/monolingual/en20000",
+     "monolingual_en50000": "https://raw.githubusercontent.com/ZhangShiyue/ChrEn/main/data/monolingual/en50000",
+     "monolingual_en100000": "https://raw.githubusercontent.com/ZhangShiyue/ChrEn/main/data/monolingual/en100000",
+     "parallel_train.chr": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/train.chr",
+     "parallel_train.en": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/train.en",
+     "parallel_dev.chr": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/dev.chr",
+     "parallel_dev.en": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/dev.en",
+     "parallel_out_dev.chr": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/out_dev.chr",
+     "parallel_out_dev.en": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/out_dev.en",
+     "parallel_test.chr": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/test.chr",
+     "parallel_test.en": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/test.en",
+     "parallel_out_test.chr": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/out_test.chr",
+     "parallel_out_test.en": "https://github.com/ZhangShiyue/ChrEn/raw/main/data/parallel/out_test.en",
+ }
+
+
+ class ChrEn(datasets.GeneratorBasedBuilder):
+     """ChrEn: Cherokee-English Machine Translation data."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="monolingual_raw", version=VERSION, description="Monolingual data with metadata"),
+         datasets.BuilderConfig(name="parallel_raw", version=VERSION, description="Parallel data with metadata"),
+         datasets.BuilderConfig(name="monolingual", version=VERSION, description="Monolingual data, text only"),
+         datasets.BuilderConfig(
+             name="parallel", version=VERSION, description="Parallel data, text pairs only, with the default splits"
+         ),
+     ]
+
+     # Text-only parallel data is the most common use case, so it is the default configuration.
+     DEFAULT_CONFIG_NAME = "parallel"
+
+     def _info(self):
+         if self.config.name == "monolingual_raw":
+             # Monolingual sentences together with their source metadata.
+             features = datasets.Features(
+                 {
+                     "text_sentence": datasets.Value("string"),
+                     "text_title": datasets.Value("string"),
+                     "speaker": datasets.Value("string"),
+                     "date": datasets.Value("int32"),
+                     "type": datasets.Value("string"),
+                     "dialect": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "parallel_raw":
+             # Sentence pairs together with their source metadata.
+             features = datasets.Features(
+                 {
+                     "line_number": datasets.Value("string"),  # doesn't always map to a number
+                     "sentence_pair": datasets.Translation(languages=["en", "chr"]),
+                     "text_title": datasets.Value("string"),
+                     "speaker": datasets.Value("string"),
+                     "date": datasets.Value("int32"),
+                     "type": datasets.Value("string"),
+                     "dialect": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "parallel":
+             # Text-only sentence pairs.
+             features = datasets.Features(
+                 {
+                     "sentence_pair": datasets.Translation(languages=["en", "chr"]),
+                 }
+             )
+         elif self.config.name == "monolingual":
+             # Text-only monolingual sentences.
+             features = datasets.Features(
+                 {
+                     "sentence": datasets.Value("string"),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,  # defined above because they differ between configurations
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download(_URLs)
+         if self.config.name in ["monolingual_raw", "parallel_raw"]:
+             # The raw configurations expose a single "full" split.
+             return [
+                 datasets.SplitGenerator(
+                     name="full",
+                     gen_kwargs={
+                         "filepaths": data_dir,
+                         "split": "full",
+                     },
+                 )
+             ]
+         elif self.config.name == "monolingual":
+             # One split per monolingual source file.
+             return [
+                 datasets.SplitGenerator(
+                     name=spl,
+                     gen_kwargs={
+                         "filepaths": data_dir,
+                         "split": spl,
+                     },
+                 )
+                 for spl in ["chr", "en5000", "en10000", "en20000", "en50000", "en100000"]
+             ]
+         else:
+             # The parallel configuration provides in-domain and out-of-domain evaluation splits.
+             return [
+                 datasets.SplitGenerator(
+                     name=spl,
+                     gen_kwargs={
+                         "filepaths": data_dir,
+                         "split": spl,
+                     },
+                 )
+                 for spl in ["train", "dev", "out_dev", "test", "out_test"]
+             ]
+
+     def _generate_examples(self, filepaths, split):
+         if self.config.name == "monolingual_raw":
+             keys = ["text_sentence", "text_title", "speaker", "date", "type", "dialect"]
+             with open(filepaths["monolingual_raw"], "rb") as f:
+                 monolingual = pd.read_excel(f, engine="openpyxl")
+             for id_, row in enumerate(monolingual.itertuples()):
+                 yield id_, dict(zip(keys, row[1:]))
+         elif self.config.name == "parallel_raw":
+             keys = ["line_number", "en_sent", "chr_sent", "text_title", "speaker", "date", "type", "dialect"]
+             with open(filepaths["parallel_raw"], "rb") as f:
+                 parallel = pd.read_excel(f, engine="openpyxl")
+             for id_, row in enumerate(parallel.itertuples()):
+                 res = dict(zip(keys, row[1:]))
+                 res["sentence_pair"] = {"en": res["en_sent"], "chr": res["chr_sent"]}
+                 res["line_number"] = str(res["line_number"])
+                 del res["en_sent"]
+                 del res["chr_sent"]
+                 yield id_, res
+         elif self.config.name == "monolingual":
+             # Each line of the plain-text file is one sentence.
+             with open(filepaths[f"monolingual_{split}"], encoding="utf-8") as f:
+                 for id_, line in enumerate(f):
+                     yield id_, {"sentence": line.strip()}
+         elif self.config.name == "parallel":
+             # The English and Cherokee files are line-aligned.
+             with open(filepaths[f"parallel_{split}.en"], encoding="utf-8") as f_en, open(
+                 filepaths[f"parallel_{split}.chr"], encoding="utf-8"
+             ) as f_chr:
+                 for id_, (line_en, line_chr) in enumerate(zip(f_en, f_chr)):
+                     yield id_, {"sentence_pair": {"en": line_en.strip(), "chr": line_chr.strip()}}
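A minimal sketch of exercising the other configurations this script defines (again assuming the dataset loads under the `chr_en` name):

```python
from datasets import load_dataset

# The raw configurations expose a single "full" split carrying source metadata.
parallel_raw = load_dataset("chr_en", "parallel_raw")
print(parallel_raw["full"][0]["dialect"])

# The monolingual configuration's splits are named after the source files,
# e.g. "chr" for Cherokee and "en5000" through "en100000" for English subsets.
mono_chr = load_dataset("chr_en", "monolingual", split="chr")
print(mono_chr[0]["sentence"])
```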
dummy/monolingual/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:751f5c147cf7cb9220c140e5ba9df6c3b7bd4cc6b9b0245b400d53b08187e9d1
+ size 29105
dummy/monolingual_raw/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:751f5c147cf7cb9220c140e5ba9df6c3b7bd4cc6b9b0245b400d53b08187e9d1
+ size 29105
dummy/parallel/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:751f5c147cf7cb9220c140e5ba9df6c3b7bd4cc6b9b0245b400d53b08187e9d1
+ size 29105
dummy/parallel_raw/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:751f5c147cf7cb9220c140e5ba9df6c3b7bd4cc6b9b0245b400d53b08187e9d1
+ size 29105