Davlan committed on
Commit
a35d7f9
1 Parent(s): 0ce0dc3

Upload 2 files

Files changed (2)
  1. README.md +261 -1
  2. masakhapos.py +203 -0
README.md CHANGED
@@ -1,3 +1,263 @@
  ---
- license: mit
+ annotations_creators:
+ - expert-generated
+ language:
+ - bm
+ - bbj
+ - ee
+ - fon
+ - ha
+ - ig
+ - rw
+ - lg
+ - luo
+ - mos
+ - ny
+ - pcm
+ - sn
+ - sw
+ - tn
+ - tw
+ - wo
+ - xh
+ - yo
+ - zu
+ language_creators:
+ - expert-generated
+ license:
+ - afl-3.0
+ multilinguality:
+ - multilingual
+ pretty_name: MasakhaPOS
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ tags:
+ - pos
+ - masakhapos
+ - masakhane
+ task_categories:
+ - token-classification
+ task_ids:
+ - part-of-speech
+
  ---
+
+
+ # Dataset Card for MasakhaPOS
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [homepage](https://github.com/masakhane-io/masakhane-pos/)
+ - **Repository:** [github](https://github.com/masakhane-io/masakhane-pos/)
+ - **Paper:** [paper](https://aclanthology.org/2023.acl-long.609/)
+ - **Point of Contact:** [Masakhane](https://www.masakhane.io/) or didelani@lsv.uni-saarland.de
+
+ ### Dataset Summary
+
+ MasakhaPOS is the largest publicly available high-quality dataset for part-of-speech (POS) tagging in 20 African languages. The languages covered are listed in the [Languages](#languages) section below.
+
+ The train/validation/test sets are available for all 20 languages.
+
+ For more details, see https://aclanthology.org/2023.acl-long.609/
+
+
+ ### Supported Tasks and Leaderboards
+
+ - `part-of-speech`: Performance on this task is measured with [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) (higher is better).
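+
+ A minimal sketch of computing this metric (assuming the `evaluate` library) over the integer `upos` ids:
+
+ ```
+ import evaluate
+
+ # Load the accuracy metric
+ accuracy = evaluate.load("accuracy")
+
+ # Hypothetical predicted vs. gold upos ids for a short sentence
+ print(accuracy.compute(predictions=[0, 10, 16], references=[0, 10, 15]))
+ # {'accuracy': 0.6666666666666666}
+ ```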
+ ### Languages
+
+ There are 20 languages available:
+ - Bambara (bam)
+ - Ghomala (bbj)
+ - Ewe (ewe)
+ - Fon (fon)
+ - Hausa (hau)
+ - Igbo (ibo)
+ - Kinyarwanda (kin)
+ - Luganda (lug)
+ - Dholuo (luo)
+ - Mossi (mos)
+ - Chichewa (nya)
+ - Nigerian Pidgin (pcm)
+ - chiShona (sna)
+ - Kiswahili (swa)
+ - Setswana (tsn)
+ - Twi (twi)
+ - Wolof (wol)
+ - isiXhosa (xho)
+ - Yorùbá (yor)
+ - isiZulu (zul)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The examples look like this for Yorùbá:
+
+ ```
+ from datasets import load_dataset
+
+ # Please specify the language code, e.g. 'yor' for Yorùbá.
+ data = load_dataset('masakhane/masakhapos', 'yor')
+
+ # In the source files, sentences are separated by empty lines, with whitespace-separated tokens and tags.
+ # A loaded data point looks like this:
+ {'id': '0',
+  'upos': [0, 10, 10, 16, 0, 14, 0, 16, 0],
+  'tokens': ['Ọ̀gbẹ́ni', 'Nuhu', 'Adam', 'kúrò', 'nípò', 'bí', 'ẹní', 'yọ', 'jìgá']
+ }
+ ```
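+
+ As an illustrative sketch, the integer `upos` ids can be mapped back to tag names through the dataset's features:
+
+ ```
+ # Tag names in the order used by the integer ids
+ upos_names = data['train'].features['upos'].feature.names
+
+ example = data['train'][0]
+ # Pair each token with its human-readable POS tag
+ print([(tok, upos_names[i]) for tok, i in zip(example['tokens'], example['upos'])])
+ ```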
+
+ ### Data Fields
+
+ - `id`: id of the sample
+ - `tokens`: the tokens of the example text
+ - `upos`: the POS tags of each token
+
+ The POS tags correspond to this list:
+ ```
+ "NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ", "ADJ", "PART", "DET", "CCONJ", "PROPN", "PRON", "X", "_", "ADV", "INTJ", "VERB", "AUX"
+ ```
+
+ The definitions of the tags can be found on the [UD website](https://universaldependencies.org/u/pos/).
+
+ ### Data Splits
+
+ For all languages, there are three splits.
+
+ The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
+
+ The splits have the following sizes:
+
+ | Language        | train | validation | test |
+ |-----------------|------:|-----------:|------:|
+ | Bambara         |   775 |        154 |   619 |
+ | Ghomala         |   750 |        149 |   599 |
+ | Ewe             |   728 |        145 |   582 |
+ | Fon             |   810 |        161 |   646 |
+ | Hausa           |   753 |        150 |   601 |
+ | Igbo            |   803 |        160 |   642 |
+ | Kinyarwanda     |   757 |        151 |   604 |
+ | Luganda         |   733 |        146 |   586 |
+ | Luo             |   758 |        151 |   606 |
+ | Mossi           |   757 |        151 |   604 |
+ | Chichewa        |   728 |        145 |   582 |
+ | Nigerian-Pidgin |   752 |        150 |   600 |
+ | chiShona        |   747 |        149 |   596 |
+ | Kiswahili       |   693 |        138 |   553 |
+ | Setswana        |   754 |        150 |   602 |
+ | Akan/Twi        |   785 |        157 |   628 |
+ | Wolof           |   782 |        156 |   625 |
+ | isiXhosa        |   752 |        150 |   601 |
+ | Yoruba          |   893 |        178 |   713 |
+ | isiZulu         |   753 |        150 |   601 |
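+
+ As a quick sanity check (a sketch using the Yorùbá config), the table above can be compared against the loaded splits:
+
+ ```
+ from datasets import load_dataset
+
+ data = load_dataset('masakhane/masakhapos', 'yor')
+ # Expected per the table above: {'train': 893, 'validation': 178, 'test': 713}
+ print({split: data[split].num_rows for split in data})
+ ```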
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was created to provide new resources for 20 African languages that are under-served by natural language processing.
+
+ ### Source Data
+
+ The data comes from the news domain; details can be found at https://aclanthology.org/2023.acl-long.609/
+
+ #### Initial Data Collection and Normalization
+
+ The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
+
+ #### Who are the source language producers?
+
+ The source text was produced by journalists and writers employed by the news outlets from which the articles were collected.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Details can be found at https://aclanthology.org/2023.acl-long.609/
+
+ #### Who are the annotators?
+
+ Annotators were recruited from [Masakhane](https://www.masakhane.io/).
+
+ ### Personal and Sensitive Information
+
+ The data is sourced from newspaper text and only contains mentions of public figures.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The data is licensed under CC 4.0 Non-Commercial.
+
+ ### Citation Information
+
+ The [BibTeX](http://www.bibtex.org/)-formatted reference for the dataset:
+ ```
+ @inproceedings{dione-etal-2023-masakhapos,
+     title = "{M}asakha{POS}: Part-of-Speech Tagging for Typologically Diverse {A}frican languages",
+     author = "Dione, Cheikh M. Bamba and Adelani, David Ifeoluwa and Nabende, Peter and Alabi, Jesujoba and Sindane, Thapelo and Buzaaba, Happy and Muhammad, Shamsuddeen Hassan and Emezue, Chris Chinenye and Ogayo, Perez and Aremu, Anuoluwapo and Gitau, Catherine and Mbaye, Derguene and Mukiibi, Jonathan and Sibanda, Blessing and Dossou, Bonaventure F. P. and Bukula, Andiswa and Mabuya, Rooweither and Tapo, Allahsera Auguste and Munkoh-Buabeng, Edwin and Memdjokam Koagne, Victoire and Ouoba Kabore, Fatoumata and Taylor, Amelia and Kalipe, Godson and Macucwa, Tebogo and Marivate, Vukosi and Gwadabe, Tajuddeen and Elvis, Mboning Tchiaze and Onyenwe, Ikechukwu and Atindogbe, Gratien and Adelani, Tolulope and Akinade, Idris and Samuel, Olanrewaju and Nahimana, Marien and Musabeyezu, Th{\'e}og{\`e}ne and Niyomutabazi, Emile and Chimhenga, Ester and Gotosa, Kudzai and Mizha, Patrick and Agbolo, Apelete and Traore, Seydou and Uchechukwu, Chinedu and Yusuf, Aliyu and Abdullahi, Muhammad and Klakow, Dietrich",
+     editor = "Rogers, Anna and
+       Boyd-Graber, Jordan and
+       Okazaki, Naoaki",
+     booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+     month = jul,
+     year = "2023",
+     address = "Toronto, Canada",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2023.acl-long.609",
+     doi = "10.18653/v1/2023.acl-long.609",
+     pages = "10883--10900",
+     abstract = "In this paper, we present AfricaPOS, the largest part-of-speech (POS) dataset for 20 typologically diverse African languages. We discuss the challenges in annotating POS for these languages using the universal dependencies (UD) guidelines. We conducted extensive POS baseline experiments using both conditional random field and several multilingual pre-trained language models. We applied various cross-lingual transfer models trained with data available in the UD. Evaluating on the AfricaPOS dataset, we show that choosing the best transfer language(s) in both single-source and multi-source setups greatly improves the POS tagging performance of the target languages, in particular when combined with parameter-fine-tuning methods. Crucially, transferring knowledge from a language that matches the language family and morphosyntactic properties seems to be more effective for POS tagging in unseen languages.",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
masakhapos.py ADDED
@@ -0,0 +1,203 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African Languages"""
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @inproceedings{dione-etal-2023-masakhapos,
+     title = "{M}asakha{POS}: Part-of-Speech Tagging for Typologically Diverse {A}frican languages",
+     author = "Dione, Cheikh M. Bamba and Adelani, David Ifeoluwa and Nabende, Peter and Alabi, Jesujoba and Sindane, Thapelo and Buzaaba, Happy and Muhammad, Shamsuddeen Hassan and Emezue, Chris Chinenye and Ogayo, Perez and Aremu, Anuoluwapo and Gitau, Catherine and Mbaye, Derguene and Mukiibi, Jonathan and Sibanda, Blessing and Dossou, Bonaventure F. P. and Bukula, Andiswa and Mabuya, Rooweither and Tapo, Allahsera Auguste and Munkoh-Buabeng, Edwin and Memdjokam Koagne, Victoire and Ouoba Kabore, Fatoumata and Taylor, Amelia and Kalipe, Godson and Macucwa, Tebogo and Marivate, Vukosi and Gwadabe, Tajuddeen and Elvis, Mboning Tchiaze and Onyenwe, Ikechukwu and Atindogbe, Gratien and Adelani, Tolulope and Akinade, Idris and Samuel, Olanrewaju and Nahimana, Marien and Musabeyezu, Th{\'e}og{\`e}ne and Niyomutabazi, Emile and Chimhenga, Ester and Gotosa, Kudzai and Mizha, Patrick and Agbolo, Apelete and Traore, Seydou and Uchechukwu, Chinedu and Yusuf, Aliyu and Abdullahi, Muhammad and Klakow, Dietrich",
+     editor = "Rogers, Anna and
+       Boyd-Graber, Jordan and
+       Okazaki, Naoaki",
+     booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+     month = jul,
+     year = "2023",
+     address = "Toronto, Canada",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2023.acl-long.609",
+     doi = "10.18653/v1/2023.acl-long.609",
+     pages = "10883--10900",
+     abstract = "In this paper, we present AfricaPOS, the largest part-of-speech (POS) dataset for 20 typologically diverse African languages. We discuss the challenges in annotating POS for these languages using the universal dependencies (UD) guidelines. We conducted extensive POS baseline experiments using both conditional random field and several multilingual pre-trained language models. We applied various cross-lingual transfer models trained with data available in the UD. Evaluating on the AfricaPOS dataset, we show that choosing the best transfer language(s) in both single-source and multi-source setups greatly improves the POS tagging performance of the target languages, in particular when combined with parameter-fine-tuning methods. Crucially, transferring knowledge from a language that matches the language family and morphosyntactic properties seems to be more effective for POS tagging in unseen languages.",
+ }
+ """
+
+ _DESCRIPTION = """\
+ MasakhaPOS is the largest publicly available high-quality dataset for part-of-speech (POS) tagging in 20 African languages. The languages covered are:
+ - Bambara (bam)
+ - Ghomala (bbj)
+ - Ewe (ewe)
+ - Fon (fon)
+ - Hausa (hau)
+ - Igbo (ibo)
+ - Kinyarwanda (kin)
+ - Luganda (lug)
+ - Dholuo (luo)
+ - Mossi (mos)
+ - Chichewa (nya)
+ - Nigerian Pidgin (pcm)
+ - chiShona (sna)
+ - Kiswahili (swa)
+ - Setswana (tsn)
+ - Twi (twi)
+ - Wolof (wol)
+ - isiXhosa (xho)
+ - Yorùbá (yor)
+ - isiZulu (zul)
+
+ The train/validation/test sets are available for all 20 languages.
+
+ For more details see https://aclanthology.org/2023.acl-long.609/
+ """
+ _URL = "https://github.com/masakhane-io/masakhane-pos/raw/main/data/"
+ _TRAINING_FILE = "train.txt"
+ _DEV_FILE = "dev.txt"
+ _TEST_FILE = "test.txt"
+
+
+ class MasakhaposConfig(datasets.BuilderConfig):
+     """BuilderConfig for Masakhapos."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for Masakhapos.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(MasakhaposConfig, self).__init__(**kwargs)
+
+
+ class Masakhapos(datasets.GeneratorBasedBuilder):
+     """Masakhapos dataset."""
+
+     BUILDER_CONFIGS = [
+         MasakhaposConfig(name="bam", version=datasets.Version("1.0.0"), description="Masakhapos Bambara dataset"),
+         MasakhaposConfig(name="bbj", version=datasets.Version("1.0.0"), description="Masakhapos Ghomala dataset"),
+         MasakhaposConfig(name="ewe", version=datasets.Version("1.0.0"), description="Masakhapos Ewe dataset"),
+         MasakhaposConfig(name="fon", version=datasets.Version("1.0.0"), description="Masakhapos Fon dataset"),
+         MasakhaposConfig(name="hau", version=datasets.Version("1.0.0"), description="Masakhapos Hausa dataset"),
+         MasakhaposConfig(name="ibo", version=datasets.Version("1.0.0"), description="Masakhapos Igbo dataset"),
+         MasakhaposConfig(name="kin", version=datasets.Version("1.0.0"), description="Masakhapos Kinyarwanda dataset"),
+         MasakhaposConfig(name="lug", version=datasets.Version("1.0.0"), description="Masakhapos Luganda dataset"),
+         MasakhaposConfig(name="luo", version=datasets.Version("1.0.0"), description="Masakhapos Luo dataset"),
+         MasakhaposConfig(name="mos", version=datasets.Version("1.0.0"), description="Masakhapos Mossi dataset"),
+         MasakhaposConfig(name="nya", version=datasets.Version("1.0.0"), description="Masakhapos Chichewa dataset"),
+         MasakhaposConfig(
+             name="pcm", version=datasets.Version("1.0.0"), description="Masakhapos Nigerian-Pidgin dataset"
+         ),
+         MasakhaposConfig(name="sna", version=datasets.Version("1.0.0"), description="Masakhapos Shona dataset"),
+         MasakhaposConfig(name="swa", version=datasets.Version("1.0.0"), description="Masakhapos Swahili dataset"),
+         MasakhaposConfig(name="tsn", version=datasets.Version("1.0.0"), description="Masakhapos Setswana dataset"),
+         MasakhaposConfig(name="twi", version=datasets.Version("1.0.0"), description="Masakhapos Twi dataset"),
+         MasakhaposConfig(name="wol", version=datasets.Version("1.0.0"), description="Masakhapos Wolof dataset"),
+         MasakhaposConfig(name="xho", version=datasets.Version("1.0.0"), description="Masakhapos Xhosa dataset"),
+         MasakhaposConfig(name="yor", version=datasets.Version("1.0.0"), description="Masakhapos Yoruba dataset"),
+         MasakhaposConfig(name="zul", version=datasets.Version("1.0.0"), description="Masakhapos Zulu dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "upos": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 # Universal POS tags, plus "_" (the UD convention for an unspecified tag)
+                                 "NOUN",
+                                 "PUNCT",
+                                 "ADP",
+                                 "NUM",
+                                 "SYM",
+                                 "SCONJ",
+                                 "ADJ",
+                                 "PART",
+                                 "DET",
+                                 "CCONJ",
+                                 "PROPN",
+                                 "PRON",
+                                 "X",
+                                 "_",
+                                 "ADV",
+                                 "INTJ",
+                                 "VERB",
+                                 "AUX",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://aclanthology.org/2023.acl-long.609/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         urls_to_download = {
+             "train": f"{_URL}{self.config.name}/{_TRAINING_FILE}",
+             "dev": f"{_URL}{self.config.name}/{_DEV_FILE}",
+             "test": f"{_URL}{self.config.name}/{_TEST_FILE}",
+         }
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         logger.info("⏳ Generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             tokens = []
+             pos_tags = []
+             for line in f:
+                 if line.startswith("-DOCSTART-"):
+                     continue
+                 if line == "" or line == "\n":
+                     # A blank line marks the end of a sentence: emit the accumulated example.
+                     if tokens:
+                         yield guid, {
+                             "id": str(guid),
+                             "tokens": tokens,
+                             "upos": pos_tags,
+                         }
+                         guid += 1
+                         tokens = []
+                         pos_tags = []
+                 else:
+                     # Masakhapos lines are whitespace-separated: the token comes first, the POS tag last.
+                     splits = line.strip().split()
+                     tokens.append(splits[0])
+                     pos_tag = splits[-1]
+                     pos_tags.append(pos_tag)
+             # Last example in the file, in case it is not followed by a blank line.
+             if tokens:
+                 yield guid, {
+                     "id": str(guid),
+                     "tokens": tokens,
+                     "upos": pos_tags,
+                 }
+
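+
+ # Usage sketch (an illustrative assumption, not part of the original script):
+ # recent versions of the `datasets` library require opting in before running a
+ # dataset's loading script such as this one:
+ #
+ #     from datasets import load_dataset
+ #     data = load_dataset("masakhane/masakhapos", "yor", trust_remote_code=True)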