Davlan committed
Commit 5cec56e
1 Parent(s): 3765aa6

Upload 2 files

Files changed (2)
  1. README.md +260 -2
  2. masakhaner2.py +186 -0
README.md CHANGED
@@ -1,3 +1,261 @@
  ---
- license: afl-3.0
- ---
+ annotations_creators:
+ - expert-generated
+ language:
+ - bm
+ - bbj
+ - ee
+ - fon
+ - ha
+ - ig
+ - rw
+ - lg
+ - luo
+ - mos
+ - ny
+ - pcm
+ - sn
+ - sw
+ - tn
+ - tw
+ - wo
+ - xh
+ - yo
+ - zu
+ language_creators:
+ - expert-generated
+ license:
+ - afl-3.0
+ multilinguality:
+ - multilingual
+ pretty_name: masakhaner2.0
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ tags:
+ - ner
+ - masakhaner
+ - masakhane
+ task_categories:
+ - token-classification
+ task_ids:
+ - named-entity-recognition
+ ---
+
+ # Dataset Card for MasakhaNER 2.0
+ 
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** [homepage](https://github.com/masakhane-io/masakhane-ner)
+ - **Repository:** [github](https://github.com/masakhane-io/masakhane-ner)
+ - **Paper:** [paper](https://arxiv.org/abs/2210.12391)
+ - **Point of Contact:** [Masakhane](https://www.masakhane.io/) or didelani@lsv.uni-saarland.de
+ 
+ ### Dataset Summary
+ 
+ MasakhaNER 2.0 is the largest publicly available high-quality dataset for named entity recognition (NER) in 20 African languages, created by the Masakhane community.
+ 
+ Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:
+ 
+ [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
+ 
+ MasakhaNER 2.0 is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for 20 African languages.
+ 
+ The train/validation/test sets are available for all 20 languages.
+ 
+ For more details see https://arxiv.org/abs/2210.12391
+ 
+ 
+ ### Supported Tasks and Leaderboards
+ 
+ - `named-entity-recognition`: The performance on this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
+ 
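+ A minimal sketch of computing this score with the `seqeval` metric from the `evaluate` library (the tag sequences below are illustrative):
+ 
+ ```
+ import evaluate
+ 
+ seqeval = evaluate.load("seqeval")
+ predictions = [["O", "B-PER", "I-PER", "O", "B-LOC"]]  # model output for one sentence
+ references = [["O", "B-PER", "I-PER", "O", "B-DATE"]]  # gold tags
+ results = seqeval.compute(predictions=predictions, references=references)
+ print(results["overall_f1"])  # entity-level F1; only exact span and type matches count
+ ```
+ 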
+ ### Languages
+ 
+ There are 20 languages available; the ISO codes in parentheses are also the configuration names used by the loading script (see the snippet after the list):
+ - Bambara (bam)
+ - Ghomala (bbj)
+ - Ewe (ewe)
+ - Fon (fon)
+ - Hausa (hau)
+ - Igbo (ibo)
+ - Kinyarwanda (kin)
+ - Luganda (lug)
+ - Dholuo (luo)
+ - Mossi (mos)
+ - Chichewa (nya)
+ - Nigerian Pidgin (pcm)
+ - chiShona (sna)
+ - Kiswahili (swa)
+ - Setswana (tsn)
+ - Twi (twi)
+ - Wolof (wol)
+ - isiXhosa (xho)
+ - Yorùbá (yor)
+ - isiZulu (zul)
+ 
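+ A quick way to enumerate the available language configurations (a sketch; the dataset identifier `masakhaner2` is an assumption about how this repository is published on the Hub):
+ 
+ ```
+ import datasets
+ 
+ # Returns the configuration names defined by the loading script, e.g. ['bam', 'bbj', 'ewe', ..., 'zul']
+ print(datasets.get_dataset_config_names("masakhaner2"))
+ ```
+ 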
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ The examples look like this for Yorùbá:
+ 
+ ```
+ from datasets import load_dataset
+ 
+ # Please specify the language code
+ data = load_dataset('masakhaner2', 'yor')
+ 
+ # A data point is one sentence; in the source files, sentences are separated by an empty line
+ # and each line holds one space-separated token/tag pair. The ner_tags are stored as class-label
+ # ids and are shown here as their string names for readability.
+ {'id': '0',
+  'ner_tags': ['B-DATE', 'I-DATE', 'O', 'O', 'O', 'O', 'O', 'B-PER', 'I-PER', 'I-PER', 'O', 'O', 'O', 'O'],
+  'tokens': ['Wákàtí', 'méje', 'ti', 'ré', 'kọjá', 'lọ', 'tí', 'Luis', 'Carlos', 'Díaz', 'ti', 'di', 'awati', '.']
+ }
+ ```
+ 
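+ The id-to-name mapping for the tags is carried by the dataset features, so the class-label ids returned by `load_dataset` can be decoded as follows (a sketch, reusing the `data` object from the example above):
+ 
+ ```
+ tag_names = data["train"].features["ner_tags"].feature.names
+ example = data["train"][0]
+ print([tag_names[i] for i in example["ner_tags"]])  # e.g. ['B-DATE', 'I-DATE', 'O', ...]
+ ```
+ 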
+ ### Data Fields
+ 
+ - `id`: id of the sample
+ - `tokens`: the tokens of the example text
+ - `ner_tags`: the NER tags of each token
+ 
+ The NER tags correspond to this list:
+ ```
+ "O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"
+ ```
+ 
+ In the NER tags, a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE).
+ 
+ It is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity, usually only the top-level entity is marked.
+ 
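+ As a concrete illustration of the BIO scheme, a small (hypothetical) helper that groups a tag sequence into (entity_type, start, end) spans:
+ 
+ ```
+ def bio_to_spans(tags):
+     """Group BIO tags into (entity_type, start_index, end_index_exclusive) spans."""
+     spans, ent, start = [], None, None
+     for i, tag in enumerate(tags):
+         if tag.startswith("B-"):
+             if ent is not None:
+                 spans.append((ent, start, i))
+             ent, start = tag[2:], i
+         elif tag.startswith("I-") and ent == tag[2:]:
+             continue  # the current entity keeps growing
+         else:
+             if ent is not None:
+                 spans.append((ent, start, i))
+             ent, start = None, None  # "O" (or a stray "I-") closes the current span
+     if ent is not None:
+         spans.append((ent, start, len(tags)))
+     return spans
+ 
+ print(bio_to_spans(["B-DATE", "I-DATE", "O", "B-PER", "I-PER", "I-PER", "O"]))
+ # [('DATE', 0, 2), ('PER', 3, 6)]
+ ```
+ 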
+ ### Data Splits
+ 
+ For all languages, there are three splits.
+ 
+ The original splits were named `train`, `dev` and `test`; they correspond to the `train`, `validation` and `test` splits.
+ 
+ The splits have the following sizes (the sketch after the table shows how to check them):
+ 
+ | Language        | train | validation | test |
+ |-----------------|------:|-----------:|-----:|
+ | Bambara         |  4463 |        638 | 1274 |
+ | Ghomala         |  3384 |        483 |  966 |
+ | Ewe             |  3505 |        501 | 1001 |
+ | Fon             |  4343 |        621 | 1240 |
+ | Hausa           |  5716 |        816 | 1633 |
+ | Igbo            |  7634 |       1090 | 2181 |
+ | Kinyarwanda     |  7825 |       1118 | 2235 |
+ | Luganda         |  4942 |        706 | 1412 |
+ | Luo             |  5161 |        737 | 1474 |
+ | Mossi           |  4532 |        648 | 1613 |
+ | Nigerian-Pidgin |  5646 |        806 | 1294 |
+ | Chichewa        |  6250 |        893 | 1785 |
+ | chiShona        |  6207 |        887 | 1773 |
+ | Kiswahili       |  6593 |        942 | 1883 |
+ | Setswana        |  3289 |        499 |  996 |
+ | Akan/Twi        |  4240 |        605 | 1211 |
+ | Wolof           |  4593 |        656 | 1312 |
+ | isiXhosa        |  5718 |        817 | 1633 |
+ | Yoruba          |  6877 |        983 | 1964 |
+ | isiZulu         |  5848 |        836 | 1670 |
+ 
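+ Each row counts sentences per split; the numbers can be checked after loading a configuration (a sketch; the dataset identifier `masakhaner2` is an assumption):
+ 
+ ```
+ from datasets import load_dataset
+ 
+ ds = load_dataset("masakhaner2", "bam")
+ for split_name, split in ds.items():
+     print(split_name, split.num_rows)
+ # train 4463
+ # validation 638
+ # test 1274
+ ```
+ 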
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ The dataset was created to provide new resources for twenty African languages that are under-served in natural language processing.
+ 
+ [More Information Needed]
+ 
+ ### Source Data
+ 
+ The source of the data is the news domain; details can be found here: https://arxiv.org/abs/2210.12391
+ 
+ #### Initial Data Collection and Normalization
+ 
+ The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
+ 
+ #### Who are the source language producers?
+ 
+ The source language was produced by journalists and writers employed by the news agencies and newspapers from which the articles were collected.
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ Details can be found here: https://arxiv.org/abs/2210.12391
+ 
+ #### Who are the annotators?
+ 
+ Annotators were recruited from [Masakhane](https://www.masakhane.io/).
+ 
+ ### Personal and Sensitive Information
+ 
+ The data is sourced from newspaper text and only contains mentions of public figures or individuals reported in the news.
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [More Information Needed]
+ 
+ ### Discussion of Biases
+ 
+ [More Information Needed]
+ 
+ ### Other Known Limitations
+ 
+ Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ [More Information Needed]
+ 
+ ### Licensing Information
+ 
+ The licensing status of the data is CC 4.0 Non-Commercial.
+ 
+ ### Citation Information
+ 
+ ```
+ @article{Adelani2022MasakhaNER2A,
+ title={MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition},
+ author={David Ifeoluwa Adelani and Graham Neubig and Sebastian Ruder and Shruti Rijhwani and Michael Beukman and Chester Palen-Michel and Constantine Lignos and Jesujoba Oluwadara Alabi and Shamsuddeen Hassan Muhammad and Peter Nabende and Cheikh M. Bamba Dione and Andiswa Bukula and Rooweither Mabuya and Bonaventure F. P. Dossou and Blessing K. Sibanda and Happy Buzaaba and Jonathan Mukiibi and Godson Kalipe and Derguene Mbaye and Amelia Taylor and Fatoumata Kabore and Chris C. Emezue and Anuoluwapo Aremu and Perez Ogayo and Catherine W. Gitau and Edwin Munkoh-Buabeng and Victoire Memdjokam Koagne and Allahsera Auguste Tapo and Tebogo Macucwa and Vukosi Marivate and Elvis Mboning and Tajuddeen R. Gwadabe and Tosin P. Adewumi and Orevaoghene Ahia and Joyce Nakatumba-Nabende and Neo L. Mokono and Ignatius M Ezeani and Chiamaka Ijeoma Chukwuneke and Mofetoluwa Adeyemi and Gilles Hacheme and Idris Abdulmumin and Odunayo Ogundepo and Oreen Yousuf and Tatiana Moteu Ngoli and Dietrich Klakow},
+ journal={ArXiv},
+ year={2022},
+ volume={abs/2210.12391}
+ }
+ ```
+ 
+ ### Contributions
+ 
+ Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
masakhaner2.py ADDED
@@ -0,0 +1,186 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ 
+ # Lint as: python3
+ """MasakhaNER: Named Entity Recognition for African Languages"""
+ 
+ import datasets
+ 
+ 
+ logger = datasets.logging.get_logger(__name__)
+ 
+ 
+ _CITATION = """\
+ @article{Adelani2022MasakhaNER2A,
+ title={MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition},
+ author={David Ifeoluwa Adelani and Graham Neubig and Sebastian Ruder and Shruti Rijhwani and Michael Beukman and Chester Palen-Michel and Constantine Lignos and Jesujoba Oluwadara Alabi and Shamsuddeen Hassan Muhammad and Peter Nabende and Cheikh M. Bamba Dione and Andiswa Bukula and Rooweither Mabuya and Bonaventure F. P. Dossou and Blessing K. Sibanda and Happy Buzaaba and Jonathan Mukiibi and Godson Kalipe and Derguene Mbaye and Amelia Taylor and Fatoumata Kabore and Chris C. Emezue and Anuoluwapo Aremu and Perez Ogayo and Catherine W. Gitau and Edwin Munkoh-Buabeng and Victoire Memdjokam Koagne and Allahsera Auguste Tapo and Tebogo Macucwa and Vukosi Marivate and Elvis Mboning and Tajuddeen R. Gwadabe and Tosin P. Adewumi and Orevaoghene Ahia and Joyce Nakatumba-Nabende and Neo L. Mokono and Ignatius M Ezeani and Chiamaka Ijeoma Chukwuneke and Mofetoluwa Adeyemi and Gilles Hacheme and Idris Abdulmumin and Odunayo Ogundepo and Oreen Yousuf and Tatiana Moteu Ngoli and Dietrich Klakow},
+ journal={ArXiv},
+ year={2022},
+ volume={abs/2210.12391}
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ MasakhaNER 2.0 is the largest publicly available high-quality dataset for named entity recognition (NER) in 20 African languages.
+ 
+ Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
+ 
+ Example:
+ [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
+ MasakhaNER 2.0 is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for 20 African languages:
+ - Bambara (bam)
+ - Ghomala (bbj)
+ - Ewe (ewe)
+ - Fon (fon)
+ - Hausa (hau)
+ - Igbo (ibo)
+ - Kinyarwanda (kin)
+ - Luganda (lug)
+ - Dholuo (luo)
+ - Mossi (mos)
+ - Chichewa (nya)
+ - Nigerian Pidgin (pcm)
+ - chiShona (sna)
+ - Kiswahili (swa)
+ - Setswana (tsn)
+ - Twi (twi)
+ - Wolof (wol)
+ - isiXhosa (xho)
+ - Yorùbá (yor)
+ - isiZulu (zul)
+ 
+ The train/validation/test sets are available for all 20 languages.
+ 
+ For more details see https://arxiv.org/abs/2210.12391
+ """
+ 
+ _URL = "https://github.com/masakhane-io/masakhane-ner/raw/main/MasakhaNER2.0/data/"
+ _TRAINING_FILE = "train.txt"
+ _DEV_FILE = "dev.txt"
+ _TEST_FILE = "test.txt"
+ 
+ 
+ class MasakhanerConfig(datasets.BuilderConfig):
+     """BuilderConfig for Masakhaner"""
+ 
+     def __init__(self, **kwargs):
+         """BuilderConfig for Masakhaner.
+ 
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(MasakhanerConfig, self).__init__(**kwargs)
+ 
+ 
+ class Masakhaner(datasets.GeneratorBasedBuilder):
+     """Masakhaner dataset."""
+ 
+     BUILDER_CONFIGS = [
+         MasakhanerConfig(name="bam", version=datasets.Version("1.0.0"), description="Masakhaner Bambara dataset"),
+         MasakhanerConfig(name="bbj", version=datasets.Version("1.0.0"), description="Masakhaner Ghomala dataset"),
+         MasakhanerConfig(name="ewe", version=datasets.Version("1.0.0"), description="Masakhaner Ewe dataset"),
+         MasakhanerConfig(name="fon", version=datasets.Version("1.0.0"), description="Masakhaner Fon dataset"),
+         MasakhanerConfig(name="hau", version=datasets.Version("1.0.0"), description="Masakhaner Hausa dataset"),
+         MasakhanerConfig(name="ibo", version=datasets.Version("1.0.0"), description="Masakhaner Igbo dataset"),
+         MasakhanerConfig(name="kin", version=datasets.Version("1.0.0"), description="Masakhaner Kinyarwanda dataset"),
+         MasakhanerConfig(name="lug", version=datasets.Version("1.0.0"), description="Masakhaner Luganda dataset"),
+         MasakhanerConfig(name="luo", version=datasets.Version("1.0.0"), description="Masakhaner Dholuo dataset"),
+         MasakhanerConfig(name="mos", version=datasets.Version("1.0.0"), description="Masakhaner Mossi dataset"),
+         MasakhanerConfig(name="nya", version=datasets.Version("1.0.0"), description="Masakhaner Chichewa dataset"),
+         MasakhanerConfig(
+             name="pcm", version=datasets.Version("1.0.0"), description="Masakhaner Nigerian-Pidgin dataset"
+         ),
+         MasakhanerConfig(name="sna", version=datasets.Version("1.0.0"), description="Masakhaner Shona dataset"),
+         MasakhanerConfig(name="swa", version=datasets.Version("1.0.0"), description="Masakhaner Swahili dataset"),
+         MasakhanerConfig(name="tsn", version=datasets.Version("1.0.0"), description="Masakhaner Setswana dataset"),
+         MasakhanerConfig(name="twi", version=datasets.Version("1.0.0"), description="Masakhaner Twi dataset"),
+         MasakhanerConfig(name="wol", version=datasets.Version("1.0.0"), description="Masakhaner Wolof dataset"),
+         MasakhanerConfig(name="xho", version=datasets.Version("1.0.0"), description="Masakhaner Xhosa dataset"),
+         MasakhanerConfig(name="yor", version=datasets.Version("1.0.0"), description="Masakhaner Yoruba dataset"),
+         MasakhanerConfig(name="zul", version=datasets.Version("1.0.0"), description="Masakhaner Zulu dataset"),
+     ]
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "ner_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "O",
+                                 "B-PER",
+                                 "I-PER",
+                                 "B-ORG",
+                                 "I-ORG",
+                                 "B-LOC",
+                                 "I-LOC",
+                                 "B-DATE",
+                                 "I-DATE",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://arxiv.org/abs/2210.12391",
+             citation=_CITATION,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         urls_to_download = {
+             "train": f"{_URL}{self.config.name}/{_TRAINING_FILE}",
+             "dev": f"{_URL}{self.config.name}/{_DEV_FILE}",
+             "test": f"{_URL}{self.config.name}/{_TEST_FILE}",
+         }
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+ 
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
+         ]
+ 
+     def _generate_examples(self, filepath):
+         logger.info("⏳ Generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             tokens = []
+             ner_tags = []
+             for line in f:
+                 if line == "" or line == "\n":
+                     # A blank line marks the end of a sentence: emit the accumulated example.
+                     if tokens:
+                         yield guid, {
+                             "id": str(guid),
+                             "tokens": tokens,
+                             "ner_tags": ner_tags,
+                         }
+                         guid += 1
+                         tokens = []
+                         ner_tags = []
+                 else:
+                     # Masakhaner tokens are space separated: "<token> <tag>" per line.
+                     splits = line.split(" ")
+                     tokens.append(splits[0])
+                     ner_tags.append(splits[1].rstrip())
+             # last example
+             if tokens:
+                 yield guid, {
+                     "id": str(guid),
+                     "tokens": tokens,
+                     "ner_tags": ner_tags,
+                 }
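
The loading script above reads plain-text split files in which each line holds one space-separated token/tag pair and sentences are separated by blank lines. A minimal sketch of loading one language through this script (assuming a `datasets` release that still supports script-based builders; the local path and variable names are illustrative):

```
from datasets import load_dataset

# Point load_dataset at the local loading script and pick a language configuration.
ds = load_dataset("masakhaner2.py", "yor")
print(ds)              # DatasetDict with train/validation/test splits
print(ds["train"][0])  # {'id': '0', 'tokens': [...], 'ner_tags': [...]}
```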