---
annotations_creators:
- expert-generated
language:
- en
- fr
- am
- bm
- bbj
- ee
- fon
- ha
- ig
- lg
- luo
- mos
- ny
- pcm
- rw
- sn
- sw
- tn
- tw
- wo
- xh
- yo
- zu
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- translation
- multilingual
pretty_name: mafand
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- news
- mafand
- masakhane
task_categories:
- translation
task_ids: []
---

# Dataset Card for MAFAND

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://github.com/masakhane-io/lafand-mt
- **Repository:** https://github.com/masakhane-io/lafand-mt
- **Paper:** https://aclanthology.org/2022.naacl-main.223/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** David Adelani (didelani@lsv.uni-saarland.de)

### Dataset Summary

MAFAND-MT is the largest machine translation benchmark for African languages in the news domain, covering 21 African languages paired with English or French.

### Supported Tasks and Leaderboards

Machine translation.
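
If the dataset is available through the Hugging Face Hub, it can be loaded with the `datasets` library. The sketch below is illustrative only: the repo id `masakhane/mafand` and the language-pair configuration name `en-yor` are assumptions not stated in this card, so adjust them to wherever the data is actually hosted.

```python
# Minimal loading sketch. The repo id ("masakhane/mafand") and the
# language-pair config name ("en-yor") are assumptions; adjust as needed.
from datasets import load_dataset

dataset = load_dataset("masakhane/mafand", "en-yor")

print(dataset)              # splits available for this language pair
print(dataset["train"][0])  # a single {"translation": {...}} record
```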

### Languages

The languages covered are:

- Amharic (amh)
- Bambara (bam)
- Ghomala (bbj)
- Ewe (ewe)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Luganda (lug)
- Luo (luo)
- Mossi (mos)
- Nigerian-Pidgin (pcm)
- Chichewa (nya)
- Shona (sna)
- Swahili (swa)
- Setswana (tsn)
- Twi (twi)
- Wolof (wol)
- Xhosa (xho)
- Yoruba (yor)
- Zulu (zul)

## Dataset Structure

### Data Instances

Each instance holds a single sentence pair under the `translation` key. The examples below show the two key conventions that appear in the data: the generic `src`/`tgt` keys, or the language codes of the pair (e.g. `en`/`yo`):

{"translation": {"src": "--- President Buhari will determine when to lift lockdown Minister", "tgt": "--- ��r� Buhari l� l� y�h�n pad� l�r� �t� k�n�l�gb�l� M�n�s�t�"}}

{"translation": {"en": "--- President Buhari will determine when to lift lockdown Minister", "yo": "--- ��r� Buhari l� l� y�h�n pad� l�r� �t� k�n�l�gb�l� M�n�s�t�"}}

### Data Fields

- `translation`: a dictionary containing one parallel sentence pair
- `src` (or the source language code, e.g. `en`): the source sentence in English or French
- `tgt` (or the target language code, e.g. `yo`): the sentence in the target African language

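To make the field layout concrete, the sketch below flattens the `translation` dictionaries into plain (source, target) text pairs, falling back to the generic `src`/`tgt` keys when language-code keys are not present. It reuses the assumed repo id and configuration name from the loading sketch above.

```python
# Sketch: turn {"translation": {...}} records into flat source/target columns.
# Repo id and config name are the same assumptions as in the loading sketch.
from datasets import load_dataset

dataset = load_dataset("masakhane/mafand", "en-yor")

def to_pair(example, src_key="en", tgt_key="yo"):
    t = example["translation"]
    # Some records key the pair by language code, others use src/tgt.
    return {
        "source": t.get(src_key, t.get("src")),
        "target": t.get(tgt_key, t.get("tgt")),
    }

pairs = dataset["train"].map(to_pair, remove_columns=["translation"])
print(pairs[0])  # {"source": "...", "target": "..."}
```
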
### Data Splits

The data is released with train/dev/test splits per language; note that five languages (amh, kin, nya, sna, xho) have no training split.

Language | Train | Dev | Test
---|---|---|---
amh | - | 899 | 1037
bam | 3302 | 1484 | 1600
bbj | 2232 | 1133 | 1430
ewe | 2026 | 1414 | 1563
fon | 2637 | 1227 | 1579
hau | 5865 | 1300 | 1500
ibo | 6998 | 1500 | 1500
kin | - | 460 | 1006
lug | 4075 | 1500 | 1500
luo | 4262 | 1500 | 1500
mos | 2287 | 1478 | 1574
nya | - | 483 | 1004
pcm | 4790 | 1484 | 1574
sna | - | 556 | 1005
swa | 30782 | 1791 | 1835
tsn | 2100 | 1340 | 1835
twi | 3337 | 1284 | 1500
wol | 3360 | 1506 | 1500
xho | - | 486 | 1002
yor | 6644 | 1544 | 1558
zul | 3500 | 1239 | 998

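As a rough consistency check against the table above, one can load a few configurations and print their split sizes; the sketch below tolerates configurations that ship without a train split. The repo id and the `en-amh`/`fr-bam`/`en-yor` configuration names are assumptions, as in the earlier sketches.

```python
# Sketch: report split sizes for a few language pairs. Configurations for
# amh, kin, nya, sna and xho may have no train split (see the table above).
# Repo id and config names are assumptions.
from datasets import load_dataset

for config in ["en-amh", "fr-bam", "en-yor"]:
    dataset = load_dataset("masakhane/mafand", config)
    sizes = {split: len(ds) for split, ds in dataset.items()}
    print(config, sizes)  # split names may be train / validation / test
```
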
## Dataset Creation

### Curation Rationale

MAFAND was created from texts in the news domain, translated from English or French into the African languages.

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

- [Masakhane](https://github.com/masakhane-io/lafand-mt)
- [Igbo](https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_en_mt)
- [Swahili](https://opus.nlpl.eu/GlobalVoices.php)
- [Hausa](https://www.statmt.org/wmt21/translation-task.html)
- [Yoruba](https://github.com/uds-lsv/menyo-20k_MT)

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

Masakhane members

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)

### Citation Information

@inproceedings{adelani-etal-2022-thousand,
    title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
    author = "Adelani, David and
      Alabi, Jesujoba and
      Fan, Angela and
      Kreutzer, Julia and
      Shen, Xiaoyu and
      Reid, Machel and
      Ruiter, Dana and
      Klakow, Dietrich and
      Nabende, Peter and
      Chang, Ernie and
      Gwadabe, Tajuddeen and
      Sackey, Freshia and
      Dossou, Bonaventure F. P. and
      Emezue, Chris and
      Leong, Colin and
      Beukman, Michael and
      Muhammad, Shamsuddeen and
      Jarso, Guyo and
      Yousuf, Oreen and
      Niyongabo Rubungo, Andre and
      Hacheme, Gilles and
      Wairagala, Eric Peter and
      Nasir, Muhammad Umair and
      Ajibade, Benjamin and
      Ajayi, Tunde and
      Gitau, Yvonne and
      Abbott, Jade and
      Ahmed, Mohamed and
      Ochieng, Millicent and
      Aremu, Anuoluwapo and
      Ogayo, Perez and
      Mukiibi, Jonathan and
      Ouoba Kabore, Fatoumata and
      Kalipe, Godson and
      Mbaye, Derguene and
      Tapo, Allahsera Auguste and
      Memdjokam Koagne, Victoire and
      Munkoh-Buabeng, Edwin and
      Wagner, Valencia and
      Abdulmumin, Idris and
      Awokoya, Ayodele and
      Buzaaba, Happy and
      Sibanda, Blessing and
      Bukula, Andiswa and
      Manthalu, Sam",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.223",
    doi = "10.18653/v1/2022.naacl-main.223",
    pages = "3053--3070",
    abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}