Davlan committed
Commit 090e1f2
1 Parent(s): b49b945

Update README.md

Files changed (1)
  1. README.md +61 -0
README.md CHANGED
@@ -1,3 +1,64 @@
  ---
  license: afl-3.0
  ---
+
+ ### Citation Information
+ ```
+ @inproceedings{adelani-etal-2022-thousand,
+ title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
+ author = "Adelani, David and
+ Alabi, Jesujoba and
+ Fan, Angela and
+ Kreutzer, Julia and
+ Shen, Xiaoyu and
+ Reid, Machel and
+ Ruiter, Dana and
+ Klakow, Dietrich and
+ Nabende, Peter and
+ Chang, Ernie and
+ Gwadabe, Tajuddeen and
+ Sackey, Freshia and
+ Dossou, Bonaventure F. P. and
+ Emezue, Chris and
+ Leong, Colin and
+ Beukman, Michael and
+ Muhammad, Shamsuddeen and
+ Jarso, Guyo and
+ Yousuf, Oreen and
+ Niyongabo Rubungo, Andre and
+ Hacheme, Gilles and
+ Wairagala, Eric Peter and
+ Nasir, Muhammad Umair and
+ Ajibade, Benjamin and
+ Ajayi, Tunde and
+ Gitau, Yvonne and
+ Abbott, Jade and
+ Ahmed, Mohamed and
+ Ochieng, Millicent and
+ Aremu, Anuoluwapo and
+ Ogayo, Perez and
+ Mukiibi, Jonathan and
+ Ouoba Kabore, Fatoumata and
+ Kalipe, Godson and
+ Mbaye, Derguene and
+ Tapo, Allahsera Auguste and
+ Memdjokam Koagne, Victoire and
+ Munkoh-Buabeng, Edwin and
+ Wagner, Valencia and
+ Abdulmumin, Idris and
+ Awokoya, Ayodele and
+ Buzaaba, Happy and
+ Sibanda, Blessing and
+ Bukula, Andiswa and
+ Manthalu, Sam",
+ booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
+ month = jul,
+ year = "2022",
+ address = "Seattle, United States",
+ publisher = "Association for Computational Linguistics",
+ url = "https://aclanthology.org/2022.naacl-main.223",
+ doi = "10.18653/v1/2022.naacl-main.223",
+ pages = "3053--3070",
+ abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
+ }
+ ```