{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:57.806440Z" }, "title": "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-Resource Languages", "authors": [ { "first": "Kelechi", "middle": [], "last": "Ogueji", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "kelechi.ogueji@uwaterloo.ca" }, { "first": "Yuxin", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "yuxin.zhu@uwaterloo.ca" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "jimmylin@uwaterloo.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Pretrained multilingual language models have been shown to work well on many languages for a variety of downstream NLP tasks. However, these models are known to require a lot of training data. This consequently leaves out a huge percentage of the world's languages as they are under-resourced. Furthermore, a major motivation behind these models is that lower-resource languages benefit from joint training with higher-resource languages. In this work, we challenge this assumption and present the first attempt at training a multilingual language model on only low-resource languages. We show that it is possible to train competitive multilingual language models on less than 1 GB of text. Our model, named AfriBERTa, covers 11 African languages, including the first language model for 4 of these languages. Evaluations on named entity recognition and text classification spanning 10 languages show that our model outperforms mBERT and XLM-R in several languages and is very competitive overall. Results suggest that our \"small data\" approach based on similar languages may sometimes work better than joint training on large datasets with high-resource languages. Code, data and models are released at https://github. com/keleog/afriberta.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Pretrained multilingual language models have been shown to work well on many languages for a variety of downstream NLP tasks. However, these models are known to require a lot of training data. This consequently leaves out a huge percentage of the world's languages as they are under-resourced. Furthermore, a major motivation behind these models is that lower-resource languages benefit from joint training with higher-resource languages. In this work, we challenge this assumption and present the first attempt at training a multilingual language model on only low-resource languages. We show that it is possible to train competitive multilingual language models on less than 1 GB of text. Our model, named AfriBERTa, covers 11 African languages, including the first language model for 4 of these languages. Evaluations on named entity recognition and text classification spanning 10 languages show that our model outperforms mBERT and XLM-R in several languages and is very competitive overall. Results suggest that our \"small data\" approach based on similar languages may sometimes work better than joint training on large datasets with high-resource languages. Code, data and models are released at https://github. 
com/keleog/afriberta.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Pretrained language models have risen to the fore of natural language processing (NLP), achieving impressive performance on a variety of NLP tasks. The multilingual version of these models such as XLM-R and mBERT (Devlin et al., 2019) have also been shown to generalize well to many languages. However, these models are known to require a lot of training data, which is often absent for low-resource languages. Also, high-resource languages usually make up a significant part of the training data, as it is hypothesized that they help boost transfer to lower-resource languages. Hence, there has been no known attempt to investigate if it is possible to pretrain multilingual language models solely on low-resource languages without any transfer from higher-resource languages, despite the numerous benefits that this could provide. Motivated by this gap in the literature, the goal of our work is to explore the viability of multilingual language models pretrained from scratch on low-resource languages and to understand how to pretrain such models in this setting.", "cite_spans": [ { "start": 213, "end": 234, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We introduce AfriBERTa, a transformer-based multilingual language models trained on 11 African languages, all of which are low-resource. 1 We evaluate this model on named entity recognition (NER) and text classification downstream tasks on 10 low-resource languages. Our models outperform larger models like mBERT and XLM-R by up to 10 F1 points on text classification, and also outperform these models on several languages in the NER task. Across all languages, we obtain very competitive performance to these larger models. In summary, our contributions are as follows:", "cite_spans": [ { "start": 137, "end": 138, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We show that competitive multilingual language models can be pretrained from scratch solely on low-resource languages without any highresource transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We show that it is possible to pretrain these models on less than 1 GB of text data and highlight the many practical benefits of this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Our extensive experiments highlight important factors to consider when pretraining multilingual language models in low-resource settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "4. 
We introduce language models for 4 languages, improving the representation of low-resource languages in modern NLP tools.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our results show that, for the first time, it is possible to pretrain a multilingual language model from scratch on only low-resource languages and obtain good performance on downstream tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, unsupervised learning of text representations has significantly advanced natural language processing tasks. Static representations from pretrained word embeddings (Mikolov et al., 2013; Pennington et al., 2014) were improved upon by learning contextualized representations (Peters et al., 2018). These were further improved by pretraining language models (Radford et al., 2018; Devlin et al., 2019) based on transformers (Vaswani et al., 2017). These models have also been extended to the multilingual setting, where a single language model is pretrained on several languages without any explicit cross-lingual supervision (Devlin et al., 2019). However, much of this progress has been focused on languages with relatively large amounts of data, commonly referred to as high-resource languages. There has especially been very little focus on African languages, despite the over 2000 languages spoken on the continent making up 30.1% of all living languages (Eberhard et al., 2019). This is further visible in NLP publications on these languages. In all the Association for Computational Linguistics (ACL) conferences hosted in 2019, only 0.19% of author affiliations were located in Africa (Caines, 2019). Other works (Joshi et al., 2020) have also noted the great disparity in the coverage of languages by NLP technologies. They note that over 90% of the world's 7000+ languages are under-studied by the NLP community.", "cite_spans": [ { "start": 180, "end": 202, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF29" }, { "start": 203, "end": 227, "text": "Pennington et al., 2014)", "ref_id": "BIBREF33" }, { "start": 290, "end": 311, "text": "(Peters et al., 2018)", "ref_id": "BIBREF34" }, { "start": 387, "end": 409, "text": "(Radford et al., 2018;", "ref_id": "BIBREF35" }, { "start": 410, "end": 430, "text": "Devlin et al., 2019)", "ref_id": "BIBREF12" }, { "start": 453, "end": 475, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF38" }, { "start": 655, "end": 675, "text": "Devlin et al., 2019)", "ref_id": "BIBREF12" }, { "start": 989, "end": 1012, "text": "(Eberhard et al., 2019)", "ref_id": "BIBREF14" }, { "start": 1220, "end": 1234, "text": "(Caines, 2019)", "ref_id": "BIBREF8" }, { "start": 1249, "end": 1269, "text": "(Joshi et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There have been a few works on learning pretrained embeddings for African languages, although many of them are static embeddings trained on a single language (Ezeani et al., 2018; Ogueji and Ahia, 2019; Alabi et al., 2019; Dossou and Sabry, 2021). More recently, Azunre et al. (2021) trained a BERT model on the Twi language. 
However, they note that their model is biased to the religious domain because much of their data comes from that domain.", "cite_spans": [ { "start": 159, "end": 180, "text": "(Ezeani et al., 2018;", "ref_id": "BIBREF15" }, { "start": 181, "end": 203, "text": "Ogueji and Ahia, 2019;", "ref_id": "BIBREF31" }, { "start": 204, "end": 223, "text": "Alabi et al., 2019;", "ref_id": "BIBREF3" }, { "start": 224, "end": 247, "text": "Dossou and Sabry, 2021)", "ref_id": "BIBREF13" }, { "start": 265, "end": 285, "text": "Azunre et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "While some African languages have been included in multilingual language models, this coverage only scratches the surface of the number of spoken African languages. Furthermore, these languages always make up a minuscule percentage of the training set. For instance, amongst the 104 languages that mBERT was pretrained on, only 3 are African. 2 In XLM-R, there are only 8 African languages out of the 100 languages. In terms of dataset size, the story is the same. African languages make up 4.80 GB out of about 2395 GB that XLM-R was pretrained on, representing just 0.2% of the entire dataset. In mBERT, African languages make up just 0.24 GB out of the approximately 100 GB that the model was pretrained on. All of this points to an obvious need for increased representation of African languages in modern NLP tools for the over 1.3 billion speakers on the continent. 3 Pretrained language models have been shown to perform well when there is a lot of data (Liu et al., 2019), but some works have focused on using relatively smaller amounts of data. Martin et al. (2020) showed that it is possible to obtain state-of-the-art results with a French BERT model pretrained on small-scale diverse data. In another work, Micheli et al. (2020) showed that training a French BERT language model on 100 MB of data yields similar performance on question answering as models pretrained on larger datasets. Furthermore, Ortiz Su\u00e1rez et al. (2020) obtained state-of-the-art performance with ELMo (Peters et al., 2018) language models pretrained on less than 1 GB of Wikipedia text, and Zhang et al. (2020) show that RoBERTa language models (Liu et al., 2019) trained on 10 to 100 million tokens can encode most syntactic and semantic features in their learned text representations.", "cite_spans": [ { "start": 869, "end": 870, "text": "3", "ref_id": null }, { "start": 958, "end": 976, "text": "(Liu et al., 2019;", "ref_id": null }, { "start": 1195, "end": 1216, "text": "Micheli et al. (2020)", "ref_id": "BIBREF28" }, { "start": 1460, "end": 1481, "text": "(Peters et al., 2018)", "ref_id": "BIBREF34" }, { "start": 1550, "end": 1569, "text": "Zhang et al. (2020)", "ref_id": "BIBREF42" }, { "start": 1604, "end": 1622, "text": "(Liu et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A common theme among these works is their focus on monolingual language models. While it is possible to learn monolingual language models on smaller amounts of data, it remains to be seen if it is possible in the multilingual case. Our work is the first, to the best of our knowledge, that focuses on pretraining a multilingual language model solely on low-resource languages without any transfer from higher-resource languages. 
Our model is pretrained on 11 African languages: Afaan Oromoo, Amharic, Gahuza (a code-mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yor\u00f9b\u00e1. These languages all come from three language families: Niger-Congo, Afro-Asiatic and English Creole. We select these languages because they are the languages supported by the British Broadcasting Corporation (BBC) News, which was our main source of data. 4 We also obtain additional data from the Common Crawl Corpus for the languages available there, specifically Afaan Oromoo, Amharic, Hausa, Igbo, Somali and Swahili. Table 1 provides details about the languages used in pretraining our models.", "cite_spans": [ { "start": 828, "end": 829, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Size: The total size of our pretraining corpus is 0.94 GB (108.8 million tokens). In comparison, XLM-R was pretrained on about 2395 GB (164.0 billion tokens), and mBERT was trained on roughly 100 GB (12.8 billion tokens). 5 Following findings from Liu et al. (2019) that more data is always better for pretrained language modelling, our small corpus makes our task even more challenging, and one can already see that our model is at a disadvantage compared to XLM-R and mBERT. Our corpus contains approximately 5.45 million sentences and 108.8 million tokens. Table 2 presents more details about the dataset size for each language. It can be observed that languages like Swahili, Hausa and Somali have the most data, while languages like Tigrinya have very little data, with just about 12,000 sentences.", "cite_spans": [], "ref_spans": [ { "start": 564, "end": 571, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For each language we pretrained on that is also present in XLM-R or mBERT, we compare the size of that language in our dataset to its size in the pretraining corpora of mBERT and XLM-R. From the comparison details in Table 3, we can see that XLM-R always has more data for the languages present in both our pretraining corpus and theirs. In fact, on average, the amount of data per language is always at least two times larger in XLM-R. For mBERT, we can see that AfriBERTa has more data for Hausa and Yor\u00f9b\u00e1, which are present in both corpora. However, one would expect that, given that both languages are in the Latin script, there should be enough high-resource transfer to help them outperform our model.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We remove lines that are empty or only contain punctuation. Given that there is significant overlap between the African language corpora in Common Crawl and the BBC News data that we crawled, we perform extensive deduplication for each language by removing exactly matched sentences. We also enforce a minimum length restriction by only retaining sentences with more than 5 tokens. We observe that the quality of the dataset from Common Crawl is very low, confirming recent findings from Caswell et al. (2021). 
Hence, we manually clean the data as much as we can by removing texts in the wrong language, while trying to throw out as little data as possible.", "cite_spans": [ { "start": 486, "end": 507, "text": "Caswell et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing:", "sec_num": null }, { "text": "We train a transformer (Vaswani et al., 2017) with the standard masked language modelling objective of Devlin et al. (2019) without next sentence prediction. This is also the same approach used in XLM-R . We pretrain on text data containing all languages, sampling batches from different languages. We sample languages such that our model does not see the same language over several consecutive batches.", "cite_spans": [ { "start": 23, "end": 45, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF38" }, { "start": 103, "end": 123, "text": "Devlin et al. (2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "We utilize subword tokenization on the raw text data using SentencePiece (Kudo and Richardson, 2018) trained with a unigram language model (Kudo, 2018) . We sample training sentences from different languages for the tokenizer following the sampling method described in Conneau and Lample (2019) with \u03b1 = 0.3.", "cite_spans": [ { "start": 73, "end": 100, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF22" }, { "start": 139, "end": 151, "text": "(Kudo, 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "Pretraining: We take out varying amounts of evaluation sentences from each language's original monolingual dataset, depending on the language's size. Our total evaluation set containing all languages consists of roughly 440,000 sentences. We evaluate the perplexity on this dataset to measure language model performance. However, following Conneau et al. (2020), we continue pretraining even after validation perplexity stops decreasing. Effectively, we pretrain on around 0.94 GB of data and evaluate on around 0.08 GB of data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "We evaluate named entity recognition (NER) using the recently released MasakhaNER dataset (Adelani et al., 2021). The dataset covers the following ten languages: Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof and Yor\u00f9b\u00e1. The authors established strong baselines on the dataset ranging from simpler methods like CNN-BiLSTM-CRF to pretrained language models like mBERT and XLM-R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NER:", "sec_num": null }, { "text": "Text Classification: We use the news topic classification dataset from Hedderich et al. (2020) , which covers Hausa and Yoruba. The authors established strong transfer learning and distant supervision baselines. They find that both mBERT and XLM-R outperform simpler neural network baselines in few-shot and zero-shot settings.", "cite_spans": [ { "start": 71, "end": 94, "text": "Hedderich et al. (2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "NER:", "sec_num": null }, { "text": "All models are trained with the Huggingface Transformers library (Wolf et al., 2020) (v4.2.1) . In the following initial experiments, we pretrain each model for 60,000 steps and use a maximum sequence length of 512. 
We pretrain using a batch size of 32 and accumulate the gradients for 4 steps. Optimization is done using AdamW (Loshchilov and Hutter, 2017) with a learning rate of 1e-4 and 6000 linear warm-up steps. We report F1 scores on the NER dataset averaged over 3 runs with different random seeds. Following initial explorations, we found a vocabulary size of 40k, excluding special tokens, to yield good results across different model sizes, so we use this for initial experiments.", "cite_spans": [ { "start": 65, "end": 84, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF41" }, { "start": 328, "end": 356, "text": "(Loshchilov and Hutter, 2017", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 85, "end": 93, "text": "(v4.2.1)", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.4" }, { "text": "NER models are trained by adding a linear classification layer to the pretrained transformer model and fine-tuning all parameters. Following Adelani et al. (2021), we train for 50 epochs with a batch size of 16 and a learning rate of 5e-5, and also optimize with AdamW.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.4" }, { "text": "Text classification models are trained by adding a linear classification layer to the pretrained transformer model and fine-tuning all parameters. We train for 25 epochs with a batch size of 32, 100 warm-up steps and a learning rate of 5e-5, and optimize with AdamW as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.4" }, { "text": "In this section, we compare variants of AfriBERTa models to each other in a bid to understand how to pretrain multilingual language models in small data regimes. We pretrain variants from the point of view of model architecture, taking three factors into consideration: (i) model depth, (ii) number of attention heads and (iii) vocabulary size. We define performance as \"good transfer to downstream task\". Because the NER dataset covers more languages, we fine-tune and evaluate our models on it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Space Exploration", "sec_num": "4.1" }, { "text": "Model Depth: We compare models with 4, 6, 8 and 10 layers. For each model, we use 4 attention heads and adjust the size of the hidden units and feed-forward layers so that all models have approximately the same number of parameters. From preliminary experiments, models with more than 10 layers did not yield substantially better performance. This is expected, given the small size of the data. Because of this, coupled with computational constraints, we do not explore settings with more than 10 layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Space Exploration", "sec_num": "4.1" }, { "text": "As we can see from the results in Table 4, deeper models always outperform shallower models. However, the performance gain diminishes with size. For example, the gain from increasing the model from 4 layers to 6 layers is roughly 1 F1 point, while the gain from increasing from 6 layers to 10 layers is only \u223c0.4. This corroborates the recent scaling law findings from Kaplan et al. (2020), who showed that the performance of transformer language models improves predictably as long as data size and model depth are scaled in tandem; otherwise, there are diminishing returns.", "cite_spans": [ { "start": 386, "end": 406, "text": "Kaplan et al. 
(2020)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Design Space Exploration", "sec_num": "4.1" }, { "text": "In general, our results suggests that deeper models also work well when pretraining multilingual language models on small datasets. This follows previous works on understanding the cross-lingual ability of multilingual language models (K et al., 2019) , which have shown that deeper models have better cross-lingual performance. However, gains from increasing depth are relatively minimal because of the size of our corpus.", "cite_spans": [ { "start": 235, "end": 251, "text": "(K et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Design Space Exploration", "sec_num": "4.1" }, { "text": "For each layer size (4, 6, 8 and 10), we train models with three different numbers of attention heads: 2, 4 and 6. Again, initial experiments with more than 6 attention heads did not yield any better results, so we do not explore more than 6 heads. Results are presented in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Number of Attention Heads:", "sec_num": null }, { "text": "The results suggest that there is a diminishing return to the number of attention heads when the model is deep. Shallower models need more attention heads to attain competitive performance. However, when the model is deep enough, it is very competitive with as few as two attention heads. This suggests that results from recent works (K et al., 2019; Michel et al., 2019) , which suggest that transformers can do without a large number of attention heads, also hold true for multilingual language models on small datasets.", "cite_spans": [ { "start": 334, "end": 350, "text": "(K et al., 2019;", "ref_id": "BIBREF19" }, { "start": 351, "end": 371, "text": "Michel et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Number of Attention Heads:", "sec_num": null }, { "text": "Vocabulary Size: Previous works have suggested that on small datasets, one should employ a small vocabulary size (Sennrich and Zhang, 2019; Araabi and Monz, 2020) . However, it remains to be seen if this holds in the multilingual setting since several languages will be competing for vocabulary 2021. The best score for each language and overall best scores are in bold. We also report the model parameter size in parentheses.", "cite_spans": [ { "start": 113, "end": 139, "text": "(Sennrich and Zhang, 2019;", "ref_id": "BIBREF36" }, { "start": 140, "end": 162, "text": "Araabi and Monz, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Number of Attention Heads:", "sec_num": null }, { "text": "space and have found that increasing the vocabulary size improves multilingual performance. We evaluate our best model size on varying vocabulary sizes and report results in Table 6 . As we can see from the results, increasing the vocabulary size does not always yield good results on smaller datasets. While a small vocabulary size performs relatively poorly, medium sized vocabularies can sometimes outperform larger ones. 
Due to computational constraints, we selected a vocabulary size of 70k for the final models below.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 181, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Number of Attention Heads:", "sec_num": null }, { "text": "Final Model Selection: We release three AfriBERTa pretrained model sizes: small (4 layers), base (8 layers) and large (10 layers). Each model has 6 attention heads, 768 hidden units, a feed-forward size of 3072 and a maximum sequence length of 512. Their respective parameter sizes are 97 million, 111 million and 126 million. We use float16 operations to speed up training and reduce memory usage. Pretraining is done for 460,000 steps with 40,000 linear warm-up steps, after which the learning rate is decreased linearly. We pretrain with a batch size of 32 on 2 Nvidia V100 GPUs and accumulate the gradients for 8 steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of Attention Heads:", "sec_num": null }, { "text": "As we can see in Table 7, even the AfriBERTa small model, which is almost three times smaller than XLM-R, obtains competitive NER results across all languages, trailing XLM-R by less than 3 F1 points. This represents a great opportunity for deployment in resource-constrained scenarios, which are common for applications in low-resource languages. Our best-performing model is AfriBERTa large, which outperforms mBERT and is very competitive with XLM-R across all languages. AfriBERTa large even outperforms both models on several languages that all three models were pretrained on, such as Hausa, Amharic and Swahili.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 7", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "NER Comparisons", "sec_num": "4.2" }, { "text": "It should be noted that AfriBERTa large achieves all this with less than half of the number of parameters of XLM-R and about 45M fewer parameters than mBERT. Furthermore, we can see that our models perform very well on languages that were not part of our pretraining corpus, such as Luo, Wolof and Luganda. This demonstrates their strong cross-lingual capabilities, despite their smaller parameter counts and pretraining corpus size. A notable observation is that both mBERT and XLM-R outperform AfriBERTa on Nigerian Pidgin, despite not being trained on the language. This is likely because of the language's high similarity with English. Nigerian Pidgin is an English Creole, meaning it borrows and shares a lot of its properties (including words) with English. Since both mBERT and XLM-R were pretrained on very large amounts of English data, it is no surprise that they perform so well on Nigerian Pidgin. In summary, the performance of our small, base and large models is comparable to mBERT and XLM-R across all languages, despite being pretrained on a substantially smaller corpus and having fewer model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NER Comparisons", "sec_num": "4.2" }, { "text": "We also compare our best model (AfriBERTa large) to XLM-R base and mBERT on text classification. As we can see from the results in Table 8, AfriBERTa large clearly outperforms both XLM-R and mBERT by over 10 F1 points on Yor\u00f9b\u00e1 and up to 7 F1 points on Hausa. Results show that mBERT slightly outperforms XLM-R on Yor\u00f9b\u00e1, most likely because it was pretrained on it, while XLM-R was not. XLM-R also outperforms mBERT on Hausa, presumably for the same reason. 
It should be noted that our model was pretrained on around half as much Hausa data as XLM-R, but still outperforms it substantially. An important observation is that AfriBERTa outperforms both XLM-R and mBERT on text classification, but not by as much on the NER task. This suggests that some downstream tasks may benefit more from larger multilingual models with high-resource transfer than other tasks do. We leave this interesting observation for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Classification Comparisons", "sec_num": "4.3" }, { "text": "In this section, we discuss some other contributions of this work. At a high level, AfriBERTa presents the first evidence that multilingual language models are viable with very little training data. This offers numerous benefits for the NLP community, especially for low-resource languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Opportunities for Smaller Curated Datasets: Our empirical results suggest that state-of-the-art NLP methods like multilingual language models can be made more accessible for low-resource languages. Caswell et al. (2021) recently showed that web-crawled multilingual corpora available for many languages, especially low-resource ones, are usually of very low quality. They found issues such as wrong-language content, erroneous language codes and low-quality sentences. Our work opens the door to competitive multilingual language models trained on smaller curated datasets for low-resource languages.", "cite_spans": [ { "start": 198, "end": 219, "text": "Caswell et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Another possible benefit of these smaller curated datasets is that they would tend to contain local content, as opposed to the foreign content found in Wikipedia and other relatively larger datasets for these languages. Models trained on such datasets could potentially be more useful to speakers of the languages, given that they would be trained on data with local context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Our work challenges the commonly held belief in the NLP community that lower-resource languages need higher-resource languages in multilingual language models. Instead, we empirically demonstrate that pretraining on similar low-resource languages in a multilingual setting may sometimes be better than pretraining on high-resource and low-resource languages together. This approach should be considered in future work, especially since there have been recent findings (Wang et al., 2020) that low-resource languages also experience negative interference in multilingual models.", "cite_spans": [ { "start": 471, "end": 489, "text": "(Wang et al., 2020", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Strength of Language Similarity:", "sec_num": null }, { "text": "Potential Ethical Benefits: Recent works have called for more consideration of ethics and related concerns in the development of pretrained language models (Bender et al., 2021). These concerns have ranged from environmental and financial costs (Strubell et al., 2019) to societal bias (Kurita et al., 2019; Basta et al., 2019). We believe our work offers the potential to address some of these concerns, while developing language technology for under-served languages. 
A comparison of model and data sizes of common multilingual models is presented in Table 9 . Smaller dataset sizes, like ours, mean that these datasets can more easily be cleaned, filtered, analyzed and possibly de-biased in comparison to the humongous data sizes of larger language models.", "cite_spans": [ { "start": 157, "end": 178, "text": "(Bender et al., 2021)", "ref_id": "BIBREF7" }, { "start": 241, "end": 264, "text": "(Strubell et al., 2019)", "ref_id": "BIBREF37" }, { "start": 282, "end": 303, "text": "(Kurita et al., 2019;", "ref_id": "BIBREF23" }, { "start": 304, "end": 323, "text": "Basta et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 550, "end": 557, "text": "Table 9", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Strength of Language Similarity:", "sec_num": null }, { "text": "We have also shown that smaller-sized models can outperform larger models, despite using smaller training resources. This represents a potential for reduced environmental impact. While \"low-resource\" is commonly used in the NLP community to describe a lack of data resources, Nekoto et al. (2020) have argued that \"low-resource\" also includes a wide range of societal problems, including computational constraints. Thus, our work embodies the broader spirit of \"lowresource\", as we develop more efficient models on smaller data sizes for under-served languages.", "cite_spans": [ { "start": 276, "end": 296, "text": "Nekoto et al. (2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Strength of Language Similarity:", "sec_num": null }, { "text": "Improving the Representation of African Languages in Modern NLP tools: As discussed in section 2, there is very poor representation of African languages in modern NLP tools. Recently, there have been significant efforts towards closing this gap (Alabi et al., 2019; Ogueji and Ahia, 2019; Nekoto et al., 2020; Ahia and Ogueji, 2020; Fan et al., 2020; Azunre et al., 2021; Dossou and Sabry, 2021; Adelani et al., 2021) . Our work follows along this path, as there is a need to build language technologies for the over 1.3 billion people on the continent. Besides showing that multilingual language models are viable on low-resource African languages with small training data, we also introduce the first language models for four of these languages: Kinyarwanda, Kirundi, Nigerian Pidgin and Tigrinya. These are four languages with over 50 million speakers (Eberhard et al., 2019) who are active users of digital tools. However, these languages have noticeably deficient support in NLP technologies. 
Our work represents an important step towards improving this.", "cite_spans": [ { "start": 245, "end": 265, "text": "(Alabi et al., 2019;", "ref_id": "BIBREF3" }, { "start": 266, "end": 288, "text": "Ogueji and Ahia, 2019;", "ref_id": "BIBREF31" }, { "start": 289, "end": 309, "text": "Nekoto et al., 2020;", "ref_id": null }, { "start": 310, "end": 332, "text": "Ahia and Ogueji, 2020;", "ref_id": "BIBREF2" }, { "start": 333, "end": 350, "text": "Fan et al., 2020;", "ref_id": "BIBREF16" }, { "start": 351, "end": 371, "text": "Azunre et al., 2021;", "ref_id": null }, { "start": 372, "end": 395, "text": "Dossou and Sabry, 2021;", "ref_id": "BIBREF13" }, { "start": 396, "end": 417, "text": "Adelani et al., 2021)", "ref_id": null }, { "start": 855, "end": 878, "text": "(Eberhard et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Strength of Language Similarity:", "sec_num": null }, { "text": "In this work, we introduced AfriBERTa, a multilingual language model pretrained on less than 1 GB of data from 11 African languages. We show that this model is competitive with models pretrained on larger datasets and even outperforms them on some languages. Our comprehensive experiments also highlight important factors to consider when pretraining multilingual language models on smaller datasets. More importantly, we highlight some practical benefits of viable language models on smaller datasets. Finally, we release code, pretrained models and the dataset to stimulate further work on multilingual language models for low-resource languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "One of the languages (Gahuza) is counted twice because it is a code-mixed language consisting of Kinyarwanda and Kirundi.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/google-research/bert/ blob/master/multilingual.md 3 https://www.worldometers.info/ world-population/africa-population/ (accessed on February 19, 2021)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.bbc.co.uk/ws/languages(scraped up to January 17, 2021)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/mayhewsw/ multilingual-data-stats/tree/main/wiki", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada and an AI for Social Good grant from the Waterloo AI Institute; computational resources were provided by Compute Ontario and Compute Canada.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Aremu Anuoluwapo, Catherine Gitau, Derguene Mbaye, Jesujoba O. 
Alabi, Seid Muhie Yimam, Tajuddeen Gwadabe, Ignatius Ezeani, Rubungo Andre Niyongabo", "authors": [ { "first": "Jade", "middle": [ "Z" ], "last": "David Ifeoluwa Adelani", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Abbott", "suffix": "" }, { "first": "", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "D", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Julia", "middle": [], "last": "'souza", "suffix": "" }, { "first": "Constantine", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "Chester", "middle": [], "last": "Lignos", "suffix": "" }, { "first": "Happy", "middle": [], "last": "Palen-Michel", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Buzaaba", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Rijhwani", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Shamsuddeen", "middle": [ "Hassan" ], "last": "Israel Abebe Azime", "suffix": "" }, { "first": "Chris", "middle": [ "Chinenye" ], "last": "Muhammad", "suffix": "" }, { "first": "Joyce", "middle": [], "last": "Emezue", "suffix": "" }, { "first": "Perez", "middle": [], "last": "Nakatumba-Nabende", "suffix": "" }, { "first": "", "middle": [], "last": "Ogayo", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ifeoluwa Adelani, Jade Z. Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constan- tine Lignos, Chester Palen-Michel, Happy Buza- aba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen Has- san Muhammad, Chris Chinenye Emezue, Joyce Nakatumba-Nabende, Perez Ogayo, Aremu An- uoluwapo, Catherine Gitau, Derguene Mbaye, Je- sujoba O. 
Alabi, Seid Muhie Yimam, Tajud- deen Gwadabe, Ignatius Ezeani, Rubungo An- dre Niyongabo, Jonathan Mukiibi, Verrah Otiende, Iroro Orife, Davis David, Samba Ngom, Tosin P.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Ayodele Awokoya, Mouhamadane Mboup, Dibora Gebreyohannes", "authors": [ { "first": "Paul", "middle": [], "last": "Adewumi", "suffix": "" }, { "first": "Mofetoluwa", "middle": [], "last": "Rayson", "suffix": "" }, { "first": "Gerald", "middle": [], "last": "Adeyemi", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Muriuki", "suffix": "" }, { "first": "Chiamaka", "middle": [], "last": "Anebi", "suffix": "" }, { "first": "Nkiruka", "middle": [], "last": "Chukwuneke", "suffix": "" }, { "first": "Eric", "middle": [ "Peter" ], "last": "Odu", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Wairagala", "suffix": "" }, { "first": "Clemencia", "middle": [], "last": "Oyerinde", "suffix": "" }, { "first": "Tobius", "middle": [], "last": "Siro", "suffix": "" }, { "first": "Temilola", "middle": [], "last": "Saul Bateesa", "suffix": "" }, { "first": "Yvonne", "middle": [], "last": "Oloyede", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Wambui", "suffix": "" }, { "first": "Deborah", "middle": [], "last": "Akinode", "suffix": "" }, { "first": "Maurice", "middle": [], "last": "Nabagereka", "suffix": "" }, { "first": "", "middle": [], "last": "Katusiime", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adewumi, Paul Rayson, Mofetoluwa Adeyemi, Ger- ald Muriuki, Emmanuel Anebi, Chiamaka Chuk- wuneke, Nkiruka Odu, Eric Peter Wairagala, Samuel Oyerinde, Clemencia Siro, Tobius Saul Bateesa, Temilola Oloyede, Yvonne Wambui, Victor Akin- ode, Deborah Nabagereka, Maurice Katusiime, Ayo- dele Awokoya, Mouhamadane Mboup, Dibora Ge- breyohannes, Henok Tilaye, Kelechi Nwaike, De- gaga Wolde, Abdoulaye Faye, Blessing Sibanda, Orevaoghene Ahia, Bonaventure F. P. Dossou, Kelechi Ogueji, Thierno Ibrahima Diop, Abdoulaye Diallo, Adewale Akinfaderin, Tendai Marengereke, and Salomey Osei. 2021. MasakhaNER: Named entity recognition for African languages. CoRR, abs/2103.11811.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Towards supervised and unsupervised neural machine translation baselines for Nigerian Pidgin. CoRR, abs", "authors": [ { "first": "Orevaoghene", "middle": [], "last": "Ahia", "suffix": "" }, { "first": "Kelechi", "middle": [], "last": "Ogueji", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Orevaoghene Ahia and Kelechi Ogueji. 2020. To- wards supervised and unsupervised neural machine translation baselines for Nigerian Pidgin. CoRR, abs/2003.12660.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Massive vs. curated word embeddings for low-resourced languages. the case of Yor\u00f9b\u00e1 and Twi", "authors": [ { "first": "O", "middle": [], "last": "Jesujoba", "suffix": "" }, { "first": "Kwabena", "middle": [], "last": "Alabi", "suffix": "" }, { "first": "David", "middle": [ "Ifeoluwa" ], "last": "Amponsah-Kaakyire", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Adelani", "suffix": "" }, { "first": "", "middle": [], "last": "Espa\u00f1a-Bonet", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesujoba O. 
Alabi, Kwabena Amponsah-Kaakyire, David Ifeoluwa Adelani, and Cristina Espa\u00f1a-Bonet. 2019. Massive vs. curated word embeddings for low-resourced languages. the case of Yor\u00f9b\u00e1 and Twi. CoRR, abs/1912.02481.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Optimizing transformer for low-resource neural machine translation", "authors": [ { "first": "Ali", "middle": [], "last": "Araabi", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3429--3435", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.304" ] }, "num": null, "urls": [], "raw_text": "Ali Araabi and Christof Monz. 2020. Optimizing transformer for low-resource neural machine transla- tion. In Proceedings of the 28th International Con- ference on Computational Linguistics, pages 3429- 3435, Barcelona, Spain (Online). International Com- mittee on Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Evaluating the underlying gender bias in contextualized word embeddings", "authors": [ { "first": "Christine", "middle": [], "last": "Basta", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Noe", "middle": [], "last": "Casas", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", "volume": "", "issue": "", "pages": "33--39", "other_ids": { "DOI": [ "10.18653/v1/W19-3805" ] }, "num": null, "urls": [], "raw_text": "Christine Basta, Marta R. Costa-juss\u00e0, and Noe Casas. 2019. Evaluating the underlying gender bias in con- textualized word embeddings. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 33-39, Florence, Italy. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On the dangers of stochastic parrots: Can language models be too big?", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Gebru", "suffix": "" }, { "first": "Angelina", "middle": [], "last": "Mcmillan-Major", "suffix": "" }, { "first": "Shmargaret", "middle": [], "last": "Shmitchell", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21", "volume": "", "issue": "", "pages": "610--623", "other_ids": { "DOI": [ "10.1145/3442188.3445922" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Confer- ence on Fairness, Accountability, and Transparency, FAccT '21, page 610-623, New York, NY, USA. As- sociation for Computing Machinery.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The geographic diversity of NLP conferences", "authors": [ { "first": "Andrew", "middle": [], "last": "Caines", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Caines. 2019. 
The geographic diversity of NLP conferences.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "AfriVEC: Word embedding models for African languages. 
case study of Fon and Nobiin", "authors": [ { "first": "F", "middle": [ "P" ], "last": "Bonaventure", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Dossou", "suffix": "" }, { "first": "", "middle": [], "last": "Sabry", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonaventure F. P. Dossou and Mohammed Sabry. 2021. AfriVEC: Word embedding models for African lan- guages. case study of Fon and Nobiin. CoRR, abs/2103.05132.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Ethnologue: Languages of the worlds", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Eberhard", "suffix": "" }, { "first": "Gary", "middle": [ "F" ], "last": "Simons", "suffix": "" }, { "first": "Charles", "middle": [ "D" ], "last": "Fenning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Eberhard, Gary F. Simons, and Charles D. Fenning. 2019. Ethnologue: Languages of the worlds. (twenty second edition).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Transferred embeddings for Igbo similarity, analogy, and diacritic restoration tasks", "authors": [ { "first": "Ignatius", "middle": [], "last": "Ezeani", "suffix": "" }, { "first": "Ikechukwu", "middle": [], "last": "Onyenwe", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hepple", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Workshop on Semantic Deep Learning", "volume": "", "issue": "", "pages": "30--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ignatius Ezeani, Ikechukwu Onyenwe, and Mark Hep- ple. 2018. Transferred embeddings for Igbo similar- ity, analogy, and diacritic restoration tasks. In Pro- ceedings of the Third Workshop on Semantic Deep Learning, pages 30-38, Santa Fe, New Mexico. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Beyond English-centric multilingual machine translation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Bhosale", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "El-Kishky", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Mandeep", "middle": [], "last": "Baines", "suffix": "" }, { "first": "Onur", "middle": [], "last": "Celebi", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Vitaliy", "middle": [], "last": "Liptchinsky", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Man- deep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vi- taliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Be- yond English-centric multilingual machine transla- tion. CoRR, abs/2010.11125.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Transfer learning and distant supervision for multilingual transformer models: A study on African languages", "authors": [ { "first": "Michael", "middle": [ "A" ], "last": "Hedderich", "suffix": "" }, { "first": "David", "middle": [], "last": "Adelani", "suffix": "" }, { "first": "Dawei", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Jesujoba", "middle": [], "last": "Alabi", "suffix": "" }, { "first": "Udia", "middle": [], "last": "Markus", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2580--2591", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.204" ] }, "num": null, "urls": [], "raw_text": "Michael A. Hedderich, David Adelani, Dawei Zhu, Je- sujoba Alabi, Udia Markus, and Dietrich Klakow. 2020. Transfer learning and distant supervision for multilingual transformer models: A study on African languages. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 2580-2591, Online. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "authors": [ { "first": "Pratik", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Sebastin", "middle": [], "last": "Santy", "suffix": "" }, { "first": "Amar", "middle": [], "last": "Budhiraja", "suffix": "" }, { "first": "Kalika", "middle": [], "last": "Bali", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6282--6293", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.560" ] }, "num": null, "urls": [], "raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Cross-lingual ability of multilingual BERT: an empirical study", "authors": [ { "first": "K", "middle": [], "last": "Karthikeyan", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2019. Cross-lingual ability of mul- tilingual BERT: an empirical study. CoRR, abs/1912.07840.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Scaling laws for neural language models", "authors": [ { "first": "Jared", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Mccandlish", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Henighan", "suffix": "" }, { "first": "Tom", "middle": [ "B" ], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Chess", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Gray", "suffix": "" }, { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. CoRR, abs/2001.08361.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "66--75", "other_ids": { "DOI": [ "10.18653/v1/P18-1007" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo. 2018. 
Subword regularization: Improving neural network translation models with multiple sub- word candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 66-75, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Measuring bias in contextualized word representations", "authors": [ { "first": "Keita", "middle": [], "last": "Kurita", "suffix": "" }, { "first": "Nidhi", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Ayush", "middle": [], "last": "Pareek", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", "volume": "", "issue": "", "pages": "166--172", "other_ids": { "DOI": [ "10.18653/v1/W19-3823" ] }, "num": null, "urls": [], "raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W. Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceed- ings of the First Workshop on Gender Bias in Natu- ral Language Processing, pages 166-172, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Fixing weight decay regularization in Adam", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in Adam. 
CoRR, abs/1711.05101.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot", "authors": [ { "first": "Louis", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Muller", "suffix": "" }, { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Yoann", "middle": [], "last": "Dupont", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7203--7219", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.645" ] }, "num": null, "urls": [], "raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary, \u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Are sixteen heads really better than one?", "authors": [ { "first": "Paul", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Ad- vances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "On the importance of pre-training data volume for compact language models", "authors": [ { "first": "Vincent", "middle": [], "last": "Micheli", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Martin D'hoffschmidt", "suffix": "" }, { "first": "", "middle": [], "last": "Fleuret", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7853--7858", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.632" ] }, "num": null, "urls": [], "raw_text": "Vincent Micheli, Martin d'Hoffschmidt, and Fran\u00e7ois Fleuret. 2020. On the importance of pre-training data volume for compact language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7853-7858, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013. 
Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp \u00d6ktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory research for low-resourced machine translation: A case study in African languages", "authors": [ { "first": "Wilhelmina", "middle": [], "last": "Nekoto", "suffix": "" }, { "first": "Vukosi", "middle": [], "last": "Marivate", "suffix": "" }, { "first": "Tshinondiwa", "middle": [], "last": "Matsila", "suffix": "" }, { "first": "Timi", "middle": [], "last": "Fasubaa", "suffix": "" }, { "first": "Taiwo", "middle": [], "last": "Fagbohungbe", "suffix": "" }, { "first": "Shamsuddeen", "middle": [], "last": "Solomon Oluwole Akinola", "suffix": "" }, { "first": "Salomon", "middle": [ "Kabongo" ], "last": "Muhammad", "suffix": "" }, { "first": "Salomey", "middle": [], "last": "Kabenamualu", "suffix": "" }, { "first": "Freshia", "middle": [], "last": "Osei", "suffix": "" }, { "first": "Rubungo", "middle": [ "Andre" ], "last": "Sackey", "suffix": "" }, { "first": "Ricky", "middle": [], "last": "Niyongabo", "suffix": "" }, { "first": "Perez", "middle": [], "last": "Macharm", "suffix": "" }, { "first": "Orevaoghene", "middle": [], "last": "Ogayo", "suffix": "" }, { "first": "Musie", "middle": [], "last": "Ahia", "suffix": "" }, { "first": "Mofetoluwa", "middle": [], "last": "Meressa Berhe", "suffix": "" }, { "first": "Masabata", "middle": [], "last": "Adeyemi", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Mokgesi-Selinga", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Okegbemi", "suffix": "" }, { "first": "Kolawole", "middle": [], "last": "Martinus", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Tajudeen", "suffix": "" }, { "first": "Kelechi", "middle": [], "last": "Degila", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Ogueji", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Siminyu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "Jamiil Toure", "middle": [], "last": "Webster", "suffix": "" }, { "first": "Jade", "middle": [], "last": "Ali", "suffix": "" }, { "first": "Iroro", "middle": [], "last": "Abbott", "suffix": "" }, { "first": "Ignatius", "middle": [], "last": "Orife", "suffix": "" }, { "first": "", "middle": [], "last": "Ezeani", "suffix": "" }, { "first": "Abdulkadir", "middle": [], "last": "Idris", "suffix": "" }, { "first": "Herman", "middle": [], "last": "Dangana", "suffix": "" }, { "first": "Hady", "middle": [], "last": "Kamper", "suffix": "" }, { "first": "Goodness", "middle": [], "last": "Elsahar", "suffix": "" }, { "first": "Ghollah", "middle": [], "last": "Duru", "suffix": "" }, { "first": "Murhabazi", "middle": [], "last": "Kioko", "suffix": "" }, { "first": "", "middle": [], "last": "Espoir", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Elan Van Biljon", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Whitenack", "suffix": "" }, { "first": "", "middle": [], "last": "Onyefuluchi", "suffix": "" } ], "year": null, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2144--2160", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.195" ] }, "num": null, "urls": [], "raw_text": "Wilhelmina 
Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muham- mad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsahar, Good- ness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp \u00d6ktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory research for low-resourced machine translation: A case study in African languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2144-2160, Online. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Pidgin-UNMT: Unsupervised neural machine translation from West African Pidgin to English", "authors": [ { "first": "Kelechi", "middle": [], "last": "Ogueji", "suffix": "" }, { "first": "Orevaoghene", "middle": [], "last": "Ahia", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelechi Ogueji and Orevaoghene Ahia. 2019. Pidgin- UNMT: Unsupervised neural machine translation from West African Pidgin to English. CoRR, abs/1912.03444.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A monolingual approach to contextualized word embeddings for mid-resource languages", "authors": [ { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1703--1714", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.156" ] }, "num": null, "urls": [], "raw_text": "Pedro Javier Ortiz Su\u00e1rez, Laurent Romary, and Beno\u00eet Sagot. 2020. A monolingual approach to contextual- ized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703-1714, Online. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. 
Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Improving language understanding by generative pre-training. Technical report", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training. Technical re- port, OpenAI.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Revisiting lowresource neural machine translation: A case study", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Biao", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "211--221", "other_ids": { "DOI": [ "10.18653/v1/P19-1021" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich and Biao Zhang. 2019. Revisiting low- resource neural machine translation: A case study. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 211- 221, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Energy and policy considerations for deep learning in NLP", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Ananya", "middle": [], "last": "Ganesh", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3645--3650", "other_ids": { "DOI": [ "10.18653/v1/P19-1355" ] }, "num": null, "urls": [], "raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 
2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "On negative interference in multilingual models: Findings and a meta-learning treatment", "authors": [ { "first": "Zirui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zachary", "middle": [ "C" ], "last": "Lipton", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4438--4450", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.359" ] }, "num": null, "urls": [], "raw_text": "Zirui Wang, Zachary C. Lipton, and Yulia Tsvetkov. 2020. On negative interference in multilingual mod- els: Findings and a meta-learning treatment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4438-4450, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "CCNet: Extracting high quality monolingual datasets from web crawl data", "authors": [ { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Marie-Anne", "middle": [], "last": "Lachaux", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4003--4012", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzm\u00e1n, Ar- mand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 4003-4012, Marseille, France. 
European Language Resources Association.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "", "middle": [], "last": "Drame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "When do you need billions of words of pretraining data?", "authors": [ { "first": "Yian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Haau-Sing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yian Zhang, Alex Warstadt, Haau-Sing Li, and Samuel R. Bowman. 2020. When do you need billions of words of pretraining data? CoRR, abs/2011.04946.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "content": "
Information: For each language, its family, number of speakers (Eberhard et al., 2019), and the regions in Africa where it is spoken.
Language | # Sent. | # Tok. | Size (GB)
Afaan Oromoo | 410,840 | 6,870,959 | 0.051
Amharic | 525,024 | 1,303,086 | 0.213
Gahuza | 131,952 | 3,669,538 | 0.026
Hausa | 1,282,996 | 27,889,299 | 0.150
Igbo | 337,081 | 6,853,500 | 0.042
Nigerian Pidgin | 161,842 | 8,709,498 | 0.048
Somali | 995,043 | 27,332,348 | 0.170
Swahili | 1,442,911 | 30,053,834 | 0.185
Tigrinya | 12,075 | 280,397 | 0.027
Yor\u00f9b\u00e1 | 149,147 | 4,385,797 | 0.027
Total | 5,448,911 | 108,800,600 | 0.939
", "text": "Language", "num": null, "type_str": "table" }, "TABREF2": { "html": null, "content": "", "text": "Dataset Size: Size of each language in the dataset covering numbers of sentences, tokens and uncompressed disk size.", "num": null, "type_str": "table" }, "TABREF4": { "html": null, "content": "
", "text": "Comparing Sizes Across Models: Comparison of the dataset sizes (GB) of languages present in XLM-R, mBERT and AfriBERTa. \"-\" indicates language was not present in model's pretraining corpus.", "num": null, "type_str": "table" }, "TABREF5": { "html": null, "content": "
# Layers | # Params | amh | hau | ibo | kin | lug | luo | pcm | swa | wol | yor | avg
4 | 74.8M | 62.18 | 89.66 | 87.03 | 69.29 | 67.23 | 59.00 | 83.57 | 83.89 | 77.04 | 67.02 | 75.97
6 | 74.7M | 61.59 | 90.34 | 85.81 | 72.76 | 66.39 | 61.43 | 86.27 | 84.02 | 76.61 | 68.54 | 76.91
8 | 74.6M | 62.04 | 90.96 | 86.33 | 74.00 | 68.66 | 60.96 | 84.43 | 84.16 | 76.11 | 67.38 | 77.00
10 | 74.3M | 62.14 | 90.69 | 87.36 | 75.74 | 67.87 | 60.59 | 84.79 | 84.70 | 76.17 | 67.51 | 77.27
", "text": "", "num": null, "type_str": "table" }, "TABREF6": { "html": null, "content": "
# Layers | # Att. Heads | # Params | amh | hau | ibo | kin | lug | luo | pcm | swa | wol | yor | avg
4 | 2 | 60.1M | 58.23 | 88.78 | 84.63 | 71.28 | 65.68 | 56.91 | 83.84 | 82.44 | 76.69 | 64.64 | 74.99
4 | 4 | 60.1M | 60.09 | 89.34 | 87.08 | 72.95 | 68.25 | 60.10 | 84.08 | 83.17 | 76.29 | 66.73 | 76.44
4 | 6 | 60.1M | 60.26 | 89.49 | 86.01 | 72.69 | 67.82 | 59.85 | 84.68 | 83.73 | 76.22 | 67.66 | 76.46
6 | 2 | 74.3M | 60.54 | 89.72 | 87.25 | 72.68 | 70.23 | 59.98 | 84.52 | 83.25 | 76.00 | 67.00 | 76.74
6 | 4 | 74.3M | 63.29 | 90.19 | 86.05 | 74.26 | 68.58 | 59.23 | 84.74 | 83.46 | 77.62 | 67.04 | 76.80
6 | 6 | 74.3M | 60.38 | 90.86 | 86.70 | 73.12 | 68.54 | 61.68 | 84.59 | 82.80 | 79.02 | 68.48 | 77.31
8 | 2 | 88.5M | 60.32 | 90.55 | 85.32 | 75.38 | 69.89 | 62.73 | 85.50 | 83.51 | 79.07 | 68.09 | 77.78
8 | 4 | 88.5M | 61.90 | 90.79 | 86.67 | 74.28 | 68.45 | 61.57 | 85.64 | 83.88 | 78.48 | 70.16 | 77.77
8 | 6 | 88.5M | 60.92 | 90.16 | 86.95 | 74.71 | 70.66 | 60.75 | 85.48 | 84.87 | 78.04 | 71.16 | 78.09
10 | 2 | 102.6M | 59.87 | 90.78 | 87.10 | 73.73 | 66.29 | 60.03 | 85.04 | 83.47 | 81.12 | 69.06 | 77.40
10 | 4 | 102.6M | 63.95 | 91.33 | 87.11 | 75.24 | 68.96 | 63.36 | 85.66 | 84.67 | 74.60 | 69.27 | 77.80
10 | 6 | 102.6M | 63.94 | 90.54 | 87.39 | 75.90 | 69.19 | 61.73 | 85.77 | 84.66 | 75.64 | 69.48 | 77.81
", "text": "Effect of Number of Layers: NER dev F1 scores (averaged over three different random seeds) on each language for models with different layer depth, but same number of parameters. The sizes of the embedding and feed-foward layers are adjusted such that feed-foward is always approximately 4 times embedding size. The highest F1-score per language is underlined, while the highest overall average is in bold.", "num": null, "type_str": "table" }, "TABREF7": { "html": null, "content": "", "text": "", "num": null, "type_str": "table" }, "TABREF8": { "html": null, "content": "
# Layers | # Att. Heads | Vocab Size | # Params | amh | hau | ibo | kin | lug | luo | pcm | swa | wol | yor | avg
8 | 6 | 25k | 76.9M | 60.56 | 89.96 | 85.84 | 73.23 | 69.67 | 61.86 | 85.11 | 84.34 | 75.40 | 68.35 | 77.09
8 | 6 | 40k | 88.5M | 60.92 | 90.16 | 86.95 | 74.71 | 70.66 | 60.75 | 85.48 | 84.87 | 78.04 | 71.16 | 78.09
8 | 6 | 55k | 99.9M | 63.65 | 90.17 | 87.28 | 72.47 | 67.47 | 61.49 | 85.59 | 85.09 | 77.56 | 69.06 | 77.35
8 | 6 | 70k | 111.5M | 66.17 | 91.25 | 87.74 | 77.44 | 68.29 | 59.91 | 87.00 | 87.05 | 77.49 | 68.82 | 78.33
8 | 6 | 85k | 123.1M | 62.35 | 90.42 | 87.44 | 77.01 | 68.20 | 61.98 | 86.46 | 85.87 | 72.84 | 70.14 | 77.82
", "text": "", "num": null, "type_str": "table" }, "TABREF9": { "html": null, "content": "
Language | In mBERT? | In XLM-R? | In AfriBERTa? | CNN-BiLSTM-CRF | mBERT (172M) | XLM-R base (270M) | AfriBERTa small (97M) | AfriBERTa base (111M) | AfriBERTa large (126M)
amh | no | yes | yes | 52.89 | 0.0 | 70.96 | 67.90 | 71.80 | 73.82
hau | no | yes | yes | 83.70 | 87.34 | 89.44 | 89.01 | 90.10 | 90.17
ibo | no | no | yes | 78.48 | 85.11 | 84.51 | 86.63 | 86.70 | 87.38
kin | no | no | yes | 64.61 | 70.98 | 73.93 | 69.91 | 73.22 | 73.78
lug | no | no | no | 74.31 | 80.56 | 80.71 | 76.44 | 79.30 | 78.85
luo | no | no | no | 66.42 | 72.65 | 75.14 | 67.31 | 70.63 | 70.23
pcm | no | no | yes | 66.43 | 87.78 | 87.39 | 82.92 | 84.87 | 85.70
swa | yes | yes | yes | 79.26 | 86.37 | 87.55 | 85.68 | 88.00 | 87.96
wol | no | no | no | 60.43 | 66.10 | 64.38 | 60.10 | 61.82 | 61.81
yor | yes | no | yes | 67.07 | 78.64 | 77.58 | 76.08 | 79.36 | 81.32
avg | - | - | - | 69.36 | 71.55 | 79.16 | 76.20 | 78.60 | 79.10
avg (excl. amh) | - | - | - | 71.19 | 79.50 | 80.07 | 77.12 | 79.36 | 79.69
", "text": "Effect of Vocabulary Size: NER dev F1 scores (averaged over three different random seeds) on the best model size with varying vocabulary sizes. The highest overall average F1-score is in bold.", "num": null, "type_str": "table" }, "TABREF10": { "html": null, "content": "", "text": "Comparison of NER Results: F1-scores on the test sets of each language. XLM-R and mBERT results obtained from Adelani et al.", "num": null, "type_str": "table" }, "TABREF12": { "html": null, "content": "
", "text": "Comparison of Text Classification Results: F1-scores on the test sets. The best score for each language is in bold.", "num": null, "type_str": "table" }, "TABREF14": { "html": null, "content": "
", "text": "Comparing Sizes: Comparison of datasets and model sizes between XLM-R, mBERT and Afri-BERTa.", "num": null, "type_str": "table" } } } }