{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:28.058719Z" }, "title": "IndT5: A Text-to-Text Transformer for 10 Indigenous Languages", "authors": [ { "first": "El", "middle": [], "last": "Moatez", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Billah", "middle": [], "last": "Nagoudi", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing Lab", "institution": "", "location": {} }, "email": "" }, { "first": "Wei-Rui", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing Lab", "institution": "", "location": {} }, "email": "" }, { "first": "Muhammad", "middle": [], "last": "Abdul-Mageed", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing Lab", "institution": "", "location": {} }, "email": "muhammad.mageed@ubc.ca" }, { "first": "Hasan", "middle": [], "last": "Cavusoglu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of British Columbia", "location": {} }, "email": "2cavusoglu@sauder.ubc.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Transformer language models have become fundamental components of natural language processing based pipelines. Although several Transformer models have been introduced to serve many languages, there is a shortage of models pre-trained for low-resource and Indigenous languages. In this work, we introduce IndT5, the first Transformer language model for Indigenous languages. To train IndT5, we build IndCorpus-a new dataset for ten Indigenous languages and Spanish. We also present the application of IndT5 to machine translation by investigating different approaches to translate between Spanish and the Indigenous languages as part of our contribution to the AmericasNLP 2021 Shared Task on Open Machine Translation. IndT5 and IndCorpus are publicly available for research. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Transformer language models have become fundamental components of natural language processing based pipelines. Although several Transformer models have been introduced to serve many languages, there is a shortage of models pre-trained for low-resource and Indigenous languages. In this work, we introduce IndT5, the first Transformer language model for Indigenous languages. To train IndT5, we build IndCorpus-a new dataset for ten Indigenous languages and Spanish. We also present the application of IndT5 to machine translation by investigating different approaches to translate between Spanish and the Indigenous languages as part of our contribution to the AmericasNLP 2021 Shared Task on Open Machine Translation. IndT5 and IndCorpus are publicly available for research. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Indigenous languages are starting to attract attention in the field of natural language processing (NLP), with the number of related publications growing in recent years (Mager et al., 2018) . In spite of this interest, there remains a multitude of challenges for handling Indigenous languages. Complexity of the morphological systems of some of these languages and lack of standard orthography for writing them are among these challenges (Mager et al., 2018; Littell et al., 2018) . 
The most fundamental issue facing NLP efforts, however, remains the lack of digital textual data that can be exploited for systems development.", "cite_spans": [ { "start": 170, "end": 190, "text": "(Mager et al., 2018)", "ref_id": "BIBREF12" }, { "start": 439, "end": 459, "text": "(Mager et al., 2018;", "ref_id": "BIBREF12" }, { "start": 460, "end": 481, "text": "Littell et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we describe a scenario usually faced when trying to develop NLP systems for Indigenous languages, and we focus on machine translation (MT). We adopt a neural machine translation (NMT) approach (Koehn, 2017) as our method. We show that, in spite of its recent success in many contexts, NMT still struggles in very low-resource settings involving Indigenous languages. This is due to the core difficulty of lacking not only parallel textual data but even monolingual data. 1 https://github.com/UBC-NLP/IndT5 Figure 1: A map of the ten Indigenous languages covered by IndT5, our text-to-text Transformer model, and our IndCorpus dataset. The languages are mainly spoken in five Latin American countries.", "cite_spans": [ { "start": 206, "end": 219, "text": "(Koehn, 2017)", "ref_id": null } ], "ref_spans": [ { "start": 323, "end": 331, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although our main goal in this work is to develop translation models from Spanish to several Indigenous languages of the Americas, we adopt a transfer learning approach where we offer resources that can be exploited for other downstream tasks. Namely, we build a dataset for ten Indigenous languages and Spanish, which we refer to as IndCorpus. Figure 1 and Table 1 provide an overview of the ten Indigenous languages in our new dataset (Eberhard et al., 2021). We also exploit IndCorpus for pre-training a Transformer language model following the unified approach introduced by (Raffel et al., 2019).
Our resulting model,", "cite_spans": [ { "start": 450, "end": 473, "text": "(Eberhard et al., 2021)", "ref_id": "BIBREF4" }, { "start": 593, "end": 613, "text": "(Raffel et al., 2019", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 358, "end": 366, "text": "Figure 1", "ref_id": null }, { "start": 371, "end": 378, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Code Main location Speakers Aymara aym Bolivia 1,677,100 Ash\u00e1ninka cni Peru 35,200 Bribri bzd Costa Rica 7,000 Guarani gn Paraguay 6,652,790 H\u00f1\u00e4h\u00f1u oto Mexico 88,500 Nahuatl nah Mexico 410,000 Quechua quy Peru 7,384,920 Rar\u00e1muri tar Mexico 9,230 Shipibo-Konibo shp Peru 22,500 Wixarika hch Mexico 52,500 Table 1: Overview of our ten Indigenous languages (Eberhard et al., 2021).", "cite_spans": [ { "start": 397, "end": 420, "text": "(Eberhard et al., 2021)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 5, "end": 353, "text": "Main location Speakers Aymara aym Bolivia 1,677,100 Ash\u00e1ninka cni Peru 35,200 Bribri bzd Costa Rica 7,000 Guarani gn Paraguay 6,652,790 H\u00f1\u00e4h\u00f1u oto Mexico 88,500 Nahuatl nah Mexico 410,000 Quechua quy Peru 7,384,920 Rar\u00e1muri tar Mexico 9,230 Shipibo-Konibo shp Peru 22,500 Wixarika hch Mexico 52,500 Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Language", "sec_num": null }, { "text": "IndT5, treats every text NLP problem as a \"text-to-text\" problem, i.e., taking text as input and producing new text as output. We apply IndT5 to the MT task as a way to transfer knowledge acquired by the model to this particular context. Our experiments show the utility of our new language model and the dataset it exploits for the downstream Indigenous MT task, but also that very large room for improvement still exists. The rest of the paper is organized as follows: In Section 2, we introduce recent MT work in low-resource and Indigenous language settings. In Section 3, we describe how we develop our new language model for ten Indigenous languages. In Section 4, we describe our NMT models. We conclude in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language", "sec_num": null }, { "text": "A number of methods and techniques have been proposed to mitigate the effects of having rather small datasets for machine translation. These include data augmentation, transfer learning, hyperparameter tuning, incorporating linguistic knowledge, and knowledge distillation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-Resource MT", "sec_num": "2.1" }, { "text": "Since the main bottleneck of low-resource MT is the lack of abundant parallel textual data, data augmentation is a straightforward way to enhance model performance. Back translation is a way to augment parallel data (Sennrich et al., 2016a). By training a target-to-source translation model on the original data and feeding in monolingual data of the target language, synthetic parallel data is generated. If the target language is rich in textual data, a large amount of synthetic parallel data can be added to the training data and may benefit the final translation model. Transfer learning is another method that can boost the performance of MT on low-resource languages (Zoph et al., 2016; Nguyen and Chiang, 2017; Kocmi and Bojar, 2018).
The rationale behind one approach to transfer learning is that knowledge obtained while translating high-resource languages may be transferable to translation of low-resource languages. In Zoph et al. (2016), a parent model is first trained on a high-resource language pair (i.e., French to English) and a child model is then trained on a low-resource language pair (i.e., Uzbek to English). The Uzbek-English model has a BLEU score of 10.7 without the parent model and 15.0 with the parent model. It is also shown that the more similar the two source languages are, the more performance gain is possible. For example, a Spanish-English MT model has a BLEU score of 16.4 without a parent model and 31.0 with a French-English parent model. This gain is much larger than when transferring the French-English parent model to the more distant Uzbek-English child model. Sennrich and Zhang (2019) argue that instead of using hyperparameters that work in high-resource settings, there should be a set of hyperparameters specific to the low-resource scenario. For example, keeping the vocabulary size small, training a model with relatively small capacity, and using a smaller batch size may be beneficial to model performance. When building a vocabulary with BPE, reducing the number of merge operations yields a smaller vocabulary and avoids including low-frequency (sub)words, which could otherwise negatively influence representation learning.", "cite_spans": [ { "start": 235, "end": 259, "text": "(Sennrich et al., 2016a)", "ref_id": "BIBREF21" }, { "start": 673, "end": 692, "text": "(Zoph et al., 2016;", "ref_id": "BIBREF26" }, { "start": 693, "end": 717, "text": "Nguyen and Chiang, 2017;", "ref_id": "BIBREF14" }, { "start": 718, "end": 740, "text": "Kocmi and Bojar, 2018)", "ref_id": "BIBREF7" }, { "start": 931, "end": 949, "text": "Zoph et al. (2016)", "ref_id": "BIBREF26" }, { "start": 1603, "end": 1628, "text": "Sennrich and Zhang (2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Low-Resource MT", "sec_num": "2.1" }, { "text": "Leveraging linguistic knowledge for data augmentation, Zhou et al. (2019) use a rule-based syntax parser and a dictionary to generate parallel data. By reordering target-language sentences into source-language syntactic structure and then mapping target-language words into source-language words with a dictionary, the size of the parallel data is enlarged and translation performance is improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-Resource MT", "sec_num": "2.1" }, { "text": "Baziotis et al. (2020) leverage a language model to help enhance the performance of the translation model. Similar to the idea of knowledge distillation (Hinton et al., 2015), a teacher model and a student model are trained, where the language model plays the role of the teacher and the translation model plays the role of the student. With this design, the teacher model needs only monolingual data and does not have to rely on large parallel data.", "cite_spans": [ { "start": 153, "end": 173, "text": "(Hinton et al., 2015", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Low-Resource MT", "sec_num": "2.1" }, { "text": "Unlike high-resource languages such as English and French, Indigenous languages are often low-resource. Due to this, it is common that researchers of Indigenous languages adopt methods that can fare well in low-resource scenarios.
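To make the back-translation idea from Section 2.1 concrete, the sketch below shows how monolingual target-language text can be turned into synthetic parallel pairs. It is only an illustrative sketch, not the pipeline of any of the works cited above: the translate_to_source function is a hypothetical placeholder standing in for a trained target-to-source MT model.

```python
# Illustrative sketch of back-translation (Section 2.1). `translate_to_source` is a
# hypothetical placeholder for a trained target-to-source MT model, so the example
# runs end to end without any real model.

def translate_to_source(target_sentence):
    """Placeholder for a target-to-source translation model (e.g., Aymara -> Spanish)."""
    return "<synthetic source for: {}>".format(target_sentence)

def back_translate(monolingual_target_sentences):
    """Turn monolingual target-language sentences into synthetic (source, target) pairs."""
    pairs = []
    for target_sentence in monolingual_target_sentences:
        synthetic_source = translate_to_source(target_sentence)
        # The synthetic source is paired with the authentic target sentence, so the
        # final source-to-target model still learns from clean target-language text.
        pairs.append((synthetic_source, target_sentence))
    return pairs

if __name__ == "__main__":
    for src, tgt in back_translate(["example monolingual target sentence"]):
        print(src, "=>", tgt)
```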
This includes using the Transformer architecture and its variants in both low-resource (Adebara et al., 2021, 2020; Przystupa and Abdul-Mageed, 2019) and Indigenous language (Feldman and Coto-Solano, 2020; Orife, 2020; Le and Sadat, 2020) settings.", "cite_spans": [ { "start": 317, "end": 338, "text": "(Adebara et al., 2021", "ref_id": "BIBREF0" }, { "start": 339, "end": 362, "text": "(Adebara et al., , 2020", "ref_id": null }, { "start": 363, "end": 396, "text": "Przystupa and Abdul-Mageed, 2019)", "ref_id": "BIBREF19" }, { "start": 421, "end": 452, "text": "(Feldman and Coto-Solano, 2020;", "ref_id": "BIBREF5" }, { "start": 453, "end": 453, "text": "", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "MT of Indigenous Languages", "sec_num": "2.2" }, { "text": "Despite the fact that Indigenous languages face difficulties similar to most low-resource languages, there are some challenges specific to Indigenous languages. As Mager et al. (2018) point out, some Indigenous languages have complex morphological systems and some have various non-standardized orthographic conventions. For example, Micher (2018) shows that in a corpus of one million tokens, there are about 225K different types for Inuktitut, an Indigenous language of North America with a complex morphological system, compared with about 30K types for English. Micher (2018) also shows that there can be a lack of standardized spelling for some words. For example, the word Haammalat in Inuktitut has seven other forms.", "cite_spans": [ { "start": 164, "end": 183, "text": "Mager et al. (2018)", "ref_id": "BIBREF12" }, { "start": 334, "end": 347, "text": "Micher (2018)", "ref_id": "BIBREF13" }, { "start": 575, "end": 588, "text": "Micher (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "MT of Indigenous Languages", "sec_num": "2.2" }, { "text": "To cope with the issue of complex morphology, Ortega et al. (2020) build a translation model for Quechua, an Indigenous language of South America, with an integrated morphological segmentation method. To treat orthographic variation, Feldman and Coto-Solano (2020) standardize text with a rule-based system that converts diacritics and letters to the contemporary orthographic convention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT of Indigenous Languages", "sec_num": "2.2" }, { "text": "We train an Indigenous language model adopting the unified and flexible text-to-text transfer Transformer (T5) approach (Raffel et al., 2019). T5 treats every text-based language task as a \"text-to-text\" problem, taking text as input and producing new text as output. T5 is essentially an encoder-decoder Transformer (Vaswani et al., 2017), with the encoder and decoder similar in configuration and size to a BERT Base (Devlin et al., 2019) but with some architectural modifications. These modifications include applying a normalization layer before each sub-block and adding a residual connection (i.e., adding the initial input to the sub-block output). We call our resulting model IndT5.
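Because IndT5 follows the T5 text-to-text formulation, a downstream task such as Spanish-to-Aymara translation reduces to mapping an input string to a target string. The sketch below shows one such casting; the task prefix is an assumed T5-style convention, not necessarily the exact convention used for IndT5, and the example pair is taken from the Dev examples shown later in the paper.

```python
# Minimal sketch of casting a translation example into T5-style text-to-text format.
# The task prefix is an assumed convention borrowed from T5-style setups, not
# necessarily the exact string used for IndT5 fine-tuning.

def to_text_to_text(source_sentence, target_sentence, src_lang="Spanish", tgt_lang="Aymara"):
    """Return (input_text, target_text) strings for an encoder-decoder text-to-text model."""
    input_text = "translate {} to {}: {}".format(src_lang, tgt_lang, source_sentence)
    return input_text, target_sentence

model_input, model_target = to_text_to_text(
    "Los artistas de IRT ayudan a los ni\u00f1os en las escuelas.",
    "IRT artistanakax jisk'a yatiqa\u00f1 utankir wawanakaruw yanapapxi.",
)
print(model_input)
print(model_target)
```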
We now describe our dataset, vocabulary, and pre-training method for developing IndT5.", "cite_spans": [ { "start": 120, "end": 141, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF20" }, { "start": 331, "end": 353, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF24" }, { "start": 434, "end": 455, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "IndT5", "sec_num": "3" }, { "text": "We build IndCorpus, a collection of ten Indigenous languages and Spanish comprising 1.17 GB of text (\u223c5.37M sentences), to pre-train IndT5. IndCorpus is collected from both Wikipedia and the Bible. Table 2 provides the size and number of sentences for each language in our dataset.", "cite_spans": [], "ref_spans": [ { "start": 198, "end": 205, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Training Data", "sec_num": "3.1" }, { "text": "The T5 (Raffel et al., 2019) model is based on a vocabulary acquired by the SentencePiece library 2 using English, French, German, and Romanian web pages from \"Colossal Clean Crawled Corpus\" (or C4 for short). We use a similar procedure to create our Indigenous languages vocabulary. Namely, we use SentencePiece (Kudo, 2018) to encode text as WordPiece (Sennrich et al., 2016b) tokens with a vocabulary size of 100K WordPieces extracted from IndCorpus.", "cite_spans": [ { "start": 7, "end": 28, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF20" }, { "start": 313, "end": 325, "text": "(Kudo, 2018)", "ref_id": "BIBREF9" }, { "start": 354, "end": 378, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "IndT5 Vocabulary", "sec_num": "3.2" }, { "text": "We leverage our unlabeled Indigenous corpus, IndCorpus, to pre-train IndT5. For that, we use a denoising objective (Raffel et al., 2019 ) that does not require labels. The main idea is feeding the model with corrupted (masked) versions of the original sentence, and training it to reconstruct the original sentence. Inspired by BERT's objective (i.e., masked language model) (Devlin et al., 2019) , the denoising objective (Raffel et al., 2019) works by randomly sampling and dropping out 15% of tokens in the input sequence. All consecutive spans of dropped-out tokens are then replaced by a single sentinel token. We pre-train our model for 100K steps on the IndCorpus using the T5 Base architecture. 3 We refer to this model as IndT5 100k . Afterwards, we further pre-train on only the ten Indigenous languages part of our dataset (i.e., without the Spanish data) for 40K steps. We refer to this version of the model as IndT5 140k . For both pre-training steps, we use a learning rate of 0.01, a batch size of 128 sequences, and a maximum sequence length of 512. We use the original implementation of T5 in the TensorFlow framework. 4 . We train the models on Google Cloud TPU with 8 cores (v3.8) from TensorFlow Research Cloud (TFRC). 
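As a concrete illustration of the denoising objective described in Section 3.3, the sketch below corrupts a sentence by dropping roughly 15% of its tokens and replacing each consecutive dropped span with a single sentinel token. It operates on whitespace tokens for readability; the actual pre-training uses SentencePiece subwords and the T5 implementation's own corruption code, and the <extra_id_N> sentinel names follow the T5 convention.

```python
import random

def span_corrupt(text, drop_rate=0.15, seed=0):
    """Word-level illustration of T5-style span corruption: drop ~15% of tokens and
    replace each consecutive dropped span with a single sentinel token."""
    rng = random.Random(seed)
    tokens = text.split()
    dropped = [rng.random() < drop_rate for _ in tokens]

    corrupted, target = [], []
    sentinel_id = 0
    i = 0
    while i < len(tokens):
        if dropped[i]:
            sentinel = "<extra_id_{}>".format(sentinel_id)
            sentinel_id += 1
            corrupted.append(sentinel)    # one sentinel stands in for the whole span
            target.append(sentinel)
            while i < len(tokens) and dropped[i]:
                target.append(tokens[i])  # the target spells out the dropped tokens
                i += 1
        else:
            corrupted.append(tokens[i])
            i += 1
    target.append("<extra_id_{}>".format(sentinel_id))  # closing sentinel, as in T5
    return " ".join(corrupted), " ".join(target)

inp, tgt = span_corrupt(
    "IndCorpus is a collection of ten Indigenous languages and Spanish "
    "collected from Wikipedia and the Bible"
)
print(inp)
print(tgt)
```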
5", "cite_spans": [ { "start": 115, "end": 135, "text": "(Raffel et al., 2019", "ref_id": "BIBREF20" }, { "start": 375, "end": 396, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 423, "end": 444, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF20" }, { "start": 703, "end": 704, "text": "3", "ref_id": null }, { "start": 1136, "end": 1137, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Pre-Training", "sec_num": "3.3" }, { "text": "As part of the AmericasNLP 2021 Shared Task on Open Machine Translation, the training (Train) and development (Dev) datasets for ten target Indigenous languages along with the source language Spanish were released. All the datasets are manually translated. Table 3 shows the number of sentences for the different language pairs in the shared task data. Table 4 provides example sentences extracted from the Dev dataset with their corresponding translations.", "cite_spans": [], "ref_spans": [ { "start": 258, "end": 265, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 345, "end": 352, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Parallel Data", "sec_num": "4.1" }, { "text": "For all language pairs except quy and gn, we fine-tune each of the two versions of our language model, i.e., both IndT5 100k and IndT5 140k, under two conditions: (A) we train on Train using 100% of Dev data for validation, for 150 epochs; (B) we fine-tune the best epoch from setting A for 50 epochs, adding 80% of Dev data to Train (using the remaining 20% of Dev for validation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4.2" }, { "text": "We report the results of both the IndT5 100k and IndT5 140k models using two metrics: BLEU score (Papineni et al., 2002) and ChrF++ (Popovi\u0107, 2017). Tables 5 and 6 show the results of both models on the Test sets for each of the language pairs using settings A and B described in Section 4.2, respectively.", "cite_spans": [ { "start": 93, "end": 116, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF17" }, { "start": 128, "end": 143, "text": "(Popovi\u0107, 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 146, "end": 160, "text": "Tables 5 and 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "The results presented in Table 5 and Table 6 show that all our models, with both settings A and B, outperform the respective baselines across all languages, with the exception of aym and shp. As expected, fine-tuning the IndT5 100k and IndT5 140k models using the training data and 80% of the Dev data (i.e., setting B) improves the results with a mean of +0.003% and +0.04% in ChrF++ on the Test data, respectively. Interestingly, further pre-training IndT5 on only the ten Indigenous languages (i.e., the target languages) produces better results, with an average improvement of +0.003% and +0.004% in settings A and B, respectively. Overall, the impact of limited data is clear.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 37, "end": 44, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "In this work, we introduced a new Transformer language model (IndT5) and a dataset (IndCorpus) for ten Indigenous languages and Spanish. We applied IndT5 to the MT task on eight language pairs as part of our submission to the AmericasNLP 2021 Shared Task.
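For reference, the snippet below shows how the two metrics reported in Section 4.3 can be computed with the sacrebleu library, where chrF++ corresponds to chrF with word n-grams enabled (word_order=2). The hypothesis and reference strings are placeholders; this is a sketch of the metric calls, not the shared task's official evaluation script.

```python
# Sketch of computing BLEU and chrF++ (Section 4.3) with the sacrebleu library
# (not the shared task's official scorer). The strings below are placeholders.
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["a system translation of the first test sentence"]
references = [["a reference translation of the first test sentence"]]

bleu = BLEU()
chrf_pp = CHRF(word_order=2)  # word_order=2 gives chrF++

print("BLEU:  ", bleu.corpus_score(hypotheses, references).score)
print("chrF++:", chrf_pp.corpus_score(hypotheses, references).score)
```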
While IndT5 helps improve translation, the task remains hard due to the absence of parallel as well as monolingual data. In the future, we plan to integrate statistical MT methods to augment our data as well as investigate the best hyperparameters for our neural models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://github.com/google/sentencepiece 3 Both the encoder and the decoder of the T5 Base model have 12 layers, each with 12 attention heads, and 768 hidden units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/google-research/text-to-text-transfer-transformer 5 https://www.tensorflow.org/tfrc", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, Compute Canada, and UBC ARC-Sockeye. We also thank the Google TFRC program for providing us with free TPU access.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Translating the Unseen? Yor\u00f9b\u00e1-English MT in Low-Resource, Morphologically-Unmarked Settings. AfricNLP", "authors": [ { "first": "Ife", "middle": [], "last": "Adebara", "suffix": "" }, { "first": "Muhammad", "middle": [], "last": "Abdul-Mageed", "suffix": "" }, { "first": "Miikka", "middle": [], "last": "Silfverberg", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ife Adebara, Muhammad Abdul-Mageed, and Miikka Silfverberg. 2021. Translating the Unseen? Yor\u00f9b\u00e1- English MT in Low-Resource, Morphologically- Unmarked Settings. AfricNLP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Translating similar languages: Role of mutual intelligibility in multilingual transformers", "authors": [], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "381--386", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ife Adebara, El Moatez Billah Nagoudi, and Muham- mad Abdul Mageed. 2020. Translating similar lan- guages: Role of mutual intelligibility in multilingual transformers. In Proceedings of the Fifth Confer- ence on Machine Translation, pages 381-386, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Language model prior for lowresource neural machine translation", "authors": [ { "first": "Christos", "middle": [], "last": "Baziotis", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.14928" ] }, "num": null, "urls": [], "raw_text": "Christos Baziotis, Barry Haddow, and Alexandra Birch. 2020. Language model prior for low- resource neural machine translation. 
arXiv preprint arXiv:2004.14928.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ethnologue: Languages of the world. twenty-fourth edition", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Eberhard", "suffix": "" }, { "first": "Gary", "middle": [ "F" ], "last": "Simons", "suffix": "" }, { "first": "Charles", "middle": [ "D" ], "last": "Fennig", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Eberhard, Gary F. Simons, and Charles D. Fennig. 2021. Ethnologue: Languages of the world. twenty-fourth edition. Dallas, Texas. SIL Interna- tional.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural machine translation models with back-translation for the extremely low-resource indigenous language bribri", "authors": [ { "first": "Isaac", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3965--3976", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Feldman and Rolando Coto-Solano. 2020. Neu- ral machine translation models with back-translation for the extremely low-resource indigenous language bribri. In Proceedings of the 28th International Con- ference on Computational Linguistics, pages 3965- 3976.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.02531" ] }, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. 
arXiv preprint arXiv:1503.02531.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Trivial transfer learning for low-resource neural machine translation", "authors": [ { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kocmi and Ondrej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. CoRR, abs/1809.00357.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.10959" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple sub- word candidates. arXiv preprint arXiv:1804.10959.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Addressing challenges of indigenous languages through neural machine translation: The case of inuktitut-english", "authors": [ { "first": "N", "middle": [ "Tan" ], "last": "Le", "suffix": "" }, { "first": "F", "middle": [], "last": "Sadat", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Tan Le and F. Sadat. 2020. Addressing challenges of indigenous languages through neural machine trans- lation: The case of inuktitut-english.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Indigenous language technologies in Canada: Assessment, challenges, and successes", "authors": [ { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Kazantseva", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "Aidan", "middle": [], "last": "Pine", "suffix": "" }, { "first": "Antti", "middle": [], "last": "Arppe", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Cox", "suffix": "" }, { "first": "Marie-Odile", "middle": [], "last": "Junker", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2620--2632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Littell, Anna Kazantseva, Roland Kuhn, Aidan Pine, Antti Arppe, Christopher Cox, and Marie- Odile Junker. 2018. Indigenous language technolo- gies in Canada: Assessment, challenges, and suc- cesses. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2620-2632, Santa Fe, New Mexico, USA. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Challenges of language technologies for the indigenous languages of the Americas", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "" }, { "first": "Gerardo", "middle": [], "last": "Sierra", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Meza-Ruiz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "55--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, and Ivan Meza-Ruiz. 2018. Challenges of language technologies for the indigenous languages of the Americas. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 55-69, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Addressing Challenges of Machine Translation of Inuit Language. ARL-TN", "authors": [ { "first": "J", "middle": [ "C" ], "last": "Micher", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.C. Micher. 2018. Addressing Challenges of Machine Translation of Inuit Language. ARL-TN. US Army Research Laboratory.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Transfer learning across low-resource, related languages for neural machine translation", "authors": [ { "first": "Q", "middle": [], "last": "Toan", "suffix": "" }, { "first": "David", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toan Q. Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. CoRR, abs/1708.09803.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards neural machine translation for edoid languages", "authors": [ { "first": "", "middle": [], "last": "Iroro Orife", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.10704" ] }, "num": null, "urls": [], "raw_text": "Iroro Orife. 2020. Towards neural machine trans- lation for edoid languages. arXiv preprint arXiv:2003.10704.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural machine translation with a polysynthetic low resource language. Machine Translation", "authors": [ { "first": "John", "middle": [ "E" ], "last": "Ortega", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Castro Mamani", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2020, "venue": "", "volume": "34", "issue": "", "pages": "325--346", "other_ids": { "DOI": [ "10.1007/s10590-020-09255-9" ] }, "num": null, "urls": [], "raw_text": "John E. Ortega, Richard Castro Mamani, and Kyunghyun Cho. 2020. Neural machine translation with a polysynthetic low resource language. 
Ma- chine Translation, 34(4):325-346.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "chrf++: words helping character n-grams", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the second conference on machine translation", "volume": "", "issue": "", "pages": "612--618", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2017. chrf++: words helping character n-grams. In Proceedings of the second conference on machine translation, pages 612-618.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Neural machine translation of low-resource and similar languages with backtranslation", "authors": [ { "first": "Michael", "middle": [], "last": "Przystupa", "suffix": "" }, { "first": "Muhammad", "middle": [], "last": "Abdul-Mageed", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "3", "issue": "", "pages": "224--235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Przystupa and Muhammad Abdul-Mageed. 2019. Neural machine translation of low-resource and similar languages with backtranslation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 224-235.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.10683" ] }, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. 
arXiv preprint arXiv:1910.10683.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": { "DOI": [ "10.18653/v1/P16-1009" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Revisiting lowresource neural machine translation: A case study", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Biao", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "211--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich and Biao Zhang. 2019. Revisiting low- resource neural machine translation: A case study. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 211- 221.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 
2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000-6010.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Handling syntactic divergence in lowresource machine translation", "authors": [ { "first": "Chunting", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunting Zhou, Xuezhe Ma, Junjie Hu, and Graham Neubig. 2019. Handling syntactic divergence in low- resource machine translation.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Transfer learning for low-resource neural machine translation", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1568--1575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Datasets in IndCorpus by language", "num": null, "content": "
Languages Train Dev Test
es-aym 6,531 996 1,003
es-cni 3,883 883 1,003
es-bzd 7,506 996 1,003
es-gn 26,032 995 1,003
es-oto 4,889 599 1,003
es-nah 16,145 672 1,003
es-quy 125,008 996 1,003
es-tar 14,720 995 1,003
es-shp 14,592 996 1,003
es-hch 8,966 994 1,003
", "type_str": "table", "html": null }, "TABREF2": { "text": "Distribution of MT data", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF3": { "text": "Algunos actores usan el teatro comunitario para mejorar. Yaqhip akturanakax juk'amp yatsu\u00f1atakiw ayllunkir tiyatrur mantapxi.Los artistas de IRT ayudan a los ni\u00f1os en las escuelas.IRT artistanakax jisk'a yatiqa\u00f1 utankir wawanakaruw yanapapxi.", "num": null, "content": "
Pair | Sentence | Translation
es-aym
es-cni | Pens\u00e9 que hab\u00edas ido al campamento. | Nokenkeshireashitaka pijaiti imabeyetinta.
 | Viajar es un beneficio que obtenemos. | Akenayeeterika aparo ayeeti aneakeri.
es-bzd | Fui a un seminario que se hizo v\u00eda sat\u00e9lite. | Ye' d\u00eb'r\u00f6 seminario \u00e3 w\u00e9x y\u00f6' sat\u00e9lite k\u0129.
 | El grupo est\u00e1 interesado en temas ambientales. | E' wakpa k\u0129 ujt\u00e8 ki\u00e0n\u00e3 e' d\u00f6r k\u00e1x ajk\u00f3qn\u0169k.
es-gn | Ve\u00eda a su hermana todos los d\u00edas. | Ko'\u00e3ko'\u00e3re ohecha heind\u00fdpe.
 | Ramona nunca ha estado en Concord. | Ramona no\u00eeriva Concord-pe.
es-nah | Santo trabaj\u00f3 para Disney y oper\u00f3 las tazas de t\u00e9. | zanto quitequitilih Disney huan quinpexontih in cafen caxitl
 | La hermana de la abuela no era blanca. | ihueltiuh in cihtli ixchipahuac catca
es-quy | De vez en cuando me gusta comer ensalada. | Yananpiqa ensaladatam mikuytam munani
 | Ellos viv\u00edan en Broad Street. | Broad Streetpi paykuna yacharqaku.
es-tar | Es un hombre griego. | Bil\u00e9 rej\u00f3i Griego ju
 | Nuestro padre dijo que no los llamaran animales. | Kini on\u00f3 aniy\u00e9 mapu ke chuw\u00e9 nam\u00fati an\u00e9ba ajar\u00e9 j\u00e1kami.
es-shp | El Museo se ve afectado por las inversiones. | Ja Museora en oinai inversionesbaon afectana.
 | Loren Field es el cient\u00edfico principal de la escuela | Nato Loren Field riki cient\u00edfico rekena axeti xobonko
es-hch | Era una selva tropical. | pe h+k+t+kai metsi+ra+ ye tsie nieka ti+x+kat+.
", "type_str": "table", "html": null }, "TABREF4": { "text": "Example sentences of the various language pairs and corresponding translations (from Dev set).", "num": null, "content": "
Pair | Baseline Bleu | Baseline ChrF++ | Setting A Bleu | Setting A ChrF++ | Setting B Bleu | Setting B ChrF++
aym | 0.3 | 0.188 | 1.01 | 0.178 | 0.76 | 0.186
cni | 0.03 | 0.104 | 0.09 | 0.176 | 0.09 | 0.178
bzd | 0.54 | 0.077 | 0.86 | 0.11 | 0.89 | 0.111
oto | 0.01 | 0.059 | 0.03 | 0.081 | 0.04 | 0.083
nah | 0.33 | 0.182 | - | - | 0.16 | 0.196
tar | 0.01 | 0.046 | 0.06 | 0.102 | - | -
hch | 3.18 | 0.126 | 4.95 | 0.186 | 5.09 | 0.186
", "type_str": "table", "html": null }, "TABREF5": { "text": "Evaluation results of IndT5 100k in BLEU and ChrF++ on the Test sets for the different language pairs.", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF7": { "text": "Evaluation results of IndT5 140k in BLEU and ChrF++ on the Test sets for the different language pairs.", "num": null, "content": "
", "type_str": "table", "html": null } } } }