{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:25.665051Z" }, "title": "Peru is Multilingual, Its Machine Translation Should Be Too?", "authors": [ { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "", "affiliation": { "laboratory": "", "institution": "ILCC University of Edinburgh", "location": {} }, "email": "a.oncevay@ed.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Peru is a multilingual country with a long history of contact between the indigenous languages and Spanish. Taking advantage of this context for machine translation is possible with multilingual approaches for learning both unsupervised subword segmentation and neural machine translation models. The study proposes the first multilingual translation models for four languages spoken in Peru: Aymara, Ashaninka, Quechua and Shipibo-Konibo, providing both many-to-Spanish and Spanishto-many models and outperforming pairwise baselines in most of them. The task exploited a large English-Spanish dataset for pretraining, monolingual texts with tagged backtranslation, and parallel corpora aligned with English. Finally, by fine-tuning the best models, we also assessed the out-of-domain capabilities in two evaluation datasets for Quechua and a new one for Shipibo-Konibo 1 .", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Peru is a multilingual country with a long history of contact between the indigenous languages and Spanish. Taking advantage of this context for machine translation is possible with multilingual approaches for learning both unsupervised subword segmentation and neural machine translation models. The study proposes the first multilingual translation models for four languages spoken in Peru: Aymara, Ashaninka, Quechua and Shipibo-Konibo, providing both many-to-Spanish and Spanishto-many models and outperforming pairwise baselines in most of them. The task exploited a large English-Spanish dataset for pretraining, monolingual texts with tagged backtranslation, and parallel corpora aligned with English. Finally, by fine-tuning the best models, we also assessed the out-of-domain capabilities in two evaluation datasets for Quechua and a new one for Shipibo-Konibo 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural Machine Translation (NMT) has opened several research directions to exploit as many and diverse data as possible. Massive multilingual NMT models, for instance, take advantage of several language-pair datasets in a single system (Johnson et al., 2017) . This offers several advantages, such as a simple training process and enhanced performance of the language-pairs with little data (although sometimes detrimental to the high-resource language-pairs). However, massive models of dozens of languages are not necessarily the best outcome, as it is demonstrated that smaller clusters still offer the same benefits (Tan et al., 2019; .", "cite_spans": [ { "start": 236, "end": 258, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF12" }, { "start": 620, "end": 638, "text": "(Tan et al., 2019;", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Peru offers a rich diversity context for machine translation research with 47 native languages (Simons and Fenning, 2019) . 
All of them are highly distinguishing from Castilian Spanish, the primary official language in the country and the one spoken by the majority of the population. However, from the computational perspective, all of these languages do not have enough resources, such as monolingual or parallel texts, and most of them are considered endangered (Zariquiey et al., 2019) .", "cite_spans": [ { "start": 95, "end": 121, "text": "(Simons and Fenning, 2019)", "ref_id": "BIBREF27" }, { "start": 465, "end": 489, "text": "(Zariquiey et al., 2019)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this context, the main question then arises: shouldn't machine translation be multilingual for languages spoken in a multilingual country like Peru? By taking advantage of few resources, and other strategies such as multilingual unsupervised subword segmentation models (Kudo, 2018) , pretraining with high resource language-pairs (Kocmi and Bojar, 2018) , back-translation (Sennrich et al., 2016a) , and fine-tuning (Neubig and Hu, 2018) , we deployed the first many-to-one and one-to-many multilingual NMT models (paired with Spanish) for four indigenous languages: Aymara, Ashaninka, Quechua and Shipibo-Konibo.", "cite_spans": [ { "start": 273, "end": 285, "text": "(Kudo, 2018)", "ref_id": "BIBREF16" }, { "start": 334, "end": 357, "text": "(Kocmi and Bojar, 2018)", "ref_id": "BIBREF14" }, { "start": 377, "end": 401, "text": "(Sennrich et al., 2016a)", "ref_id": "BIBREF25" }, { "start": 420, "end": 441, "text": "(Neubig and Hu, 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Peru, before NMT, there were studies in rulebased MT, based on the Apertium platform (Forcada et al., 2011) , for Quechua Eastern Apurimac (qve) and Quechua Cuzco (quz) (Cavero and Madariaga, 2007) . Furthermore, Ortega and Pillaipakkamnatt (2018) improved alignments for quz by using an agglutinative language as Finnish as a pivot. Apart from the Quechua variants, only Aymara (Coler and Homola, 2014) and Shipibo-Konibo (Galarreta et al., 2017) have been addressed with rule-based and statistical MT, respectively. Ortega et al. (2020b) for Southern Quechua, and G\u00f3mez Montoya et al. (2019) for Shipibo-Konibo, are the only studies that employed sequence-tosequence NMT models. They also performed transfer learning experiments with potentially related language pairs (e.g. Finnish or Turkish, which are agglutinative languages). However, as far as we know, this is the first study that trains a multilingual model for some language spoken in Peru. For related work on multilingual NMT, we refer the readers to the survey of Dabre et al. (2020) .", "cite_spans": [ { "start": 88, "end": 110, "text": "(Forcada et al., 2011)", "ref_id": "BIBREF9" }, { "start": 172, "end": 200, "text": "(Cavero and Madariaga, 2007)", "ref_id": "BIBREF6" }, { "start": 216, "end": 250, "text": "Ortega and Pillaipakkamnatt (2018)", "ref_id": "BIBREF21" }, { "start": 382, "end": 406, "text": "(Coler and Homola, 2014)", "ref_id": "BIBREF7" }, { "start": 426, "end": 450, "text": "(Galarreta et al., 2017)", "ref_id": "BIBREF10" }, { "start": 521, "end": 542, "text": "Ortega et al. (2020b)", "ref_id": "BIBREF22" }, { "start": 575, "end": 596, "text": "Montoya et al. (2019)", "ref_id": "BIBREF11" }, { "start": 1031, "end": 1050, "text": "Dabre et al. 
(2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "To enhance replicability, we only used the datasets provided in the AmericasNLP Shared Task 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages and datasets", "sec_num": "3" }, { "text": "\u2022 Southern Quechua: with 6+ millions of speakers and several variants, it is the most widespread indigenous language in Peru. AmericasNLP provides evaluation sets in the standard Southern Quechua, which is based mostly on the Quechua Ayacucho (quy) variant. There is parallel data from dictionaries and Jehovah Witnesses (Agi\u0107 and Vuli\u0107, 2019) . There is parallel corpus aligned with English too. We also include the close variant of Quechua Cusco (quz) to support the multilingual learning. \u2022 Aymara (aym): with 1.7 million of speakers (mostly in Bolivia). The parallel and monolingual data is extracted from a news website (Global Voices) and distributed by OPUS (Tiedemann, 2012) . There are aligned data with English too. \u2022 Shipibo-Konibo (shp): a Panoan language with almost 30,000 speakers in the Amazonian region. There are parallel data from dictionaries, educational material (Galarreta et al., 2017) , language learning flashcards (G\u00f3mez Montoya et al., 2019), plus monolingual data from educational books (Bustamante et al., 2020) . \u2022 Ashaninka (cni): an Arawakan language with 45,000 speakers in the Amazon. There is parallel data from dictionaries, laws and books (Ortega et al., 2020a) , plus monolingual corpus (Bustamante et al., 2020) .", "cite_spans": [ { "start": 321, "end": 343, "text": "(Agi\u0107 and Vuli\u0107, 2019)", "ref_id": "BIBREF0" }, { "start": 665, "end": 682, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF29" }, { "start": 885, "end": 909, "text": "(Galarreta et al., 2017)", "ref_id": "BIBREF10" }, { "start": 1016, "end": 1041, "text": "(Bustamante et al., 2020)", "ref_id": "BIBREF4" }, { "start": 1177, "end": 1199, "text": "(Ortega et al., 2020a)", "ref_id": "BIBREF20" }, { "start": 1226, "end": 1251, "text": "(Bustamante et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Languages and datasets", "sec_num": "3" }, { "text": "The four languages are highly agglutinative or polysynthetic, meaning that they usually express a large amount of information in just one word with several joint morphemes. This is a real challenge for MT and subword segmentation methods, given the high probability of addressing a \"rare word\" for the system. We also note that each language belongs to a different language family, but that is not a problem for multilingual models, as usually the family-based clusters are not the most effective ones Pre-processing The datasets were noisy and not cleaned. Lines are reduced according to several heuristics: Arabic numbers or punctuation do not match in the parallel sentences, there are more symbols or numbers than words in a sentence, the ratio of words from one side is five times larger or shorter than the other, among others. 
Table 5 in the Appendix includes the original and cleaned data size per language-pair, whereas Table 1 presents the final sizes.", "cite_spans": [], "ref_spans": [ { "start": 834, "end": 841, "text": "Table 5", "ref_id": null }, { "start": 929, "end": 936, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Languages and datasets", "sec_num": "3" }, { "text": "English-Spanish datasets We consider the Eu-roParl (1.7M sentences) (Koehn, 2005) and the NewsCommentary-v8 (174k sentences) corpora for pre-training.", "cite_spans": [ { "start": 68, "end": 81, "text": "(Koehn, 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Languages and datasets", "sec_num": "3" }, { "text": "The train data have been extracted from different domains and sources, which are not necessarily the same as the evaluation sets provided for the Shared Task. Therefore, the official development set (995 sentences per language) is split into three parts: 25%-25%-50%. The first two parts are our custom dev and devtest sets 3 . We add the 50% section to the training set with a sampling distribution of 20%, to reduce the domain gap in the training data. Likewise, we extract a sample of the training and double the size of the development set. The mixed data in the validation set is relevant, as it allows to evaluate how the model fits with all the domains. We used the same multi-text sentences for evaluation, and avoid any overlapping of the Spanish side with the training set, this is also important as we are going to evaluate multilingual models. Evaluation for all the models used BLEU (Papineni et al., 2002) and chrF (Popovi\u0107, 2015) ", "cite_spans": [ { "start": 896, "end": 919, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF23" }, { "start": 929, "end": 944, "text": "(Popovi\u0107, 2015)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.1" }, { "text": "Ortega et al. (2020b) used morphological information, such as affixes, to guide the Byte-Pair-Encoding (BPE) segmentation algorithm (Sennrich et al., 2016b) for Quechua. However, their improvement is not significant, and according to Bostrom and Durrett (2020) , BPE tends to oversplit roots of infrequent words. They showed that a unigram language model (Kudo, 2018) seems like a better alternative to split affixes and preserve roots (in English and Japanese).", "cite_spans": [ { "start": 132, "end": 156, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF26" }, { "start": 234, "end": 260, "text": "Bostrom and Durrett (2020)", "ref_id": "BIBREF3" }, { "start": 355, "end": 367, "text": "(Kudo, 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual subword segmentation", "sec_num": "4.2" }, { "text": "To take advantage of the potential lexical sharing of the languages (e.g. loanwords) and address the polysynthetic nature of the indigenous languages, we trained a unique multilingual segmentation model by sampling all languages with a uniform distribution. 
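A minimal sketch of this training step, assuming the SentencePiece Python API and a hypothetical uniformly-sampled input file, follows; the settings mirror those stated next:

    import sentencepiece as spm

    # mixed_corpus.txt (hypothetical) holds text from all seven languages
    # (es, en, aym, cni, quy, quz, shp), sampled uniformly per language.
    spm.SentencePieceTrainer.train(
        input="mixed_corpus.txt",
        model_prefix="multilingual_unigram",
        model_type="unigram",      # unigram LM segmentation (Kudo, 2018)
        vocab_size=32000,
        character_coverage=1.0,    # keep every character of every language
    )

    sp = spm.SentencePieceProcessor(model_file="multilingual_unigram.model")
    subwords = sp.encode("an input sentence", out_type=str)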
We used the unigram model implementation in SentencePiece (Kudo and Richardson, 2018 ) with a vocabulary size of 32,000.", "cite_spans": [ { "start": 316, "end": 342, "text": "(Kudo and Richardson, 2018", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual subword segmentation", "sec_num": "4.2" }, { "text": "For the experiments, we used a Transformer-base model (Vaswani et al., 2017) with the default configuration in Marian NMT (Junczys-Dowmunt et al., 2018) . The steps are as follows:", "cite_spans": [ { "start": 54, "end": 76, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF30" }, { "start": 122, "end": 152, "text": "(Junczys-Dowmunt et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.3" }, { "text": "Pre-training We pre-trained two MT models with the Spanish-English language-pair in both directions. We did not include an agglutinative language like Finnish (Ortega et al., 2020b) for two reasons: it is not a must to consider highly related languages for effective transfer learning (e.g. English-German to English-Tamil (Bawden et al., 2020)), and we wanted to translate the English side of en-aym, en-quy and en-quz to augment their correspondent Spanish-paired datasets. The en\u2192es and es\u2192en models achieved 34.4 and 32.3 BLEU points, respectively, in the newsdev2013 set.", "cite_spans": [ { "start": 159, "end": 181, "text": "(Ortega et al., 2020b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.3" }, { "text": "Multilingual fine-tuning Using the pre-trained en\u2192es model, we fine-tuned the first multilingual model many-to-Spanish. Following established practices, we used a uniform sampling for all the datasets (quz-es included) to avoid under-fitting the low-resource language-pairs 4 . Results are in Table 2 , row (a). We replicated this to the es\u2192many direction (row (e)), using the es\u2192en model.", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 300, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Procedure", "sec_num": "4.3" }, { "text": "Back-translation With model (a), we backtranslated (BT) the monolingual data of the indigenous languages and train models (b) and (f): original plus BT data. However, the results with BT data underperformed or did not converge. Potential reasons are the noisy translation outputs of model (a) and the larger amount of BT than humantranslated sentences for all languages, even though we sampled BT and human translations uniformly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.3" }, { "text": "To alleviate the issue, we add a special tag for the BT data (Caswell et al., 2019) . With BT[t], we send a signal to the model that it is processing synthetic data, and thus, it may not hurt the learning over the real data. Table 2 (rows (c,g) ) shows the results.", "cite_spans": [ { "start": 61, "end": 83, "text": "(Caswell et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 225, "end": 232, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 233, "end": 244, "text": "(rows (c,g)", "ref_id": null } ], "eq_spans": [], "section": "Tagged back-translation (BT[t])", "sec_num": null }, { "text": "We obtained pairwise systems by fine-tuning the same pre-trained models (without any back-translated data). 
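As a side note to the tagged back-translation scheme above, the tagging itself is a one-line transformation of the synthetic corpus; a minimal sketch (the <BT> token and the data layout are assumptions):

    BT_TAG = "<BT>"  # hypothetical tag token, reserved in the vocabulary

    def mix_with_tagged_bt(genuine_pairs, synthetic_pairs):
        # Prefix the source side of every synthetic pair with the tag so the
        # model can tell back-translated data from human translations.
        tagged = [(BT_TAG + " " + src, tgt) for src, tgt in synthetic_pairs]
        return list(genuine_pairs) + tagged

    training_data = mix_with_tagged_bt(
        genuine_pairs=[("<human-translated source>", "<target>")],
        synthetic_pairs=[("<back-translated source>", "<monolingual target>")],
    )

The pairwise baselines, meanwhile, share the rest of this pipeline.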
For a straightforward comparison, they used the same multilingual SentencePiece model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pairwise baselines", "sec_num": null }, { "text": "One of the most exciting outcomes is the deteriorated performance of the multilingual models using BT data, as we usually expect that added backtranslated texts would benefit performance. Using tags (BT[t]) to differentiate which data is synthetic or not is only a simple step to address this issue; however, there could be evaluated more informed strategies for denoising or performing online data selection (Wang et al., 2018) .", "cite_spans": [ { "start": 409, "end": 428, "text": "(Wang et al., 2018)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis and discussion", "sec_num": "5" }, { "text": "Besides, in the translation into Spanish, the multilingual model without BT data outperforms the rest models in all languages but Quechua, where the pairwise system achieved the best translation accuracy. Quechua is the \"highest\"-resource languagepair in the experiment, and its performance is deteriorated in the multilingual setting 5 . A similar scenario is shown in the other translation direction from Spanish, where the best multilingual setting (+BT[t]) cannot overcome the es\u2192quy model in the devtest set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and discussion", "sec_num": "5" }, { "text": "Nevertheless, the gains for Aymara, Ashaninka and Shipibo-Konibo are outstanding. Moreover, we note that the models are not totally overfitted to any of the evaluation sets. Exceptions are es\u2192aym and es\u2192quy, with a significant performance dropping from dev to devtest, meaning that it started to overfit to the training data. However, for Spanish\u2192Ashaninka, we observe that the model achieved a better performance in the devtest set. This is due to oversampling of the same-domain dev partition for training ( \u00a74.1) and the small original training set. Concerning the results on the official test set, the performance is lower than the results with the custom evaluation sets. The main potential reason is that the official test is four times bigger than the custom devtest, and therefore, offers more diversity and challenge for the evaluation. Another point to highlight is that the best result in the Spanish-Quechua language-pair is obtained by a multilingual model (the scores between the model (e) and (g) are not significantly different) instead of the pairwise baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and discussion", "sec_num": "5" }, { "text": "Decoding an indigenous language is still a challenging task, and the relatively low BLEU scores cannot suggest a translation with proper adequacy or fluency. However, BLEU works at the wordlevel, and other character-level metrics should be considered to better assess the highly agglutinative nature of the languages. For reference, we also report the chrF scores in Table 3 for the best multilingual setting and the pairwise baseline. 
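Both metrics can be computed with the sacrebleu package; a minimal sketch with placeholder strings:

    import sacrebleu

    hypotheses = ["la casa es grande"]        # system outputs (placeholders)
    references = [["la casa es muy grande"]]  # one stream of references

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    chrf = sacrebleu.corpus_chrf(hypotheses, references)
    print(f"BLEU = {bleu.score:.2f}, chrF = {chrf.score:.2f}")

Being character-based, chrF penalises a partially recovered agglutinated word less harshly than word-level BLEU.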
As for the Spanish decoding, fluency is preserved from the English\u2192Spanish pre-trained model 6 , but more adequacy is needed.", "cite_spans": [], "ref_spans": [ { "start": 367, "end": 374, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Analysis and discussion", "sec_num": "5" }, { "text": "It is relevant to assess out-of-domain capabilities, but more important to evaluate whether the models are still capable to fine-tune without overfitting. We use a small evaluation set for Quechua (Kallpa, with 100 sentences), which contains sentences extracted from a magazine (Ortega et al., 2020b) . Likewise, we introduce a new evaluation set for Shipibo-Konibo (Kirika, 200 sentences), which contains short traditional stories.", "cite_spans": [ { "start": 278, "end": 300, "text": "(Ortega et al., 2020b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Out-of-domain evaluation", "sec_num": "6" }, { "text": "We tested our best model for each language-pair, fine-tune it (+FT) with half of the out-of-domain dataset, and evaluate it in the other half. To avoid overfitting, we controlled cross-entropy loss and considered very few updates for validation steps. Results are shown in Table 3 , where we observe that it is possible to fine-tune the multilingual or pairwise models to the new domains without loosing too much performance in the original test.", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 280, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Out-of-domain evaluation", "sec_num": "6" }, { "text": "The Quechua translations rapidly improved with the fine-tuning step, and there is a small gain in the original test for es\u2192quy, although the scores are relatively low in general. Nevertheless, our model could outperform others (by extrapolation, we can assume that the scores for the rule-based Apertium system (Cavero and Madariaga, 2007) and Ortega et al. (2020b)'s NMT system are similar in half of the dataset).", "cite_spans": [ { "start": 311, "end": 339, "text": "(Cavero and Madariaga, 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Out-of-domain evaluation", "sec_num": "6" }, { "text": "For Shipibo-Konibo, we also observe some small gains in both directions without hurting the previous performance, but the scores are far from being robust. Kirika is challenging given its old style: the translations are extracted from an old book written by missionaries, and even when the spelling has been modernised, there are differences in the use of some auxiliary verbs for instance (extra words that affect the evaluation metric) 7 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Out-of-domain evaluation", "sec_num": "6" }, { "text": "Peru is multilingual, ergo, its machine translation should be too! We conclude that multilingual machine translation models can enhance the performance in truly low-resource languages like Aymara, Ashaninka and Shipibo-Konibo, in translation from and into Spanish. For Quechua, even when the pairwise system performed better in this study, there is a simple step to give a multilingual setting another opportunity: to include a higher-resource languagepair that may support the multilingual learning process. This could be related in some aspect like morphology (another agglutinative language) or the discourse (domain). 
Other approaches focused on more advanced sampling or adding specific layers to restore the performance of the higher-resource languages might be considered as well. Besides, tagged back-translation allowed to take some advantage of the monolingual data; however, one of the most critical following steps is to obtain a more robust many-to-Spanish model to generate backtranslated data with more quality. Furthermore, to address the multi-domain nature of these datasets, we could use domain tags to send more signals to the model and support further fine-tuning steps. Finally, after addressing the presented issues in this study, and to enable zero-shot translation, we plan to train the first many-to-many multilingual model for indigenous languages spoken in Peru.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "7" }, { "text": "Available in: https://github.com/aoncevay/mt-peru", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/AmericasNLP/americasnlp2021", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We are also reporting the results on the official test sets after the finalisation of the Shared Task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Temperature-based sampling or automatically learned data scorers are more advanced strategies(Wang et al., 2020). However, we left that analysis for further work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In multilingual training, this behaviour is usually observed, and other approaches, such as injecting adapter layers(Bapna and Firat, 2019), might help to mitigate the issue. We left the analysis for further work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This might be confirmed by a proper human evaluation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The dataset, with further analysis, is available at: https: //github.com/aoncevay/mt-peru", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The author is supported by funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 825299 (GoURMET) and the EP-SRC fellowship grant EP/S001271/1 (MTStretch). The author is also thankful to the insightful feedback of the anonymous reviewers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "S (orig.) S (clean) % clean T /S (src) T /S (tgt) ratio T src/tgt es-aym 6,453 Table 5 : Statistics and cleaning for all parallel corpora. We observe that the Shipibo-Konibo and Ashaninka corpora are the least noisy ones. S = number of sentences, T = number of tokens. 
There are sentence alignment issues in the Quechua datasets, which require a more specialised tool to address.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 86, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "JW300: A widecoverage parallel corpus for low-resource languages", "authors": [ { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3204--3210", "other_ids": { "DOI": [ "10.18653/v1/P19-1310" ] }, "num": null, "urls": [], "raw_text": "\u017deljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Simple, scalable adaptation for neural machine translation", "authors": [ { "first": "Ankur", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1538--1548", "other_ids": { "DOI": [ "10.18653/v1/D19-1165" ] }, "num": null, "urls": [], "raw_text": "Ankur Bapna and Orhan Firat. 2019. Simple, scal- able adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1538- 1548, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The University of Edinburgh's English-Tamil and English-Inuktitut submissions to the WMT20 news translation task", "authors": [ { "first": "Rachel", "middle": [], "last": "Bawden", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Radina", "middle": [], "last": "Dobreva", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Valerio Miceli", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Barone", "suffix": "" }, { "first": "", "middle": [], "last": "Williams", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "92--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachel Bawden, Alexandra Birch, Radina Dobreva, Arturo Oncevay, Antonio Valerio Miceli Barone, and Philip Williams. 2020. The University of Ed- inburgh's English-Tamil and English-Inuktitut sub- missions to the WMT20 news translation task. In Proceedings of the Fifth Conference on Machine Translation, pages 92-99, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Byte pair encoding is suboptimal for language model pretraining", "authors": [ { "first": "Kaj", "middle": [], "last": "Bostrom", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "4617--4624", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.414" ] }, "num": null, "urls": [], "raw_text": "Kaj Bostrom and Greg Durrett. 2020. Byte pair encod- ing is suboptimal for language model pretraining. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 4617-4624, Online. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru", "authors": [ { "first": "Gina", "middle": [], "last": "Bustamante", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zariquiey", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2914--2923", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gina Bustamante, Arturo Oncevay, and Roberto Zariquiey. 2020. No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 2914-2923, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Tagged back-translation", "authors": [ { "first": "Isaac", "middle": [], "last": "Caswell", "suffix": "" }, { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "", "issue": "", "pages": "53--63", "other_ids": { "DOI": [ "10.18653/v1/W19-5206" ] }, "num": null, "urls": [], "raw_text": "Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Vol- ume 1: Research Papers), pages 53-63, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Traductor morfol\u00f3gico del castellano y quechua (Morphological translator of Castilian Spanish and Quechua)", "authors": [ { "first": "Castro", "middle": [], "last": "Indhira", "suffix": "" }, { "first": "Jaime Farf\u00e1n", "middle": [], "last": "Cavero", "suffix": "" }, { "first": "", "middle": [], "last": "Madariaga", "suffix": "" } ], "year": 2007, "venue": "Revista I+ i", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Indhira Castro Cavero and Jaime Farf\u00e1n Madariaga. 2007. Traductor morfol\u00f3gico del castellano y quechua (Morphological translator of Castilian Spanish and Quechua). 
Revista I+ i, 1(1).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Rule-based machine translation for Aymara", "authors": [ { "first": "Matthew", "middle": [], "last": "Coler", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Homola", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "67--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Coler and Petr Homola. 2014. Rule-based machine translation for Aymara, pages 67-80. Cam- bridge University Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A comprehensive survey of multilingual neural machine translation", "authors": [ { "first": "Raj", "middle": [], "last": "Dabre", "suffix": "" }, { "first": "Chenhui", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A comprehensive survey of multilingual neu- ral machine translation.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Apertium: a free/open-source platform for rule-based machine translation. Machine translation", "authors": [ { "first": "Mireia", "middle": [], "last": "Mikel L Forcada", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Ginest\u00ed-Rosell", "suffix": "" }, { "first": "Jim", "middle": [ "O" ], "last": "Nordfalk", "suffix": "" }, { "first": "Sergio", "middle": [], "last": "'regan", "suffix": "" }, { "first": "Juan", "middle": [ "Antonio" ], "last": "Ortiz-Rojas", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "P\u00e9rez-Ortiz", "suffix": "" }, { "first": "Gema", "middle": [], "last": "S\u00e1nchez-Mart\u00ednez", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Ram\u00edrez-S\u00e1nchez", "suffix": "" }, { "first": "", "middle": [], "last": "Tyers", "suffix": "" } ], "year": 2011, "venue": "", "volume": "25", "issue": "", "pages": "127--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel L Forcada, Mireia Ginest\u00ed-Rosell, Jacob Nord- falk, Jim O'Regan, Sergio Ortiz-Rojas, Juan An- tonio P\u00e9rez-Ortiz, Felipe S\u00e1nchez-Mart\u00ednez, Gema Ram\u00edrez-S\u00e1nchez, and Francis M Tyers. 2011. Aper- tium: a free/open-source platform for rule-based ma- chine translation. Machine translation, 25(2):127- 144.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Corpus creation and initial SMT experiments between Spanish and Shipibo-konibo", "authors": [ { "first": "Ana-Paula", "middle": [], "last": "Galarreta", "suffix": "" }, { "first": "Andr\u00e9s", "middle": [], "last": "Melgar", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "238--244", "other_ids": { "DOI": [ "10.26615/978-954-452-049-6_033" ] }, "num": null, "urls": [], "raw_text": "Ana-Paula Galarreta, Andr\u00e9s Melgar, and Arturo On- cevay. 2017. Corpus creation and initial SMT ex- periments between Spanish and Shipibo-konibo. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 238-244, Varna, Bulgaria. 
INCOMA Ltd.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A continuous improvement framework of machine translation for Shipibo-konibo", "authors": [ { "first": "H\u00e9ctor Erasmo G\u00f3mez", "middle": [], "last": "Montoya", "suffix": "" }, { "first": "Kervy Dante Rivas", "middle": [], "last": "Rojas", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "17--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "H\u00e9ctor Erasmo G\u00f3mez Montoya, Kervy Dante Rivas Rojas, and Arturo Oncevay. 2019. A continuous improvement framework of machine translation for Shipibo-konibo. In Proceedings of the 2nd Work- shop on Technologies for MT of Low Resource Lan- guages, pages 17-23, Dublin, Ireland. European As- sociation for Machine Translation.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Macduff", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": { "DOI": [ "10.1162/tacl_a_00065" ] }, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. 
Transactions of the As- sociation for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Marian: Fast neural machine translation in C++", "authors": [ { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Tomasz", "middle": [], "last": "Dwojak", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Neckermann", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Seide", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" }, { "first": "Alham", "middle": [], "last": "Fikri Aji", "suffix": "" }, { "first": "Nikolay", "middle": [], "last": "Bogoychev", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Martins", "suffix": "" }, { "first": "", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL 2018, System Demonstrations", "volume": "", "issue": "", "pages": "116--121", "other_ids": { "DOI": [ "10.18653/v1/P18-4020" ] }, "num": null, "urls": [], "raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr\u00e9 F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116- 121, Melbourne, Australia. Association for Compu- tational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Trivial transfer learning for low-resource neural machine translation", "authors": [ { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "244--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244-252, Bel- gium, Brussels. Association for Computational Lin- guistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "MT summit", "volume": "5", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, vol- ume 5, pages 79-86. Citeseer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "66--75", "other_ids": { "DOI": [ "10.18653/v1/P18-1007" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo. 2018. 
Subword regularization: Improving neural network translation models with multiple sub- word candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 66-75, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Rapid adaptation of neural machine translation to new languages", "authors": [ { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "875--880", "other_ids": { "DOI": [ "10.18653/v1/D18-1103" ] }, "num": null, "urls": [], "raw_text": "Graham Neubig and Junjie Hu. 2018. Rapid adapta- tion of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 875-880, Brussels, Belgium. Association for Com- putational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bridging linguistic typology and multilingual machine translation with multi-view language representations", "authors": [ { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2391--2406", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.187" ] }, "num": null, "urls": [], "raw_text": "Arturo Oncevay, Barry Haddow, and Alexandra Birch. 2020. Bridging linguistic typology and multilingual machine translation with multi-view language repre- sentations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 2391-2406, Online. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Overcoming resistance: The normalization of an Amazonian tribal language", "authors": [ { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Richard", "middle": [ "Alexander" ], "last": "Castro-Mamani", "suffix": "" }, { "first": "Jaime Rafael Montoya", "middle": [], "last": "Samame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Ortega, Richard Alexander Castro-Mamani, and Jaime Rafael Montoya Samame. 2020a. Overcom- ing resistance: The normalization of an Amazonian tribal language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 1-13, Suzhou, China. Association for Compu- tational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using morphemes from agglutinative languages like Quechua and Finnish to aid in low-resource translation", "authors": [ { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Krishnan", "middle": [], "last": "Pillaipakkamnatt", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the AMTA 2018 Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Ortega and Krishnan Pillaipakkamnatt. 2018. Us- ing morphemes from agglutinative languages like Quechua and Finnish to aid in low-resource trans- lation. In Proceedings of the AMTA 2018 Workshop on Technologies for MT of Low Resource Languages (LoResMT 2018), pages 1-11.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural machine translation with a polysynthetic low resource language", "authors": [ { "first": "E", "middle": [], "last": "John", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Castro Mamani", "suffix": "" }, { "first": "", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2020, "venue": "Machine Translation", "volume": "34", "issue": "4", "pages": "325--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "John E Ortega, Richard Castro Mamani, and Kyunghyun Cho. 2020b. Neural machine trans- lation with a polysynthetic low resource language. Machine Translation, 34(4):325-346.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "chrF: character n-gram F-score for automatic MT evaluation", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "392--395", "other_ids": { "DOI": [ "10.18653/v1/W15-3049" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": { "DOI": [ "10.18653/v1/P16-1009" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Ethnologue: Languages of the World. Twentysecond edition. Dallas Texas: SIL international. Online version", "authors": [ { "first": "F", "middle": [], "last": "Gary", "suffix": "" }, { "first": "Charles", "middle": [ "D" ], "last": "Simons", "suffix": "" }, { "first": "", "middle": [], "last": "Fenning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gary F. Simons and Charles D. Fenning, editors. 2019. Ethnologue: Languages of the World. Twenty- second edition. Dallas Texas: SIL international. 
On- line version: http://www.ethnologue.com.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Multilingual neural machine translation with language clustering", "authors": [ { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Jiale", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Di", "middle": [], "last": "He", "suffix": "" }, { "first": "Yingce", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "963--973", "other_ids": { "DOI": [ "10.18653/v1/D19-1089" ] }, "num": null, "urls": [], "raw_text": "Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with language clustering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 963-973, Hong Kong, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Parallel data, tools and interfaces in opus", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. 
Curran Asso- ciates, Inc.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Denoising neural machine translation training with trusted data and online data selection", "authors": [ { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Macduff", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Tetsuji", "middle": [], "last": "Nakagawa", "suffix": "" }, { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "133--143", "other_ids": { "DOI": [ "10.18653/v1/W18-6314" ] }, "num": null, "urls": [], "raw_text": "Wei Wang, Taro Watanabe, Macduff Hughes, Tetsuji Nakagawa, and Ciprian Chelba. 2018. Denois- ing neural machine translation training with trusted data and online data selection. In Proceedings of the Third Conference on Machine Translation: Re- search Papers, pages 133-143, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Balancing training for multilingual neural machine translation", "authors": [ { "first": "Xinyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8526--8537", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.754" ] }, "num": null, "urls": [], "raw_text": "Xinyi Wang, Yulia Tsvetkov, and Graham Neubig. 2020. Balancing training for multilingual neural ma- chine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 8526-8537, Online. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Obsolescencia ling\u00fc\u00edstica, descripci\u00f3n gramatical y documentaci\u00f3n de lenguas en el per\u00fa: hacia un estado de la cuesti\u00f3n", "authors": [ { "first": "Roberto", "middle": [], "last": "Zariquiey", "suffix": "" }, { "first": "Harald", "middle": [], "last": "Hammarstr\u00f6m", "suffix": "" }, { "first": "M\u00f3nica", "middle": [], "last": "Arakaki", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "John", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Lexis", "volume": "43", "issue": "2", "pages": "271--337", "other_ids": { "DOI": [ "10.18800/lexis.201902.001" ] }, "num": null, "urls": [], "raw_text": "Roberto Zariquiey, Harald Hammarstr\u00f6m, M\u00f3nica Arakaki, Arturo Oncevay, John Miller, Aracelli Gar- c\u00eda, and Adriano Ingunza. 2019. Obsolescencia ling\u00fc\u00edstica, descripci\u00f3n gramatical y documentaci\u00f3n de lenguas en el per\u00fa: hacia un estado de la cuesti\u00f3n. Lexis, 43(2):271-337.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Number of sentences in monolingual and parallel corpora aligned with Spanish (es) or English (en). The latter are used for en\u2192es translation and we only noted non-duplicated sentences w.r.t. the *-es corpora.", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF2": { "text": "metrics.", "type_str": "table", "num": null, "html": null, "content": "
BLEU | Aymara | Ashaninka | Quechua | Shipibo-Konibo
\u2192Spanish | dev devtest test | dev devtest test | dev devtest test | dev devtest test
(a) Multilingual | 11.11 9.95 3.70 | 8.40 9.37 5.21 | 12.46 11.03 8.04 | 10.34 12.72 10.07
(b) Multi+BT | 10.76 8.39 2.87 | 7.30 5.34 3.44 | 11.48 8.85 7.51 | 9.13 10.77 7.58
(c) Multi+BT[t] | 10.72 8.42 2.86 | 7.45 5.69 3.15 | 11.37 10.02 7.12 | 8.81 10.73 7.18
(d) Pairwise | 9.46 7.66 2.04 | 4.23 3.96 2.38 | 15.21 14.00 8.20 | 7.72 9.48 4.44
Spanish\u2192 | dev devtest test | dev devtest test | dev devtest test | dev devtest test
(e) Multilingual | 8.67 6.28 2.19 | 6.74 11.72 5.54 | 10.04 5.37 4.51 | 10.82 10.44 6.69
(f) Multi+BT | 3.31 2.59 0.79 | 1.29 3.38 2.82 | 1.36 2.02 1.73 | 1.63 3.76 2.98
(g) Multi+BT[t] | 10.55 6.54 2.31 | 7.36 13.17 5.40 | 10.77 5.29 4.23 | 11.98 11.12 7.45
(h) Pairwise | 7.08 4.96 1.65 | 4.12 8.40 3.82 | 10.67 6.11 3.96 | 8.76 7.89 6.15
" }, "TABREF3": { "text": "BLEU scores for the dev and devtest custom partitions and the official test set, including all the multilingual and pairwise MT systems into and from Spanish. BT = Back-translation. BT[t] = Tagged back-translation. Multilingual 31.73 28.82 22.01 26.78 26.82 22.27 32.92 32.99 29.45 31.41 33.49 31.26 (d) Pairwise 28.77 25.03 19.79 20.43 20.40 18.83 36.01 36.06 30.90 27.25 29.91 25.31 Multi+BT[t] 37.32 35.17 26.70 38.94 38.44 30.81 44.60 38.94 37.80 40.67 39.47 33.43 (h) Pairwise 28.89 28.23 21.13 32.55 32.29 27.10 45.77 39.68 36.86 34.97 34.96 27.09", "type_str": "table", "num": null, "html": null, "content": "
chrFAymaraAshaninkaQuechuaShipibo-Konibo
\u2192Spanishdevdevtesttestdevdevtesttestdevdevtesttestdevdevtesttest
(a) Spanish\u2192devdevtesttestdevdevtesttestdevdevtesttestdevdevtesttest
(g)
" }, "TABREF4": { "text": "", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF6": { "text": "Out-of-domain BLEU scores. Best model is fine-tuned (+FT) with half of the dataset and evaluated in the other half. \u2206t = original test score variation.", "type_str": "table", "num": null, "html": null, "content": "
" } } } }